[sqlalchemy] Re: postgres table-name length?

2008-06-23 Thread Egil Möller


Someone has recently made this claim without any supporting evidence,
and I responded with an example of the compiler/ORM running against
Oracle, truncating a long generated name (which had been aliased
twice) into a properly truncated name, with results returned just fine.
This is a very actively developed feature of SQLA and it has a ton of
unit tests.  Without trac tickets there are no bugs as far as I know.

It might be that this has been fixed in the latest version?

The patches are against  SQLAlchemy 0.4.5 btw.



that just creates more work for us.  Please make your patches against
the latest version of the 0.4 maintenance branch checked out from SVN.
This was the version that was current when our project began, and which
we have been working on.


I will make an effort to port them to the latest version before posting 
them to trac.


P.S. I could send you the patches privately if you want - I don't  
want to spam the whole list w 16kb of tgz that most people wouldn't  
care about...



I would *greatly* appreciate it if all discussion of SQLA
implementation/bugs were kept public, and bugs dealt with in trac
tickets, which accept patches as attachments.  This is an open source
project, and when patches are made in secret and kept private, you
work against the entire thing.
No worries - I did not intend to keep things /secret/ in any way. Maybe
I should not have offered the patches in that way in the PS. I nearly
attached the whole tgz of patch files to the mail to help the thread
originator ASAP, and then realized that maybe that'd be rather impolite
to most people on the list - most people might not be very interested
in the code, and might not want 16kb of junk in their inbox. So I
removed it and added the PS.






[sqlalchemy] Re: Problem:maximum recursion depth exceeded

2008-06-23 Thread Marin

I was trying to do something yesterday evening and I found something
strange. After running this code:
q = Q()
for j in range(1):
    a = A()
    a.Q = q
    a.C = WC()

I get these graph edges (tuples):

import pprint
ppp = pprint.PrettyPrinter(indent=4)
ppp.pprint([(x.class_, y.class_) for x, y in tuples])
[   (<class '__main__.Q'>, <class '__main__.A'>),
    (<class '__main__.WC'>, <class '__main__.A'>),
    (<class '__main__.Q'>, <class '__main__.A'>),
    (<class '__main__.WC'>, <class '__main__.A'>)]

It seems they are doubled. Maybe that's a part of the problem.



[sqlalchemy] Re: postgres table-name length?

2008-06-23 Thread az

i've made something, but before you jump on my overly generic ways of
coding, here's the matrix to be tested:

matrix/possibilities:

A. mangling: for names that are too long, try 2 of the same kind (in
one parent), both the same size, beginning the same but differing at
the end, with the difference after MAXLEN, e.g.
sometable( vee...rylongcolumn1, vee...rylongcolumn2 ) - the columns
should be mangled differently (just truncating will make them the same)

B. length combinations
 * 'schema.table.column' < MAXLEN
 * 'schema.table.column' = MAXLEN
 * 'schema.table.column' > MAXLEN
    * each < MAXLEN
    * each = MAXLEN
    * each > MAXLEN
    * one less, others longer  x3
    * one less, others equal   x3
    * one equal, others longer x3
    * one longer, others less  x3
    * one longer, others equal x3

C. some column is primary_key: - automatic sequence_name
   sequence_name = table + '_' + somecolumn + '_seq'
   (len = len(table) + len(somecolumn) + 1 + 4)
 * schema.sequence_name < MAXLEN
 * schema.sequence_name = MAXLEN
 * schema.sequence_name > MAXLEN
    * each < MAXLEN
    * each = MAXLEN
    * each > MAXLEN
    * first less, other longer
    * first less, other equal
    * first equal, other less
    * first equal, other longer
    * first longer, other less
    * first longer, other equal

D. index - any automatic naming?
E. foreignkey, other constraints?


question: is there a dialect that restricts the overall size of a
composite multilevel name (schema1.table2.column3), or are all sane
enough to have limits only on single identifier length? schema1,
table2, column3 are each single identifiers.
if only identifier length is at stake, then only our automatically
made names need extra checking (e.g. sequences, indexes, what else) -
see the mangling sketch below.
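
(illustration only - a minimal sketch of collision-safe mangling for
case A above; MAXLEN and the hash suffix are my own choices, not
SQLAlchemy's actual algorithm:)

# truncate long names, but keep a short hash of the full name so
# names that differ only after MAXLEN stay distinct
import hashlib

MAXLEN = 63  # PostgreSQL's default identifier limit (NAMEDATALEN - 1)

def mangle(name, maxlen=MAXLEN):
    if len(name) <= maxlen:
        return name
    suffix = hashlib.md5(name.encode('utf-8')).hexdigest()[:8]
    return name[:maxlen - len(suffix) - 1] + '_' + suffix

# two columns differing only at the very end must not collide
a = mangle('vee' + 'e' * 70 + 'rylongcolumn1')
b = mangle('vee' + 'e' * 70 + 'rylongcolumn2')
assert a != b and len(a) <= MAXLEN and len(b) <= MAXLEN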

ciao
svilen


On Sunday 22 June 2008 23:27:43 Michael Bayer wrote:
 the fix is along these lines, and it's dependent on PG's behavior of
 truncating, then appending _colname_seq.  However, I've no idea
 what it does when tablename + colname add up to more than 60, or
 when colname alone is more than 60 - there are several variations
 here.

 there are also many workarounds here, including using autoload=True,
 as well as a Sequence object with the exact name.

 One way to obliterate the issue totally would be to use a
 PG-specific reflection on the column to get its default generator.
 Care to work up a set of tests for me (for at least #1 above)?


 Index: lib/sqlalchemy/databases/postgres.py
 ===================================================================
 --- lib/sqlalchemy/databases/postgres.py  (revision 4870)
 +++ lib/sqlalchemy/databases/postgres.py  (working copy)
 @@ -781,9 +781,9 @@
          # TODO: this has to build into the Sequence object
          # so we can get the quoting logic from it
          if sch is not None:
 -            exc = "select nextval('\"%s\".\"%s_%s_seq\"')" % (sch, column.table.name, column.name)
 +            exc = "select nextval('\"%s\".\"%s_%s_seq\"')" % (sch, column.table.name[0:56], column.name)
          else:
 -            exc = "select nextval('\"%s_%s_seq\"')" % (column.table.name, column.name)
 +            exc = "select nextval('\"%s_%s_seq\"')" % (column.table.name[0:56], column.name)
          return self.execute_string(exc.encode(self.dialect.encoding))

          return super(PGDefaultRunner, self).get_column_default(column)

 On Jun 22, 2008, at 4:11 PM, [EMAIL PROTECTED] wrote:
  from sqlalchemy import *
  from sqlalchemy.orm import *
  import sys
  metadata = MetaData( sys.argv[1:] and sys.argv[1] or 'sqlite://')
  metadata.bind.echo = 'echo' in sys.argv
 
  aa = 'itm'*30
 
  #56 works on postgres
  #57 and above - not
  item_table = Table( aa[:57],
      metadata,
      Column('id', Integer, primary_key=True),
      Column('name', String(50)))
 
  class Item(object):
      def __init__(self, name): self.name = name
      def __str__(self): return self.name
  item_mapper = mapper(Item, item_table)
  metadata.create_all()
 
  session = create_session()
 
  session.save(Item('aaa'))
  session.save(Item(''))
  session.flush()
 
  items = session.query(Item)
  for item in items: print item
  ---
  results in:
  sqlalchemy.exceptions.ProgrammingError: (ProgrammingError) relation
  "itmitmitmitmitmitmitmitmitmitmitmitmitmitmitmitmitmitmitm_id_se"
  does not exist
  'select nextval(\'itmitmitmitmitmitmitmitmitmitmitmitmitmitmitmitmitmitmitm_id_seq\')'
  None

  seems mangling should start for any len(name) > maxsize - len('_id_seq')
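
(for reference, a quick sketch of the boundary this script hits - 63
is PostgreSQL's default identifier limit; the helper name here is
mine, not from the script:)

# when does the implicit SERIAL sequence name 'table_col_seq'
# still fit PG's 63-character identifier limit?
def seq_name_fits(table, col, maxlen=63):
    return len('%s_%s_seq' % (table, col)) <= maxlen

aa = 'itm' * 30
assert seq_name_fits(aa[:56], 'id')      # 56 + 7 = 63: works
assert not seq_name_fits(aa[:57], 'id')  # 64: PG truncates, nextval fails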

 



[sqlalchemy] Re: problem with server_default (and/or sa.PassiveDefault in 0.5.beta1

2008-06-23 Thread Werner F. Bruhin

Svilen and Michael,

Thanks for all the pointers.  Will look into this all and read up some 
more on declarative (I like its approach, having things together) and do 
some more test scripts for my application.

Werner

Michael Bayer wrote:
 that __repr__ is pretty tortured too; a typical ORM-agnostic approach
 is:

     def __repr__(self):
         return "%s(%s)" % (
             self.__class__.__name__,
             ', '.join(["%s=%r" % (key, getattr(self, key))
                        for key in sorted(self.__dict__.keys())
                        if not key.startswith('_')]))



 


   





[sqlalchemy] Re: Problem:maximum recursion depth exceeded

2008-06-23 Thread Michael Bayer


On Jun 23, 2008, at 4:18 AM, Marin wrote:


 I was trying to do something yesterday evening and I found something
 strange. After running this code:

     q = Q()
     for j in range(1):
         a = A()
         a.Q = q
         a.C = WC()

 I get these graph edges (tuples):

     import pprint
     ppp = pprint.PrettyPrinter(indent=4)
     ppp.pprint([(x.class_, y.class_) for x, y in tuples])
     [   (<class '__main__.Q'>, <class '__main__.A'>),
         (<class '__main__.WC'>, <class '__main__.A'>),
         (<class '__main__.Q'>, <class '__main__.A'>),
         (<class '__main__.WC'>, <class '__main__.A'>)]

 It seems they are doubled. Maybe that's a part of the problem.

that shouldn't be an issue; those tuples are all placed into a
dictionary structure before topological.py sorts them.  I'm pretty
confident that topological.py is very strong, except for the tree part.
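
(a tiny illustration of why the doubled tuples are harmless once they
land in a dict - my example, not topological.py itself:)

# duplicate edges collapse when adjacency is keyed by node
edges = [('Q', 'A'), ('WC', 'A'), ('Q', 'A'), ('WC', 'A')]
adjacency = {}
for parent, child in edges:
    adjacency.setdefault(parent, set()).add(child)
assert adjacency == {'Q': set(['A']), 'WC': set(['A'])}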




[sqlalchemy] Re: postgres table-name length?

2008-06-23 Thread Michael Bayer


On Jun 23, 2008, at 3:27 AM, Egil Möller wrote:


 I will make an effort to port them to the latest version before  
 posting them to trac.


OK, but I am *really really* curious what the bugs are.  It would be
better for me to have a look and see what the preferred approach for
them is before running down that path.






[sqlalchemy] Item count along object results

2008-06-23 Thread Gaetan de Menten

Hi list,

Is there really no easier/nicer way to get a count of items alongside
object results than the one described at:
http://www.sqlalchemy.org/docs/05/ormtutorial.html#datamapping_joins_subqueries
?

>>> from sqlalchemy.sql import func
>>> stmt = session.query(Address.user_id,
...     func.count('*').label('address_count')).group_by(Address.user_id).subquery()

>>> for u, count in session.query(User, stmt.c.address_count).\
...     outerjoin((stmt, User.id==stmt.c.user_id)).order_by(User.id):
...     print u, count

===

I thought something simpler, using a subquery instead of an outer join
on a subquery, would work... but I couldn't make it work. (I'm no
expert at SQL optimization, but I'd expect the speed to be similar.)

session.query(User, select([func.count('*')], User.id ==
    Address.user_id).as_scalar())

It doesn't seem to correlate correctly: there is no FROM clause... I
think similar queries work (or at least used to) in column_properties.
I had to add the manual correlate clause:

session.query(User, select([func.count('*')], User.id ==
    Address.user_id).correlate(users).as_scalar())

but even then it doesn't yield any result through the ORM. The
generated query yields the correct results in the DB command-line
interface, though, so I guess it's the ORM part which cannot load it
back correctly.

It might be related to the fact that the generated query is:

SELECT user.[...], (SELECT count(?) AS count_1 FROM backend_blogpost
WHERE backend_blog.user = backend_blogpost.blog_user) AS anon_1
FROM backend_blog, backend_blogpost

Notice the unnecessary backend_blogpost at the end.

Is it a bug, or am I doing something wrong/missing something to get
the query right?

On a related note, I'm dreaming that something like this would be supported:

>>> for u, count in session.query(User, User.addresses.count()):
...     print u, count
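
(for what it's worth, a sketch of approximating that wish today with a
column_property - assuming User/Address classes and users/addresses
tables as in the tutorial; this is not an existing SQLA operator:)

from sqlalchemy import select, func
from sqlalchemy.orm import mapper, column_property, relation

mapper(User, users_table, properties={
    # correlated scalar subquery, loaded along with each User row
    'address_count': column_property(
        select([func.count(addresses_table.c.id)],
               users_table.c.id == addresses_table.c.user_id).as_scalar()),
    'addresses': relation(Address),
})

# then: for u in session.query(User): print u, u.address_count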

-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] Re: Item count along object results

2008-06-23 Thread Michael Bayer


On Jun 23, 2008, at 10:27 AM, Gaetan de Menten wrote:


 Hi list,

 Is there really no easier/nicer way to get a count of items alongside
 object results than the one described at:
 http://www.sqlalchemy.org/docs/05/ormtutorial.html#datamapping_joins_subqueries
 ?

different SQL, or different usage ?   someone would have to contribute  
a new operator or method of some kind.





[sqlalchemy] Re: postgres table-name length?

2008-06-23 Thread Michael Bayer


On Jun 23, 2008, at 2:16 AM, [EMAIL PROTECTED] wrote:

 i can make several tests about how the combination of tablename,
 colname, seqname, indexname alone and some of them in pairs behave
 around max_single_name_len=64 - below, at and above it. i've no idea
 about schemas but i guess they can be added later.

well I'm going off this doc:

http://www.postgresql.org/docs/8.3/interactive/datatype-numeric.html#DATATYPE-SERIAL

which doesn't say what to do when the identifiers are too long.

I think we'd be more immune to issues in this area if we just do a
little reflection from pg_catalog to get the actual sequence name
(see the sketch below).  This is a straightforward but somewhat
tedious enhancement to the PG module.  The current workaround of just
putting in your own Sequence() doesn't seem too terrible to me.
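
(a rough sketch of that reflection idea - pg_get_serial_sequence() is
PostgreSQL's own lookup function; the connection details and table
names here are made up:)

# ask PG for the real (already-truncated) sequence name
# instead of guessing it from table and column names
import psycopg2

conn = psycopg2.connect("dbname=test user=scott")  # hypothetical DSN
cur = conn.cursor()
cur.execute("SELECT pg_get_serial_sequence(%s, %s)", ('mytable', 'id'))
seqname = cur.fetchone()[0]  # e.g. 'public.mytable_id_seq'
cur.execute("SELECT nextval(%s)", (seqname,))
print cur.fetchone()[0]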





[sqlalchemy] Multiple DBs in 0.5

2008-06-23 Thread kris



We have an application (TurboGears) that uses multiple databases:
a main (postgres/mysql) database for most data (TG + application) and
another database (sqlite) used for tracking some file info.

The main database uses the standard TurboGears session; however,
we ran into trouble using the TurboGears session for the sqlite db.

Instead we defined its own session with:

    db = 'sqlite:///blobdb.sqlite'
    BlobSession = sessionmaker(bind=db, autoflush=True,
                               transactional=True)

and perform queries when needed:

    session = BlobSession()
    session.query(...)
    ...
    session.commit()
    session.close()

This worked well under 0.4.x, but seems to have changed under 0.5beta1,
as seen below:
=
  File "/home/kgk/work/bisquik/generative/TG/bisquik/IS/blobsrv.py", line 218, in accessPermission
    log.debug('user: %s, id: %s, owner: %s, perm: %s' % (str(userId), str(id), str(binfo.owner), str(binfo.perm)))
  File "/home/kgk/work/bisquik/generative/python/lib/python2.4/site-packages/SQLAlchemy-0.5.0beta1-py2.4.egg/sqlalchemy/orm/attributes.py", line 123, in __get__
    return self.impl.get(instance_state(instance))
  File "/home/kgk/work/bisquik/generative/python/lib/python2.4/site-packages/SQLAlchemy-0.5.0beta1-py2.4.egg/sqlalchemy/orm/attributes.py", line 307, in get
    value = callable_()
  File "/home/kgk/work/bisquik/generative/python/lib/python2.4/site-packages/SQLAlchemy-0.5.0beta1-py2.4.egg/sqlalchemy/orm/attributes.py", line 866, in __call__
    class_manager.deferred_scalar_loader(self, [
  File "/home/kgk/work/bisquik/generative/python/lib/python2.4/site-packages/SQLAlchemy-0.5.0beta1-py2.4.egg/sqlalchemy/orm/mapper.py", line 1580, in _load_scalar_attributes
    raise sa_exc.UnboundExecutionError("Instance %s is not bound to a Session; attribute refresh operation cannot proceed" % (state_str(state)))
UnboundExecutionError: Instance [EMAIL PROTECTED] is not bound to a
Session; attribute refresh operation cannot proceed

=

Are we setting up the multiple database situation incorrectly for 0.5?

Thx,
Kris




[sqlalchemy] Adding a filter_by() clause by default.

2008-06-23 Thread RommeDeSerieux

In my application, a lot of models have a deleted_at field which is
either null or set to a date, and in most places where I select from
those models, I only need the instances where deleted_at is null. Of
course, I can do it manually by adding just another filter_by() on
every ORM operation, but that's error-prone and severely breaks DRY.

But I've read on the SQLAlchemy website that it's quite extensible, so
how can I do it automatically for certain tables? I'm looking for
something like Django ORM's custom managers.




[sqlalchemy] Re: postgres table-name length?

2008-06-23 Thread az

On Monday 23 June 2008 18:23:27 Michael Bayer wrote:
 On Jun 23, 2008, at 2:16 AM, [EMAIL PROTECTED] wrote:
  i can make several tests about how the combination of tablename,
  colname, seqname, indexname alone and some of them in pairs
  behave around max_single_name_len=64 - below, at and above it.
  i've no idea about schemas but i guess they can be added later.

 well I'm going off this doc:

 http://www.postgresql.org/docs/8.3/interactive/datatype-numeric.html#DATATYPE-SERIAL

 which doesn't say what to do when the identifiers are too long.
grammar-wise, it will truncate anything to 63 characters, and then
complain that the thing is missing - or worse, find another (wrong)
one and use that.

 I think we'd be more immune to issues in this area if we just do a
 little reflection from pg_catalog to get the actual sequence name.
whatever works, as long as duplicates are avoided (i.e. two different
long names truncated into the same shorter name).

 This is a straightforward but somewhat tedious enhancement to the
 PG module.  The current workaround of just putting in your own
 Sequence() doesn't seem too terrible to me.
it's all ok - i've renamed my models; i'm just spotting some things
along the way.




[sqlalchemy] Re: Multiple DBs in 0.5

2008-06-23 Thread Michael Bayer



On Jun 23, 1:08 pm, kris [EMAIL PROTECTED] wrote:


 Are we setting up the multiple database situation incorrectly for 0.5?

0.4 had an implicit behavior whereby, when you hit the lazy loader on
an attribute of an object that was not bound to a session, it would
automatically bind to the current session used by Session.mapper.
This behavior wasn't implemented consistently in all use cases and is
somewhat magic, so in 0.5 it's removed.  So when you do your
Session.close(), you're going to want to transfer all in-memory
objects which you'd like to continue working with to your
scoped_session, using Session.add() or Session.add_all([unbound
objects]).  A sketch of the pattern is below.
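
(a minimal, self-contained sketch of the 0.5 behavior and the
Session.add() fix - the Blob model and attribute names here are my
own illustration, not from the original post:)

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, deferred

Base = declarative_base()

class Blob(Base):
    __tablename__ = 'blobs'
    id = Column(Integer, primary_key=True)
    owner = deferred(Column(String(50)))  # unloaded until accessed

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
BlobSession = sessionmaker(bind=engine)

session = BlobSession()
session.add(Blob(owner='kris'))
session.commit()

binfo = session.query(Blob).first()
session.close()          # binfo is now detached

session2 = BlobSession()
session2.add(binfo)      # re-attach before touching unloaded attributes
print binfo.owner        # lazy load proceeds; no UnboundExecutionError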





[sqlalchemy] Re: Adding a filter_by() clause by default.

2008-06-23 Thread Michael Bayer

using 0.5:

from sqlalchemy.orm import Query, sessionmaker

class MyQuery(Query):
    def __new__(cls, entities, **kwargs):
        if hasattr(entities[0], 'deleted_at'):
            return Query(entities, **kwargs).filter_by(deleted_at=None)
        else:
            return object.__new__(cls)

Session = sessionmaker(query_cls=MyQuery)
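
(hypothetical usage - assuming a mapped class Document with a
deleted_at column; the names are illustrative:)

session = Session()
# filter_by(deleted_at=None) is applied automatically by MyQuery
live_docs = session.query(Document).all()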
