[sqlalchemy] Re: Trying to query a relationship of a relationship

2011-06-09 Thread Jules Stevenson
I don't think the info I sent last night was particularly clear,
apologies it was late. The code for the classes is below:

# ArkClient - clientprojectshot module
orm.mapper(ArkClient, clients_table, properties={
   'contacts':orm.relation(ArkContact,
           backref='client'),
   'projects':orm.relation(ArkProject,
           backref='client',
           cascade='all, delete, delete-orphan')
})

# ArkProject - clientprojectshot module
orm.mapper(ArkProject, projects_table, properties={
   'contacts': orm.relation(ArkContact,
           secondary=project_contact_table,
           backref='projects'),
   'invoices':orm.relation(ArkInvoice,
           backref='project',
           cascade='all, delete'),
   'shots':orm.relation(ArkShot,
           backref='project',
           cascade='all, delete, delete-orphan'),
   'users':orm.relation(ArkUser,
           backref='projects',
           secondary=user_projects_primary_table)
})

# ArkInvoice - invoices module
orm.mapper(ArkInvoice, invoices_table, properties={
   'entries': orm.relation(ArkInvoiceEntry,
           secondary=invoice_entries_primary_table,
           backref='invoice',
           cascade='all, delete'),
   'user': orm.relation(ArkUser, backref='invoice'),
   'child_invoices':orm.relation(ArkInvoice,
           backref=backref('parent_invoice',
                   remote_side=[invoices_table.c.id]),
           cascade='all',
           lazy=False,
           join_depth=3)
})

What I am trying to do is query the client of an invoice, and to do
this I need to build a query something along the lines of:

invoice -> project -> client, and filter by client, or at least I think I
need to do this.

So the current query code I have looks something like this:

  invoices = query(ArkInvoice).\
        join(ArkInvoice.project).\
        join(ArkProject.client).\
        options(sa.orm.contains_eager(model.ArkInvoice.project.client)).\
        filter(model.ArkInvoice.project.client.id == id)

But this doesn't work, and I've tried many variations around this
theme with no joy. I'm clearly missing something fundamental, but I'm
not sure what. Any pointers gratefully received.

Many thanks,

Jules


On Wed, Jun 8, 2011 at 9:56 PM, Jules Stevenson
droolz...@googlemail.com wrote:
 sorry, hit the send button a little too soon.

 Any help on the above much appreciated,

 Jules





[sqlalchemy] Re: Trying to query a relationship of a relationship

2011-06-09 Thread Jules Stevenson
Sorry for the spamming; code typo (I was trying to simplify it), it should read:

  invoices = query(ArkInvoice).\
        join(ArkInvoice.project).\
        join(ArkProject.client).\
        options(sa.orm.contains_eager(ArkInvoice.project.client)).\
        filter(ArkInvoice.project.client.id == id)

On Thu, Jun 9, 2011 at 8:50 AM, Jules Stevenson
droolz...@googlemail.com wrote:
 I don't think the info I sent last night was particularly clear,
 apologies it was late. The code for the classes is below:

 # ArkClient - clientprojectshot module
 orm.mapper(ArkClient, clients_table, properties={
    'contacts':orm.relation(ArkContact,
            backref='client'),
    'projects':orm.relation(ArkProject,
            backref='client',
             cascade='all, delete, delete-orphan')
 })

 # ArkProject - clientprojectshot module
 orm.mapper(ArkProject, projects_table, properties={
    'contacts': orm.relation(ArkContact,
            secondary=project_contact_table,
            backref='projects'),
    'invoices':orm.relation(ArkInvoice,
            backref='project',
             cascade='all, delete'),
    'shots':orm.relation(ArkShot,
            backref='project',
             cascade='all, delete, delete-orphan'),
    'users':orm.relation(ArkUser,
            backref='projects',
            secondary=user_projects_primary_table)
 })

 # ArkInvoice - invoices module
 orm.mapper(ArkInvoice, invoices_table, properties={
    'entries': orm.relation(ArkInvoiceEntry,
            secondary=invoice_entries_primary_table,
            backref='invoice',
             cascade='all, delete'),
    'user': orm.relation(ArkUser, backref='invoice'),
    'child_invoices':orm.relation(ArkInvoice,
             backref=backref('parent_invoice',
                     remote_side=[invoices_table.c.id]),
             cascade='all',
            lazy=False,
            join_depth=3)
 })

 What I am trying to do is query the client of an invoice, and to do
 this I need to build a query something along the lines of:

 invoice -> project -> client, and filter by client, or at least I think I
 need to do this.

 So the current query code I have looks something like this:

      invoices = query(ArkInvoice).\
                join(ArkInvoice.project).\
                 join(ArkProject.client).\
                 options(sa.orm.contains_eager(model.ArkInvoice.project.client)).\
                 filter(model.ArkInvoice.project.client.id == id)

 But this doesn't work, and I've tried many variations around this
 theme with no joy. I'm clearly missing something fundamental, but I'm
 not sure what. Any pointers gratefully received.

 Many thanks,

 Jules


 On Wed, Jun 8, 2011 at 9:56 PM, Jules Stevenson
 droolz...@googlemail.com wrote:
 sorry, hit the send button a little too soon.

 Any help on the above much appreciated,

 Jules






[sqlalchemy] Re: porting GeoAlchemy to 0.7

2011-06-09 Thread Eric Lemoine
On Mon, Jun 6, 2011 at 9:47 PM, Eric Lemoine
eric.lemo...@camptocamp.com wrote:
 Hi

 I'm currently in the process of porting GeoAlchemy to SQLAlchemy 0.7.

 The first issue I'm having is related to before_create and
 after_create DDL listeners we have in GeoAlchemy.

 We use before_create and after_create listeners to prevent SQLA from
 adding the geometry column, and do it ourselves.

 Basically, the before_create function removes the geometry column from
 table._columns, and the after_create function adds the geometry column
 by calling the AddGeometryColumn SQL function.

 I'm trying to use a similar mechanism with 0.7, relying on
 before_create and after_create event listeners. That doesn't work,
 because setting table._columns seems to have no effect, i.e. SQLA
 still attempts to add the geometry column.

 I've been thinking about resetting table.c (setting it to None or
 something) and using table.append_column to add all columns but the
 geometry column in before_create, but I'm wondering if that's the
 proper way.

 Thanks for any guidance on that,

 PS: I was hoping to get inspiration from examples/postgis.py, but this
 example looks outdated. Maybe it should be removed from the 0.7 code
 base.
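
A minimal sketch of the 0.7 hooks being described, assuming a Table named
geom_table and placeholder AddGeometryColumn arguments (this is not
GeoAlchemy's actual code):

from sqlalchemy import event

def skip_geometry_column(target, connection, **kw):
    # placeholder: this is where the geometry column would be removed
    # from the table definition so it is not part of CREATE TABLE
    pass

def add_geometry_column(target, connection, **kw):
    # add the column through the database instead
    connection.execute(
        "SELECT AddGeometryColumn('geom_table', 'geom', 4326, 'POINT', 2)")

event.listen(geom_table, 'before_create', skip_geometry_column)
event.listen(geom_table, 'after_create', add_geometry_column)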

Hi

Here's another issue with porting GeoAlchemy to SQLAlchemy 0.7.


So GeoA defines a TypeEngine, which looks like this:

class Geometry(TypeEngine):

    def __init__(self, dimension=2, srid=4326, spatial_index=True, **kwargs):
        self.dimension = dimension
        self.srid = srid
        self.spatial_index = spatial_index
        self.kwargs = kwargs
        super(Geometry, self).__init__()


Using the Geometry type with Oracle requires passing an additional
argument to the constructor, namely diminfo:

Geometry(dimension=2, srid=4326, spatial_index=True,
diminfo='the_diminfo_string')

Then our Oracle-specific code uses type.kwargs['diminfo'] to access
the diminfo value.


This worked well with SQLA 0.6, but it doesn't work with SQLA 0.7.

It doesn't work with 0.7 because SQLA may clone the type instance, and
because of the way SQLA clones objects (constructor_copy), the clone
does not have self.kwargs['diminfo'].


What is the recommended way to address the issue?


We've considered using an additional_args argument:

class Geometry(TypeEngine):

    def __init__(self, dimension=2, srid=4326, spatial_index=True,
                 additional_args={}, **kwargs):
        self.dimension = dimension
        self.srid = srid
        self.spatial_index = spatial_index
        self.additional_args = additional_args
        self.kwargs = kwargs
        super(Geometry, self).__init__()

which would be used like this:

Geometry(dimension=2, srid=4326, spatial_index=True,
         additional_args={'diminfo': 'the_diminfo_string'})

but introducing an additional_args argument doesn't look very pythonic.


Thanks a lot for any guidance on the way to address the issue.



-- 
Eric Lemoine

Camptocamp France SAS
Savoie Technolac, BP 352
73377 Le Bourget du Lac, Cedex

Tel : 00 33 4 79 44 44 96
Mail : eric.lemo...@camptocamp.com
http://www.camptocamp.com




[sqlalchemy] Re: Sqlalchemy beaker cache and memcached

2011-06-09 Thread Łukasz Czuja
Thank you for the quick response, it's much appreciated. Let me be more
specific:
1. Do you use the code provided in the above link? If not, how do you
cache and retrieve results via query.options(), or cache.get() and then
session.merge()?
2. Do you use it in any framework like Pylons or Django?
3. My problem is that I want to cache the results of RPC function calls
that return SQLAlchemy table objects, so the method output depends
heavily on the parameters received. While one method is responsible for
fetching records, others do data manipulation, so they must invalidate
all parametrized cached results of the 'get' method (using some kind of
wildcard?). I saw that it is possible to cache different results based
on different filter() values (example in the link), but that effectively
means building the same query to invalidate data in the add/delete/edit
methods, and I don't see how to do this while the cached results differ
among the different values submitted to filter(). I don't know if I'm
being clear enough; can you or anyone provide input on this particular
matter?

On 9 Cze, 02:14, BenH ben.hesk...@gmail.com wrote:
 Hi,

 I use Beaker in production to help speed up the delivery of game
 content.
 We've seen enormous (seconds to milliseconds) speed ups for caching
 large queries that don't change.

 We don't use it at the query level but as a way to cache whole results
 from sqlalchemy.
 As long as you remember to merge all the objects back into the Session
 you're fine.

 We currently don't use the 'memcache' but the 'memory' setting, it's
 the easiest to setup and gives very good results.
 The only caveat I have about the memory caching is that there is no
 way of setting how big it gets, if that's a problem use memcache
 instead.
 For me, I just set the lifetime of objects to an hour and that stops
 the memory growing without bounds.

 File caching doesn't give as good results because you have to hit the
 disk and that will lead to IO problems especially if you are using a
 database which will be using the disk as well.

 I hope this helps,

 Ben Hesketh

 On Jun 8, 2:43 pm, Łukasz Czuja luk...@czuja.pl wrote:







  Hi,

  I reviewed a couple of messages on the list concerning caching. While:

 http://www.sqlalchemy.org/trac/browser/examples/beaker_caching

  is quite comprehensive, it does not include any information about
  performance gains, or whether this example works for more complicated
  queries joining a couple of tables, returning aggregates or different
  objects from the same query. Are there any limitations on data types
  that cannot be cached? Also, there is no information as to whether this
  example would work if Beaker is set up with memcache as storage,
  whether it would be faster than file storage, and if so by how much.

  Can anyone provide some numbers on this concept? Does anyone have a
  success story to share?

  I'm dying to see how this performs and whether it is usable in a
  production environment.

  Any input is appreciated. Thanks in advance.
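
For what it's worth, a bare-bones sketch of the merge-back-into-the-Session
pattern Ben describes (cache, cache_key and session are hypothetical names,
not part of the Beaker example):

detached = cache.get(cache_key)   # plain detached instances from the cache
# re-attach each cached object to the current Session; load=False avoids
# re-fetching the row from the database
objects = [session.merge(obj, load=False) for obj in detached]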




RE: [sqlalchemy] Re: Trying to query a relationship of a relationship

2011-06-09 Thread King Simon-NFHD78
 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com]
 On Behalf Of Jules Stevenson
 Sent: 09 June 2011 08:53
 To: sqlalchemy@googlegroups.com
 Subject: [sqlalchemy] Re: Trying to query a relationship of a
 relationship
 
 Sorry, for the spamming, code typo (was trying to simplify it),
 should read:
 
   invoices = query(ArkInvoice).\
 join(ArkInvoice.project).\
 join(ArkProject.client).\
 
 options(sa.orm.contains_eager(ArkInvoice.project.client)).\
 filter(ArkInvoice.project.client.id == id)
 

I think you probably want something like this (all untested):

invoices = (session.query(ArkInvoice)
.join(ArkInvoice.project)
.join(ArkProject.client)
.filter(ArkClient.id == id)).all()

If you need contains_eager (which is purely an optimisation allowing you
to access invoice.project without a subsequent query), I think it would
look like this:

invoices = (session.query(ArkInvoice)
.join(ArkInvoice.project)
.join(ArkProject.client)
.options(contains_eager(ArkInvoice.project),
 contains_eager(ArkProject.client))
.filter(ArkClient.id == id)
.all())


However, if you are actually going to be working with the client,
project and invoice objects after this query, you may find it easier to
start from the client:

client = (session.query(ArkClient)
  .options(joinedload_all('projects.invoices'))
  .filter(ArkClient.id == id)
  .one())

After this query, you could access client.projects and
client.projects[n].invoices without further database queries.

See http://www.sqlalchemy.org/docs/orm/loading.html for a description of
joinedload_all.

I hope that helps,

Simon




RE: [sqlalchemy] Filtered backref

2011-06-09 Thread King Simon-NFHD78
 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com]
 On Behalf Of Joril
 Sent: 08 June 2011 22:41
 To: sqlalchemy
 Subject: [sqlalchemy] Filtered backref
 
 Hi everyone!
 Is it possible to have a many-to-one declarative relation between two
 classes and a _filtered_ backref?
 
 I'm trying to build a tagging system for my bloglike application, and
 to allow a user to apply private tags to posts of other people. My
 classes are:
 Owner
 Post
 TagAssociation
 Tag
 
 A Post has an Owner, while TagAssociation has a Tag, a Post and an
 Owner.
 
 Between TagAssociation and Post there's a many-to-one, and I'd like
 to
 configure a tags backref so that it would handle only the
 TagAssociations having the same Owner as the Post... Is this possible?
 
 Many thanks!
 

The 'relationship' function takes optional primaryjoin and secondaryjoin
parameters that control the join conditions for the relationship. So I
think you should be able to do something like this:

import sqlalchemy as sa
from sqlalchemy.orm import relationship

class TagAssociation(Base):
    # columns including owner_id and post_id

class Post(Base):
    # columns including id and owner_id
    tags = relationship(
        TagAssociation,
        primaryjoin=(sa.and_(id == TagAssociation.post_id,
                             owner_id == TagAssociation.owner_id)))

I think you would have to treat this relationship as readonly, so you
might need/want to add viewonly=True.

Hope that helps,

Simon




[sqlalchemy] Re: Polymorphic forces table definition outside of declarative class?

2011-06-09 Thread Eric Ongerth
Good fix, thanks.

And thanks for the first answer too, I had not thought of just using
__table__.c.discriminator.  I was thinking I would need to use
sometable.c.discriminator, which I wasn't sure would work.

Incidentally, I'm finally going to quit using inheritance for cross-
cutting concerns and try mixins instead... In particular, keeping one
set of unique identifiers for trackable objects that can be users,
documents, events, products, etc., so that a note or issue/ticket can
be affixed to any of the above, but without the mess of having to
draw primary keys for all of those tables from a single parent class,
all the accompanying baggage of inheritance, and joins incurred on
more queries than necessary.  Thanks again for the guidance on that in
the past.


On Jun 8, 2:41 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 8, 2011, at 5:19 PM, Eric Ongerth wrote:

  # Meanwhile, the following way of doing it doesn't work.
  # But it seems like it would make sense and might be worth enabling.

  class SomeClass(Base):

     __table__ = Table('sometable', Base.metadata,
         Column('id', Integer, primary_key=True),
         Column('discriminator', Text),
         Column('data', Text))

     __mapper_args__ = {'polymorphic_on': 'discriminator',
                        'polymorphic_identity': 'default'}

 polymorphic_on someday might be able to handle more than just a column, and 
 there's no dependency-oriented rationale for allowing the string there like 
 there is with relationship(), so it's better that it only accept the column 
 for now, which you can do easily enough via 
 {'polymorphic_on': __table__.c.discriminator} above.

  error it causes is not very clear.  The error would be fine if it
  quoted the piece of code which triggered it, for instance if it said:
  * AttributeError: 'str' object 'discriminator' has no attribute
  'proxy_set'.

 we have a canned "columns only" function used by relationship() to check its 
 arguments; while there are probably a lot more places such a checker could 
 go, it's additionally affixed to polymorphic_on in r760197daa0c2.
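
For reference, a short sketch of the working form described above, reusing
the same SomeClass example (untested):

class SomeClass(Base):

    __table__ = Table('sometable', Base.metadata,
        Column('id', Integer, primary_key=True),
        Column('discriminator', Text),
        Column('data', Text))

    # __table__ is already bound in the class body, so the column object
    # can be passed directly instead of the string name
    __mapper_args__ = {'polymorphic_on': __table__.c.discriminator,
                       'polymorphic_identity': 'default'}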




[sqlalchemy] Status of executed update

2011-06-09 Thread Martin
Hi,

I have a question:

Is it possible to tell whether an SQL UPDATE was successful from the
ResultProxy returned from executing the update statement:

result = connection.execute(update.where(pk_column == pk))

Thanks,
Martin
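
A sketch of the kind of check being asked about, using the rowcount
attribute of the ResultProxy (it reports how many rows the UPDATE matched;
exact behaviour can vary by DBAPI, and handle_missing_row is a hypothetical
helper):

result = connection.execute(update.where(pk_column == pk))
if result.rowcount == 0:
    # no row matched pk_column == pk, so nothing was updated
    handle_missing_row()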




[sqlalchemy] grouping of a query results after select_all()

2011-06-09 Thread Eduardo
Dear Sirs,
My problem is as follows: I have multiple queries that I save in a
list. When I have all my queries in the list, I apply the intersect_all
method:
q = qlist[0].intersect_all(*qlist[1:])
I get my result simply by:
res = q.all()
However, I would like to group the results by some other column:
q1 = q.group_by(tableints.columns['name'])
However, if I try
res = q1.all()
I get an error message implying that I have to apply group_by to each
of the queries stored in the list. Is there any way to apply group_by
only once for all queries in the list?
Thanks
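
An untested sketch of one possible approach: wrap the intersection with
Query.from_self() so that the GROUP BY is applied once to the combined
result rather than to each member query:

q = qlist[0].intersect_all(*qlist[1:])
# from_self() turns the intersection into a subquery that group_by()
# can then be applied to a single time
res = q.from_self().group_by(tableints.columns['name']).all()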




Re: [sqlalchemy] Re: porting GeoAlchemy to 0.7

2011-06-09 Thread Michael Bayer

On Jun 9, 2011, at 3:57 AM, Eric Lemoine wrote:

 Hi
 
 Here's another issue with porting GeoAlchemy to SQLAlchemy 0.7.
 
 
 So GeoA defines a TypeEngine, which looks like this:
 
 class Geometry(TypeEngine):
 
     def __init__(self, dimension=2, srid=4326, spatial_index=True, **kwargs):
         self.dimension = dimension
         self.srid = srid
         self.spatial_index = spatial_index
         self.kwargs = kwargs
         super(Geometry, self).__init__()
 
 
 Using the Geometry type with Oracle requires passing an additional
 argument to the constructor, namely diminfo:
 
Geometry(dimension=2, srid=4326, spatial_index=True,
 diminfo='the_diminfo_string')
 
 Then our Oracle-specific code uses type.kwargs['diminfo'] to access
 the diminfo value.
 
 
 This worked well with SQLA 0.6, but it doesn't work with SQLA 0.7.
 
 It doesn't work with 0.7 because SQLA may clone the type instance, and
 because of the way SQLA clones objects (constructor_copy), the clone
 does not have self.kwargs['diminfo'].

That's the default adaption provided by TypeEngine.adapt().    Provide your own 
adapt() that does what's needed.  For examples see Interval, Enum.
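
A rough sketch of the kind of adapt() override being suggested, carrying
the stored keyword arguments across to the copy (names follow the Geometry
example above; untested):

class Geometry(TypeEngine):

    def __init__(self, dimension=2, srid=4326, spatial_index=True, **kwargs):
        self.dimension = dimension
        self.srid = srid
        self.spatial_index = spatial_index
        self.kwargs = kwargs
        super(Geometry, self).__init__()

    def adapt(self, impltype, **kw):
        # hand the extra kwargs (e.g. diminfo) to the clone so that
        # dialect-specific code can still find them
        return impltype(dimension=self.dimension,
                        srid=self.srid,
                        spatial_index=self.spatial_index,
                        **self.kwargs)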




Re: [sqlalchemy] Re: porting GeoAlchemy to 0.7

2011-06-09 Thread Eric Lemoine
On Thu, Jun 9, 2011 at 6:28 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 On Jun 9, 2011, at 3:57 AM, Eric Lemoine wrote:

 Hi

 Here's another issue with porting GeoAlchemy to SQLAlchemy 0.7.


 So GeoA defines a TypeEngine, which looks like this:

 class Geometry(TypeEngine):

    def __init__(self, dimension=2, srid=4326, spatial_index=True, **kwargs):
        self.dimension = dimension
        self.srid = srid
         self.spatial_index = spatial_index
         self.kwargs = kwargs
         super(Geometry, self).__init__()


 Using the Geometry type with Oracle requires passing an additional
 argument to the constructor, namely diminfo:

    Geometry(dimension=2, srid=4326, spatial_index=True,
 diminfo='the_diminfo_string')

 Then our Oracle-specific code uses type.kwargs['diminfo'] to access
 the diminfo value.


 This worked well with SQLA 0.6, but it doesn't work with SQLA 0.7.

 It doesn't work with 0.7 because SQLA may clone the type instance, and
  because of the way SQLA clones objects (constructor_copy), the clone
 does not have self.kwargs['diminfo'].

 That's the default adaption provided by TypeEngine.adapt().    Provide your 
 own adapt() that does what's needed.  For examples see Interval, Enum.

Ok, I'll take a look at adapt(). Note that our Geometry type isn't
specific to Oracle though.

Thanks again.


-- 
Eric Lemoine

Camptocamp France SAS
Savoie Technolac, BP 352
73377 Le Bourget du Lac, Cedex

Tel : 00 33 4 79 44 44 96
Mail : eric.lemo...@camptocamp.com
http://www.camptocamp.com




Re: [sqlalchemy] Re: porting GeoAlchemy to 0.7

2011-06-09 Thread Michael Bayer

On Jun 9, 2011, at 12:37 PM, Eric Lemoine wrote:

 On Thu, Jun 9, 2011 at 6:28 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 
 
 That's the default adaption provided by TypeEngine.adapt().Provide your 
 own adapt() that does what's needed.  For examples see Interval, Enum.
 
 Ok, I'll take a look at adapt(). Note that our Geometry type isn't
 specific to Oracle though.

When you get it going, if you can show us what you're doing, we can create a 
prototypical version of your type, demonstrating the kind of "add new arguments 
per dialect" functionality it has, and add it to our test suite, to ensure 
those usage patterns don't break.   SQLAlchemy usually uses distinct type 
classes per backend to handle backend-specific arguments, so your approach of 
allowing DB-specific keyword arguments on a single type, while entirely 
appropriate in your case, isn't a pattern we test for at the moment.
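
A hypothetical illustration of the distinct-type-class-per-backend pattern
mentioned here (OracleGeometry and its diminfo attribute are invented names,
not SQLAlchemy or GeoAlchemy API):

from sqlalchemy.types import TypeEngine

class Geometry(TypeEngine):
    def __init__(self, dimension=2, srid=4326, spatial_index=True):
        self.dimension = dimension
        self.srid = srid
        self.spatial_index = spatial_index

class OracleGeometry(Geometry):
    # Oracle-only variant carrying the extra diminfo argument
    def __init__(self, diminfo, **kw):
        super(OracleGeometry, self).__init__(**kw)
        self.diminfo = diminfo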




[sqlalchemy] Incorrect DDL generated for server_default

2011-06-09 Thread MikeCo
Server defaults appear to be handled incorrectly in some cases (looks
like for string values) in versions 0.6 and 0.7. The code works in
0.5.

I ran into this with SQLite, and don't have access to other databases
right now.
Test case (code below):
1. create a table with a server-side default value (I am using DEFAULT
'A B' for illustration) by executing a DDL statement on a connection.
2. reflect that table into a metadata instance
3. drop the table
4. use metadata to recreate table

This should recreate the same table, and will do so in 0.5. In 0.6 and
0.7 the default is rendered as DEFAULT A B, without the single quotes
enclosing 'A B', leading to an OperationalError exception. The
interesting thing is that if step 1 is done by creating a Table
instance directly in the metadata, then the single quotes are
generated correctly. It appears that the root cause is that when
reflecting the table from SQLite (and maybe other databases as well)
the c.server_default.arg property is missing the single quotes in
0.6+, but the quotes are present in 0.5. I suspect that DDL generation
blindly plugs in whatever is present in c.server_default.arg, leading
to the error.

Sample code:
from sqlalchemy import __version__ as sa_ver
print 'SQLAlchemy version:', sa_ver
from sqlalchemy import *

def setup1():
    print '* Create table with Table class *'
    e = create_engine('sqlite:///', echo=True)
    meta = MetaData(bind=e)
    dflt_tbl = Table('test_default', meta,
        Column('id', Integer, primary_key=True),
        Column('int_dflt', Integer, server_default=text(str(0))),
        Column('str_dflt', String, server_default=text("'A B'"))
        )
    meta.create_all()
    return meta

def setup2():
    print '* Create table with external DDL *'
    e = create_engine('sqlite:///', echo=True)
    meta = MetaData(bind=e)
    e.execute(text("""CREATE TABLE test_default (
        id INTEGER NOT NULL,
        int_dflt INTEGER DEFAULT 0,
        str_dflt VARCHAR DEFAULT 'A B',
        PRIMARY KEY (id)
        )"""))
    meta.reflect()
    return meta

def tryit(meta):
    print '* Examine / use the metadata *'
    tbl = meta.tables['test_default']
    for c in tbl.c:
        if c.server_default:
            print '* Column [%s] has server default [%s]' \
                % (c.name, c.server_default.arg)
    tbl.drop()
    meta.create_all()

meta = setup1()
tryit(meta)
meta = setup2()
tryit(meta)




Re: [sqlalchemy] Incorrect DDL generated for server_default

2011-06-09 Thread Michael Bayer

On Jun 9, 2011, at 5:10 PM, MikeCo wrote:

 This should recreate the same table, and will do so in 0.5. In 0.6 and
 0.7 the default is rendered as DEFAULT A B, without the single quotes
 enclosing 'A B', leading to an OperationalError exception. The
 interesting thing is that if step 1 is done by creating a Table
 instance directly in the metadata, then the single quotes are
 generated correctly. It appears that the root cause is that when
 reflecting the table from SQLite (and maybe other databases as well)
 the c.server_default.arg property is missing the single quotes in
 0.6+, but the quotes are present in 0.5. I suspect that DDL generation
 blindly plugs in whatever is present in c.server_default.arg, leading
 to the error.

The column inspection features of each backend tend to give us the default 
value appropriately quoted.   The SQLite dialect itself is stripping out the 
quotes, most likely due to an artifact of 0.5's system, which was inconsistent 
in its treatment of quoting around server defaults.

Narrowing down your test gives us:

from sqlalchemy import *
from sqlalchemy.schema import CreateTable

e = create_engine('sqlite:///', echo=True)
#e = create_engine('mysql://scott:tiger@localhost/test', echo=True)
#e = create_engine('postgresql://scott:tiger@localhost/test', echo='debug')

meta = MetaData(e)
dflt_tbl = Table('test_default', meta,
   Column('id', Integer, primary_key=True),
   Column('int_dflt', Integer, server_default=text(str(0))),
   Column('str_dflt', String(10), server_default=text("'A B'"))
   )
meta.create_all()


m2 = MetaData(e)
t = Table("test_default", m2, autoload=True)

print CreateTable(dflt_tbl).compile(dialect=e.dialect)
print CreateTable(t).compile(dialect=e.dialect)

MySQL and PG both produce a table definition that maintains the default.   
SQLite's doesn't due to line 643 of base.py.

Here's a patch:

diff -r f9faaf09e7b7 lib/sqlalchemy/dialects/sqlite/base.py
--- a/lib/sqlalchemy/dialects/sqlite/base.py    Wed Jun 08 17:56:00 2011 -0400
+++ b/lib/sqlalchemy/dialects/sqlite/base.py    Thu Jun 09 18:28:22 2011 -0400
@@ -637,10 +637,9 @@
             row = c.fetchone()
             if row is None:
                 break
-            (name, type_, nullable, default, has_default, primary_key) = (row[1], row[2].upper(), not row[3], row[4], row[4] is not None, row[5])
+            (name, type_, nullable, default, has_default, primary_key) = \
+                (row[1], row[2].upper(), not row[3], row[4], row[4] is not None, row[5])
             name = re.sub(r'^\"|\"$', '', name)
-            if default:
-                default = re.sub(r"^\'|\'$", '', default)
             match = re.match(r'(\w+)(\(.*?\))?', type_)
             if match:
                 coltype = match.group(1)


This is ticket 2189, targeted at 0.7.2; it mostly needs tests in this regard (since 
the quoting clearly isn't tested):

http://www.sqlalchemy.org/trac/ticket/2189







[sqlalchemy] Re: How to tweak pylint for SA models?

2011-06-09 Thread kris
Did you ever find a solution to this?  I am facing the same issue.





Re: [sqlalchemy] Re: How to tweak pylint for SA models?

2011-06-09 Thread Michael Bayer
I would think using declarative would solve most of it as the attributes are 
defined explicitly on the class.
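
A minimal declarative sketch for comparison; because id and name are plain
class attributes, static checkers such as pylint can see them without any
plugin:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    # attributes are declared explicitly on the class
    id = Column(Integer, primary_key=True)
    name = Column(String(50))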




On Jun 9, 2011, at 8:20 PM, kris wrote:

 Did you ever find a solution to this?  I am facing the same issue.
 
 
 



