[sqlalchemy] SA and pyodbc connections

2007-06-06 Thread scurra

Hi

I cannot connect to SQL Server 2005 (Express) from my linux box using
the suggested SA method.  I can connect to SQL Server using sqsh which
indicates that freetds and unixodbc are correctly set up.

I can also connect using pyodbc, although only with the DSN method or
with DRIVER={}.  For details, see:
  http://sourceforge.net/forum/message.php?msg_id=4348161

This is probably more of a pyodbc problem (or a problem created by me
(wouldn't be the first time)) so this post is more of an FYI and a
WTF? than a bug report ;-)

The following thread contains reference to SA's pyodbc connection
mechanism:
  
http://groups.google.co.uk/group/sqlalchemy/browse_thread/thread/b5909dceb2178a1d

It seems that setting the DRIVER keyword to {SQL Server} is the method
employed by SA:

 def make_connect_string(self, keys):
     connectors = ["Driver={SQL Server}"]
     ...


So, I guess my questions are: how can I get SA to use the DSN I have
defined in /etc/freetds/freetds.conf and /etc/odbc*.conf?  Or, how can
I force SA to use DRIVER={} instead of Driver={SQL Server}?  (Note
that, for me, the DRIVER keyword is case-sensitive when using pyodbc
directly.)
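
For reference, the two pyodbc connection styles described above look roughly
like this; a sketch only, with the DSN name, server, user and password as
placeholders:

    import pyodbc

    # Style 1: a DSN defined in odbc.ini (backed by freetds.conf on Linux).
    conn = pyodbc.connect("DSN=mydsn;UID=someuser;PWD=secret")

    # Style 2: an explicit (empty) DRIVER keyword plus server details.
    conn = pyodbc.connect("DRIVER={};SERVER=myserver;PORT=1433;"
                          "DATABASE=testdb;UID=someuser;PWD=secret")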

Thanks for a kick-ass database doo-hicky (I am still getting to know
all the TLA's)

Cheers,
Ryan





[sqlalchemy] Query generation in 0.3.8 is broken?

2007-06-06 Thread [EMAIL PROTECTED]

Hi there.

I have just migrated from 0.3.6 to 0.3.8 and got the following error in
my app:
<class 'sqlalchemy.exceptions.NoSuchColumnError'>

Investigation shows that the queries generated by 0.3.6 and 0.3.8
differ:

(a diff of the two generated queries, whitespace normalized):
--- 1   2007-06-06 14:40:44.0 +0400
+++ 2   2007-06-06 14:40:50.0 +0400
@@ -240,64 +240,6 @@
 task_id
 FROM
-(SELECT task.id AS task_id, prop_c_s.task_id AS prop_c_s_task_id,
-        task.updated AS task_updated
- FROM task
- JOIN (SELECT task.id AS task_id, count(msg.id) AS props_cnt
-       FROM task LEFT OUTER JOIN msg ON task.id = msg.task_id
-       GROUP BY task.id) AS prop_c_s
-   ON task.id = prop_c_s.task_id
- WHERE (task.prj_id = %s)
- ORDER BY task.updated DESC
- LIMIT 10 OFFSET 0) AS tbl_row_count,
 task JOIN (SELECT
@@ -439,24 +381,12 @@
 task.task_type_id
 WHERE
-task.id = tbl_row_count.task_id
-AND task.id = tbl_row_count.prop_c_s_task_id
-AND prop_c_s.task_id = tbl_row_count.task_id
-AND prop_c_s.task_id = tbl_row_count.prop_c_s_task_id
+(task.prj_id = %s)
 ORDER BY
-tbl_row_count.task_updated
+task.updated
 DESC, anon_1649.id, anon_f48c.task_id,
@@ -470,7 +400,7 @@
 anon_3d17.task_id, anon_0e68.id 2007-06-04
-19:58:33,976

The query is the following:

j  = outerjoin(task_t, message_t, task_t.c.id == message_t.c.task_id)
jj = select([task_t.c.id.label('task_id'),
             func.count(message_t.c.id).label('props_cnt')],
            from_obj=[j], group_by=[task_t.c.id]).alias('prop_c_s')
jjj = join(task_t, jj, task_t.c.id == jj.c.task_id)
#jjj = outerjoin(task_effort_t, jjj, task_effort_t.c.task_id == jjj.c.task_id)

cls.mapper = mapper(cls, jjj,
                    order_by=[desc(task_t.c.updated)],
                    properties=dict(
                        type=relation(Task_Type, lazy=False),
                        status=relation(Task_Status, lazy=False, uselist=False),
                        publication=relation(Task_Publication, lazy=False, uselist=False),
                        summary=deferred(task_t.c.summary),
                        progress=relation(Task_Progress, lazy=False, uselist=False),
                        appointment=relation(Task_Appointment, lazy=False, uselist=False),
                    ))

The idea of this query is that I build a mapper on a join of tables which
already have mappers of their own, and add some aggregate functions.  It
worked OK in 0.3.6.

I then run (not exactly, but something like):

session.query(cls.mapper).limit(...).offset(...).list()

It works fine without limit/offset.  Does anybody have an idea what
happened?  Is it my fault, or might it be a bug in 0.3.8?

I could, of course, roll back to 0.3.6, but I do not want to: there is a
bug with unicode rows in 0.3.6, which is what made me upgrade in the
first place.





[sqlalchemy] Re: Accessing tables in multiple databases with one connection (MSSQL)

2007-06-06 Thread Rick Morrison
MSSQL has a history of confusion about what is owner and what is schema,
and there has been shifting back and forth over the years as they tried to
decide if they wanted to be like Oracle, or like Sybase.

Recently the 2005 version added real schemas as well, so a table
identifier can I believe now be a *four*-part entity:
database.schema.owner.table or if you like, it's probably just as correct to
say database.owner.schema.table.

That, I think, is getting into overkill-land. If SA can support a three-part
table namespace, then let's decide on a convention that makes sense across
most of the supported databases. To me, that convention seems best to be
schema, owner and table.

How that then maps to MSSQL is up to the user according to local convention.
Whether you're doing cross-joins against database.schema.table, or
schema.owner.table, who cares -- it's partone.parttwo.partthree. As long as
it's clear that for SA, partone is schema, parttwo is owner and
partthree is table, you can map as you like.
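
As a rough sketch of the convention being floated here (not a statement of
current SA behaviour), the idea is that whatever string you hand to the
Table's schema argument becomes the dotted prefix, and the MSSQL user picks
the interpretation; 'otherdb.dbo' below is a placeholder:

    from sqlalchemy import *

    metadata = MetaData()

    # Read 'otherdb.dbo' as database.owner or schema.owner, whichever
    # matches your local convention; SA would just prepend it to the name.
    accounts = Table('accounts', metadata,
                     Column('id', Integer, primary_key=True),
                     schema='otherdb.dbo')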

PS - if you like this, ya shoulda been here for the metadata debates a while
back.

On 6/5/07, A Monkey [EMAIL PROTECTED] wrote:


 It would be terrific to be able to satisfy my use case, but is
 schema.owner.table the right way to go? It seems as though schema
 and owner mean similar things in MSSQL and it might be confusing to
 use them as Rick proposed. I guess you'd just have to be pretty clear
 in documentation.

 What about the quote_schema attribute on the Table? I was going to
 suggest adding a check of  quote_schema in the
 ANSIIdentifierPreparer.format_table(), but that doesn't seem quite
 right either. Even if the quote_schema default were changed to True
 this would still mean that you would not be quoting some identifiers
 that required it. It would feel kind of like a hack.

 I guess the schema.owner.table thing is probably best.

 





[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread svilen

on a side note, here or for the query(), once i add .order_by() and
similar, will there be a possibility to remove/cancel them?
e.g. .order_by(None) - similar to .join(None)?

or should i keep a copy at the point before adding .order_by()?

e.g.
 i want:
 q1 = query.filter( ...).order_by(z)#the final most used query
 ...
 q2 = q1.order_by(None)#used sometimes e.g. for count

vs
 i could do:
 q0 = query.filter( ...) #saved
 q1 = q0.clone().order_by(z) #the final most used query
 ...
 q2 = q0 #used sometimes e.g. for count

i know this above looks very similar, but in the overall workflow, q1 and
q2 can end up very far apart.

 I want to wake this thread up again.  can we get some arguments
 pro/ con to converting select() to work in a generative style ?




[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread Marco Mariani

svilen wrote:
 on a side note, here or for the query(), once i add .order_by() and 
 similar, will be a possibility to remove/cancel them? 
 e.g. .order_by(None) - similar to .join(None)?

 or should i keep a copy at the point before adding .order_by()?

 e.g.
  i want:
  q1 = query.filter( ...).order_by(z)#the final most used query
  ...
  q2 = q1.order_by(None)  #used sometimes e.g. for count

 vs
  i could do:
  q0 = query.filter( ...) #saved
  q1 = q0.clone().order_by(z) #the final most used query
  ...
  q2 = q0 #used sometimes e.g. for count
   

and why not

q = query.filter(...)
q1 = q.order_by(z)
...
q2 = q

??





[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread svilen

 svilen wrote:
  on a side note, here or for the query(), once i add .order_by()
  and similar, will be a possibility to remove/cancel them?
  e.g. .order_by(None) - similar to .join(None)?
 
  or should i keep a copy at the point before adding .order_by()?
 
  e.g.
   i want:
   q1 = query.filter( ...).order_by(z)#the final most used
  query ...
   q2 = q1.order_by(None)#used sometimes e.g. for count
 
  vs
   i could do:
   q0 = query.filter( ...) #saved
   q1 = q0.clone().order_by(z) #the final most used query
   ...
   q2 = q0 #used sometimes e.g. for count

 and why not

 q = query.filter(...)
 q1 = q.order_by(z)
 ...
 q2 = q

 ??
because q1 with the order is _the_ query, made at point A somewhen, 
and stored there as a construct; much later at some point B i need to 
use that query but without the ordering - now i have to keep 2 copies 
of the query, w/ and w/out order. And this strip-the-ordering could 
be generic, applicable to any query...





[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread Marco Mariani

svilen wrote:

 because q1 with the order is _the_ query, made at point A somewhen, 
 and stored there as a construct; much later at some point B i need to 
 use that query but without the ordering - now i have to keep 2 copies 
 of the query, w/ and w/out order. And this strip-the-ordering could 
 be generic, applicable to any query...
   

Basically you're asking for that to gain some performance on 
q2.execute(), and at the same time you want to avoid littering your 
function's namespace.

Ok, I understand. I find the API is cleaner without that feature, but I 
am nobody here :-)






[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread Rick Morrison
Beyond the API littering, there may be instances where it is difficult or
impossible to remove a query attribute, because adding the attribute caused
a join calculation or reordered parentheses, or whatever.

The second pattern is better, e.g. save a copy, rather than mucking things
up with removal code. One of the aims here is to simplify the API, and IMO
adding removal code works against that.

On 6/6/07, Marco Mariani [EMAIL PROTECTED] wrote:


 svilen wrote:

  because q1 with the order is _the_ query, made at point A somewhen,
  and stored there as a construct; much later at some point B i need to
  use that query but without the ordering - now i have to keep 2 copies
  of the query, w/ and w/out order. And this strip-the-ordering could
  be generic, applicable to any query...
 

 Basically you're asking for that to gain some performance on
 q2.execute(), and at the same time you want to avoid littering your
 function's namespace.

 Ok, I understand. I find the API is cleaner without that feature, but I
 am nobody here :-)



 





[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread svilen

it's not about littering API / namespaces or not, but if u perceive it
just as such... nevermind.
i admit that undoing some change can be difficult or just impossible 
in certain cases. So i'll do a sort of command pattern then, keeping 
intermediate queries.
Forget that i asked.

 Beyond the API littering, there may be instances where it is
 difficult or impossible to remove a query attribute, because adding
 the attribute caused a join calculation or reordered parenthesis,
 or whatever.

 The second pattern is better, e.g. save a copy, rather than
 mucking things up with removal code. One of the aims here is to
 simplify the API, and IMO adding removal code works against that.

 On 6/6/07, Marco Mariani [EMAIL PROTECTED] wrote:
  svilen wrote:
   because q1 with the order is _the_ query, made at point A
   somewhen, and stored there as a construct; much later at some
   point B i need to use that query but without the ordering - now
   i have to keep 2 copies of the query, w/ and w/out order. And
   this strip-the-ordering could be generic, applicable to any
   query...
 
  Basically you're asking for that to gain some performance on
  q2.execute(), and at the same time you want to avoid littering
  your function's namespace.
 
  Ok, I understand. I find the API is cleaner without that feature,
  but I am nobody here :-)





[sqlalchemy] Re: In Turbogears, using the same transaction for mapped objects and direct operations

2007-06-06 Thread Sanjay

 conn = session.context.get_current().connection(SomeMappedClass)
 conn.execute(...)

Thanks! It is exactly what I needed.

BTW, the session transaction is stored in
cherrypy.request.sa_transaction. Does this help simplify the
statements?

thanks again,
Sanjay





[sqlalchemy] Re: like doesn't work with objects

2007-06-06 Thread Techniq



On Jun 6, 12:37 am, Mike Orr [EMAIL PROTECTED] wrote:
 On 6/5/07, Techniq [EMAIL PROTECTED] wrote:





  I'm going through the wiki cookbook
 http://docs.pythonweb.org/display/pylonscookbook/SQLAlchemy+for+peopl...
  and I'm discovering that even though 'model.class.c.column_name.like'
  is available it doesn't perform a LIKE in the query.

  from 'paster shell'

  In [20]:
  model.Accounting.select(model.Accounting.c.usr.like('TSmith'))
  Out[21]:
  [<syslogsql.models.Accounting object at 0x2626d70>,
   <syslogsql.models.Accounting object at 0x2626ad0>,
   <syslogsql.models.Accounting object at 0x2626910>]
  In [22]: model.Accounting.select(model.Accounting.c.usr.like('Smith'))
  Out[22]: []

  ...BUT...

  In [23]: model.Accounting.select(model.Accounting.c.usr.like('%Smith
  %'))
  Out[27]:
  [<syslogsql.models.Accounting object at 0x262d670>,
   <syslogsql.models.Accounting object at 0x2626d70>,
   <syslogsql.models.Accounting object at 0x2626ad0>,
   <syslogsql.models.Accounting object at 0x262d770>,
   <syslogsql.models.Accounting object at 0x262d790>,
   <syslogsql.models.Accounting object at 0x262d7b0>,
   <syslogsql.models.Accounting object at 0x2626910>]

  Should I have to add the '%' around the string?

 What is the SQL in those cases?  (engine.echo = True)

 Yes, you need the '%':  column.like('%Smith%')  The reason is that you
 may prefer the wildcard in a different position: '%son', 'ra%s'.

 It should work.  I'm using an ORM query like that now, and I think I
 had a select query with .like before that.

 --
 Mike Orr [EMAIL PROTECTED]\

OK, that's all I needed: confirmation that the '%' was required.  I
didn't think about having it in a different position in the string; makes
perfect sense.  Thanks.
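
For later readers, the wildcard-placement point in a standalone sketch (the
table and column names here are invented):

    from sqlalchemy import Table, Column, String, MetaData

    metadata = MetaData()
    accounting = Table('accounting', metadata, Column('usr', String(40)))

    # '%' matches any run of characters; put it wherever the match should float.
    contains_smith = accounting.c.usr.like('%Smith%')   # usr LIKE '%Smith%'
    starts_with    = accounting.c.usr.like('Smith%')    # usr LIKE 'Smith%'
    ends_with      = accounting.c.usr.like('%son')      # usr LIKE '%son'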





[sqlalchemy] Re: Query generation in 0.3.8 is broken?

2007-06-06 Thread Michael Bayer

can you send me full Table/Mapper/class constructs, running against  
sqlite://,  so i can run this example, thanks.

On Jun 6, 2007, at 8:30 AM, [EMAIL PROTECTED] wrote:


 To be more precise,

 the code:
 j  = outerjoin( task_t, message_t, task_t.c.id==message_t.c.task_id)
 jj = select([ task_t.c.id.label('task_id'),
   func.count(message_t.c.id).label('props_cnt')],
   from_obj=[j], group_by=[task_t.c.id]).alias('prop_c_s')
 jjj = join(task_t, jj, task_t.c.id == jj.c.task_id)
 #jjj = outerjoin(task_effort_t, jjj, task_effort_t.c.task_id
 == jjj.c.task_id)

 class cls(object):pass

 props =dict(type=relation(Task_Type, lazy=False))
 cls.mapper = mapper( cls, jjj, order_by=[desc(task_t.c.updated)],
 properties=props)

 #works
 cls.mapper.properties = {}
 for t in session.query(cls.mapper).limit(10).offset(0).list():
 print t.id, t.title, t.props_cnt

 # works
 for t in select([jjj], from_obj=[jjj],
limit=10, offset=0).execute(bind_to=session.bind_to):
 print t.id, t.title, t.props_cnt

 orm.clear_mapper(cls.mapper)
 cls.mapper = mapper( cls, jjj, order_by=[desc(task_t.c.updated)],
 properties=props)

 # not works
 for t in session.query(cls.mapper).limit(10).offset(0).list():
 print t.id, t.title, t.props_cnt


 





[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread Michael Bayer


On Jun 5, 2007, at 10:30 PM, Mike Orr wrote:

 I do think .append_whereclause should be changed to .append_to_where.
 A SQL statement can have only one WHERE clause; what you're actually
 appending is an AND operand.  .append_to_where seems to get that
 across better than .append_whereclause or .append_where.  The word
 clause is superfluous because all parts of a SQL statement are
 called clauses.

I know phrases like append_to_where are more correct, but its a  
lot of typing.  I had in mind just where().  i dont think people  
trip over the meaning of multiple where() statements.
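
A quick sketch of how the proposed spelling would sit next to the existing
append_whereclause(), assuming each where() call simply ANDs another operand
onto the single WHERE clause (table and column names invented):

    from sqlalchemy import Table, Column, Integer, String, MetaData, select

    metadata = MetaData()
    users = Table('users', metadata,
                  Column('id', Integer, primary_key=True),
                  Column('name', String(40)))

    # current spelling: mutate the statement in place
    s = select([users])
    s.append_whereclause(users.c.name == 'ed')
    s.append_whereclause(users.c.id > 5)

    # proposed generative spelling (not available at the time of this thread):
    # each where() returns a statement with one more AND operand
    s2 = select([users]).where(users.c.name == 'ed').where(users.c.id > 5)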





[sqlalchemy] UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth

For reference:

http://www.mail-archive.com/[EMAIL PROTECTED]/msg02239.html

I found the above discussion when googling a ProgrammingError i've
been getting with a polymorphic_union:

<quote>
sqlalchemy.exceptions.SQLError: (ProgrammingError) UNION types numeric
and character varying cannot be matched
'SELECT ijoin.type AS ijoin_type, ijoin.id AS ijoin_id, ijoin.size AS
ijoin_size \nFROM (SELECT items.type AS type, skiboots.id AS id,
skiboots.size AS size \nFROM items JOIN skiboots ON items.id =
skiboots.id UNION ALL SELECT anon_c93f.type AS type, anon_c93f.id AS
id, CAST(NULL AS VARCHAR(20)) AS size \nFROM (SELECT items.id AS id,
items.type AS type \nFROM items \nWHERE items.type = %(items_type)s)
AS anon_c93f UNION ALL SELECT items.type AS type, skis.id AS id,
skis.size AS size \nFROM items JOIN skis ON items.id = skis.id) AS
ijoin ORDER BY ijoin.id' {'items_type': 'item'}
</quote>

Here's a minimal test script:

from sqlalchemy import *

db = create_engine('postgres://postgres:[EMAIL PROTECTED]:5432/
testrentals',
encoding='utf-8')
metadata = BoundMetaData(db)

items = Table('items', metadata,
Column('id', Integer, primary_key=True, autoincrement=True),
Column('type', String(20)))

skis = Table('skis', metadata,
Column('id', Integer, primary_key=True),
Column('size', String(20), nullable=False),
ForeignKeyConstraint(['id'], ['items.id']))

skiboots = Table('skiboots', metadata,
Column('id', Integer, primary_key=True),
Column('size', types.Numeric(3,1)),
ForeignKeyConstraint(['id'], ['items.id']))

item_join = polymorphic_union(
{
'ski':items.join(skis),
'skiboot':items.join(skiboots),
'item':items.select(items.c.type=='item'),
}, None, 'ijoin')

class Item(object):pass
class Ski(Item): pass
class SkiBoot(Item): pass

item_mapper = mapper(Item, items,
select_table=item_join,
polymorphic_on=item_join.c.type,
polymorphic_identity='item')

ski_mapper = mapper(Ski, skis,
inherits=item_mapper,
polymorphic_identity='ski')

skiboot_mapper = mapper(SkiBoot, skiboots,
inherits=item_mapper,
polymorphic_identity='skiboot',
inherit_condition = items.c.id==skiboots.c.id)

if __name__ == "__main__":
session = create_session()
print session.query(Item).select()

##


So, skis are working fine but skiboots aren't.  If I either comment
out the 'size' column in the skiboots table:
# Column('size', types.Numeric(3,1)),
- or - comment out the 'skiboots' line in the item_join:
# 'skiboot':items.join(skiboots),

...then it runs ok.

Maybe I'm making incorrect use of the Numeric type?  Or is this a
bug?  I want to use the Numeric type because i'd like to represent
boot sizes as, e.g., 9.5, 10.0, 10.5.  For skis the size is a String
because sometimes it's a number and sometimes a nominal size like "s",
"m", "xl", etc.  No problem with the string; I'm just wondering if I
can use the Numeric type as sqla currently stands.  Do I have to
define a custom type and stash Numerics in string representations?

Thanks!





[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread Eric Ongerth


On Jun 6, 8:32 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 5, 2007, at 10:30 PM, Mike Orr wrote:

  I do think .append_whereclause should be changed to .append_to_where.
  A SQL statement can have only one WHERE clause; what you're actually
  appending is an AND operand.  .append_to_where seems to get that
  across better than .append_whereclause or .append_where.  The word
  clause is superfluous because all parts of a SQL statement are
  called clauses.

 I know phrases like append_to_where are more correct, but its a
 lot of typing.  I had in mind just where().  i dont think people
 trip over the meaning of multiple where() statements.

+1 for where()

also for group_by() and having(), no?

...see:
http://www.mail-archive.com/[EMAIL PROTECTED]/msg03449.html





[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread Rick Morrison
Yeah, I'm +1 on .where() as well.


On 6/6/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jun 5, 2007, at 10:30 PM, Mike Orr wrote:

  I do think .append_whereclause should be changed to .append_to_where.
  A SQL statement can have only one WHERE clause; what you're actually
  appending is an AND operand.  .append_to_where seems to get that
  across better than .append_whereclause or .append_where.  The word
  clause is superfluous because all parts of a SQL statement are
  called clauses.

 I know phrases like append_to_where are more correct, but its a
 lot of typing.  I had in mind just where().  i dont think people
 trip over the meaning of multiple where() statements.


 





[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Michael Bayer


On Jun 6, 2007, at 11:32 AM, Eric Ongerth wrote:
 So, skis are working fine but skiboots aren't.  If I either comment
 out the 'size' column in the skiboots table:
 # Column('size', types.Numeric(3,1)),
 - or - comment out the 'skiboots' line in the item_join:
 # 'skiboot':items.join(skiboots),

 ...then it runs ok.

 Maybe I'm making incorrect use of the Numeric type?  Or is this a
 bug?  I want to use the Numeric type because i'd like to represent
 boot sizes as, e.g., 9.5, 10.0, 10.5.  For skis the size is a String
 because sometimes it's a number and sometimes a nominal size like s,
 m, xl, etc.  No problem with the string; i'm just wondering if I
 can use the Numeric type as sqla currently stands.  Do I have to
 define a custom type and stash Numerics in string representations?

 Thanks!



your size column differs in type.  you cant create a UNION with  
differing types in the unioned queries.  so it can either be both  
string, both numeric, or use distinct columns.

its basically this:

create table a (id integer primary key, size varchar(30))

create table b (id integer primary key, size integer)

select id, size from a UNION ALL select id, size from b   - bzzt

note that version 0.4 of SQLAlchemy, which you can play with right in  
its branch, has a new feature whereby polymorphic loading can be  
achieved without using UNION (it issues additional SELECT statements,  
so is not as efficient for large loads).
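
One possible way out, if both child tables really must expose a column named
'size', is to build the union by hand and coerce each branch to a common type
instead of relying on polymorphic_union's automatic CAST(NULL AS VARCHAR);
a sketch only, reusing the items/skis/skiboots tables from the original post:

    item_join = union_all(
        select([items.c.id, items.c.type, skis.c.size.label('size')],
               from_obj=[items.join(skis)]),
        select([items.c.id, items.c.type,
                cast(skiboots.c.size, String(20)).label('size')],  # Numeric -> VARCHAR
               from_obj=[items.join(skiboots)]),
        select([items.c.id, items.c.type,
                cast(null(), String(20)).label('size')],
               items.c.type == 'item'),
    ).alias('ijoin')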




[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread Michael Bayer


just to note, I am leaning towards very simple generative method  
names for all the things we need, where(), having(), order_by(),  
group_by(), distinct(), etc.  I am going to have it do copy on  
generate by default.

the copy operation itself will be pretty quick, working the way I
have it working with Query now in the 0.4 branch, i.e. its just q =
Query.__new__() then a q.__dict__ = self.__dict__.copy().  the deep
copy operations occur upon the generative methods...i.e. instead of
saying q.where_clauses.append(clause), it does q.where_clauses =
q.where_clauses + [clause]...so you localize the more expensive
collection copying operations to that of one per each generative method.

I will probably have a little flag generative=False which will just  
be mentioned in the docstring and thats it (i.e., im not going to  
push the usage of that flag, it wont usually be needed).

to make a select statement all at once, you will still be able to do  
what we do now in most cases, just using select([columns],  
whereclause, **otherargs).

one thing i like about rebuilding select this way is that i will be  
able to rewrite its internals to halfway make sense...its a little  
overgrown in there right now.
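
In plain Python, the copy-on-generate scheme described above boils down to
something like the following; a toy sketch, not SA's actual class:

    class GenerativeQuery(object):
        def __init__(self):
            self.where_clauses = []

        def _clone(self):
            q = self.__class__.__new__(self.__class__)
            q.__dict__ = self.__dict__.copy()       # cheap shallow copy
            return q

        def where(self, clause):
            q = self._clone()
            q.where_clauses = q.where_clauses + [clause]   # rebuild, don't append
            return q

    q1 = GenerativeQuery().where("x = 1")
    q2 = q1.where("y = 2")
    assert q1.where_clauses == ["x = 1"]    # q1 is untouched by q2's where()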




[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread Mike Orr

.where() is OK.

On 6/6/07, svilen [EMAIL PROTECTED] wrote:
  q2 = q1.order_by(None)#used sometimes e.g. for count

This would be useful.  If a second .order_by can replace the ordering
(as opposed to appending to it), I don't see why it would be difficult
to delete it.  .order_by shouldn't add another join condition, and
even if it did and that can't be deleted, so what?  If the user wanted
an absolutely clean query, they should have constructed it cleanly in
the first place.

However, I like the way Query.count() ignores the order_by.  This
allows my functions to return a Query, and the caller can call
.list(), .count(), selectone(), or another aggregate method as they
wish, and it does the right thing.
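
The pattern being described, sketched against the Task mapping from the 0.3.8
thread elsewhere in this digest (exact method availability depends on the
0.3.x version in use):

    def project_tasks(session, prj_id):
        # build the query once; the caller decides how to execute it
        return session.query(Task).filter(task_t.c.prj_id == prj_id) \
                                  .order_by(desc(task_t.c.updated))

    q = project_tasks(session, 42)
    total = q.count()                      # count() ignores the order_by
    page  = q.limit(10).offset(0).list()   # paged, ordered results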

I'm concerned that we're building an elaborate API that is kinda like
Query but not identical.

-- 
Mike Orr [EMAIL PROTECTED]




[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth

So then I thought: maybe  I just need to reflect the skiboots table
and override the size column to the desired type?  That would make
sense... so I tried it, using the same script as above but adding the
line autoload=True as the final clause in each Table definition.

Now i'm getting a different error:
sqlalchemy.exceptions.ArgumentError: Can't determine join between
'items' and 'skis'; tables have more than one foreign key constraint
relationship between them. Please specify the 'onclause' of this join
explicitly.

Ok, so my item_join definition was too loose.
Changed the ski and skiboot lines in it to read:
'ski':join(items, skis, items.c.id==skis.c.id),
'skiboot':join(items, skiboots, items.c.id==skiboots.c.id)

...and still get the same error.  How much more specific can I get
with my onclause?  In each case the items table and each of its
children are only joined by a single column, 'id'.

I can't see that the 'items' and 'skis' table have more than one
foreign key constraint relationship between them.  What am i missing?





[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth

oops, sorry -- I was adding my reply while you were still writing
yours.





[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread Mike Orr

On 6/6/07, Michael Bayer [EMAIL PROTECTED] wrote:
 just to note, I am leaning towards very simple generative method
 names for all the things we need, where(), having(), order_by(),
 group_by(), distinct(), etc.  I am going to have it do copy on
 generate by default.

If a generative default can be efficient, it would avoid the dilemma
of "Generative or not?", while also being parallel with Query.

Otherwise, returning 'self' would be fine, and I promise to look the
other way. :)  Then I could do:
q.order_by(...)
instead of
q = q.order_by(...)
While those who prefer the latter can do that, and if you really need a copy:
q = q.clone().order_by(...)

Keep in mind that modifying the query is much more frequent than copying it.

-- 
Mike Orr [EMAIL PROTECTED]




[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth


On Jun 6, 8:47 am, Michael Bayer [EMAIL PROTECTED] wrote:
 your size column differs in type.  you cant create a UNION with
 differing types in the unioned queries.  so it can either be both
 string, both numeric, or use distinct columns.

Ah!  OK, if I were more experienced with unions/joins I would have
realized that, the way I was doing it, I was asking SA to smash two
'size' columns of different type together.  So I can just ask SA to do
some column aliasing if I really need the column on each child table to
be identically named 'size'; or, if I don't mind the change, I can
switch to having skis have a 'ski_size' column and skiboots have a
'skiboot_size' column, etc.  Correct?





[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread svilen

 While those who prefer the latter can do that, and if you really
 need a copy: 
   q = q.clone().order_by(...) 
"explicit is better than implicit" is one rule that may apply here.
Not that i enslave myself with those rules, but they do make sense in
most cases in the long run.

Michael, u hold the bread, u hold the knife, the choice is yours (-;)
ciao
svil




[sqlalchemy] Re: SA and pyodbc connections

2007-06-06 Thread scurra

Hi Paul

Thanks for the advice.  I have checked out the subversion SQLAlchemy,
made the change to mssql.py, and run the unit tests.

648c648
<  connectors = ["Driver={SQL Server}"]
---
>  connectors = ["DRIVER={}"]

src/sqlalchemy$ python test/alltests.py --dburi='mssql://
ryant:[EMAIL PROTECTED]/testdb' --verbose --coverage | tee -a sqlaut.log

Forgive my ignorance, but would the .coverage file be enough, or would
you need the log file as well?  Where would you prefer I send any
attachments?

I'll gladly re-run the unit tests if need be.

Regards,
Ryan

On 6 Jun, 15:29, Paul Johnston [EMAIL PROTECTED] wrote:
 Hi Ryan,

 To my knowledge you're the first to be using SA with pyODBC on Unix. For now
 you will have to edit the source code of mssql.py to change the DSN.

 Longer term, we will have to have a think about how to do this
 automatically. Perhaps we could look at the platform and if it's Windows use
 DRIVER={SQL Server} and otherwise use DRIVER={}.

 If you get a chance, I'd be very interested to see the output of the unit
 tests on Unix.

 Regards,

 Paul

 On 6/6/07, scurra [EMAIL PROTECTED] wrote:



  Hi

  I cannot connect to SQL Server 2005 (Express) from my linux box using
  the suggested SA method.  I can connect to SQL Server using sqsh which
  indicates that freetds and unixodbc are correctly set up.

  I can also connect using pyodbc. Although, only with the DSN method or
  with DRIVER={}.  For details, see:
   http://sourceforge.net/forum/message.php?msg_id=4348161

  This is probably more of a pyodbc problem (or a problem created by me
  (wouldn't be the first time)) so this post is more of an FYI and a
  WTF? than a bug report ;-)

  The following thread contains reference to SA's pyodbc connection
  mechanism:

 http://groups.google.co.uk/group/sqlalchemy/browse_thread/thread/b590...

  It seems that setting the DRIVER keyword to {SQL Server} is the method
  employed by SA:

  def make_connect_string(self, keys):
   connectors = [Driver={SQL Server}]
   ...

  So, I guess my questions are how I could get SA to use the DSN I have
  defined in /etc/freetds/freetds.conf and /etc/odbc*.conf.  Or, how I
  could force SA to use DRIVER={} instead of Driver={SQL Server}. (Note
  that, for me, the DRIVER keyword is case-sensitive when using pyodbc
  directly.)

  Thanks for a kick-ass database doo-hicky (I am still getting to know
  all the TLA's)

  Cheers,
  Ryan





[sqlalchemy] Re: In Turbogears, using the same transaction for mapped objects and direct operations

2007-06-06 Thread Marco Mariani

Sanjay wrote:

 BTW, the session transaction is stored in
 cherrypy.request.sa_transaction.

Yes, but it's been added recently.

 Does this help simplify the statements?
   

Transaction instances have a connection() property.

So, cherrypy.request.sa_transaction.connection should work.





[sqlalchemy] Re: SA and pyodbc connections

2007-06-06 Thread Rick Morrison
I'm interested in the results too, as well as the ODBC config. I'm still in
the process of setting up a buildbot slave on Ubuntu that's going to be
running pyodbc as well. I normally use pymssql, and it'll be my first
serious go with pyodbc.

If it's OK with the list, maybe you can post the results right here; if not,
then please copy me on the attachments if you will.

Thanks,
Rick



On 6/6/07, Paul Johnston [EMAIL PROTECTED] wrote:

 Hi Ryan,

 To my knowledge you're the first to be using SA with pyODBC on Unix. For
 now you will have to edit the source code of mssql.py to change the DSN.

 Longer term, we will have to have a think about how to do this
 automatically. Perhaps we could look at the platform and if it's Windows use
 DRIVER={SQL Server} and otherwise use DRIVER={}.

 If you get a chance, I'd be very interested to see the output of the unit
 tests on Unix.

 Regards,

 Paul



 On 6/6/07, scurra [EMAIL PROTECTED] wrote:
 
 
  Hi
 
  I cannot connect to SQL Server 2005 (Express) from my linux box using
  the suggested SA method.  I can connect to SQL Server using sqsh which
  indicates that freetds and unixodbc are correctly set up.
 
  I can also connect using pyodbc. Although, only with the DSN method or
  with DRIVER={}.  For details, see:
 http://sourceforge.net/forum/message.php?msg_id=4348161
 
  This is probably more of a pyodbc problem (or a problem created by me
  (wouldn't be the first time)) so this post is more of an FYI and a
  WTF? than a bug report ;-)
 
  The following thread contains reference to SA's pyodbc connection
  mechanism:

  http://groups.google.co.uk/group/sqlalchemy/browse_thread/thread/b5909dceb2178a1d
 
  It seems that setting the DRIVER keyword to {SQL Server} is the method
  employed by SA:
 
  def make_connect_string(self, keys):
   connectors = [Driver={SQL Server}]
   ...
 
 
  So, I guess my questions are how I could get SA to use the DSN I have
  defined in /etc/freetds/freetds.conf and /etc/odbc*.conf.  Or, how I
  could force SA to use DRIVER={} instead of Driver={SQL Server}. (Note
  that, for me, the DRIVER keyword is case-sensitive when using pyodbc
  directly.)
 
  Thanks for a kick-ass database doo-hicky (I am still getting to know
  all the TLA's)
 
  Cheers,
  Ryan
 
 
 

 





[sqlalchemy] Re: SA and pyodbc connections

2007-06-06 Thread Paul Johnston

Hi,

src/sqlalchemy$ python test/alltests.py --dburi='mssql://
ryant:[EMAIL PROTECTED]/testdb' --verbose --coverage | tee -a sqlaut.log
  

Try this command line:

python test/alltests.py --dburi='mssql://ryant:[EMAIL PROTECTED]/testdb?text_as_varchar=1' > sqlaut.log 2>&1

If you could mail the output off list to both myself and Rick Morrison, 
that would be magic.

Paul




[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth

Yeah.  My problem has been solved by altering the tables...
skiboots.c.size is now skiboots.c.skiboot_size, and skis.c.size is now
skis.c.ski_size.

Is there a way I could avoid that, making use of the use_labels=True
parameter on select()?  I've been trying to work out how to rewrite my
item_join to do that, so that identically named columns in child tables
would not collide.





[sqlalchemy] Re: Query generation in 0.3.8 is broken?

2007-06-06 Thread [EMAIL PROTECTED]

I have just submitted ticket #523; there is a minimalistic code
snippet which reproduces the error.
I am sorry that I did not send the working example right in ticket
#592, but I could not reproduce it at the time.

But now I did (see below, or ticket #523):

The problem appears when mapper, relations and limit/offset come
together

#!/usr/bin/env python
from sqlalchemy import *
import sys, datetime

#init db
#global_connect('mysql://test:[EMAIL PROTECTED]/test')
#engine = create_engine('mysql://test:[EMAIL PROTECTED]/test')

global_connect('sqlite:///tutorial.db')
engine = create_engine('sqlite:///tutorial.db')

project_t = Table('prj',
  Column('id',Integer,
primary_key=True),
  Column('title', Unicode(100),
nullable=False),
  mysql_engine='InnoDB')


task_t = Table('task',
  Column('id',Integer,
primary_key=True),
  Column('status_id', Integer,
ForeignKey('task_status.id'), nullable=False),
  Column('title', Unicode(100),
nullable=False),
  Column('task_type_id',  Integer ,
ForeignKey('task_type.id'), nullable=False),
  Column('prj_id',Integer ,
ForeignKey('prj.id'), nullable=False),
  mysql_engine='InnoDB')

task_status_t = Table('task_status',
Column('id',Integer,
primary_key=True),
mysql_engine='InnoDB')

task_type_t = Table('task_type',
Column('id',   Integer,primary_key=True),
mysql_engine='InnoDB')

message_t  = Table('msg',
Column('id', Integer,  primary_key=True),
Column('posted',DateTime, nullable=False,
index=True, default=func.current_timestamp()),
Column('type_id',   Integer,
ForeignKey('msg_type.id'), nullable=False, index=True),
Column('from_uid',  Integer, nullable=False,
index=True),
Column('to_uid',Integer, nullable=False,
index=True),
Column('task_id',   Integer,
ForeignKey('task.id'), nullable=True,  index=True),
Column('time_est_days', Integer, nullable=True),
Column('subject',   Unicode(60), nullable=True),
Column('body',  Unicode, nullable=True),
Column('new',   Boolean, nullable=False,
default=True),
Column('removed_by_sender',  Boolean,
nullable=False, default=False),
Column('removed_by_recipient',   Boolean,
nullable=False, default=False),
mysql_engine='InnoDB')

message_type_t = Table('msg_type',
Column('id',Integer,
primary_key=True),
Column('name',  Unicode(20),
nullable=False, unique=True),
Column('display_name',  Unicode(20),
nullable=False, unique=True),
mysql_engine='InnoDB')

class Task(object):pass

class Task_Type(object):pass

class Message(object):pass

class Message_Type(object):pass

tsk_cnt_join = outerjoin(project_t, task_t,
task_t.c.prj_id==project_t.c.id)

ss = select([project_t.c.id.label('prj_id'),
func.count(task_t.c.id).label('tasks_number')],
from_obj=[tsk_cnt_join],
group_by=[project_t.c.id]).alias('prj_tsk_cnt_s')
j = join(project_t, ss, project_t.c.id == ss.c.prj_id)

Task_Type.mapper = mapper(Task_Type, task_type_t)


Task.mapper = mapper( Task, task_t,
  properties=dict(type=relation(Task_Type,
lazy=False),
 ))

Message_Type.mapper = mapper(Message_Type, message_type_t)

Message.mapper = mapper(Message, message_t,
 properties=dict(type=relation(Message_Type,
lazy=False, uselist=False),
 ))

tsk_cnt_join = outerjoin(project_t, task_t,
task_t.c.prj_id==project_t.c.id)
ss = select([project_t.c.id.label('prj_id'),
func.count(task_t.c.id).label('tasks_number')],
from_obj=[tsk_cnt_join],
group_by=[project_t.c.id]).alias('prj_tsk_cnt_s')
j = join(project_t, ss, project_t.c.id == ss.c.prj_id)

j  = outerjoin( task_t, message_t, task_t.c.id==message_t.c.task_id)
jj = select([ task_t.c.id.label('task_id'),
  func.count(message_t.c.id).label('props_cnt')],
  from_obj=[j], group_by=[task_t.c.id]).alias('prop_c_s')
jjj = join(task_t, jj, task_t.c.id == jj.c.task_id)

class cls(object):pass

props =dict(type=relation(Task_Type, lazy=False))
cls.mapper = mapper( cls, jjj, properties=props)


default_metadata.engine.echo = True
default_metadata.drop_all()
default_metadata.create_all()

session = create_session()

engine.execute("INSERT INTO prj (title) values('project 1');")
engine.execute("INSERT INTO task_status (id) values(1);")
engine.execute("INSERT INTO task_type(id) values(1);")
engine.execute(INSERT INTO task 

[sqlalchemy] Re: Column name mapping problem in 0.3.7

2007-06-06 Thread Graham Stratton

I'm bringing this old thread up because I'm still having the same
issue with 0.3.8.  In order to use mssql I have to add

 def max_identifier_length(self):
 return 30

to the pymssql dialect.
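
Until something lands on the trunk, a runtime monkey-patch avoids editing the
installed file; a sketch, assuming the 0.3-era dialect is importable as
sqlalchemy.databases.mssql and the class is named MSSQLDialect (names may
differ in your checkout):

    from sqlalchemy.databases import mssql

    # pymssql's underlying DB-Library caps identifier names at 30 characters
    mssql.MSSQLDialect.max_identifier_length = lambda self: 30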

I also find that I need to set has_sane_rowcount=False (as I have had
to with every release).

Is anyone else using pymssql? Do you have the same problems?  Should
these changes be made on the trunk?

Thanks,

Graham
On May 1, 7:13 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 it is max_identifier_length() on Dialect.

 ive also gone and figured out why it is hard to separate the max
 length of columns vs. that of labels...its because of some issues
 that arise with some auto-labeling that happens inside of
 ansisql.py... so its fortunate i dont have to get into that.

 On May 1, 2007, at 12:57 PM, Rick Morrison wrote:

  The underlying DBlib limits *all* identifier names, including
  column names to 30 chars anyway, so no issue there.

  Where does the character limit go in the dialect? Can I follow
  Oracle as an example?

  On 5/1/07, Michael Bayer [EMAIL PROTECTED] wrote:

  On May 1, 2007, at 11:18 AM, Rick Morrison wrote:

   The label-truncation code is fine. The issue isn't SA. It's the
   DBAPI that pymssql rides on top of...identifier limit is 30 chars,
   is deprecated by Microsoft, it will never be fixed.

   Try pyodbc, which has no such limitation.

  OK well, we should put the 30-char limit into pymssql's dialect.
  however, the way the truncation works right now, its going to chop
  off all the column names too...which means unless i fix that, pymssql
  cant be used with any columns over 30 chars in size.





[sqlalchemy] Re: Column name mapping problem in 0.3.7

2007-06-06 Thread Rick Morrison
Hi Graham,

There's a good chance that only you and I are using pymssql, and I don't
have the long-identifiers problem, so it kind of dropped through the
cracks, sorry.

I've checked in the 30-character thing, but I've left off the sane_rowcount
for now. I had run into issues with that back in March, and I ended up
patching pymssql to fix the problem rather than set sane_rowcount to False.
Can't remember why now, I'm currently running our local test suite which
should remind me.

Rick






On 6/6/07, Graham Stratton [EMAIL PROTECTED]  wrote:


 I'm bringing this old thread up because I'm still having the same
 issue with 0.3.8.  In order to use mssql I have to add

  def max_identifier_length(self):
  return 30

 to the pymssql dialect.

 I also find that I need to set has_sane_rowcount=False (as I have had
 to with every release).

 Is anyone else using pymssql? Do you have the same problems?  Should
 these changes be made on the trunk?

 Thanks,

 Graham
 On May 1, 7:13 pm, Michael Bayer [EMAIL PROTECTED] wrote:
  it is max_identifier_length() on Dialect.
 
  ive also gone and figured out why it is hard to separate the max
  length of columns vs. that of labels...its because of some issues
  that arise with some auto-labeling that happens inside of
  ansisql.py... so its fortunate i dont have to get into that.
 
  On May 1, 2007, at 12:57 PM, Rick Morrison wrote:
 
   The underlying DBlib limits *all* identifier names, including
   column names to 30 chars anyway, so no issue there.
 
   Where does the character limit go in the dialect? Can I follow
   Oracle as an example?
 
   On 5/1/07, Michael Bayer [EMAIL PROTECTED] wrote:
 
   On May 1, 2007, at 11:18 AM, Rick Morrison wrote:
 
The label-truncation code is fine. The issue isn't SA. It's the
DBAPI that pymssql rides on top of...identifier limit is 30 chars,
is deprecated by Microsoft, it will never be fixed.
 
Try pyodbc, which has no such limitation.
 
   OK well, we should put the 30-char limit into pymssql's dialect.
   however, the way the truncation works right now, its going to chop
   off all the column names too...which means unless i fix that, pymssql
   cant be used with any columns over 30 chars in size.


 





[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Michael Bayer


On Jun 6, 2007, at 11:58 AM, Eric Ongerth wrote:


 So then I thought: maybe  I just need to reflect the skiboots table
 and override the size column to the desired type?  That would make
 sense... so I tried it, using the same script as above but adding the
 line autoload=True as the final clause in each Table definition.

 Now i'm getting a different error:
 sqlalchemy.exceptions.ArgumentError: Can't determine join between
 'items' and 'skis'; tables have more than one foreign key constraint
 relationship between them. Please specify the 'onclause' of this join
 explicitly.

 Ok, so my item_join definition was too loose.
 Changed the ski and skiboot lines in it to read:
 'ski':join(items, skis, items.c.id==skis.c.id),
 'skiboot':join(items, skiboots, items.c.id==skiboots.c.id)

 ...and still get the same error.  How much more specific can I get
 with my onclause?  In each case the items table and each of its
 children are only joined by a single column, 'id'.


the error is probably not raised there, maybe in mapper compile when  
it tries to join the tables together as part of the ordinary joined  
table inheritance.  try specifying inherit_condition on the 'skis'  
mapper.
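
For reference, mirroring what the original script already does for the
skiboots mapper, the suggestion amounts to something like this (a sketch
reusing the names from that script):

    ski_mapper = mapper(Ski, skis,
                        inherits=item_mapper,
                        polymorphic_identity='ski',
                        # spell out the join explicitly, as done for skiboots
                        inherit_condition=items.c.id == skis.c.id)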




[sqlalchemy] Re: Query generation in 0.3.8 is broken?

2007-06-06 Thread Michael Bayer
do me several huge favors:

- do not redefine j three times, ss and tsk_cnt_join two
times, etc.  Particularly, i have no idea which j join you actually
would like to use.  define the individual clauses youd like to use
once, then use that same instance.  to SQLAlchemy, two identical
clauses are *not* interchangeable since it uses identity in many
cases with selectables.

- do *not* use orm.clear_mapper().  this method is gone in the next  
version.  use clear_mappers(), and redefine *all* mappers for each  
test you are creating.   mappers have lots of dependencies on each  
other which are compiled when any of the mappers are first used.   
there is no capability to surgically remove a mapper from that  
compiled structure, you have to start over again.

- do *not* say, mapper.properties = {}.  once a mapper is compiled,  
the properties argument is not even looked at.

once all that is done, then ill have a better idea of what youre  
actually trying to do (if it doesnt actually fix the problem).

On Jun 6, 2007, at 12:58 PM, [EMAIL PROTECTED] wrote:

 #!/usr/bin/env python
 from sqlalchemy import *
 import sys, datetime

 #init db
 #global_connect('mysql://test:[EMAIL PROTECTED]/test')
 #engine = create_engine('mysql://test:[EMAIL PROTECTED]/test')

 global_connect('sqlite:///tutorial.db')
 engine = create_engine('sqlite:///tutorial.db')

 project_t = Table('prj',
   Column('id',Integer,
 primary_key=True),
   Column('title', Unicode(100),
 nullable=False),
   mysql_engine='InnoDB')


 task_t = Table('task',
   Column('id',Integer,
 primary_key=True),
   Column('status_id', Integer,
 ForeignKey('task_status.id'), nullable=False),
   Column('title', Unicode(100),
 nullable=False),
   Column('task_type_id',  Integer ,
 ForeignKey('task_type.id'), nullable=False),
   Column('prj_id',Integer ,
 ForeignKey('prj.id'), nullable=False),
   mysql_engine='InnoDB')

 task_status_t = Table('task_status',
 Column('id',Integer,
 primary_key=True),
 mysql_engine='InnoDB')

 task_type_t = Table('task_type',
 Column('id',   Integer,primary_key=True),
 mysql_engine='InnoDB')

 message_t  = Table('msg',
 Column('id', Integer,  primary_key=True),
 Column('posted',DateTime, nullable=False,
 index=True, default=func.current_timestamp()),
 Column('type_id',   Integer,
 ForeignKey('msg_type.id'), nullable=False, index=True),
 Column('from_uid',  Integer, nullable=False,
 index=True),
 Column('to_uid',Integer, nullable=False,
 index=True),
 Column('task_id',   Integer,
 ForeignKey('task.id'), nullable=True,  index=True),
 Column('time_est_days', Integer, nullable=True),
 Column('subject',   Unicode(60), nullable=True),
 Column('body',  Unicode, nullable=True),
 Column('new',   Boolean, nullable=False,
 default=True),
 Column('removed_by_sender',  Boolean,
 nullable=False, default=False),
 Column('removed_by_recipient',   Boolean,
 nullable=False, default=False),
 mysql_engine='InnoDB')

 message_type_t = Table('msg_type',
 Column('id',Integer,
 primary_key=True),
 Column('name',  Unicode(20),
 nullable=False, unique=True),
 Column('display_name',  Unicode(20),
 nullable=False, unique=True),
 mysql_engine='InnoDB')

 class Task(object):pass

 class Task_Type(object):pass

 class Message(object):pass

 class Message_Type(object):pass

 tsk_cnt_join = outerjoin(project_t, task_t,
 task_t.c.prj_id==project_t.c.id)

 ss = select([project_t.c.id.label('prj_id'),
 func.count(task_t.c.id).label('tasks_number')],
 from_obj=[tsk_cnt_join],
 group_by=[project_t.c.id]).alias('prj_tsk_cnt_s')
 j = join(project_t, ss, project_t.c.id == ss.c.prj_id)

 Task_Type.mapper = mapper(Task_Type, task_type_t)


 Task.mapper = mapper( Task, task_t,
   properties=dict(type=relation(Task_Type,
 lazy=False),
  ))

 Message_Type.mapper = mapper(Message_Type, message_type_t)

 Message.mapper = mapper(Message, message_t,
  properties=dict(type=relation(Message_Type,
 lazy=False, uselist=False),
  ))

 tsk_cnt_join = outerjoin(project_t, task_t,
 task_t.c.prj_id==project_t.c.id)
 ss = select([project_t.c.id.label('prj_id'),
 func.count(task_t.c.id).label('tasks_number')],
 from_obj=[tsk_cnt_join],
 

[sqlalchemy] Re: SA and pyodbc connections

2007-06-06 Thread scurra

Hi Paul

 src/sqlalchemy$ python test/alltests.py --dburi='mssql://
 ryant:[EMAIL PROTECTED]/testdb' --verbose --coverage | tee -a sqlaut.log

 Try this command line:

 python test/alltests.py
 --dburi='mssql://ryant:[EMAIL PROTECTED]/testdb?text_as_varchar=1'
 > sqlaut.log 2>&1

 If you could mail the output off list to both myself and Rick Morrison,
 that would be magic.

On its way.

Cheers,
Ryan





[sqlalchemy] Re: Query generation in 0.3.8 is broken?

2007-06-06 Thread Dmitry Zuikov

Well,

Sorry, I just copy-pasted it from real code while trying to get a minimal
example. I was in a rush.

I do not really use clear_mapper; I just put it there to demonstrate how
it fails.

Here is the cleaner code (or you may download it from
http://dzuikov2.firstvds.ru/qqq.py)


#!/usr/bin/env python
from sqlalchemy import *
import sys

#init db
#global_connect('mysql://test:[EMAIL PROTECTED]/test')
#engine = create_engine('mysql://test:[EMAIL PROTECTED]/test')


global_connect('sqlite:///tutorial.db')
engine = create_engine('sqlite:///tutorial.db')

project_t = Table('prj',
  Column('id',Integer,
primary_key=True),
  Column('title', Unicode(100),
nullable=False),
  mysql_engine='InnoDB')


task_t = Table('task',
  Column('id',Integer,
primary_key=True),
  Column('status_id', Integer,
ForeignKey('task_status.id'), nullable=False),
  Column('title', Unicode(100),
nullable=False),
  Column('task_type_id',  Integer ,
ForeignKey('task_type.id'), nullable=False),
  Column('prj_id',Integer ,
ForeignKey('prj.id'), nullable=False),
  mysql_engine='InnoDB')

task_status_t = Table('task_status',
Column('id',Integer,
primary_key=True),
mysql_engine='InnoDB')

task_type_t = Table('task_type',
Column('id',   Integer,primary_key=True),
mysql_engine='InnoDB')

message_t  = Table('msg',
Column('id', Integer,  primary_key=True),
Column('posted',DateTime, nullable=False,
index=True, default=func.current_timestamp()),
Column('type_id',   Integer,
ForeignKey('msg_type.id'), nullable=False, index=True),
Column('from_uid',  Integer, nullable=False,
index=True),
Column('to_uid',Integer, nullable=False,
index=True),
Column('task_id',   Integer,
ForeignKey('task.id'), nullable=True,  index=True),
Column('time_est_days', Integer, nullable=True),
Column('subject',   Unicode(60), nullable=True),
Column('body',  Unicode, nullable=True),
Column('new',   Boolean, nullable=False,
default=True),
Column('removed_by_sender',  Boolean,
nullable=False, default=False),
Column('removed_by_recipient',   Boolean,
nullable=False, default=False),
mysql_engine='InnoDB')

message_type_t = Table('msg_type',
Column('id',Integer,
primary_key=True),
Column('name',  Unicode(20),
nullable=False, unique=True),
Column('display_name',  Unicode(20),
nullable=False, unique=True),
mysql_engine='InnoDB')

class Task(object):pass

class Task_Type(object):pass

class Message(object):pass

class Message_Type(object):pass


Task_Type.mapper = mapper(Task_Type, task_type_t)


Task.mapper = mapper( Task, task_t,
  properties=dict(type=relation(Task_Type,
lazy=False),
 ))

Message_Type.mapper = mapper(Message_Type, message_type_t)

Message.mapper = mapper(Message, message_t,
 properties=dict(type=relation(Message_Type,
lazy=False, uselist=False),
 ))

# join with messages
j  = outerjoin( task_t, message_t, task_t.c.id==message_t.c.task_id)

# get count
jj = select([ task_t.c.id.label('task_id'),
  func.count(message_t.c.id).label('props_cnt')],
  from_obj=[j], group_by=[task_t.c.id]).alias('prop_c_s')

# etc
jjj = join(task_t, jj, task_t.c.id == jj.c.task_id)

class cls(object):pass

props =dict(type=relation(Task_Type, lazy=False))
cls.mapper = mapper( cls, jjj)


default_metadata.engine.echo = True
default_metadata.drop_all()
default_metadata.create_all()

session = create_session()

engine.execute("INSERT INTO prj (title) values('project 1');")
engine.execute("INSERT INTO task_status (id) values(1);")
engine.execute("INSERT INTO task_type(id) values(1);")
engine.execute("INSERT INTO task (title, task_type_id, status_id, prj_id) values('task 1',1,1,1);")

 #works
cls.mapper.properties = {}
for t in session.query(cls.mapper).limit(10).offset(0).list():
print t.id, t.title, t.props_cnt

 #works
for t in select([jjj], from_obj=[jjj],
   limit=10, offset=0).execute(bind_to=session.bind_to):
print t.id, t.title, t.props_cnt


#now fail
orm.clear_mappers()

Task_Type.mapper = mapper(Task_Type, task_type_t)


Task.mapper = mapper( Task, task_t,
  properties=dict(type=relation(Task_Type,
lazy=False),
 ))

Message_Type.mapper = mapper(Message_Type, message_type_t)

Message.mapper = 

[sqlalchemy] Re: Mapping existing structure of a database

2007-06-06 Thread Paul Johnston

Hi,

 What kind of overhead is associated with using the autoload flag?

Quite high, although not a problem if a long running app only does it at 
startup.

 Is there a way to dump the structure of a database to a file and 
 import this as a kind of module?

There is, although it's very basic:
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/AutoCode

Paul
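
A rough sketch of the reflection side of that idea (the URI and table name
are placeholders, and this is not the wiki recipe itself):

from sqlalchemy import BoundMetaData, Table

meta = BoundMetaData('sqlite:///existing.db')   # placeholder URI for an existing database
users = Table('users', meta, autoload=True)     # reflect the structure once, at startup
print repr(users)                               # a crude starting point for a hand-maintained module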




[sqlalchemy] group_by and column labels

2007-06-06 Thread Huy Do

Hi,

I am trying to reuse my column list in my GROUP BY clause, but some of my 
columns use label(), which causes an SQL syntax error because of the 
"column AS label" rendered in the GROUP BY clause. Is it possible to get 
group_by to use only the label side of a column?

e.g. (this doesn't work because of the labels):
group_by = [
service.date,
service.client_id,
service_type.type_code,
service.service_code,
client.code.label('client_code'),
client.name.label('client_name'),
func.coalesce(func.nullif(client_rate.description, ''),  
service_type.description).label('service_description'),
service.rate_amt
]
columns = group_by + [
func.sum(service.quantity).label('quantity'),
func.sum(service.service_amt).label('service_amt')
]
  
s = select(columns, criteria,
from_obj=[outerjoin(db.service_table, db.client_rate_table), 
db.job_table, db.client_table, db.service_type_table],
group_by=group_by,
order_by=[service.date, client.name, service.service_code]
)

Many thanks,

Huy
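
One workable pattern, sketched with the same (assumed) column names: keep the
unlabeled expressions for the GROUP BY list and apply label() only when
building the SELECT column list, so the rendered GROUP BY never contains an
AS clause:

svc_description = func.coalesce(func.nullif(client_rate.description, ''),
                                service_type.description)

# unlabeled expressions drive the GROUP BY...
group_by = [service.date, service.client_id, service_type.type_code,
            service.service_code, client.code, client.name,
            svc_description, service.rate_amt]

# ...and the labels appear only in the column list
columns = [service.date, service.client_id, service_type.type_code,
           service.service_code,
           client.code.label('client_code'),
           client.name.label('client_name'),
           svc_description.label('service_description'),
           service.rate_amt,
           func.sum(service.quantity).label('quantity'),
           func.sum(service.service_amt).label('service_amt')]

s = select(columns, criteria,
           from_obj=[outerjoin(db.service_table, db.client_rate_table),
                     db.job_table, db.client_table, db.service_type_table],
           group_by=group_by,
           order_by=[service.date, client.name, service.service_code])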




[sqlalchemy] Re: Query generation in 0.3.8 is broken?

2007-06-06 Thread Michael Bayer

The queries generated in 0.3.6 and 0.3.8 are identical except for an
anonymous alias name. They are also both incorrect, and it's only
because 0.3.6 is less accurate about targeting Column objects in
result sets that it works. The task_id column which you've
labeled inside of jj does not appear, due to a column targeting bug
that is present in all versions and resolved in changeset 2708. 0.3.6
just grabs the same column twice, which is wrong as well.

The mapper in your example considers the primary key of cls to be
the composite of task.id and the task_id label you've defined inside
of jj:

illustration of the primary key:

cls.mapper = mapper( cls, jjj, properties=props)
cls.mapper.compile()
print cls.mapper.primary_key

OrderedSet([Column('id',Integer(),primary_key=True,nullable=False),  
Column('task_id',Integer(),primary_key=True,nullable=False)])

which you probably should define as:

cls.mapper = mapper( cls, jjj, properties=props, primary_key= 
[jjj.c.task_id])

The bug also prevented the primary_key setting above from compiling
properly in the mapper.

Also, here's an alternate mapping that avoids the ambiguity presented by
mapping to a join:

j = outerjoin(task_t, message_t, task_t.c.id == message_t.c.task_id)
jj = select([task_t.c.id.label('task_id'),
             func.count(message_t.c.id).label('props_cnt')],
            from_obj=[j], group_by=[task_t.c.id])

jjj = join(task_t, jj, task_t.c.id == jj.c.task_id).select().alias('hi')

mapper(cls, jjj, primary_key=[jjj.c.task_id])



On Jun 6, 2007, at 3:55 PM, Dmitry Zuikov wrote:


 Once all that is done, I'll have a better idea of what you're
 actually trying to do (if it doesn't actually fix the problem).

 Okay, I have posted the cleaned-up code above (or here:
 http://dzuikov2.firstvds.ru/qqq.py).
 There are some comments about it.

 What am I trying to do? Something simple: I have some related tables
 with mappers, and I need a query which shows some statistics (group
 functions involved). So I created a new mapper against the query.
 That's all. The query and the mapper work OK in 0.3.6 - you can run
 this code to check it out. In 0.3.8 it works with properties and
 relations but without offset/limit, or with offset/limit but without
 properties. The query seems correct - it runs fine without the mapper.

 I do not use clear_mapper or even clear_mappers in real code.


 





[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Michael Bayer


On Jun 6, 2007, at 12:49 PM, Eric Ongerth wrote:


 Yeah.  My problem has been solved by altering the tables...
 skiboots.c.size is now skiboots.c.skiboot_size, and skis.c.size is now
 skis.c.ski_size.

 Is there a way I could avoid that, making use of the use_labels=True
 parameter on select()?  I've been trying to work out how to rewrite my
 item_join to do that, so that identically named columns in child tables
 would not collide.

The polymorphic_union isn't going to work so great with use_labels.
That's a very problematic function as far as column naming conventions
go, and it's not very open to trickery (which is why you can, of course,
build your own UNION manually).
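
For example, a hand-built union along these lines; a rough sketch only,
assuming the items/skis/skiboots tables from earlier in the thread and
casting both size columns to a common type (the names are illustrative):

from sqlalchemy import union_all, select, cast, literal_column, Unicode

item_union = union_all(
    select([items.c.id,
            cast(skis.c.ski_size, Unicode(20)).label('size'),       # force a common type...
            literal_column("'ski'").label('item_type')],
           from_obj=[items.join(skis, items.c.id == skis.c.id)]),
    select([items.c.id,
            cast(skiboots.c.skiboot_size, Unicode(20)).label('size'),  # ...so the UNION branches match
            literal_column("'skiboot'").label('item_type')],
           from_obj=[items.join(skiboots, items.c.id == skiboots.c.id)])
).alias('item_join')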




[sqlalchemy] Postgres MD Auth

2007-06-06 Thread Techniq

Any reason why I can authenticate with psql but not with the
sqlalchemy.dburi in Pylons?  I can see that the user, password, host, and port are all passed
correctly.  These are the same parameters I use with psql to connect.
I'm connecting from the same machine in both cases.

psql -U username -h ip address databasename
Works.

sqlachemy.dburi = postgrs://username:password@ip address:5432/
databasename
Doesn't work.

Ideas?
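
For comparison, a minimal connection check with placeholder values (not the
poster's real settings); note that SQLAlchemy 0.3 spells the Postgres scheme
'postgres':

from sqlalchemy import create_engine

# placeholder credentials/host; mirrors the dburi form Pylons hands to SQLAlchemy
engine = create_engine('postgres://username:password@192.0.2.10:5432/databasename')
engine.connect()    # raises immediately if the server rejects the login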





[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth

Thanks for your responses, Mike.

