[sqlalchemy] changing implicit association's row

2008-12-10 Thread az

hi.
Can I hook somewhere around implicit associative collections
(secondary=...), so I can influence the row being inserted?
E.g. I have a column 'disabled' in the secondary table, and it has to be
populated at runtime...
Something like someX.items.append(y, disabled=True), meaning the
link exists but is disabled.

One possibility I see is to have a callable as the column's default
and somehow convince it to put the proper value in...

Or maybe that's not possible, given the order in which SA does things... I
guess inserting the row happens very, very late.

Is an explicit association object the only real possibility?
I'm halfway there, having a semi-explicit one - it has a
readonly mapper + queries but still acts as a secondary table in the
relations.

svil
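[Editor's note: for reference, a minimal sketch of the explicit association-object pattern the poster mentions, written against a later SQLAlchemy API (declarative_base, relationship); all class and column names here are hypothetical, not from the original post:]

```python
from sqlalchemy import Boolean, Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)

class Link(Base):
    # Explicit association row instead of a bare secondary table,
    # so extra columns like 'disabled' can be set per link.
    __tablename__ = 'link'
    parent_id = Column(Integer, ForeignKey('parent.id'), primary_key=True)
    item_id = Column(Integer, ForeignKey('item.id'), primary_key=True)
    disabled = Column(Boolean, default=False, nullable=False)
    item = relationship(Item)

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    links = relationship(Link, cascade='all, delete-orphan')

    def append_item(self, item, disabled=False):
        # The analogue of someX.items.append(y, disabled=True)
        self.links.append(Link(item=item, disabled=disabled))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

p, y = Parent(), Item()
p.append_item(y, disabled=True)
session.add(p)
session.commit()

link = session.query(Link).one()
print(link.disabled)  # True
```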

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Pickling a restriction

2008-12-10 Thread Joril

Hi everyone!
Is there a way to pickle a Query restriction?
If I try this:

import cPickle

restr = (MyDataClass.intproperty == 1)
cPickle.dumps(restr, 1)

I get a:

cPickle.PicklingError: Can't pickle class
'sqlalchemy.orm.properties.ColumnComparator': attribute lookup
sqlalchemy.orm.properties.ColumnComparator failed

Many thanks for your time!



[sqlalchemy] problem with flush

2008-12-10 Thread chingi

hi all,
    I am building a multi-threaded web server which does a lot of
saves and updates to the database. The SQLAlchemy version I am using is
0.4.4, Python version 2.5, and MySQL server 4.1.

    The problem is that when I save or update and then flush, the change
is not reflected in the database immediately, due to which a select query
done by another thread gets the old data.
    I don't want to use row-level locks, so can anyone suggest
some other way?

thanks in advance

Regards
naykodi






[sqlalchemy] Re: case_sensitive

2008-12-10 Thread Michael Bayer

OK you might have to use the DDL() construct instead, if Index doesn't  
support func() yet.


On Dec 10, 2008, at 2:19 AM, jose wrote:


 I tried it, as you suggested, Michael...
Index('valuta_desc_uniq', func.lower(valuta.c.descrizione),
 unique=True)

  File "/usr/lib/python2.4/site-packages/sqlalchemy/schema.py", line
 1045, in __init__
    self._init_items(*columns)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/schema.py", line
 1052, in _init_items
    self.append_column(column)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/schema.py", line
 1065, in append_column
    self._set_parent(column.table)
 AttributeError: '_Function' object has no attribute 'table'

 Michael Bayer wrote:

 I think this would involve sending func.lower(table.c.somecolumn) to
 the Index construct.


 On Dec 9, 2008, at 5:50 AM, jo wrote:



 Maybe I'm using 'case_sensitive' in the wrong way.
 Here's what I want to achieve:

 create unique index myname on mytable (lower(mycolumn));

 How can I create it in SQLAlchemy?

 j


 Glauco ha scritto:


 jo ha scritto:



 Hi all,

 Trying to migrate from 0.3.10 to 0.5 I have this error:

 sqlalchemy.exc.ArgumentError: Unknown UniqueConstraint  
 argument(s):
 'case_sensitive'

 how can I define the case_sensitive=True for a unique constraint?

 thank you,
 j




 http://www.sqlalchemy.org/trac/browser/sqlalchemy/tags/rel_0_4_8/CHANGES


 case_sensitive=(True|False) setting removed from schema items, since
 checking this state added a lot of method call overhead and there
 was no decent reason to ever set it to False. Table and column names
 which are all lower case will be treated as case-insensitive (yes, we
 adjust for Oracle's UPPERCASE style too).


 Glauco



[sqlalchemy] Re: small inheritance problem

2008-12-10 Thread Julien Cigar

On Tue, 2008-12-09 at 12:38 -0500, Michael Bayer wrote:
 
 On Dec 9, 2008, at 5:08 AM, Julien Cigar wrote:
 
 
 
  My question is: do I need to explicitly specify the join condition
  when inheritance is involved? Why is SQLAlchemy able to detect the
  join condition for my Content (line 127) but not for my Folder (line
  128)?
 
 Folder extends Content, and a FolderContent has two foreign key
 references to folder.join(content). I would also collapse
 FolderContent.folder and Folder.items into a single bidirectional
 relation. I have something I might commit today that would improve
 upon that, though.
 
 

Indeed, it works with :

mapper(Folder, table.folder, inherits=Content,
    polymorphic_identity=_get_type_id('folder'),
    properties = {
        'items' : relation(FolderContent, cascade='all, delete-orphan',
            backref=backref('folder', cascade='all, delete-orphan',
                primaryjoin=table.folder.c.content_id ==
                            table.folder_content.c.folder_id
            )
        )
    }
)

thanks,
Julien
-- 
Julien Cigar
Belgian Biodiversity Platform
http://www.biodiversity.be
Université Libre de Bruxelles (ULB)
Campus de la Plaine CP 257
Bâtiment NO, Bureau 4 N4 115C (Niveau 4)
Boulevard du Triomphe, entrée ULB 2
B-1050 Bruxelles
Mail: [EMAIL PROTECTED]
@biobel: http://biobel.biodiversity.be/person/show/471
Tel : 02 650 57 52





[sqlalchemy] Re: problem with flush

2008-12-10 Thread Michael Bayer


On Dec 10, 2008, at 6:56 AM, chingi wrote:


 hi all,
     I am building a multi-threaded web server which does a lot of
 saves and updates to the database. The SQLAlchemy version I am using is
 0.4.4, Python version 2.5, and MySQL server 4.1.

     The problem is that when I save or update and then flush, the change
 is not reflected in the database immediately, due to which a select query
 done by another thread gets the old data.
     I don't want to use row-level locks, so can anyone suggest
 some other way?


You should make sure the transaction is committed after the flush()
(i.e. session.commit()). In the 0.4 series, how this plays out depends
on whether you are using transactional=True or not.

In general, any web request should be doing a full commit of whatever
data has changed by the end of its lifecycle; race conditions between
individual requests should not be an issue, since we're not dealing
with long-running transactions.
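[Editor's note: the visibility problem described here is ordinary transaction isolation, independent of SQLAlchemy. A stdlib-only sqlite3 sketch illustrates why a second connection sees old data until the first one commits:]

```python
import os
import sqlite3
import tempfile

# Two independent connections to the same database file, standing in
# for two threads with separate sessions.
path = os.path.join(tempfile.mkdtemp(), 'demo.db')
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute('CREATE TABLE t (x INTEGER)')
writer.commit()

writer.execute('INSERT INTO t VALUES (1)')  # like flush(): SQL emitted,
                                            # transaction still open
before = reader.execute('SELECT COUNT(*) FROM t').fetchone()[0]

writer.commit()                             # like session.commit()
after = reader.execute('SELECT COUNT(*) FROM t').fetchone()[0]

print(before, after)  # 0 1
```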




[sqlalchemy] Re: Pickling a restriction

2008-12-10 Thread Joril

Michael Bayer ha scritto:

 There's a new extension in 0.5 which allows pickling of any ORM
 structure called sqlalchemy.ext.serializer.

I see, thanks :)

 Although the error below  shouldn't occur in any case, try out the latest 
 trunk.

Do you mean normal Pickle shouldn't raise the PicklingError in my
case? I tried with a simpler test:

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer
import cPickle

Base = declarative_base()

class Test(Base):
    __tablename__ = 'test'
    id = Column(Integer, primary_key=True)

restr = Test.id == 1
cPickle.dumps(restr, 1)

but it throws the same error (I'm using 0.5rc4, but I tried rev 5457
too)
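[Editor's note: for the record, the 0.5 serializer extension Michael mentions can round-trip such a criterion. A sketch on a later SQLAlchemy, where the extension still exists; exact rendered output may vary:]

```python
from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base
from sqlalchemy.ext.serializer import dumps, loads

Base = declarative_base()

class Test(Base):
    __tablename__ = 'test'
    id = Column(Integer, primary_key=True)

restr = Test.id == 1

# Plain pickle fails on the ORM internals inside the expression; the
# serializer extension replaces those pieces with symbolic tokens...
payload = dumps(restr)

# ...and resolves them again against the metadata on the way back in.
restored = loads(payload, Base.metadata)
print(restored)
```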




[sqlalchemy] TypeError: can't compare offset-naive and offset-aware datetimes

2008-12-10 Thread jose

Hi,

What does this message mean?

self.save_objects(trans, task)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/unitofwork.py", line
1023, in save_objects
    task.mapper.save_obj(task.polymorphic_tosave_objects, trans)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/mapper.py", line 1195,
in save_obj
    mapper._postfetch(connection, table, obj, c, c.last_updated_params())
  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/mapper.py", line 1263,
in _postfetch
    elif v != params.get_original(c.key):
TypeError: can't compare offset-naive and offset-aware datetimes


Both values are of datetime type:

v = 2008-12-10 14:28:17.537583
params.get_original(c.key) = 2008-12-10 16:47:43.209162+01:00

Any idea?

j
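[Editor's note: the traceback is plain Python behavior — one value carries a tzinfo and the other does not, and comparing the two raises. A stdlib sketch on modern Python (the values below mirror the post; on Python 2, != raised as well, which is what _postfetch hit):]

```python
from datetime import datetime, timedelta, timezone

naive = datetime(2008, 12, 10, 14, 28, 17)
aware = datetime(2008, 12, 10, 16, 47, 43,
                 tzinfo=timezone(timedelta(hours=1)))

# Ordering a naive datetime against an aware one raises.
try:
    naive < aware
except TypeError as exc:
    print(exc)  # can't compare offset-naive and offset-aware datetimes

# Comparisons work once both sides agree about timezone-awareness:
print(aware.astimezone(timezone.utc).replace(tzinfo=None) > naive)
```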





[sqlalchemy] Computed RowProxy values no longer work

2008-12-10 Thread GregF

Hi,

I dusted off a project that had been dormant for a few months.
Upgrading to SQLAlchemy 0.5 broke some code where I was inserting
computed values directly into a RowProxy object before I passed the
rows to a template.

I'm getting: 'RowProxy' object has no attribute 'excerpt'

Here is a very much simplified version of what I'm doing that used to
work before the .5 upgrade:
--
posts_table = Table('posts', metadata, autoload=True)
q = posts_table.select()
... some q.where stuff here

r = q.execute()

posts = r.fetchall()

for post in posts:
post.excerpt = post.post_body[0:100]

return posts
-

The code bombs on any computed value, whether slicing is used or not.
I was trying to avoid building a new object, with posts as an array
of dicts, for simplicity and efficiency reasons.

Is there any way to fix this, or will I just have to build my own
object?

Thanks, Greg




[sqlalchemy] Re: Computed RowProxy values no longer work

2008-12-10 Thread Michael Bayer

This is because RowProxy is using __slots__ now, which ironically is
also for efficiency reasons :). There are ways to build mutable
wrapper objects which are pretty efficient in a case like this, such
as a named tuple. My own minimal version of that looks like:

from operator import itemgetter

def named_tuple(names):
    class NamedTuple(tuple):
        pass
    for i, name in enumerate(names):
        setattr(NamedTuple, name, property(itemgetter(i)))
    return NamedTuple

Post = named_tuple(['id', 'post_body', 'excerpt'])
for post in posts:
    post = Post([post.id, post.post_body, post.post_body[0:100]])
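[Editor's note: run end to end, with a stand-in row since no database is involved, the sketch behaves like this:]

```python
from operator import itemgetter

def named_tuple(names):
    # Tuple subclass whose positions are also readable as attributes.
    class NamedTuple(tuple):
        pass
    for i, name in enumerate(names):
        setattr(NamedTuple, name, property(itemgetter(i)))
    return NamedTuple

Post = named_tuple(['id', 'post_body', 'excerpt'])

body = 'lorem ipsum ' * 30            # stand-in for post.post_body
post = Post([7, body, body[0:100]])

print(post.id)            # 7
print(len(post.excerpt))  # 100
```

Python 2.6+ ships collections.namedtuple, which serves the same purpose.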




On Dec 10, 2008, at 11:20 AM, GregF wrote:


 Hi,

 I dusted off a project that had been dormant for a few months.
 Upgrading to sqlalchemy .5 broke some code where I was inserting
 computed values directly into  a rowproxy object, before I passed the
 rows to a template.

 I'm getting 'RowProxy' object has no attribute 'excerpt'

 Here is a very much simplified version of what I'm doing that used to
 work before the .5 upgrade:
 --
posts_table = Table('posts', metadata, autoload=True)
q = posts_table.select()
... some q.where stuff here

r = q.execute()

posts = r.fetchall()

for post in posts:
post.excerpt = post.post_body[0:100]

return posts
 -

 The code bombs on any computed value, whether slicing is used or not.
 I was trying to avoid building a new object like posts as an array
 of dict, for simplicity and efficiency reasons.

 Is there any way to fix this, or will I just have to build my own
 object?

  Thank, Greg



[sqlalchemy] MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread desmaj

MSSQL through pyodbc (and unixODBC and FreeTDS on Debian) connections
are broken for me when I pass a URL to create_engine. I can see that
databases/mssql.py:make_connect_string has been changed to express
host and port as 'server=host,port' instead of
'server=host;port=port' when the driver attribute is 'SQL Server'.
It seems that this change was made to address ticket 1192 [0].

It seems that the way the port should be expressed in the
connection string is related to the underlying driver
implementation and not its name. As I mentioned above, we're using
FreeTDS with unixODBC. The FreeTDS documentation on connection strings
[1] indicates that the port specification is not a special case and
that it should be specified in the 'key=value' format. If, on my dev
box, I change my /etc/odbcinst.ini entry from [SQL Server] to
[FreeTDS] and I specify driver=FreeTDS (per the MSSQL section of the
SA Database Notes [2]), then my connection seems to work.

So clearly, for my configuration, the driver name is configurable and
it doesn't actually say anything about the driver it names. Is the
driver name on windows configurable? I'm guessing that it is not.
Regardless, I think that making decisions based on the driver name is
probably not a good idea.

Maybe the way to handle this is to introduce another connect argument
for pyodbc, but I'm not sure what this should look like. Maybe
'port_style', with possible values of 'windows', 'freetds', and
whatever else people are using? Or maybe the values are 'comma' and
'attribute', or something. I'm terrible at naming things, so somebody
please step in here. If we can get the names resolved, I'll be glad to
put together a patch.

[0] http://www.sqlalchemy.org/trac/ticket/1192
[1] http://www.freetds.org/userguide/odbcconnattr.htm
[2] http://www.sqlalchemy.org/trac/wiki/DatabaseNotes
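[Editor's note: the [FreeTDS] odbcinst.ini entry mentioned above typically looks something like the following; the driver paths are distribution-specific and given only as an illustration:]

```ini
; /etc/odbcinst.ini
[FreeTDS]
Description = FreeTDS driver for MS SQL Server
Driver      = /usr/lib/odbc/libtdsodbc.so
Setup       = /usr/lib/odbc/libtdsS.so
```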



[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Rick Morrison
This has been bandied back and forth for months, and I think it's becoming
clear that having SQLA map dburls to ODBC connection strings is a losing
battle. Yet another connection argument is not sounding very attractive to
me.

Perhaps a simple reductionist policy for ODBC connections would be best. The
user would either specify a DSN, which would be passed to the driver, or
give a full ODBC connection string (presumably via a keyword arg) which
would be used verbatim. Anything else seems to degrade to the kind of
error-prone heuristics we see here.

It is a big change from the current behavior of trying to fit in with the
way that the dburl works for other dialects, though. Jason's dialect
refactor is going to confront this problem head-on as well. Any thoughts
regarding this from that perspective?

Rick




[sqlalchemy] (OperationalError) Could not decode to UTF-8 column

2008-12-10 Thread dasacc22

Hi,

I'm getting an "(OperationalError) Could not decode to UTF-8" on a
column. OK, I know how to fix this to prevent it, but I have it on a
database that I need intact, and I can't query the row to make an object
to manipulate, delete, etc. Any advice? If I could just delete the
record, that would suffice.

Thanks,
Daniel



[sqlalchemy] Re: Computed RowProxy values no longer work

2008-12-10 Thread GregF

Thanks for not only explaining the problem, but also suggesting a
solution.  I'll give it a shot.

-


On Dec 10, 10:58 am, Michael Bayer [EMAIL PROTECTED] wrote:
 this is because RowProxy is using __slots__ now.  which ironically is
 also for efficiency reasons :) .there are ways to build mutable
 wrapper objects which are pretty efficient in a case like this, such
 as a named tuple.   my own minimal version of that looks like:

 def named_tuple(names):
  class NamedTuple(tuple):
  pass
  for i, name in enumerate(names):
  setattr(NamedTuple, name, property(itemgetter(i)))
  return NamedTuple

 Post = named_tuple(['id', 'post_body', 'excerpt'])
 for post in posts:
     post = Post([post.id, post.post_body, post.post_body[0:100]])

 On Dec 10, 2008, at 11:20 AM, GregF wrote:



  Hi,

  I dusted off a project that had been dormant for a few months.
  Upgrading to sqlalchemy .5 broke some code where I was inserting
  computed values directly into  a rowproxy object, before I passed the
  rows to a template.

  I'm getting 'RowProxy' object has no attribute 'excerpt'

  Here is a very much simplified version of what I'm doing that used to
  work before the .5 upgrade:
  --
 posts_table = Table('posts', metadata, autoload=True)
 q = posts_table.select()
 ... some q.where stuff here

 r = q.execute()

 posts = r.fetchall()

 for post in posts:
 post.excerpt = post.post_body[0:100]

 return posts
  -

  The code bombs on any computed value, whether slicing is used or not.
  I was trying to avoid building a new object like posts as an array
  of dict, for simplicity and efficiency reasons.

  Is there any way to fix this, or will I just have to build my own
  object?

  Thank, Greg



[sqlalchemy] PickleType copy_values and compare_values bug

2008-12-10 Thread Jonathan Marshall

In a transaction I am updating values, then querying, then updating
some more, ... then eventually committing the whole batch at once.

If a value being updated is a dictionary to be pickled then I may,
depending on the data in the dictionary, find that it is flushed to
the database before every subsequent query even if the value is never
modified again. So I will see something like this:

  query for A
  update A
  query for B (A gets flushed by SA before the query)
  update B
  query for C (A and B get flushed by SA before the query)
  update C
  query for D (A, B and C get flushed by SA before the query)
  etc.

I.e., O(n*n) run-time. Which is a problem as my batch job has 1000
items in it!

I think the problem may be that for some dictionaries that are to be
pickled, PickleType.copy_value returns something that does not equal
the original value according to PickleType.compare_values.

E.g.
>>> from sqlalchemy.types import PickleType
>>> p = PickleType()
>>> from decimal import Decimal
>>> d1 = {'Amt': Decimal('6420.'), 'Broker': 'A', 'F': 'B'}
>>> d2 = {'Amt0': Decimal('6420.'), 'Broker': 'A', 'F': 'B'}
>>> p.compare_values(p.copy_value(d1), d1)
False
>>> p.compare_values(p.copy_value(d2), d2)
True

Yet:
>>> p.copy_value(d1) == d1
True

I can work around the issue by setting comparator=operator.eq on
PickleType however the current behaviour seems to be broken.

I can post a test case that shows objects being flushed multiple times
if you like. I'm running SA 0.5.0rc4 on OS X 10.5.5 (using Apple's
python 2.5.1 interpreter)

Ta,
Jon.






[sqlalchemy] Efficient dictification of result sets

2008-12-10 Thread Andreas Jung
Hi there,

is there some more efficient way of dictifying a result set other than:

lst = list()
for row in session.query(...).all():

    d = row.__dict__.copy()
    for k in d.keys():
        if k.startswith('_sa'):
            del d[k]

    lst.append(d)

Especially the loop over the keys takes pretty long for large result sets
and tables with lots of columns.

Andreas


-- 
ZOPYX Ltd.  Co. KG - Charlottenstr. 37/1 - 72070 Tübingen - Germany
Web: www.zopyx.com - Email: [EMAIL PROTECTED] - Phone +49 - 7071 - 793376
Registergericht: Amtsgericht Stuttgart, Handelsregister A 381535
Geschäftsführer/Gesellschafter: ZOPYX Limited, Birmingham, UK

E-Publishing, Python, Zope  Plone development, Consulting





[sqlalchemy] Re: PickleType copy_values and compare_values bug

2008-12-10 Thread Michael Bayer

On Dec 10, 2008, at 2:24 PM, Jonathan Marshall wrote:


 I think the problem may be that for some dictionaries that are to
 pickled PickleType.copy_value returns something that does not equal
 the original value according to PickleType.compare_values.

 E.g.
 from sqlalchemy.types import PickleType
 p = PickleType()
 from decimal import Decimal
 d1 = {'Amt': Decimal(6420.), 'Broker': 'A', 'F': 'B'}
 d2 = {'Amt0': Decimal(6420.), 'Broker': 'A', 'F':  
 'B'}
 p.compare_values(p.copy_value(d1), d1)
 False
 p.compare_values(p.copy_value(d2), d2)
 True

 Yet:
 p.copy_value(d1) == d1
 True

 I can work around the issue by setting comparator=operator.eq on
 PickleType however the current behaviour seems to be broken.

 I can post a test case that shows objects being flushed multiple times
 if you like. I'm running SA 0.5.0rc4 on OS X 10.5.5 (using Apple's
 python 2.5.1 interpreter)

This is the documented behavior, and the comparator=operator.eq setting
is provided for exactly the purpose of efficiently comparing objects
which do implement an __eq__() that compares internal state, like that
of a dict (and which also don't produce identical pickles each time).
The default behavior of comparing pickles is to support user-defined
objects with mutable state which do not have an __eq__() method
provided.
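[Editor's note: a minimal illustration of the workaround — the comparator argument still exists on current SQLAlchemy:]

```python
import operator

from sqlalchemy.types import PickleType

# Compare stored values by equality rather than by pickle bytes --
# appropriate for dicts, lists, and anything else with a real __eq__().
pt = PickleType(comparator=operator.eq)

d = {'Amt': '6420.', 'Broker': 'A', 'F': 'B'}
print(pt.compare_values(dict(d), d))  # True
```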







[sqlalchemy] Re: Efficient dictification of result sets

2008-12-10 Thread Michael Bayer


On Dec 10, 2008, at 2:27 PM, Andreas Jung wrote:

 Hi there,

 is there some more efficient way for dictifying a resultset other than

 lst = list()
 for row in session.query(...).all():

 d = self.__dict__.copy()
 for k in d.keys():
 if k.startswith('_sa'):
 del d[k]

 lst.append(d)

 Especially the loop of the keys takes pretty long for large result  
 sets
 and tables with lots of columns.

The most efficient way would be to use a ResultProxy instead of the
ORM and to call dict() on each row. Otherwise, a more succinct version
of what you're doing there, not necessarily much faster, might be:

lst = [
    dict((k, v) for k, v in obj.__dict__.iteritems() if not
         k.startswith('_sa'))
    for obj in session.query(...).all()
]
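[Editor's note: on a modern SQLAlchemy the ResultProxy route suggested above looks roughly like this — dict(row) became dict(row._mapping) in the 1.4+ API, and the table/column names below are illustrative:]

```python
from sqlalchemy import create_engine, text

engine = create_engine('sqlite://')

with engine.connect() as conn:
    conn.execute(text("CREATE TABLE posts (id INTEGER, title TEXT)"))
    conn.execute(text("INSERT INTO posts VALUES (1, 'a'), (2, 'b')"))

    # Core result rows expose a mapping view; no ORM instances, no
    # _sa* bookkeeping keys to strip.
    rows = conn.execute(text("SELECT id, title FROM posts")).fetchall()

dicts = [dict(row._mapping) for row in rows]
print(dicts)  # [{'id': 1, 'title': 'a'}, {'id': 2, 'title': 'b'}]
```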





[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread desmaj

On Dec 10, 1:27 pm, Rick Morrison [EMAIL PROTECTED] wrote:
 This has been bandied back and forth for months, and I think it's becoming
 clear that having sqla map dburl's to ODBC connection strings is a losing
 battle. Yet another connection argument is not sounding very attractive to
 me.

 Perhaps a simple reductionist policy for ODBC connections would be best. The
 user would either specify a DSN, which would be passed to the driver, or
 give a full ODBC connection string (presumably via a keyword arg) which
 would be used verbatim. Anything else seems to degrade to the kind of
 error-prone heuristics we see here.

You make a good point about heuristics. That describes the current
implementation as well as my initial proposal involving argument
values of 'windows' and 'freetds' and stuff. I'm glad you pointed that
out. For the reasons I'll add below, I still favor a connection
argument, but the argument values should indicate the outcome and not
try to attribute behaviors to platforms when there's a lack of full
understanding about what those behaviors are. I still can't think of
good names.

I dislike the idea of relying heavily on DSNs, since I don't want SA to
tell people how to manage their systems. Giving full support to DSN-less
connections lets SA work with existing systems.

Depending on a keyword argument seems troublesome, too, since there
are so many existing configurations using SA that count on being able
to specify connection information through a dburl. sqlalchemy-migrate
is the one that I'm working with now. I may be overstating this, but
the sense that I have is that using keyword arguments to specify
connection information is something that most people don't do. I do it
all the time so I'm glad it's there, but it's always felt like a
workaround.

 It is a big change from the current behavior of trying to fit in with the
 way that the dburl works for other dialects, though. Jason's dialect
 refactor is going to confront this problem head-on as well. Any thoughts
 regarding this from that perspective?

I don't know what this will look like, so maybe the discussion is moot.
I look forward to hearing about it.



[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Michael Bayer


On Dec 10, 2008, at 2:49 PM, desmaj wrote:


 I dislike the idea of relying heavily on DSNs since I don't want SA to
 tell people how to manage their systems. Giving full support to DSN-
 less connections let's SA work with existing systems.

DSNs would eliminate these issues for SQLAlchemy. DSNs are the
recommended way to connect with ODBC, and I'm always confused why people
don't use them. Back when I used ODBC heavily, it was the
*only* way to connect. All of this host/port stuff with ODBC client
libraries is unknown to me.

This is just my 2c; I'm not here to say how it should be done or not.
I would think that the standard SQLA host/port connect pattern should
work as well if we are just aware of what kind of client library we're
talking to. If we can't autodetect that, then we just use a keyword
argument ?connect_type=FreeTDS. We can document all the different
connect types somewhere and just adapt to all the newcomers that
way, with DSN being the default. I definitely do not want to start
allowing raw connect strings through create_engine() - if you need to
do that, use the creator() function.

What's the status of 0.5 - is DSN the default in trunk now?






[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Lukasz Szybalski

On Wed, Dec 10, 2008 at 1:49 PM, desmaj [EMAIL PROTECTED] wrote:

 On Dec 10, 1:27 pm, Rick Morrison [EMAIL PROTECTED] wrote:
 This has been bandied back and forth for months, and I think it's becoming
 clear that having sqla map dburl's to ODBC connection strings is a losing
 battle. Yet another connection argument is not sounding very attractive to
 me.

 Perhaps a simple reductionist policy for ODBC connections would be best. The
 user would either specify a DSN, which would be passed to the driver, or
 give a full ODBC connection string (presumably via a keyword arg) which
 would be used verbatim. Anything else seems to degrade to the kind of
 error-prone heuristics we see here.

 You make a good point about heuristics. That describes the current
 implementation as well as my initial proposal involving argument
 values of 'windows' and 'freetds' and stuff. I'm glad you pointed that
 out. For the reasons I'll add below, I still favor a connection
 argument, but the argument values should indicate the outcome and not
 try to attribute behaviors to platforms when there's a lack of full
 understanding about what those behaviors are. I still can't think of
 good names.

 I dislike the idea of relying heavily on DSNs since I don't want SA to
 tell people how to manage their systems. Giving full support to
 DSN-less connections lets SA work with existing systems.

 Depending on a keyword argument seems troublesome, too, since there
 are so many existing configurations using SA that count on being able
 to specify connection information through a dburl. sqlalchemy-migrate
 is the one that I'm working with now. I may be overstating this, but
 the sense that I have is that using keyword arguments to specify
 connection information is something that most people don't do. I do it
 all the time so I'm glad it's there, but it's always felt like a
 workaround.

 It is a big change from the current behavior of trying to fit in with the
 way that the dburl works for other dialects, though. Jason's dialect
 refactor is going to confront this problem head-on as well. Any thoughts
 regarding this from that perspective?

 I don't know what this will look like, so maybe the discussion is moot.
 I look forward to hearing about it.



I've been using the following connection string with unixodbc for a
year now since 0.4.6 version.

sqlalchemy.create_engine("mssql://user:[EMAIL 
PROTECTED]:1433/databasename?driver=TDS&odbc_options='TDS_Version=8.0'")
which gets translated into:
pyodbc.connect("SERVER=;UID=xxx;PWD=xxx;DRIVER={TDS};TDS_Version=7.0")

It works great for me.

So what is not working exactly? Can you provide an example:

Have you setup the unixodbc/freetds already?
http://lucasmanual.com/mywiki/unixODBC

Thanks,
Lucas




[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Lukasz Szybalski

On Wed, Dec 10, 2008 at 2:05 PM, Michael Bayer [EMAIL PROTECTED] wrote:


 On Dec 10, 2008, at 2:49 PM, desmaj wrote:


 I dislike the idea of relying heavily on DSNs since I don't want SA to
 tell people how to manage their systems. Giving full support to DSN-
 less connections let's SA work with existing systems.

 DSNs would eliminate these issues for SQLAlchemy.  DSNs are the
 recommended way to connect with ODBC and im always confused why people
 don't use them.   Back when I used to use ODBC heavily, it was the
 *only* way to connect.   All of this host/port stuff with ODBC client
 libraries is unknown to me.

 this is just my 2c, im not here to say how it should be done or not.
 I would think that the standard SQLA host/port connect pattern should
 work as well if we just are aware of what kind of client library we're
 talking to.   If we can't autodetect that, then we just use a keyword
 argument ?connect_type=FreeTDS.   We can document all the different
 connect types somewhere and just adapt to all the newcomers that
 way.   With DSN being the default.   I definitely do not want to start
 allowing raw connect strings through create_engine() - if you need to
 do that, use the creator() function.

 Whats the status of 0.5, is DSN the default in trunk now ?

In my opinion sqlalchemy should provide both DSN and DSN-less connection
options. I know it does DSN-less, and I'm not sure what the status is
on DSN connections. If both are available I think that is great; if
you force users to use one or the other that would cause problems for me,
especially if you don't allow DSN-less connections.

Thanks,
Lucas




[sqlalchemy] first() fails with from_statement in 0.5rc4

2008-12-10 Thread David Gardner

It seems that when using .first() with from_statement() I get a 
traceback with 0.5rc4 when no rows are found (it works when a row is found):

using 0.4.8 I get:
/usr
0.4.8

with 0.5rc4 I get:
/users/dgardner/dev
0.5.0rc4
Traceback (most recent call last):
  File assetdb_test.py, line 38, in module
session.query(Note).from_statement(qry%(uid,name,updated)).first()
  File 
/users/dgardner/dev/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc4-py2.5.egg/sqlalchemy/orm/query.py,
 
line 1025, in first
return list(self)[0]
IndexError: list index out of range



My test:

import sys
print sys.prefix
from sqlalchemy import *
import sqlalchemy
from sqlalchemy.orm import *

DB_HOST = 'localhost'
DB_NAME = 'transitional'
DB_USER = 'sqluser'
DB_PASS = 'not_my_password'

db_uri = 'postgres://%s:[EMAIL PROTECTED]/%s' % 
(DB_USER,DB_PASS,DB_HOST,DB_NAME)

db = create_engine (db_uri, pool_size=200,max_overflow=200)
#db.echo = True
metadata = MetaData(db)

class Note(object):
pass

note_table = Table('note', metadata,
   Column('id', Integer, primary_key=True),
   Column('created', DateTime, default=func.now()),
   Column('updated', DateTime, default=func.now(), 
onupdate=func.current_timestamp()),
   Column('author', String(255),ForeignKey('users.userid')),
   Column('note', String),
   Column('asset', String(20),  
ForeignKey('nodehierarchy.uid'), nullable=False)
   )
mapper(Note, note_table)

print sqlalchemy.__version__
qry = "SELECT note.* FROM note_task JOIN note ON note_task.note=note.id WHERE note_task.task_asset='%s' AND note_task.task_name='%s' AND note.updated='%s'"
uid='00420123774551347239'
name='UV'
updated='2008-12-05 16:45:46.299124-08:00'
session = create_session()

session.query(Note).from_statement(qry%(uid,name,updated)).first()


sys.exit(0)
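For anyone hitting this before a fix lands, the IndexError can be sidestepped by fetching the full result and guarding the empty case yourself. `first_or_none` below is an illustrative stdlib-only helper, not SQLAlchemy API:

```python
def first_or_none(rows):
    """Return the first row of a fully fetched result, or None if empty.

    Guards the bare list(self)[0] indexing shown in the traceback, which
    raises IndexError when the query matches no rows.
    """
    results = list(rows)
    return results[0] if results else None
```

With this pattern, `first_or_none(session.query(Note).from_statement(...))` returns None instead of raising when the raw SQL matches nothing.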

-- 
David Gardner
Pipeline Tools Programmer, Sid the Science Kid
Jim Henson Creature Shop
[EMAIL PROTECTED]






[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread desmaj



On Dec 10, 3:05 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 this is just my 2c, im not here to say how it should be done or not.
 I would think that the standard SQLA host/port connect pattern should
 work as well if we just are aware of what kind of client library we're
 talking to.   If we can't autodetect that, then we just use a keyword
 argument ?connect_type=FreeTDS.   We can document all the different
 connect types somewhere and just adapt to all the newcomers that
 way.   With DSN being the default.   I definitely do not want to start
 allowing raw connect strings through create_engine() - if you need to
 do that, use the creator() function.

As Rick said, autodetection of this stuff is error-prone.

Maybe 'connect_type' can work, but I'd want to get all the mssql/
pyodbc folks on board before that got finalized. I don't know enough
about FreeTDS to stand behind any configuration and say that it should
work in all cases. If connect type is decided as the right approach,
though, then I will work something up for that.

 Whats the status of 0.5, is DSN the default in trunk now ?

DSN is the first choice in MSSQLDialect_pyodbc.make_connect_string
right now.

Thanks to Rick, Mike and Lukasz for helping me out and weighing in on
this.



[sqlalchemy] Re: first() fails with from_statement in 0.5rc4

2008-12-10 Thread David Gardner
Thanks, I'm not sure it's a good thing that I found new ways to use SA 
that were unintended by the author :).
In this case the raw SQL query should return only a single row.

Michael Bayer wrote:
 I dont think this was ever known behavior in 0.4 and its a little
 strange that it workedit works in 0.5 as of r5458, but keep in
 mind first() still fetches the full result set of the query.


 On Dec 10, 2008, at 3:14 PM, David Gardner wrote:

   
 It seems what when using .first() with from_statement() I get a
 traceback with 0.5rc4 when no rows are found (works when a row is
 fount):

 using 0.4.8 I get:
 /usr
 0.4.8

 with 0.5rc4 I get:
 /users/dgardner/dev
 0.5.0rc4
 Traceback (most recent call last):
  File assetdb_test.py, line 38, in module
session.query(Note).from_statement(qry%(uid,name,updated)).first()
  File
 /users/dgardner/dev/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc4-
 py2.5.egg/sqlalchemy/orm/query.py,
 line 1025, in first
return list(self)[0]
 IndexError: list index out of range



 My test:
 
 import sys
 print sys.prefix
 from sqlalchemy import *
 import sqlalchemy
 from sqlalchemy.orm import *

 DB_HOST = 'localhost'
 DB_NAME = 'transitional'
 DB_USER = 'sqluser'
 DB_PASS = 'not_my_password'

 db_uri = 'postgres://%s:[EMAIL PROTECTED]/%s' % 
 (DB_USER,DB_PASS,DB_HOST,DB_NAME)

 db = create_engine (db_uri, pool_size=200,max_overflow=200)
 #db.echo = True
 metadata = MetaData(db)

 class Note(object):
pass

 note_table = Table('note', metadata,
   Column('id', Integer, primary_key=True),
   Column('created', DateTime, default=func.now()),
   Column('updated', DateTime, default=func.now(),
 onupdate=func.current_timestamp()),
   Column('author',
 String(255),ForeignKey('users.userid')),
   Column('note', String),
   Column('asset', String(20),
 ForeignKey('nodehierarchy.uid'), nullable=False)
   )
 mapper(Note, note_table)

 print sqlalchemy.__version__
 qry=SELECT note.* FROM note_task JOIN note ON
 note_task.note=note.id
 WHERE note_task.task_asset='%s' AND note_task.task_name='%s' AND
 note.updated='%s'
 uid='00420123774551347239'
 name='UV'
 updated='2008-12-05 16:45:46.299124-08:00'
 session = create_session()

 session.query(Note).from_statement(qry%(uid,name,updated)).first()


 sys.exit(0)

 --
 David Gardner
 Pipeline Tools Programmer, Sid the Science Kid
 Jim Henson Creature Shop
 [EMAIL PROTECTED]



 


 

   


-- 
David Gardner
Pipeline Tools Programmer, Sid the Science Kid
Jim Henson Creature Shop
[EMAIL PROTECTED]





[sqlalchemy] MSSQL pyodbc - unicode problems since 0.5rc2

2008-12-10 Thread desmaj

I'm using MSSQL + pyodbc + unixODBC + FreeTDS ...

I have been tracking the 0.5 line for a while now, but I only recently
noticed that I am unable to insert unicode into the database since
0.5rc2. Starting with 0.5rc3, when I do try to insert unicode into a
column defined as Unicode or UnicodeText, I get an error like:
[HY000] [FreeTDS][SQL Server]Could not perform COMMIT or ROLLBACK (0)
(SQLEndTran)

I should probably also add that I autoload my tables.

So before I start tearing code apart ... does this sound familiar to
anyone out there? Can anyone think of changes since 0.5rc2 that might
not be local to mssql and that might result in such errors?

Thanks,
Matthew



[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Rick Morrison

  Whats the status of 0.5, is DSN the default in trunk now ?

 DSN is the first choice in MSSQLDialect_pyodbc.make_connect_string
 right now.


That's not what I see. I just pulled the 0.5 trunk, which I haven't been
tracking lately. It still uses the 'dsn' keyword to build a connection string
with DSN, and otherwise defaults to the former dsn-less connection string behavior.

What Mike is talking about is an idea initially advanced by Marc-Andre
Lemburg to make the 'host' portion of the dburl to represent a DSN for
pyodbc connections (any supplied database name would be ignored) for the 0.5
trunk, and have the existing dsn-less connection behavior become a
keyword-only kind of thing.

I made a patch for that many moons ago, but never committed it; it makes a
lot of sense, and the 0.5 release would be an appropriate time to introduce
this kind of compatibilty-breaking behavior. But there's been a rather
surprising lack of fans for the idea, and I would expect to hear some
grumbling from users if/when we do it.




[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Michael Bayer


DSN is the one keyword for host that is universally recognized as  
part of odbc proper, so it makes sense that host/port would be the  
exception case - especially considering it seems like we now have to  
pick among many formats for propagating host/port and are going to  
require some kind of host_format/connect_type or something for the  
host/port case anyway.

Let's just nail this down so 0.5 can go out soon.   Let's also add  
awesome docs to our awesome new doc system.

On Dec 10, 2008, at 4:42 PM, Rick Morrison wrote:

  Whats the status of 0.5, is DSN the default in trunk now ?

 DSN is the first choice in MSSQLDialect_pyodbc.make_connect_string
 right now.

 That's not what I see. I just pulled the 0.5 trunk, which I haven't  
 been tracking lately. Still uses the 'dsn' keyword build a  
 connection string with DSN, otherwise defaults to the former dsn- 
 less connection string behavior.

 What Mike is taking about is an idea initially advanced by Marc- 
 Andre Lemburg to make the 'host' portion of the dburl to represent a  
 DSN for pyodbc connections (any supplied database name would be  
 ignored) for the 0.5 trunk, and have the existing dsn-less  
 connection behavior become a keyword-only kind of thing.

 I made a patch for that many moons ago, but never committed it; it  
 makes a lot of sense, and the 0.5 release would be an appropriate  
 time to introduce this kind of compatibilty-breaking behavior. But  
 there's been a rather surprising lack of fans for the idea, and I  
 would expect to hear some grumbling from users if/when we do it.

 





[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Lukasz Szybalski

On Wed, Dec 10, 2008 at 3:42 PM, Rick Morrison [EMAIL PROTECTED] wrote:
  Whats the status of 0.5, is DSN the default in trunk now ?

 DSN is the first choice in MSSQLDialect_pyodbc.make_connect_string
 right now.

 That's not what I see. I just pulled the 0.5 trunk, which I haven't been
 tracking lately. Still uses the 'dsn' keyword build a connection string with
 DSN, otherwise defaults to the former dsn-less connection string behavior.

 What Mike is taking about is an idea initially advanced by Marc-Andre
 Lemburg to make the 'host' portion of the dburl to represent a DSN for
 pyodbc connections (any supplied database name would be ignored) for the 0.5
 trunk, and have the existing dsn-less connection behavior become a
 keyword-only kind of thing.

 I made a patch for that many moons ago, but never committed it; it makes a
 lot of sense, and the 0.5 release would be an appropriate time to introduce
 this kind of compatibilty-breaking behavior.

Rick, Michael could you guys give me an example of the connection string then?
If I understand this correctly:
You guys are saying that dsn=... would replace the host:port or
host,port,databasename?

Would this change still allow supplying odbc options, as I currently
can with TDS_Version=7.0, which can only be set on the
connection?
http://www.freetds.org/userguide/odbcconnattr.htm

I'm not clear on exactly what functionality you guys want to break:

From the pyodbc docs, the following are the two connection-string forms,
so it would seem that these could be fairly easily generated based on
whether or not dsn= appears in a sqlalchemy connection string. Unless
I'm way off topic, in which case just let me know.

cnxn = pyodbc.connect(DSN=dsnname;UID=user;PWD=password)
or
cnxn = pyodbc.connect('DRIVER={SQL
Server};SERVER=server;DATABASE=database;UID=user;PWD=password)


Thanks,
Lucas




[sqlalchemy] Re: Extending sqlalchemy.schema.Column and metaprogramming traps

2008-12-10 Thread Angri

 if you'd like to submit a patch which defines __visit_name__ for all  
 ClauseElements and removes the logic from VisitableType to guess the  
 name, it will be accepted.  The second half of VisitableType still may  
 be needed since it improves performance.

Ok, I did it. I can't find where to attach a file to a message in
google groups, so I put it in a paste bin: http://paste.org/index.php?id=4463

I made some automated tests to make sure that nothing will break.
Existing Visitable descendants will have exactly the same value of
__visit_name__ after applying the patch. I also added the small
test from the first message in the topic. It fails with the vanilla
trunk and passes OK with the patched one.

Take a look, please.
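For readers following the thread, the trap being patched can be reduced to a toy metaclass. This is a deliberate simplification (SQLAlchemy's real VisitableType differs in detail): when the metaclass guesses __visit_name__ from each class's own name, a subclass silently stops answering to its parent's visit name, whereas an explicitly declared __visit_name__ is inherited unchanged.

```python
import re

def _guess_visit_name(class_name):
    # CamelCase -> snake_case, e.g. "MyColumn" -> "my_column"
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", class_name).lower()

class GuessingType(type):
    """Toy stand-in for the name-guessing behavior: every class,
    including subclasses, gets a __visit_name__ derived from its
    own class name."""
    def __init__(cls, name, bases, attrs):
        cls.__visit_name__ = _guess_visit_name(name)
        super().__init__(name, bases, attrs)

class Column(metaclass=GuessingType):
    pass

class MyColumn(Column):
    # The trap: this subclass gets visit name "my_column", so a
    # visitor dispatching on visit_column() no longer finds it.
    pass

class ExplicitColumn:
    __visit_name__ = "column"  # explicit, inherited unchanged

class MyExplicitColumn(ExplicitColumn):
    pass
```

The patch under discussion moves every ClauseElement to the explicit style, so subclassing keeps working.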



[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Michael Bayer


Here is my take, keeping in mind I haven't used a Windows machine in  
about a year:


On Dec 10, 2008, at 5:13 PM, Lukasz Szybalski wrote:


 cnxn = pyodbc.connect(DSN=dsnname;UID=user;PWD=password)

mssql://user:[EMAIL PROTECTED]/



 or
 cnxn = pyodbc.connect('DRIVER={SQL
 Server};SERVER=server;DATABASE=database;UID=user;PWD=password)

mssql://user:[EMAIL PROTECTED]/database?connect_type=TDS7&other=args&that=are&needed=foo

using connect_type, or some better name, we can map the URL scheme  
to an unlimited number of vendor-specific connect strings on the back end.




[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Rick Morrison
Something like this:

As of 0.5 for pyodbc connections:

a) If the keyword argument 'odbc_connect' is given, it is assumed to be a
full ODBC connection string, which is used for the connection (perhaps we
can include a facility for Python string interpolation into this string from
the dburi components).

b) Otherwise, the host portion of the dburl represents an ODBC DSN. A simple
connection string is constructed using the user name, password, and DSN
(host) from the dburl. Any given database name is ignored.

Finally, if present, the contents of the keyword argument 'odbc_options'
(assumed to be a string) are concatenated to the connection string generated
in either (a) or (b).
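Rules (a) and (b) plus the options concatenation could be sketched roughly like this. This is a stdlib-only illustration: `make_odbc_connect_string` and the plain dict standing in for SQLAlchemy's URL object are hypothetical names, not actual SA code.

```python
def make_odbc_connect_string(url, odbc_connect=None, odbc_options=None):
    """Build a pyodbc connection string per the proposed policy.

    url: dict with 'host', 'username', 'password' keys, a stand-in
    for SQLAlchemy's URL object; any database name is ignored (rule b).
    """
    if odbc_connect is not None:
        # Rule (a): a full ODBC connection string is used verbatim.
        connect = odbc_connect
    else:
        # Rule (b): the host portion of the dburl names an ODBC DSN.
        connect = "DSN=%s;UID=%s;PWD=%s" % (
            url["host"], url["username"], url["password"])
    if odbc_options:
        # Extra driver options are concatenated to either form.
        connect = "%s;%s" % (connect, odbc_options)
    return connect
```

Under this scheme a dburl like mssql://user:pass@mydsn/ would yield "DSN=mydsn;UID=user;PWD=pass", with 'odbc_connect' as the verbatim escape hatch.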




[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Rick Morrison

 mssql://user:[EMAIL PROTECTED]/database?connect_type=TDS7&other=args&that=are&needed=foo

 using connect_type, or some better name, we can map the URL scheme
 to an unlimited number of vendor specific connect strings on the back.


Yeah, it's exactly that kind of mapping that has so far been a fussy pain in
the neck to use and maintain. I'm for nuking the idea and just going with a
straight-up ODBC connection string as in my last posting.




[sqlalchemy] Re: Extending sqlalchemy.schema.Column and metaprogramming traps

2008-12-10 Thread Michael Bayer

hey send it as an email attachment, or create a ticket in trac as  
guest/guest and attach it there:  http://www.sqlalchemy.org/trac/newticket


On Dec 10, 2008, at 5:19 PM, Angri wrote:


 if you'd like to submit a patch which defines __visit_name__ for all
 ClauseElements and removes the logic from VisitableType to guess the
 name, it will be accepted.  The second half of VisitableType still  
 may
 be needed since it improves performance.

 Ok, I did it. Can not find where I can attach file to message in
 google groups, so I put it in paste bin: http://paste.org/index.php?id=4463

 I made some automated tests to make sure that nothing will break.
 Existing Visitable's descendants after applying the patch will have
 exactly the same value of __visit_name__ property. I also put small
 test from first message in the topic. It fails with vanila trunk and
 pass ok with patched.

 Take a look, please.
 





[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Michael Bayer


On Dec 10, 2008, at 5:21 PM, Rick Morrison wrote:

 Something like this:

 As of 0.5 for pyodbc connections:

 a) If the keyword argument 'odbc_connect' is given, it is assumed to  
 be a full ODBC connection string, which is used for the connection  
 (perhaps we can include a facility for Python sting interpolation  
 into this string from the dburi components).


so you mean create_engine('mssql://?odbc_connect=DRIVER={SQL Server};SERVER=server;DATABASE=database;UID=user;PWD=password') ?   I can't
go with that, there has to be some way to plug in a URL handler.   If
you really need to send that raw string straight through you can use
the creator() function.   It's really not a big deal to maintain if
we just make a modular URLHandler class that, given a sqlalchemy.URL
and a DBAPI, returns a connection.   Anytime someone has some new
goofy string they need, they can provide one of these, we add it to
the MSSQL dialect, and we won't have to change it again, since it will
be keyed to a name.
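The "dictionary of callables keyed to a name" could be sketched as follows. This is a toy, stdlib-only illustration: URL_HANDLERS, register_handler, and the dict-based URL stand-in are all hypothetical names, not SQLAlchemy API.

```python
# Registry of connect-string builders, keyed by a connect_type name
# pulled from the dburl's query string.
URL_HANDLERS = {}

def register_handler(name):
    """Decorator registering a builder under a connect_type name."""
    def deco(fn):
        URL_HANDLERS[name] = fn
        return fn
    return deco

@register_handler("dsn")
def dsn_handler(url):
    # Default: the host names an ODBC DSN.
    return "DSN=%s;UID=%s;PWD=%s" % (url["host"], url["user"], url["password"])

@register_handler("freetds")
def freetds_handler(url):
    # One possible DSN-less FreeTDS form; new "goofy" formats get
    # their own named handler without touching the existing ones.
    return ("DRIVER={TDS};SERVER=%s;PORT=%s;DATABASE=%s;UID=%s;PWD=%s;"
            "TDS_Version=8.0" % (url["host"], url.get("port", 1433),
                                 url["database"], url["user"], url["password"]))

def make_connect_string(url):
    handler = URL_HANDLERS[url.get("connect_type", "dsn")]
    return handler(url)
```

Dispatching by name keeps the dialect stable: supporting a new client library means adding one entry to the dictionary.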






[sqlalchemy] Connection Pool behaviour change

2008-12-10 Thread Robert Ian Smit

Hi,

I am porting our pylons app from 0.4.5 to 0.5.

It appears that implicit queries are using a new connection instead of
re-using the existing one that is also used by the ORM Session.

I have read the Migration Docs and looked at the changelog and didn't
find anything related to this matter.

To avoid opening many connections or to re-use a transaction handle, I
had to rewrite code such as this

from:

   s = select()
   dbsession.query(Model).instances(s.execute())


to:

  s = select()
  list(dbsession.query(Model).instances(dbsession.execute(s)))

Is this change expected or is it an indication of an error somewhere?

Thanks,

Bob
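The behavior described here can be pictured with a toy model, purely illustrative and with no SQLAlchemy involved: an implicit, engine-level s.execute() checks a fresh connection out of the pool each time, while dbsession.execute(s) reuses the single connection the session already holds for its transaction. ToyPool and ToySession are invented names.

```python
class ToyPool:
    """Counts connection checkouts, standing in for the engine's pool."""
    def __init__(self):
        self.checkouts = 0

    def connect(self):
        self.checkouts += 1
        return "conn-%d" % self.checkouts

class ToySession:
    """Holds one connection for its transaction, like an ORM session."""
    def __init__(self, pool):
        self._pool = pool
        self._conn = None

    def execute(self, statement):
        if self._conn is None:          # lazily check out one connection...
            self._conn = self._pool.connect()
        return (statement, self._conn)  # ...and keep reusing it

pool = ToyPool()
session = ToySession(pool)

session.execute("SELECT 1")   # checks out conn-1
session.execute("SELECT 2")   # reuses conn-1

# An implicit, engine-level execute bypasses the session entirely
# and pulls a second connection from the pool:
implicit_conn = pool.connect()
```

This is why rewriting `s.execute()` as `dbsession.execute(s)` keeps everything on the session's existing connection and transaction.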




[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Michael Bayer


On Dec 10, 2008, at 5:39 PM, Rick Morrison wrote:


 Its really not a big deal to maintain if
 we just make a modular URLHandler class that given a sqlalchemy.URL
 and a DBAPI, returns a connection.   Anytime someone has some new
 goofy string they need, they can provide one of these, we add it to
 the MSSQL dialect, and we wont have to change it again, since it will
 be keyed to a name.


 OK, that sounds promising and like it has a chance to keep working  
 in 0.6 too. Is that a new idea, or does that already exist somewhere  
 in SA?

It's just a new idea to solve the problem locally within ms-sql for  
now; it's just a dictionary of callables or objects of some kind.




[sqlalchemy] Re: Remote query mapping without join

2008-12-10 Thread channing

I've been wondering about this for a while, but for some reason I
didn't get that the ColumnProperty would let me query:

query(units).filter_by(depth=2581)

I'm not sure whether that's just me, or if the documentation isn't
clear.

Thanks!

-Channing