[sqlalchemy] Re: Support for CouchDB

2010-08-25 Thread phrrn...@googlemail.com
Roger Binns has incorporated support for CouchDB into apsw by using
SQLite virtual tables. Although apsw cannot be used directly by
SQLAlchemy (as it is not DBAPI-compliant), you can pass an apsw
connection to pysqlite's connect() and then use that connection as a
SQLAlchemy engine. I did some minimal work on a thin DBAPI
compatibility layer on top of apsw and the initial results were
encouraging, but I have not been able to allocate any further time to
it. I am not far enough in to know whether it is a great idea that will
work or an incredibly dumb one that has no chance of working!

So if you want to do some hacking about with create_engine(creator=
lambda: ...), you may just be able to get CouchDB up and running
with SQLAlchemy via apsw. If you try it, let me know how you get on.
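
A rough sketch of what that might look like (entirely untested; the
apsw-to-pysqlite handoff is exactly the step described above, and the
CouchDB virtual table registration is elided):

    import apsw
    import sqlite3  # or: from pysqlite2 import dbapi2 as sqlite3
    from sqlalchemy import create_engine

    def connect():
        apsw_con = apsw.Connection(':memory:')
        # ... register the CouchDB virtual table module on apsw_con here ...
        # assumption: pysqlite accepts an existing apsw connection, as
        # described above
        return sqlite3.connect(apsw_con)

    e = create_engine('sqlite://', creator=connect)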

pjjH


On Aug 25, 11:16 am, Rawlins rob...@supporttime.com wrote:
 Hello Guys,

 Thanks for the info, I did expect that to be the case! I can
 understand that the architecture is indeed very different.

 As an extension to my question then, which DBMS would you guys
 recommend for stability? We have a small embedded platform which has
 an unreliable power source. We're currently using SQLite, which appears
 to work pretty well; is that something you would recommend, or is
 there something else even simpler and more robust?

 Cheers,

 Robert

 On Aug 25, 3:32 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  On Aug 25, 2010, at 6:26 AM, Rawlins wrote:

   Hello Guys,

   Do we have support for CouchDB with SQLAlchemy? Can't seem to find
   much information anywhere so I'm guessing not, just thought I would
   check.

  There's a long term plan to allow plugins that would provide attribute 
  instrumentation and Session persistence services to objects that are 
  persisted by a non-SQL database.  Such a plugin would need to be written by 
  a third party for each database backend and querying would also be an 
  ad-hoc affair.   So there's a notion of it but nothing is happening on that 
  front right now.

   Thanks,

   Robert





[sqlalchemy] Opened ticket 1866 to provide SA support for APSW driver for SQLite

2010-08-06 Thread phrrn...@googlemail.com

I would like to use the backup API from SA (as part of our testing
infrastructure for setting up :memory: databases from a known,
persisted state). Also, UDFs (both scalar and aggregate) and virtual
tables are very nicely supported by APSW.


pjjH

http://www.sqlalchemy.org/trac/ticket/1866
http://stackoverflow.com/questions/1072444/use-sqlites-backup-api-from-python-sqlalchemy




[sqlalchemy] mapping columns phone1, phone2, phone3 to a list-based property, phones

2010-08-03 Thread phrrn...@googlemail.com
Say you have a denormalized table with columns phone1, phone2, phone3
and you would like to map the class so that the .phones property is an
iterable, e.g. if I have data like

user_id, phone1, phone2, phone3
1, 1234, 5678, 9012
2, 3456, 7890, 1234

I would like to say something like
for p in S.query(User).get(1).phones:
   print p.ordinal, p.number

and get this as output:

  1 1234
  2 5678
  3 9012

While one could use an operator like SQL Server's UNPIVOT, I would be
quite happy to have the mapper do the magic. I was reading through the
examples/vertical.py source today, so I think that what I want is
doable; I am just not sure how to approach it. I assume that I would
proxy a list-based relation?
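
For comparison, a plain Python property on the mapped class already gives
the iteration shown above, albeit without any mapper involvement (purely a
sketch; the Phone namedtuple is made up):

    from collections import namedtuple

    Phone = namedtuple('Phone', ['ordinal', 'number'])

    class User(object):
        # phone1, phone2, phone3 are the mapped columns
        @property
        def phones(self):
            for ordinal, number in enumerate(
                    (self.phone1, self.phone2, self.phone3), 1):
                if number is not None:
                    yield Phone(ordinal, number)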

pjjH





[sqlalchemy] Support for Common Table Expressions (CTE)?

2010-07-29 Thread phrrn...@googlemail.com
Is there any support yet in SQLAlchemy for Common Table Expressions
e.g.

WITH foo(blurp, bletch) AS (SELECT bar*2, bletch from banana where
bletch < 3)
SELECT f.* FROM foo as f
where blurp > 1

I have not been following SA development for some months (due to a job
move) so I am not yet up to speed with the new yummy @compiles stuff
(which looks like it could certainly help with rolling my own support
for CTEs if it is not already in the core)
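
For what it's worth, here is a minimal sketch of the @compiles mechanism
(assuming the 0.6 compiler extension); it just renders the example CTE above
as literal text rather than offering general CTE support:

    from sqlalchemy.ext.compiler import compiles
    from sqlalchemy.sql.expression import Executable, ClauseElement

    class WithFoo(Executable, ClauseElement):
        """Hypothetical construct hard-coding the CTE from the example."""

    @compiles(WithFoo)
    def visit_with_foo(element, compiler, **kw):
        return ("WITH foo(blurp, bletch) AS "
                "(SELECT bar*2, bletch FROM banana WHERE bletch < 3) "
                "SELECT f.* FROM foo AS f WHERE blurp > 1")

    # usage: rows = engine.execute(WithFoo()).fetchall()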

thanks,
pjjH




[sqlalchemy] Re: Support for Common Table Expressions (CTE)?

2010-07-29 Thread phrrn...@googlemail.com
I see that a ticket has just been opened up on this topic.

http://www.sqlalchemy.org/trac/ticket/1859

I will follow up there.
thanks,

pjjH


On Jul 29, 8:33 am, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
 Is there any support yet in SQLAlchemy for Common Table Expressions
 e.g.

 WITH foo(blurp, bletch) AS (SELECT bar*2, bletch from banana where
 bletch < 3)
 SELECT f.* FROM foo as f
 where blurp > 1

 I have not been following SA development for some months (due to a job
 move) so I am not yet up to speed with the new yummy @compiles stuff
 (which looks like it could certainly help with rolling my own support
 for CTEs if it is not already in the core)

 thanks,
 pjjH




[sqlalchemy] Difference between result_processor and process_result_value?

2009-10-07 Thread phrrn...@googlemail.com

I had some (working) code with a type adaptor that used the
process_result_value style. I changed this to use the result_processor
style and returned a callable, and some tests started to fail. It
seems that with the process_result_value form, value is a datetime
object. When using result_processor, value is a unicode string.

pjjH


-    def result_processor(self, dialect):
-        def process(value):
-            if value is None:
-                return None
-            return Date(value)
-
-        return process
+    def process_result_value(self, value, dialect):
+        if value is None:
+            return None
+        return Date(value)
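
(One plausible reading, assuming the type is a TypeDecorator:
process_result_value receives the value after the impl type's own result
conversion has run, whereas overriding result_processor directly replaces
that conversion, so the callable sees whatever the DBAPI hands back, here a
unicode string. A sketch of the working form, with Date standing in for the
wrapper class above:)

    from sqlalchemy import types

    class MyDateType(types.TypeDecorator):
        impl = types.Date   # its result processing runs before the hook below

        def process_result_value(self, value, dialect):
            if value is None:
                return None
            return Date(value)   # Date is the wrapper class referred to above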







[sqlalchemy] Re: Want a custom implementation of get_bind that ignores metadata bindings

2009-09-24 Thread phrrn...@googlemail.com

Yup. We screwed up by using metadata.bind but I think we may be stuck
with it. Is it possible to bind a metadata collection within a
session? i.e. would session.configure(binds={metadata_collection_1 :
e1, metadata_collection_2 : e2}) work? We would like to be able to
bind groups of tables at the same time rather than doing them
individually or having a single common bind for the session ... a lot
of our applications access data across multiple data-servers and with
multiple logins.
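
In the meantime, here is a sketch of what seems to be the supported
spelling: binds keys are mappers, classes, or Table objects rather than
MetaData collections, so each group's tables get added individually
(engine and metadata names as above):

    from sqlalchemy.orm import sessionmaker

    binds = {}
    for t in metadata_collection_1.sorted_tables:
        binds[t] = e1
    for t in metadata_collection_2.sorted_tables:
        binds[t] = e2

    Session = sessionmaker(binds=binds)
    # or, on an existing session factory: Session.configure(binds=binds)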

pjjH


On Sep 24, 12:12 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Sep 23, 2009, at 6:36 PM, phrrn...@googlemail.com wrote:





  I have a hidden WriterSession which I am using behind the scenes to
  manage a number of API entries that write data in bulk e.g. upsert
  (MappedClass, iterator_that_returns_dicts). I want the session to only
  look at its own binds and to ignore any that are in place on the
  metadata collection. I wrote my own get_bind that does this
  (horrible!) hack:

         if self._Session__binds:
             b = self._Session__binds
             if c_mapper:
                 if c_mapper.base_mapper in b:
                     return b[c_mapper.base_mapper]
                 elif c_mapper.mapped_table in b:
                     return b[c_mapper.mapped_table]

         if self.bind:
             return self.bind

  I don't really understand how the double underscore stuff works in
  Python. Mike, how would you feel about exposing the session bind
  information with an interface that is more amenable to subclassing?

 The binds collection on Session is set via the binds argument, or
 one at a time using bind_mapper() and bind_table().   get_bind() does
 not consult the metadata's bind unless none of session.bind or
 __binds has been configured.    So there shouldn't be any need to
 hack get_bind().

 Also I would strongly advise against using metadata.bind for any  
 application that uses more than one engine.    Here's what the 0.5  
 docs at
 http://www.sqlalchemy.org/docs/05/metadata.html#binding-metadata-to-a...
   have to say:

 Note that the feature of binding engines is completely optional. All  
 of the operations which take advantage of “bound” MetaData also can be  
 given an Engine or Connection explicitly with which to perform the  
 operation.

 Here's what 0.6 has to say at
 http://www.sqlalchemy.org/docs/06/metadata.html#binding-metadata-to-a...
   :

 Binding the MetaData to the Engine is a completely optional feature.  
 The above operations can be achieved without the persistent bind using  
 parameters: (examples)

 Should you use bind ? It’s probably best to start without it. If you  
 find yourself constantly needing to specify the same Engine object  
 throughout the entire application, consider binding as a convenience  
 feature which is applicable to applications that don’t have multiple  
 engines in use and don’t have the need to reference connections  
 explicitly. It should also be noted that an application which is  
 focused on using the SQLAlchemy ORM will not be dealing explicitly  
 with Engine or Connection objects very much in any case, so it’s  
 probably less confusing and more “future proof” to not use the bind  
 attribute.



[sqlalchemy] Want a custom implementation of get_bind that ignores metadata bindings

2009-09-23 Thread phrrn...@googlemail.com

I have a hidden WriterSession which I am using behind the scenes to
manage a number of API entries that write data in bulk e.g. upsert
(MappedClass, iterator_that_returns_dicts). I want the session to only
look at its own binds and to ignore any that are in place on the
metadata collection. I wrote my own get_bind that does this
(horrible!) hack:

        if self._Session__binds:
            b = self._Session__binds
            if c_mapper:
                if c_mapper.base_mapper in b:
                    return b[c_mapper.base_mapper]
                elif c_mapper.mapped_table in b:
                    return b[c_mapper.mapped_table]

        if self.bind:
            return self.bind

I don't really understand how the double underscore stuff works in
Python. Mike, how would you feel about exposing the session bind
information with an interface that is more amenable to subclassing?
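
(For reference, the double underscore is just Python's class-private name
mangling: an attribute spelled __binds inside class Session is stored under
the name _Session__binds, which is why the hack above has to spell it that
way. A tiny illustration:)

    class Session(object):
        def __init__(self):
            self.__binds = {}            # stored as '_Session__binds'

    s = Session()
    print s._Session__binds              # {} : same attribute, mangled name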

pjjH




[sqlalchemy] Re: making an instance read-only

2009-08-25 Thread phrrn...@googlemail.com

You are correct: the code-snippet will cause an exception to be thrown
when SA attempts to flush any changes. However, the connection
callable is called *per-instance* and it is supposed to return the
connection to use to perform the flush. Inside the callable, you can
peek at the mapper and/or the instance and return whatever you deem
appropriate. What kind of failure mode are you looking for? If you
have a session with forbidden writes, what should happen? Nothing gets
written? The legitimate updates occur but the disallowed ones generate
exceptions? Disallowed writes silently swallowed by a mock
connection (which would probably make programmers very upset, confused
and angry!)?

The problem with any of these approaches is -- as you point out -- that
the application is not informed of the boo-boo at the time it occurs.
I wonder what Michael is alluding to in his comment about implementing
__getattribute__? I assume he meant __setattr__?
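
Going back to the connection_callable route, a rough sketch of a
per-instance hook along the lines described above (DefaultSession is the
same base as in the earlier snippet; is_readonly and the exception type are
made up for illustration):

    class PartlyReadOnlySession(DefaultSession):
        def __init__(self, **kwargs):
            super(PartlyReadOnlySession, self).__init__(**kwargs)
            self._mapper_flush_opts = {'connection_callable':
                                       self._connection_for_instance}

        def _connection_for_instance(self, mapper=None, instance=None,
                                      **kwargs):
            # raise as soon as the flush reaches a forbidden instance;
            # otherwise hand back the session's usual connection
            if getattr(instance, 'is_readonly', False):
                raise RuntimeError("attempt to flush read-only instance %r"
                                   % (instance,))
            return self.connection(mapper)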

pjjH


On Aug 25, 9:26 am, Martijn Faassen faas...@startifact.com wrote:
 Hey,



 phrrn...@googlemail.com wrote:
  I implemented a very crude flush/commit-time version of this today
  that disables all modifications. Michael suggested the use of
  connection_callable to provide fine-grained control of which engine to
  use for modifications. I haven't gone through all the details yet but
  it seems to work for my basic tests.

  pjjH

  class ReadOnlySession(DefaultSession):
      def __init__(self, **kwargs):
          super(ReadOnlySession, self).__init__(**kwargs)
          self._mapper_flush_opts = {'connection_callable':
              self._disable_any_modifications}

      def _disable_any_modifications(self, mapper=None, instance=None,
  **kwargs):
          return None

 If I understand it correctly, this is on a per-session basis, correct?
 I'd like to block modification to some but not all objects in the same
 session.

 Regards,

 Martijn



[sqlalchemy] Re: Using a different set of engines for reading versus writing

2009-08-24 Thread phrrn...@googlemail.com

I like the sneaky way. I put the basic mechanism in place but I am not
sure yet how to implement a useful and practical set of defaults (I am
referring to stuff which is totally particular to our organization and
not anything to do with SA).

very cool.

pjjH


On Aug 12, 9:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 the most APIish way to do that would be using merge().

 the sneaky way would be to tap into the connection_callable argument  
 that the sharded session uses.    you'd have to read the source to  
 shard.py to see what thats about.

 On Aug 12, 2009, at 7:13 PM, phrrn...@googlemail.com wrote:



  One of the very nice things about using SQLAlchemy is that since so
  much of the grunt-work is taken care of for you, it gives you the
  opportunity to come up with (potentially) hare-brained schemes like
  the one I just thought of. We would like to do reading of data with
  one set of logins and do writing with another. The reader 'binds' are
  pretty straightforward but the writer binds vary quite a bit from
  database to database and platform to platform (we have integrated
  security on some OS/dataserver/drivers/languages) The idea I have is
  to maintain a writer session with the specialized engine bindings and
  have the reader session copy any dirty data over to it at commit/
  flush  time (I got the idea from the thread on this group entitled
  Updating database after after_insert()). Is this a total kludge? Has
  anyone tried to do something like this already?

  thanks very much.

  pjjH



[sqlalchemy] Re: making an instance read-only

2009-08-24 Thread phrrn...@googlemail.com

I implemented a very crude flush/commit-time version of this today
that disables all modifications. Michael suggested the use of
connection_callable to provide fine-grained control of which engine to
use for modifications. I haven't gone through all the details yet but
it seems to work for my basic tests.

pjjH


class ReadOnlySession(DefaultSession):
    def __init__(self, **kwargs):
        super(ReadOnlySession, self).__init__(**kwargs)
        self._mapper_flush_opts = {'connection_callable':
                                   self._disable_any_modifications}

    def _disable_any_modifications(self, mapper=None, instance=None,
                                   **kwargs):
        return None

On Aug 24, 1:28 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 Martijn Faassen wrote:

  Hi there,

  I'm investigating ways to make an ORM-mapped instance read-only,
  dependent the value of a particular attribute (database backed or not).
  If an object has a certain state, I want to prevent normal users from
  making a modification. Other objects connected to the same session
  should be editable however.

  I've tried to figure out various possibilities, but none of them seem
  entirely satisfactory:

  * I could use AttributeExtension that checks the 'readonly' attribute
  and if so, raises an exception when modification is attempted. As far as
  I can see I'd need to manually define an AttributeExtension for *all*
  attributes, and I'd just like to do this once per mapped class at most.

  * It may be possible to use MapperExtension and modify before_update
  somehow. This is a bit late however - I'd prefer an exception to be
  raised as soon as someone tries to modify an attribute.

  * I could proxy the whole instance with a security proxy, along the
  lines of zope.security, which could then do the checks. I'd like to
  avoid security proxies if I can get away with it however. I'd prefer it
  if the objects that came back from session.query() were already
  configured to do read-only checks.

  * I could try to use some form of row-level security on the database
  level. MySQL, the database I'm working with, doesn't have such feature
  as far as I'm aware, however. Also this'd be warning the developer a bit
  late - I'd prefer if it happened directly when modifying the attribute.

  Does anyone have any clues as to what a good implementation strategy
  would be?

 did you try implementing __getattribute__() ?



[sqlalchemy] Using a different set of engines for reading versus writing

2009-08-12 Thread phrrn...@googlemail.com

One of the very nice things about using SQLAlchemy is that since so
much of the grunt-work is taken care of for you, it gives you the
opportunity to come up with (potentially) hare-brained schemes like
the one I just thought of. We would like to do reading of data with
one set of logins and do writing with another. The reader 'binds' are
pretty straightforward but the writer binds vary quite a bit from
database to database and platform to platform (we have integrated
security on some OS/dataserver/drivers/languages). The idea I have is
to maintain a writer session with the specialized engine bindings and
have the reader session copy any dirty data over to it at commit/
flush time (I got the idea from the thread on this group entitled
"Updating database after after_insert()"). Is this a total kludge? Has
anyone tried to do something like this already?

thanks very much.

pjjH




[sqlalchemy] How to avoid SQLAlchemy generating transactions for read-only queries

2009-08-06 Thread phrrn...@googlemail.com

We are rolling out some SQLAlchemy infrastructure at work and came
across an interesting issue. We are using the mssql dialect on top of
pyodbc to talk to both SQL Server and Sybase dataservers. We use bound
metadata and scoped session_maker with autocommit=True. First off, the
problem we encountered was with the transaction log on master filling
up when the SQLAlchemy transaction ran. This was very curious, as the
login was read-only. It turns out that the application was emitting
thousands of queries (SELECT only, coming from my_table.select() on
the SA Table metadata objects, not via the ORM) and a transaction was
generated for each one.

It looks (from tracing what is written down the socket to the
dataserver)  like SA rolls back the transaction on connection checkin
and then *creates another transaction*?
write(4, 0x08CC0888, 59)= 59
  0F01\0 ;\0\0\0\0 ! .\0\0\0\0 I F   @ @ T R A N C O U N T  0
 R O L L B A C K   B E G I N   T R A N S A C T I O N

Where is this coming from? SA? pyodbc? TDS?

Ideally, our connection pool should provide connections without any
transactions active and in unchained mode (@@trancount = 0,
@@transtate = 1, @@tranchained = 0). The checkin code can roll back if
@@trancount > 0.
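
Expressed as code, that checkin behaviour would be roughly the following
(just a sketch using a pool listener; the cursor-based @@trancount probe is
an assumption about what the DBAPI connection will accept):

    from sqlalchemy.interfaces import PoolListener

    class RollbackOnCheckin(PoolListener):
        def checkin(self, dbapi_con, con_record):
            cur = dbapi_con.cursor()
            try:
                cur.execute("SELECT @@trancount")
                if cur.fetchone()[0] > 0:
                    dbapi_con.rollback()
            finally:
                cur.close()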

What is the suggested configuration to achieve this?

thanks,
pjjH




[sqlalchemy] Re: How to avoid SQLAlchemy generating transactions for read-only queries

2009-08-06 Thread phrrn...@googlemail.com

Thanks for the information Mike. I do have a listener in place already
but decided to poke in the configuration directly when creating the
engine:

 'connect_args' : {'autocommit': True}
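
(For completeness, the listener route Mike mentions in the quoted reply
below might look roughly like this; the DSN and credentials are
placeholders, and it assumes the pyodbc connection object exposes an
autocommit attribute:)

    from sqlalchemy import create_engine
    from sqlalchemy.interfaces import PoolListener

    class PyodbcAutocommit(PoolListener):
        def connect(self, dbapi_con, con_record):
            # put each freshly created DBAPI connection into autocommit mode
            dbapi_con.autocommit = True

    e = create_engine('mssql://user:pass@mydsn',
                      listeners=[PyodbcAutocommit()])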

I found some very interesting results by experimenting with vanilla
pyodbc and SA and seeing how they impacted the Sybase transaction log.
Two transaction log IOs *per query* (one for BEGINXACT and one for
ENDXACT) would have devastating performance impact. However, it seems
that these are taken care of by the Sybase User Log Cache (ULC) (or
some other as yet unknown mechanism) otherwise we would surely have
noticed their impact.

I will do some more testing on this tomorrow and followup with a
summary. Auto-wrapping SA models over several thousand tables across
hundreds of databases across dozens of dataservers has certainly
brought up some interesting stuff!

pjjH

On Aug 6, 10:46 am, Michael Bayer mike...@zzzcomputing.com wrote:
 Michael Bayer wrote:

   Some DBAPI's provide autocommit modes, but since these are not
  standardized or universally available, and because SQLA has its own
  autocommit that works very nicely and consistently, SQLA has no support
  for them.  I don't know what Pyodbc provides.

 note however that this is not to say you can't add a connect hook to your
 pool (using PoolListener) and set the pyodbc connections into autocommit
 mode, if you identify such an option.   as far as the conditional shoved
 into do_begin() I still think we should just flatly deny SAVEPOINT support
 on MSSQL for now which would obviate the need for that statement.



[sqlalchemy] Re: How to avoid SQLAlchemy generating transactions for read-only queries

2009-08-06 Thread phrrn...@googlemail.com

They differ mainly on 'new' stuff (where 'new' is anything < 15 years
old!) but for the kind of SELECT queries generated by SA we haven't
seen any problems at all (yet!). We decided to go ahead and use the
mssql dialect for read-only apps and postpone the remaining work on
the Sybase dialect on top of sybasect until we run into the limitations
of the mssql dialect.

pjjH


On Aug 6, 8:47 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Aug 6, 2009, at 8:39 PM, phrrn...@googlemail.com wrote:





  Thanks for the information Mike. I do have a listener in place already
  but decided to poke in the configuration directly when creating the
  engine:

  'connect_args' : {'autocommit' : True,
                           }

  I found some very interesting results by experimenting with vanilla
  pyodbc and SA and seeing how they impacted the Sybase transaction log.
  Two transaction log IOs *per query* (one for BEGINXACT and one for
  ENDXACT) would have devastating performance impact. However, it seems
  that these are taken care of by the Sybase User Log Cache (ULC) (or
  some other as yet unknown mechanism) otherwise we would surely have
  noticed their impact.

  I will do some more testing on this tomorrow and followup with a
  summary. Auto-wrapping SA models over several thousand tables across
  hundreds of databases across dozens of dataservers has certainly
  brought up some interesting stuff!

 oh, sybase.  You're using the MSSQL dialect to talk to both, huh? :)
 Interesting, do you think sybase and MSSQL should inherit from some
 common base?   Not surprising you're having probs with that.



[sqlalchemy] Re: More SA, pyodbc, *nux and MSSQL problems

2009-07-30 Thread phrrn...@googlemail.com

The FreeTDS driver needs to be passed a value encoded in the encoding
specified in the 'client charset'  part of the odbc.ini configuration
file for the data-source in question. FreeTDS will use iconv to re-
encode the data in UCS-2 (which is either what is used by SQL Server
or is equivalent to the encoding used by SQL Server).

I thought there was a bug in the mssql bind_processor but I just re-
checked the code and don't see anything obvious. I haven't been doing
any low-level SA stuff for several weeks so take this with a pinch of
salt.

pjjH


On Jul 30, 8:28 am, Ed Singleton singleto...@gmail.com wrote:
 On 30 Jul 2009, at 04:52, Michael Trier wrote:



  Hi,

  On Wed, Jul 29, 2009 at 1:30 PM, Ed Singleton  
  singleto...@gmail.com wrote:
  Stupidly forgot to attach the files.

   What I'm trying to achieve is:

   1) Ability to insert non-ascii chars into the db on Linux
   2) The above but for all column types (varchar, text, nvarchar,  
  ntext)

   Absolutely any ideas will be appreciated.  I'm not sure what to try
   next.  For now I'm going to document how I set up Linux.

  Excellent.  I'll dig into this tomorrow if I get a chance to see if  
  I can help propel this along.

 I've put a post explaining my setup on my blog at:

 http://blog.singletoned.net/2009/07/connecting-to-ms-sql-server-from-...

 It's pretty similar to the Mac one, understandably, and it's aimed  
 fairly low.  I'll be writing up more concise ones aimed at a higher  
 level audience for work.  I'll also post them anywhere else that's  
 useful.

 Ed



[sqlalchemy] Re: csv engine?

2009-07-10 Thread phrrn...@googlemail.com

The spatialite extensions for SQLite include a *read-only* virtual
table adaptor (VirtualText) for CSV files.

http://www.gaia-gis.it/spatialite-2.2/spatialite-2.2_manual.html#t7

If the data volumes are reasonably small, the csv clients are read-
only, and the frequency of updates to the database is fairly small,
then I would be inclined to port the table to a vanilla sqlite
database and have some mechanism to dump out a CSV representation.
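
A bare-bones sketch of that porting idea using only the standard library
(table and column names are placeholders):

    import csv
    import sqlite3

    db = sqlite3.connect('banana.db')
    db.execute("CREATE TABLE IF NOT EXISTS banana (a TEXT, b TEXT, c TEXT)")

    # load the CSV into the vanilla sqlite database ...
    with open('banana.csv', 'rb') as f:
        db.executemany("INSERT INTO banana VALUES (?, ?, ?)", csv.reader(f))
    db.commit()

    # ... and dump a CSV representation back out
    with open('banana_out.csv', 'wb') as f:
        csv.writer(f).writerows(db.execute("SELECT a, b, c FROM banana"))
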
pjjH


On Jul 9, 12:51 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 SQLAlchemy is a SQL generation tool.   a SQL engine for csv files (I
 believe MS Access can do this against excel files) would be required, as
 well as a DBAPI.

 pierre imbaud wrote:

  google won't let me answer to Lukasz Szybalski, I guess the message
  was too old, so I have to quote it by hand.

  I am new to SQLAlchemy, and I have another good reason to ask for a csv
  engine:
  I wish to adapt an existing system, built around a csv file (to be
  modelled as a single table), by a pylons application. Some existing
  pieces of code use the csv file; if I use a db engine, I have 2 de
  facto repositories I have to keep in sync: whether automatically, or
  thru a sync screen with 3 functions (file -> db, db -> file, diff):
  would be much easier with a csv engine! No progress on this?

  original message:
  Hello, I was wondering if there are any plans to have a csv engine for
  sqlalchemy. I would like to see support for csv. There are some cases
  where csv is the best way to convert data to and from especially when
  they require cleaning. What I would like to see is a sqlalchemy
  wrapping over csv module and providing file definition and some of the
  sql functionality into csv world. This would be really helpful when
  moving things over from one system to another. Ideas? Thanks, Lucas --
  OpenOffice and Python http://lucasmanual.com/mywiki/OpenOffice
  Commercial Grade Backup with Bacula http://lucasmanual.com/mywiki/Bacula
      More options Sep 3 2008, 10:25 pm
  From: Lukasz Szybalski szybal...@gmail.com
  Date: Wed, 3 Sep 2008 15:25:36 -0500
  Local: Wed, Sep 3 2008 10:25 pm



[sqlalchemy] Simple MapperExtension is breaking session.merge()

2009-07-10 Thread phrrn...@googlemail.com

I have a MapperExtension which defines one method, instrument_class.
This method in turn pokes in a number of convenience methods such as
lookup, search and a custom __str__ and __repr__. We do this dynamic
instrumentation to avoid having to inherit from a common base class.

mapper(ModelPresentValue, model_present_value, extension=banana)


The problem is with an UPDATE that attempts to set the PK values to
NULL (see error text below. Note the bind values [None, None, 32,
100]).

When I remove the extension, everything works fine. I intend to debug
this further during the course of the day but wanted to see if anyone
might be able to see straight off what the problem is (my thought is
that I am simply not implementing enough of the extension hooks)

regards,
pjjH



sqlalchemy.exc.IntegrityError: (IntegrityError) ('23000', [23000]
[FreeTDS][SQL Server]Cannot insert the value NULL into column 'md5id',
table 'sa_test.dbo.model_present_value'; column does not allow nulls.
UPDATE fails. (515)) u'UPDATE model_present_value SET md5id=?,
scenario=? WHERE model_present_value.md5id = ? AND
model_present_value.scenario = ?' [None, None, 32, 100]




[sqlalchemy] Re: AttributeError: 'MetaData' object has no attribute 'engine'

2009-07-10 Thread phrrn...@googlemail.com

I think you should be able to say metadata.bind = engine

pjjH


On Jul 10, 8:30 am, jo jose.soa...@sferacarta.com wrote:
 Hi all,

 I'm trying to migrate from 0.3.10 to 0.4 and there's the following error:

 ...   engine  = database.metadata.engine
 AttributeError: 'MetaData' object has no attribute 'engine'

 I was reading the section 0.3 to 0.4 Migration of
 http://www.sqlalchemy.org/docs/04/intro.html#overview_migration,
 but sincerely I didn't understand where engine is right now.

 -

 this is my old code:
 -
 from turbogears                         import database
 from sqlalchemy                         import *
 from sqlalchemy.ext.assignmapper        import assign_mapper
 from sqlalchemy.ext.selectresults       import SelectResultsExt
 database.bind_meta_data()
 session = database.session
 engine  = database.metadata.engine
 context = database.session.context

 user=Table('user',database.metadata,
     Column('code', Unicode(6),primary_key=True,nullable=False),
     Column('name', Unicode(200)),
     Column('address', Unicode(200)),
     )

 Please help me to migrate this code to 0.4.

 j



[sqlalchemy] Re: Simple MapperExtension is breaking session.merge()

2009-07-10 Thread phrrn...@googlemail.com

I found the problem: it is with the instrumentation_finders stuff you
helped us with months ago (see below). The problem can be reproduced
with this:

class DummyMapperExtension(MapperExtension):
    pass

mapper(ModelPresentValue, model_present_value,
       extension=DummyMapperExtension())

If the instrumentation hook is put in and you use a mapper extension
(even a dummy one) then the session update code stops working.

instrumentation + mapper_extension = fail
instrumentation only               = works
mapper_extension only              = works

I tried debugging the instrumentation_finders stuff before but didn't
make much progress. I will open up a ticket on trac (if there isn't
one there already).


pjjH

# Set docstrings of attributes to the empty string, in order to avoid
# ugly-looking stuff that would otherwise appear as default.
class AttachNotesAsDocstring(interfaces.InstrumentationManager):
    def instrument_attribute(self, class_, key, attr):
        if isinstance(attr.property, properties.ColumnProperty):
            if hasattr(attr.property.columns[0], 'info'):
                attr.__doc__ = attr.property.columns[0].info.get('notes')
            else:
                attr.__doc__ = ''

# We get a weird error from SQLAlchemy when we have customized
# instrumentation and try to cog a file:
#
#     AttributeError: 'ColumnProperty' object has no attribute 'strategy'
#
# As a workaround, we do not attempt to do any fancy instrumentation
# if the cog module has been imported.
if not 'cog' in sys.modules:
    sqlalchemy.orm.attributes.instrumentation_finders.append(
        lambda cls: AttachNotesAsDocstring)








[sqlalchemy] Re: Simple MapperExtension is breaking session.merge()

2009-07-10 Thread phrrn...@googlemail.com

Sure. I'll work on the distillation after lunch.  It is not that the
UPDATE is broken, per se, it is more that update should not be called.
The code below works OK on an empty database but fails on subsequent
runs when pre-existing data is found.

tmp = [
    { 'md5id': 32, 'scenario': 100, 'value': 21. },
    { 'md5id': 57, 'scenario': 200, 'value': -3. },
    ]

for i in tmp:
    o = ModelPresentValue()
    o.md5id    = i['md5id']
    o.scenario = i['scenario']
    o.value    = i['value']
    session.merge(o)



On Jul 10, 12:21 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 well can you distill it into a small test case...it seems suspect that
 UPDATE would break.  more likely is, __init__ doesn't work correctly or
 perhaps history events fail.

 phrrn...@googlemail.com wrote:

  I found the problem: it is with the instrumentation_finders stuff you
  helped us with months ago (see below). The problem can be reproduced
  with this:

  class DummyMapperExtension(MapperExtension):
      pass

  mapper(ModelPresentValue, model_present_value,
  extension=DummyMapperExtension())

  If the instrumentation hook is put in and you use a mapper extension
  (even a dummy one) then the session update code stops working.

  instrumentation + mapper_extension = fail
  instrumentation only                        = works
  mapper_extension only                   = works

  I tried debugging the instrumentation_find stuff before but didn't
  make much progress. I will open up a ticket on trac (if there isn't
  one there already)

  pjjH

  # Set docstrings of attributes to the empty string, in order to avoid
  # ugly-looking stuff that would otherwise appear as default.
  class AttachNotesAsDocstring(interfaces.InstrumentationManager):
      def instrument_attribute(self, class_, key, attr):
          if isinstance(attr.property, properties.ColumnProperty):
               if hasattr(attr.property.columns[0], 'info'):
                   attr.__doc__ = attr.property.columns[0].info.get
  ('notes')
               else:
                   attr.__doc__ = ''

  # We get a weird error from SQLAlchemy when we have customized
  # instrumentation and try to cog a file:
  #
  #    AttributeError: 'ColumnProperty' object has no attribute
  'strategy'
  #
  # As a workaround, we do not attempt to do any fancy instrumentation
  # if the cog module has been imported.
  if not 'cog' in sys.modules:
      sqlalchemy.orm.attributes.instrumentation_finders.append(
          lambda cls: AttachNotesAsDocstring)



[sqlalchemy] Re: Simple MapperExtension is breaking session.merge()

2009-07-10 Thread phrrn...@googlemail.com

Assembling the test was a good exercise, as I can see that the problem
exists with 0.5.3 but not against 0.5.4p2 or a checked-out copy of
svn.

I logged this ticket but more for reference/tracking than anything
else.
http://www.sqlalchemy.org/trac/ticket/1464


pjjH


On Jul 10, 1:14 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
 Sure. I'll work on the distillation after lunch.  It is not that the
 UPDATE is broken, per se, it is more that update should not be called.
 The code below works OK on an empty database but fails on subsequent
 runs when pre-existing data is found.

 tmp = [
     { 'md5id': 32, 'scenario': 100, 'value': 21. },
     { 'md5id': 57, 'scenario': 200, 'value': -3. },
     ]

 for i in tmp:
     o = ModelPresentValue()
     o.md5id    = i['md5id']
     o.scenario = i['scenario']
     o.value    = i['value']
     session.merge(o)

 On Jul 10, 12:21 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  well can you distill it into a small test case...it seems suspect that
  UPDATE would break.  more likely is, __init__ doesn't work correctly or
  perhaps history events fail.

  phrrn...@googlemail.com wrote:

   I found the problem: it is with the instrumentation_finders stuff you
   helped us with months ago (see below). The problem can be reproduced
   with this:

   class DummyMapperExtension(MapperExtension):
       pass

   mapper(ModelPresentValue, model_present_value,
   extension=DummyMapperExtension())

   If the instrumentation hook is put in and you use a mapper extension
   (even a dummy one) then the session update code stops working.

   instrumentation + mapper_extension = fail
   instrumentation only                        = works
   mapper_extension only                   = works

   I tried debugging the instrumentation_find stuff before but didn't
   make much progress. I will open up a ticket on trac (if there isn't
   one there already)

   pjjH

   # Set docstrings of attributes to the empty string, in order to avoid
   # ugly-looking stuff that would otherwise appear as default.
   class AttachNotesAsDocstring(interfaces.InstrumentationManager):
       def instrument_attribute(self, class_, key, attr):
           if isinstance(attr.property, properties.ColumnProperty):
                if hasattr(attr.property.columns[0], 'info'):
                    attr.__doc__ = attr.property.columns[0].info.get
   ('notes')
                else:
                    attr.__doc__ = ''

   # We get a weird error from SQLAlchemy when we have customized
   # instrumentation and try to cog a file:
   #
   #    AttributeError: 'ColumnProperty' object has no attribute
   'strategy'
   #
   # As a workaround, we do not attempt to do any fancy instrumentation
   # if the cog module has been imported.
   if not 'cog' in sys.modules:
       sqlalchemy.orm.attributes.instrumentation_finders.append(
           lambda cls: AttachNotesAsDocstring)



[sqlalchemy] Re: Simple MapperExtension is breaking session.merge()

2009-07-10 Thread phrrn...@googlemail.com

Cool. Thanks for taking a look at it.

pjjH


On Jul 10, 4:02 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 phrrn...@googlemail.com wrote:

  assembling the test was a good exercise as I can see that the problem
  exists with 0.5.3  but not against 0.5.4p2 or a checked out copy of
  svn.

  I logged this ticket but more for reference/tracking than anything
  else.
 http://www.sqlalchemy.org/trac/ticket/1464

 yah that had been resolved by 0.5.4 but in my poking around to understand
 it I found a not totally related bug related to pickling which I just
 fixed in r6100.



  pjjH

  On Jul 10, 1:14 pm, phrrn...@googlemail.com
  phrrn...@googlemail.com wrote:
  Sure. I'll work on the distillation after lunch.  It is not that the
  UPDATE is broken, per se, it is more that update should not be called.
  The code below works OK on an empty database but fails on subsequent
  runs when pre-existing data is found.

  tmp = [
      { 'md5id': 32, 'scenario': 100, 'value': 21. },
      { 'md5id': 57, 'scenario': 200, 'value': -3. },
      ]

  for i in tmp:
      o = ModelPresentValue()
      o.md5id    = i['md5id']
      o.scenario = i['scenario']
      o.value    = i['value']
      session.merge(o)

  On Jul 10, 12:21 pm, Michael Bayer mike...@zzzcomputing.com wrote:

   well can you distill it into a small test case...it seems suspect that
   UPDATE would break.  more likely is, __init__ doesn't work correctly
  or
   perhaps history events fail.

   phrrn...@googlemail.com wrote:

I found the problem: it is with the instrumentation_finders stuff
  you
helped us with months ago (see below). The problem can be reproduced
with this:

class DummyMapperExtension(MapperExtension):
    pass

mapper(ModelPresentValue, model_present_value,
extension=DummyMapperExtension())

If the instrumentation hook is put in and you use a mapper extension
(even a dummy one) then the session update code stops working.

instrumentation + mapper_extension = fail
instrumentation only                        = works
mapper_extension only                   = works

I tried debugging the instrumentation_find stuff before but didn't
make much progress. I will open up a ticket on trac (if there isn't
one there already)

pjjH

# Set docstrings of attributes to the empty string, in order to
  avoid
# ugly-looking stuff that would otherwise appear as default.
class AttachNotesAsDocstring(interfaces.InstrumentationManager):
    def instrument_attribute(self, class_, key, attr):
        if isinstance(attr.property, properties.ColumnProperty):
             if hasattr(attr.property.columns[0], 'info'):
                 attr.__doc__ = attr.property.columns[0].info.get
('notes')
             else:
                 attr.__doc__ = ''

# We get a weird error from SQLAlchemy when we have customized
# instrumentation and try to cog a file:
#
#    AttributeError: 'ColumnProperty' object has no attribute
'strategy'
#
# As a workaround, we do not attempt to do any fancy instrumentation
# if the cog module has been imported.
if not 'cog' in sys.modules:
    sqlalchemy.orm.attributes.instrumentation_finders.append(
        lambda cls: AttachNotesAsDocstring)



[sqlalchemy] Re: Quoting Issue with Sybase ASE

2009-07-01 Thread phrrn...@googlemail.com

Yes, you should type coerce the values coming in from the web so that
they are type-compatible with the database columns. I don't know if
other dialects are more permissive and allow string representations of
numbers etc.
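
As a sketch of what that coercion might look like (assuming some_table is
the reflected Table the filters run against, and that the incoming
parameter names match column names):

    from sqlalchemy import Integer

    def coerce_params(table, params):
        coerced = {}
        for name, raw in params.items():
            if name not in table.columns or raw == '':
                continue
            col = table.columns[name]
            # cast string values headed for integer columns
            coerced[name] = int(raw) if isinstance(col.type, Integer) else raw
        return coerced

    # usage: filters = coerce_params(some_table, request.params)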

pjjH


On Jun 30, 7:15 pm, Vishakh vish...@gmail.com wrote:
 Hello,

 Thanks for your replies. I am working on Windows XP with Sybase OCS,
 perhaps that is why pjjH's code doesn't work out of the box for me.
 For example, I am unable to import _FetchLazy. Would I have to install
 FreeTDS to get this working? Is it a wise idea to adapt this to
 Windows?

 I wrote a small test script after reading Michael's message:

 import Sybase
 db = Sybase.connect('*', '*', '*', '*')
 c = db.cursor()
 c.execute("select * from SomeTable where label=@label and id=@id",
           {'@label': 'FOO', '@id': 7})
 for r in c.fetchall():
     print r
 db.close()

 The above works without any problems. However, when I quote the 7 and
 change the relevant line:

 c.execute("select * from SomeTable where label=@label and id=@id",
           {'@label': 'FOO', '@id': '7'})

 I get the following error:
 DatabaseError: Msg 257, Level 16, State 1, Line 1
 Implicit conversion from datatype 'CHAR' to 'INT' is not allowed.  Use
 the CONVERT function to run this query.

 The relevant code in my Pylons controller (using my modified version
 of Alexander Houben's code) iterates over all HTTP POST variables and
 appends them to a query:

 for q in request.params:
     if request.params[q]: statuses = statuses.filter(q + "=:" + q)
 statuses = statuses.params(request.params).all()

 This gives the same error as above since all the param values are
 strings and some column types are integers:

 DataError: (DataError) ('22018', [22018] [DataDirect][ODBC Sybase
 Wire Protocol driver][SQL Server]Implicit conversion
 from datatype 'CHAR' to 'INT' is not allowed.  Use the CONVERT
 function to run this query.\n (257) (SQLExecDirectW))

 Am I doing something fundamentally stupid by not checking the types of
 and casting the POST values or is this a problem specific to Sybase?
 If it is the latter, is there any way I can modify the SybaseInteger
 or Sybase SQL Dialect class to cast all columns which are integers?

 Thanks again!

 On Jun 30, 11:02 am, phrrn...@googlemail.com

 phrrn...@googlemail.com wrote:
   as a note, I made some attempts with the sybase dialect and I can say  
   that FreeTDS with sybase is not usable at all since it doesn't render  
   bind parameters correctly.  

  This is correct if one is using the FreeTDS implementation of the CT-
  Lib API but is not the case when using pyodbc: placeholders work OK
  (at least for vanilla datatypes)

  pjjH



[sqlalchemy] Re: Quoting Issue with Sybase ASE

2009-06-30 Thread phrrn...@googlemail.com

I spent a while on converting the ASA dialect to run on ASE via pyodbc
and python Sybase on top of both 12.5.x and 15.x. It was interesting
but in hindsight I wish I had started from scratch as I burned a lot
of time on some very low-level (and ultimately uninteresting) issues
with unicode, FreeTDS, Sybase ct-lib messages etc. etc.

The code is in sybase.py in http://www.sqlalchemy.org/trac/ticket/785
and definitely has bugs/missing features. However, it does generate
prepared statements correctly with placeholders, executes them with
bind values and so on. I think that this is working for both pyodbc
over FreeTDS and the Sybase driver over ct-lib. I am lobbying with my
management to allocate some of my time for developing an ASE backend
targets at SA 0.6 .. the layout of the 0.6 code should make things a
lot simpler and cut down on the cut-and-paste between the dialects
+driver pairs.

pjjH


On Jun 30, 10:01 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 30, 2009, at 12:09 AM, Vishakh wrote:



  Hello,

  I am modifying Alexander Houben's Sybase module in order to get
  SQLAlchemy working with Sybase Adaptive Server Enteprise. I am using
  Python 2.6, Pylons and Sybase OCS 12.5, with both mxODBC and pyodbc.
  So far, I have had some success with reflecting tables and issuing
  basic queries. However, when I filter my queries, integer values are
  quoted and this causes an error with Sybase since it doesn't support
  implicit conversions from varchar to int. Could you please tell me
  what change I would have to make in order to not quote integer values
  in my modified module?

 SQLA doesn't quote values, it sends them along as bind parameters and  
 its up to the DBAPI to interpret them, which may involve rendering  
 them directly in the SQL statement with quoting (and also may  
 not).     So this issue would reside within the DBAPI in use.    Just  
 as a note, I made some attempts with the sybase dialect and I can say  
 that FreeTDS with sybase is not usable at all since it doesn't render  
 bind parameters correctly.   phrrn...@googlemail.com is also working  
 on a working ASE dialect so you might want to coordinate efforts.



[sqlalchemy] Re: Quoting Issue with Sybase ASE

2009-06-30 Thread phrrn...@googlemail.com

 as a note, I made some attempts with the sybase dialect and I can say  
 that FreeTDS with sybase is not usable at all since it doesn't render  
 bind parameters correctly.  

This is correct if one is using the FreeTDS implementation of the CT-
Lib API but is not the case when using pyodbc: placeholders work OK
(at least for vanilla datatypes)

pjjH




[sqlalchemy] Re: unit testing idioms

2009-06-29 Thread phrrn...@googlemail.com


From http://code.google.com/p/fixture/,

fixture provides several utilities for achieving a fixed state when
testing Python programs. Specifically, these utilities setup /
teardown databases and work with temporary file systems.



On Jun 27, 3:14 pm, Chris Withers ch...@simplistix.co.uk wrote:
 phrrn...@googlemail.com wrote:
  I am learning fixtures right now to see if I can use it to
  successfully solve this problem.

 What is fixtures?

 Chris

 --
  Simplistix - Content Management, Zope & Python Consulting
              - http://www.simplistix.co.uk



[sqlalchemy] Re: introspecting models

2009-06-26 Thread phrrn...@googlemail.com

The MetaData object has a sorted_tables property that returns a
list of Table objects. Also, MetaData.tables is a dictionary keyed by
table name (fully qualified including schema, if any). Table.columns
has name and type information. Assuming a MetaData instance md, here
is how you would list the columns in the 'banana' table:

for i in md.tables['banana'].columns:
    print i.name, i.type


Likewise, for a mapped class one can iterate over the mapped
properties:
from sqlalchemy.orm import class_mapper
for i in class_mapper(Banana).iterate_properties:
   print i

pjjH


On Jun 26, 12:24 pm, Didip Kerabat did...@gmail.com wrote:
 I believe you can look inside object's __dict__ for list of field names.

 There are plenty of information you can pull from orm.Mapper. Those are
 explained better here:

 http://www.sqlalchemy.org/docs/05/reference/orm/mapping.html

 - Didip -

  On Thu, Jun 25, 2009 at 6:12 PM, Chris Withers ch...@simplistix.co.uk wrote:



  Hi All,

  What's the correct way to find out what fields a model contains?
  (eg: one that's been reflected from an existing table)

  At this stage, I only care about the field names, if that makes things
  easier...

  Also, how would I do the same from a mapped object?

  cheers,

  Chris

  --
   Simplistix - Content Management, Zope & Python Consulting
               - http://www.simplistix.co.uk



[sqlalchemy] Re: unit testing idioms

2009-06-26 Thread phrrn...@googlemail.com

I am learning fixtures right now to see if I can use it to
successfully solve this problem.
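
(The underlying question -- quoted below -- about discarding a test's changes
is usually handled with a transaction-per-test idiom, independent of fixture.
A sketch, not tested against a Pylons setup:)

# Run each test on one connection inside an external transaction and roll it
# back in tearDown; the engine URL here is a stand-in.
import unittest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")

class TransactionalTest(unittest.TestCase):
    def setUp(self):
        self.connection = engine.connect()
        self.trans = self.connection.begin()   # outer transaction
        self.session = sessionmaker(bind=self.connection)()

    def tearDown(self):
        self.session.close()
        self.trans.rollback()      # discard everything the test did
        self.connection.close()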

pjjH


On Jun 24, 6:11 pm, Chris Withers ch...@simplistix.co.uk wrote:
 Hi All,

 I'm wondering what the common idiom is for unit testing w.r.t. data and
 transactions...

 So, coming from my ZODB background, in unit tests we usually:

 - set up the objects required
 - run the code to be tested (which might change, add or delete objects)
 - abort the current transaction so no changes are saved and everything
 goes back to how it was prior to the start of the test

 What's the equivalent in sqlalchemy-land?

 How do I get test data needed for a specific test to exist for only that
 test?

 How do I abort changes made by a test in a unittest.TestCase sublcasses
 tearDown method?

 What do I do if the app framework I'm using goes and does a commit as
 part of the unit test? (I wish it wouldn't... ;-) )

 (this is in the context of a Pylons app if that makes things easier/harder)

 cheers,

 Chris

 --
 Simplistix - Content Management, Zope  Python Consulting
             -http://www.simplistix.co.uk



[sqlalchemy] Re: SA 0.4 with SQLServer and PyODBC Unicode data into nvarchar column

2009-06-18 Thread phrrn...@googlemail.com

You should put this into your .freetds.conf file to ensure that
FreeTDS will tell iconv to do the right thing (my understanding is
that all unicode data is encoded to UCS-2 by FreeTDS)

   tds version = 8.0
   client charset = UTF-8

SQL Alchemy create_engine has an encoding kwarg:
encoding='utf-8' -- the encoding to use for all Unicode translations,
both by engine-wide unicode conversion as well as the Unicode type
object

This should be set to the same value as you have for 'client charset'
in the .freetds.conf file.
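
For example (a sketch; the DSN details are placeholders):

# Keep SQLAlchemy's encoding in sync with the FreeTDS 'client charset' above.
from sqlalchemy import create_engine

e = create_engine("mssql://user:password@my_freetds_dsn", encoding="utf-8")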

You can run into problems when bogus data has been stuffed into the
nvarchar field at the dataserver, as it will cause the Python codec to
blow up when retrieving the data, so Don't Do That. (I discovered this
the hard way via a server-side job that was populating the data
without ensuring that the encoding was well-formed.)


pjjH






[sqlalchemy] Re: Engine.execute interpreting all parameters as string data

2009-06-08 Thread phrrn...@googlemail.com

I was about to follow up to Mike's suggestion of a CAST with "That
will never work because ..." but I decided to try it out and .. it
works!!!:

for i in e.execute(sql.text("select result = dateadd(day, CONVERT(INTEGER, :days), getdate())"), dict(days=7)):
    print i
# (datetime.datetime(2009, 6, 15, 11, 32, 21, 88),)


for i in e.execute(sql.text("select dateadd(day, CONVERT(INTEGER, :days) AS result, getdate())"), dict(days=7)):
    print i
# This fails with:
# sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('42000', '[42000]
# [FreeTDS][SQL Server]Statement(s) could not be prepared. (8180)')
# u'select dateadd(day, CONVERT(INTEGER,?) AS result, getdate())' [7]


This is very interesting and surprising to me as my understanding of
the rules for placeholders for prepared statements in SQL Server and
Sybase limit their use to:

• In place of one or more values in an insert statement
• In the set clause of an update statement
• In the where clause of a select or update statement

I don't know what it is about the result= syntax that makes it work,
but it is good to know. However, there are a couple of problems with
dateadd and friends (datediff, datepart, datename) in mssql and
Sybase: the datepart parameter is not bindable (or, rather, I have
never been able to figure out how to do it). It must be an *unquoted*
literal from the following set:

year            yy
quarter         qq
month           mm
week            wk
day             dd
dayofyear       dy
weekday         dw
hour            hh
minute          mi
second          ss
millisecond     ms
calweekofyear   cwk
calyearofweek   cyr
caldayofweek    cdw

Mike, have you any suggestions on how to declare the various date-
related functions in SA so that the first parameter is passed through
as an unquoted literal (maybe with some client-side checks to make
sure it is in the list above) and not as a bind parameter?
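
(One possible approach -- a sketch of my own, not necessarily the answer --
is to validate the datepart client-side and splice it in as an unquoted
literal_column, binding only the numeric argument:)

# Sketch: keep the datepart out of the bind-parameter machinery.
from sqlalchemy.sql import func, literal_column, bindparam

VALID_DATEPARTS = set(["year", "quarter", "month", "week", "day", "dayofyear",
                       "weekday", "hour", "minute", "second", "millisecond"])

def dateadd(datepart, n, date_expr):
    if datepart not in VALID_DATEPARTS:
        raise ValueError("unknown datepart: %r" % datepart)
    # literal_column renders the datepart verbatim and unquoted
    return func.dateadd(literal_column(datepart), bindparam("days", n), date_expr)

# e.g. select([dateadd("day", 7, func.getdate())])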

pjjH





[sqlalchemy] Re: Formatting return of 'query'

2009-06-08 Thread phrrn...@googlemail.com

I have something like this to serialize a result-set to delimited file-
format. It is not very pretty and probably not at all pythonic but I
find it handy.

pjjH


from csv import writer
from StringIO import StringIO

def as_delimited(q, *args):
    csvdata = StringIO()
    w = writer(csvdata, delimiter='|')
    for i in q.values(*args):
        w.writerow(i)
        yield csvdata.getvalue()   # emit one delimited line per row
        csvdata.truncate(0)        # reset the buffer for the next row

q = session.query(User)
for i in as_delimited(q, User.firstname, User.lastname, User.age, User.password):
    print i,

On Jun 8, 10:18 am, Glauco gla...@sferacarta.com wrote:
 Harish Vishwanath ha scritto: cut

  How can I modify this query to return something like :
  [(fname~lname~22~pwd)...] with '~' being preferred delimiter.

 SA return a list or record, what exactly you are searching for? a
 string  or something else



  I would like to know if I can return something like above directly
  from the query itself.

 something like ?

 [ '~'.join(x) for x in qry.fetchall() ]

 Glauco



[sqlalchemy] Re: Advice on integrating Alchemy with a caching mechanism such as Shove

2009-04-24 Thread phrrn...@googlemail.com

I wasn't able to achieve what I wanted without mucking around with
private methods but the code below does work very nicely with
reference tables. The code generator pokes in the log10 rowcount as of
the time the schema was sampled (I use log10 so that we don't get a
bunch of spurious diffs if a reference table had grown by a couple of
rows since the last time the generator was run)

We have a hacked version of a LRU dict as per 
http://code.activestate.com/recipes/498245/
and have mailed the Shove maintainer with a request to have LRU
replacement policy available for Shove caches at some point in the
future.

There is probably a much, much better way of doing this but I am happy
that SA made this hack so easy.

pjjH


# There is probably a much more elegant way to hook Session
# 'gets'. The idea is to call lookup to prefetch small tables and
# merge them into the session.
def _get(self, key, ident):
    self.session._merge_into_session(self._only_mapper_zero(), key[1])
    return Query._get(self, key, ident)


# prefetch any small reference tables
def _merge_into_session(self, klass, *args):
    if type(klass) == Mapper:
        m = klass
    else:
        m = class_mapper(klass)

    cache = None
    try:
        cache = _cache_of_caches[m]
    except KeyError:
        _cache_of_caches[m] = {}
        i = m.local_table.info
        log10_cardinality = i.get('log10_cardinality', -1)
        # only prefetch tables with between 10**0 and 10**4 rows
        if log10_cardinality > 0 and log10_cardinality <= 4:
            cache = _cache_of_caches[m]
            # print "prefetching for %s" % (m,)
            for o in self.query(klass):
                pk_as_tuple = tuple(m.primary_key_from_instance(o))
                # print "prefetch populating cache with key %s and object %s" % (pk_as_tuple, o)
                cache[pk_as_tuple] = o



On Apr 15, 6:26 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
 Beautiful. I am impressed at the elegance of this sample and the
 others also.

 pjjH

 On Apr 15, 3:26 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  for SQLA integration you'd probably build this into the Session/Query,
  which is most easily accomplished by subclassing Query.   some examples
  are in the distribution in examples/query_caching.  I use a variant of the
  with_cache_key version for my own purposes.

  phrrn...@googlemail.com wrote:

   Hi,
   I would like to export a 'lookup' interface to programmers that does
   object caching (a bounded size cache with a LRU replacement policy)
   and prefetching of small tables for read-only reference data. This is
   a rewrite in Python over SQL Alchemy of similar functionality in Perl
   (originally implemented over Class::DBI)

   I would like to use some general purpose and configurable mechanism
   such as Shove :
  http://pypi.python.org/pypi/shove

   for c in lookup(Country, [23, 45, 47, 48, 'CA', 'IE', 'FR']):
      print %s %s % (c.name, c.iso_code)

   Ideally, the above code should cause an initialization of a cache for
   'Country' objects followed by a prefetch of all countries (as the
   table is small). The lookup() function would return a list of the same
   cardinality as the input list where the elements of the return list
   are the corrosponding object ref or -- in the case that the element
   could not be found -- None.

   I intend to use the session to retrieve the data, inflate the objects
   and so on.
   I am wondering where is a good place to put this kind of behavior in?
   a session or mapper extension?

   In the meantime, I will implement the behavior I want 'outside' of SA
   and hope that I can eventually use a native SA mechanism.

   thanks,
   pjjH



[sqlalchemy] Re: mapping class against multiple databases

2009-04-24 Thread phrrn...@googlemail.com

I did just that this afternoon to link across heterogeneous
dataservers. Modulo the inevitable awkwardness from having two
different names for the same thing (i.e. Asset and EjvAsset), this was
very easy:

class_mapper(Asset).add_properties({
    'EjvAsset': relation(EjvAsset,
                         primaryjoin=Asset.asset_id == EjvAsset.asset_id,
                         foreign_keys=[Asset.asset_id],
                         viewonly=True,
                         ),
    })

very neat!


pjjH



On Apr 16, 10:57 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Apr 16, 2009, at 10:50 AM, JanW wrote:



  OK, thanks, so does that mean that mapping one class against multiple
  tables in different databases is something very exotic and probably
  bad practice?

 its an impossible practice unless you're usingDBLINK...



  Or is there some elegant way to achieve this?

 map a class to each table and use a relation() between them to load  
 from one, then the other.



[sqlalchemy] Changing Python type associated with temporal datatypes without using custom TypeDecorator?

2009-04-24 Thread phrrn...@googlemail.com

My employers have a custom Python type (derived from datetime) for
dealing with dates and datetimes.Let's call it 'BananaDate'. They
would like BananaDate be used by SQL Alchemy. The standard way of
doing this appears to be with a TypeDecorator:

class BananaDate(types.TypeDecorator):
    from banana.date import Date
    impl = types.Date

    def process_result_value(self, value, dialect):
        return banana(value)

I understand that the Column definitions would change from:

Column('first_reset_dt', DateTime, nullable=True)

to
Column('first_reset_dt', BananaDate, nullable=True)

This seems to imply that I have to explicitly change the models (and
I would prefer not to do that).

Is there some neat 'hook-based' approach that would allow me to leave
the metadata models intact and yet work with BananaDates? Would it be
very rude to monkey-patch the relevent base types in sqlalchemy.types
with new definition of process_result_value?


pjjH






[sqlalchemy] Re: Changing Python type associated with temporal datatypes without using custom TypeDecorator?

2009-04-24 Thread phrrn...@googlemail.com

My manager came up with a rather clever -- if not devious --
suggestion: implement the type adaptors as usual but then diddle the
namespace of the package where the SA model is defined. I tried it and
it works, but it is sufficiently confusing that I am now in favor of
changing the models so that the columns explicitly reference the
appropriate decorator. The main objection was the use of a specialized
type in such an indiscriminate fashion. However, since most of the
models are code-generated, I can write it off as yet another
convention in place in the code.
pjjH


# Here are the decorators
class BananaDate(types.TypeDecorator):
    impl = types.Date

    def process_result_value(self, value, dialect):
        from deshaw.datetime.date import Date
        return Date(value)

class BananaTimestamp(types.TypeDecorator):
    impl = types.DateTime

    def process_result_value(self, value, dialect):
        from deshaw.datetime import timestamp
        ts = timestamp.Timestamp()
        return ts.from_parts(value.year, value.month, value.day,
                             value.hour, value.minute, value.second,
                             value.microsecond)


=== in the model file ===
# here in the model file, import the type decorators but aliased to
# the corresponding SA types

from banana.dbo import BananaDate as Date
from banana.dbo import BananaTimestamp as DateTime


On Apr 24, 5:00 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
 My employers have a custom Python type (derived from datetime) for
 dealing with dates and datetimes.Let's call it 'BananaDate'. They
 would like BananaDate be used by SQL Alchemy. The standard way of
 doing this appears to be with a TypeDecorator:

 class BananaDate(types.TypeDecorator):
     from   banana.date import Date
     impl = types.Date

     def process_result_value(self, value, dialect):
         return banana(value)

 I understand that the Column definitions would change from:

     Column('first_reset_dt', DateTime, nullable=True)

 to
     Column('first_reset_dt', BananaDate, nullable=True)

 These seems to imply that I have to explicitly change the models (and
 I would prefer not to do that)

 Is there some neat 'hook-based' approach that would allow me to leave
 the metadata models intact and yet work with BananaDates? Would it be
 very rude to monkey-patch the relevent base types in sqlalchemy.types
 with new definition of process_result_value?

 pjjH



[sqlalchemy] Re: Auto-generating class definitions

2009-04-22 Thread phrrn...@googlemail.com

I tried autocode but I found it limiting due to the use of SA's
MetaData collection as the model. An alternative approach has worked
quite well: persist the results of the introspection queries to a set
of physical tables and wrap those (metadata) tables with SA. The code
generation is then a standard 'pushing objects through templates'
problem (for which I use Cheetah). I use the cog tool to allow me to
mix auto-generated code in the same file as hand-maintained stuff (for
example, I find it difficult to auto-generate relations with
appropriate names and uselist, so I prefer to name them manually with
correct pluralization and so on).
pjjH


#[[[cog
#import cog
#from chimera import Mongo
#m = Mongo(dataserver='BANANA', database='fruit')
#]]]
#[[[end]]]

#[[[cog
#   cog.outl("%s" % m.emit_sql_alchemy_tables())
#]]]

#[[[end]]]

#[[[cog
#  cog.out("%s" % m.emit_sql_alchemy_mappers())
#]]]
#[[[end]]]

# This is hand-maintained code
class_mapper(Foo).add_properties({'Fruits' : relation(Fruit)})

On Apr 22, 3:04 pm, thatsanicehatyouh...@mac.com wrote:
 Hi Huy,

 Thanks for your comments!

 On 22 Apr 2009, at 05:33, huy wrote:

  I guess it depends if you are going the standard SA table/mapping way
  or the sqlalchemy.ext.declarative way

 So this is a good question to ask. As I'm just starting out with SA, I  
 have no legacy code to update, and thus have started with the 0.5.x  
 version where the documentation recommends using the declarative way.  
 I don't know how this differs from the old way (besides which base  
 class to use). What is the difference, and should I be using that?

  - I want to use reflection whenever possible.
  Not sure exactly which reflection you want here but my experience with
  a database of about 30+ tables, it's so much faster for development
  to have a static SA table definitions file then to have it reflect on
  every server reload.

 Of course - I want to generate static definitions in a single file,  
 and then import that file from my python scripts. What I'm trying to  
 avoid is when I make modifications to my db schema that I don't have  
 to tune the class/table definitions by hand in this file.

  - I want to create a second python script that will contain one class
  definition for each in the first file. For example, let's say I  
  have a
  table called plate. The first file will contain the full definition
  for a class called Plate_. The second file might contain:

  class Plate(Plate_): pass

  The second definition is a subclass of the first where I can put
  custom logic (if I need any) for each class. This is the class I will
  use in my scripts. I will then import this file from the many scripts
  I need to write that use this database.

  Just wondering why you need both ? (unless you are going the
  sqlalchemy.ext.declarative way.)

 Let's say here the Plate class represents the table plate in the  
 database. I want to write some custom logic into the Plate class, for  
 example a method is_finished. This is not a field in the table, but  
 a calculation that could depend on both data from the db and external  
 information passed to it at runtime. The problem comes when I want to  
 regenerate the static db definitions - my custom (non-database)  
 definitions would be overwritten. By keeping them separate, I can  
 regenerate the static definitions any time.

 I'm not sure of the meaning of your second remark about using the  
 declarative method? How does this change things in what I'm trying to  
 do?

  If you are using the standard table/mapping, your model classes don't
  have to extend explicitly an SA base class.
  Also, SA can work with a simple class definition like

  class Plate(object):
     pass

  and it auto injects everything itself, when you do the mapping. It's
  not like java where you generate
  setters and getters for every database column.

 That's definitely nice. The WebObjects code that was generated did  
 create all of the setters and getters (which I see I don't need), but  
 also defined all the relationships between the objects. I think this  
 was the part that the latest autocode was missing. Then my subclass of  
 that object would contain my custom logic.

 I'm a little unclear about that object definition above - how does  
 this declaration talk to SA?

  I think it's good to do some things manually when you first start out.
  Heck, it's not really that much code.  I guess you come from WO which
  generated pretty much everything.

 Is that a bad thing? :) If a program can do it, I don't want to!

 My mindset is that I'd really prefer to be able to get up and running  
 very quickly to be able to do the most common stuff, and only have to  
 dig into the specifics when I need to optimise or do something  
 unusual. I've been doing DBI programming in Perl for a long time, but  
 this just thinly wrapped SQL. SA is really promising to remove the  
 tedious SQL coding, but I think it can be made a bit easier (much like 

[sqlalchemy] Advice on integrating Alchemy with a caching mechanism such as Shove

2009-04-15 Thread phrrn...@googlemail.com

Hi,
I would like to export a 'lookup' interface to programmers that does
object caching (a bounded size cache with a LRU replacement policy)
and prefetching of small tables for read-only reference data. This is
a rewrite in Python over SQL Alchemy of similar functionality in Perl
(originally implemented over Class::DBI)

I would like to use some general purpose and configurable mechanism
such as Shove :
http://pypi.python.org/pypi/shove

for c in lookup(Country, [23, 45, 47, 48, 'CA', 'IE', 'FR']):
    print "%s %s" % (c.name, c.iso_code)

Ideally, the above code should cause an initialization of a cache for
'Country' objects followed by a prefetch of all countries (as the
table is small). The lookup() function would return a list of the same
cardinality as the input list, where the elements of the return list
are the corresponding object refs or -- in the case that an element
could not be found -- None.
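
(The contract, minus the caching and prefetching, is roughly:)

# Sketch of the lookup() semantics only; cache/prefetch layers would sit in
# front of the query.get() call.
def lookup(cls, keys, session):
    return [session.query(cls).get(k) for k in keys]   # None for misses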

I intend to use the session to retrieve the data, inflate the objects
and so on.
I am wondering where is a good place to put this kind of behavior in?
a session or mapper extension?

In the meantime, I will implement the behavior I want 'outside' of SA
and hope that I can eventually use a native SA mechanism.

thanks,
pjjH




[sqlalchemy] Re: Advice on integrating Alchemy with a caching mechanism such as Shove

2009-04-15 Thread phrrn...@googlemail.com

Beautiful. I am impressed at the elegance of this sample and the
others also.

pjjH


On Apr 15, 3:26 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 for SQLA integration you'd probably build this into the Session/Query,
 which is most easily accomplished by subclassing Query.   some examples
 are in the distribution in examples/query_caching.  I use a variant of the
 with_cache_key version for my own purposes.

 phrrn...@googlemail.com wrote:

  Hi,
  I would like to export a 'lookup' interface to programmers that does
  object caching (a bounded size cache with a LRU replacement policy)
  and prefetching of small tables for read-only reference data. This is
  a rewrite in Python over SQL Alchemy of similar functionality in Perl
  (originally implemented over Class::DBI)

  I would like to use some general purpose and configurable mechanism
  such as Shove :
 http://pypi.python.org/pypi/shove

  for c in lookup(Country, [23, 45, 47, 48, 'CA', 'IE', 'FR']):
     print %s %s % (c.name, c.iso_code)

  Ideally, the above code should cause an initialization of a cache for
  'Country' objects followed by a prefetch of all countries (as the
  table is small). The lookup() function would return a list of the same
  cardinality as the input list where the elements of the return list
  are the corrosponding object ref or -- in the case that the element
  could not be found -- None.

  I intend to use the session to retrieve the data, inflate the objects
  and so on.
  I am wondering where is a good place to put this kind of behavior in?
  a session or mapper extension?

  In the meantime, I will implement the behavior I want 'outside' of SA
  and hope that I can eventually use a native SA mechanism.

  thanks,
  pjjH





[sqlalchemy] table reflection: include_columns does not include a component of the primary key?

2009-04-08 Thread phrrn...@googlemail.com

I am trudging through the unit tests for the Sybase backend and found
an interesting one in test_nonreflected_fk_raises in engine/
reflection.py. A couple of the drivers look like they skip over the
column if it is not contained within table.c but go ahead and create
the primary_key in any case! This seems a bit odd to me. Is this the
intended behavior?

pjjH


# I think we have to raise some kind of exception here if
# we try and reflect on an index when the column is
# omitted from include_columns?

if include_columns and column_name not in include_columns:
    raise exc.NoReferencedColumnError(
        "Could not create PrimaryKey/Index '%s' on table '%s': "
        "table '%s' has column named '%s' but it is not "
        "present in include_columns: %s" % (
            index_name, table.name, table.name,
            column_name, ','.join(include_columns)))


if (r.status & 0x800) == 0x800:
    table.primary_key.add(table.c[row[0]])
    if not index_name in PK.keys():
        PK[index_name] = PrimaryKeyConstraint(name=index_name)
    PK[index_name].add(table.c[column_name])
else:
    if not index_name in INDEXES.keys():
        INDEXES[index_name] = Index(index_name, unique=((r.status & 0x2) == 0x2))
    INDEXES[index_name].append_column(table.c[column_name])



[sqlalchemy] Re: Supporting fully-qualified table names and cross-database references in Sybase and SQL Server?

2009-04-06 Thread phrrn...@googlemail.com

I uploaded a patch to trac

On Mar 13, 12:51 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
 opened ticket 1341 for this.

 http://www.sqlalchemy.org/trac/ticket/1341



[sqlalchemy] Re: Odd-looking error AttributeError: 'RelationProperty' object has no attribute 'strategy' when *working* code is moved to another file

2009-04-03 Thread phrrn...@googlemail.com

So I guess that symbols starting with underscore ('_') are treated
differently in Python when it comes to exporting/importing? Sorry
to be such a newb but this is the only conclusion I can (rationally!)
come to.

Traceback (most recent call last):
  File chimera_driver.py, line 42, in module
q = s.query(_PrimaryKey).filter(_PrimaryKey.TABLE_NAME ==
'banana')
NameError: name '_PrimaryKey' is not defined
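
(That is indeed plain Python behaviour: 'from module import *' skips names
beginning with an underscore unless the module lists them in __all__. A tiny
illustration, with made-up contents:)

# chimera_demo.py (illustrative module)
_PrimaryKey = "skipped by import * unless listed in __all__"
Dataserver = "picked up by import * either way"
__all__ = ["Dataserver", "_PrimaryKey"]   # listing _PrimaryKey re-exposes it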

pjjH


On Apr 3, 11:34 am, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
 I copied the list of import statements from the module file
 (deshaw.dbo.chimera) to the driver file. The driver file also has a
 line:
 from deshaw.dbo.chimera import *

 Note that this is happening with a particular class, DatabaseTable,
 *not* with other classes I have declared and mapped such as
 Dataserver. Interestingly, the DatabaseClass works when I comment out
 a bunch of relations:

 mapper(DatabaseTable, tables, properties = {
 #  'attributes' : relation(Attribute,  lazy=False,
 order_by = asc(Attribute.ORDINAL_POSITION)),
 #  'primary_key': relation(_PrimaryKey, uselist=False,
 lazy=False), # At most one PK is allowed.
 #  'indexes': relation(_Index,lazy=False),
 #  'foreign_keys'   : relation(_ForeignKey,
 lazy=False)

 })

 I experimented with adding the properties later on after everything
 else had been defined but still get the same error

 class_mapper(DatabaseTable).add_properties({
 #  'attributes' : relation(Attribute,  lazy=False,
 order_by = asc(Attribute.ORDINAL_POSITION)),
 #  'primary_key': relation(_PrimaryKey, uselist=False,
 lazy=False), # At most one PK is allowed.
 #  'indexes': relation(_Index,lazy=False),
 #  'foreign_keys'   : relation(_ForeignKey,
 lazy=False)

 })

 How do I find out what is special about 'DatabaseTable' or, more
 precisely, the properties I am trying to define on it. I tried putting
 compile_mappers() in both the module and the driver but it has no
 impact. I assume that one of 'attributes', 'primary_key', 'indexes' or
 'foreign_keys' is already in use .. OK. Let me try that:

 class_mapper(DatabaseTable).add_properties({
 #  'apple' : relation(Attribute,  lazy=False, order_by
 = asc(Attribute.ORDINAL_POSITION)),
 #  'banana': relation(_PrimaryKey, uselist=False,
 lazy=False), # At most one PK is allowed.
   'pear': relation(_Index,lazy=False),
 #  'kiwi'   : relation(_ForeignKey, lazy=False)

 })

 No, didn't do anything. Let's try with lazy=True. OK that works. Let's
 try lazy=True with the original names. OK. That works also. So the
 problem appears to be with setting lazy=True for these properties.

 What is the debugging incantation to debug the orm mapping? Or do you
 have any advice on how to proceed from here?

 thanks,

 pjjH

 On Apr 2, 9:20 pm, Michael Bayer zzz...@gmail.com wrote:

  make sure everything that's needed is imported, and that you arent
  suppressing any exceptions which occur when the mappers first compile
  themselves.   try calling compile_mappers() to force the issue.

  On Apr 2, 8:19 pm, phrrn...@googlemail.com phrrn...@googlemail.com
  wrote:

   This code works when executed within a if __name__ == '__main__'
   block in the .py that contains the model:

   s = MySession(bind=e)
   q = s.query(DatabaseTable).filter(DatabaseTable.TABLE_CAT=='credit')
   for i in q:
   print i

   However, if I take it out and put it in a separate file, I get an
   error like this. I hope that this is something simple that I am doing
   wrong?
   pjjH

   Traceback (most recent call last):
 File H:\work\base_python\python\chimera_driver.py, line 14, in
   module
   for i in q:
 File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
   py2.5.egg\sqlalchemy\orm\query.py, line 1276, in __iter__
   context = self._compile_context()
 File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
   py2.5.egg\sqlalchemy\orm\query.py, line 1718, in _compile_context
   entity.setup_context(self, context)
 File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
   py2.5.egg\sqlalchemy\orm\query.py, line 1972, in setup_context
   column_collection=context.primary_columns
 File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
   py2.5.egg\sqlalchemy\orm\interfaces.py, line 580, in setup
   self.__get_context_strategy(context, path +
   (self.key,)).setup_query(context, entity, path, adapter, **kwargs)
 File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
   py2.5.egg\sqlalchemy\orm\interfaces.py, line 566, in __get_context_
   strategy
   return self.strategy
   AttributeError: 'RelationProperty' object has no attribute 'strategy'

[sqlalchemy] Re: Odd-looking error AttributeError: 'RelationProperty' object has no attribute 'strategy' when *working* code is moved to another file

2009-04-03 Thread phrrn...@googlemail.com

OK. This appears to be the source of the problem. You may recall a
couple of weeks ago that I was looking for information on how to
surface comments from the database metadata into the sqlalchemy model
and into docstrings in mapped classes? I took your suggestion and
modified it slightly. When I comment out this code, everything seems
to work just fine (although I have been hacking on our code so much
yesterday and today that I may have to revert back to the last copy in
svn to be sure)

Do you have any idea who/what is gobbling up the exception? I am about
to unleash my SA code generator on our internal developer community
but would really like to understand how to debug this kind of issue
better so that I don't get killed debugging the generated code!

pjjH


# class AttachNotesAsDocstring(interfaces.InstrumentationManager):
#     def instrument_attribute(self, class_, key, attr):
#         if isinstance(attr.property, properties.ColumnProperty):
#             if hasattr(attr.property.columns[0], 'info'):
#                 attr.__doc__ = attr.property.columns[0].info.get('notes')

# attributes.instrumentation_finders.insert(0, lambda cls: AttachNotesAsDocstring)




On Apr 3, 12:57 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
 I did RTFM at http://www.python.org/doc/2.5.2/ref/import.html and now
 have the symbols explicitly enumerated in the __all__ attribute in the
 module. However,  I am still getting this error when I attempt to do
 an ORM query on any of these:
 q = s.query(_ForeignKey).filter(_ForeignKey.FKTABLE_NAME == 'banana')
 for i in q:
 print i

 q = s.query(_Index).filter(_Index.INDEX_NAME == 'banana')
 for i in q:
 print i

 q = s.query(_PrimaryKey).filter(_PrimaryKey.TABLE_NAME == 'banana')
 for i in q:
 print i

 I get the same error each time:
 AttributeError: 'ColumnProperty' object has no attribute 'strategy'

 I have tried prepending 'Banana' onto the symbols starting with an
 underscore but am still getting the same error:

 q = s.query(Banana_Index).filter(Banana_Index.INDEX_NAME == 'banana')
 for i in q:
 print i

 I am not aware of anything that is catching any exceptions.

 pjjH

 __all__ = [
     "Attribute",
     "Catalog",
     "DatabaseTable",
     "Dataserver",
     "Sample",
     "_ExtendedProperty",
     "_ForeignKey",
     "_ForeignKeyElement",
     "_Index",
     "_IndexElement",
     "_PrimaryKey",
     "_PrimaryKeyElement",
 ]

 On Apr 3, 11:58 am, phrrn...@googlemail.com

 phrrn...@googlemail.com wrote:
  So I guess that symbols starting with underscore ('_') are treated
  differently in Python when it comes to be exporting/importing? Sorry
  to be such a newb but this is the only conclusion I can (rationally!)
  come to.

  Traceback (most recent call last):
File chimera_driver.py, line 42, in module
  q = s.query(_PrimaryKey).filter(_PrimaryKey.TABLE_NAME ==
  'banana')
  NameError: name '_PrimaryKey' is not defined

  pjjH

  On Apr 3, 11:34 am, phrrn...@googlemail.com

  phrrn...@googlemail.com wrote:
   I copied the list of import statements from the module file
   (deshaw.dbo.chimera) to the driver file. The driver file also has a
   line:
   from deshaw.dbo.chimera import *

   Note that this is happening with a particular class, DatabaseTable,
   *not* with other classes I have declared and mapped such as
   Dataserver. Interestingly, the DatabaseClass works when I comment out
   a bunch of relations:

   mapper(DatabaseTable, tables, properties = {
   #  'attributes' : relation(Attribute,  lazy=False,
   order_by = asc(Attribute.ORDINAL_POSITION)),
   #  'primary_key': relation(_PrimaryKey, uselist=False,
   lazy=False), # At most one PK is allowed.
   #  'indexes': relation(_Index,lazy=False),
   #  'foreign_keys'   : relation(_ForeignKey,
   lazy=False)

   })

   I experimented with adding the properties later on after everything
   else had been defined but still get the same error

   class_mapper(DatabaseTable).add_properties({
   #  'attributes' : relation(Attribute,  lazy=False,
   order_by = asc(Attribute.ORDINAL_POSITION)),
   #  'primary_key': relation(_PrimaryKey, uselist=False,
   lazy=False), # At most one PK is allowed.
   #  'indexes': relation(_Index,lazy=False),
   #  'foreign_keys'   : relation(_ForeignKey,
   lazy=False)

   })

   How do I find out what is special about 'DatabaseTable' or, more
   precisely, the properties I am trying to define on it. I tried putting
   compile_mappers() in both the module and the driver but it has no
   impact. I assume that one of 'attributes', 'primary_key', 'indexes' or
   'foreign_keys' is already in use .. OK. Let me try that:

   class_mapper(DatabaseTable).add_properties({
   #  'apple' : relation(Attribute,  lazy=False, order_by
   = asc(Attribute.ORDINAL_POSITION)),
   #  'banana': relation(_PrimaryKey, uselist

[sqlalchemy] Re: Odd-looking error AttributeError: 'RelationProperty' object has no attribute 'strategy' when *working* code is moved to another file

2009-04-03 Thread phrrn...@googlemail.com

I don't see any output from it at all. I am calling it after importing
everything but before making any queries. The mapper registry looks
unsurprising (albeit based on a guess of what it does)

pjjH


from sqlalchemy.orm import compile_mappers

s = MySession(bind=e)

print compile_mappers()
q = s.query(Dataserver)


(Pdb) p _mapper_registry
WeakKeyDictionary at 139496364
(Pdb) p _mapper_registry.keys()
[Mapper at 0x8669e6c; _PrimaryKeyElement, Mapper at 0x866eb0c;
_ForeignKeyElement, Mapper at 0x868272c; _ForeignKey, Mapper at
0x861fe8c; Attribute, Mapper at 0x868ea0c; Sample, Mapper at
0x8676bac; _Index, Mapper at 0x865ca6c; _IndexElement, Mapper at
0x861f40c; _ExtendedProperty, Mapper at 0x8687c0c; Dataserver,
Mapper at 0x868786c; Catalog, Mapper at 0x868730c; DatabaseTable,
Mapper at 0x867da0c; _PrimaryKey]
(Pdb) n
 /usr/local/python-2.5.1/lib/python2.5/site-packages/SQLAlchemy-0.5.2-py2.5.egg/sqlalchemy/orm/__init__.py(842)compile_mappers()
- m.compile()
(Pdb) p m
Mapper at 0x8669e6c; _PrimaryKeyElement
(Pdb) p repr(m)
'Mapper at 0x8669e6c; _PrimaryKeyElement'
(Pdb) p str(m)
'Mapper|_PrimaryKeyElement|widened_sp_primarykeys'





On Apr 3, 1:37 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 what does compile_mappers() say?can you call this mapper, and at the
 same time all mappers within the entire application have been called ?

 phrrn...@googlemail.com wrote:

  I copied the list of import statements from the module file
  (deshaw.dbo.chimera) to the driver file. The driver file also has a
  line:
  from deshaw.dbo.chimera import *

  Note that this is happening with a particular class, DatabaseTable,
  *not* with other classes I have declared and mapped such as
  Dataserver. Interestingly, the DatabaseClass works when I comment out
  a bunch of relations:

  mapper(DatabaseTable, tables, properties = {
  #  'attributes' : relation(Attribute,  lazy=False,
  order_by = asc(Attribute.ORDINAL_POSITION)),
  #  'primary_key': relation(_PrimaryKey, uselist=False,
  lazy=False), # At most one PK is allowed.
  #  'indexes': relation(_Index,lazy=False),
  #  'foreign_keys'   : relation(_ForeignKey,
  lazy=False)
  })

  I experimented with adding the properties later on after everything
  else had been defined but still get the same error

  class_mapper(DatabaseTable).add_properties({
  #  'attributes' : relation(Attribute,  lazy=False,
  order_by = asc(Attribute.ORDINAL_POSITION)),
  #  'primary_key': relation(_PrimaryKey, uselist=False,
  lazy=False), # At most one PK is allowed.
  #  'indexes': relation(_Index,lazy=False),
  #  'foreign_keys'   : relation(_ForeignKey,
  lazy=False)
  })

  How do I find out what is special about 'DatabaseTable' or, more
  precisely, the properties I am trying to define on it. I tried putting
  compile_mappers() in both the module and the driver but it has no
  impact. I assume that one of 'attributes', 'primary_key', 'indexes' or
  'foreign_keys' is already in use .. OK. Let me try that:

  class_mapper(DatabaseTable).add_properties({
  #  'apple' : relation(Attribute,  lazy=False, order_by
  = asc(Attribute.ORDINAL_POSITION)),
  #  'banana': relation(_PrimaryKey, uselist=False,
  lazy=False), # At most one PK is allowed.
'pear': relation(_Index,lazy=False),
  #  'kiwi'   : relation(_ForeignKey, lazy=False)
  })

  No, didn't do anything. Let's try with lazy=True. OK that works. Let's
  try lazy=True with the original names. OK. That works also. So the
  problem appears to be with setting lazy=True for these properties.

  What is the debugging incantation to debug the orm mapping? Or do you
  have any advice on how to proceed from here?

  thanks,

  pjjH

  On Apr 2, 9:20 pm, Michael Bayer zzz...@gmail.com wrote:
  make sure everything that's needed is imported, and that you arent
  suppressing any exceptions which occur when the mappers first compile
  themselves.   try calling compile_mappers() to force the issue.

  On Apr 2, 8:19 pm, phrrn...@googlemail.com phrrn...@googlemail.com
  wrote:

   This code works when executed within a if __name__ == '__main__'
   block in the .py that contains the model:

   s = MySession(bind=e)
   q = s.query(DatabaseTable).filter(DatabaseTable.TABLE_CAT=='credit')
   for i in q:
   print i

   However, if I take it out and put it in a separate file, I get an
   error like this. I hope that this is something simple that I am doing
   wrong?
   pjjH

   Traceback (most recent call last):
 File H:\work\base_python\python\chimera_driver.py, line 14, in
   module
   for i in q:
 File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
   py2.5.egg\sqlalchemy\orm\query.py, line 1276, in __iter__
   context = self._compile_context()
 File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
   py2.5.egg

[sqlalchemy] Re: Odd-looking error AttributeError: 'RelationProperty' object has no attribute 'strategy' when *working* code is moved to another file

2009-04-03 Thread phrrn...@googlemail.com

Not sure if this is relevant? As you might have guessed from my posts,
I am at the 'waving a dead chicken' around stage of debugging.

pjjH


(Pdb) import pickle
(Pdb) print pickle.dumps(m)
*** PicklingError: Can't pickle class
'sqlalchemy.orm.properties.Comparator': it's not found as
sqlalchemy.orm.properties.Comparator
(Pdb) import sqlalchemy.orm.properties.Comparator
*** ImportError: No module named Comparator
(Pdb) import sqlalchemy.orm.properties
(Pdb) print pickle.dumps(m)
*** PicklingError: Can't pickle class
'sqlalchemy.orm.properties.Comparator': it's not found as
sqlalchemy.orm.properties.Comparator


On Apr 3, 1:51 pm, phrrn...@googlemail.com phrrn...@googlemail.com
wrote:
 I don't see any output from it at all. I am calling it after importing
 everything but before making any queries. The mapper registry looks
 unsurprising (albeit based on a guess of what it does)

 pjjH

 from sqlalchemy.orm import compile_mappers

 s = MySession(bind=e)

 print compile_mappers()
 q = s.query(Dataserver)

 (Pdb) p _mapper_registry
 WeakKeyDictionary at 139496364
 (Pdb) p _mapper_registry.keys()
 [Mapper at 0x8669e6c; _PrimaryKeyElement, Mapper at 0x866eb0c;
 _ForeignKeyElement, Mapper at 0x868272c; _ForeignKey, Mapper at
 0x861fe8c; Attribute, Mapper at 0x868ea0c; Sample, Mapper at
 0x8676bac; _Index, Mapper at 0x865ca6c; _IndexElement, Mapper at
 0x861f40c; _ExtendedProperty, Mapper at 0x8687c0c; Dataserver,
 Mapper at 0x868786c; Catalog, Mapper at 0x868730c; DatabaseTable,
 Mapper at 0x867da0c; _PrimaryKey]
 (Pdb) n 
 /usr/local/python-2.5.1/lib/python2.5/site-packages/SQLAlchemy-0.5.2-py2.5.egg/sqlalchemy/orm/__init__.py(842)compile_mappers()

 - m.compile()
 (Pdb) p m
 Mapper at 0x8669e6c; _PrimaryKeyElement
 (Pdb) p repr(m)
 'Mapper at 0x8669e6c; _PrimaryKeyElement'
 (Pdb) p str(m)
 'Mapper|_PrimaryKeyElement|widened_sp_primarykeys'

 On Apr 3, 1:37 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  what does compile_mappers() say?can you call this mapper, and at the
  same time all mappers within the entire application have been called ?

  phrrn...@googlemail.com wrote:

   I copied the list of import statements from the module file
   (deshaw.dbo.chimera) to the driver file. The driver file also has a
   line:
   from deshaw.dbo.chimera import *

   Note that this is happening with a particular class, DatabaseTable,
   *not* with other classes I have declared and mapped such as
   Dataserver. Interestingly, the DatabaseClass works when I comment out
   a bunch of relations:

   mapper(DatabaseTable, tables, properties = {
   #  'attributes' : relation(Attribute,  lazy=False,
   order_by = asc(Attribute.ORDINAL_POSITION)),
   #  'primary_key': relation(_PrimaryKey, uselist=False,
   lazy=False), # At most one PK is allowed.
   #  'indexes': relation(_Index,lazy=False),
   #  'foreign_keys'   : relation(_ForeignKey,
   lazy=False)
   })

   I experimented with adding the properties later on after everything
   else had been defined but still get the same error

   class_mapper(DatabaseTable).add_properties({
   #  'attributes' : relation(Attribute,  lazy=False,
   order_by = asc(Attribute.ORDINAL_POSITION)),
   #  'primary_key': relation(_PrimaryKey, uselist=False,
   lazy=False), # At most one PK is allowed.
   #  'indexes': relation(_Index,lazy=False),
   #  'foreign_keys'   : relation(_ForeignKey,
   lazy=False)
   })

   How do I find out what is special about 'DatabaseTable' or, more
   precisely, the properties I am trying to define on it. I tried putting
   compile_mappers() in both the module and the driver but it has no
   impact. I assume that one of 'attributes', 'primary_key', 'indexes' or
   'foreign_keys' is already in use .. OK. Let me try that:

   class_mapper(DatabaseTable).add_properties({
   #  'apple' : relation(Attribute,  lazy=False, order_by
   = asc(Attribute.ORDINAL_POSITION)),
   #  'banana': relation(_PrimaryKey, uselist=False,
   lazy=False), # At most one PK is allowed.
 'pear': relation(_Index,lazy=False),
   #  'kiwi'   : relation(_ForeignKey, lazy=False)
   })

   No, didn't do anything. Let's try with lazy=True. OK that works. Let's
   try lazy=True with the original names. OK. That works also. So the
   problem appears to be with setting lazy=True for these properties.

   What is the debugging incantation to debug the orm mapping? Or do you
   have any advice on how to proceed from here?

   thanks,

   pjjH

   On Apr 2, 9:20 pm, Michael Bayer zzz...@gmail.com wrote:
   make sure everything that's needed is imported, and that you arent
   suppressing any exceptions which occur when the mappers first compile
   themselves.   try calling compile_mappers() to force the issue.

   On Apr 2, 8:19 pm, phrrn...@googlemail.com phrrn...@googlemail.com
   wrote:

This code

[sqlalchemy] Re: Odd-looking error AttributeError: 'RelationProperty' object has no attribute 'strategy' when *working* code is moved to another file

2009-04-03 Thread phrrn...@googlemail.com

Yes, the erroneous behavior goes away when this code is commented out.
No, an empty instrumentation manager *does not cause* the error. Note
that I had a sqlalchemy Table object called 'attributes' in my model.
I have changed the code so that I do not import 'attributes' from
sqlalchemy.orm but instead fully-qualify it. Similarly, I have changed
the model code to refer to 'chimera_columns' rather than 'attributes'.
I have also renamed the mapped property 'attributes' as 'columns' as
at my workplace, nobody other than myself refers to database columns
as 'attributes'!

class AttachNotesAsDocstring(interfaces.InstrumentationManager):
    pass

sqlalchemy.orm.attributes.instrumentation_finders.insert(0, lambda cls: AttachNotesAsDocstring)


However, despite all these changes, if I provide an implementation of
instrument_attribute as follows then I get the dreaded
'ColumnProperty' object has no attribute 'strategy' error.

def instrument_attribute(self, class_, key, attr):
    if isinstance(attr.property, properties.ColumnProperty):
        if hasattr(attr.property.columns[0], 'info'):
            attr.__doc__ = attr.property.columns[0].info.get('notes')


I can live without the docstring fanciness for now but it has been
educational tracking down the various problems.

pjjH





On Apr 3, 3:04 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 phrrn...@googlemail.com wrote:

  # class AttachNotesAsDocstring(interfaces.InstrumentationManager):
  #  def instrument_attribute(self, class_, key, attr):
  #  if isinstance(attr.property, properties.ColumnProperty):
  #  if hasattr(attr.property.columns[0], 'info'):
  #  attr.__doc__ = attr.property.columns[0].info.get
  ('notes')

  # attributes.instrumentation_finders.insert(0, lambda cls:
  AttachNotesAsDocstring)

 the erroneous behavior is narrowed down to this, correct ?   is anything
 within the method raising an error (try putting pdbs or print statements
 in it)?  does having just an empty do-nothing instrumentationmanager cause
 the error ?



[sqlalchemy] Re: Odd-looking error AttributeError: 'RelationProperty' object has no attribute 'strategy' when *working* code is moved to another file

2009-04-03 Thread phrrn...@googlemail.com

Sure. I am going to leave it until Monday or Tuesday as I hope by that
time to return to my work on getting the Sybase driver to pass the
test cases. I was away on vacation until last Monday and found it
difficult to get back into coding until of course I was confronted by
this issue!

Thanks again for the debugging help.

pjjH


On Apr 3, 3:53 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 any chance of producing a *small* test case for this

 phrrn...@googlemail.com wrote:

  Yes, the erroneous behavior goes away when this code is commented out.
  No, an empty instrumentation manager *does not cause* the error. Note
  that I had a sqlalchemy Table object called 'attributes' in my model.
  I have changed the code so that I do not import 'attributes' from
  sqlalchemy.orm but instead fully-qualify it. Similarly, I have changed
  the model code to refer to 'chimera_columns' rather than 'attributes'.
  I have also renamed the mapped property 'attributes' as 'columns' as
  at my workplace, nobody other than myself refers to database columns
  as 'attributes'!

  class AttachNotesAsDocstring(interfaces.InstrumentationManager):
  pass

  sqlalchemy.orm.attributes.instrumentation_finders.insert(0, lambda
  cls: AttachNotesAsDocstring)

  However, despite all these changes, if I provide an implementation of
  instrument_attribute as follows then I get the dreaded
  'ColumnProperty' object has no attribute 'strategy' error.

  def instrument_attribute(self, class_, key, attr):
  if isinstance(attr.property, properties.ColumnProperty):
  if hasattr(attr.property.columns[0], 'info'):
  attr.__doc__ = attr.property.columns[0].info.get
  ('notes')

  I can live without the docstring fanciness for now but it has been
  educational tracking down the various problems.

  pjjH

  On Apr 3, 3:04 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  phrrn...@googlemail.com wrote:

   # class AttachNotesAsDocstring(interfaces.InstrumentationManager):
   #  def instrument_attribute(self, class_, key, attr):
   #  if isinstance(attr.property, properties.ColumnProperty):
   #  if hasattr(attr.property.columns[0], 'info'):
   #  attr.__doc__ = attr.property.columns[0].info.get
   ('notes')

   # attributes.instrumentation_finders.insert(0, lambda cls:
   AttachNotesAsDocstring)

  the erroneous behavior is narrowed down to this, correct ?   is anything
  within the method raising an error (try putting pdbs or print statements
  in it)?  does having just an empty do-nothing instrumentationmanager
  cause
  the error ?



[sqlalchemy] Odd-looking error AttributeError: 'RelationProperty' object has no attribute 'strategy' when *working* code is moved to another file

2009-04-02 Thread phrrn...@googlemail.com



This code works when executed within a if __name__ == '__main__'
block in the .py that contains the model:

s = MySession(bind=e)
q = s.query(DatabaseTable).filter(DatabaseTable.TABLE_CAT=='credit')
for i in q:
print i


However, if I take it out and put it in a separate file, I get an
error like this. I hope that this is something simple that I am doing
wrong?
pjjH


Traceback (most recent call last):
  File H:\work\base_python\python\chimera_driver.py, line 14, in
module
for i in q:
  File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
py2.5.egg\sqlalchemy\orm\query.py, line 1276, in __iter__
context = self._compile_context()
  File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
py2.5.egg\sqlalchemy\orm\query.py, line 1718, in _compile_context
entity.setup_context(self, context)
  File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
py2.5.egg\sqlalchemy\orm\query.py, line 1972, in setup_context
column_collection=context.primary_columns
  File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
py2.5.egg\sqlalchemy\orm\interfaces.py, line 580, in setup
self.__get_context_strategy(context, path +
(self.key,)).setup_query(context, entity, path, adapter, **kwargs)
  File C:\PROGRA~1\Python25\lib\site-packages\sqlalchemy-0.5.2-
py2.5.egg\sqlalchemy\orm\interfaces.py, line 566, in __get_context_
strategy
return self.strategy
AttributeError: 'RelationProperty' object has no attribute 'strategy'




[sqlalchemy] Re: Supporting fully-qualified table names and cross-database references in Sybase and SQL Server?

2009-03-13 Thread phrrn...@googlemail.com

opened ticket 1341 for this.

http://www.sqlalchemy.org/trac/ticket/1341



[sqlalchemy] dropping tables that are referenced in foreign key constraints on tables

2009-03-13 Thread phrrn...@googlemail.com

I am encountering a problem with getting the unit tests to run on
Sybase because of cascades of errors originating from failure to drop
a table that is referenced by a FK constraint in another table. When
attempting to drop the people table, I need the SybaseSQLSchemaDropper
to emit SQL  like this which first does an ALTER TABLE to remove FK
constraints from all tables which reference people:

344:1 ALTER TABLE managers  DROP CONSTRAINT managers_1466289798
344:2 go
345:1 drop table people
345:2 go
346:1

How should one deal with the situation when the referring tables are
not part of the same metadata collection?

pjjH




341:2 sp__revtable people
341:3 go
 -- Table_DDL
 
--
 CREATE TABLE  people
 (
  person_id   int          identity   NOT NULL,
  name        varchar(50)              NULL,
  type        varchar(30)              NULL
 )





 
-
---

ALTER TABLE people ADD  PRIMARY KEY  CLUSTERED  ( person_id)
-- FOREIGN REFERENCE



 ---
 -- No Indexes found in Current Database

(return status = 0)
342:1 sp_helpconstraint people
342:2 go
 name
definitioncreated
 ---
-
---
 managers_1466289798 managers FOREIGN KEY (person_id) REFERENCES people
(person_id) Mar  4 2009  9:12PM
 people_17556708171  PRIMARY KEY INDEX ( person_id) : CLUSTERED,
FOREIGN REFERENCE Mar  4 2009  9:11PM

Total Number of Referential Constraints: 1

Details:
-- Number of references made by this table: 0
-- Number of references to this table: 1
-- Number of self references to this table: 0

Formula for Calculation:
Total Number of Referential Constraints
= Number of references made by this table
+ Number of references made to this table
- Number of self references within this table

(2 rows affected, return status = 0)


343:1 sp__revtable managers
343:2 go
 -- Table_DDL
 
--
 CREATE TABLE  managers
 (
  person_id int  identityNOT NULL,
  statusvarchar(30)  NULL,
  manager_name  varchar(50)  NULL
 )





 
-
---

ALTER TABLE managers ADD  PRIMARY KEY  CLUSTERED  ( person_id)



constraint_desc



 
-
-
-

ALTER TABLE managers ADD CONSTRAINT managers_1466289798
FOREIGN KEY (person_id) REFERENCES people(person_id)



 ---
 -- No Indexes found in Current Database

(return status = 0)







[sqlalchemy] Re: dropping tables that are referenced in foreign key constraints on tables

2009-03-13 Thread phrrn...@googlemail.com

Hi Mike,
the situation I am encountering is when the other table is not part of
the metadata collection i.e. SQLAlchemy doesn't know anything about
it. It looks like the unit-tests enumerate the tables by calling
table_names() which causes has_table() and reflecttable() to be called
in turn. There doesn't appear to be a way of expressing that a table
is referenced by a foreign key constraint .. from what I can see, the
various dialects implementation of reflecttable check to see if this
table references other tables.

At the moment, I am hacking up my own visit_tables() in the
SchemaDropper and putting in an explicit check to see if I am targeted
by any FKs .. if so, I will do an ALTER TABLE on the other tables to
remove the FK constraint to me.
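
For concreteness, a rough sketch of the SQL this workaround ends up emitting, issued on a plain connection (the URL is a placeholder; the table and constraint names are the ones from the sp_helpconstraint output quoted earlier):

    from sqlalchemy import create_engine

    engine = create_engine('sybase://user:pw@SYBASE/test')
    conn = engine.connect()

    # drop the foreign key on the referring table first, then the table itself
    conn.execute("ALTER TABLE managers DROP CONSTRAINT managers_1466289798")
    conn.execute("drop table people")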

pjjH



On Mar 13, 2:41 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 SQLAlchemy normally drops tables in order of foreign key dependency so
 that there's no need for ALTER.  in the case that two tables have a mutual
 foreign key dependency, one of the ForeignKey objects has the
 use_alter=True flag set so that just the one FK gets dropped first via
 ALTER.

 phrrn...@googlemail.com wrote:

  I am encountering a problem with getting the unit tests to run on
  Sybase because of cascades of errors originating from failure to drop
  a table that is referenced by a FK constraint in another table. When
  attempting to drop the people table, I need the SybaseSQLSchemaDropper
  to emit SQL  like this which first does an ALTER TABLE to remove FK
  constraints from all tables which reference people:

  344:1 ALTER TABLE managers  DROP CONSTRAINT managers_1466289798
  344:2 go
  345:1 drop table people
  345:2 go
  346:1

  How should one deal with the situation when the referring tables are
  not part of the same metadata collection?

  pjjH

  341:2 sp__revtable people
  341:3 go
   -- Table_DDL

  --
   CREATE TABLE  people
   (
person_id  int  identityNOT NULL,
name   varchar(50)  NULL,
type   varchar(30)  NULL
   )

  -
  ---

  ALTER TABLE people ADD  PRIMARY KEY  CLUSTERED  ( person_id)
  -- FOREIGN REFERENCE

   ---
   -- No Indexes found in Current Database

  (return status = 0)
  342:1 sp_helpconstraint people
  342:2 go
   name
  definitioncreated
   ---
  -
  ---
   managers_1466289798 managers FOREIGN KEY (person_id) REFERENCES people
  (person_id) Mar  4 2009  9:12PM
   people_17556708171  PRIMARY KEY INDEX ( person_id) : CLUSTERED,
  FOREIGN REFERENCE Mar  4 2009  9:11PM

  Total Number of Referential Constraints: 1

  Details:
  -- Number of references made by this table: 0
  -- Number of references to this table: 1
  -- Number of self references to this table: 0

  Formula for Calculation:
  Total Number of Referential Constraints
  = Number of references made by this table
  + Number of references made to this table
  - Number of self references within this table

  (2 rows affected, return status = 0)

  343:1 sp__revtable managers
  343:2 go
   -- Table_DDL

  --
   CREATE TABLE  managers
   (
person_id int  identityNOT NULL,
statusvarchar(30)  NULL,
manager_name  varchar(50)  NULL
   )

  -
  ---

  ALTER TABLE managers ADD  PRIMARY KEY  CLUSTERED  ( person_id)

  constraint_desc

  -
  -
  -

  ALTER TABLE managers ADD CONSTRAINT managers_1466289798
  FOREIGN KEY (person_id) REFERENCES people(person_id)

   ---
   -- No Indexes found in Current Database

  (return status = 0)



[sqlalchemy] Re: dropping tables that are referenced in foreign key constraints on tables

2009-03-13 Thread phrrn...@googlemail.com

Then I must have a bug in the FK introspection. Which unit tests would
you suggest getting running first? Is there one that specifically
tests foreign key stuff?

pjjH


On Mar 13, 3:41 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 phrrn...@googlemail.com wrote:

  Hi Mike,
  the situation I am encountering is when the other table is not part of
  the metadata collection i.e. SQLAlchemy doesn't know anything about
  it. It looks like the unit-tests enumerate the tables by calling
  table_names() which causes has_table() and reflecttable() to be called
  in turn. There doesn't appear to be a way of expressing that a table
  is referenced by a foreign key constraint .. from what I can see, the
  various dialects implementation of reflecttable check to see if this
  table references other tables.

 I'm not familiar with any test that relies upon that method - every unit
 test within SQLA deals with a single MetaData object which contains all
 tables, and the foreign key references between those tables are known.

 There is an option called --dropfirst which does do the table_names()
 thing you mention, but that option is only a convenience measure to rerun
 the tests on a database that still has tables leftover from a previously
 failed run.  It also makes usage of foreign keys to drop tables in the
 correct order.

 Any foreign key constraint is represented in a Table object using the
 ForeignKey() or ForeignKeyConstraint() object.  When tables are loaded via
 reflecttable(), the tables are all loaded into a single MetaData object,
 and the foreign keys are reflected into ForeignKey objects, and the drop
 in order of dependency works in all cases.   So I don't know what you mean
 by there doesn't appear to be a way of expressing that a table is
 referenced by a foreign key constraint.
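
A minimal sketch of the behaviour Michael describes (the URL is a placeholder; the two tables are the ones from the sp__revtable output earlier in the thread): reflecting both tables into one MetaData gives SQLAlchemy the ForeignKey, so drop_all() emits DROP TABLE managers before DROP TABLE people.

    from sqlalchemy import MetaData, Table, create_engine

    engine = create_engine('sybase://user:pw@SYBASE/test')
    meta = MetaData(bind=engine)

    people = Table('people', meta, autoload=True)
    managers = Table('managers', meta, autoload=True)

    meta.drop_all()   # drops in reverse dependency order: managers first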



  At the moment, I am hacking up my own visit_tables() in the
  SchemaDropper and putting in an explicit check to see if I am targeted
  by any FKs .. if so, I will do an ALTER TABLE on the other tables to
  remove the FK constraint to me.

  pjjH

  On Mar 13, 2:41 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  SQLAlchemy normally drops tables in order of foreign key dependency so
  that there's no need for ALTER.  in the case that two tables have a
  mutual
  foreign key dependency, one of the ForeignKey objects has the
  use_alter=True flag set so that just the one FK gets dropped first via
  ALTER.

  phrrn...@googlemail.com wrote:

   I am encountering a problem with getting the unit tests to run on
   Sybase because of cascades of errors originating from failure to drop
   a table that is referenced by a FK constraint in another table. When
   attempting to drop the people table, I need the SybaseSQLSchemaDropper
   to emit SQL  like this which first does an ALTER TABLE to remove FK
   constraints from all tables which reference people:

   344:1 ALTER TABLE managers  DROP CONSTRAINT managers_1466289798
   344:2 go
   345:1 drop table people
   345:2 go
   346:1

   How should one deal with the situation when the referring tables are
   not part of the same metadata collection?

   pjjH

   341:2 sp__revtable people
   341:3 go
-- Table_DDL

   --
CREATE TABLE  people
(
 person_id  int  identityNOT NULL,
 name   varchar(50)  NULL,
 type   varchar(30)  NULL
)

   -
   ---

   ALTER TABLE people ADD  PRIMARY KEY  CLUSTERED  ( person_id)
   -- FOREIGN REFERENCE

---
-- No Indexes found in Current Database

   (return status = 0)
   342:1 sp_helpconstraint people
   342:2 go
name
   definitioncreated
---
   -
   ---
managers_1466289798 managers FOREIGN KEY (person_id) REFERENCES
  people
   (person_id) Mar  4 2009  9:12PM
people_17556708171  PRIMARY KEY INDEX ( person_id) : CLUSTERED,
   FOREIGN REFERENCE Mar  4 2009  9:11PM

   Total Number of Referential Constraints: 1

   Details:
   -- Number of references made by this table: 0
   -- Number of references to this table: 1
   -- Number of self references to this table: 0

   Formula for Calculation:
   Total Number of Referential Constraints
   = Number of references made by this table
   + Number of references made to this table
   - Number of self references within this table

   (2 rows affected, return status = 0)

   343:1 sp__revtable managers
   343:2 go
-- Table_DDL

   --
CREATE TABLE  managers
(
 person_id int  identityNOT NULL,
 statusvarchar(30

[sqlalchemy] Re: Supporting fully-qualified table names and cross-database references in Sybase and SQL Server?

2009-03-11 Thread phrrn...@googlemail.com

I wasn't able to get it working so easily using the existing
entry-points, so I created a new one, quote_schema, and use it
explicitly in a couple of places in compiler.py. The default
implementation is the same as the old one.
pjjH


+    def quote_schema(self, schema, force):
+        """Quote a schema.
+
+        Subclasses should override this to provide database-dependent
+        quoting behavior.
+        """
+        return self.quote(schema, force)
+


    def quote_schema(self, schema, force=True):
        """Prepare a quoted table and schema name."""
        result = '.'.join([self.quote(x, force) for x in schema.split('.')])
        return result
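
A quick self-contained illustration (a toy preparer, not the actual patch) of what the override is meant to produce for a dotted schema name, assuming the bracket-style quoting used by the Sybase/SQL Server dialects:

    class ToyPreparer(object):
        def quote(self, ident, force=True):
            return '[%s]' % ident
        def quote_schema(self, schema, force=True):
            """Prepare a quoted table and schema name."""
            return '.'.join([self.quote(x, force) for x in schema.split('.')])

    print ToyPreparer().quote_schema('fdcommon.dbo')   # -> [fdcommon].[dbo]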


On Mar 10, 5:30 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
 As it happens, this works on the Sybase dialect without fixing the
 quoting at all!  Apparently SQL such as this is happily accepted by
 Sybase:

 SELECT [fdcommon.dbo].organization.org_id,
 [fdcommon.dbo].organization.abbrev
 FROM [fdcommon.dbo].organization JOIN [fdcommon.dbo].org_type
 ON [fdcommon.dbo].org_type.org_type_id =
 [fdcommon.dbo].organization.org_type

 I resorted to some brute-force list operations rather than regular
 expressions to parse out the component names (see diff below). I will
 fix the quoting shortly (within the next day or so) and submit a
 single diff.

 thanks,

 pjjH

 Index: schema.py

 ===================================================================

 --- schema.py   (revision 5816)

 +++ schema.py   (working copy)

 @@ -876,17 +876,22 @@

                  raise exc.ArgumentError(
                      "Parent column '%s' does not descend from a "
                      "table-attached Column" % str(self.parent))
 -            m = re.match(r"^(.+?)(?:\.(.+?))?(?:\.(.+?))?$", self._colspec,
 -                         re.UNICODE)
 +            m = self._colspec.split('.')
              if m is None:
                  raise exc.ArgumentError(
                      "Invalid foreign key column specification: %s" %
                      self._colspec)
 -            if m.group(3) is None:
 -                (tname, colname) = m.group(1, 2)
 +
 +            m.reverse()
 +            (colname, tname) = m[0:2]
 +
 +            if m[2] is None:
                  schema = None
              else:
 -                (schema, tname, colname) = m.group(1, 2, 3)
 +                m1 = m[2:]
 +                m1.reverse()
 +                schema = '.'.join(m1)
 +

 On Mar 5, 7:21 pm, phrrn...@googlemail.com phrrn...@googlemail.com
 wrote:

  OK. If it might be as easy as that, I will have a go and see how well
  it works.

  pjjH

  On Mar 5, 4:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:

   phrrn...@googlemail.com wrote:

Sybase (and SQL Server) support cross-database JOINs (Sybase even
supports cross-database foreign-key constraints). There are four
components to an object identifier:

1 = Object name
2 = Schema name
3 = Database name
4 = Server name

the dataserver, database and schema are assumed for one-part
identifiers (e.g. 'foo'), dataserver and database assumed for two-part
identifiers (e.g. 'dbo.foo') and, finally, dataserver assumed for
three-part identifiers ('production.dbo.foo')

e.g. SELECT foo.* FROM BANANA.production.dbo.foo
   SELECT foo.* FROM production.dbo.foo
   SELECT foo.* FROM production..foo -- same as the previous query
if the callers default schema is dbo
   SELECT foo.* FROM dbo.foo
   SELECT foo.* FROM foo
   SELECT foo.* FROM ..foo
I am not so interested in supporting four-part identifiers in SA but I
would like to figure out how to support three-part identifiers as very
many of our databases have cross-database references.

One natural (to me!) way of doing this is to add a 'database' property
to the Table and ForeignKeyConstraint schema items and have the Sybase/
SQL Server dialects always emit fully-qualified three-part identifiers
for table names.

   we have an element on Table called schema.  I had in mind that schema
   should accept dotted names, so SQLA generally doesn't need to get
   involved.  The only change needed is to the IdentifierPreparer, such that
   when quote_schema is called, it separates the name along the dot first so
   that it can quote each token separately.



[sqlalchemy] Suggestions on using a dialect outside of the Alchemy installation tree

2009-03-10 Thread phrrn...@googlemail.com

Hi,
I have done a fair bit on a Sybase ASE dialect for Alchemy and it is
now in a primitive but usable condition for simple applications. My
employers are fine with contributing the code back to the project and
I intended to coordinate with Mike Bayer about this shortly. In the
meantime, we would like to deploy the driver locally and work out some
of the bugs. Ideally, we would like to do this separately from our
centralized SQL Alchemy installation as the release cycles for
production s/w are much longer than the anticipated cycles for the
Sybase dialect. Is it possible to use a dialect located outside the
main installation by something as simple as the connection URI?

Have any of you similar situations? Have you any suggestions on ways
to address this issue?

pjjH






[sqlalchemy] Re: Suggestions on using a dialect outside of the Alchemy installation tree

2009-03-10 Thread phrrn...@googlemail.com

Thanks Mike. This sounds great although I have to admit that I don't
follow it completely, as I have not authored anything via setuptools.
If this is trivial for you, could you sketch out what this would look
like?

pjjH


On Mar 10, 11:25 am, Michael Bayer mike...@zzzcomputing.com wrote:
 you can install the dialect using a setuptools entry point.  SQLAlchemy
 looks for dialect modules using the sqlalchemy.databases entry point
 name, so in this case you might name it sqlalchemy.databases.sybase-ase.

 phrrn...@googlemail.com wrote:

  Hi,
  I have done a fair bit on a Sybase ASE dialect for Alchemy and it is
  now in a primitive but usable condition for simple applications. My
  employers are fine with contributing the code back to the project and
  I intended to coordinate with Mike Bayer about this shortly. In the
  meantime, we would like to deploy the driver locally and work out some
  of the bugs. Ideally, we would like to do this separately from our
  centralized SQL Alchemy installation as the release cycles for
  production s/w are much longer than the anticipated cycles for the
  Sybase dialect.Is it possible to use a dialect located  outside the
  main installation by something as simple as the connection URI?

  Have any of you similar situations? Have you any suggestions on ways
  to address this issue?

  pjjH



[sqlalchemy] Re: Suggestions on using a dialect outside of the Alchemy installation tree

2009-03-10 Thread phrrn...@googlemail.com

The quality of the support in this group is remarkable. The answers
are starting to remind me of Guy Harris in terms of quality and
clarity! (If the name is not familiar to you then check out the Usenet
archives from the mid to late 80's)

thanks very much.
pjjH


On Mar 10, 11:34 am, jason kirtland j...@discorporate.us wrote:
 It'd look like this:

 http://code.google.com/p/ibm-db/source/browse/trunk/IBM_DB/ibm_db_sa/...

 Your dialect will be available to SA after you 'python setup.py install'
 or 'python setup.py develop' in your -ase distribution.
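
A rough sketch of the setup.py registration being described, modeled on the ibm_db_sa example linked above (the package, module and entry-point names here are hypothetical; the entry point should resolve to the dialect class):

    from setuptools import setup

    setup(
        name='sqlalchemy-sybase-ase',
        version='0.1',
        py_modules=['sybase_ase'],
        entry_points={
            'sqlalchemy.databases': [
                # 'sybase_ase' becomes the drivername in the URL,
                # e.g. create_engine('sybase_ase://user:pw@SYBASE/db')
                'sybase_ase = sybase_ase:dialect',
            ],
        },
    )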

 phrrn...@googlemail.com wrote:
  Thanks Mike. This sounds great although I have to admit that I don't
  follow it completely as I have not used authored anything via
  setuptools. If this is trivial for you, could you sketch out what this
  would look like?

  pjjH

  On Mar 10, 11:25 am, Michael Bayer mike...@zzzcomputing.com wrote:
  you can install the dialect using a setuptools entry point.  SQLAlchemy
  looks for dialect modules using the sqlalchemy.databases entry point
  name, so in this case you might name it sqlalchemy.databases.sybase-ase.

  phrrn...@googlemail.com wrote:

  Hi,
  I have done a fair bit on a Sybase ASE dialect for Alchemy and it is
  now in a primitive but usable condition for simple applications. My
  employers are fine with contributing the code back to the project and
  I intended to coordinate with Mike Bayer about this shortly. In the
  meantime, we would like to deploy the driver locally and work out some
  of the bugs. Ideally, we would like to do this separately from our
  centralized SQL Alchemy installation as the release cycles for
  production s/w are much longer than the anticipated cycles for the
  Sybase dialect.Is it possible to use a dialect located  outside the
  main installation by something as simple as the connection URI?
  Have any of you similar situations? Have you any suggestions on ways
  to address this issue?
  pjjH



[sqlalchemy] Using Alchemy across tens of thousands of tables, hundreds of databases and dozens of dataservers.

2009-03-10 Thread phrrn...@googlemail.com

Thanks to all the help from the group, I am now facing an issue much
earlier than anticipated: how to manage Alchemy in -- apologies for
using a dreadfully overused phrase -- an 'Enterprise Setting'. This
really boils down to matters of scale: tens of thousands of tables in
hundreds of databases across several dozen dataservers.  I am
interested in how to structure and organize the Python code-artifacts
containing the metadata collections, POPO class declarations and
mapper() invocations which associate the classes to the tables. I am
also interested in configuration and credential management i.e. how to
control which dataservers are queried and the credentials used to
connect to them. Various use-cases include: use a replica reporting
dataserver for queries; use a development system for the
foo.bar.bletch class/class-hierarchy but use 'production' for
everything else; use SQLite for high-performance querying of stable
(i.e. does not change much if ever over time) reference/lookup data
but use production systems for live, trading-related data.

Now how does one manage all of this at the kinds of scale described
above? Hoes does one stitch together the various mapped classes to the
appropriate database engines at runtime? What kind of namespaces -- if
any -- would you use to manage large numbers of metadata collections?

If your eyes have glazed over at this point, there are more details
below!

I am looking forward to hearing if anyone has used Alchemy 'in the
large' and what their experiences have been, either positive or
negative.

thanks,

pjjH


Starting at the lowest levels, we have a centralized time-series of
physical meta-data for a number of our dataservers (production,
development and QA) with a reasonably unified representation independent
of the underlying dataserver technology (thanks to the ODBC catalog
calls). It is reasonably easy to add in new dataserver platforms and
the system is linearly scalable. In conjunction with the Cheetah
templating system and the wonderful cog code-generation tool, we can
code-generate Alchemy meta-data collections for arbitrary subsets of
tables: this fragment will generate the Python code to populate a
MetaData collection with all tables from the 'pear' database on the
'BANANA' dataserver

metadata = MetaData()

#[[[cog
#m = Mongo(dataserver='BANANA', database='pear')
#cog.outl("# auto-generated SQLAlchemy stuff here")
#tables = m.lookup()
#for table in tables:
#   cog.outl("%s" % m.apply_template('sa.tmpl', table))
#]]]

Similarly, later on in the same file or in a completely different
file, we can have a cog fragment like this one that generates stub
POPO class declarations and mapper invocations that map the POPO class
to the given table.

#[[[cog
#def camelize(s):
#   return ''.join(word[0].upper() + word[1:] for word in s.split('_'))
#
#for table in tables:
#   cog.outl("%s" % m.apply_template('sa_mappers.tmpl', table, {'camelize': camelize}))
#]]]


We also have hand-written mapper code that adds what I call 'cooked
accessors' to the POPO classes: this one adds a property called 'Type'
which returns a single OrgType object (OrgType being the wrapper class
around the foreign key reference/key/lookup table 'org_type'):

class_mapper(Organization).add_properties({
   'Type' : relation(OrgType, uselist=False),
   })

This basic mechanism can be used to build up a collection of 'boring'
classes (which I sometimes hear referred to as Data Transfer Objects
or DTOs). cog allows us to mix hand-written and auto-generated code in
the same file so we can have a reasonably loose, 'build-time' coupling
between Python and the database schema and we get change management
and auditability (because now the interesting bits of the database
schema are serialized as Python code and get checked in, tagged etc
just like any other file). We also get documentation, thanks to Mike's
suggestion to use attributes.instrumentation_finders.

It seems obvious that related groups of objects on the same dataserver
should be grouped together in the same metadata collection as part of
the 'build'/code-generation process. It also seems obvious that each
metadata collection should have some form of default association with
a dataserver URI. However, we also want to be able to configure
metadata subsets to talk to development dataservers while everything
else talks to 'production' or, perhaps less controversially, have
reads go against a read-only replica while writes go to a master.
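
A rough sketch (URLs, names and the configuration source are all assumptions) of one way to stitch the code-generated metadata collections to dataservers chosen at runtime, by binding each MetaData to whatever engine the deployment configuration selects:

    from sqlalchemy import MetaData, create_engine

    # one MetaData per dataserver/database, as emitted by the code generator
    fdcommon_meta = MetaData()
    refdata_meta = MetaData()

    # in practice this mapping would come from a config file or service
    config = {
        'fdcommon': 'sybase://user:pw@PROD/fdcommon',   # production by default
        'refdata':  'sqlite:////data/refdata.db',       # stable lookup data from SQLite
    }

    fdcommon_meta.bind = create_engine(config['fdcommon'])
    refdata_meta.bind = create_engine(config['refdata'])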




[sqlalchemy] Re: Supporting fully-qualified table names and cross-database references in Sybase and SQL Server?

2009-03-10 Thread phrrn...@googlemail.com

As it happens, this works on the Sybase dialect without fixing the
quoting at all!  Apparently SQL such as this is happily accepted by
Sybase:

SELECT [fdcommon.dbo].organization.org_id,
[fdcommon.dbo].organization.abbrev
FROM [fdcommon.dbo].organization JOIN [fdcommon.dbo].org_type
ON [fdcommon.dbo].org_type.org_type_id =
[fdcommon.dbo].organization.org_type

I resorted to some brute-force list operations rather than regular
expressions to parse out the component names (see diff below). I will
fix the quoting shortly (within the next day or so) and submit a
single diff.

thanks,

pjjH


Index: schema.py

===================================================================

--- schema.py   (revision 5816)

+++ schema.py   (working copy)

@@ -876,17 +876,22 @@

                 raise exc.ArgumentError(
                     "Parent column '%s' does not descend from a "
                     "table-attached Column" % str(self.parent))
-            m = re.match(r"^(.+?)(?:\.(.+?))?(?:\.(.+?))?$", self._colspec,
-                         re.UNICODE)
+            m = self._colspec.split('.')
             if m is None:
                 raise exc.ArgumentError(
                     "Invalid foreign key column specification: %s" %
                     self._colspec)
-            if m.group(3) is None:
-                (tname, colname) = m.group(1, 2)
+
+            m.reverse()
+            (colname, tname) = m[0:2]
+
+            if m[2] is None:
                 schema = None
             else:
-                (schema, tname, colname) = m.group(1, 2, 3)
+                m1 = m[2:]
+                m1.reverse()
+                schema = '.'.join(m1)
+


On Mar 5, 7:21 pm, phrrn...@googlemail.com phrrn...@googlemail.com
wrote:
 OK. If it might be as easy as that, I will have a go and see how well
 it works.

 pjjH

 On Mar 5, 4:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  phrrn...@googlemail.com wrote:

   Sybase (and SQL Server) support cross-database JOINs (Sybase even
   supports cross-database foreign-key constraints). There are four
   components to an object identifier:

   1 = Object name
   2 = Schema name
   3 = Database name
   4 = Server name

   the dataserver, database and schema are assumed for one-part
   identifiers (e.g. 'foo'), dataserver and database assumed for two-part
   identifiers (e.g. 'dbo.foo') and, finally, dataserver assumed for
   three-part identifiers ('production.dbo.foo')

   e.g. SELECT foo.* FROM BANANA.production.dbo.foo
  SELECT foo.* FROM production.dbo.foo
  SELECT foo.* FROM production..foo -- same as the previous query
   if the callers default schema is dbo
  SELECT foo.* FROM dbo.foo
  SELECT foo.* FROM foo
  SELECT foo.* FROM ..foo
   I am not so interested in supporting four-part identifiers in SA but I
   would like to figure out how to support three-part identifiers as very
   many of our databases have cross-database references.

   One natural (to me!) way of doing this is to add a 'database' property
   to the Table and ForeignKeyConstraint schema items and have the Sybase/
   SQL Server dialects always emit fully-qualified three-part identifiers
   for table names.

  we have an element on Table called schema.  I had in mind that schema
  should accept dotted names, so SQLA generally doesn't need to get
  involved.  The only change needed is to the IdentifierPreparer, such that
  when quote_schema is called, it separates the name along the dot first so
  that it can quote each token separately.



[sqlalchemy] Re: Surfacing table and column-level comments to Python classes as docstrings

2009-03-07 Thread phrrn...@googlemail.com

This is beautiful and exactly what I want. Thanks very much.

pjjH


On Mar 7, 12:08 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Mar 6, 2009, at 10:37 PM, phrrn...@googlemail.com wrote:





  I have some metadata on table and some of the columns and would like
  to surface these as docstrings on the mapped class and columns.

  If table foo has columns i, j, k with comments 'apple', 'banana',
  'pear', respectively,  and the table is mapped via class Foo then I
  would like the programmer to do a help(Foo) and see something like:

  i()
  apple

  j()
  banana

  k()
  pear

  or whatever the common Python idiom is. I am looking for something
  that will work well for interactive work with ipython. In our case,
  the comments are MS_Description properties from the
  sys.extended_properties table in SQL Server. I searched through the
  list archives and see that there has already been some discussion
  about providing support for comments in the DDL producerers. Although
  I am more interested in the mapper side of things, I can see how
  sp_addextendedproperty, sp_updateextendedproperty  could be used to
  set the comments as part of the DDL generation.

  Since I am code-generating all the alchemy models right now, it seems
  reasonable to poke in the comment into the docstring of the class and
  the the __doc__ attribute column properties after the mapper()
  invocation. Or should one use some magic Python hook to get the value
  of the docstring from elsewhere? For reference, I append the a
  fragment of the code that retrieves the table and column
  MS_Description properties that are widely used by a number of SQL
  Server tools.

 reflection of column notes can be added, im not sure if any dialects  
 support that currently.  the plumbing to propagate that from info  
 column to instrumented descriptor could be done using an  
 InstrumentationManager, which allows you to receive events as  
 descriptors are placed on classes.

 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import attributes, interfaces, properties

 Base = declarative_base()

 class AttachNotesAsDocstring(interfaces.InstrumentationManager):
      def instrument_attribute(self, class_, key, attr):
          if isinstance(attr.property, properties.ColumnProperty):
              attr.__doc__ = attr.property.columns[0].info.get('notes')

 attributes.instrumentation_finders.insert(0, lambda cls: AttachNotesAsDocstring)

 class User(Base):
      __tablename__ = "users"

      id = Column(Integer, primary_key=True, info={'notes':'the primary key'})
      name = Column(String, info={'notes':'the name'})

 help(User)



[sqlalchemy] Surfacing table and column-level comments to Python classes as docstrings

2009-03-06 Thread phrrn...@googlemail.com

I have some metadata on table and some of the columns and would like
to surface these as docstrings on the mapped class and columns.

If table foo has columns i, j, k with comments 'apple', 'banana',
'pear', respectively,  and the table is mapped via class Foo then I
would like the programmer to do a help(Foo) and see something like:

i()
apple

j()
banana

k()
pear

or whatever the common Python idiom is. I am looking for something
that will work well for interactive work with ipython. In our case,
the comments are MS_Description properties from the
sys.extended_properties table in SQL Server. I searched through the
list archives and see that there has already been some discussion
about providing support for comments in the DDL producerers. Although
I am more interested in the mapper side of things, I can see how
sp_addextendedproperty, sp_updateextendedproperty  could be used to
set the comments as part of the DDL generation.

Since I am code-generating all the alchemy models right now, it seems
reasonable to poke the comment into the docstring of the class and
into the __doc__ attribute of the column properties after the mapper()
invocation. Or should one use some magic Python hook to get the value
of the docstring from elsewhere? For reference, I append a fragment of
the code that retrieves the table and column MS_Description properties
that are widely used by a number of SQL Server tools.

pjjH




EXEC ( 'sp_MSforeachdb @command1=''
SELECT
  DB_NAME()   AS TABLE_CAT,
  OBJECT_SCHEMA_NAME(major_id)AS TABLE_SCHEM,
  OBJECT_NAME(major_id)   AS TABLE_NAME,
  CASE
  WHEN minor_id = 0
   THEN NULL
  ELSE
   COL_NAME(major_id, minor_id)
  END AS COLUMN_NAME,
  nameAS name,
  CONVERT(VARCHAR(max),value) AS value
FROM  ?.sys.extended_properties
WHERE name = MS_Description'',
@precommand = ''use ?''
) AT FOO



[sqlalchemy] Re: Sybase DB-API driver uses @foo for placeholder names and expect parameter dict keys to be similarly named

2009-03-05 Thread phrrn...@googlemail.com

The problem with the connection being returned to the pool was due to
executing the SET IDENTITY_INSERT statement on the *cursor* rather
than the *connection*. The documentation states that the connection
will be returned to the pool when a statement is executed on it that
doesn't return any results (such as the SET statement). using
self.connection.execute solved that problem.

There was a difficult to diagnose problem with python-sybase in that
sometimes the keys of the parameter dict were in unicode which caused
the buf.name = name assignment to throw a TypeError. Coercing the
param keys via str() solved that problem.

In Sybase, a column declaration without NULL/NOT NULL defaults to NOT
NULL so a bunch of the tests need to be updated.

The schema introspection stuff seems to work OK albeit with some low-
level querying of the system tables. I started off with a higher-level
implementation but abandoned it due to all kinds of (possibly
spurious) problems. There are still some problems with the test tear-
downs as tables are not being dropped in the correct order.

All in all, the driver is now in a state that can be called
'buggy' (as opposed to being completely dysfunctional)

pjjH


On Feb 27, 4:29 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
 Yes, it is based off the mssql code but I made some modifications to
 it to take care of situations like, for example, where there is an
 identity column but it is not the primary key (and hence not a
 'sequence'). This means a read off the catalog to find the identity
 column (I believe that only one identity column is permitted per
 table). I was wondering if some 'bad thing' happens if you execute a
 select on the cursor and retrieve results when you are in the
 pre_exec.

 I don't know what you are referring to when you say 'throw a pdb' .. I
 hope it has something to do with the debugger!

 As for being in deep, I am afraid we are only starting: Sybase has
 enough 'special' stuff to keep us busy for a long time e.g. cross-
 database referential integrity constraints. database-specific default
 schemas (e.g. login foo may have schema 'dbo' in database apple  but
 schema 'guest' in database pear and schema 'prod' in database banana).
 Then what does one do about remote objects mapped in via CIS (e.g.
 REMOTE.production.dbo.very_important_table) (actually this is a
 problem with SQL Server also)

 pjjH

 On Feb 27, 4:05 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  phrrn...@googlemail.com wrote:

   I want to automatically set IDENTITY_INSERT for a table if the
   identity column is explicitly listed. Likewise, after execution of an
   insert on a table with an identity column we want to retrieve the
   identity value.

   Any idea why the following code would cause the connection to be
   checked in between the pre_exec() and the actual execution of the
   statement?  I have enabled high levels of debugging on the python-
   sybase driver and can see that a new connection is made *after* the
   'SET IDENTITY_INSERT foo ON'  and the actual command runs on that new
   connection (and, of course, fails as IDENTITY_INSERT is not enabled on
   that table for the new connection).

  Assuming you took that code from the MSSQL dialect, it should be fine.
  that looks like an older version of it, though...in 0.6 take a look in
  mssql/base.py for the latest version of the IDENTITY_INSERT dance.

  but no there's nothing in there in any case that would cause a second
  connection to be checked out. throw a pdb into the Connection constructor,
  or perhaps in pool.connect(), to track where that's coming from.

  sorry you're in deep  :)

   pjjH

    class SybaseSQLExecutionContext(default.DefaultExecutionContext):
        def _table_identity_column(self, t):
            """Return the name of this table's identity column"""
            # negative caching
            if not hasattr(t, '_identity_column'):
                t._identity_column = None
                s = r"SELECT cols.name FROM syscolumns as cols JOIN sysobjects as o ON (cols.id = o.id) WHERE o.name ='%s' and cols.status & 0x80 = 0x80" % (t.name)
                self.cursor.execute(s)
                r = self.cursor.fetchone()
                if r:
                    t._identity_column = r[0]
            return t._identity_column

        def pre_exec(self):
            self.HAS_IDENTITY = False
            self.IDENTITY_INSERT = False
            # What about UPDATE statements? Is this even possible in Sybase?
            if self.compiled.isinsert:
                if self._table_identity_column(self.compiled.statement.table):
                    self.HAS_IDENTITY = True
                    identity_column = self._table_identity_column(self.compiled.statement.table)
                    if identity_column in self.compiled_parameters[0].keys():
                        self.IDENTITY_INSERT = True
                        self.cursor.execute("SET IDENTITY_INSERT %s

[sqlalchemy] Re: Supporting fully-qualified table names and cross-database references in Sybase and SQL Server?

2009-03-05 Thread phrrn...@googlemail.com

OK. If it might be as easy as that, I will have a go and see how well
it works.

pjjH


On Mar 5, 4:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 phrrn...@googlemail.com wrote:

  Sybase (and SQL Server) support cross-database JOINs (Sybase even
  supports cross-database foreign-key constraints). There are four
  components to an object identifier:

  1 = Object name
  2 = Schema name
  3 = Database name
  4 = Server name

  the dataserver, database and schema are assumed for one-part
  identifiers (e.g. 'foo'), dataserver and database assumed for two-part
  identifiers (e.g. 'dbo.foo') and, finally, dataserver assumed for
  three-part identifiers ('production.dbo.foo')

  e.g. SELECT foo.* FROM BANANA.production.dbo.foo
 SELECT foo.* FROM production.dbo.foo
 SELECT foo.* FROM production..foo -- same as the previous query
  if the callers default schema is dbo
 SELECT foo.* FROM dbo.foo
 SELECT foo.* FROM foo
 SELECT foo.* FROM ..foo
  I am not so interested in supporting four-part identifiers in SA but I
  would like to figure out how to support three-part identifiers as very
  many of our databases have cross-database references.

  One natural (to me!) way of doing this is to add a 'database' property
  to the Table and ForeignKeyConstraint schema items and have the Sybase/
  SQL Server dialects always emit fully-qualified three-part identifiers
  for table names.

 we have an element on Table called schema.  I had in mind that schema
 should accept dotted names, so SQLA generally doesn't need to get
 involved.  The only change needed is to the IdentifierPreparer, such that
 when quote_schema is called, it separates the name along the dot first so
 that it can quote each token separately.



[sqlalchemy] Is it possible to find a column in a Table SchemaItem by the original database name?

2009-03-04 Thread phrrn...@googlemail.com

It is common (for us) to have tables with an identity column and that
column *not* be the primary key. I am dealing with this by querying
the system catalogs and caching the result within the table meta-data
object itself (probably a bad idea but doing it for expediency). The
system catalog query returns -- of course -- the database-level
identifier for the column with the identity property set. How do I
find out if the insert parameters have a value for the identity
column? I am using something like this which has a very evil look to
it:

 if identity_column in [t.c[key].name for key in
self.compiled_parameters[0]]:


pjjH


if self.compiled.isinsert:
    self.logger.debug('pre_exec for an INSERT')
    t = self.compiled.statement.table
    if self._table_identity_column(t) is not None:
        self.HAS_IDENTITY = True
        # This returns the database-level name of the column
        # Since the parameters may have different names (this
        # will happen if a named parameter, 'key', was passed
        # to the Column constructor), we need to project out the names
        identity_column = self._table_identity_column(t)
        self.logger.debug([t.c[key].name for key in self.compiled_parameters[0]])
        if identity_column in [t.c[key].name for key in self.compiled_parameters[0]]:
            self.IDENTITY_INSERT = True
            self.logger.debug('setting IDENTITY_INSERT ON FOR %s' % (t))

# detect if the table has an identity column by direct query against the system catalogs
    def _table_identity_column(self, t):
        """Return the name of this table's identity column"""
        # negative caching
        if not hasattr(t, '_identity_column'):
            t._identity_column = None
            s = sql.select([syscolumns.c.name], from_obj=syscolumns.join(sysobjects))
            s = s.where(sql.and_(sysobjects.c.name == t.name, sql.text("(status & 128) = 128")))
            r = self.connection.execute(s).fetchone()
            if r is not None:
                t._identity_column = r[0]
        return t._identity_column
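
To make the projection through t.c[key].name concrete, a small self-contained illustration (a toy table, not the dialect code) of why the compiled parameter keys cannot be compared directly to the database-level identity column name:

    from sqlalchemy import MetaData, Table, Column, Integer, String

    m = MetaData()
    t = Table('people', m,
              Column('person_id', Integer, key='id', primary_key=True),
              Column('name', String(50)))

    # keys as they appear in compiled_parameters: the Column *keys*
    params = {'id': 7, 'name': 'x'}
    db_names = [t.c[k].name for k in params if k in t.c]
    print db_names   # contains 'person_id' and 'name' -- the database-level identifiers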




[sqlalchemy] Re: Is it possible to find a column in a Table SchemaItem by the original database name?

2009-03-04 Thread phrrn...@googlemail.com

Some of the unit-tests have parameters which don't refer to column
names so this is the latest formulation:

if identity_column in [t.c[key].name for key in
self.compiled_parameters[0] if hasattr(t.c, key)]:


On Mar 4, 5:34 pm, phrrn...@googlemail.com phrrn...@googlemail.com
wrote:
 It is common (for us) to have tables with an identity column and that
 column *not* be the primary key. I am dealing with this by querying
 the system catalogs and caching the result within the table meta-data
 object itself (probably a bad idea but doing it for expediency). The
 system catalog query returns -- of course -- the database-level
 identifier for the column with the identity property set. How do I
 find out if the insert parameters have a value for the identity
 column? I am using something like this which has a very evil look to
 it:

  if identity_column in [t.c[key].name for key in
 self.compiled_parameters[0]]:

 pjjH

 if self.compiled.isinsert:
 self.logger.debug('pre_exec for an
 INSERT')
 t = self.compiled.statement.table
if self._table_identity_column(t) is not None:
 self.HAS_IDENTITY = True
 # This returns the database-level name of the column
 # Since the parameters may have different names (this
 # will happen if a named parameter, 'key', was passed
 # to the Column constructor), we need to project out
 the names
 identity_column = self._table_identity_column(t)
 self.logger.debug([t.c[key].name for key in
 self.compiled_parameters[0]])
 if identity_column in [t.c[key].name for key in
 self.compiled_parameters[0]]:
 self.IDENTITY_INSERT = True
 self.logger.debug('setting IDENTITY_INSERT ON FOR
 %s' % (t))

 # detect if the table has an identity column by direct query against
 the system catalogs
def _table_identity_column(self, t):
 Return the name of the this table's identity column
 # negative caching

 if not hasattr(t, '_identity_column'):
 t._identity_column = None
 s = sql.select([syscolumns.c.name],
 from_obj=syscolumns.join(sysobjects))
 s = s.where(sql.and_(sysobjects.c.name == t.name, sql.text
 ((status  128 = 128
 r = self.connection.execute(s).fetchone()
 if r is not None:
 t._identity_column  = r[0]

return t._identity_column



[sqlalchemy] Can the test suite be run using a dialect that has poolclass=pool.AssertionPool?

2009-03-03 Thread phrrn...@googlemail.com

Because of the problems with a Sybase SA backend on both python-sybase
and pyodbc, I changed the poolclass to AssertionPool and found a bunch
of unexpected (to me) situations where AssertionPool failed. Based on
a casual examination of the code, it seems that one connection should
be sufficient for most of the tests (apart, of course, from the ones
that test multiple connections). Is this assumption correct?
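
For reference, a minimal sketch of the engine setup being described (the URL is a placeholder; in the test suite the engine is configured elsewhere): a single-connection pool so that any unexpected second checkout fails immediately.

    from sqlalchemy import create_engine
    from sqlalchemy import pool

    engine = create_engine('sybase://user:pw@SYBASE/test',
                           poolclass=pool.AssertionPool)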

pjjH



======================================================================
FAIL: test_create_drop_bound (__main__.BindTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/u/harringp/work/open_source/sqlalchemy/test/testlib/testing.py", line 387, in safe
    return fn(*args, **kw)
  File "test/engine/bind.py", line 109, in test_create_drop_bound
    metadata.create_all()
  File "/u/harringp/work/open_source/sqlalchemy/lib/sqlalchemy/schema.py", line 1765, in create_all
    bind.create(self, checkfirst=checkfirst, tables=tables)
  File "/u/harringp/work/open_source/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1129, in create
    self._run_visitor(self.dialect.schemagenerator, entity, connection=connection, **kwargs)
  File "/u/harringp/work/open_source/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1154, in _run_visitor
    conn = self.contextual_connect(close_with_result=False)
  File "/u/harringp/work/open_source/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1229, in contextual_connect
    return self.Connection(self, self.pool.connect(), close_with_result=close_with_result, **kwargs)
  File "/u/harringp/work/open_source/sqlalchemy/lib/sqlalchemy/pool.py", line 142, in connect
    return _ConnectionFairy(self).checkout()
  File "/u/harringp/work/open_source/sqlalchemy/lib/sqlalchemy/pool.py", line 304, in __init__
    rec = self._connection_record = pool.get()
  File "/u/harringp/work/open_source/sqlalchemy/lib/sqlalchemy/pool.py", line 161, in get
    return self.do_get()
  File "/u/harringp/work/open_source/sqlalchemy/lib/sqlalchemy/pool.py", line 839, in do_get
    assert self.connection is not None
AssertionError




[sqlalchemy] How to use a subquery to emulate an auxilliary numbers table

2009-03-03 Thread phrrn...@googlemail.com

On systems where I do not have administrative access, I emulate an
auxiliary table of numbers with a subquery that has a bunch of UNION
ALL statements in it. I would like to do the same with SQLAlchemy but
have not been able to figure out how to do it yet. My current effort
is almost there. I need to find out how to name a subquery and say
what columns it returns. In this case, I would like to say 'nums is a
derived table implemented by raw SQL and it has one column, n, which
is an integer'. Is it possible to define a table SchemaItem with a
from_obj rather than a table name?

pjjH


nums = sql.select(['n'], from_obj=sql.text(r"""
  SELECT 1 as n
   UNION ALL SELECT 2
   UNION ALL SELECT 3
   UNION ALL SELECT 4
   UNION ALL SELECT 5
   UNION ALL SELECT 5
   UNION ALL SELECT 6
   UNION ALL SELECT 7
   UNION ALL SELECT 8
   UNION ALL SELECT 9
   UNION ALL SELECT 10
   UNION ALL SELECT 11
   UNION ALL SELECT 12
   UNION ALL SELECT 13
   UNION ALL SELECT 14
   UNION ALL SELECT 15
   UNION ALL SELECT 16

   """)
  ).alias('nums')



   s = sql.select([sysindexes.c.name.label('index_name'),
                   sysindexes.c.status,
                   index_col(sql.func.object_name(sysindexes.c.id), sysindexes.c.indid, nums.c.n).label('column_name'),
                   nums.c.n.label('column_ordinal'),
                  ],
                  from_obj=[sysindexes.join(nums, sysindexes.c.keycnt >= nums.c.n)]
                 ).where(sql.func.object_name(sysindexes.c.id) == table.name).order_by(nums.c.n)

ProgrammingError: (ProgrammingError) ('42000', [42000] [FreeTDS][SQL
Server]Incorrect syntax near the keyword 'SELECT'.\n (156)) u'SELECT
sysindexes.name AS index_name, sysindexes.status, index_col(OBJECT_NAME
(sysindexes.id), sysindexes.indid, nums.n) AS column_name, nums.n AS
column_ordinal \nFROM sysindexes JOIN (SELECT n \nFROM \n
SELECT 1 as n\n   UNION ALL SELECT
2\n   UNION ALL SELECT 3
\n   UNION ALL SELECT 4
\n   UNION ALL SELECT
5\n   UNION ALL SELECT
5\n   UNION ALL SELECT
6\n   UNION ALL SELECT
7\n   UNION ALL SELECT
8\n   UNION ALL SELECT
9\n   UNION ALL SELECT
10\n   UNION ALL SELECT
11\n   UNION ALL SELECT
12\n   UNION ALL SELECT
13\n   UNION ALL SELECT
14\n   UNION ALL SELECT
15\n   UNION ALL SELECT 16\n) AS nums ON
sysindexes.keycnt = nums.n \nWHERE OBJECT_NAME(sysindexes.id) = ?
ORDER BY nums.n' [u'plain']





[sqlalchemy] Re: How to use a subquery to emulate an auxilliary numbers table

2009-03-03 Thread phrrn...@googlemail.com

Thanks. I wrapped it as ' (original_sql) as foo' as Sybase needs a
name for the derived table. You have helped to get primary key and
index introspection working on Sybase!

One interesting and useful bit of information is that one can't use
placeholders in the WHERE clause for anything other than column
values: Pyodbc over FreeTDS barfs with an unknown type error as it
does not know the datatype of functions.

s = sql.select([sysindexes.c.name.label('index_name'),
                sysindexes.c.status,
                index_col(sql.func.object_name(sysindexes.c.id), sysindexes.c.indid, nums.c.n).label('column_name'),
                nums.c.n.label('column_ordinal'),
               ],
               from_obj=[sysindexes.join(nums, nums.c.n <= sysindexes.c.keycnt).join(sysobjects)]
              ).where(sysobjects.c.name == table.name).order_by(nums.c.n)



On Mar 3, 5:03 pm, Rick Morrison rickmorri...@gmail.com wrote:
 Seems to me the issued SQL would work if the innermost query (the UNION
 query) was phrased as a subquery. Have you tried simply wrapping the literal
 SQL text in parenthesis to force it into a subquery like this?

 nums = sql.select(['n'], from_obj=sql.text(r"""(SELECT 1 as n
   UNION ALL SELECT 2
   UNION ALL SELECT 3
   UNION ALL SELECT 4
   UNION ALL SELECT 5
   UNION ALL SELECT 5
   UNION ALL SELECT 6
   UNION ALL SELECT 7
   UNION ALL SELECT 8
   UNION ALL SELECT 9
   UNION ALL SELECT 10
   UNION ALL SELECT 11
   UNION ALL SELECT 12
   UNION ALL SELECT 13
   UNION ALL SELECT 14
   UNION ALL SELECT 15
   UNION ALL SELECT 16) """)).alias('nums')

 Alternatively, (if SQLA supports aliasing of sql text literals, I haven't
 tried this), you could alias the innermost query, and use that as virtual
 table:

 uq = sql.text(r"""SELECT 1 as n
   UNION ALL SELECT 2
   UNION ALL SELECT 3
   UNION ALL SELECT 4
   UNION ALL SELECT 5
   UNION ALL SELECT 5
   UNION ALL SELECT 6
   UNION ALL SELECT 7
   UNION ALL SELECT 8
   UNION ALL SELECT 9
   UNION ALL SELECT 10
   UNION ALL SELECT 11
   UNION ALL SELECT 12
   UNION ALL SELECT 13
   UNION ALL SELECT 14
   UNION ALL SELECT 15
   UNION ALL SELECT 16""").alias('uq')

 nums = sql.select(['n'], from_obj=uq).alias('nums')



[sqlalchemy] Re: How to use a subquery to emulate an auxilliary numbers table

2009-03-03 Thread phrrn...@googlemail.com

The annotation stuff works well on the client but I don't think the
problem with the driver can be worked around so easily .. I will look
at the TDS/ODBC traces but I am pretty sure that the type information
is not being sent back from the dataserver to the driver so the driver
does not know how to describe the placeholder. It occurs to me that
the same thing should hold true for anything which is *not* in the
select list .. I can check this out later.

anyway, thanks for your help. My work on the driver has increased my
respect and admiration for the SA design and code.

pjjH


class object_name(sql_functions.GenericFunction):
    __return_type__ = sqltypes.String
    def __init__(self, object_id, database_id=None, **kwargs):
        super(index_col, self).__init__(self, args=(object_id, database_id), **kwargs)

functions.update (
    {
        sql_functions.current_timestamp: 'GETDATE()',
        sql_functions.current_date : 'GETDATE()',
        sql_functions.session_user : 'SUSER_NAME()',
        'current_database' : 'DB_NAME()',
        'current_user' : 'USER_NAME()',
        'object_id' : lambda x: "OBJECT_ID(%s)" % x,
        'length' : lambda x: "LEN(%s)" % x,
        index_col : 'index_col%(expr)s',
        object_name : 'OBJECT_NAME%(expr)s',
        column_name : 'col_name%(expr)s',
        sql_functions.char_length : lambda x: "LEN(%s)" % x,
    }
)


On Mar 3, 5:46 pm, Rick Morrison rickmorri...@gmail.com wrote:
 On Tue, Mar 3, 2009 at 5:31 PM, phrrn...@googlemail.com 

 phrrn...@googlemail.com wrote:

  Thanks. I wrapped it as ' (original_sql) as foo' as Sybase needs a
  name for the derived table.You have helped to get primary key and
  index introspection working on Sybase!

 Huh, I thought you were using mssql, as the odbc error in your posted
 traceback seems to indicate a SQL Server ODBC driver. I'm surprised that
 even the system tables in Sybase and Mssql have identical names as well,
 although knowing a bit about mssql's history, I guess I shouldn't be.



  One interesting and useful bit of information is that one can't use
  placeholders in the WHERE clause for anything other than column
  values: Pyodbc over FreeTDS barfs with an unknown type error as it
  does not know the datatype of functions.

 I think there's a way to annotate database functions in SQLA with a return
 type, although I can't remember the syntax right now, or if it would work
 with pyodbc. Might be worth of bit of investigation, though.



[sqlalchemy] Re: Sybase DB-API driver uses @foo for placeholder names and expect parameter dict keys to be similarly named

2009-02-27 Thread phrrn...@googlemail.com

How does one deal with driver-specific unit tests? I am running into
difficulties testing the pyodbc and python-sybase drivers for the
sybase dialect. For example, test_raw_qmark works with the pyodbc
driver (as it supports that style) but not with the python-sybase
driver. Is there some decorator available that can help with skipping
certain tests for a given DBAPI driver? Any suggestions on how to
handle this?
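
(Something along these lines is what I am after -- the decorator name and
import path here are guesses on my part, not the actual test-suite API:)

# guessed sketch only; helper names/locations in SQLA's test suite may differ
from sqlalchemy.test import testing

class RawParamStyleTest(testing.TestBase):
    @testing.fails_on('sybase+pysybase', 'driver only supports the named paramstyle')
    def test_raw_qmark(self):
        pass  # would exercise qmark-style binds; expected to fail on python-sybase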

pjjH


On Feb 26, 5:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 we have ticket 785 for this:

 http://www.sqlalchemy.org/trac/ticket/785

 On Feb 26, 2009, at 4:45 PM, phrrn...@googlemail.com wrote:



  Thanks Michael. I have a sybase.py passing *some* unit tests with both
  pyodbc and the Sybase driver, both running on Solaris 10 x86 against
  ASE 15. This is a hack that seems to work for the Sybase DBAPI module.
  I do have access to lots and lots of different Sybase stuff so I will
  start from your patched version and reintegrate my schema
  introspection and other stuff. Do you have a ticket open for the
  sybase driver yet? Where should I send the patches?

  pjjH

 def do_execute(self, cursor, statement, parameters, context=None, **kwargs):
     if self.paramstyle == 'named':
         # prepend the argument names with an '@'
         hacked_args = dict(('@' + n, v) for n, v in parameters.items())
         super(SybaseSQLDialect_Sybase, self).do_execute(cursor, statement,
                                                         hacked_args, context=context, **kwargs)
     else:
         super(SybaseSQLDialect_Sybase, self).do_execute(cursor, statement,
                                                         parameters, context=context, **kwargs)

 def create_connect_args(self, url):
     opts = url.translate_connect_args()
     opts.update(url.query)

     self.autocommit = False
     if 'autocommit' in opts:
         self.autocommit = bool(int(opts.pop('autocommit')))

     dictArgs = {
         'datetime': 'python',            # stop the annoying diagnostics from the module
         'auto_commit': self.autocommit,  # the named argument is 'auto_commit' rather than 'autocommit'
     }

     if 'database' in opts:
         dictArgs['database'] = opts['database']

     return ([opts['host'], opts['username'], opts['password']], dictArgs)

  On Feb 26, 4:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Feb 26, 2009, at 3:55 PM, phrrn...@googlemail.com wrote:

  I am doing some work on a SA engine for Sybase Adaptive Server
  Enterprise (ASE) on top of both pyodbc and the Sybase DB-API driver.
  The existing sybase engine for SA only works with Sybase Anywhere
  (ASA).

  that is correct; I've recently had to take a look at this driver and
  realized that it was not really written for Sybase at all, and the
  original author's whereabouts are unknown.  To that end I would like it
  to be replaced with an actual Sybase driver.

   There is a problem with named parameters with the Sybase driver in
   that the placeholders are prepended with an '@' *and* the execute
   method expects any dict parameters to have keys that also start with an
   '@'. I was able to get the placeholders generated correctly by
   subclassing the compiler. Any suggestions on how to get the execute
   method to work nicely, or do I have to do some mucking around with
   copying parameters or monkeypatching the Sybase module with an
   implementation of execute that will work with 'ordinary' dictionaries?

  the attached patch, which represents my partial progress, addresses
  this.  Unfortunately I was not able to continue since I was
  developing
  from a Mac to a development server, and it turns out that connecting
  with the Sybase driver using FreeTDS renders bind parameters
  inoperable.  After several days of attempting to get the developer
  edition of sybase ASE running in a virtual linux environment
  (apparently only works on older versions of ubuntu/fedora, but even
  after installing those, I was unsuccessful), I gave up.

  If you have access to a working Sybase ASE environment, you can have
  full reign over the sybase.py dialect - anything specific to SQL
  Anywhere can be removed, since it's an obsolete product, and if it were
  supported, it would be in its own dialect. The Sybase driver may
  be targeted towards the 0.6 release of SQLAlchemy.  Version 0.6 is
  oriented around a dialect refactor and schema expression refactor
  (there are no ORM changes) and would be a much better place to start
  building out new drivers - there are some significant differences in
  how dialects are constructed between 0.5 versus 0.6.

   sybase.patch (12K attachment)

[sqlalchemy] Re: Script to automatically generate SA classes.

2009-02-27 Thread phrrn...@googlemail.com

I think that the SQL Alchemy SchemaItem objects can be reasonably code-
generated from the schema. I don't feel the same way about the mapped
classes: the naming of classes and the naming of the various relations
is something that benefits from some thought and design, more than
seems practical from a code-generator.  I have a rough and ready tool
which is a SA wrapper over a centralized set of tables which contain
meta-data about *all* dataservers, databases, tables and columns. The
schema of the metadata tables is based on the signatures of the ODBC
catalog functions. My experience so far (based on a couple of months)
is that it is *vastly* simpler to code-generate from a relational DB
representation of the metadata than it is to code-generate from the
catalog functions: the SA model takes care of all the heavy lifting.

I am working on making the code usable with cog
(http://nedbatchelder.com/code/cog) and may make a public release of
it at some point.
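
For reference, a minimal cog block looks something like the following (a
sketch only -- generate_tables() is a made-up placeholder for the Cheetah
driven generation step):

# [[[cog
# import cog
# for line in generate_tables(dataserver='PROD', database='sales'):
#     cog.outl(line)
# ]]]
# ... cog writes the generated Table()/mapper() code here ...
# [[[end]]]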

pjjH



mapper(Relation, tables, properties = {
    'attributes'  : relation(Attribute, order_by=asc(Attribute.ORDINAL_POSITION)),
    'primary_key' : relation(_PrimaryKey, uselist=False),  # At most one PK is allowed.
    'indexes'     : relation(_Index),
    'foreign_keys': relation(_ForeignKey)
    })



Here are the Cheetah templates that code-generate the Tables, some
stub classes, and the mapper invocations.

$TABLE_NAME = Table('$TABLE_NAME', metadata,
#for $column in $attributes
Column('$column.COLUMN_NAME', $column.as_alchemy_type_name,
nullable=#if $column.nullable#True#else#False#end if#),
#end for
schema = '$TABLE_SCHEM'
)

#if $primary_key
${TABLE_NAME}.append_constraint(PrimaryKeyConstraint(
#for e in $primary_key.elements
    '$e.COLUMN_NAME',
#end for
    )
)
#end if

#for $fk in $foreign_keys
${TABLE_NAME}.append_constraint(ForeignKeyConstraint([#for e in fk.elements#'$e.FKCOLUMN_NAME',#end for#],
    [
#for e in fk.elements
    '$e.fully_qualified_pk_column',
#end for
    ],
    name='$e.FK_NAME'
    )
)
#end for
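
To make the templates concrete, the output for a hypothetical two-column
table 'users' with a single-column primary key would look roughly like this
(column names and types invented for illustration):

users = Table('users', metadata,
    Column('user_id', Integer, nullable=False),
    Column('name', String, nullable=True),
    schema = 'dbo'
)

users.append_constraint(PrimaryKeyConstraint(
    'user_id',
    )
)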




## Some quasi-arbitrary rules:
##
## . We do not attempt to map tables unless they have a primary key.
##
## . We don't map tables that start with an underscore but instead
##   emit a comment to say that they have been ignored.
##
## . The class name is the CamelCase equivalent of the table
##   name. Again we assume that the CamelCase equivalent of the table
##   name is a valid Python identifier.

#if $primary_key
#unless $TABLE_NAME.startswith('_')
class $camelize($TABLE_NAME)(object):
   pass

mapper($camelize($TABLE_NAME), $TABLE_NAME)
#else
# ignoring [$TABLE_CAT].[$TABLE_SCHEM].[$TABLE_NAME] as it starts with '_'
#end unless
#else
# ignoring [$TABLE_CAT].[$TABLE_SCHEM].[$TABLE_NAME] as a primary key is not defined for it.
#end if
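
The $camelize helper used above isn't shown in the post; a minimal version
(an assumption about its behaviour, not the original implementation) could be:

def camelize(name):
    """Turn a table name like 'order_line_item' into 'OrderLineItem'."""
    return ''.join(part.capitalize() for part in name.split('_'))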



# This is what the command-line driver looks like

q = session.query(Relation)
if (options.dataservers):
    q = q.filter(Relation.dataserver.in_(options.dataservers))

if (options.databases):
    q = q.filter(Relation.TABLE_CAT.in_(options.databases))

if (options.schemas):
    q = q.filter(Relation.TABLE_SCHEM.in_(options.schemas))

if (options.tables):
    # XXX: figure out a way of passing in multiple patterns
    q = q.filter(Relation.TABLE_NAME.like(options.tables[0]))

# This restricts the query to look at samples with the largest
# (i.e. most recent) timestamp. XXX: this may change as I am not
# sure how to deal with time/history for this application.

q = q.join((Sample, and_(Relation.dataserver == Sample.dataserver,
                         Relation.TABLE_CAT == Sample.CATALOG_NAME,
                         Relation.ts == Sample.ts)))
for f in options.templates:
    for t in q.order_by(desc(Relation.TABLE_NAME)):
        logging.info(t.TABLE_NAME)
        template = Template(file=f, searchList=[t, {'camelize': camelize}])
        print template



On Feb 27, 10:45 am, Michael Bayer mike...@zzzcomputing.com wrote:
 Piotrek Byzia wrote:

  Hi,

  I bet that not only me is bored by having to write manually all the SA
  mappings and class definitions.

 there's a tool out there somewhere called autocode that does something
 like this.

 But I never get bored writing mapped classes.  That's the domain model for
 my application, and beyond database enabled attributes they have all sorts
 of behaviors and constraints that are specific to my application's
 in-python behavior.   So I'm not familiar with this issue of wanting the
 application to be written automatically.


[sqlalchemy] Re: Sybase DB-API driver uses @foo for placeholder names and expect parameter dict keys to be similarly named

2009-02-27 Thread phrrn...@googlemail.com

I want to automatically set IDENTITY_INSERT for a table if the
identity column is explicitly listed. Likewise, after execution of an
insert on a table with an identity column we want to retrieve the
identity value.

Any idea why the following code would cause the connection to be
checked in between the pre_exec() and the actual execution of the
statement?  I have enabled high levels of debugging on the python-
sybase driver and can see that a new connection is made *after* the
'SET IDENTITY_INSERT foo ON'  and the actual command runs on that new
connection (and, of course, fails as IDENTITY_INSERT is not enabled on
that table for the new connection).

pjjH


class SybaseSQLExecutionContext(default.DefaultExecutionContext):
    def _table_identity_column(self, t):
        """Return the name of this table's identity column."""
        # negative caching
        if not hasattr(t, '_identity_column'):
            t._identity_column = None
            s = (r"SELECT cols.name FROM syscolumns as cols "
                 r"JOIN sysobjects as o ON (cols.id = o.id) "
                 r"WHERE o.name = '%s' and cols.status & 0x80 = 0x80") % (t.name,)
            self.cursor.execute(s)
            r = self.cursor.fetchone()
            if r:
                t._identity_column = r[0]
        return t._identity_column

    def pre_exec(self):
        self.HAS_IDENTITY = False
        self.IDENTITY_INSERT = False
        # What about UPDATE statements? Is this even possible in Sybase?
        if self.compiled.isinsert:
            if self._table_identity_column(self.compiled.statement.table):
                self.HAS_IDENTITY = True
                identity_column = self._table_identity_column(self.compiled.statement.table)
                if identity_column in self.compiled_parameters[0].keys():
                    self.IDENTITY_INSERT = True
                    self.cursor.execute("SET IDENTITY_INSERT %s ON" %
                        self.dialect.identifier_preparer.format_table(self.compiled.statement.table))

    def post_exec(self):
        if self.HAS_IDENTITY:
            self.cursor.execute("SELECT @@identity AS lastrowid")
            lastrowid = self.cursor.fetchone()[0]
            if lastrowid > 0:
                if not hasattr(self, '_last_inserted_ids') or self._last_inserted_ids is None:
                    self._last_inserted_ids = [lastrowid]
                else:
                    self._last_inserted_ids = [lastrowid] + self._last_inserted_ids[1:]

        if self.IDENTITY_INSERT:
            self.cursor.execute("SET IDENTITY_INSERT %s OFF" %
                self.dialect.identifier_preparer.format_table(self.compiled.statement.table))


On Feb 27, 2:02 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
 OK. I will do the development work against the 0.6 tree. I may end up
 backporting it to 0.5 as I want to get it into use at work as soon as
 is reasonable (which may be prior to the 0.6 release)

 pjjH

 On Feb 27, 11:29 am, Michael Bayer mike...@zzzcomputing.com wrote:

  phrrn...@googlemail.com wrote:

    How does one deal with driver-specific unit tests? I am running into
    difficulties testing the pyodbc and python-sybase drivers for the
    sybase dialect. For example, test_raw_qmark works with the pyodbc
    driver (as it supports that style) but not with the python-sybase
    driver. Is there some decorator available that can help with skipping
    certain tests for a given DBAPI driver? Any suggestions on how to
    handle this?

   most tests make use of decorators like @testing.fails_on to mark various
  databases as unsupported.  That test in particular is very specific to
  certain DBAPIs, i.e. those that support qmark bind parameters.  For the
  lesser databases like MSSQL and Firebird, i.e. those which have lots of
  missing features, hundreds of decorators are configured to exclude them.
  You would have a similar task in the case of sybase.

  But to be specific regarding pyodbc vs. python-sybase, that is exactly
  what's addressed in SQLA 0.6.  If you look there you'll see the decorators
  can differentiate among multiple DBAPIs for the same dialect, i.e.
  sybase+pyodbc vs. sybase+python-sybase in this case.  There is also a
  coherent non-guesswork system of using specific drivers.

   Just so you know, we'd really like SQLA 0.6 to be released soon after
   PyCon.  There's not that much work to be done on it for a release.  The
   only reason it's a major number is because the API for dialects does
   change considerably.

   pjjH

   On Feb 26, 5:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:
   we have ticket 785 for this:

  http://www.sqlalchemy.org/trac/ticket/785

   On Feb 26, 2009, at 4:45 PM, phrrn...@googlemail.com wrote:

Thanks Michael. I have a sybase.py passing *some* unit tests with both
pyodbc and the Sybase driver, both running on Solaris 10 x86 against
ASE 15. This is a hack that seems to work for the Sybase DBAPI module.
I do have access to lots and lots of different Sybase stuff so I will
start from your patched

[sqlalchemy] Re: Sybase DB-API driver uses @foo for placeholder names and expect parameter dict keys to be similarly named

2009-02-27 Thread phrrn...@googlemail.com

Yes, it is based off the mssql code but I made some modifications to
it to take care of situations like, for example, where there is an
identity column but it is not the primary key (and hence not a
'sequence'). This means a read off the catalog to find the identity
column (I believe that only one identity column is permitted per
table). I was wondering if some 'bad thing' happens if you execute a
select on the cursor and retrieve results when you are in the
pre_exec.

I don't know what you are referring to when you say 'throw a pdb' .. I
hope it has something to do with the debugger!

As for being in deep, I am afraid we are only starting: Sybase has
enough 'special' stuff to keep us busy for a long time, e.g. cross-
database referential integrity constraints, or database-specific default
schemas (e.g. login foo may have schema 'dbo' in database apple but
schema 'guest' in database pear and schema 'prod' in database banana).
Then what does one do about remote objects mapped in via CIS (e.g.
REMOTE.production.dbo.very_important_table)? (Actually this is a
problem with SQL Server also.)

pjjH


On Feb 27, 4:05 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 phrrn...@googlemail.com wrote:

  I want to automatically set IDENTITY_INSERT for a table if the
  identity column is explicitly listed. Likewise, after execution of an
  insert on a table with an identity column we want to retrieve the
  identity value.

  Any idea why the following code would cause the connection to be
  checked in between the pre_exec() and the actual execution of the
  statement?  I have enabled high levels of debugging on the python-
  sybase driver and can see that a new connection is made *after* the
  'SET IDENTITY_INSERT foo ON'  and the actual command runs on that new
  connection (and, of course, fails as IDENTITY_INSERT is not enabled on
  that table for the new connection).

 Assuming you took that code from the MSSQL dialect, it should be fine.
 That looks like an older version of it, though... in 0.6 take a look in
 mssql/base.py for the latest version of the IDENTITY_INSERT dance.

 but no there's nothing in there in any case that would cause a second
 connection to be checked out. throw a pdb into the Connection constructor,
 or perhaps in pool.connect(), to track where that's coming from.

 sorry you're in deep  :)
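
 (One way to do what Michael suggests without editing the SQLA source --
 an untested sketch that monkeypatches the pool so the debugger fires on
 every connection checkout:)

 import pdb
 from sqlalchemy import pool

 _orig_connect = pool.Pool.connect

 def traced_connect(self, *args, **kw):
     pdb.set_trace()    # inspect the stack that asked for a connection
     return _orig_connect(self, *args, **kw)

 pool.Pool.connect = traced_connect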



  pjjH

  class SybaseSQLExecutionContext(default.DefaultExecutionContext):
      def _table_identity_column(self, t):
          """Return the name of this table's identity column."""
          # negative caching
          if not hasattr(t, '_identity_column'):
              t._identity_column = None
              s = (r"SELECT cols.name FROM syscolumns as cols "
                   r"JOIN sysobjects as o ON (cols.id = o.id) "
                   r"WHERE o.name = '%s' and cols.status & 0x80 = 0x80") % (t.name,)
              self.cursor.execute(s)
              r = self.cursor.fetchone()
              if r:
                  t._identity_column = r[0]
          return t._identity_column

      def pre_exec(self):
          self.HAS_IDENTITY = False
          self.IDENTITY_INSERT = False
          # What about UPDATE statements? Is this even possible in Sybase?
          if self.compiled.isinsert:
              if self._table_identity_column(self.compiled.statement.table):
                  self.HAS_IDENTITY = True
                  identity_column = self._table_identity_column(self.compiled.statement.table)
                  if identity_column in self.compiled_parameters[0].keys():
                      self.IDENTITY_INSERT = True
                      self.cursor.execute("SET IDENTITY_INSERT %s ON" %
                          self.dialect.identifier_preparer.format_table(self.compiled.statement.table))

      def post_exec(self):
          if self.HAS_IDENTITY:
              self.cursor.execute("SELECT @@identity AS lastrowid")
              lastrowid = self.cursor.fetchone()[0]
              if lastrowid > 0:
                  if not hasattr(self, '_last_inserted_ids') or self._last_inserted_ids is None:
                      self._last_inserted_ids = [lastrowid]
                  else:
                      self._last_inserted_ids = [lastrowid] + self._last_inserted_ids[1:]

          if self.IDENTITY_INSERT:
              self.cursor.execute("SET IDENTITY_INSERT %s OFF" %
                  self.dialect.identifier_preparer.format_table(self.compiled.statement.table))

  On Feb 27, 2:02 pm, phrrn...@googlemail.com
  phrrn...@googlemail.com wrote:
  OK. I will do the development work against the 0.6 tree. I may end up
  backporting it to 0.5 as I want to get it into use at work as soon as
  is reasonable (which may be prior to the 0.6 release)

  pjjH

  On Feb 27, 11:29 am, Michael Bayer mike...@zzzcomputing.com wrote:

   phrrn...@googlemail.com wrote:

 How does one deal with driver-specific unit tests? I am running into
 difficulties testing the pyodbc and python-sybase drivers for the
 sybase dialect. For example, test_raw_qmark works with the pyodbc

[sqlalchemy] Sybase DB-API driver uses @foo for placeholder names and expect parameter dict keys to be similarly named

2009-02-26 Thread phrrn...@googlemail.com

I am doing some work on a SA engine for Sybase Adaptive Server
Enterprise (ASE) on top of both pyodbc and the Sybase DB-API driver.
The existing sybase engine for SA only works with Sybase Anywhere
(ASA).

There is a problem with named parameters with the Sybase driver in
that the placeholders are prepended with an '@' *and* the execute
method expects any dict parameters to have keys that also start with an
'@'. I was able to get the placeholders generated correctly by
subclassing the compiler. Any suggestions on how to get the execute
method to work nicely, or do I have to do some mucking around with
copying parameters or monkeypatching the Sybase module with an
implementation of execute that will work with 'ordinary' dictionaries?

pjjH

The error message is like this one; note how the keys in the param dict
do not start with an '@':

There is no host variable corresponding to the one specified by the
PARAM datastream. This means that this variable
'type_1' was not used in the preceding DECLARE CURSOR or SQL command.
 'SELECT sysobjects.name \nFROM sysobjects \nWHERE sysobjects.name =
@name_1 AND sysobjects.type = @type_1' {'type_1': 'U', 'name_1': 't1'}

class SybaseSQLCompiler_Sybase(SybaseSQLCompiler):
    def __init__(self, *args, **params):
        super(SybaseSQLCompiler_Sybase, self).__init__(*args, **params)
        # This is a bit tedious: the Sybase module (i.e. the thing
        # that you get when you say 'import Sybase') names its
        # placeholders as '@foo'.
        if self.dialect.paramstyle == 'named':
            self.bindtemplate = "@%(name)s"







[sqlalchemy] Re: Sybase DB-API driver uses @foo for placeholder names and expect parameter dict keys to be similarly named

2009-02-26 Thread phrrn...@googlemail.com

Thanks Michael. I have a sybase.py passing *some* unit tests with both
pyodbc and the Sybase driver, both running on Solaris 10 x86 against
ASE 15. This is a hack that seems to work for the Sybase DBAPI module.
I do have access to lots and lots of different Sybase stuff so I will
start from your patched version and reintegrate my schema
introspection and other stuff. Do you have a ticket open for the
sybase driver yet? Where should I send the patches?

pjjH

def do_execute(self, cursor, statement, parameters, context=None, **kwargs):
    if self.paramstyle == 'named':
        # prepend the argument names with an '@'
        hacked_args = dict(('@' + n, v) for n, v in parameters.items())
        super(SybaseSQLDialect_Sybase, self).do_execute(cursor, statement,
                                                        hacked_args, context=context, **kwargs)
    else:
        super(SybaseSQLDialect_Sybase, self).do_execute(cursor, statement,
                                                        parameters, context=context, **kwargs)

def create_connect_args(self, url):
    opts = url.translate_connect_args()
    opts.update(url.query)

    self.autocommit = False
    if 'autocommit' in opts:
        self.autocommit = bool(int(opts.pop('autocommit')))

    dictArgs = {
        'datetime': 'python',            # stop the annoying diagnostics from the module
        'auto_commit': self.autocommit,  # the named argument is 'auto_commit' rather than 'autocommit'
    }

    if 'database' in opts:
        dictArgs['database'] = opts['database']

    return ([opts['host'], opts['username'], opts['password']], dictArgs)


On Feb 26, 4:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 26, 2009, at 3:55 PM, phrrn...@googlemail.com wrote:



  I am doing some work on a SA engine for Sybase Adaptive Server
  Enterprise (ASE) on top of both pyodbc and the Sybase DB-API driver.
  The existing sybase engine for SA only works with Sybase Anywhere
  (ASA).

 that is correct; I've recently had to take a look at this driver and
 realized that it was not really written for Sybase at all, and the
 original author's whereabouts are unknown.  To that end I would like it
 to be replaced with an actual Sybase driver.

  There is a problem with named parameters with the Sybase driver in
  that the placeholders are prepended with an '@' *and* the execute
  method expects any dict parameters to have keys that also start with an
  '@'. I was able to get the placeholders generated correctly by
  subclassing the compiler. Any suggestions on how to get the execute
  method to work nicely, or do I have to do some mucking around with copying
  parameters or monkeypatching the Sybase module with an implementation
  of execute that will work with 'ordinary' dictionaries?

 the attached patch, which represents my partial progress, addresses
 this.  Unfortunately I was not able to continue since I was developing
 from a Mac to a development server, and it turns out that connecting
 with the Sybase driver using FreeTDS renders bind parameters
 inoperable.  After several days of attempting to get the developer
 edition of sybase ASE running in a virtual linux environment
 (apparently only works on older versions of ubuntu/fedora, but even
 after installing those, I was unsuccessful), I gave up.

 If you have access to a working Sybase ASE environment, you can have
 full reign over the sybase.py dialect - anything specific to SQL
 Anywhere can be removed, since it's an obsolete product, and if it were
 supported, it would be in its own dialect. The Sybase driver may
 be targeted towards the 0.6 release of SQLAlchemy.  Version 0.6 is
 oriented around a dialect refactor and schema expression refactor
 (there are no ORM changes) and would be a much better place to start
 building out new drivers - there are some significant differences in
 how dialects are constructed between 0.5 versus 0.6.

  sybase.patch (12K attachment)

