[sqlalchemy] Re: Polymorphic collections / ticket #500

2007-03-06 Thread King Simon-NFHD78
I wanted to do something like this in the past, and in the end, rather
than using polymorphic mappers it made more sense to create a
MapperExtension which overrides create_instance. In create_instance you
can examine your 'typ' column to decide what class to create, selecting
one of your Manager/Demigod classes if necessary, or falling back to the
Person class otherwise.
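
In outline, the dispatch idea looks like this (a standalone sketch: the 'manager'/'demigod' type values are made up for illustration, and the `create_instance` signature shown in the comment is an assumption based on the 0.3-era MapperExtension API, not code from this thread):

```python
# Sketch of the type-dispatch behind a create_instance override: pick a
# class based on the row's 'typ' column, falling back to Person. The
# MapperExtension wiring itself is omitted so this runs standalone.

class Person(object):
    pass

class Manager(Person):
    pass

class Demigod(Person):
    pass

# map hypothetical 'typ' values to classes; unknown values fall back to Person
TYPE_MAP = {'manager': Manager, 'demigod': Demigod}

def class_for_row(row):
    """Choose which class to instantiate for a result row (dict-like)."""
    return TYPE_MAP.get(row.get('typ'), Person)

# Inside a MapperExtension, create_instance would then do roughly:
#     def create_instance(self, mapper, selectcontext, row, class_):
#         cls = class_for_row(row)
#         return cls.__new__(cls)
```
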
 
Hope that helps,
 
Simon




From: sqlalchemy@googlegroups.com
[mailto:[EMAIL PROTECTED] On Behalf Of Rick Morrison
Sent: 06 March 2007 01:12
To: sqlalchemy
Subject: [sqlalchemy] Polymorphic collections / ticket #500


The fix for ticket #500 breaks a pattern I've been using.

It's most likely an anti-pattern, but I don't see a way to get
what I want in SA otherwise.

I've got a series of entities

class Person(object):
    pass

class Manager(Person):
    def __init__(self):
        # do manager stuff
        pass

class Demigod(Person):
    def __init__(self):
        # do demigod stuff
        pass

etc.

there are mappers for each of these entities that inherit from
Person(), so all of the normal Person() properties exist, but Person()
itself is not polymorphic. That's on purpose: the class hierarchy of
Manager(), etc., is not exhaustive, and I occasionally want
to save instances of Person() directly.
If I make the Person() class polymorphic on a column of, say,
typ, then SA clears whatever typ I may have tried to set directly,
and seems to make me specify an exhaustive list of sub-types.

And so I leave Person() as non-polymorphic. I also have a
collection of Person() objects on a different mapper, which can load
entity objects of any type. 

Before rev #2382, I could put a Manager() in a Person()
collection, and  it would flush OK. Now it bitches that it wants a real
polymorphic mapper. I don't want to use a polymorphic mapper, because I
don't want to specify an exhaustive list of every class that I'm ever
going to use. 

What to do?

Thanks,
Rick





--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Explicit column in a SelectResults qry.

2007-03-06 Thread Glauco

Hi all, I've built a mapper based on over 10 tables, and it's
well optimized for use with the *_by functions.
Obviously some WHERE clauses must be formatted explicitly for the
select, so my mapper has this profile:

class MyClass(DomainObject, SferaDomainObject):
    def search(self, **kw):
        by_where_clause = {}
        where_clause = []
        for k, v in kw.items():

            # generic _by clause: let the SA engine do the dynamic
            # join_to for the table
            if k in ('filt1', 'filt2', ...):
                by_where_clause[k] = v

            # generic explicit clause for the where_clause feature,
            # used as operators
            elif k == 'filt5':
                where_clause.append(self.c.filt5 == v)

            elif k == 'filt6':
                where_clause.append(self.c.filt6 == v)

            else:
                raise ValueError(XXX)

        if where_clause:
            return self.select_by(**by_where_clause).select(and_(*where_clause))
        else:
            return self.select_by(**by_where_clause)


This mapper has extension=SelectResultsExt().


Now...
how can I specify field names in the select function?

something like

select([tbl1.c.field1, tbl3.c.field2, tbl2.c.field3, tbl5.c.field1], ???)


Thanks all,
Glauco





[sqlalchemy] Is it possible to know the size of a deferred PickleType?

2007-03-06 Thread Sanjay

Hi All,

Wondering if it is possible to know the size of a file stored in a
deferred column (PickleType), without retrieving the file itself.

Thanks in advance.

Sanjay





[sqlalchemy] Re: Is it possible to know the size of a deferred PickleType?

2007-03-06 Thread Sean Davis

On Tuesday 06 March 2007 05:32, Sanjay wrote:
 Hi All,

 Wondering if it is possible to know the size of a file stored in a
 deferred column (PickleType), without retrieving the file itself.

I usually try to save the filesize as a separate column.
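
A minimal sketch of that approach: compute the pickled size at write time and store it in a plain Integer column next to the deferred one. The column and attribute names here are hypothetical, and the exact byte count depends on which pickle protocol the PickleType is configured with; the SA mapping itself is omitted so this runs standalone.

```python
import pickle

def pickled_size(obj):
    """Bytes a pickled copy of obj occupies; store this value in a
    plain Integer column alongside the deferred PickleType column."""
    return len(pickle.dumps(obj))

# At save time, set both (hypothetical) attributes on the mapped object:
#     doc.content = data                      # deferred PickleType column
#     doc.content_size = pickled_size(data)   # plain Integer column
# Later, doc.content_size can be queried without loading doc.content.

data = {'name': 'report.bin', 'payload': b'\x00' * 1024}
size = pickled_size(data)
```
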

Sean




[sqlalchemy] Table being removed from nested query

2007-03-06 Thread King Simon-NFHD78
Hi,

I have a problem in which a table is being removed from the FROM clause
of a nested query. The attached file should show the problem, which I've
tested on 0.3.5 and rev 2383.

In the example, there are two tables, department and employee, such that
one department has many employees. The inner query joins the two tables
and returns department IDs:

  inner = select([departments.c.department_id],
                 employees.c.department_id == departments.c.department_id)
  inner = inner.alias('filtered_departments')

The SQL looks like:

 SELECT departments.department_id
 FROM departments, employees
 WHERE employees.department_id = departments.department_id

I then join this query back to the department table:

 join = inner.join(departments,
     onclause=inner.c.department_id == departments.c.department_id)

SQL for the join condition looks like:

 (SELECT departments.department_id
  FROM departments, employees
  WHERE employees.department_id = departments.department_id)
 AS filtered_departments
 JOIN departments
   ON filtered_departments.department_id = departments.department_id

This still looks correct to me. However, I then base a query on this
join:

  outer = select([departments.c.name],
 from_obj=[join],
 use_labels=True)

At this point, the 'departments' table is no longer part of the inner
query. The SQL looks like:

 SELECT departments.name
 FROM (SELECT departments.department_id AS department_id
       FROM employees
       WHERE employees.department_id = departments.department_id)
      AS filtered_departments
 JOIN departments
   ON filtered_departments.department_id = departments.department_id

...and the query doesn't run.

I think I can work around it by putting the join condition in the
whereclause of the select, instead of from_obj, but is there a reason
why the join version doesn't work?

Thanks,

Simon




inner_query_test.py
Description: inner_query_test.py


[sqlalchemy] Re: Table being removed from nested query

2007-03-06 Thread Michael Bayer

try putting correlate=False in the nested select.

On Mar 6, 2007, at 6:29 AM, King Simon-NFHD78 wrote:

 I have a problem in which a table is being removed from the FROM
 clause of a nested query. [...]




[sqlalchemy] Re: Ping

2007-03-06 Thread percious

Problem is, using TurboGears I don't really have access to
pool_recycle without some horrible monkey patch.

FYI, my ping seems to work, so I'm going with that for now.

cheers.
-percious

On Mar 5, 1:00 pm, Sébastien LELONG [EMAIL PROTECTED]
securities.fr wrote:
  I need to ping the Mysql database to keep it alive.

  Is there a simple way to do this with SA?

 Maybe you'd want to have a look at the pool_recycle argument of
 create_engine. It'll make the connection pool check whether connections are
 still active, and potentially recycle them if they are too old. Search for
 "MySQL has gone away"; you'll find different threads talking about that.

 Hope it helps.

 Seb
 --
 Sébastien LELONG
 sebastien.lelong[at]sirloon.net
 http://www.sirloon.net





[sqlalchemy] Re: Ping

2007-03-06 Thread Paul Johnston

Hi,

Problem is, using turbogears I don't really have access to the
pool_recycle without some horrible monkey patch.
  

Actually, it's easy to specify parameters to create_engine; just include
a line like this in your config file:

sqlalchemy.pool_recycle = 3600

Paul




[sqlalchemy] pyodbc and tables with triggers

2007-03-06 Thread polaar

Hi,

I recently tried out sqlalchemy with mssql via pyodbc (after being
bitten by the adodbapi bug with the truncated parameters), and noticed
the following problem:

On inserting records into tables with triggers, pyodbc fails on the
'select @@identity as lastrowid' statement with an 'invalid cursor
state' error.

I've managed to work around this by doing "set nocount on" when
creating the connection, via a creator like this:

def create_connection():
    conn = pyodbc.connect(connectionstring)
    cur = conn.cursor()
    cur.execute('set nocount on')
    cur.close()
    return conn

I don't know if there's a better way, but it seems to work for me. I
just mention it because it might be a useful tip, and also because it
seems like something that should be handled by the sqlalchemy mssql
driver?

greetings,

Steven





[sqlalchemy] Re: sa newbie

2007-03-06 Thread Jonathan Ellis

try

db.books.select(db.books.c.book_skus.like('abcd%'))

On 3/5/07, dan [EMAIL PROTECTED] wrote:

 I'm trying to track down the syntax for using a 'like' clause with sql
 soup.  I'm trying to do something like
 select book_sku from books where book_sku like 'abcd%';

 best i can tell, my syntax should look something like:
 skus = db.books.select(book_skus.like('abcd%'))

 but I'm getting an error, so obviously not.  Pointers?

 tia,
 dan


 





[sqlalchemy] Re: Cannot connect to Oracle DB: AttributeError: 'module' object has no attribute 'NCLOB'

2007-03-06 Thread Michael Bayer

old bug, upgrade to sqlalchemy 0.3.5

On Mar 6, 2007, at 5:51 PM, vinjvinj wrote:


 I get the following error:

   File "build\bdist.win32\egg\sqlalchemy\engine\base.py", line 266, in
  execute
   File "build\bdist.win32\egg\sqlalchemy\engine\base.py", line 271, in
  execute_text
   File "build\bdist.win32\egg\sqlalchemy\databases\oracle.py", line
  326, in create_result_proxy_args
 AttributeError: 'module' object has no attribute 'NCLOB'

 when I try to connect to the Oracle DB. I have the cx_Oracle module
 installed. Any ideas what I'm doing wrong?

 Thanks,

 VJ


 





[sqlalchemy] Re: Is it possible to know the size of a deferred PickleType?

2007-03-06 Thread Sanjay

 I usually try to save the filesize as a separate column.

Hi Sean, thanks for the suggestion. I think that might be the only way
to know the file size without retrieving the file itself.

Sanjay

