[sqlalchemy] Retrieving attribute values after error rollback without querying

2011-05-31 Thread chris e
I have a field verification routine that runs as part of a mapper
extension. When a field error is detected, an exception is thrown
carrying the field in question and the object with the incorrect
field. This worked great in 0.3, but I'm now moving to 0.6, and I can
no longer do this: my invalid-field exception uses the __repr__ value
of the object with the incorrect field, and the __repr__ causes
sqlalchemy to attempt to query the db to retrieve attribute values,
because the rollback expired the object.

Is there a way to retrieve the attribute values without having
sqlalchemy run a query? I tried object.attribute.get_history, but I
end up with an empty history item.
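
For reference, a minimal sketch of one way to read only already-loaded
values without emitting SQL (assuming 0.6's sqlalchemy.orm.attributes
module; untested):

    from sqlalchemy.orm.attributes import instance_dict

    # Build repr/exception text only from attribute values that are
    # already loaded; expired attributes are absent from the instance's
    # __dict__, so touching them here emits no SQL.
    def safe_repr(obj):
        loaded = instance_dict(obj)
        pairs = ', '.join('%s=%r' % (k, v)
                          for k, v in loaded.items()
                          if not k.startswith('_sa_'))
        return '%s(%s)' % (type(obj).__name__, pairs)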




[sqlalchemy] Custom UTC DateTime Type with Custom insert functionality

2011-02-08 Thread chris e
To simplify date handling in a project I'm working on, I store UTC
dates in the database in a TIMESTAMP WITH TIME ZONE field. However,
because cx_Oracle does not have any timezone functionality, I need to
cast the UTC timestamp I'm inserting into the database as a timestamp
in UTC so that the database does not convert it to the db timezone.
This also needs to apply to default values.

I have the following; however, it is not called for default values:

from pytz import UTC
from sqlalchemy.types import TypeDecorator, TIMESTAMP
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import _BindParamClause

class UTCDateTime(TypeDecorator):
    impl = TIMESTAMP

    # add the UTC time zone info to naive timestamps
    def process_result_value(self, value, dialect):
        if value is not None:
            value = UTC.localize(value)
        return value

@compiles(_BindParamClause)
def _compile_utc_date(element, compiler, **kw):
    if isinstance(element.type, UTCDateTime):
        return "from_tz(cast(%s as timestamp), 'UTC')" \
            % compiler.visit_bindparam(element, **kw)
    return compiler.visit_bindparam(element, **kw)




[sqlalchemy] Re: Custom UTC DateTime Type with Custom insert functionality

2011-02-08 Thread chris e
I now have the code below, but the _oracle_utc_timestamp function is
never called, even when I explicitly set a value.

from pytz import UTC
from sqlalchemy.types import TypeDecorator, TIMESTAMP, DateTime
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import FunctionElement

class UTCTimestamp(TypeDecorator):
    impl = TIMESTAMP

    # add the UTC time zone info to naive timestamps
    def process_result_value(self, value, dialect):
        if value is not None:
            value = UTC.localize(value)
        return value

    def process_bind_param(self, value, dialect):
        # if we have a value, convert it to UTC if needed
        if value is not None:
            if value.tzinfo:
                value = value.astimezone(UTC)
        return value

class utc_timestamp(FunctionElement):
    type = DateTime()

@compiles(utc_timestamp)
def _oracle_utc_timestamp(element, compiler, **kw):
    import pdb
    pdb.set_trace()






On Feb 8, 6:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 8, 2011, at 4:56 PM, chris e wrote:

  To simplify date handling in a project I'm working on, I store UTC
  dates in the database in a TIMESTAMP WITH TIME ZONE field. However,
  because cx_Oracle does not have any timezone functionality, I need to
  cast the UTC timestamp I'm inserting into the database as a timestamp
  in UTC so that the database does not convert it to the db timezone.
  This also needs to apply to default values.

  I have the following; however, it is not called for default values:

 The @compiles for _BindParamClause was never expected and is not covered 
 within a visit_insert() right now.  Ticket #2042 is added.  However, you're 
 better off using SQL level UTC functions for defaults in any case which would 
 be the workaround here.  Here's one I use for PG + SQL Server.  You can add 
 another for Oracle that includes your CAST expression if needed:

 class utcnow(expression.FunctionElement):
     type = DateTime()

 @compiles(utcnow, 'postgresql')
 def _pg_utcnow(element, compiler, **kw):
     return "TIMEZONE('utc', CURRENT_TIMESTAMP)"

 @compiles(utcnow, 'mssql')
 def _ms_utcnow(element, compiler, **kw):
     return "GETUTCDATE()"

  from pytz import UTC
  class UTCDateTime(TypeDecorator):
     impl = TIMESTAMP

     # add the UTC time zone info to naive timestamps
     def process_result_value(self, value, dialect) :
         if value != None :
             value = UTC.localize(value)

         return value

  @compiles(_BindParamClause)
  def _compile_utc_date(element, compiler, **kw):
     if isinstance(element.type, UTCDateTime) :
        return from_tz(cast(%s as timestamp), 'UTC') \
            % compiler.visit_bindparam(element, **kw)

     return compiler.visit_bindparam(element, **kw)
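
Following Michael's suggestion above, an untested sketch of an Oracle
variant of utcnow (FROM_TZ and SYS_EXTRACT_UTC are standard Oracle
functions, but the exact expression is an assumption, not from the
thread):

    @compiles(utcnow, 'oracle')
    def _oracle_utcnow(element, compiler, **kw):
        # SYS_EXTRACT_UTC converts SYSTIMESTAMP to a plain UTC
        # timestamp; FROM_TZ re-attaches the UTC zone so the database
        # does not shift the value to the session timezone.
        return "FROM_TZ(SYS_EXTRACT_UTC(SYSTIMESTAMP), 'UTC')"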





[sqlalchemy] Re: Custom UTC DateTime Type with Custom insert functionality

2011-02-08 Thread chris e
I understand that's the case for defaults, but to test the code I
actually set a value for one of the columns of type UTCTimestamp, and
the compiler extension was never called. I'm stumped.

That said, I can keep moving: I've decided to just use oracle
timestamps without timezones and always convert to UTC, since
cx_Oracle doesn't handle timezones properly.

On Feb 8, 7:18 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 The TypeDecorator is not used for defaults.   You set 
 default=utc_timestamp() on your Column.    The SQL expression is rendered 
 directly in the INSERT when no value given, no bind params used.
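
A sketch of the usage Michael describes, assuming the UTCTimestamp and
utc_timestamp classes defined earlier in this thread (table and column
names are hypothetical):

    from sqlalchemy import Table, Column, Integer, MetaData

    events = Table('events', MetaData(),
        Column('id', Integer, primary_key=True),
        # a ClauseElement default is rendered inline in the INSERT
        # statement when no value is supplied for the column
        Column('created_at', UTCTimestamp, default=utc_timestamp()),
    )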




[sqlalchemy] Get list of items to be flushed in session extension

2010-05-04 Thread chris e
I'm trying to provide functionality in a session extension that lets a
class define a 'before_flush' method, allowing the class to make
changes to the session and add additional items. To do this I need the
list of instances to be flushed to the database, in the order in which
sqlalchemy would commit the changes. I then reverse that list so that
instances are processed in the reverse order of the database commits.
I used to do this using some of the internal task functionality of UOW
(see below), but that is no longer available in 0.6.0. Any
suggestions?

    # from UOW
    while True:
        ret = False
        for task in uow.tasks.values():
            for up in list(task.dependencies):
                if up.preexecute(uow):
                    ret = True
        if not ret:
            break

    # HACK: we are using a hidden method of UOW here.
    # Run our tasks in reverse order; this causes child
    # flushes to be called before parent ones.
    tasks = uow._sort_dependencies()
    tasks.reverse()
    reprocess = False
    for task in tasks:
        for element in task.elements:
            obj_instance = element.state.obj()
            if hasattr(obj_instance, 'before_flush') and \
                    callable(obj_instance.before_flush) and \
                    obj_instance not in self.before_items_processed:
                reprocess = obj_instance.before_flush() or reprocess
                self.before_items_processed.append(obj_instance)

    if reprocess:
        self._before_flush_inner(session, instances_in)




[sqlalchemy] Re: Get list of items to be flushed in session extension

2010-05-04 Thread chris e
I'll look into the new code; it does look simpler. I tried using the
MapperExtension functionality, but the last time I tried it, it did
not allow orm-level changes (new items added to the session, or
attribute changes) to be made, as the UOW has already been calculated.
My main use case is business logic where child objects end up updating
parent items, or adding items to parent relations. Once the UOW is
calculated I have found that changes to items and their relations are
not caught, which makes sense.

I'd love to have something in the public session/UOW api that provides
the items to be flushed in the order in which they will be flushed,
even though this may be an expensive operation.
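
For comparison, a minimal sketch of the documented 0.6 hook — which
runs before flush ordering is computed, the limitation discussed here
(class and method names other than SessionExtension.before_flush are
hypothetical):

    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.orm.interfaces import SessionExtension

    class BeforeFlushExtension(SessionExtension):
        def before_flush(self, session, flush_context, instances):
            # new/dirty are inspected before the flush plan is built,
            # so changes made here are still picked up by the flush
            for obj in list(session.new) + list(session.dirty):
                hook = getattr(obj, 'before_flush', None)
                if callable(hook):
                    hook()

    Session = sessionmaker(extension=BeforeFlushExtension())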


On May 4, 4:17 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On May 4, 2010, at 5:23 PM, chris e wrote:

  I'm trying to provide functionality in a session extension  for an
  class to provide a 'before_flush' method that allows the class to make
  changes to the session, and add additional items. To do this I need to
  get the list of instances to be flushed to the database, and the order
  in which sqlalchemy would commit the changes to the database. I then
  reverse the order of this list so that items that the instances are
  processed in the reverse order of the database commits. I used to do
  this using some of the internal task functionality of UOW(see below),
  but that is no longer available in 0.6.0. Any suggestions?

 getting the order is pretty controversial.    what elements of the order 
 are significant to you and why isn't this something you are tracking yourself 
 ?  wiring business logic onto the details of persistence doesn't seem like a 
 good idea.   Or are your flush rules related to SQL -level dependencies, in 
 which case why not let the flush handle it, or at least use a MapperExtension 
 so that your hooks are invoked within the order of flush  ?

 anyway, the order is available in a similar way as before if you peek into 
 what UOWTransaction.execute() is calling, namely _generate_actions().    It 
 would be necessary for you to call this separately yourself which is fairly 
 wasteful from a performance standpoint.   it returns a structure that is 
 significantly simpler than the old one but you'll still have to poke around 
 unitofwork.py to get a feel for it, since this isn't any kind of documented 
 public API (you obviously figured out the previous one, this one is simpler).





[sqlalchemy] Re: Calling a function after every new connection is created

2010-03-18 Thread chris e
Thanks, that's perfect. I knew it had to be in the API somewhere, but
I couldn't find it.


On Mar 18, 5:44 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Mar 17, 2010, at 9:45 PM, chris e wrote:



  Because of the way that we have our Oracle database setup, I have to
  do the following to force every connection to use exact cursor
  sharing.

     dbapi = engine.dialect.dbapi
     orig_connect = dbapi.connect
     def exact_connect(*args, **kwargs) :
         conn_obj = orig_connect(*args, **kwargs)
         cursor = conn_obj.cursor()
      cursor.execute('alter session set cursor_sharing = exact')
         cursor.close()
         return conn_obj
     dbapi.connect = exact_connect

  Is there a better way to do this? Is there a way to call a function
  with the new connection every time one is created by the engine?

 sure - use
 http://www.sqlalchemy.org/docs/reference/sqlalchemy/interfaces.html?h...
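
A minimal sketch of the PoolListener approach that page describes
(connect() runs once for each new DBAPI connection the pool opens; the
DSN is a placeholder):

    from sqlalchemy import create_engine
    from sqlalchemy.interfaces import PoolListener

    class ExactCursorSharing(PoolListener):
        def connect(self, dbapi_con, con_record):
            cursor = dbapi_con.cursor()
            cursor.execute('alter session set cursor_sharing = exact')
            cursor.close()

    engine = create_engine('oracle://user:password@dsn',
                           listeners=[ExactCursorSharing()])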







[sqlalchemy] AttributeError: 'cx_Oracle.LOB' object has no attribute 'decode'

2010-03-18 Thread chris e
I am running into the following error under mod_wsgi, against an
Oracle database, using cx_Oracle.

I'm running the following query:

result = select([TABLES.SYSTEM_CONFIG.c.value],
                TABLES.SYSTEM_CONFIG.c.key == 'email_address').execute().fetchall()


The table is defined as follows:

SYSTEM_CONFIG = Table('system_config', bound_meta_data,
                      Column('value', UnicodeText(), nullable=False),
                      schema=schema, autoload=True)

When the query runs I occasionally get the following error:

Module sqlalchemy.engine.base:1776 in fetchall
Module sqlalchemy.engine.base:1663 in fetchone
Module sqlalchemy.engine.base:1379 in __init__
Module sqlalchemy.engine.base:1620 in _get_col
Module sqlalchemy.databases.oracle:229 in process
Module sqlalchemy.types:470 in process
AttributeError: 'cx_Oracle.LOB' object has no attribute 'decode'


Any suggestions as to what might be causing this?




[sqlalchemy] Re: how to change a database

2009-12-04 Thread chris e
As far as the creation goes I'm no help, but I have done something
similar by connecting to different schemas in oracle. By setting the
schema argument on your table objects, you can bind them to a
particular database (schema); by changing the value of the schema
argument, you can switch from one database to another. I normally have
a table initializer method that I can pass the schema argument to,
along the lines of the sketch below.
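
A sketch of that initializer pattern (table and schema names are
hypothetical; assumes an existing engine):

    from sqlalchemy import MetaData, Table

    def load_tables(engine, schema):
        # reflect the same table definition out of a given schema
        meta = MetaData(bind=engine)
        return Table('system_config', meta, schema=schema, autoload=True)

    foo_config = load_tables(engine, 'foo')
    bar_config = load_tables(engine, 'bar')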


On Dec 3, 4:37 am, Peter vm...@mycircuit.org wrote:
 Hi

 Let's suppose I created an engine on database 'foo' and I want to create
 a database 'bar' and then do my work on 'bar'.
 What is the recommended way to do this ?

 connection.execute('CREATE DATABASE IF NOT EXISTS bar')
 connection.execute('USE bar')

 The former command succeeds with a warning (already discussed on this
 list) but the latter seems to be the wrong approach:

 ...
 2009-12-03 13:28:39,221 INFO sqlalchemy.engine.base.Engine.0x...b0ec COMMIT
 TypeError: 'NoneType' object is not callable in function lambda at
 0x8821bc4 ignored

 Thanks a lot for your advice
 Peter





[sqlalchemy] Re: potential oracle character column reflection bug?

2009-08-27 Thread chris e

If we prefer it to be characters, then we should probably use
CHAR_LENGTH instead of DATA_LENGTH when reflecting, and add the CHAR
specifier to the column generators. I can put together a patch if that
helps.




On Aug 27, 10:02 am, jek jason.kirtl...@gmail.com wrote:
 On Aug 26, 6:01 pm, Michael Bayer mike...@zzzcomputing.com wrote:



  On Aug 26, 2009, at 8:53 PM, chris e wrote:

   I just checked the trunk; the same reflection code is in place, as
   far as the column length is concerned.

   To me the question is, should sqlalchemy be aware of Char vs Byte
   storage?

   Is VARCHAR2(400) the same as VARCHAR2(100 CHAR), by storage size it
   is, but do we want storage size or number of characters?

   I'm not 100% sure. Anyone else out there using oracle, and have an
   opinion.

  oh, I hadn't realized the VARCHAR2(100 CHAR) syntax.  in that case we  
  should stick with data length.   the point is that a round trip  
  reflection back to CREATE TABLE creates the same data type.

  although a thornier issue is, should the number we send to String()  
  attempt to resolve itself in terms of character length instead of data  
  length.   For that I'd love for Jason to chime in but we might have to  
  put out the bat signal on the roof for that to happen.

 IIRC SQL specifies the length of VARCHARs as characters rather than
 bytes, and I'd imagine that's what we're currently getting via
 reflection & DDL generation on most backends.  I guess I'd vote for
 the oracle dialect to do the CHAR annotation for generation, for parity
 with String(...) on other backends.  The specter of doing char set
 detection & width math for reflection is pretty icky though.



[sqlalchemy] potential oracle character column reflection bug?

2009-08-26 Thread chris e

I noticed that with reflection, my column lengths seem to be incorrect
for varchar2 and char columns that use char storage instead of byte
storage.

I.e., a VARCHAR2(400 CHAR) column is reported by sqlalchemy to have a
length of 1600, as our database uses utf-32 for storage; however,
there is only room for 400 characters.

Should sqlalchemy be using the CHAR_LENGTH field instead of
DATA_LENGTH when reflecting character columns? CHAR_LENGTH returns 400
in the example above instead of 1600.
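
A quick way to see the two values Oracle reports (a sketch, assuming
an existing engine and hypothetical table/column names; DATA_LENGTH,
CHAR_LENGTH and CHAR_USED are real ALL_TAB_COLUMNS columns):

    row = engine.execute(
        "SELECT data_length, char_length, char_used "
        "FROM all_tab_columns "
        "WHERE table_name = :t AND column_name = :c",
        t='MY_TABLE', c='MY_COLUMN').fetchone()
    # for VARCHAR2(400 CHAR) in a multibyte character set this can
    # print something like (1600, 400, 'C')
    print row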



[sqlalchemy] Re: potential oracle character column reflection bug?

2009-08-26 Thread chris e

I just checked the trunk; the same reflection code is in place, as
far as the column length is concerned.

To me the question is: should sqlalchemy be aware of char vs. byte
storage?

Is VARCHAR2(400) the same as VARCHAR2(100 CHAR)? By storage size it
is, but do we want storage size or number of characters?

I'm not 100% sure. Anyone else out there using oracle and have an
opinion?



On Aug 26, 5:37 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Aug 26, 2009, at 6:56 PM, chris e wrote:



  I noticed that with reflection, my column lengths seem to be
  incorrect for varchar2 and char columns that use char storage
  instead of byte storage.

  I.E. a VARCHAR2(400 CHAR) column is reported to have a length of 1600
  by sqlalchemy, as our database uses utf-32 for storage, however, there
  is only room for 400 characters.

  Should sqlalchemy be using the CHAR_LENGTH field instead of
  DATA_LENGTH when reflecting character columns? CHAR_LENGTH returns 400
  in the example above instead of 1600

 its possible.  whichever value gets fed back into CHAR, VARCHAR2,  
 NVARCHAR2 to create the same result would be most appropriate.    make  
 sure you test out with the 0.6 trunk since the oracle dialect is  
 dramatically different now.



[sqlalchemy] Re: potential oracle character column reflection bug?

2009-08-26 Thread chris e

The issue I'm having with the length is that I have a verification
layer, written in a mapper extension, that verifies the length of what
the user is inserting based on the field length. Guess I'll have to
convert to bytes to determine the actual length of the data to be
inserted.
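
A sketch of that byte-length check (Python 2 era; the helper name and
the utf-8 encoding, which stands in for the database character set,
are assumptions):

    def check_byte_length(value, column):
        # Oracle's DATA_LENGTH is in bytes, so measure the encoded
        # value rather than the character count
        data = value.encode('utf-8') if isinstance(value, unicode) else value
        if len(data) > column.type.length:
            raise ValueError('value too long for column %s' % column.name)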

On Aug 26, 6:01 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Aug 26, 2009, at 8:53 PM, chris e wrote:



  I just checked the trunk; the same reflection code is in place, as
  far as the column length is concerned.

  To me the question is, should sqlalchemy be aware of Char vs Byte
  storage?

  Is VARCHAR2(400) the same as VARCHAR2(100 CHAR), by storage size it
  is, but do we want storage size or number of characters?

  I'm not 100% sure. Anyone else out there using oracle, and have an
  opinion.

 oh , I hadn't realized the VARCHAR2(100 CHAR) syntax.  in that case we  
 should stick with data length.   the point is that a round trip  
 reflection back to CREATE TABLE creates the same data type.

 although a thornier issue is, should the number we send to String()  
 attempt to resolve itself in terms of character length instead of data  
 length.   For that I'd love for Jason to chime in but we might have to  
 put out the bat signal on the roof for that to happen.



[sqlalchemy] _dependency_processor Attribute Error Bug in orm.dependency with viewonly many to many relationships

2009-08-17 Thread chris e

I'm not sure if this affects S.A 0.6.

It appears that orm.properties.RelationProperty. _post_init does not
create a _dependency_processor attribute if the relation is viewonly.


Line 1016:

    if not self.viewonly:
        self._dependency_processor = \
            dependency.create_dependency_processor(self)

This causes orm.dependency._check_reverse_action to fail as the
attribute does not exist.

The patch I applied is to add attribute existence verification:

Line 142:

    for r in self.prop._reverse_property:
        if hasattr(r, '_dependency_processor'):
            if (r._dependency_processor, action, parent, child) in \
                    uowcommit.attributes:
                return True
    return False






[sqlalchemy] Re: _dependency_processor Attribute Error Bug in orm.dependency with viewonly many to many relationships

2009-08-17 Thread chris e

Ticket created:

#1507

Chris

On Aug 17, 12:49 pm, Philip Jenvey pjen...@underboss.org wrote:
 On Aug 17, 2009, at 12:01 PM, chris e wrote:





  I'm not sure if this affects S.A 0.6.

  It appears that orm.properties.RelationProperty. _post_init does not
  create a _dependency_processor attribute if the relation is viewonly.

  Line 1016:
         if not self.viewonly:
             self._dependency_processor = \
                 dependency.create_dependency_processor(self)

  This causes orm.dependency._check_reverse_action to fail as the
  attribute does not exist.

  The patch I applied is to add attribute existence verification:

  Line 142:

         for r in self.prop._reverse_property:
             if hasattr(r, '_dependency_processor') :
                 if (r._dependency_processor, action, parent, child) in \
                         uowcommit.attributes:
                     return True
         return False

 Could you please log a ticket for this on trac, with a short test that  
 would reproduce it?

 --
 Philip Jenvey



[sqlalchemy] Unit Of work seems to be calling save/delete twice

2007-04-17 Thread chris e

I'm not sure why, but when I do a delete, sqlalchemy seems to be
running the save/delete operation twice. Could this be related to an
undetected circular dependency in UOW? When deleting, this causes the
following error because the database delete is done twice:

sqlalchemy.exceptions.ConcurrentModificationError: Updated rowcount 0
does not match number of objects updated

To fix this I added the following, but it is a serious hack and
probably needs to be addressed in the unit of work code; I'm not sure
where to look. Code below. I am on 0.3.5; however, I tested against
0.3.6 as well and this bug appears to be present there too.


in orm.unitofwork.UnitOfWork:

    def _remove_deleted(self, obj):
        if hasattr(obj, '_instance_key'):
            # ADDED
            # ignore key errors if the item has already been deleted
            try:
                del self.identity_map[obj._instance_key]
            except KeyError:
                pass
        try:
            self.deleted.remove(obj)
        except KeyError:
            pass
        try:
            self.new.remove(obj)
        except KeyError:
            pass



in orm.mapper.Mapper:

    def delete_obj(self, objects, uowtransaction):
        """issue DELETE statements for a list of objects.

        this is called within the context of a UOWTransaction during a
        flush operation.
        """
        if self.__should_log_debug:
            self.__log_debug("delete_obj() start")

        connection = uowtransaction.transaction.connection(self)

        [self.extension.before_delete(self, connection, obj)
         for obj in objects]
        deleted_objects = util.Set()
        for table in self.tables.sort(reverse=True):
            if not self._has_pks(table):
                continue
            delete = []
            for obj in objects:

                # ADDED
                # 4/17/07
                # this prevents items from being deleted twice
                if hasattr(obj, '_has_been_deleted_'):
                    continue

                params = {}
                if not hasattr(obj, '_instance_key'):
                    continue
                else:
                    delete.append(params)
                for col in self.pks_by_table[table]:
                    params[col.key] = self.get_attr_by_column(obj, col)
                if self.version_id_col is not None:
                    params[self.version_id_col.key] = \
                        self.get_attr_by_column(obj, self.version_id_col)
                deleted_objects.add(obj)
            if len(delete):
                def comparator(a, b):
                    for col in self.pks_by_table[table]:
                        x = cmp(a[col.key], b[col.key])
                        if x != 0:
                            return x
                    return 0
                delete.sort(comparator)
                clause = sql.and_()
                for col in self.pks_by_table[table]:
                    clause.clauses.append(
                        col == sql.bindparam(col.key, type=col.type))
                if self.version_id_col is not None:
                    clause.clauses.append(
                        self.version_id_col == sql.bindparam(
                            self.version_id_col.key,
                            type=self.version_id_col.type))
                statement = table.delete(clause)
                c = connection.execute(statement, delete)
                if c.supports_sane_rowcount() and c.rowcount != len(delete):
                    raise exceptions.ConcurrentModificationError(
                        "Updated rowcount %d does not match number of "
                        "objects updated %d" % (c.cursor.rowcount, len(delete)))

        # ADDED
        # this prevents items from being deleted twice
        for obj in deleted_objects:
            obj._has_been_deleted_ = True
        [self.extension.after_delete(self, connection, obj)
         for obj in deleted_objects]





[sqlalchemy] deferred join against details table in polymorphic union for performance

2007-03-26 Thread chris e

Currently when sqlalchemy performs a polymorphic lookup, it queries
against all the detail tables and returns the correct Python object
represented by the polymorphic identity. Essentially you get a
sub-select for each detail table that is included in your primary
join, even though only one of the detail tables contains the data that
is specific to the identity of the object. Is there currently a way,
or a plan to support, splitting the polymorphic query into two
queries? The first would get the base table; the second would retrieve
the details based on the discovered table. This way only two tables
would be queried instead of n, where n is the number of polymorphic
identities.

Our DBAs have concerns that as our tables grow, possibly to 2.5
million rows, unioning against multiple tables, despite the fact that
we are unioning against a primary key, will become non-performant. I
know I could write a custom mapper to resolve this issue; however, I
thought I would bring this up since it may affect other users, and
there may already be a way to solve this easily of which I am not
aware.





[sqlalchemy] Re: Column and business object verification??

2007-01-29 Thread chris e



On Jan 29, 1:49 am, Julien Cigar [EMAIL PROTECTED] wrote:
 Michael Bayer wrote:
  On Jan 24, 1:50 am, chris e [EMAIL PROTECTED] wrote:

  I am planning on using sqlalchemy to build the api for a database I am
  developing, and I was wondering if there is any type of column
  verification that occurs before database commit.

  I.E.:  a string column with length 40 would throw a verification
  exception if a value longer that 40 characters was placed into it and
  saved.

  your database will throw an error.  why reinvent the wheel ?

  Additionally has anyone thought of implementing some sort of
  verification support for objects that are mapped which would allow the
  object to perform pre database action logic?  The intent being that the
  instance would check to see that the object meets additional business
  logic requirements before it is inserted or updated.

  I.E.: a User business object would verify that the userid was part of
  the [a-z] [A-Z]and [0-9] character classes, and if not an exception
  would be raised to prevent the database action.

  thats exactly the kind of thing you should write into your
  application.  has nothing to do with an ORM.  for generic validation
  widgets to help, check out formencode (http://formencode.org/).

 (Just to share a method which works well for me :)

 What I usually do to perform validation is to create a property()
 (_set_attribute(), _get_attribute()) for each mapped column, then I use
 the column_prefix='_' argument in SQLAlchemy.
 After that I have a function which iterates over the columns
 (YourMappedObject.c.keys()) and uses a try / except with a setattr (it's
 a bit more sophisticated than that in fact), for example:

 assign_mapper(session_context, Language, table_languages, column_prefix='_')

 class Language(object):

     def _set_iso_code(self, value):
         try:
             value = ''.join(value.split()).lower()
         except AttributeError:
             raise Invalid('iso code must be a string')
         if len(value) == 2:
             self._iso_code = value
         else:
             raise Invalid('Invalid iso code')

     def _get_iso_code(self):
         return self._iso_code

     iso_code = property(_get_iso_code, _set_iso_code)

 (...)

 then I do something like (not complete):

 def populate(MappedObject, values):
     errors = []
     for c in MappedObject.c.keys():
         value = values.get(c, Undefined)
         if value is not Undefined:
             try:
                 setattr(MappedObject, c, value)
             except Invalid, e:
                 errors.append(str(e))
     return errors

 also, I have in my models a __before_save__ / __before_update__ which
 check additional things like NOT NULL constraints (!None), ...

 --
 Julien Cigar
 Belgian Biodiversity Platformhttp://www.biodiversity.be
 Université Libre de Bruxelles
 Campus de la Plaine CP 257
 Bâtiment NO, Bureau 4 N4 115C (Niveau 4)
 Boulevard du Triomphe, entrée ULB 2
 B-1050 Bruxelles
 office: [EMAIL PROTECTED]
 home: [EMAIL PROTECTED]

Thanks for the info. After I did a bit more reading I realised that I
would have to build a custom mapper; I will definitely take your
suggestions into account when I write it.





[sqlalchemy] Column and business object verification??

2007-01-28 Thread chris e

I am planning on using sqlalchemy to build the api for a database I am
developing, and I was wondering if there is any type of column
verification that occurs before database commit.

I.E.:  a string column with length 40 would throw a verification
exception if a value longer that 40 characters was placed into it and
saved.

Additionally has anyone thought of implementing some sort of
verification support for objects that are mapped which would allow the
object to perform pre database action logic?  The intent being that the
instance would check to see that the object meets additional business
logic requirements before it is inserted or updated.

I.E.: a User business object would verify that the userid was part of
the [a-z] [A-Z]and [0-9] character classes, and if not an exception
would be raised to prevent the database action.
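
A sketch of that userid check as a plain Python property on the mapped
class (names are illustrative; this is application code, not an
existing sqlalchemy hook):

    import re

    class User(object):
        def _set_userid(self, value):
            if not re.match(r'^[A-Za-z0-9]+$', value):
                raise ValueError('invalid userid: %r' % value)
            self._userid = value
        def _get_userid(self):
            return self._userid
        userid = property(_get_userid, _set_userid)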


My sincerest apologies if this functionality already exists for
sqlalchemy. If the functionality exists, please point me in the right
direction. If not, I would be interested in some help developing the
functionality for my uses, and then returning the code to the
sqlalchemy repository if the community thinks the features are
needed/useful.

