[sqlalchemy] Re: database definitions - was sqlalchemy migration/schema creation

2008-05-15 Thread az

they said in Soft Wars: read the source, Luke...
You want everything premade?
It's what it says: no mysql support in there, and I've no idea about mysql.
Just add it - poke here and there - and once it works for you, post it
here; I'll check it in...
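
In the meantime, plain SQLAlchemy reflection can already give a rough text dump of
column definitions without dbcook - a minimal, untested sketch against the 0.4-era
API (the url and the tables it finds are whatever your database has):

    # dump a rough Column() definition for every table the engine can see
    import sys
    import sqlalchemy

    engine = sqlalchemy.create_engine(sys.argv[1])   # e.g. mysql://user:pass@host/db
    meta = sqlalchemy.MetaData(bind=engine)

    for name in engine.table_names():
        table = sqlalchemy.Table(name, meta, autoload=True)
        print "Table(%r, metadata," % str(name)
        for col in table.columns:
            print "    Column(%r, %r, primary_key=%r)," % (
                str(col.name), col.type, col.primary_key)
        print ")"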

On Thursday 15 May 2008 07:08:59 Lukasz Szybalski wrote:
 On Fri, May 9, 2008 at 10:14 AM,  [EMAIL PROTECTED] wrote:
  On Friday 09 May 2008 16:32:25 Lukasz Szybalski wrote:
  On Fri, May 9, 2008 at 4:46 AM,  [EMAIL PROTECTED] wrote:
   On Friday 09 May 2008 03:05, Lukasz Szybalski wrote:
   Do you guys know what would give me column definition of
   table?
  
   do u want it as generated source-text or what?
 
  Yes. The Final output I would like is the txt version of db
  definitions.
 
  autoload ---  sqlalchemy.Column('Address_Sid',
  sqlalchemy.Integer, primary_key=True),
 
  well, then see that one.
 
   have a look at dbcook/dbcook/misc/metadata/autoload.py
   at dbcook.sf.net
 
  running 'python autoload.py dburl' will dump the db-metadata into
  src-text,

 I keep getting this error:

 python autoload.py mysql://user:[EMAIL PROTECTED]/mytable
 Traceback (most recent call last):
   File "autoload.py", line 204, in ?
     assert 0, 'unsupported engine.dialect:'+str( engine.dialect)
 AssertionError: unsupported engine.dialect:<sqlalchemy.databases.mysql.MySQLDialect object at 0x2af7f6f522d0>
 (ENV)[EMAIL PROTECTED]:~/tmp/metadata$ python autoload.py mysql://root:[EMAIL PROTECTED]/mysql
 mysql://root:[EMAIL PROTECTED]/mysql
 Traceback (most recent call last):
   File "autoload.py", line 204, in ?
     assert 0, 'unsupported engine.dialect:'+str( engine.dialect)
 AssertionError: unsupported engine.dialect:<sqlalchemy.databases.mysql.MySQLDialect object at 0x2b11f2dcb2d0>


 Any ideas?
 Lucas

 





[sqlalchemy] Re: Pre-commit hooks

2008-05-15 Thread Yannick Gingras

Michael Bayer [EMAIL PROTECTED] writes:

 easy enough to build yourself a generic MapperExtension that scans  
 incoming objects for a _pre_commit() method.

Yeah indeed.  I used this:

--
class HookExtension(MapperExtension):
    """Extension to add pre-commit hooks.

    Hooks will be called in mapped classes if they define any of these
    methods:
      * _pre_insert()
      * _pre_delete()
      * _pre_update()
    """

    def before_insert(self, mapper, connection, instance):
        if getattr(instance, "_pre_insert", None):
            instance._pre_insert()
        return EXT_CONTINUE

    def before_delete(self, mapper, connection, instance):
        if getattr(instance, "_pre_delete", None):
            instance._pre_delete()
        return EXT_CONTINUE

    def before_update(self, mapper, connection, instance):
        if getattr(instance, "_pre_update", None):
            instance._pre_update()
        return EXT_CONTINUE
--
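
(Wiring it up is then just a matter of handing the extension to mapper() - a minimal
sketch with a made-up User class and table, assuming the usual MapperExtension /
EXT_CONTINUE imports for the class above; not part of the original mail:)

    from sqlalchemy import MetaData, Table, Column, Integer, String
    from sqlalchemy.orm import mapper

    metadata = MetaData()
    users_table = Table('users', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(50)))

    class User(object):
        def _pre_insert(self):
            # picked up by HookExtension.before_insert via getattr()
            print "about to insert", self

    mapper(User, users_table, extension=HookExtension())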

It works great.  Thanks for the pointers.

-- 
Yannick Gingras




[sqlalchemy] Re: Pre-commit hooks

2008-05-15 Thread Roger Demetrescu

On 5/15/08, Yannick Gingras [EMAIL PROTECTED] wrote:

 Michael Bayer [EMAIL PROTECTED] writes:

  easy enough to build yourself a generic MapperExtension that scans
  incoming objects for a _pre_commit() method.

 Yeah indeed.  I used this:

 --
 class HookExtension(MapperExtension):
 Extention to add pre-commit hooks.

Hooks will be called in Mapped classes if they define any of these
methods:
  * _pre_insert()
  * _pre_delete()
  * _pre_update()

def before_insert(self, mapper, connection, instance):
if getattr(instance, _pre_insert, None):
instance._pre_insert()
return EXT_CONTINUE

[cut]

Any reason for not using hasattr?  Like...

def before_insert(self, mapper, connection, instance):
    if hasattr(instance, "_pre_insert"):
        instance._pre_insert()
    return EXT_CONTINUE


Cheers,

Roger




[sqlalchemy] Re: Pre-commit hooks

2008-05-15 Thread az

Speed-wise, this is better: hasattr is implemented as getattr + try/except.
I would even do it as:
  f = getattr(instance, "_pre_insert", None)
  if f: f()
Thus the function name is spelled only once - that avoids stupid mistakes.
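
Applied to the extension above, each hook method would then read roughly like
this (just a sketch of the same idea, not tested code):

    def before_insert(self, mapper, connection, instance):
        # look the hook up once, by its quoted name, and call it only if present
        hook = getattr(instance, "_pre_insert", None)
        if hook:
            hook()
        return EXT_CONTINUE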

On Thursday 15 May 2008 17:36:52 Roger Demetrescu wrote:
 On 5/15/08, Yannick Gingras [EMAIL PROTECTED] wrote:
  Michael Bayer [EMAIL PROTECTED] writes:
   easy enough to build yourself a generic MapperExtension that
   scans incoming objects for a _pre_commit() method.
 
  Yeah indeed.  I used this:
 
  --
  class HookExtension(MapperExtension):
  Extention to add pre-commit hooks.
 
 Hooks will be called in Mapped classes if they define any of
  these methods:
   * _pre_insert()
   * _pre_delete()
   * _pre_update()
 
 def before_insert(self, mapper, connection, instance):
 if getattr(instance, _pre_insert, None):
 instance._pre_insert()
 return EXT_CONTINUE

 [cut]

 Any reason for not using hasattr  ? Like...

 def before_insert(self, mapper, connection, instance):
 if hasattr(instance, _pre_insert):
 instance._pre_insert()
 return EXT_CONTINUE


 Cheers,

 Roger

 





[sqlalchemy] Re: Pre-commit hooks

2008-05-15 Thread Roger Demetrescu

On 5/15/08, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 speed wise, this is better: hasattr is implemented as getattr + try
 except. i would do it even:
  f = getattr(instance, _pre_insert, None)
  if f: f()
 Thus the func name is spelled only once - avoids stupid mistakes.

Good point...

Thanks,

Roger




[sqlalchemy] Re: Pre-commit hooks

2008-05-15 Thread Yannick Gingras

[EMAIL PROTECTED] writes:

 speed wise, this is better: hasattr is implemented as getattr + try 
 except. i would do it even:
   f = getattr(instance, _pre_insert, None)
   if f: f()
 Thus the func name is spelled only once - avoids stupid mistakes.

Spelling only once is the killer feature of your approach.  I just
refactored my implementation.

Thanks!

-- 
Yannick Gingras




[sqlalchemy] Challenging relationship: many-to-many, custom join, mapped dictionary, can it work???

2008-05-15 Thread Allen Bierbaum

I am trying to set up a many-to-many relationship for two tables where
I would like to allow more natural access to the data using a
dictionary interface.  The exact usage is pretty complex to explain,
but I have come up with a simple example that demonstrates the same
concept (there is a full code example at the end of the e-mail).

The tables are:
  - Script: holds a code script to run
  - VarTypes: Type details for variables that can be used as input and
output to the script

The relationship is:
  - For each Script, there is a set of named variables.
  - Each variable has a type associated with it
  - Each variable can be either an input variable or an output variable

What I want to allow in the end is something like this:

script = Script()
script.code = "test code"

var1 = VarType()
var2 = VarType()
var3 = VarType()
var4 = VarType()

script.input_vars["in1"] = var1
script.input_vars["in2"] = var2
script.output_vars["out1"] = var3
script.output_vars["out2"] = var4

Is there some way to set up a many-to-many relationship to do this?

Thanks,
Allen


Here is the more complete code example to play with and see what I am
thinking so far for table definitions.

#---
from sqlalchemy import (create_engine, Table, Column, Integer, String, Text,
MetaData, ForeignKey)
from sqlalchemy.orm import mapper, backref, relation, create_session

engine = create_engine('sqlite:///:memory:', echo=True)

metadata = MetaData()
metadata.bind = engine

script_table = Table('script', metadata,
Column('id', Integer, primary_key=True),
Column('code', Text)
)

types_table = Table('types', metadata,
Column('id', Integer, primary_key=True),
Column('desc', Text))

script_var_table = Table('script_var_table', metadata,
Column('script_id', Integer, ForeignKey('script.id')),
Column('var_name', Text),
Column('input_output_type', Integer),
Column('type_id', Integer, ForeignKey('types.id')))

print "Creating all tables"
metadata.create_all(engine)

INPUT_VAR  = 0
OUTPUT_VAR = 1

class Script(object):
    def __init__(self):
        pass

class VarType(object):
    def __init__(self):
        pass

mapper(VarType, types_table)
mapper(Script, script_table, properties = {
   'vars':relation(VarType, secondary=script_var_table)
   })

session = create_session(bind = engine)
script = Script()
script.code = "test code"
var1 = VarType()
var1.desc = "var type 1"
var2 = VarType()
var2.desc = "var type 2"

script.vars.append(var1)
script.vars.append(var2)

session.save(script)
session.flush()

#  WOULD LIKE - #
# Can this be done using
# - Custom join condition on input_output_type
# - column_mapped_collection
#
script = Script()
script.code = "test code"

var1 = VarType()
var2 = VarType()
var3 = VarType()
var4 = VarType()

script.input_vars["in1"] = var1
script.input_vars["in2"] = var2
script.output_vars["out1"] = var3
script.output_vars["out2"] = var4

print "Done"
#---




[sqlalchemy] Polymorphic across foreign key

2008-05-15 Thread J. Cliff Dyer

I'm trying to implement polymorphic inheritance using the
sqlalchemy.ext.declarative, but the field that I want to use for the
polymorphic_on is not in my polymorphic base table, but at the other end
of a many-to-one relationship.  We have items of many types, and in the
item table, we have a type_id field which contains an integer, but in
the item_type table, that integer is mapped to a descriptive name.
Essentially, I'd like to use that name as my polymorphic_identity.  I've
tried to implement this with declarative classes below, but I get an
error, also shown below.


class ItemType(Declarative):
__table__ = sa.Table('item_type', Declarative.metadata)
type_id = sa.Column('type_id', sa.Integer, primary_key=True)
description = sa.Column('description', sa.Unicode(250))
item = orm.relation('Item', backref='item_type')

class Item(Declarative):
__table__ = sa.Table('item', Declarative.metadata)
item_id = sa.Column('item_id', sa.Integer, primary_key=True)
type_id = sa.Column('type_id', sa.Integer,
sa.ForeignKey('item_type.type_id'))
short_title = sa.Column('short_title', sa.Unicode(100))
__mapper_args__ = {'polymorphic_on': ItemType.description }


### Traceback follows:


Traceback (most recent call last):
  File "polymorph_test.py", line 23, in ?
    class ItemType(Declarative):
  File "/net/docsouth/dev/lib/python/sqlalchemy/ext/declarative.py", line 257, in __init__
    cls.__mapper__ = mapper_cls(cls, table, properties=our_stuff, **mapper_args)
  File "/net/docsouth/dev/lib/python/sqlalchemy/orm/__init__.py", line 566, in mapper
    return Mapper(class_, local_table, *args, **params)
  File "/net/docsouth/dev/lib/python/sqlalchemy/orm/mapper.py", line 181, in __init__
    self.__compile_properties()
  File "/net/docsouth/dev/lib/python/sqlalchemy/orm/mapper.py", line 653, in __compile_properties
    self._compile_property(key, prop, False)
  File "/net/docsouth/dev/lib/python/sqlalchemy/orm/mapper.py", line 719, in _compile_property
    raise exceptions.ArgumentError("Column '%s' is not represented in mapper's table.  Use the `column_property()` function to force this column to be mapped as a read-only attribute." % str(c))
sqlalchemy.exceptions.ArgumentError: Column 'description' is not represented in mapper's table.  Use the `column_property()` function to force this column to be mapped as a read-only attribute.


Apparently it can't find the description column, even though it's
clearly defined on ItemType.





[sqlalchemy] Re: Polymorphic across foreign key

2008-05-15 Thread J. Cliff Dyer

On Thu, 2008-05-15 at 11:24 -0400, J. Cliff Dyer wrote:
 I'm trying to implement polymorphic inheritance using the
 sqlalchemy.ext.declarative, but the field that I want to use for the
 polymorphic_on is not in my polymorphic base table, but at the other end
 of a many-to-one relationship.  We have items of many types, and in the
 item table, we have a type_id field which contains an integer, but in
 the item_type table, that integer is mapped to a descriptive name.
 Essentially, I'd like to use that name as my polymorphic_identity.  I've
 tried to implement this with declarative classes below, but I get an
 error, also shown below.
 
 
 class ItemType(Declarative):
 __table__ = sa.Table('item_type', Declarative.metadata)
 type_id = sa.Column('type_id', sa.Integer, primary_key=True)
 description = sa.Column('description', sa.Unicode(250))
 item = orm.relation('Item', backref='item_type')
 
 class Item(Declarative):
 __table__ = sa.Table('item', Declarative.metadata)
 item_id = sa.Column('item_id', sa.Integer, primary_key=True)
 type_id = sa.Column('type_id', sa.Integer,
 sa.ForeignKey('item_type.type_id'))
 short_title = sa.Column('short_title', sa.Unicode(100))
 __mapper_args__ = {'polymorphic_on': ItemType.description }
 
 
 ### Traceback follows:
 
 
 Traceback (most recent call last):
   File polymorph_test.py, line 23, in ?
 class ItemType(Declarative):
   File /net/docsouth/dev/lib/python/sqlalchemy/ext/declarative.py,
 line 257, in __init__
 cls.__mapper__ = mapper_cls(cls, table, properties=our_stuff,
 **mapper_args)
   File /net/docsouth/dev/lib/python/sqlalchemy/orm/__init__.py, line
 566, in mapper
 return Mapper(class_, local_table, *args, **params)
   File /net/docsouth/dev/lib/python/sqlalchemy/orm/mapper.py, line
 181, in __init__
 self.__compile_properties()
   File /net/docsouth/dev/lib/python/sqlalchemy/orm/mapper.py, line
 653, in __compile_properties
 self._compile_property(key, prop, False)
   File /net/docsouth/dev/lib/python/sqlalchemy/orm/mapper.py, line
 719, in _compile_property
 raise exceptions.ArgumentError(Column '%s' is not represented in
 mapper's table.  Use the `column_property()` function to force this
 column to be mapped as a read-only attribute. % str(c))
 sqlalchemy.exceptions.ArgumentError: Column 'description' is not
 represented in mapper's table.  Use the `column_property()` function to
 force this column to be mapped as a read-only attribute.
 
 
 Apparently it can't find the description column, even though it's
 clearly defined on ItemType.

For what it's worth, mysqldump yields the following for item_type:

--
-- Table structure for table `item_type`
--

DROP TABLE IF EXISTS `item_type`;
CREATE TABLE `item_type` (
  `type_id` int(11) NOT NULL auto_increment,
  `description` char(250) collate utf8_unicode_ci default NULL,
  `ts` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
  PRIMARY KEY  (`type_id`)
) ENGINE=InnoDB AUTO_INCREMENT=23 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;







[sqlalchemy] Re: SQLAlchemy 0.4.6 released

2008-05-15 Thread Bobby Impollonia

Any idea when a .deb will be available?
http://packages.debian.org/unstable/python/python-sqlalchemy is still at 0.4.5.

On Wed, May 14, 2008 at 3:02 PM, Michael Bayer [EMAIL PROTECTED] wrote:

 SQLAlchemy 0.4.6 is now available at:

 http://www.sqlalchemy.org/download.html

 This release includes some fixes for some refactorings in 0.4.5,
 introduces a new collate() expression construct, and improves the
 behavior of contains_eager(), a useful ORM option.

 The 0.4 series is now in bugfix mode as we put the new features into
 0.5, which is in the current trunk.

 changelog:

 - orm
 - A fix to the recent relation() refactoring which fixes
   exotic viewonly relations which join between local and
   remote table multiple times, with a common column shared
   between the joins.

 - Also re-established viewonly relation() configurations
   that join across multiple tables.

 - contains_eager(), the hot function of the week, suppresses
   the eager loader's own generation of the LEFT OUTER JOIN,
   so that it is reasonable to use any Query, not just those
   which use from_statement().

 - Added an experimental relation() flag to help with
   primaryjoins across functions, etc.,
   _local_remote_pairs=[tuples].  This complements a complex
   primaryjoin condition allowing you to provide the
   individual column pairs which comprise the relation's
   local and remote sides.  Also improved lazy load SQL
   generation to handle placing bind params inside of
   functions and other expressions.  (partial progress
   towards [ticket:610])

 - Repaired single table inheritance such that you
   can single-table inherit from a joined-table inheriting
   mapper without issue [ticket:1036].

 - Fixed concatenate tuple bug which could occur with
   Query.order_by() if clause adaption had taken place.
   [ticket:1027]

 - Removed an ancient assertion that mapped selectables
   require alias names - the mapper creates its own alias
   now if none is present.  Though in this case you need to
   use the class, not the mapped selectable, as the source of
   column attributes - so a warning is still issued.

 - Fixes to the exists function involving inheritance
   (any(), has(), ~contains()); the full target join will be
   rendered into the EXISTS clause for relations that link to
   subclasses.

 - Restored usage of append_result() extension method for
   primary query rows, when the extension is present and only
   a single- entity result is being returned.

 - Fixed Class.collection==None for m2m relationships
   [ticket:4213]

 - Refined mapper._save_obj() which was unnecessarily calling
   __ne__() on scalar values during flush [ticket:1015]

 - Added a feature to eager loading whereby subqueries set as
   column_property() with explicit label names (which is not
   necessary, btw) will have the label anonymized when the
   instance is part of the eager join, to prevent conflicts
   with a subquery or column of the same name on the parent
   object.  [ticket:1019]

 - Same as [ticket:1019] but repaired the non-labeled use
   case [ticket:1022]

 - Adjusted class-member inspection during attribute and
   collection instrumentation that could be problematic when
   integrating with other frameworks.

 - Fixed duplicate append event emission on repeated
   instrumented set.add() operations.

 - set-based collections |=, -=, ^= and = are stricter about
   their operands and only operate on sets, frozensets or
   subclasses of the collection type. Previously, they would
   accept any duck-typed set.

 - added an example dynamic_dict/dynamic_dict.py, illustrating
   a simple way to place dictionary behavior on top of
   a dynamic_loader.

 - sql
 - Added COLLATE support via the .collate(collation)
   expression operator and collate(expr, collation) sql
   function.

 - Fixed bug with union() when applied to non-Table connected
   select statements

 - Improved behavior of text() expressions when used as FROM
   clauses, such as select().select_from(text(sometext))
   [ticket:1014]

 - Column.copy() respects the value of autoincrement, fixes
   usage with Migrate [ticket:1021]

 - engines
 - Pool listeners can now be provided as a dictionary of
   callables or a (possibly partial) duck-type of
   PoolListener, your choice.

 - Added reset_on_return option to Pool which will disable
   the database state cleanup step (e.g. issuing a
   rollback()) when connections are returned to the pool.

 -extensions
 - set-based association proxies |=, -=, ^= and = are
   stricter about their operands and only operate on sets,
   frozensets or other association proxies. Previously, they
   would accept any duck-typed set.

 - 

[sqlalchemy] Re: Polymorphic across foreign key

2008-05-15 Thread J. Cliff Dyer

On Thu, 2008-05-15 at 11:30 -0400, J. Cliff Dyer wrote:
 On Thu, 2008-05-15 at 11:24 -0400, J. Cliff Dyer wrote:
  I'm trying to implement polymorphic inheritance using the
  sqlalchemy.ext.declarative, but the field that I want to use for the
  polymorphic_on is not in my polymorphic base table, but at the other end
  of a many-to-one relationship.  We have items of many types, and in the
  item table, we have a type_id field which contains an integer, but in
  the item_type table, that integer is mapped to a descriptive name.
  Essentially, I'd like to use that name as my polymorphic_identity.  I've
  tried to implement this with declarative classes below, but I get an
  error, also shown below.
  
  

OK.  The old example was wrong in other ways.  That was a problem with
__table__ vs. __tablename__, but I got past that, and got my
polymorphism working on the basic case, where I just use the type_id
directly as follows:

class ItemType(Declarative):
__tablename__ = 'item_type'
type_id = sa.Column('type_id', sa.Integer, primary_key=True)
description = sa.Column('description', sa.Unicode(250))

class Item(Declarative):
__tablename__ = 'item'
item_id = sa.Column('item_id', sa.Integer, primary_key=True)
type_id = sa.Column('type_id', sa.Integer,
sa.ForeignKey('item_type.type_id'))
short_title = sa.Column('short_title', sa.Unicode(100))
item_type = orm.relation('ItemType', backref='items')
__mapper_args__ = {'polymorphic_on': type_id }

But I'd still like to do something like this:

__mapper_args__ = {'polymorphic_on': item_type.description }

but I get the following error:

Traceback (most recent call last):
  File "polymorph_test.py", line 28, in ?
    class Item(Declarative):
  File "polymorph_test.py", line 34, in Item
    __mapper_args__ = {'polymorphic_on': item_type.description }
AttributeError: 'PropertyLoader' object has no attribute 'description'

or from the following:

__mapper_args__ = {'polymorphic_on': ItemType.description }

Traceback (most recent call last):
  File "polymorph_test.py", line 28, in ?
    class Item(Declarative):
  File "/net/docsouth/dev/lib/python/sqlalchemy/ext/declarative.py", line 257, in __init__
    cls.__mapper__ = mapper_cls(cls, table, properties=our_stuff, **mapper_args)
  File "/net/docsouth/dev/lib/python/sqlalchemy/orm/__init__.py", line 566, in mapper
    return Mapper(class_, local_table, *args, **params)
  File "/net/docsouth/dev/lib/python/sqlalchemy/orm/mapper.py", line 181, in __init__
    self.__compile_properties()
  File "/net/docsouth/dev/lib/python/sqlalchemy/orm/mapper.py", line 684, in __compile_properties
    col = self.mapped_table.corresponding_column(self.polymorphic_on) or self.polymorphic_on
  File "/net/docsouth/dev/lib/python/sqlalchemy/sql/expression.py", line 1735, in corresponding_column
    target_set = column.proxy_set
  File "/net/docsouth/dev/lib/python/sqlalchemy/orm/mapper.py", line 638, in __getattribute__
    return getattr(getattr(cls, clskey), key)
AttributeError: 'InstrumentedAttribute' object has no attribute 'proxy_set'


How can I use this field for polymorphism?  Is it possible?







[sqlalchemy] Connecting to an MS SQL server ?

2008-05-15 Thread TkNeo

Hi,

This is my first encounter with sqlalchemy. I am trying to connect to
an MS SQL Server 2000 instance that is not on the local host. I want to
connect using Integrated Security and not use a specific username and
password. Can anyone tell me the format of the connection string?

Thanks
TK




[sqlalchemy] Re: How to specify NOLOCK queries in SA (mssql)

2008-05-15 Thread Rick Morrison
Hi Bruce,

I'm considering a switch from pymssql to pyodbc myself in the
not-too-distant future, and this thread has me a bit curious about what's
going on. This is a subject that may affect SQLA more in the future when ODBC
and JDBC drivers get more use.

I think there's two distinct questions that need to be answered to get to
the bottom of this. The first question is why are these queries being
issued at all, and from where? Like Mike says, SQLA is playing no part in
constructing or issuing these queries.

From the bit of googling that I've done so far, it seems that the FMTONLY
queries are issued behind the scenes by the data connector to fetch metadata
regarding the query. While there's a lot of reasons a data connector might
need to have metadata, there's two that seem especially likely when SQLA
comes into play:

   a) There are un-typed bind parameters in the query, and the connector
needs to know the data types for some reason.

   b) There is going to be a client-side cursor constructed, and result
metadata is needed to allocate the cursor. From the description you give, I
would bet that this is your main issue.

If the cause is (a), a fix might be problematic, as SQLA issues all of its
queries using bind parameters, and I'm not sure if type information is used
for each. But if you're using explicit bind parameters, you may want to
specify the type on those.

As for the more likely cause (b) I would think this could be gotten around
by making sure you specify firehose (read-only, forward-processing,
non-scrollable) cursors for retrieval, but I'm not sure what the pyodbc
settings for this might be. As a bonus, you'll probably see a bit of a
performance boost using these types of cursors as well.


The second question is more of a mystery to me: "OK, so the data connector
issues a FMTONLY query... if it's just fetching metadata, why would that
cause database locks?"

This one I can't figure out. Unless you're calling stored procedures or
UDFs that have locking side effects, it's got to be a bug in the data
connector.  From what I read, a FMTONLY query should be pretty fast (other
than the round-trip network time), and should lock nothing.

Are you running on Windows, or on Unix? What's your ODBC connector?

Please post to the list as you work through this and let us know what you
find...

Rick




[sqlalchemy] Re: Polymorphic across foreign key

2008-05-15 Thread Michael Bayer


On May 15, 2008, at 12:12 PM, J. Cliff Dyer wrote:



 How can I use this field for polymorphism?  Is it possible?


Polymorphic discriminators are currently table-local scalar columns.
So if you had a many-to-one of discriminators, you'd currently have to
encode the discriminator to the primary key identifier of each
discriminator.  We will eventually allow a Python function to be used
as a discriminator as well, which you can use to add a level of
abstraction to this (you'd preload the list of discriminator objects
and again map based on primary key).

But a straight load of an entity relying upon an inline JOIN to get
the polymorphic discriminator in all cases, automatically by SA, is
not going to happen - it's inefficient and would be very complex to
implement.




[sqlalchemy] Re: Challenging relationship: many-to-many, custom join, mapped dictionary, can it work???

2008-05-15 Thread Michael Bayer


On May 15, 2008, at 11:23 AM, Allen Bierbaum wrote:

 #  WOULD LIKE - #
 # Can this be done using
 # - Custom join condition on input_output_type
 # - column_mapped_collection
 #

it can be done.  Try working first with two separate relation()s using
a secondary join that filters on "input" or "output" (there's an
example of something similar in the docs); then add
column_mapped_collection in.

Alternatively, use just one relation, and build dict-like proxies on
top of it (i.e. class descriptors which return a custom object that
has a __getitem__ method, which then reads back to the main relation
to get data).  This is the more manual route but is not terribly
complex.  A not-quite-the-same-thing example of this can be seen at:
http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/examples/dynamic_dict/dynamic_dict.py
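
(A rough, untested sketch of that first step, reusing the tables and constants from
Allen's example and replacing his original mapper(Script, ...) call - two read-only
lists for now, before any dictionary collection is layered on:)

    from sqlalchemy import and_
    from sqlalchemy.orm import mapper, relation

    mapper(Script, script_table, properties={
        'input_var_list': relation(VarType,
            secondary=script_var_table,
            primaryjoin=script_table.c.id == script_var_table.c.script_id,
            secondaryjoin=and_(script_var_table.c.type_id == types_table.c.id,
                               script_var_table.c.input_output_type == INPUT_VAR),
            viewonly=True),
        'output_var_list': relation(VarType,
            secondary=script_var_table,
            primaryjoin=script_table.c.id == script_var_table.c.script_id,
            secondaryjoin=and_(script_var_table.c.type_id == types_table.c.id,
                               script_var_table.c.input_output_type == OUTPUT_VAR),
            viewonly=True),
        })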




[sqlalchemy] Default collection class for unordered relations

2008-05-15 Thread Nick Murphy

Hello Group,

After looking over the 0.5 migration notes and seeing that implicit
ordering is to be removed, it seems to me that it might make sense to
change the default collection class for unordered relations from a
list to a multiset.  This would reinforce that unless order_by is
specified, one cannot rely on a collection having a particular order.
I could see this as helpful in preventing bugs when switching between
databases (e.g. from MySQL to PostgreSQL) with different default
ordering behaviors.

Regards,
Nick



[sqlalchemy] processing.Process and sqlalchemy

2008-05-15 Thread Craig Swank

Hello,
I am playing around with loading data into a database using subclasses
of processing.Process so that data files can be sent to a server.  I
would like to have the server fire off a process that takes care of
parsing the data file and loading the data into the database.

When I run some tests that start up a few of these processes at once,
I run into OperationalErrors due to the database being locked by one
of the other processes.

Does anyone have suggestions on how to handle this?  I am pretty new
to sqlalchemy, so please forgive me if I'm missing something basic.




[sqlalchemy] Re: Default collection class for unordered relations

2008-05-15 Thread az


 After looking over the 0.5 migration notes and seeing that implicit
 ordering is to be removed, it seems to me that it might make sense
 to change the default collection class for unordered relations from
 a list to a multiset.  This would reinforce that unless order_by is
 specified, one cannot rely on a collection having a particular
 order. I could see this as helpful in preventing bugs when
 switching between databases (e.g. from MySQL to PostgreSQL) with
 different default ordering behaviors.
Mmh, between DBs - maybe you're right. But the order will also change
depending on current hash values between two runs on an otherwise identical
system... There are plenty of difficulties in getting a repeatable flow for
tests etc. already.





[sqlalchemy] Re: Default collection class for unordered relations

2008-05-15 Thread Nick Murphy

 mmh. between db's - maybe u're right. But the order will also change
 depending on current hash-values between 2 runs on otherwise same
 system... There's plenty of difficulties to get a repeatable flow for
 tests etc already.

That's exactly my point in fact -- unless order_by is specified, a
collection with a defined order is illogical.  Using an unordered
multiset instead would ensure that users don't accidentally rely on
this behavior.  Of course, if order_by is given, a list makes perfect
sense.



[sqlalchemy] Re: Default collection class for unordered relations

2008-05-15 Thread Michael Bayer


On May 15, 2008, at 12:51 PM, Nick Murphy wrote:


 Hello Group,

 After looking over the 0.5 migration notes and seeing that implicit
 ordering is to be removed, it seems to me that it might make sense to
 change the default collection class for unordered relations from a
 list to a multiset.  This would reinforce that unless order_by is
 specified, one cannot rely on a collection having a particular order.
 I could see this as helpful in preventing bugs when switching between
 databases (e.g. from MySQL to PostgreSQL) with different default
 ordering behaviors.


It is worth considering, though I'm not sure I like "the implicit
order_by changes the default collection class", only because it might
get kind of confusing.

If we had a totally explicit "collection class is required" approach,
that would be something different (like, can't use list as a
collection unless order_by is present).  We might just say in any case
that order_by is required with list... but then that might be too
steep a change for 0.4 to 0.5, and also it gets a little weird in that
we don't necessarily know how list-like or set-like the collection is.
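
(For what it's worth, both ends of that spectrum are already expressible today -
a small sketch with made-up parent/child tables, not a proposal for new API:)

    from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey
    from sqlalchemy.orm import mapper, relation

    metadata = MetaData()
    parent_table = Table('parent', metadata,
        Column('id', Integer, primary_key=True))
    child_table = Table('child', metadata,
        Column('id', Integer, primary_key=True),
        Column('parent_id', Integer, ForeignKey('parent.id')),
        Column('position', Integer))

    class Parent(object): pass
    class Child(object): pass

    mapper(Child, child_table)
    mapper(Parent, parent_table, properties={
        # explicit ordering: a list collection is meaningful
        'children_ordered': relation(Child, order_by=child_table.c.position),
        # no ordering implied: say so with a set collection
        'children_unordered': relation(Child, collection_class=set),
        })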






[sqlalchemy] Re: Default collection class for unordered relations

2008-05-15 Thread Nick Murphy

 if we had a totally explicit collection class is required approach,
 that would be something different (like, cant use list as a
 collection unless order_by is present).  We might just say in any case
 that order_by is required with listbut then that might be too
 steep a change for 0.4 to 0.5, and also it gets a little weird in that
 we dont necessarily know how well the collection is list-like/set-like.

Yeah, it's definitely tricky, and I could certainly see the behavior
as confusing to new users.  The pedant in me tends to think that lists
are appropriate only when there is an explicit ordering for a
collection -- as with orderinglist -- and that sets/multisets should
be used in most other circumstances; if an arbitrary order is needed,
the collection can be sorted in Python.  That said, it's really
convenient to not have to worry about that, even if it's not 100%
correct.



[sqlalchemy] Re: Default collection class for unordered relations

2008-05-15 Thread jason kirtland

Nick Murphy wrote:
 mmh. between db's - maybe u're right. But the order will also change
 depending on current hash-values between 2 runs on otherwise same
 system... There's plenty of difficulties to get a repeatable flow for
 tests etc already.
 
 That's exactly my point in fact -- unless order_by is specified, a
 collection with a defined order is illogical.  Using an unordered
 multiset instead would ensure that users don't accidentally rely on
 this behavior.  Of course, if order_by is given, a list makes perfect
 sense.

Logic that depends on any ordering from a non-ORDER BY result is a bug, 
but I don't know that the impact of presenting all users with a new, 
non-standard, non-native collection type and injecting some kind of 
__eq__ into mapped classes to satisfy a multiset contract is worth it 
for what amounts to nannying.  Not to mention that unless the 
implementation did something really silly like rand() its internal 
ordering for each __iter__ call, it doesn't offer a huge safety 
improvement for the common case of 'for x in obj.collection: ...'





[sqlalchemy] Re: Connecting to an MS SQL server ?

2008-05-15 Thread Yannick Gingras

TkNeo [EMAIL PROTECTED] writes:

 Hi,

Hello Tarun, 

 This is my first encounter with sqlalchemy. I am trying to connect to
 an MS SQL server 2000 that is not on local  host. I want to connect
 using Integrated Security and not use a specific username and
 password. Can anyone tell me the format of the connection string ?

I don't know about Integrated Security, but we use alchemy to connect
to an MSSQL server from a GNU/Linux box and it works really well.  We use
unixODBC with TDS, with the DSN registered with the local ODBC.

Take a look at

http://www.lucasmanual.com/mywiki/TurboGears#head-4a47fe38beac67d9d03e49c4975cfc3dd165fa31

My odbc.ini looks like

 [JDED]
 Driver  = TDS
 Trace   = No
 Server  = 192.168.33.53
 Port= 1433

and my alchemy connection string is

  mssql://user:pass@/?dsn=JDED&scope_identity=1
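
(In other words, the engine ends up being created roughly like so - same URL as
above, with "JDED" being the DSN name from odbc.ini:)

    from sqlalchemy import create_engine
    engine = create_engine("mssql://user:pass@/?dsn=JDED&scope_identity=1")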

-- 
Yannick Gingras




[sqlalchemy] Re: Default collection class for unordered relations

2008-05-15 Thread Rick Morrison
I think Jason hits the nail on the head with his response - my first
reaction on the initial post was that it was splitting hairs to enforce the
difference between an ordered list and an (allegedly) unordered list, but I
thought it was going to be a non-starter until I read Mike's reply. It seems
like overhead and unneeded complexity for what is really just a reminder.
And as his iterative example points out, not a very effective reminder at
that.




[sqlalchemy] Re: Default collection class for unordered relations

2008-05-15 Thread Michael Bayer

It should be considered that when you use Hibernate, the collection
type is explicit with the collection mapping itself; and when you use
the list type, a "list-index" is required (which is also a much
better name here than order_by).  So there is the notion that using
a list should at all times have a list-index, and it is more
correct for sure.  And this does bug me.  I just am not sure if
we're there yet as far as this kind of setting, if we want to force
that level of explicitness in a Python lib, etc.  It's something of a
cultural decision.


On May 15, 2008, at 2:06 PM, Rick Morrison wrote:

 I think Jason hits the nail on the head with his response - my first  
 reaction on the initial post was that was splitting hairs to enforce  
 the difference between an ordered list and an (allegedly) unordered  
 list, but I thought it was going to be a non-starter until I read  
 Mike's reply. It seems like overhead and unneeded complexity for  
 what is really just a reminder. And as his iterative example points  
 out, not a very effective reminder at that.


 





[sqlalchemy] Re: Connecting to an MS SQL server ?

2008-05-15 Thread Rick Morrison
The DSN method should work with Integrated Security as well. Here's a short
writeup of the DSN configuration:

   http://support.microsoft.com/kb/176378




[sqlalchemy] Re: Polymorphic across foreign key

2008-05-15 Thread J. Cliff Dyer

On Thu, 2008-05-15 at 12:27 -0400, Michael Bayer wrote:
 
 On May 15, 2008, at 12:12 PM, J. Cliff Dyer wrote:
 
 
 
  How can I use this field for polymorphism?  Is it possible?
 
 
 polymorphic discriminators are currently table-local scalar columns.   
 So if you had a many-to-one of discriminators, youd currently have to  
 encode the discriminator to the primary key identifier of each  
 discriminotor.  We will eventually allow a python function to be used  
 as a discriminator as well which you can use to add a level of  
 abstraction to this (you'd preload the list of discriminiator objects  
 and again map based on primary key).
 
 But a straight load of an entity relying upon an inline JOIN to get  
 the polymorphic discriminator in all cases, automatically by SA, is  
 not going to happen - its inefficient and would be very complex to  
 implement.
 

Thanks.  That makes sense.  I guess if you can pull across a foreign
key, you can pull across any arbitrary query.  

My goal was to have the code self-documenting in this respect, and I've
achieved that by creating a helper function get_type_id(type_name), so
each polymorphic child of Item now has 

{'polymorphic_identity': get_type_id(u'Article')}
{'polymorphic_identity': get_type_id(u'Illustration')}
{'polymorphic_identity': get_type_id(u'Map')}

and so on.  It seems to be working so far.
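
(A helper along those lines could be as small as the following sketch - not
necessarily Cliff's actual code; it assumes the declarative ItemType class from
earlier in the thread and a session that is usable at class-definition time:)

    def get_type_id(type_name):
        # resolve a descriptive name to its integer type_id, so the mapper
        # args stay readable while the discriminator stays a plain integer
        item_type = session.query(ItemType).filter_by(description=type_name).one()
        return item_type.type_id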

Thanks,
Cliff



  





[sqlalchemy] Re: Default collection class for unordered relations

2008-05-15 Thread Nick Murphy

 Logic that depends on any ordering from a non-ORDER BY result is a bug,
 but I don't know that the impact of presenting all users with a new,
 non-standard, non-native collection type and injecting some kind of
 __eq__ into mapped classes to satisfy a multiset contract is worth it
 for what amounts to nannying.  Not to mention that unless the
 implementation did something really silly like rand() its internal
 ordering for each __iter__ call, it doesn't offer a huge safety
 improvement for the common case of 'for x in obj.collection: ...'

I have to disagree: it's hardly nannying as much as it is representing
the underlying reality with greater fidelity.  Relations in SQL are by
definition unordered, so there's something of a logical mismatch in
representing them with a type for which order is defined.



[sqlalchemy] Re: Default collection class for unordered relations

2008-05-15 Thread jason kirtland

Nick Murphy wrote:
 Logic that depends on any ordering from a non-ORDER BY result is a bug,
 but I don't know that the impact of presenting all users with a new,
 non-standard, non-native collection type and injecting some kind of
 __eq__ into mapped classes to satisfy a multiset contract is worth it
 for what amounts to nannying.  Not to mention that unless the
 implementation did something really silly like rand() its internal
 ordering for each __iter__ call, it doesn't offer a huge safety
 improvement for the common case of 'for x in obj.collection: ...'
 
 I have to disagree: it's hardly nannying as much as it is representing
 the underlying reality with greater fidelity.  Relations in SQL are by
 definition unordered, so there's something of an logical mismatch in
 representing them with a type for which order is defined.

There's no disagreement from me on that.  I just don't see purity 
winning out over practicality here.




[sqlalchemy] allow_column_override

2008-05-15 Thread Chris Guin
My goal is to have a one-to-many relation defined using the same name
as the foreign key column underneath.  So that, if my "Detection"
table has a foreign-key column named "sensor", the following mappers
should work, I think:

mapper(Sensor, sensor)
detectionmapper = mapper(Detection, detection,
    allow_column_override=True, properties={
        'sensor': relation(Sensor),
    })

I'm getting the following exception, however, when I actually create 
a Detection with a Sensor and try to flush the session:

Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\scoping.py", line 98, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\session.py", line 757, in flush
    self.uow.flush(self, objects)
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 233, in flush
    flush_context.execute()
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 445, in execute
    UOWExecutor().execute(self, tasks)
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 930, in execute
    self.execute_save_steps(trans, task)
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 948, in execute_save_steps
    self.execute_dependencies(trans, task, False)
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 959, in execute_dependencies
    self.execute_dependency(trans, dep, False)
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 942, in execute_dependency
    dep.execute(trans, isdelete)
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 895, in execute
    self.processor.process_dependencies(self.targettask, [elem.state for elem in self.targettask.polymorphic_tosave_elements if elem.state is not None], trans, delete=False)
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\dependency.py", line 332, in process_dependencies
    self._synchronize(state, child, None, False, uowcommit)
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\dependency.py", line 374, in _synchronize
    sync.populate(child, self.mapper, state, self.parent, self.prop.synchronize_pairs)
  File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\sync.py", line 27, in populate
    self._raise_col_to_prop(True, source_mapper, l, dest_mapper, r)
NameError: global name 'self' is not defined


I am still using SQLAlchemy 0.4.5.

Thanks for any help!
Chris 



[sqlalchemy] SA with ORACLE connect

2008-05-15 Thread sbhatt

Hi All,

I want to write a query in SA which uses Oracle's CONNECT BY along with
joins on other tables.
The query will be:
SELECT grouprelation.grouprelationid AS grouprelation_grouprelationid,
grouprelation.parentgroupid AS grouprelation_parentgroupid,
grouprelation.childgroupid AS grouprelation_childgroupid FROM
grouprelation, grouptable CONNECT BY childgroupid = PRIOR
parentgroupid START WITH childgroupid = 91 AND
grouprelation.parentgroupid = grouptable.groupid AND
grouptable.wfstatus != 'I'

I tried writing it in SA as follows:

connectstring = "connect by childgroupid = prior parentgroupid start with childgroupid = 91"

query = session.query(GroupRelation).filter(sql.and_(
    GroupRelation.c.parentgroupid == GroupTable.c.groupid,
    GroupTable.c.wfstatus != 'I',
    connectstring))

compiles to

SELECT grouprelation.grouprelationid AS grouprelation_grouprelationid,
grouprelation.parentgroupid AS grouprelation_parentgroupid,
grouprelation.childgroupid AS grouprelation_childgroupid FROM
grouprelation, grouptable WHERE grouprelation.parentgroupid =
grouptable.groupid AND grouptable.wfstatus != 'I' AND connect by
childgroupid = prior parentgroupid start with childgroupid = 91

which is erroneous, because whatever I put in filter() is compiled into the
WHERE clause, which is not where CONNECT BY belongs, and gives an
"ORA-00936: missing expression" error.

The only way I can see as of now will be to write the whole query in
from_statement, which will be my last resort.
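
(For reference, that last-resort form would look roughly like the following sketch,
feeding the hand-written SQL from the top of this message through from_statement();
it assumes the GroupRelation mapping produces matching column labels:)

    from sqlalchemy import text

    hierarchy_sql = text("""
        SELECT grouprelation.grouprelationid AS grouprelation_grouprelationid,
               grouprelation.parentgroupid AS grouprelation_parentgroupid,
               grouprelation.childgroupid AS grouprelation_childgroupid
        FROM grouprelation, grouptable
        CONNECT BY childgroupid = PRIOR parentgroupid
        START WITH childgroupid = 91
            AND grouprelation.parentgroupid = grouptable.groupid
            AND grouptable.wfstatus != 'I'
    """)

    results = session.query(GroupRelation).from_statement(hierarchy_sql).all()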

Does anyone know how to write it using SA ?

Thanks,
sbhatt




[sqlalchemy] Re: Challenging relationship: many-to-many, custom join, mapped dictionary, can it work???

2008-05-15 Thread Allen Bierbaum

On Thu, May 15, 2008 at 11:37 AM, Michael Bayer
[EMAIL PROTECTED] wrote:
 On May 15, 2008, at 11:23 AM, Allen Bierbaum wrote:
 #  WOULD LIKE - #
 # Can this be done using
 # - Custom join condition on input_output_type
 # - column_mapped_collection
 #

 it can be done.  Try working first with two separate relation()s using
 a secondary join that filters on input or output (theres an
 example of something similar in the docs); then add
 column_mapped_collection in.

Will do.

The thing that I am missing though is an example of using
column_mapped_collection with a many-to-many relationship.  Maybe I am
just a bit slow, but I can't wrap my head around how to make that work
or more specifically, how to specify it.

Thanks,
Allen




[sqlalchemy] Re: allow_column_override

2008-05-15 Thread Michael Bayer

On May 15, 2008, at 3:23 PM, Chris Guin wrote:

 My goal is to have a one-to-many relation defined using the same  
 name as the foreign key column underneath.  So that, if my  
 Detection table has a foreignkey column named sensor, the  
 following mappers should work, I think:

allow_column_override is not used for this; it's used to entirely
obliterate the knowledge of the underlying Column so that you can
place a relation() there instead.  It's also deprecated, since the same
effect can be achieved with exclude_columns (thanks for reminding me
so I can remove it from 0.5).

Since you don't want to obliterate the column and you actually need it,
do this:

mapper(Class, mytable, properties={
    '_actual_foreign_key_column': mytable.c.sensor,
    'sensor': relation(Sensor),
})






 mapper(Sensor, sensor)
 detectionmapper = mapper(Detection, detection,  
 allow_column_override= True , properties={
 'sensor' : relation(Sensor),
 })

 I'm getting the following exception, however, when I actually create  
 a Detection with a Sensor and try to flush the session:

 Traceback (most recent call last):
   File "<console>", line 1, in <module>
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\scoping.py", line 98, in do
     return getattr(self.registry(), name)(*args, **kwargs)
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\session.py", line 757, in flush
     self.uow.flush(self, objects)
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 233, in flush
     flush_context.execute()
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 445, in execute
     UOWExecutor().execute(self, tasks)
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 930, in execute
     self.execute_save_steps(trans, task)
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 948, in execute_save_steps
     self.execute_dependencies(trans, task, False)
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 959, in execute_dependencies
     self.execute_dependency(trans, dep, False)
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 942, in execute_dependency
     dep.execute(trans, isdelete)
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\unitofwork.py", line 895, in execute
     self.processor.process_dependencies(self.targettask, [elem.state for elem in self.targettask.polymorphic_tosave_elements if elem.state is not None], trans, delete=False)
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\dependency.py", line 332, in process_dependencies
     self._synchronize(state, child, None, False, uowcommit)
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\dependency.py", line 374, in _synchronize
     sync.populate(child, self.mapper, state, self.parent, self.prop.synchronize_pairs)
   File "c:\python25\lib\site-packages\SQLAlchemy-0.4.5-py2.5.egg\sqlalchemy\orm\sync.py", line 27, in populate
     self._raise_col_to_prop(True, source_mapper, l, dest_mapper, r)
 NameError: global name 'self' is not defined


 I am still using SQLAlchemy 0.4.5.

 Thanks for any help!
 Chris
 





[sqlalchemy] Re: Challenging relationship: many-to-many, custom join, mapped dictionary, can it work???

2008-05-15 Thread Michael Bayer


On May 15, 2008, at 3:54 PM, Allen Bierbaum wrote:

 The thing that I am missing though is an example of using
 column_mapped_collection with a many-to-many relationship.  Maybe I am
 just a bit slow, but I can't wrap my head around how to make that work
 or more specifically, how to specify it.


it shouldn't be any different than using it with a one-to-many  
relationship.  Unless you have additional columns in your many-to-many  
table, in which case you wouldn't use M2M, you'd use the association  
pattern.
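
For illustration, a rough and entirely hypothetical sketch of the 'input'
side of that suggestion (only the input_output_type table name comes from
the thread; the other names, the schema, and the 'direction' column are
invented, and the relation is marked viewonly since writes through a
filtered secondary would really call for the association-object pattern):

from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey, and_
from sqlalchemy.orm import mapper, relation
from sqlalchemy.orm.collections import column_mapped_collection

metadata = MetaData()
asset_table = Table('asset', metadata,
    Column('id', Integer, primary_key=True))
datum_table = Table('datum', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)))
input_output_type = Table('input_output_type', metadata,
    Column('asset_id', Integer, ForeignKey('asset.id'), primary_key=True),
    Column('datum_id', Integer, ForeignKey('datum.id'), primary_key=True),
    Column('direction', String(10), primary_key=True))

class Asset(object): pass
class Datum(object): pass

mapper(Datum, datum_table)
mapper(Asset, asset_table, properties={
    # dict-like collection keyed on datum.name, restricted to 'input' rows
    'inputs' : relation(Datum,
        secondary=input_output_type,
        primaryjoin=asset_table.c.id == input_output_type.c.asset_id,
        secondaryjoin=and_(input_output_type.c.datum_id == datum_table.c.id,
                           input_output_type.c.direction == 'input'),
        collection_class=column_mapped_collection(datum_table.c.name),
        viewonly=True),
})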






[sqlalchemy] Re: SA with ORACLE connect

2008-05-15 Thread Michael Bayer


On May 15, 2008, at 3:53 PM, sbhatt wrote:


 Does anyone know how to write it using SA ?


CONNECT BY goes where things like ORDER BY and GROUP BY go, so it is not  
going to work within the WHERE clause.  It is as yet unsupported by  
the select() construct, so you'd have to go with from_statement() for  
now.
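
A minimal sketch of the from_statement() route (assuming a Group class
mapped to grouptable and an existing engine; the column names are taken
from the query quoted earlier in the thread):

from sqlalchemy import text
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)
session = Session()

# hand the hierarchical query to Oracle verbatim; SA only maps the rows back
hierarchy_sql = text("""
    SELECT g.*
    FROM grouptable g, grouprelation r
    WHERE r.parentgroupid = g.groupid AND g.wfstatus != 'I'
    CONNECT BY r.childgroupid = PRIOR r.parentgroupid
    START WITH r.childgroupid = :start_id
""")

groups = session.query(Group).from_statement(hierarchy_sql).params(start_id=91).all()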




[sqlalchemy] Re: Connecting to an MS SQL server ?

2008-05-15 Thread TkNeo

I don't want to use the DSN method. The DSN would not be configured on
some client machines, etc.

TK


On May 15, 1:30 pm, Rick Morrison [EMAIL PROTECTED] wrote:
 The DSN method should work with Integrated Security as well. Here's a short
 writeup of the DSN configuration:

http://support.microsoft.com/kb/176378



[sqlalchemy] Re: Connecting to an MS SQL server ?

2008-05-15 Thread Rick Morrison
You really should reconsider. DSN is a much easier setup method than trying
to specify a myriad of ODBC options in a connection string. There's a GUI
user interface for setting up DSNs, etc. It's the simpler and
better-supported method.

If you really are dead-set against it, you'll need to use the 'odbc_options'
keyword in the dburi or as a keyword argument to create_engine() and specify
the ODBC connection options as a string.
For the details of the ODBC connection string contents, you'll need to
consult the ODBC documentation.
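
A guess at what that might look like (SA 0.4.6+ with pyodbc; the ODBC
keywords shown here assume the native 'SQL Server' driver with integrated
security, and the actual option string is driver-specific):

from sqlalchemy import create_engine

e = create_engine('mssql://:@myserver/mydb',
                  odbc_options='DRIVER={SQL Server};Trusted_Connection=yes')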




[sqlalchemy] Re: Connecting to an MS SQL server ?

2008-05-15 Thread Olivier Thiery
The following works for me:

mssql://:@host/catalog

It seems to automatically use SSPI auth if you don't specify a user and
password.
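
In create_engine form (a minimal sketch; 'myserver' and 'mydb' are
placeholders):

from sqlalchemy import create_engine

# no user/password in the URL, so the driver falls back to integrated (SSPI) auth
e = create_engine('mssql://:@myserver/mydb')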

Olivier

2008/5/15, TkNeo [EMAIL PROTECTED]:


 I don't want to use the DSN method. The DSN would not be configured at
 some client machines etc etc. ..

 TK



 On May 15, 1:30 pm, Rick Morrison [EMAIL PROTECTED] wrote:
  The DSN method should work with Integrated Security as well. Here's a
 short
  writeup of the DSN configuration:
 
 http://support.microsoft.com/kb/176378
 





[sqlalchemy] declarative_base and UNIQUE Constraint

2008-05-15 Thread Carlos Hanson

Greetings,

I just started playing with declarative_base.  I have one table on
which I would like to have a unique constraint on two columns.  Is this
possible using declarative?

-- 
Carlos Hanson




[sqlalchemy] Re: declarative_base and UNIQUE Constraint

2008-05-15 Thread Carlos Hanson

On Thu, May 15, 2008 at 3:43 PM, Carlos Hanson [EMAIL PROTECTED] wrote:
 Greetings,

 I just started playing with declarative_base.  I have one table on
 which I would like to have a unique constraint on two columns.  Is this
 possible using declarative?

 --
 Carlos Hanson


I think there are two solutions.  The first is to use a direct Table
construct within the class definition:

  class MyClass(Base):
__table__ = Table('my_class_table', Base.metadata, ...

The second is to apply a unique index after the fact:

  my_index = Index('myindex', MyClass.col1, MyClass.col2, unique=True)

I like the first solution, so I'm not going to test the second today.
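
Fleshed out, the first approach might look roughly like this (the column
names and the constraint name are made up):

from sqlalchemy import Table, Column, Integer, String, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MyClass(Base):
    # the UniqueConstraint goes straight into the Table definition
    __table__ = Table('my_class_table', Base.metadata,
        Column('id', Integer, primary_key=True),
        Column('col1', String(50)),
        Column('col2', String(50)),
        UniqueConstraint('col1', 'col2', name='uq_my_class_col1_col2'),
    )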

-- 
Carlos Hanson




[sqlalchemy] Re: Connecting to an MS SQL server ?

2008-05-15 Thread Lukasz Szybalski

On Thu, May 15, 2008 at 12:51 PM, Yannick Gingras [EMAIL PROTECTED] wrote:

 TkNeo [EMAIL PROTECTED] writes:

 Hi,

 Hello Tarun,

 This is my first encounter with sqlalchemy. I am trying to connect to
 an MS SQL server 2000 that is not on local  host. I want to connect
 using Integrated Security and not use a specific username and
 password. Can anyone tell me the format of the connection string ?

 I don't know about Integrated Security, but we use SQLAlchemy to connect
 to an MSSQL server from a GNU/Linux box and it works really well.  We use
 unixODBC with TDS, with the DSN registered with the local ODBC.

 Take a look at

 http://www.lucasmanual.com/mywiki/TurboGears#head-4a47fe38beac67d9d03e49c4975cfc3dd165fa31

 My odbc.ini looks like

  [JDED]
  Driver  = TDS
  Trace   = No
  Server  = 192.168.33.53
  Port= 1433

 and my alchemy connection string is

  mssql://user:pass@/?dsn=JDED&scope_identity=1



Not sure what platform you are using, but on Linux I use:
e = sqlalchemy.create_engine("mssql://user:[EMAIL PROTECTED]:1433/database?driver=TDS&odbc_options='TDS_Version=8.0'")
but you need SA 0.4.6.

On Windows you can use:
e = sqlalchemy.create_engine('mssql://user:[EMAIL PROTECTED]:1433/database')

Lucas
