[sqlalchemy] select nextval - None

2010-02-15 Thread Christoph Haas
Dear list,

I have been upgrading an application I wrote with SQLAlchemy 0.4 to
0.5.8. Currently I'm fighting a strange effect. In some places of my
application I can't insert data into the database any more. I get error
messages like:

IntegrityError: (IntegrityError) null value in column "myobject_id"
violates not-null constraint 'INSERT INTO myobject (id, related_id,
text) VALUES (%(id)s, %(related_id)s, %(text)s)' {'related_id': 1, 'id':
None, 'text': 'The quick brown fox'}

So apparently SQLAlchemy tries to insert a new row with the id field
being None. This is obviously wrong because the field is supposed to
contain the next serial value. I enabled debugging and saw that before
the insert SQLAlchemy runs:

___SQL___ sqlalchemy.engine.base.Engine.0x...3b8c: select
  nextval('myobject_id_seq')
___SQL___ sqlalchemy.engine.base.Engine.0x...3b8c: None

I was curious and ran that SELECT statement in a psql shell directly
on the database server and correctly received:

mydatabase=# select nextval('myobject_id_seq');
 nextval
---------
     358
(1 row)

After hours of digging around in my code, reinstalling psycopg2 and
writing test cases I'm stuck somehow. How come SQLAlchemy fails to find
the next ID of that column?
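
To illustrate the setup, here is a minimal sketch (not my literal code; the
column names are taken from the error message above) of a table whose id is
supposed to come from that sequence:

import sqlalchemy as sql

metadata = sql.MetaData()

myobject_table = sql.Table(
    'myobject', metadata,
    # On PostgreSQL an Integer primary key gets a serial/sequence default,
    # which SQLAlchemy 0.5 normally pre-executes as "select nextval(...)".
    sql.Column('id', sql.Integer, sql.Sequence('myobject_id_seq'),
               primary_key=True),
    sql.Column('related_id', sql.Integer),
    sql.Column('text', sql.Unicode(100)),
)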

Thanks in advance for any insight.

Regards
 Christoph





[sqlalchemy] Re: One-to-many relation fails with unsaved, pending instance and is an orphan

2009-05-05 Thread Christoph Haas
Michael, thanks a lot for your reply. I haven't yet understood your 
explanation completely so please allow me to ask further.

Am Montag, 4. Mai 2009 23:01:01 schrieb Michael Bayer:
 the key to the problem is in the traceback:

 Traceback (most recent call last):
   File "test.py", line 80, in <module>
     item.logbookentries.append(logbookentry)
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py",
 line 159, in __get__
     return self.impl.get(instance_state(instance))
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py",
 line 375, in get
     value = callable_()
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/strategies.py",
 line 568, in __call__
     result = q.all()
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line
 1193, in all
     return list(self)
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line
 1286, in __iter__
     self.session._autoflush()
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py",
 line 899, in _autoflush
     self.flush()
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py",
 line 1356, in flush
     self._flush(objects)
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py",
 line 1413, in _flush
     mapperutil.state_str(state), path))
 sqlalchemy.orm.exc.FlushError: Instance <LogbookEntry at 0xdeaa70> is an
 unsaved, pending instance and is an orphan (is not attached to any
 parent 'Item' instance via that classes' 'logbookentries' attribute)

 at the point of adding the object to item.logbookentries, autoflush is
 invoked.  You can see in the stack trace that it is occurring before
 the append() ever happens.   Autoflush is attempting to flush your
 LogbookEntry which has been added to the session by attaching it to the
 user object.  Both user and item are already persistent so that's
 why you get live database activity when touching them.

Above the logbook thingy I run these statements:

# Create a user...
Session.save(user)
# Create an item...
Session.save(item)
# Save the previously created objects into the database...
Session.commit()

So my Session.commit() should do the database action and create one row 
for the user and one row for the item. So why is there a problem with the 
autoflushing? SQLAlchemy could save a new logbookentry to the database 
referring via foreign keys to the user and item rows in their respective 
tables. But the error message says that the LogbookEntry does not have any 
connection to an item. Why not? I'm at that very moment creating a 
connection by running

item.logbookentries.append(logbookentry)

 The most straightforward way to prevent premature addition of your entry
 to the session via attachment to the user is to disable cascade on the
 user.logbookentries relation:

 'logbookentries':orm.relation(LogbookEntry,
 backref=orm.backref('user', uselist=False),
 cascade="none"
 ),

 this should be fine as you will always be associating a LogbookEntry
 with an item, which will take care of cascading it into the session.

Don't I need a cascade here? If the user of a certain logbook entry is 
removed then I need to set the logbook_table.user_id to None. Or don't I?

And my other question remains, too: why has this been working in 0.4?

Sorry for the followup but I want to learn why this is happening because 
I'm a big fan of SQLAlchemy and fear failing at such relations in a bigger 
context. Thanks!

 Christoph





[sqlalchemy] One-to-many relation fails with unsaved, pending instance and is an orphan

2009-05-04 Thread Christoph Haas
Dear list,

I have an SQLAlchemy problem that has ruined my last weekend. So I have 
reduced the problem to the bare minimum and thought I'd ask here on what my 
fault is. :)

Imagine a (web) application that has Users (employees) and Items (on the 
shelf). Like in an online shop. Now there's a logbook that records every 
change that happens over time. Like a User has taken an Item and done 
something to it. So a logbook entry is both connected (many-to-one) to the 
Users and the Items. So I can find out what a certain User did 
(User.logbookentries) or see what has happened to a certain Item 
(Item.logbookentries). In the reverse way I can see which User and which 
Item a logbookentry refers to.

Allow me to show you my example code (you should be able to run it like 
that and immediately reproduce the problem):

=
#!/usr/bin/env python  
# -*- coding: utf-8 -*-

import sqlalchemy as sql
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy import MetaData, create_engine, orm

Session = scoped_session(sessionmaker())
metadata = MetaData()   

# Define the tables
users_table = sql.Table(
'users', metadata,  
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('name', sql.Unicode()),  
)   

items_table = sql.Table(
'items', metadata,  
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('name', sql.Unicode),
)   

logbook_table = sql.Table(
'logbook', metadata,  
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('item_id', sql.Integer, sql.ForeignKey('items.id')),
sql.Column('user_id', sql.Integer, sql.ForeignKey('users.id')),
sql.Column('text', sql.Unicode(100)),  
)  

# Define the classes for ORM mapping
class User(object): pass
class Item(object): pass
class LogbookEntry(object): pass

# ORM mapping
orm.mapper(User, users_table,
properties={
'logbookentries':orm.relation(LogbookEntry,
# either works without this backref:
backref=orm.backref('user', uselist=False),
# or works if this becomes cascade="all, delete-orphan":
),
}
)

orm.mapper(Item, items_table,
properties={
'logbookentries':orm.relation(LogbookEntry,
cascade="all, delete-orphan"),
}
)

orm.mapper(LogbookEntry, logbook_table)

# Connect to the database
engine = create_engine('sqlite:///mystuff.sqlite', echo=True)
Session.configure(bind=engine)

# Create database schema
metadata.create_all(bind=engine)

# Create a user
user = User()
user.name=u'SomeUser'
Session.save(user)

# Create an item
item = Item()
item.name=u'SomeItem'
Session.save(item)

# Save the previously created objects into the database
Session.commit()

# Create a logbook entry
logbookentry = LogbookEntry()
logbookentry.text = u'SomeLogText'

# Connect the LogbookEntry instance to the User and Item
logbookentry.user = user
item.logbookentries.append(logbookentry)

Session.commit()
=

What actually happens when I run this code is this error message:

sqlalchemy.orm.exc.FlushError: Instance <LogbookEntry at 0x91f5b2c> is an 
unsaved, pending instance and is an orphan (is not attached to any parent 
'Item' instance via that classes' 'logbookentries' attribute)

I wasn't sure why this happens. After all I have connected the LogbookEntry 
to the Item by saying item.logbookentries.append(logbookentry) so the 
LogbookEntry isn't orphaned at all.

There were two remedies that made this code work. The first one was to 
remove the line

backref=orm.backref('user', uselist=False),

from the User.logbookentries mapper. And the second one was defining the 
cascade as

cascade="all, delete-orphan"

instead of

cascade="all"
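
To make the second remedy concrete, the User mapper in my working variant
looks like this (just the snippet from the example above with the cascade
changed):

orm.mapper(User, users_table,
properties={
'logbookentries':orm.relation(LogbookEntry,
backref=orm.backref('user', uselist=False),
cascade="all, delete-orphan"),
}
)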

Does SQLAlchemy want to tell me that a backref wouldn't work unless I 
enforce a user entry to be there by using a delete cascade from LogbookEntry 
to User? After all if I delete a User then the backref would point nowhere.

The reason I don't use 'delete-orphan' here is that a User (employee) could 
get fired and deleting the User from the database would automatically kill 
all the LogbookEntrys. But I want to preserve the logbook even if the 
referring User is gone. I would expect the logbook.user_id to be None if 
the User is gone.

The funny fact is that this code works well with SQLAlchemy 0.4.8 but fails 
on 0.5.3. And even if SQLAlchemy wants to save me from doing something 
illogical I had expected another error message telling me that my cascade 
without delete-orphan is useless if I want to have a backref on the 
'user'.

Is this a case of an improvable error message? Or have I 

[sqlalchemy] Re: ORM base class for 0.5?

2009-01-13 Thread Christoph Haas
Thanks for the code. For those who might also be interested in an ORM base
class providing __init__, update and __repr__ - this is what I use now with
0.5 (comments welcome):

=
import sqlalchemy as sql
from sqlalchemy import orm

class MyOrm(object):
    def __init__(self, ignore_missing_columns=True, **kw):
        """Create a mapped object with preset attributes"""
        for key, value in kw.iteritems():
            if hasattr(self, key):
                setattr(self, key, value)
            elif not ignore_missing_columns:
                raise AttributeError('Cannot set attribute which is not '
                                     'column in mapped table: %s' % (key,))

    def update(self, update_dict, ignore_missing_columns=True):
        """Update an object's attributes from a dictionary"""
        for key, value in update_dict.iteritems():
            if hasattr(self, key):
                setattr(self, key, value)
            elif not ignore_missing_columns:
                raise AttributeError('Cannot set attribute which is not '
                                     'column in mapped table: %s' % (key,))

    def __repr__(self):
        """Return a decent printable representation of a mapped object and
        its attributes."""
        atts = []
        columns = orm.object_mapper(self).mapped_table.c
        for column in columns:
            key = column.key
            if hasattr(self, key):
                col = columns.get(key)
                if not (getattr(col, 'server_default', None) is not None or
                        isinstance(getattr(col, 'default', None),
                                   sql.PassiveDefault) or
                        getattr(self, key) is None):
                    atts.append((key, getattr(self, key)))
        return self.__class__.__name__ + '(' + ', '.join(
            x[0] + '=' + repr(x[1]) for x in atts) + ')'
=
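
A quick usage sketch (the User class and its columns are just an example,
not from my real code; it assumes a users_table with 'name' and 'age'
columns):

=
class User(MyOrm): pass
orm.mapper(User, users_table)

leo = User(name=u'leo', age=23)   # __init__ presets the attributes
leo.update({'age': 24})           # update() takes a dict, e.g. from a web form
print leo                         # __repr__ -> User(name=u'leo', age=24)
=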

Would be nice if mapped objects could automatically get such methods
assigned. Not sure if SQLAlchemy can or should provide that or whether it
would break other functionality.

Cheers
 Christoph




[sqlalchemy] ORM base class for 0.5?

2009-01-12 Thread Christoph Haas
Dear list,

I used to use a certain ORM base class with SQLA 0.4 for a while. It 
defined __init__, __repr__ and update so that I could preset mapped 
objects with values like

leo = User(name='leo', age=23)

or just

print leo

and especially

leo.update(dictionary_from_web_form)

I couldn't find it on the wiki anymore. And what I use here doesn't work 
properly on 0.5 (it relies heavily on the 'c' attribute, and just removing 
it doesn't solve things because it checks whether a_certain_attribute is in 
this_object.c).

If anyone has ported those few lines of code to 0.5 please let me know. 

I thought that the declarative_base helps here but it's just helping with 
the declaration.

Cheers
 Christoph




[sqlalchemy] Documenting SQLAlchemy models with Sphinx fails (table already defined)

2008-08-27 Thread Christoph Haas
Dear list...

I'm on a documentation frenzy now in my current Pylons project. And I'm 
using Sphinx for that purpose. All my controllers and helper functions are 
documented automatically. Great. Unfortunately Sphinx fails to create 
module documentation for my models. I'm staying close to the Using 
SQLAlchemy with Pylons [0] way as documented on the Pylons wiki. This is 
the output on my console:

=
$ make html
mkdir -p _build/html _build/doctrees
sphinx-build -b html -d _build/doctrees   . _build/html
Sphinx v0.4.2, building html
trying to load pickled env... not found
building [html]: targets for 4 source files that are out of date
updating environment: 4 added, 0 changed, 0 removed
reading... antrag-allgemein antrag-erstellen api Exception occurred:
  File "/var/lib/python-support/python2.5/sqlalchemy/schema.py", line 101, 
in __call__
    columns on an existing Table object." % key)
InvalidRequestError: Table 'applications' is already defined for this 
MetaData instance.  Specify 'useexisting=True' to redefine options and 
columns on an existing Table object.
The full traceback has been saved in /tmp/sphinx-err-lBwkYm.log, if you 
want to report the issue to the author.
Please also report this if it was a user error, so that a better error 
message can be provided next time.
Send reports to [EMAIL PROTECTED] Thanks!
make: *** [html] Error 1
=

I have a table "applications" here - but it's only defined once. My 
model/__init__.py looks roughly like (less interesting parts removed):

=
# -*- coding: utf-8 -*-

import sqlalchemy as sql
import sqlalchemy.orm as orm
from myapp.model import meta

def init_model(engine):
    sm = orm.sessionmaker(autoflush=True, transactional=True, bind=engine)
    meta.engine = engine
    meta.Session = orm.scoped_session(sm)

applications_table = sql.Table(
    'applications', meta.metadata,
    sql.Column('id', sql.Integer, primary_key=True),
    sql.Column('applicant', sql.Unicode(8), nullable=False),
    sql.Column('created_date', sql.DateTime(), default=sql.func.now()),
    sql.Column('valid_until', sql.DateTime()),
    sql.Column('description', sql.Unicode(100)),
    sql.Column('reason', sql.Unicode(4000)),
    sql.Column('status', sql.Integer, nullable=False),
)

class Application(object): pass

orm.mapper(Application, applications_table)
=

Does anyone have an idea what's going wrong? And whether it needs to be fixed 
in Sphinx, SQLAlchemy (I'm on 0.4.6) or my code? Mike Orr assumed that it 
could be that Sphinx loads the module twice to tear out the docstrings.
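
If the double import is really the cause, then the workaround that the error
message itself suggests would probably look like this (untested sketch):

=
applications_table = sql.Table(
    'applications', meta.metadata,
    sql.Column('id', sql.Integer, primary_key=True),
    # ... remaining columns as above ...
    useexisting=True,   # tolerate the table being defined a second time
)
=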

Cheers
 Christoph

[0] 
http://wiki.pylonshq.com/display/pylonsdocs/Using+SQLAlchemy+with+Pylons




[sqlalchemy] Sphinx documentation stumbles upon SQLAlchemy models

2008-08-26 Thread Christoph Haas
Dear list...

I'm on a documentation frenzy now in my current Pylons project. And I'm 
using Sphinx for that purpose. All my controllers and helper functions are 
documented automatically. Great. Unfortunately Sphinx fails to create 
module documentation for my models. I'm staying close to the Using 
SQLAlchemy with Pylons way as documented on the wiki. This is the output 
on my console:

=
$ make html
mkdir -p _build/html _build/doctrees
sphinx-build -b html -d _build/doctrees   . _build/html
Sphinx v0.4.2, building html
trying to load pickled env... not found
building [html]: targets for 4 source files that are out of date
updating environment: 4 added, 0 changed, 0 removed
reading... antrag-allgemein antrag-erstellen api Exception occurred:
  File "/var/lib/python-support/python2.5/sqlalchemy/schema.py", line 101, 
in __call__
    columns on an existing Table object." % key)
InvalidRequestError: Table 'applications' is already defined for this 
MetaData instance.  Specify 'useexisting=True' to redefine options and 
columns on an existing Table object.
The full traceback has been saved in /tmp/sphinx-err-lBwkYm.log, if you 
want to report the issue to the author.
Please also report this if it was a user error, so that a better error 
message can be provided next time.
Send reports to [EMAIL PROTECTED] Thanks!
make: *** [html] Error 1
=

I have a table "applications" here - but it's only defined once. My 
model/__init__.py looks roughly like (less interesting parts removed):

=
# -*- coding: utf-8 -*-

import sqlalchemy as sql
import sqlalchemy.orm as orm
from myapp.model import meta

def init_model(engine):
sm = orm.sessionmaker(autoflush=True, transactional=True, bind=engine)
meta.engine = engine
meta.Session = orm.scoped_session(sm)

applications_table = sql.Table(
'applications', meta.metadata,
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('applicant', sql.Unicode(8), nullable=False),
sql.Column('created_date', sql.DateTime(), default=sql.func.now()),
sql.Column('valid_until', sql.DateTime()),
sql.Column('description', sql.Unicode(100)),
sql.Column('reason', sql.Unicode(4000)),
sql.Column('status', sql.Integer, nullable=False),
)

class Application(object): pass

orm.mapper(Application, applications_table)
=

Does anyone have an idea what's going wrong?

Cheers
 Christoph




[sqlalchemy] 'owner' parameter seems to set PostgreSQL schema name

2008-04-10 Thread Christoph Haas
Dear list,

since I upgraded to 0.4.5 I found that all my models stopped working.
I think the cause is the "owner" parameter when I define a table. Example:

 dns_supermasters_table = sql.Table(
 'supermasters', metadata,
 sql.Column('ip', sql.Unicode(15), nullable=False, primary_key=True),
 sql.Column('nameserver', sql.Unicode(40), nullable=False),
 sql.Column('account', sql.Unicode(40)),
 owner='dnsdhcp'
 )

ORM queries on that table fail with error messages like:

 InvalidRequestError: Could not find table 'domains' with which to
 generate a foreign key 

If I look at the queries that are generated I see that SQLAlchemy prefixes 
all table columns with the owner's name. In PostgreSQL that would mean the 
schema name (the default schema is "public" unless specified). Removing 
the "owner" parameter made my queries work again.

I would still like to set the owner to a certain user so that when the 
tables get created they have a proper ownership. But that is not meant to 
specify the schema. :)
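
A workaround I'm considering (untested sketch, plain PostgreSQL SQL): leave
the owner parameter out of the Table and set the ownership explicitly after
creating the tables:

metadata.create_all(bind=engine)
engine.execute('ALTER TABLE supermasters OWNER TO dnsdhcp')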

Cheers
 Christoph




[sqlalchemy] Re: Mapping and querying multiple levels of foreign key'd tables

2008-03-05 Thread Christoph Haas

On Wed, Mar 05, 2008 at 08:31:12AM +0100, Christoph Haas wrote:
 Basically I have three tables like 'companies', 'departments' and
 'employees'. I have already set up foreign keys and gave all of them
 one-to-many relationships. So a company has several departments. And
 each department has several employees. So for an ORM-mapped company
 object mycompany I can get the departments by the property
 mycompany.departments. Works well.
 
 Now I'd like to create a query for all employees of a certain company. 
 And I'm not sure how to properly define a mapper relation propery that
 would give me that. Like mycompany.employees. Do I have to use JOINs
 myself in the mapper?
 
 In my application I'd then like to query like this:
 Session.query(Employee).filter_by(employee.company=my_company)

Meanwhile I re-read
http://www.sqlalchemy.org/docs/04/ormtutorial.html#datamapping_joins
explaining that a relation path A->bars->B->bats->C->widgets->D is
queried as:

session.query(Foo).join(['bars', 'bats', 'widgets']).filter(...)

So in my example I already managed to get this working:

session.query(Employee).join(['department','company']). \
filter(model.Company.id==mycompany.id)

Is this the common way to deal with Company->Department->Employee paths?
If it is - can I perhaps even omit the .id part somehow? I tried:

session.query(Employee).join(['department','company']). \
filter(model.Company==mycompany)

But SQLAlchemy didn't like that:

ArgumentError: filter() argument must be of type
sqlalchemy.sql.ClauseElement or string

Thanks for any comments.

 Christoph
-- 
[EMAIL PROTECTED]  www.workaround.org   JID: [EMAIL PROTECTED]
gpg key: 79CC6586 fingerprint: 9B26F48E6F2B0A3F7E33E6B7095E77C579CC6586




[sqlalchemy] Re: Mapping and querying multiple levels of foreign key'd tables

2008-03-05 Thread Christoph Haas

Moin, Michael...

thanks for your quick reply.

On Wed, Mar 05, 2008 at 11:16:33AM -0500, Michael Bayer wrote:
 On Mar 5, 2008, at 4:14 AM, Christoph Haas wrote:
  Meanwhile I re-read
  http://www.sqlalchemy.org/docs/04/ormtutorial.html#datamapping_joins
   explaining that a relation path A->bars->B->bats->C->widgets->D is
  queried as:
 
 session.query(Foo).join(['bars', 'bats', 'widgets']).filter(...)
 
  So in my example I already managed to get this working:
 
  session.query(Employee).join(['department','company']). \
 filter(model.Company.id==mycompany.id)
 
   Is this the common way to deal with Company->Department->Employee  
  paths?
  If it is - can I perhaps even omit the .id part somehow? I tried:
 
  session.query(Employee).join(['department','company']). \
 filter(model.Company==mycompany)
 
  But SQLAlchemy didn't like that:
 
 ArgumentError: filter() argument must be of type
 sqlalchemy.sql.ClauseElement or string
 
 you want to compare Employee.department.company to the element you  
 have, so it would be:
 
   session.query(Employee).join('department').filter(Department.company  
 == mycompany)

Oh, right. That saves the second JOIN as the department.company_id is
already a field on the departments. Yet another case of thinking too
abstract instead of remembering how the actual database tables look. :)

 or even cooler, you could do

 session.query(Employee).filter(Employee.department.has(
   Department.company==mycompany))

Somehow .has and .any have always been somewhat magical to me. I think
I'd rather use the less cool way because it's more likely that I'll still
understand my code in a few years. Works well, thanks. So I was basically on
the right track with the JOIN.

But now I'm curious. Why do I get the ArgumentError if I try

.filter(Company==my_company)

while

.filter(Company.id==my_company.id)

works? I was comparing ORM objects directly instead of fields/properties
of a mapped object. But shouldn't that work, too?
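
My guess at what happens (just how I understand it, not verified in the
source): comparing a mapped column attribute to a value produces a SQL
expression, while comparing the class itself to an instance is ordinary
Python equality and yields a plain boolean:

print type(Company.id == my_company.id)   # a ClauseElement -> accepted by filter()
print Company == my_company               # plain bool (False) -> rejected by filter()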

Cheers
 Christoph
-- 
[EMAIL PROTECTED]  www.workaround.org   JID: [EMAIL PROTECTED]
gpg key: 79CC6586 fingerprint: 9B26F48E6F2B0A3F7E33E6B7095E77C579CC6586




[sqlalchemy] Mapping and querying multiple levels of foreign key'd tables

2008-03-04 Thread Christoph Haas

Dear list...

Actually I'm trying something rather simple so I'm surprised myself that
it got me stuck. Bear with me that I'm not providing much code, but the
application is not written in English so the database models aren't
either.

Basically I have three tables like 'companies', 'departments' and
'employees'. I have already set up foreign keys and gave all of them
one-to-many relationships. So a company has several departments. And
each department has several employees. So for an ORM-mapped company
object mycompany I can get the departments by the property
mycompany.departments. Works well.

Now I'd like to create a query for all employees of a certain company. 
And I'm not sure how to properly define a mapper relation propery that
would give me that. Like mycompany.employees. Do I have to use JOINs
myself in the mapper?

In my application I'd then like to query like this:
Session.query(Employee).filter_by(employee.company=my_company)

Thanks for any hints.

 Christoph
-- 
[EMAIL PROTECTED]  www.workaround.org   JID: [EMAIL PROTECTED]
gpg key: 79CC6586 fingerprint: 9B26F48E6F2B0A3F7E33E6B7095E77C579CC6586




[sqlalchemy] Re: DBMS

2008-01-16 Thread Christoph Haas

On Wed, Jan 16, 2008 at 02:33:53PM +0530, Manish Soni wrote:
 Send me some study material on DBMS.
 
 --
 Regards,
 Manish Soni
 M. Phil.(Computer Science),MCA(Hons.)
 Master of Science (Computer Science)
 B.Sc.(Mathematics, Physics, Computer Application)
 MCPDEA, MCSD(.Net), MCDBA(SQL Server 2000), MCAD(.Net), A Level, OCA(9i)
 MCP(VB6 Desktop, VB6 Distributed, Windows 2000 Server), SCJP, PGDCA,
 MCSA(Messaging), MCSE(2003), CCNA, Hardware and Networking Diploma of Six
 Months
 MCP(Microsoft CRM)

Pardon me... you have all these degrees but still just order the
mailing list of an open-source database toolkit to send you some
material on DBMS? We must be having a serious language interface
problem here. Back when I made my Master of Informatics I learned a
whole lot about DBMSs.

 Christoph





[sqlalchemy] Re: SQLAlchemy generates invalid SQL query with empty or_()

2008-01-04 Thread Christoph Haas

On Fri, Jan 04, 2008 at 05:55:21PM +0100, Felix Schwarz wrote:
 I noticed that SQLAlchemy 0.4.2 generates invalid SQL queries if
 I use or_() [without parameters] in a filter condition:
or_conditions = []
condition = and_(User.c.name=="John", or_(*or_conditions))
query = session.query(User).filter(condition)

Since you can chain filters together I'd suggest something like:

query = session.query(User).filter(User.c.name=="John")
if or_conditions:
    query = query.filter(or_(*or_conditions))

Cheers
 Christoph





[sqlalchemy] Self-referential property

2007-08-22 Thread Christoph Haas

Dear list,

I'm still working on a DNS administration web interface and have to deal
with DNS records. This is what I have so far:

---
import sqlalchemy as sql
import sqlalchemy.orm as orm
from sqlalchemy.databases.postgres import PGInet

records_table = sql.Table(
'records', metadata,
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('name', sql.Unicode(80)),
sql.Column('type', sql.Unicode(10)),
sql.Column('content', sql.Unicode(200)),
sql.Column('inet', PGInet),
)

class Record(MyOrm):
    @property
    def ptr_records(self):
        """
        Returns matching PTR records for an A record
        """
        assert self.type=='A'
        assert self.inet!=None
        return Record.q().filter_by(type='PTR', inet=self.inet)

orm.mapper(Record, records_table)
---

So for an address (A) entry I'd like to find out if there is a matching
PTR record. Match criteria are the inet column. So if I have a certain
A record...

a = query(Record).filter_by(type='A').one()

...and like to find out the matching PTR records I would call...

ptr = a.ptr_records

This works okay so far. But somehow it feels wrong to do queries in
properties I add to the Record class. Especially since the ptr_records
do not get cached and the query is run every time I access this
property. So I wondered how to do that as properties of the mapper. I
started with
http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_relation_customjoin

But that example deals with User and Address tables and not with
self-references. I suspect I have to alias the table. Roughly I'm
thinking of something like:

properties={
    'ptr_records': relation(Record, primaryjoin=and_(
        records_table.c.type=='PTR', 
        records_table.c.inet==records_table2.c.inet
    ))
}
 
I don't know how to say "match other Record objects where the 'inet'
column contains the same value". How do I do that correctly?

Cheers
 Christoph

P.S.: I simplified the models for this posting so bear with me if this
  is not code that would run as is.





[sqlalchemy] Re: Self-referential property

2007-08-22 Thread Christoph Haas

On Wed, Aug 22, 2007 at 12:37:32PM -0400, Michael Bayer wrote:
 
 
 On Aug 22, 2007, at 8:27 AM, Christoph Haas wrote:
 
 
  But that example deals with User and Address tables and not with
  self-references. I suspect I have to alias the table. Roughly I'm
  thinking of something like:
 
  properties={
  'ptr_records': relation(Record, primaryjoin=and_(
  records_table.c.type=='PTR',  
   records_table.c.inet==records_table2.c.inet
  ))
  }
 
  I don't know how to say match other Record objects where the 'inet'
  column contains the same value. How do I do that correctly?
 
 Well the issue here is that while it's a self-referential relationship  
 you're dealing with, it's not an "adjacency list" model, which is  
 really what our self-referential relation() is designed to do.   

Understood. That's the easy/supported case that I will need at a later
time. :)

 Normally you can use the "remote_side" attribute to indicate  
 whichever column on the join condition is "remote", but here it's the  
 same column.
 
 Doing the alias thing is possibly a solution.  it would look like this:
 
 records2 = records.alias()
 
 rmapper = mapper(Record, records2, non_primary=True)
 
 'ptr_records':relation(rmapper, primaryjoin=and_(
  records2.c.type=='PTR', records.c.inet==records2.c.inet),
   viewonly=True)
   )
 
 the above should work in theory but I haven't tried it.
 notice that the records2 alias indicates the child side of the
 relationship so it's the one which gets the PTR criterion.
 viewonly=True is to eliminate whatever issues arise in calculating
 the persistence aspect of the relation since you only need loading
 here.

Tried it. It raises this error:

ArgumentError: Can't locate any foreign key columns in primary join
condition 'records_1.type = :records_1_type AND records.inet =
records_1.inet' for relationship 'Record.ptr_records (Record)'.
Specify 'foreign_keys' argument to indicate which columns in the
join condition are foreign.

So I added:

foreign_keys=[records_table2.c.inet]

The error disappeared. But getting the joined PTR records for a certain
inet took very long. So I analyzed the query that SQLAlchemy did:

SELECT records.id AS records_id, records.domain_id AS
records_domain_id, records.dhcpzone_id AS records_dhcpzone_id,
records.name AS records_name, records.type AS records_type,
records.content AS records_content, records.ttl AS records_ttl,
records.prio AS records_prio, records.change_date AS
records_change_date, records.mac_address AS records_mac_address,
records.inet AS records_inet
FROM records, records AS records_1
WHERE records_1.type = %(records_1_type)s AND %(param_2)s =
records_1.inet ORDER BY records.id

{'records_1_type': 'PTR', 'param_2': '10.20.30.40'}

The "FROM records, records AS records_1" looks suspiciously like a
cartesian product. And I get all possible results - not just rows with
type=='PTR'. The children from records_1 have the right criteria. But
I get all records. :(

I'm not sure if JOIN is the right paradigm here. Usually I'd use a
SELECT. But select in SQLAlchemy's world doesn't seem to return mapped
objects but rather plain rows. I assume it's not supported to add a
filter() statement as a property?

Perhaps you have another hint. Would be a pity to give up and use my
dirty @property method. Thanks so far.

 Christoph





[sqlalchemy] Re: How to dynamically create where-clauses

2007-08-10 Thread Christoph Haas

On Fri, Aug 10, 2007 at 02:51:51PM +0200, Hermann Himmelbauer wrote:
 My web-application, that is in front of SQLAlchemy has various input-fields, 
 which should act as a where-clause (with like()), a query looks e.g. like 
 this:
 
 session.query(MyObj).filter(table1.c.input1.like(inputfield_1+'%'),
table1.c.input2.like(inputfield_2+'%'),
...)
 
 The problem is that if such an input field is empty, it should not be 
 included 
 in the query. If I rule out empty fields manually, I have 2^n if-clauses (I 
 think), so I need to dynamically create the where-clauses in some way.
 
 What would be the best way to accomplish this?

You are on the right track with filter() already. Consider this:

result = session.query(MyObj)
if 'name' in params:
    result = result.filter_by(name=params['name'])
if 'min_age' in params:
    result = result.filter_by(age=params['min_age'])
...

Works well in my Pylons (a web framework) applications in search/query
forms.

Cheers
 Christoph





[sqlalchemy] Re: PROBLEMS WITH TIMESTAMP variable in MySql

2007-07-13 Thread Christoph Haas

Disclaimer: I'm no SQLAlchemy guru.

On Fri, Jul 13, 2007 at 01:53:48PM -, robertz23 wrote:
  Hi,  I've been using SQLAlchemy since a month.  I'm having some
 problems with a one-to-many relationship.  I have 2 tables: one for
 users and one for users's emails.  The problem arises when I want to
 put a timestamp in the email table in order to see when a given
 account was created.  Here are the tables, my code, and the error:
 [...]
 user_table = Table('user', metadata, autoload=True)
 email_table = Table('email', metadata, autoload=True)
 class User(object):
     pass
 class Email(object):
     pass
 session_context = SessionContext(create_session)
 assign_mapper(session_context, User, user_table,
 properties={'emails':relation(Email)})
 assign_mapper(session_context, Email, email_table)
 
 obj = User()
 obj.get(1).emails

It looks like you are first creating a new User instance and then
calling .get(1) on it. I'd rather expect this to work:

emails = User.get(1).emails

Or following the query syntax that seems to be upcoming with 0.4:

emails = User.query.get(1).emails

Kindly
 Christoph





[sqlalchemy] Re: Qualified column names in JOINs?

2007-06-23 Thread Christoph Haas

On Fri, Jun 22, 2007 at 08:30:07PM -, Michael Bayer wrote:
 the contents of the columns clause is configurable via the select()
 construct directly:
 
 result=select([records_a, records_ptr],
 records_a.c.type=='A',
 from_obj=[model.outerjoin(records_a, records_ptr,
 ( (records_a.c.inet==records_ptr.c.inet) &
 (records_ptr.c.type=='PTR') ))],
 use_labels=True).execute().fetchone()

Very good. That does what I wanted. A full outer join would be perfect
but I'm willing to UNION my way through. :)

Thank you.

 Christoph





[sqlalchemy] Qualified column names in JOINs?

2007-06-22 Thread Christoph Haas

Dear list,

I'm trying to join a table with itself. That works well. However since
the column names are identical I had no luck accessing both the original
and the joined information.

I have aliased the tables already and run the join on the aliased names.
But the column names are still not qualified.

Example:

records_a   = model.records_table.alias('records_a')
records_ptr = model.records_table.alias('records_ptr')
joined = records_a.select(...,
    from_obj=[outerjoin(records_a, records_ptr,
        records_a.c.foo==records_ptr.c.bar)]).execute().fetchone()

The records contain fields like 'id', 'type' or 'name'. So I tried
this and failed:

print joined[0].records_a.c.id
print joined[0]['records_a.id']

However I _can_ access the result columns without the table alias:

print joined[0].id
print joined[0]['id']

Is there some magical hidden parameter that qualifies the column names of
the result?

Kindly
 Christoph

P.S.: I'm now trying the example from self-referential mappers.
  But the general question is still valid because in every join
  it might happen that column names overlap.





[sqlalchemy] Re: Qualified column names in JOINs?

2007-06-22 Thread Christoph Haas

On Fri, Jun 22, 2007 at 11:50:51AM -0400, Michael Bayer wrote:
 On Jun 22, 2007, at 9:59 AM, Christoph Haas wrote:
 
  I'm trying to join a table with itself. That works well. However since
  the column names are identical I had no luck accessing both the  
  original
  and the joined information.
 
  I have aliased the tables already and run the join on the aliased  
  names.
  But the column names are still not qualified.
 
  Example:
 
  records_a   = model.records_table.alias('records_a')
  records_ptr = model.records_table.alias('records_ptr')
  joined = records_a.select(...,
  from_obj=[outerjoin(records_a, records_ptr,
  records_a.c.foo==records_ptr.c.bar)).execute().fetchone()
 
  The records contain fields like 'id', 'type' or 'name'. So I tried
  this and failed:
 
  print joined[0].records_a.c.id
  print joined[0]['records_a.id']
 
 that makes no sense.  "joined" is a result of fetchone() so it is  
 therefore a RowProxy.  joined[0] is the first column of the row.  the  
 value of joined[0] should be a scalar.

Sorry, you are right. I was pasting incorrectly. Here is a real-life session:

In [2]: records_a=model.records_table.alias('records_a')

In [3]: records_ptr=model.records_table.alias('records_ptr')

In [4]: result=records_a.select(records_a.c.type=='A', 
from_obj=[model.outerjoin(records_a, records_ptr, ( 
(records_a.c.inet==records_ptr.c.inet) & (records_ptr.c.type=='PTR') 
))]).execute().fetchone()
2007-06-22 18:09:57,852 INFO sqlalchemy.engine.base.Engine.0x..6c SELECT 
records_a.id, records_a.domain_id, records_a.dhcpzone_id, records_a.name, 
records_a.type, records_a.content, records_a.ttl, records_a.prio, 
records_a.change_date, records_a.mac_address, records_a.inet
FROM records AS records_a LEFT OUTER JOIN records AS records_ptr ON 
records_a.inet = records_ptr.inet AND records_ptr.type = %(records_ptr_type)s
WHERE records_a.type = %(records_a_type)s
2007-06-22 18:09:57,852 INFO sqlalchemy.engine.base.Engine.0x..6c 
{'records_a_type': 'A', 'records_ptr_type': 'PTR'}

In [5]: result.keys()
Out[5]:
['id',
 'domain_id',
 'dhcpzone_id',
 'name',
 'type',
 'content',
 'ttl',
 'prio',
 'change_date',
 'mac_address',
 'inet']

As you see the keys are just there once. Although the actual SQL result
contains these columns for both records_a and the joined records_ptr.

 to see all the literal column names, use result.fetchone().keys().   
 or, just turn on SQL echoing and watch the names generated.  the  
 easiest way to get at the column you want is to target using the  
 Column objects themselves:
 
 result.fetchone()[records_a.c.id]

Does not work in the way I tried:

In [6]: result=records_a.select(records_a.c.type=='A', 
from_obj=[model.outerjoin(records_a, records_ptr, ( 
(records_a.c.inet==records_ptr.c.inet) & (records_ptr.c.type=='PTR') 
))]).execute().fetchone()[recordsd_a.c.type]
2007-06-22 18:11:38,571 INFO sqlalchemy.engine.base.Engine.0x..6c SELECT 
records_a.id, records_a.domain_id, records_a.dhcpzone_id, records_a.name, 
records_a.type, records_a.content, records_a.ttl, records_a.prio, 
records_a.change_date, records_a.mac_address, records_a.inet
FROM records AS records_a LEFT OUTER JOIN records AS records_ptr ON 
records_a.inet = records_ptr.inet AND records_ptr.type = %(records_ptr_type)s
WHERE records_a.type = %(records_a_type)s
2007-06-22 18:11:38,572 INFO sqlalchemy.engine.base.Engine.0x..6c 
{'records_a_type': 'A', 'records_ptr_type': 'PTR'}
---
exceptions.NameError Traceback (most recent 
call last)

/home/chaas/projekte/dnsdhcp/<ipython console>

NameError: name 'recordsd_a' is not defined

Kindly
 Christoph





[sqlalchemy] Re: Qualified column names in JOINs?

2007-06-22 Thread Christoph Haas

On Fri, Jun 22, 2007 at 04:40:58PM -, Michael Bayer wrote:
 On Jun 22, 12:12 pm, Christoph Haas [EMAIL PROTECTED] wrote:
  2007-06-22 18:09:57,852 INFO sqlalchemy.engine.base.Engine.0x..6c SELECT 
  records_a.id, records_a.domain_id, records_a.dhcpzone_id, records_a.name, 
  records_a.type, records_a.content, records_a.ttl, records_a.prio, 
  records_a.change_date, records_a.mac_address, records_a.inet
  FROM records AS records_a LEFT OUTER JOIN records AS records_ptr ON 
  records_a.inet = records_ptr.inet AND records_ptr.type = 
  %(records_ptr_type)s
  WHERE records_a.type = %(records_a_type)s
  2007-06-22 18:09:57,852 INFO sqlalchemy.engine.base.Engine.0x..6c 
  {'records_a_type': 'A', 'records_ptr_type': 'PTR'}
 
  In [5]: result.keys()
  Out[5]:
  ['id',
   'domain_id',
   'dhcpzone_id',
   'name',
   'type',
   'content',
   'ttl',
   'prio',
   'change_date',
   'mac_address',
   'inet']
 
  As you see the keys are just there once. Although the actual SQL result
  contains these columns for both records_a and the joined records_ptr.
 
 set use_labels=True in your select() statement.  the point of that is
 to combine table or alias names with column names as labels for all
 selected columns, so that names are automatically unique.  the column-
 targeting i mentioned also would rely upon this to differentiate
 between the two tables.

Thank you. It's a tiny bit closer. However that still only gives the
left side of the join as results:

result=records_a.select(
records_a.c.type=='A',
from_obj=[model.outerjoin(records_a, records_ptr,
( (records_a.c.inet==records_ptr.c.inet) & (records_ptr.c.type=='PTR') 
))],
use_labels=True).execute().fetchone()

In [9]: result
Out[9]: (108914, 1, None, u'foo.domain.tld', u'A', u'192.168.26.39', 86400, 0, 
None, None, '192.168.26.39')

In [10]: result.keys()
Out[10]:
['records_a_id',
 'records_a_domain_id',
 'records_a_dhcpzone_id',
 'records_a_name',
 'records_a_type',
 'records_a_content',
 'records_a_ttl',
 'records_a_prio',
 'records_a_change_date',
 'records_a_mac_address',
 'records_a_inet']

Sorry for being a pain. But I'm kind of stuck here.

 Christoph





[sqlalchemy] Re: Qualified column names in JOINs?

2007-06-22 Thread Christoph Haas

On Fri, Jun 22, 2007 at 07:22:05PM +0200, Christoph Haas wrote:
 
 On Fri, Jun 22, 2007 at 04:40:58PM -, Michael Bayer wrote:
  On Jun 22, 12:12 pm, Christoph Haas [EMAIL PROTECTED] wrote:
   2007-06-22 18:09:57,852 INFO sqlalchemy.engine.base.Engine.0x..6c SELECT 
   records_a.id, records_a.domain_id, records_a.dhcpzone_id, records_a.name, 
   records_a.type, records_a.content, records_a.ttl, records_a.prio, 
   records_a.change_date, records_a.mac_address, records_a.inet
   FROM records AS records_a LEFT OUTER JOIN records AS records_ptr ON 
   records_a.inet = records_ptr.inet AND records_ptr.type = 
   %(records_ptr_type)s
   WHERE records_a.type = %(records_a_type)s
   2007-06-22 18:09:57,852 INFO sqlalchemy.engine.base.Engine.0x..6c 
   {'records_a_type': 'A', 'records_ptr_type': 'PTR'}
  
   In [5]: result.keys()
   Out[5]:
   ['id',
'domain_id',
'dhcpzone_id',
'name',
'type',
'content',
'ttl',
'prio',
'change_date',
'mac_address',
'inet']
  
   As you see the keys are just there once. Although the actual SQL result
   contains these columns for both records_a and the joined records_ptr.
  
  set use_labels=True in your select() statement.  the point of that is
  to combine table or alias names with column names as labels for all
  selected columns, so that names are automatically unique.  the column-
  targeting i mentioned also would rely upon this to differentiate
  between the two tables.
 
 Thank you. It's a tiny bit closer. However that still only gives the
 left side of the join as results:
 
 result=records_a.select(
 records_a.c.type=='A',
 from_obj=[model.outerjoin(records_a, records_ptr,
  ( (records_a.c.inet==records_ptr.c.inet) & 
 (records_ptr.c.type=='PTR') ))],
 use_labels=True).execute().fetchone()
 
 In [9]: result
 Out[9]: (108914, 1, None, u'foo.domain.tld', u'A', u'192.168.26.39', 86400, 
 0, None, None, '192.168.26.39')
 
 In [10]: result.keys()
 Out[10]:
 ['records_a_id',
  'records_a_domain_id',
  'records_a_dhcpzone_id',
  'records_a_name',
  'records_a_type',
  'records_a_content',
  'records_a_ttl',
  'records_a_prio',
  'records_a_change_date',
  'records_a_mac_address',
  'records_a_inet']

Update: I found another way - but with .select instead of .outerjoin:

In [9]: result = model.select([records_a, records_ptr],
    (records_a.c.inet==records_ptr.c.inet) & (records_ptr.c.type=='PTR'),
use_labels=True).execute().fetchone()

In [10]: result.keys()
Out[10]:
['records_a_id',
 'records_a_domain_id',
 'records_a_dhcpzone_id',
 'records_a_name',
 'records_a_type',
 'records_a_content',
 'records_a_ttl',
 'records_a_prio',
 'records_a_change_date',
 'records_a_mac_address',
 'records_a_inet',
 'records_ptr_id',
 'records_ptr_domain_id',
 'records_ptr_dhcpzone_id',
 'records_ptr_name',
 'records_ptr_type',
 'records_ptr_content',
 'records_ptr_ttl',
 'records_ptr_prio',
 'records_ptr_change_date',
 'records_ptr_mac_address',
 'records_ptr_inet']

So I finally get both sides of the join. But this appears to be just an
INNER JOIN. There are records_a with no matching records_ptr and vice
versa. So my actual goal is a FULL OUTER JOIN that just fills
non-matching rows with NULL values. If I understood correctly then full
outer joins are not yet supported due to some complications with Oracle
databases. So I thought I'd try a LEFT OUTER JOIN with a UNION of an
INNER (RIGHT) JOIN as a workaround as described at
http://en.wikipedia.org/wiki/Join_(SQL)#Full_outer_join (last
paragraph).

Since the above query is just an INNER JOIN I'd have to do a LEFT JOIN +
INNER JOIN + RIGHT JOIN.
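
Roughly what I have in mind (an untested sketch; the second select keeps only
the PTR records that found no matching A record, and union() is assumed to be
available from the model module just like select() and outerjoin()):

left = model.select(
    [records_a, records_ptr],
    records_a.c.type=='A',
    from_obj=[model.outerjoin(records_a, records_ptr,
        (records_a.c.inet==records_ptr.c.inet) & (records_ptr.c.type=='PTR'))],
    use_labels=True)

right_only = model.select(
    [records_a, records_ptr],
    (records_ptr.c.type=='PTR') & (records_a.c.id==None),
    from_obj=[model.outerjoin(records_ptr, records_a,
        (records_a.c.inet==records_ptr.c.inet) & (records_a.c.type=='A'))],
    use_labels=True)

result = model.union(left, right_only).execute().fetchall()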

Phew...

 Christoph






[sqlalchemy] Re: Mapper with relation/custom join condition fails

2007-06-19 Thread Christoph Haas

On Tue, Jun 19, 2007 at 02:21:51AM +0300, [EMAIL PROTECTED] wrote:
  Perfectly. Although all the hassle makes me think even harder if
  there is really no other way of connecting these two database
  tables. A casted join with string comparison gets dirtier every
  time I look at it. :(

 excuse me if it sounds dumb... what about a middleman table of proper 
 (casted) values that is automatically built by some trigger? 
 i can guess at least 3 objections right now, but it is an option in 
 general principle...

Doesn't sound dumb at all. I assume you are thinking of an intermediary
table that connects primary keys, like "row 10 of the dns table is
connected to row 515 of the dhcp table". Yes, I'm considering that, too.
With "secondary" properties SQLAlchemy makes such many-to-many relations
really easy. However I have a higher risk of rows that have different IP
address values in this case. I'll think about it though.
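
Just to sketch that idea (the link table and its column names are invented
here; DnsRecord, DhcpHost and the two existing tables are the ones from my
original posting):

dns_dhcp_link = Table(
    'dns_dhcp_link', meta,
    Column('record_id', Integer, ForeignKey('records.id'), primary_key=True),
    Column('dhcp_host_id', Integer, ForeignKey('dhcp_hosts.id'), primary_key=True),
)

assign_mapper(session_context, DnsRecord, powerdns_records_table,
    properties={
        # many-to-many through the middleman table instead of the casted join
        'dhcphosts': relation(DhcpHost, secondary=dns_dhcp_link),
    }
)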

Thanks for the suggestion.

 Christoph





[sqlalchemy] Mapper with relation/custom join condition fails

2007-06-18 Thread Christoph Haas

Dear list...

I'm having trouble with two assign-mappers with a custom JOIN condition.
(And I admit that I'm not yet good at that in SQLAlchemy. So be gentle.)

powerdns_records_table = Table(
'records', meta,
Column('id', Integer, primary_key=True),
Column('name', Unicode(80)),
Column('type', Unicode(10)),
Column('content', Unicode(200)),
)
 
dhcp_hosts_table = Table(
'dhcp_hosts', meta,
Column('id', Integer, primary_key=True),
Column('ip_address', PGInet, nullable=False),
)
 
assign_mapper(session_context, DhcpHost, dhcp_hosts_table)

assign_mapper(session_context, DnsRecord, powerdns_records_table,
properties={
'dhcphost': relation(DhcpHost,

primaryjoin=(cast(powerdns_records_table.c.content,PGInet)==dhcp_hosts_table.c.ip_address),
foreign_keys=[powerdns_records_table.c.content]),
}
)

Yes, this may look a bit dirty. The JOIN condition is really that and I
have no chance to alter the table schemas because that's what another
application demands. The powerdns_records_table uses a string and
dhcp_hosts_table uses a PostgreSQL inet object (defined in SQLAlchemy
as types.PGInet) and I am trying to match those. I added the
foreign_keys parameter when SQLAlchemy complained:

ArgumentError: Can't locate any foreign key columns in primary join
condition 'CAST(records.content AS INET) = dhcp_hosts.ip_address'
for relationship 'DnsRecord.dhcphost (DhcpHost)'.  Specify
'foreign_keys' argument to indicate which columns in the join
condition are foreign.

Now when I try DnsRecords.get_by(content='10.0.0.1') I get:

ArgumentError: No syncrules generated for join criterion
CAST(records.content AS INET) = dhcp_hosts.ip_address

When I try that again (or use
DnsRecords.select(DnsRecords.c.content=='10.0.0.1')) then I get:

AttributeError: 'PropertyLoader' object has no attribute 'strategy'

The actual SQL query I'd expect would be:

SELECT * FROM records JOIN dhcp_hosts ON CAST(records.content AS
inet)=dhcp_hosts.ip_address WHERE records.content='10.0.0.1';

My SQLAlchemy version is 0.3.7. Any idea what might be the problem? If
that join condition is generally not supposed to work in SA I'll have to
do that matching manually in my Python code (which I expect to be way
slower than pure SQL). Are primaryjoins with less than trivial integer
unique IDs unsupported?

 Christoph





[sqlalchemy] Re: documentation unsearchable

2007-06-17 Thread Christoph Haas

On Sun, Jun 17, 2007 at 07:28:04AM -0700, gardsted wrote:
 I find the documentation for sqlalchemy very confusing.
 I find no way of navigating it efficiently or searching it.

Perhaps partly. Although we have to admit that the documentation is
generally very extensive and provides good examples. It's just not
always easy to find something. With time you know what to look for
and where.

As a workaround you may use google and search for e.g.

site:www.sqlalchemy.org/docs filter_by

That way you search for filter_by just in the SQLAlchemy
documentation. Maybe Michael can add a google search feature on the
pages to simplify that?

 Christoph





[sqlalchemy] Mapped class + querying for DISTINCT field

2007-06-13 Thread Christoph Haas

I use my assign_mapper'd classes with a lot of joy so far. But now I
feel trapped. My table:

records_table = Table(
'records', meta,
Column('id', Integer, primary_key=True),
Column('name', Unicode(80)),
Column('type', Unicode(10)),
Column('content', Unicode(200)),
Column('ttl', Integer),
Column('prio', Integer))

My mapping:

class DnsRecord(object): pass
assign_mapper(session_context, DnsRecord, records_table)

Now I'd like to select all objects from that table with unique values in
the name column. In old-school SQL that would be:

SELECT DISTINCT name,* FROM records;

When I start with...

DnsRecord.select(...)

...I can just change the WHERE clause but not the actually selected
data. Do I really have to use

select([DnsRecord.c.name], distinct=True)

? It's simple and working. But I'd rather like to get the result as a
mapped class so that I can work with other columns, too. This way I just
received a list of strings when I .execute() it.

On #sqlalchemy I was proposed to:
- use a subselect with .filter()
- map a select instead of a Table()
- use result-set mapping

But these solutions sounded like dirty workarounds and I couldn't even
figure out the proper syntax to try it.

I have seen similar requests on the list that went unanswered. Is it an
obviously lacking feature in SA or is my idea just plain stupid? What is
the one truly right way (tm) to handle this?

Thanks,
 Christoph





[sqlalchemy] Re: assign_mapper query methods

2007-06-01 Thread Christoph Haas

On Fri, Jun 01, 2007 at 07:17:19AM -0700, Michael Bayer wrote:
 so i didnt add filter() because i didnt feel like getting all the bug
 reports from people who have instance variables called filter, and
 also because my plan was to do away with *all* the select/filter/etc
 methods and have everything go through query().

Right. I didn't think of that possibility. Do I read that as
assign_mapper will die anyway because you have an implicit PITA tag
on it already? ;)

 but if we dont think its a problem im certainly not going to hold back
 assign_mapper from what people want it to be, im just putting out the
 issues there and you all can decide.

I think I will refrain from using assign_mapper in the future. I'd like
to use .filter() because it's great and I'd rather use the explicit
mapper syntax instead of the (barely easier) assign_mapper syntax just
to be consistent. Using assign_mapper in some places because it's simpler
but the mapper methods in other places because they are more
powerful doesn't really look consistent. If you asked me: either provide
these methods on the assign_mapper, too, or let the assign_mapper die
sooner or later. Am I right that we are just talking of

john = session.query(User).get_by(name="john")

versus

john = User.get_by(name="john")

here? I think I can live with that. Inserting/creating new objects looks
the same with both mapping methods AFAIK. So we are talking about
wasting 15 bytes in every query. I'll go waste some bytes then.

Cheers
 Christoph





[sqlalchemy] Re: assign_mapper query methods

2007-06-01 Thread Christoph Haas

On Fri, Jun 01, 2007 at 10:50:33AM -0700, David Shoemaker wrote:
 Both
 session.query(User).select()
 and
 User.query().select()
 seem more verbose than they need to be for my taste. However, I think most
 people (myself included) define a base class for all their mapped classes.

Uhm, the (few) people I know just use assign_mapper as demonstrated in
the many tutorials. Even myself. And now that you mention it I like the
idea. Not duplicating what SA is providing but rather making your own
life easier.

 I've always used this base class to provide the interface I want, no
 matter what interface sqlalchemy provides

Now I'm curious. Could you share your base class?

 That said, if I was new to sqlalchemy, I think I'd be scared to see
 session.query(User).select() as the recommended way in the tutorial.

From lurking on IRC I'm sure most people are indeed scared by SA. :)

 Christoph





[sqlalchemy] Re: assign_mapper query methods

2007-05-31 Thread Christoph Haas

TOFU day? Okay, me too. ;)

If I may cast a vote: yes, please add these 10 methods (if they are
remotely connected to selecting or changing rows) and start with
.filter(). I'm working with assign_mapped objects most of the time and just
have to use a completely different (Query) syntax if I want to
.filter(). It's great that .filter() exists and that Query has
adopted the functionality of SelectResults. But even if it would be
extra work I'd love to see the methods be available both for the bare
Query object as well as for a mapped class. That would keep my code more
consistent. Otherwise I'll probably stop using assign_mapper - which
would be a pity - so I don't have to explain to my coworkers why we use two
different ways to play with the database.

 Christoph

On Thu, May 31, 2007 at 08:47:01PM -0700, Michael Bayer wrote:
 heres the question.  Query gets 10 new methods one day.  do we then
 add 10 methods to assign_mapper() ?  must the user class be a total
 clone of Query ?  assign_mapper just bugs me for this reason.   hence
 i like entity.query() better.  im not sure which one youre saying you
 prefer ?
 
 On May 31, 5:46 pm, Mike Orr [EMAIL PROTECTED] wrote:
  What are future plans for the assign_mapper query methods?
 
  MyClass.select(...)
  -- works great.  A clear advantage for assign_mapper over the
  regular mapper.
 
  MyClass.filter(...)
  -- doesn't exist.
 
  MyClass.query().filter(...)
  -- works but is undocumented and requires a convoluted monkeypatch
  in the source.  Not so clear an advantage over the regular mapper
  because it's so verbose.
 
  The third is the one I've found most useful.  That way I can have
  functions that return a Query, and the caller can call .select(),
  .select(offset=,limit=), or .count() as they wish.
 
  --
  Mike Orr [EMAIL PROTECTED]
 
 
  
