[sqlalchemy] Re: Select from multiple databases

2009-10-02 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of kkapron
> Sent: 01 October 2009 21:27
> To: sqlalchemy
> Subject: [sqlalchemy] Select from multiple databases
> 
> 
> Hello, I'm a beginner in sqlalchemy and I've got a problem with select
> statement.
> 
> example:
> 
> table T1: id, username, date (in database B1)
> table T2: id, user_id, ip (in database B2)
> 
> I've got two engines
> ...
> self.db['B1']['engine'] = create_engine('mysql://B1')
> self.db['B1']['conn'] = self.db['B1']['engine'].connect()
> self.db['B1']['metadata'] = MetaData()
> self.db['B1']['metadata'].bind = self.db['B1']['engine']
> self.db['B1']['metadata'].create_all()
> 
> self.db['B2']['engine'] = create_engine('mysql://B2')
> self.db['B2']['conn'] = self.db['B2']['engine'].connect()
> self.db['B2']['metadata'] = MetaData()
> self.db['B2']['metadata'].bind = self.db['B2']['engine']
> self.db['B2']['metadata'].create_all()
> ...
> 
> and tables
> ...
> self.tables['T1'] = Table('T1', self.db['B1']['metadata'],
> autoload=True)
> self.tables['T2'] = Table('T2', self.db['B2']['metadata'],
> autoload=True)
> ...
> 
> and a test query:
> ...
> T1 = self.tables['T1']
> T2 = self.tables['T2']
> s = select( [T1.c.username, T2.c.ip], (T2.c.user_id == T1.c.id) )
> s.execute().fetchall()
> 
> and error:
> 
> (ProgrammingError) (1146, "Table 'B2.T1' doesn't exist")...
> 
> it's true that T1 doesn't exist in B2, because it exists in B1.
> 
> Does anybody know how to help me? :)
> 

I don't think you can do this - the 'select' function represents a
single SQL SELECT statement, which can't be sent to 2 different database
servers. You'd need to run two separate queries and join the results in
Python.
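
For example, something along these lines (untested, reusing the tables and
connections from your example):

  T1 = self.tables['T1']
  T2 = self.tables['T2']

  # one query per database
  users = self.db['B1']['conn'].execute(
      select([T1.c.id, T1.c.username])).fetchall()
  ips = self.db['B2']['conn'].execute(
      select([T2.c.user_id, T2.c.ip])).fetchall()

  # join the two result sets in Python
  ip_by_user = dict((row.user_id, row.ip) for row in ips)
  joined = [(u.username, ip_by_user.get(u.id)) for u in users]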

If your tables are actually in different schemas but the same MySQL
instance (i.e. if you can connect to the MySQL server and write 'SELECT *
FROM B1.T1' and 'SELECT * FROM B2.T2'), then you can use a single engine
and metadata to access them both by specifying the schema in your Table
definitions. See:

 
http://www.sqlalchemy.org/docs/05/metadata.html#specifying-the-schema-name

for an example.
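
Roughly (untested - the connection URL here is made up, use your own):

  engine = create_engine('mysql://user:pass@host/B1')
  metadata = MetaData(bind=engine)

  t1 = Table('T1', metadata, autoload=True, schema='B1')
  t2 = Table('T2', metadata, autoload=True, schema='B2')

  s = select([t1.c.username, t2.c.ip], t2.c.user_id == t1.c.id)
  print s.execute().fetchall()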

Hope that helps,

Simon




[sqlalchemy] Re: Bypass checking to database structure (metadata.create_all)

2009-10-01 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Christian Démolis
> Sent: 01 October 2009 10:40
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Bypass checking to database structure 
> (metadata.create_all)
> 
> Hi again,
> 
> Is there any way to avoid checking the database structure during the
> metadata.create_all declaration phase?
> It is good to check while we are in the test phase, but when we are in
> production and we are sure of our model, it would be good to bypass
> create_all's checks against the database.
> 

create_all (and drop_all) have a 'checkfirst' parameter that defaults to True. 
If you set it to False, SA won't check to see if your tables already exist 
before issuing the CREATE statements:
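
For example ('metadata' and 'engine' being whatever MetaData and Engine you
already use):

  metadata.create_all(bind=engine, checkfirst=False)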



Hope that helps,

Simon




[sqlalchemy] Re: unexpected chained relations and "append" behaviour

2009-09-24 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Crusty
> Sent: 24 September 2009 16:16
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: unexpected chained relations and 
> "append" behaviour
> 
> 
> Hello Simon,
> 
> thanks for your answer, I will have a look into that.
> By the way:  len(car.parts) does indeed work, try it ;)
> 
> Greetings, Tom
> 

len(car.parts) works with your current configuration, because accessing
car.parts loads the entire relation and returns it as a python list. But
if you change it to be a 'dynamic' relation, it will no longer be a list
but a Query instance, which no longer has a __len__ method.
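
With a dynamic relation you would ask the query for a count instead, i.e.:

  amount_of_parts = car.parts.count()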

Simon




[sqlalchemy] Re: unexpected chained relations and "append" behaviour

2009-09-23 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Crusty
> Sent: 23 September 2009 15:48
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] unexpected chained relations and 
> "append" behaviour
> 
> 
> Hello everyone,
> 
> I have a realy simple model for you to consider:
> 
> 1 car has n wheels
> car.wheels is a relation from cars to wheels
> wheel.car is a backref to cars
> 
> 1 car has n parts
> car.parts is a relation from car to parts
> 
> I just wondered why my app was really getting slow, turned on SA debug
> mode, and saw that
> 
> my_new_doorknob = model.Part("doorknob")
> wheel.car.parts.append(my_new_doorknob)
> 
> is downloading the entire "parts" table WHERE parts.car == car.id
> (that is around 20.000 entries) just so that it can append my new
> doorknob to that relation.
> 
> Furthermore I noticed a similar behaviour when doing 
> something like this:
> 
> amount_of_parts = len(car.parts)
> 
> Instead of sending a COUNT to the database, it populates the entire
> car.parts relation (around 20.000 entries) just to get the count. Of
> course I could avoid using relations, and just use my __init__
> functions, or setting:
> 
> my_new_doorknob = model.Part("doorknob")
> my_new_doorknob.car_id = car.id
> DBSession.append(my_new_doorknob)
> 
> But then I could as well just write literal SQL if I can't use the "R"
> part of ORM...
> 
> Has anyone observed similar behaviour or is this a "feature" and
> intended to work like this?
> 
> Greetings, Tom

Yes, this is exactly how it is intended to work. You may like to read
http://www.sqlalchemy.org/docs/05/mappers.html#working-with-large-collections
for hints on how to improve performance. In particular, making
your car.parts property a 'dynamic' relation rather than the default
will prevent SA from loading the entire collection unless you
specifically ask it to.

However, the len(car.parts) line won't work. SA deliberately doesn't
implement the __len__ method for Query objects because it is called
implicitly by python in a number of situations, and running a
potentially slow query when you aren't expecting it is a bad idea.
Instead you would use car.parts.count().
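
For example, the relation setup and usage would look roughly like this
(untested sketch - it assumes classes named Car and Part, as in your
description):

  parts = relation(Part, backref='car', lazy='dynamic')

  ...

  car.parts.append(my_new_doorknob)     # appends without loading the collection
  amount_of_parts = car.parts.count()   # issues a COUNT instead of loading rows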

Hope that helps,

Simon




[sqlalchemy] Re: Pre-commit validation spanning multiple tables/ORM classes

2009-09-22 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Randall Nortman
> Sent: 22 September 2009 16:31
> To: sqlalchemy
> Subject: [sqlalchemy] Pre-commit validation spanning multiple 
> tables/ORM classes
> 

[SNIP]

> 
> So if I do this with a SessionExtension.before_commit(), I would have
> to iterate through the new, dirty, and deleted instances lists,
> inspect the type of each instance, and do whatever is required.  I am
> not sure, though, how to handle the case of a change in membership in
> the parent/child relationship -- the child instance that is present in
> the dirty list will have only the new parent on it -- how do I find
> out what the old parent was, so I can validate it?  If a flush has
> already occurred, the old value is already lost in the context of the
> current transaction, and I think that if I open a new transaction
> inside a before_commit() validator I'm just asking for trouble.  Do I
> need to instrument the Child class with a descriptor that tracks
> changes to the parent and remembers the old parent?  Or can I set the
> cascade option in such a way that the old parent will end up in the
> dirty list, even though there are no changes to its underlying table,
> and in fact it may never have been explicitly loaded into the
> session?  (I must admit to be somewhat unsure of what the different
> cascade options do -- but they don't seem to be useful for tracking
> something like this.)
> 

I can't answer most of your question, but as far as finding out what the
old parent was, could you use the get_history function?
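
Something along these lines (untested - 'child' and 'parent' here stand for
your instance and the name of its relation attribute):

  from sqlalchemy.orm.attributes import get_history

  hist = get_history(child, 'parent')
  old_parents = hist.deleted   # value(s) that have been replaced
  new_parents = hist.added     # newly assigned value(s)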



Hope that helps,

Simon




[sqlalchemy] Re: getting data from primary keys

2009-09-15 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of C.T. Matsumoto
> Sent: 15 September 2009 07:21
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: getting data from primary keys
> 
> That did the trick.
> 
> Thanks a lot.
> 
> Your solution uses the orm sessionmaker. Till now my script 
> was relying on sqlalchemy's expression 
> language. Is there some way of doing the same with the 
> expression language? Or would it get too
> complicated? (Just curious)
> 
> Cheers,
> 
> T
> 

How about:

  import sqlalchemy as sa

  key_cols = [c for c in table.primary_key.columns]
  query = sa.select(key_cols)
  print query.execute().fetchall()
  
Or

  print connection.execute(query).fetchall()

Hope that helps,

Simon




[sqlalchemy] Re: How to bypass scoped_session?

2009-09-09 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Eloff
> Sent: 08 September 2009 21:34
> To: sqlalchemy
> Subject: [sqlalchemy] Re: How to bypass scoped_session?
> 
> 
> On Sep 8, 3:47 am, "King Simon-NFHD78" 
> wrote:
> > Automatically adding objects to a session when they are created is a
> > feature of Session.mapper (rather than the plain orm.mapper 
> function),
> > and is deprecated. If you use the plain mapper function, all "root"
> > objects that you create will have to be explicitly added to 
> a session.
> > ("related" objects will normally be added by default)
> 
> That makes sense to me.
> 
> > Session.mapper also included a couple of extra features (the 'query'
> > property and a default __init__) which you can recreate 
> yourself using
> > the examples at
> > 
> <http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SessionAwareMapper>
> 
> Trouble is I am using elixir, so I'm not sure I can change this
> behavior.
> 
> Thanks Simon,
> 
> -Dan

I think you'll need to take that up on the elixir list. I've no idea if elixir 
allows you to use a different mapper.

Sorry I can't be more help,

Simon




[sqlalchemy] Re: How to bypass scoped_session?

2009-09-08 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Eloff
> Sent: 08 September 2009 04:40
> To: sqlalchemy
> Subject: [sqlalchemy] How to bypass scoped_session?
> 
> 
> Hi,
> 
> I'm using scoped_session in my pylons app, but sometimes I have a need
> to create a local session, do a little work, and commit it without
> worrying about the current state of the scoped_session (i.e. without
> committing it)
> 
> I'm having some issues with that because the minute I create a new
> object, it gets associated (as transient I guess?) with the
> scoped_session and barfs when I add it to the local session (object is
> already in a session.)
> 
> The best I've found is expunge() from the scoped_session after init,
> then the add will work without issues.
> 
> Is there any way to create objects without them being added to
> scoped_session? I don't see that there is any way with a regular
> session that they'd be added on init, so it can't be required
> functionality. Maybe there's an option to disable it?
> 
> I can't imagine what it is good for, but that's probably just failure
> of imagination on my part.
> 

Automatically adding objects to a session when they are created is a
feature of Session.mapper (rather than the plain orm.mapper function),
and is deprecated. If you use the plain mapper function, all "root"
objects that you create will have to be explicitly added to a session.
("related" objects will normally be added by default)

Session.mapper also included a couple of extra features (the 'query'
property and a default __init__) which you can recreate yourself using
the examples at
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SessionAwareMapper
Hope that helps,

Simon




[sqlalchemy] Re: Curious Problem

2009-09-02 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of gizli
> Sent: 02 September 2009 04:45
> To: sqlalchemy
> Subject: [sqlalchemy] Curious Problem
> 
> 
> Hi all,
> 
> I just discovered something weird when doing concurrency testing with
> my program. Before writing a simplified test case for it and really
> figuring out whether its a bug with sqlalchemy (I am using 0.5.5), I
> wanted to write the scenario here. Basically I was getting the
> infamous "Set size changed during iteration" error during commit.
> Multiple threads are not accessing the same session (as was in some of
> the posts I have read online). Here's what I am doing:
> 
> There is a task dispatcher that queries the database, gets a list of
> tasks to be done, dispatches them to separate threads. Each thread
> creates its own session using the session_maker and updates the status
> of a task. There are three kinds of objects in question: Task,
> TaskSchedule and TaskResult. There is a one-to-many relation between
> Task and TaskSchedule. There is also a one-to-many relation between
> Task and TaskResult. Pseudo code:
> 
> task_dispatcher:
>for task_schedule in sess.query(TaskSchedule).filter
> (next_execution_time <= current_time):
>do_task(task_schedule.task) <-- using relation
>task_schedule.next_execution_time = some_value
>sess.commit()
> 
> (in a new thread)
> do_task(task):
>  sess := get_session()
>  sess.add(TaskResult(task.id, task_started))
>  sess.commit()
>  task.perform()
>  sess.merge(TaskResult(task.id, task_finished))
>  sess.commit()
>  sess.close()
> 

I'm no expert on any of this, but there are a couple of aspects of this
that I would imagine could be risky. You are obtaining a 'task' object
in your dispatcher thread (so it is attached to that thread's session),
and then using it in another thread. If "task.perform" does any
ORM-related activity (such as triggering lazy loads), they will be
loaded into the dispatcher thread's session, not the subthread's
session.

Also, if your session has expire_on_commit set (defaults to True when
using sessionmaker), then accessing any property at all of the task will
trigger a load.

I think you should either be expunging your task object from the
original session, or at least merging it into your new session. I think
you could use session.merge(task, dont_load=True) to prevent the merge
from querying the database again.
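
i.e. something like this at the top of do_task (untested):

  def do_task(task):
      sess = get_session()
      task = sess.merge(task, dont_load=True)  # attach a copy to this thread's session
      sess.add(TaskResult(task.id, task_started))
      sess.commit()
      ...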

Hope that helps,

Simon




[sqlalchemy] Re: from a sqlalchemy table to the corresponding CREATE TABLE sql code

2009-08-26 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Michele Simionato
> Sent: 26 August 2009 13:40
> To: sqlalchemy
> Subject: [sqlalchemy] from a sqlalchemy table to the 
> corresponding CREATE TABLE sql code
> 
> 
> A part from setting echo=True and monitoring the result
> of .create_all, is there a more elegant way
> to extract the SQL creation code from a table object?
> TIA,
> 
>   M.S.

I think this FAQ should answer your question:

http://www.sqlalchemy.org/trac/wiki/FAQ#HowcanIgettheCREATETABLEDROPTABLEoutputasastring
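
The recipe there boils down to something like this (paraphrasing from memory,
so check the FAQ for the exact code):

  def dump(sql, *multiparams, **params):
      print sql.compile(dialect=engine.dialect)

  engine = create_engine('mysql://', strategy='mock', executor=dump)
  metadata.create_all(engine, checkfirst=False)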

Hope that helps,

Simon




[sqlalchemy] Re: Tracking Last Update timestamp for models

2009-08-25 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Stephen Hansen
> Sent: 25 August 2009 07:34
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Tracking Last Update timestamp for models
> 
> Hi all. 
> 
> I'm working on converting our system to use SQLAlchemy, and 
> one thing I'm not sure how to do properly is to track when a 
> particular model object is last updated. When doing just raw 
> SQL over the DB-API, this basically meant whenever I did a 
> change to the model, I would issue an UPDATE statement to 
> update the last update field as well.
> 
> When using model objects, I'm not sure how to best do that. 
> Is there a way to put a hook into the model objects that are 
> called whenever they're saved to the database, and add a new 
> change as well? Or do I have to make it so all fields are 
> synonyms() and properties that update this attribute at the 
> same time as setting the primary one?
> 
> Thanks in advance.
> 
> --S
> 

SQLAlchemy column definitions can include an "onupdate" parameter. From
the docs:

onupdate - A scalar, Python callable, or ClauseElement representing a
default value to be applied to the column within UPDATE statements,
which will be invoked upon update if this column is not present in the
SET clause of the update. This is a shortcut to using ColumnDefault as a
positional argument with for_update=True.

So if you pass something like sqlalchemy.func.current_timestamp() as the
onupdate parameter to a Column, you should get the behaviour you want.
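
For example (untested):

  import sqlalchemy as sa

  last_updated = sa.Column(sa.DateTime,
                           default=sa.func.current_timestamp(),
                           onupdate=sa.func.current_timestamp())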

Hope that helps,

Simon




[sqlalchemy] Re: ForeignKey on a ForeignKey

2009-08-24 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of asrenzo
> Sent: 24 August 2009 13:48
> To: sqlalchemy
> Subject: [sqlalchemy] Re: ForeignKey on a ForeignKey
> 
> 
> Hi King Simon,
> 
> Thanks for your answer but this doesn't solve my problem.
> If I comment my parent_id definition in the objects_tree definition,
> everything is OK with create_all() invocation.
> 
> If I change the parent_id column definition to :
> 
> sa.Column('parent_id', None, sa.ForeignKey('objects.id',
> ondelete='CASCADE', onupdate='CASCADE'), primary_key=True)
> 
> everything is still OK (parent_id is not a foreign key on another
> foreign key anymore). But as soon as I try :
> 
> sa.Column('parent_id', None, sa.ForeignKey('objects.parent_id',
> ondelete='CASCADE', onupdate='CASCADE'), primary_key=True)
> 
> the create_all() is broken.
> 
> Any idea ?
> 
> Laurent
> 

Is it still the same error that you are getting (NotImplementedError),
or something else? If I run your code against SQLite, I get no errors.
If I run it against MySQL, I get the error "BLOB/TEXT column 'name' used
in key specification without a key length". If I change your 'name'
column to be a String type rather than Text, it runs without a problem.
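
i.e. something like this (the length of 255 is just an example):

  sa.Column('name', sa.String(255), index=True, nullable=False, default='Root')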

(It looks like I was wrong about the None being required - it works fine
without it)

Simon




[sqlalchemy] Re: ForeignKey on a ForeignKey

2009-08-24 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Laurent Rahuel
> Sent: 24 August 2009 12:16
> To: sqlalchemy
> Subject: [sqlalchemy] ForeignKey on a ForeignKey
> 
> 
> Hi all,
> 
> I'm stucked with a problem I'm not able to solve (SA 0.5.2). 
> I hope on 
> of you would be smarter than me.
> 
> Here is the problem:
> 
> """
> import sqlalchemy as sa
> 
> __metadata__ = sa.MetaData()
> 
> OBJECTS = sa.Table('objects', __metadata__,
>  sa.Column('id', sa.Integer, primary_key=True),
>  sa.Column('parent_id', sa.ForeignKey('objects.id', 
> ondelete='CASCADE', onupdate='CASCADE')),
>  sa.Column('name', sa.Text, index=True, nullable=False, 
> default='Root'),
> )
> 
> OBJECTSTREE = sa.Table('objects_tree', __metadata__,
>  sa.Column('id', sa.Integer, primary_key=True),
>  sa.Column('child_id', sa.ForeignKey('objects.id', 
> ondelete='CASCADE', onupdate='CASCADE'), primary_key=True),
>  sa.Column('parent_id', sa.ForeignKey('objects.parent_id', 
> ondelete='CASCADE', onupdate='CASCADE'), primary_key=True),
>  sa.Column('depth', sa.Integer, nullable=False, 
> index=True, default=0),
> )
> 
> When I call the create_all() method from metadata, I always get this 
> error -> sqlalchemy/types.py", line 375, in get_col_spec raise 
> NotImplementedError()
> 

I think your Column definitions are wrong - the second parameter to
Column should be the column type (Integer, String etc.). For foreign
keys, you can pass None, in which case the type will be the same as the
column that the key is pointing at.

eg.

sa.Column('child_id', None, sa.ForeignKey('objects.id',
ondelete='CASCADE', onupdate='CASCADE'), primary_key=True)

Hope that helps,

Simon




[sqlalchemy] Re: negative indexes in query slices?

2009-08-24 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of john smallberries
> Sent: 24 August 2009 08:51
> To: sqlalchemy
> Subject: [sqlalchemy] negative indexes in query slices?
> 
> 
> I just tried limiting a query to the final 10 items of a 30010 item
> table using this slice notation:
>   q = some.Session.query()
>   items = q[-10:]
> the resulting mysql query had no limit or offset clause. Changing the
> form to
>   items = q[q.count()-10:]
> produced:
>   select from data LIMIT 3, 18446744073709551615
> (which worked fine in practice). Changing the form to
>   items = q[q.count()-10:q.count()]
> produced the desired:
>   select from data LIMIT 3, 10
> 
> Is that the expected behavior for negative indices?
> I am using SA 0.5.4p2

I think the problem is that there is no way of specifiying negative
indices in the SQL LIMIT clause (at least for MySQL - I don't know about
other databases), so it would have to be emulated in some way. The only
choices I can think of are:

a) Execute a count() first, as you did above

b) Use a subquery in which the sort order is reversed

Both would have to transform the query substantially, so I'm not sure if
it's the sort of thing that SA should do automatically.
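
For what it's worth, the reversed-order idea can often be done without a
subquery if you don't mind re-reversing the rows in Python afterwards
(untested - 'Data' stands for your mapped class, and it assumes an 'id'
column to order by):

  items = q.order_by(Data.id.desc()).limit(10).all()
  items.reverse()   # back to ascending order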

Simon




[sqlalchemy] Re: table creation oddity

2009-08-21 Thread King Simon-NFHD78

Faheem Mitha wrote:
> Thanks for the fast and helpful response. This looks like an artifact of
> how I am creating the table. I wonder if this would still show up if I
> explicitly specified the id. I could check this. Also, presumably if I had
> other cols in the table, they wouldn't show up in sqla's printout.

The sequence isn't directly related to the table (as far as I'm aware).
Instead, when you insert a row into the table, you do something like
"SELECT NEXTVAL('<sequence_name>')" to get the next ID value. SQLAlchemy
does this automatically for you.


> 
> I'm still puzzled why sqlalchemy lists the entries in the table, when 
> querying directly via psql doesn't show it.
> 

Your SQLAlchemy operations are happening within a transaction that never
gets committed. If you turned on SQL echoing (use echo=True or
echo='debug' in your call to create_engine), you would see that your
object is actually being inserted into the 'mytest' table. When you
SELECT from the table, you are still in the same transaction, so you see
the rows that you have inserted.

However, when the script exits, the transaction gets rolled back, so you
never see the new rows in psql. The only evidence that anything ever
happened is that the sequence has moved on (see the note at the bottom
of http://www.postgresql.org/docs/8.1/static/functions-sequence.html)

Simon




[sqlalchemy] Re: index in SA

2009-08-21 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of rajasekhar911
> Sent: 21 August 2009 10:25
> To: sqlalchemy
> Subject: [sqlalchemy] Re: index in SA
> 
> 
> i tried
> 
> class MyClass:
>     __tablename__ = 'my_table'
> 
>     id = Column(Integer, primary_key=True)
>     name = Column(String, nullable=False)
>     type = Column(String, nullable=False)
>     __table_args__ = (
>         Index('ix_name_type', name, type, unique=True)
>     )
> 
> it errors out
> 
> __table_args__ = (
>   File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.5.5-py2.4.egg/
> sqlalchemy/schema.py", line 1461, in __init__
> self._init_items(*columns)
>   File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.5.5-py2.4.egg/
> sqlalchemy/schema.py", line 1465, in _init_items
> self.append_column(_to_schema_column(column))
>   File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.5.5-py2.4.egg/
> sqlalchemy/schema.py", line 1476, in append_column
> self._set_parent(column.table)
>   File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.5.5-py2.4.egg/
> sqlalchemy/schema.py", line 1469, in _set_parent
> self.metadata = table.metadata
> AttributeError: 'NoneType' object has no attribute 'metadata'
> 
> thanks

The problem is that at the time you are calling "Index", the table
object doesn't exist. Apparently the Index object doesn't work with
declarative in this way.

However, if you just move your Index definition outside the class
definition completely, I think it should be fine.

ie.

class MyClass(Base):
    __tablename__ = 'my_table'

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    type = Column(String, nullable=False)

Index('ix_name_type', MyClass.name, MyClass.type, unique=True)

Simon




[sqlalchemy] Re: new questions

2009-08-21 Thread King Simon-NFHD78

darkblueB wrote:
>
> ok, I have done this
> I have an object def and a __table__ for all of the "main actors" in
> my setup
> I defined an __init__() for one of them
> I use the declarative base
> 
> when the objects get created, they seem to have a __mapper__ in them
> already
> (the "primary mapper")
> 
> but when I do simple queries that would draw upon a foreign key
> constraint present in the DB
> I dont get any lookups
> 
> for example, (attribute? - I dont know all the terms here)
> //meta is setup already, sqlite
> // read in an existing DB
> cBase = declarative_base(metadata=meta)
> 
> class saProject(cBase):
> __table__ = meta.tables['bt_projects']
> 
> def __init__(self, inName, inFacilName, inImpModifier="",
> inProbModifier="", inIsInviteOnly=0, inAllowWt=0, inAllowEvtSummary=1,
> inAllowAssSum=1, inShowEvtRateAuth=1, inShowEvtRateCom=1 ):
> self.name = inName
> self.facilitator = inFacilName
> self.impact_mod = inImpModifier
> self.prob_mod = inProbModifier
> .
> 
> so now, I read one project in to tProj
> tProj.name => 'FBLS'
> 
> but
> tProj.impact_mod => Decimal("0")
> *not* a lookup with the foreign key
> 
> here is the relevant SQL CREATE DB
> 
> CREATE TABLE 'bt_projects' (
> name TEXT UNIQUE,
> facilitator TEXT,
> active NUMERIC DEFAULT 1,
> impact_mod NUMERIC DEFAULT 0,
> prob_mod NUMERIC DEFAULT 0,
> 
> key INTEGER NOT NULL PRIMARY KEY,
> FOREIGN KEY (impact_mod) REFERENCES bt_evt_prob_modifiers(key),
> FOREIGN KEY (prob_mod) REFERENCES bt_evt_impact_modifiers(key)
> );
> 
> so, I have to make an explicit mapper, too?
> perhaps I am missing something
>   -Brian
> 

The "lookup with foreign key" that you are asking for is what SQLALchemy
calls a "relation" (See
http://www.sqlalchemy.org/docs/05/ormtutorial.html#building-a-relation).
You have to create them explicitly, and they can't have the same name as
the foreign key itself. When the foreign keys are simple, SA can
normally figure out the join conditions by itself.

For example, to get the behaviour your were expected, your class
definitions should look something like this:

import sqlalchemy.orm as saorm

class saEvtProbModifiers(cBase):
    __table__ = meta.tables['bt_evt_prob_modifiers']

class saEvtImpactModifiers(cBase):
    __table__ = meta.tables['bt_evt_impact_modifiers']

class saProject(cBase):
    __table__ = meta.tables['bt_projects']

    prob_modifier = saorm.relation(saEvtProbModifiers)
    impact_modifier = saorm.relation(saEvtImpactModifiers)


Now when you access tProj.impact_modifier, you should get an instance of
the saEvtImpactModifiers class.

The relation function gives you lots of other options. You can configure
a 'back reference' (so you could automatically add a 'projects' property
to saEvtImpactModifiers and saEvtProbModifiers for example). You can
also explicitly define the join conditions if SA is unable to work them
out.
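
For example, to get the back reference (untested):

class saProject(cBase):
    __table__ = meta.tables['bt_projects']

    impact_modifier = saorm.relation(saEvtImpactModifiers, backref='projects')

Each saEvtImpactModifiers instance then gets a 'projects' list containing the
saProject rows that reference it.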

It's worth working through the ORM tutorial
(http://www.sqlalchemy.org/docs/05/ormtutorial.html) which covers all of
this.

Hope that helps,

Simon




[sqlalchemy] Re: index in SA

2009-08-21 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of rajasekhar911
> Sent: 21 August 2009 07:30
> To: sqlalchemy
> Subject: [sqlalchemy] Re: index in SA
> 
> 
> i want to add a composite index to the class inherited from
> declarative_base
> 
> I tried this,
> 
> class MyClass:
>     __tablename__ = 'my_table'
> 
>     id = Column(Integer, primary_key=True)
>     name = Column(String, nullable=False)
>     type = Column(String, nullable=False)
>     __table_args__ = (
>         Index('ix_name_type', 'name', 'type', unique=True)
>     )
> 
> gave me an error,
> File "/m2svn/trunk/src/model/MyClass.py", line 32, in MyClass
> __table_args__ = (
>   File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.5.5-py2.4.egg/
> sqlalchemy/schema.py", line 1461, in __init__
> self._init_items(*columns)
>   File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.5.5-py2.4.egg/
> sqlalchemy/schema.py", line 1465, in _init_items
> self.append_column(_to_schema_column(column))
>   File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.5.5-py2.4.egg/
> sqlalchemy/schema.py", line 2145, in _to_schema_column
> raise exc.ArgumentError("schema.Column object expected")
> sqlalchemy.exc.ArgumentError: schema.Column object expected
> 

I'm not sure if this is the root cause of your error, but __table_args__
must either be a dictionary or a tuple where the last element is a
dictionary (according to
http://www.sqlalchemy.org/docs/05/reference/ext/declarative.html#table-configuration)

Also, I think Index may require actual column parameters rather than
strings (according to
http://www.sqlalchemy.org/docs/05/metadata.html#indexes). You may be
able to use something like the following after your class definition:

  Index('ix_name_type', MyClass.__table__.c.name,
MyClass.__table__.c.type, unique=True)

or even

  Index('ix_name_type', MyClass.name, MyClass.type, unique=True)

Hope that helps,

Simon




[sqlalchemy] Re: table creation oddity

2009-08-21 Thread King Simon-NFHD78

Faheem Mitha wrote:
> 
> Hi,
> 
> The following script is then followed by its output, and 
> finally by the 
> table output.
> 
> I don't get what is going on here. Yes, I should commit the 
> session, and 
> the table is empty as expected, but why does the id keep 
> incrementing on 
> successive runs, and where is this table living, if not in 
> the db? I'd 
> expect to see the id stay at 1. Also, I'd expect to see something in 
> session.dirty. Deleting the table resets the counter back to 1, so it 
> looks like it is using the table in some way, but as already 
> stated, the 
> table shows as empty via a select * command.
> 
> If anyone can clarify what is going on here and satisfy my 
> curiosity, I'd 
> appreciate it. Please CC me on any reply. Thanks.
> 
>
> Regards, Faheem.
> 
> ***
> oddity.py
> ***
> 
> from sqlalchemy import *
> from sqlalchemy.orm import mapper, relation, sessionmaker
> 
> def add_obj(session, obj):
>     """ Check if object primary key exists in db. If so, exit, else
>     add.
>     """
>     from sqlalchemy import and_
>     from sqlalchemy.orm import object_mapper
>     mapper = object_mapper(obj)
>     pid = mapper.primary_key_from_instance(obj)
>     criterion = and_(*(col == val for col, val in
>         zip(mapper.primary_key, mapper.primary_key_from_instance(obj))))
>     if session.query(obj.__class__).filter(criterion).count() > 0:
>         print "%s object with id %s is already in db." % (type(obj).__name__, pid)
>         exit
>     else:
>         session.add(obj)
> 
> metadata = MetaData()
> 
> mytest_table = Table(
>  'mytest', metadata,
>  Column('id', Integer, primary_key=True),
>  )
> 
> class MyTest(object):
>     def __init__(self):
>         pass
> 
> mapper(MyTest, mytest_table)
> 
> dbstring = "postgres://username:pas...@localhost:5432/oddity"
> db = create_engine(dbstring)
> metadata.bind = db
> metadata.create_all()
> conn = db.connect()
> 
> Session = sessionmaker()
> session = Session()
> t1 = MyTest()
> add_obj(session, t1)
> print session.query(MyTest).count()
> 
> stmt = mytest_table.select()
> for row in stmt.execute():
>  print row
> 
> stmt = select([mytest_table.c.id])
> print "anno statement is %s\n"%stmt
> for row in stmt.execute():
>  print row
> 
> print "session.dirty is %s"%session.dirty
> 
> #session.commit()
> #session.flush()
> #conn.close()
> 
> *
> script output
> *
> $ python oddity.py
> 1
> (1,)
> anno statement is SELECT mytest.id
> FROM mytest
> 
> (1,)
> session.dirty is IdentitySet([])
> $ python oddity.py
> 1
> (2,)
> anno statement is SELECT mytest.id
> FROM mytest
> 
> (2,)
> session.dirty is IdentitySet([])
> 
> 
> table output
> 
> oddity=# select * from mytest;
>   id
> 
> (0 rows)
> 

I've never used postgres, but I believe auto-incrementing counters are
implemented using database sequences. I think these are incremented
outside of a transaction - this ensures that two separate database
connections using the sequence at the same time will get distinct
values. So although you aren't committing your transaction, the sequence
still advances.

I guess the sequence must be associated with the table, so when you drop
the table it destroys the sequence as well (I don't know if this is SA
behaviour or PG behaviour).

session.dirty only contains objects that have been loaded from the DB
and subsequently modified. You don't ever actually modify your object,
so it shouldn't appear in session.dirty. (It should appear in
session.new though)

Hope that helps,

Simon




[sqlalchemy] Re: new questions

2009-08-19 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of darkblueB
> Sent: 19 August 2009 02:58
> To: sqlalchemy
> Subject: [sqlalchemy] Re: new questions
> 

> Hi Simon
> 
> thanks for the reply.. Your second part is straightforward..
> The first one, not so much..
> 
> I have gone back to a simple
> meta.bind = //my engine here
> meta.reflect()
> 
> I can see meta.tables, and meta.tables_sorted()
> but how do I know what mapped objects exist?
> 
> (there should be about 12 tables, with a few one to many and one many
> to many defined)
> I feel like time is passing by, I would like to use ORM but this is
> new to me
> thanks much
>   -Brian
> 

Ah - I see what you mean now. meta.reflect() will only create Table
objects (part of the SQL expression language layer). It doesn't
automatically create mapped classes for you. If you want to use the
declarative system, you would need to create at least a basic class for
each of your reflected tables.

For example (untested):

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

meta = sa.MetaData(bind='sqlite:///your.db')
meta.reflect()

Base = declarative_base(metadata=meta)

class SomeObject(Base):
    __table__ = meta.tables['some_table']


class SomeOtherObject(Base):
    __table__ = meta.tables['some_other_table']


I suppose you could automate this by iterating over meta.tables and
creating the classes in a loop. You would end up with something similar
to SqlSoup (http://www.sqlalchemy.org/trac/wiki/SqlSoup). I suppose it
should be possible to automatically create ORM-level relations by
inspecting your foreign keys as well if you really wanted.
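
A rough sketch of that loop (untested - the generated class names here are
just capitalised table names, and tables without a primary key are skipped
because the ORM can't map them):

  classes = {}
  for name, table in meta.tables.items():
      if table.primary_key.columns:
          classes[name] = type(str(name).capitalize(), (Base,), {'__table__': table})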

Simon




[sqlalchemy] Re: renaming columns

2009-08-17 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Michele Simionato
> Sent: 17 August 2009 16:11
> To: sqlalchemy
> Subject: [sqlalchemy] renaming columns
> 
> 
> There should be an easy way to do this, but I cannot find it in the
> documentation.
> I would like to generate a query with a renamed column, something like
> this:
> 
> SELECT acolumn AS newcol FROM atable
> 
> I tried things like
> 
> print atable.select().with_only_columns([atable.c.acolumn.as_
> ('newcol')])
> 
> What's the right syntax? Thanks,
> 
>   M.S.

I think you want something like .label('newcol'). For example:


  import sqlalchemy as sa

  print sa.select([atable.c.acolumn.label('newcol')])
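
Or, closer to your original attempt:

  print atable.select().with_only_columns([atable.c.acolumn.label('newcol')])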


Hope that helps,

Simon




[sqlalchemy] Re: new questions

2009-08-17 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of darkblueB
> Sent: 17 August 2009 06:31
> To: sqlalchemy
> Subject: [sqlalchemy] new questions
> 
> 
> Hi-
>   I have just read a lot and gone through some exercises, but am at an
> early stage here..
> Two questions:
> 
> 1) I have an existing database, and want to use sa to work with it..
> so "reflect" seems like what I want. I have loaded up the database
> successfully - sqlite - not with reflect(), but instead with named
> tables and autoload=true.
> 
>   Now I am reading about declarative base class.. is a reflect/
> autoload approach preemptive of declarative? do I have to make
> mappers? there are a number of foreign key relationships, declared in
> the db.. how do I find out more about which ones are being
> instantiated for me?
> 
> 2) Session vs connect.. from my limited experience, conn =
> engine.connect() seems natural and all I need. I am reading this intro
> material and they are going on at length with Session. Is Session
> really necessary? so much more desirable?
> 
>  I am really looking for an expedient use of sa, I dont think I will
> get deeply into this.. pragmatic!
>   thanks in advance
>-Brian
> 

Hi,

In answer to your first question, the declarative extension is pretty
flexible. You can still use autoload to get the table definitions, and
then explicitly add anything that the autoload doesn't pick up. There is
an example at
http://www.sqlalchemy.org/docs/05/reference/ext/declarative.html#table-configuration

Once your declarative class has been created, you can access the
underlying table object as "YourMappedClass.__table__". This table
should have a foreign_keys property which lists all the foreign keys, so
you can see whether they have been detected correctly.
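
For example:

  for fk in YourMappedClass.__table__.foreign_keys:
      print fk.parent, '->', fk.column   # local column -> referenced column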

As for your second question, this is mostly about the difference between
the "SQL Expression Language" layer of SQLAlchemy, which generally works
with engines and connections, and the "ORM" layer, which normally works
with the Session. The description at
http://www.sqlalchemy.org/docs/05/session.html#what-does-the-session-do
is probably a reasonable description of why you might want to use a
Session.

If you are only ever reading data from the database, and you don't want
or need to build object relationships on top of your SQL data, you can
easily get away with the SQL Expression Language layer and not need the
ORM. However, if you need to do much more than basic manipulation of the
data, the ORM can make your life much easier.

Hope that helps,

Simon




[sqlalchemy] Re: Backrefs and the identity map

2009-08-03 Thread King Simon-NFHD78

Michael Bayer wrote:
>
> King Simon-NFHD78 wrote:
> >
> > to
> >
> > master_id = sa.Column(sa.Integer,
> > sa.ForeignKey(Master.__table__.c.id))
> >
> > ...and now it seems to work! So is this a bug?
> 
> yes, that would be a bug.   There are some other scenarios 
> where this kind
> of thing occurs (lazy clause doesn't optimize) related to 
> inheritance but
> they are less clear cut as "bugs".
> 
> Another point though, if you upgrade to trunk /0.5.6, 
> many-to-one backrefs
> wont even fire off anymore if unloaded, when they are accessed in the
> backref context.
> 
> anyway my hunch is that the Column you get back from Master.id is a
> "proxy" to the real one, so we'd have to figure some way for 
> ForeignKey to
> navigate into the real column (dangerous, since it is 
> assuming) or get the
> "comparison" to honor proxy columns (more likely).   if you can file a
> ticket that would be very helpful otherwise im going to forget.
> 

Done. In case it helps, the proxy is an
sqlalchemy.sql.util.AnnotatedColumn. If you call '_deannotate' on it,
the resulting object compares equal with the original column.

Thanks again,

Simon




[sqlalchemy] Re: Backrefs and the identity map

2009-08-03 Thread King Simon-NFHD78

I wrote:
> Hi,
> 
> Does accessing a backref always have to issue SQL, even if 
> the object to
> be loaded already exists in the identity map? For example, if I have a
> many-to-one lazy-loaded relationship from Master to Detail with a
> backref, the statement "master.details[0].master" will issue 
> SQL for the
> '.master' backref, even though it already exists in the 
> session. I know
>
> [SNIP]
> 
> So "query.get" doesn't issue a query, but "master.details[0].master"
> does. Is there any way of making the backref use query.get, 
> and thereby
> use the identity map?
> 

I delved into the source to find out how this works, and I see that the
LazyLoader strategy has an attribute 'use_get' which is meant to do
exactly this. However the test to see whether the lazywhere clause is
the same as the mapper's get() clause is failing:

In [1]: import satest2

In [2]: s = satest2.Detail.master.property.strategy

In [3]: s.mapper._get_clause[0].compare(s._LazyLoader__lazywhere)
Out[3]: False

In [4]: print s.mapper._get_clause[0]
master.id = ?

In [5]: print s._LazyLoader__lazywhere
master.id = ?

In [6]: print s.mapper._get_clause[0].left
master.id

In [7]: print s._LazyLoader__lazywhere.left
master.id

In [8]: print
s.mapper._get_clause[0].left.compare(s._LazyLoader__lazywhere.left)
False

So even though both clauses are binary expressions representing
"master.id = ?", the master.id in each case is different.

On the offchance, I changed the foreign key definition from:

master_id = sa.Column(sa.Integer, sa.ForeignKey(Master.id))

to

master_id = sa.Column(sa.Integer,
sa.ForeignKey(Master.__table__.c.id))

...and now it seems to work! So is this a bug?

Thanks,

Simon




[sqlalchemy] Backrefs and the identity map

2009-08-03 Thread King Simon-NFHD78

Hi,

Does accessing a backref always have to issue SQL, even if the object to
be loaded already exists in the identity map? For example, if I have a
many-to-one lazy-loaded relationship from Master to Detail with a
backref, the statement "master.details[0].master" will issue SQL for the
'.master' backref, even though it already exists in the session. I know
I can eagerload('details.master'), but I was just wondering if there was
any way of getting the relation mechanism to check the identity map
before issuing the SQL.

Here is an example:


import sqlalchemy as sa
import sqlalchemy.orm as saorm
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

Base.metadata.bind = sa.create_engine('sqlite:///:memory:')

class Master(Base):
    __tablename__ = 'master'
    id = sa.Column(sa.Integer, primary_key=True)


class Detail(Base):
    __tablename__ = 'detail'
    id = sa.Column(sa.Integer, primary_key=True)
    master_id = sa.Column(None, sa.ForeignKey(Master.id))
    master = saorm.relation(Master, backref='details')

def dbg(msg):
    print "\n %s " % msg

def test():
    dbg('SQLAlchemy version: %s' % sa.__version__)
    Base.metadata.create_all()
    Session = saorm.sessionmaker()
    sess = Session()

    m = Master()
    for i in range(10):
        m.details.append(Detail())
    sess.add(m)
    sess.commit()
    sess.close()

    Base.metadata.bind.echo = True
    sess = Session()
    dbg("Getting Master")
    m = sess.query(Master).first()
    dbg("Getting details")
    details = m.details
    dbg("Getting master of first detail")
    m2 = details[0].master
    assert m2 is m
    dbg("Getting master again via query.get")
    m3 = sess.query(Master).get(m.id)
    assert m3 is m2


if __name__ == '__main__':
    test()



And here is the output:

 SQLAlchemy version: 0.5.5 

 Getting Master 
2009-08-03 13:17:12,445 INFO sqlalchemy.engine.base.Engine.0x...7ecL
BEGIN
2009-08-03 13:17:12,447 INFO sqlalchemy.engine.base.Engine.0x...7ecL
SELECT master.id AS master_id
FROM master
 LIMIT 1 OFFSET 0
2009-08-03 13:17:12,447 INFO sqlalchemy.engine.base.Engine.0x...7ecL []

 Getting details 
2009-08-03 13:17:12,451 INFO sqlalchemy.engine.base.Engine.0x...7ecL
SELECT detail.id AS detail_id, detail.master_id AS detail_master_id
FROM detail
WHERE ? = detail.master_id
2009-08-03 13:17:12,451 INFO sqlalchemy.engine.base.Engine.0x...7ecL [1]

 Getting master of first detail 
2009-08-03 13:17:12,456 INFO sqlalchemy.engine.base.Engine.0x...7ecL
SELECT master.id AS master_id
FROM master
WHERE master.id = ?
2009-08-03 13:17:12,457 INFO sqlalchemy.engine.base.Engine.0x...7ecL [1]

 Getting master again via query.get 

 Finished 


So "query.get" doesn't issue a query, but "master.details[0].master"
does. Is there any way of making the backref use query.get, and thereby
use the identity map?

Thanks a lot,

Simon









[sqlalchemy] Re: Session mapper and Class.query() method

2009-07-27 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of naktinis
> Sent: 27 July 2009 13:07
> To: sqlalchemy
> Subject: [sqlalchemy] Session mapper and Class.query() method
> 
> 
> I've seen that Session.mapper is deprecated. While I agree that auto-
> adding objects to a session may be a bad idea, I don't see why is it
> so bad to add the query() method directly to the mapped class.
> 
> Is there a reason, why I shouldn't do it?
>

The query() method assumes that you are using a scoped session. As long
as you know that you are always going to use a scoped session, you can
use:

  cls.query = scoped_session.query_property()

...to add it back in again. If you ever use explicit sessions, it might
be confusing that sometimes you say "cls.query()", and other times say
"session.query(cls)", but that's up to you. That's the only reason I can
think of for being cautious about cls.query().
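
A minimal sketch (untested - 'User' is just a placeholder for one of your
mapped classes):

  from sqlalchemy.orm import scoped_session, sessionmaker

  Session = scoped_session(sessionmaker())

  class User(object):
      pass
  # ... map User as usual ...

  User.query = Session.query_property()
  # User.query is now equivalent to Session.query(User)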

Simon




[sqlalchemy] Re: autoload of db view treating columns as Decimal

2009-07-23 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Randy Syring
> Sent: 23 July 2009 07:23
> To: sqlalchemy
> Subject: [sqlalchemy] Re: autoload of db view treating 
> columns as Decimal
> 
> 
> Well, I am not really sure what the problem is, but it looks like it
> might not have anything to do with the autoload.  I tried just using a
> select statement with a from_obj parameter and it looks to me like SA
> is getting things wrong when it looks for the field name.  Here is the
> script i used:
> 
> http://paste.pocoo.org/show/130239/
> 
> Here is the key:value pairs for each row/column when trying to select
> from an sqlite view that uses a case statement:
> row 0
> id: 1
> name: jack
> 'something': something
> case
> when name == 'jack' then 1
> else 0
> end: 1
> row 1
> id: 2
> name: sam
> 'something': something
> case
> when name == 'jack' then 1
> else 0
> end: 0
> 
> The same test running against Postgres is:
> 
> row 0
> id: 1
> name: jack
> staticfield: something
> isjack: 1
> row 1
> id: 2
> name: sam
> staticfield: something
> isjack: 0
> 
> 

When I run your script, I get the correct output, using Python 2.5.1, SA
0.5.4p2, sqlite3 version 2.3.2. Perhaps the bug is in the version of
SQLite that you are using?

What happens if you run your query using the SQLite command-line tools?

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Dynamic making of the where clause

2009-06-26 Thread King Simon-NFHD78

> On Jun 23, 3:32 pm, Ash  wrote:
> > Hello,
> >
> > I am tryin to make the dynamic where clause using 
> append_whereclause.
> >
> > But how i can do that,  For eg :
> >
> > I have a==b and c in (1,2,3,4) or d like %s
> >
> > So i made three sqlalchemy expression
> >
> > 1. a==b
> > 2. c in (1,2,3,4) [ using in_]
> > 3. d like %s [using like]
> >
> > now i want this 3 to stuff in where clause .
> >
> > I created select like sel = select()
> >
> > How can i make the where clause which have and and  or both uisng
> > append_whereclause.
> >

I think append_whereclause always uses AND, so you need to connect these 
conditions into a single clause and call append_whereclause just once.

It's not clear how you want the grouping of your conditions to work. Is it:

  (a==b AND c in (1, 2, 3, 4)) OR d like %s

Or

  a==b AND (c in (1, 2, 3, 4) OR d like %s)


I think you need to do something like this (for the first option):

from sqlalchemy import and_, or_

clause = or_(and_(a == b, c.in_([1, 2, 3, 4])),
             d.like(something))
sel.append_whereclause(clause)

You may also be able to use the '&' and '|' operators, as long as you are 
careful with brackets. See the docs at 
http://www.sqlalchemy.org/docs/05/sqlexpression.html#conjunctions
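
For example (untested), the operator version of the first grouping would be:

clause = ((a == b) & c.in_([1, 2, 3, 4])) | d.like(something)
sel.append_whereclause(clause)

The extra brackets matter because '&' and '|' bind more tightly than '==' in
Python.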

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: aggregation with count and webhelpers.paginate

2009-06-22 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Hollister
> Sent: 20 June 2009 02:15
> To: sqlalchemy
> Subject: [sqlalchemy] Re: aggregation with count and 
> webhelpers.paginate
> 
> 
> Well, that worked great:
> 
> q = meta.Session.query(m.Hit.referer, func.count(m.Hit.id))\
> .group_by(m.Hit.referer)\
> .order_by(func.count(m.Hit.id).desc())
> 
> Thanks!
> 
> ps: Is there a better way to specify the count in the order_by?
> 

If it's just that you don't like repeating yourself, you should be able
to save the result of func.count (untested):

 hit_count = func.count(m.Hit.id)
 q = (meta.Session.query(m.Hit.referer, hit_count)
      .group_by(m.Hit.referer)
      .order_by(hit_count.desc()))


Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: SqlAlchemy reflection error in TurboGears 2

2009-06-04 Thread King Simon-NFHD78

Mike Driscoll wrote:
> 
> Well, I did the fake id column because SA was throwing an error on a
> the reflected table since it didn't have a primary key defined. I
> think I may have found a workaround though.

From SA's point of view, the primary key just has to be a set of columns
that uniquely identify a row in the database. It doesn't explicitly have
to be defined as a primary key in the database.

The reason for this is that if you retrieve an object (a row) from the
database and modify it, there is no way to save those changes back to
the database unless you can uniquely identify that row, so you know what
to put in the WHERE clause of the UPDATE statement.

If there really isn't a set of columns that uniquely identifies a row in
the database, and you only want read-only support, you might be able to
tell SA that the primary key is made up of all the columns in the table.
However, if there are any duplicate rows in the table, SA will only
return a single object for those rows because it has no way of knowing
that they are different.
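
For example (untested, with an invented table), you can point the mapper at
the set of columns that you know to be unique, even though the database has
no PRIMARY KEY constraint defined:

from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import mapper

engine = create_engine('sqlite:///example.db')
metadata = MetaData()

# Reflect the existing table, which has no primary key defined.
log_table = Table('log_entries', metadata,
                  autoload=True, autoload_with=engine)

class LogEntry(object):
    pass

# Tell SA to treat (host, logged_at) as the identifying columns.
mapper(LogEntry, log_table,
       primary_key=[log_table.c.host, log_table.c.logged_at])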

If you do have duplicate rows, you might be better working with the
SQL-only layer of SA, rather than the ORM.

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Query -> column names

2009-06-04 Thread King Simon-NFHD78

George Sakkis wrote:
> 
> Is there a (public) API for getting the column names of a given Query
> instance and other similar introspection needs  ? I didn't find
> anything related in the docs but after digging in the code I came up
> with
> col_names = [e._result_label for e in q._entities]
> but I'm not sure how stable and robust this is.
> 
> George

Query instances have a 'statement' property that returns the underlying
SELECT object. You can then inspect its 'columns' attribute:

columns = list(query.statement.columns)

for column in columns:
    print column.name

Would that do what you want?

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: This join does not fill in the collection

2009-05-28 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Marcin Krol
> Sent: 28 May 2009 13:57
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: This join does not fill in the collection
> 
> 
> Hello Simon,
> 
> This answered my question, thanks! (I don't know how I missed that in 
> docs..)
> 
> This is theoretically theoretical:
> 
> But suppose I *did* some selection on Hosts and still used 
> .options(eagerload('hosts')) on query - would that screw smth up?
> 
> In particular, if I called session.commit() later, would that save 
> Reservations with changed collections?
> 
> Regards,
> mk
> 

I don't know the answer for certain, but I suspect that if you only had
a half-loaded collection, you could add and remove items from that
collection, and those changes would be reflected in the database,
without affecting items that hadn't been loaded. I don't think SA would
do the bulk UPDATES or DELETES that would be necessary to affect items
that hadn't been loaded.

(I also don't know what would happen if you deleted a Reservation with a
half-loaded hosts collection. It probably depends on the cascade
settings on the relation)

Again, these are only guesses. It should be easy enough for you to knock
up a test case to find out the answer for sure.

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: This join does not fill in the collection

2009-05-28 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Marcin Krol
> Sent: 28 May 2009 10:09
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] This join does not fill in the collection
> 
> 
> Hello everyone,
> 
> session.query(Reservation, 
> Host).join(Reservation.hosts).filter(Reservation ...).all()
> 
> Reservation.hosts is not filled in, when I access .hosts 
> collection in 
> individual Reservations, SQLA issues queries to fill in the 
> collection 
> one Reservation by one.
> 
> Again, Reservation and Hosts are many to many relation.
> 
> I certainly can group the Host objects to particular 
> Reservations myself 
>   later.
> 
> But... Is there a way to make SQLA do it itself at query 
> time? I haven't 
> found anything in the docs that would suggest that..
> 
> Regards,
> mk
> 

If your query had also filtered by some of the host columns, the result
set wouldn't contain all the hosts for each Reservation returned. For
this reason, SQLAlchemy doesn't assume that just because the host
columns are available they represent the entire Reservation.hosts
collection.

If you want Reservation.hosts to be filled in by the query, you want
"eager loading". This can be configured for all queries when you define
the relation (by setting lazy=False), or on a query-by-query basis by
adding an 'eagerload' option to the query. Both of these methods will
add an extra join to the hosts table in your query (separate from any
join that you explicitly ask for)

If you've added an explicit join and you know that the result set
already contains all the information you need, you can use the
contains_eager option to indicate that the relation should be filled
in from the information in the result set.

Documentation for most of this is at
http://www.sqlalchemy.org/docs/05/mappers.html#configuring-loader-strategies-lazy-loading-eager-loading
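
For example (untested), assuming the Reservation/Host mapping from your
message:

from sqlalchemy.orm import eagerload, contains_eager

# 1. Eager load for this query only; SA adds its own join to hosts:
q = session.query(Reservation).options(eagerload('hosts'))

# 2. Or configure it on the relation itself with lazy=False, so every
#    query eager-loads the collection.

# 3. Explicit join, reusing those rows to fill Reservation.hosts:
q = (session.query(Reservation)
     .join(Reservation.hosts)
     .options(contains_eager('hosts')))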

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: mapping class against arbitrary SQL expression

2009-04-17 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of JanW
> Sent: 17 April 2009 14:45
> To: sqlalchemy
> Subject: [sqlalchemy] Re: mapping class against arbitrary SQL 
> expression
> 
> 
> OK, thanks,
> 
> it does work if you make an alias on the select like this:
> s = select([left.c.car_id,
> left.c.temperature.label('left_temperature'),
> right.c.temperature.label('right_temperature')],
>from_obj=tables,
>whereclause=and_(left.c.side == 'left',
> right.c.side == 'right')).alias('carside')
> 
> Thanks a lot!
> 
> Now, is it really needed to translate my SQL query first to SA-speak
> or could I use the SQL directly in some way? (I have many old projects
> with SQL embedded in Perl scripts, so it would be someway easier is I
> can transfer the SQL directly).
> 
> From session.query() there is something like from_statement
> (SQL_string) but that won't work here I think.
> 
> Again, many thanks,
> 
> Jan.
> 

I don't know the answer, but I suspect this will be a problem. I think
SA needs to know what columns are going to be returned from the select
statement so that it can set up properties on the mapped class. I don't
know of any way that you can mark a text block as a Selectable.

Sorry I can't be more help,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: mapping class against arbitrary SQL expression

2009-04-17 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of JanW
> Sent: 17 April 2009 13:18
> To: sqlalchemy
> Subject: [sqlalchemy] mapping class against arbitrary SQL expression
> 
> 
> Hi,
> 
> is there a way to map a class against an arbitrary SQL expression
> (read-only would be OK)? I can't find the correct way to define the
> selectable for the mapper.
> 
> Example:
> this table:
> carside_table = Table(
> 'carside', metadata,
> Column('id', Integer, primary_key=True),
> Column('car_id', Integer),
> Column('side', Text),
> Column('temperature', Float),
> )
> 
> with a dummy class;
> class CarSide(object):
> pass
> 
> And I want to use this SQL expression to map the class:
> SELECT
> left.car_id
> left.temperature AS left_temperature
> right.temperature AS right_temperature
> FROM carside AS left
> JOIN carside AS right
> ON left.car_id=right.car_id
> WHERE
> left.side = "left" AND
> right.side = "right"
> ;
> 
> Many thanks,
> 
> Jan.
> 

I think the approach should look something like this:

#---

from sqlalchemy import *
from sqlalchemy import orm

metadata = MetaData()
carside_table = Table(
'carside', metadata,
Column('id', Integer, primary_key=True),
Column('car_id', Integer),
Column('side', Text),
Column('temperature', Float),
)

left = carside_table.alias('left')
right = carside_table.alias('right')

tables = left.join(right, left.c.car_id == right.c.car_id)

s = select([left.c.car_id,
left.c.temperature.label('left_temperature'),
right.c.temperature.label('right_temperature')],
   from_obj=tables,
   whereclause=and_(left.c.side == 'left',
right.c.side == 'right'))

class CarSide(object):
pass

orm.mapper(CarSide, s, primary_key=[s.c.car_id])

#---

...but it fails on the last line with the message "Mapper
Mapper|CarSide|%(3069523404 anon)s could not assemble any primary key
columns for mapped table '%(3069523404 anon)s'". I had hoped that
passing the primary_key parameter to mapper would have solved that, but
it doesn't. I'm not sure why.

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Insertion issues

2009-04-03 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Tanmoy
> Sent: 03 April 2009 14:30
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: Insertion issues
> 
> import sqlalchemy
> from sqlalchemy import *
> 
> engine = create_engine('mysql://root:voxta...@localhost/stock')
> 
> metadata = MetaData()
> 
> users=Table('NSE', metadata,
> Column('Company_ID',String(40),primary_key=True),
> Column('10:00',Numeric(8,2)),
> )
> 
> metadata.create_all(engine)
> 
> conn = engine.connect()
> 
> conn.execute(users.insert(), [{'name':'joe', '10:00':'1200'}])
> 
> Run this snippet.if u cld..thr r errors 
> cropping up...
> 
> 
> Tom
> 

You aren't passing a value for the 'Company_ID' column. If I change your
'name' parameter to 'Company_ID' instead, and run the script on sqlite
(ie. dburi = 'sqlite:///:memory:'), it works for me.
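
For example (untested), keeping your table definition as it is:

conn.execute(users.insert(), [{'Company_ID': 'joe', '10:00': 1200}])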

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: "Class.query" vs "DBSession.query(Class)"

2009-02-27 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Sanjay
> Sent: 27 February 2009 05:52
> To: sqlalchemy
> Subject: [sqlalchemy] "Class.query" vs "DBSession.query(Class)"
> 
> 
> Hi,
> 
> There are two styles of writing code for querying: the assignmapper
> style, i.e. "Class.query" vs. the standard style as documented in
> SQLAlchemy tutorial, i.e. "DBSession.query(Class)."
> 
> The assignmapper style seems simpler and intuitive. Curious to know
> why it is not the standard way. Are there any disadvantages?
> 
> Also refer 
> http://groups.google.co.in/group/turbogears/browse_thread/thre
> ad/8f3b4c4da33d69c8
> 
> thanks,
> Sanjay
> 

Class.query can only work if you are using a scoped session, which isn't
appropriate for many kinds of application. Session.query(Class) is
completely general and will work anywhere, and is therefore more
appropriate for use in the documentation.

As the post in the TurboGears thread points out, you can easily add
Class.query using the query_property method of the scoped session.

Simon


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Creating SQL Expression

2009-02-25 Thread King Simon-NFHD78

Could you use the python 'operator' module 
(http://docs.python.org/library/operator.html)?

Eg. (untested):

import operator

operations = {
    '+': operator.add,
    '-': operator.sub,
    # etc.
}

def combine_columns(op, *cols):
    return operations[op](*cols)

sum_column = combine_columns('+', a, b)

I think that should work.

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Ashish Bhatia
> Sent: 25 February 2009 13:26
> To: sqlalchemy
> Subject: [sqlalchemy] Re: Creating SQL Expression
> 
> 
> The problem is still their.
> 
> The two seprate list of
> columns = List of sqlalchem object
> operator = ['+'','-']
> 
> using join to join them will convert the columns object to string
> which is not desirable.
> 
> Any way to fix this.
> 
> On Feb 25, 3:54 pm, Ashish Bhatia  wrote:
> > sorry its resolved and working
> >
> > On Feb 25, 12:20 pm, Ash  wrote:
> >
> > > Hello ,
> >
> > > I am trying to make query like
> >
> > > select (a+b) from xyz;
> >
> > > to do this
> >
> > > xyz = sqlalchemy.Table('xyz',metadata)
> >
> > > a = sqlalchemy.Column('a', sqlalchemy.Integer)
> > > xyz.append_column(a)
> > > b = sqlalchemy.Column('b', sqlalchemy.Integer)
> > > xyz.append_column(b)
> >
> > > column = [(a + b)]
> > > select = sqlalchemy.select(from_obj=xyz, 
> columns=column,distinct=True)
> >
> > > This works fine for me.
> >
> > > Now when the columns a and b are dynamic (Enter by the 
> user in form of
> > > string) and the operator too comes from user
> >
> > > columns_list = ['a','b']
> > > operator = ['+']
> >
> > > like this i get the input
> >
> > > so i make the loop and make
> >
> > > for both the columns something like this
> > > columns = []
> > > for x in column_list :
> > >     t  = sqlalchemy.Column(x, sqlalchemy.Integer)
> > >     xyz.append_column(a)
> > >     columns.append(t)
> >
> > > so now
> > > how to add + to make the quer run
> >
> > > Thanks in the advance.
> > 
> 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Separating session and db/models definitions

2009-02-25 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of bsdemon
> Sent: 24 February 2009 20:22
> To: sqlalchemy
> Subject: [sqlalchemy] Separating session and db/models definitions
> 
> 
> Hello.
> 
> I have the following states of things:
>  1) I need to place some common database metadata description and
> models in one package
>  2) There are at least to applications, which will use this models:
> first app uses scoped session with their own scope_func, second app
> uses session from sessionmaker
>  3) I have some logic in models' methods that need session object to
> present (delete relations or etc.)
> 
> I have no idea how to do it... Is there need for some kind of proxy to
> session, which will appear later, when app decide to instantiate it?
> Or I must define session in place with models?
> 
> 

You can find out which session an object is loaded in using the
sqlalchemy.orm.object_session function. So inside your methods, you
should be able to say something like:

  session = orm.object_session(self)

I think that should work no matter what session strategy you are using.
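
For example (untested, with invented class and relation names):

from sqlalchemy import orm

class Document(object):
    def remove_children(self):
        # Works whether the object came from a scoped session or from a
        # plain sessionmaker() session.
        session = orm.object_session(self)
        for child in list(self.children):
            session.delete(child)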

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: altering tables

2009-02-11 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of 
> mhearne808[insert-at-sign-here]gmail[insert-dot-here]com
> Sent: 10 February 2009 19:13
> To: sqlalchemy
> Subject: [sqlalchemy] altering tables
> 

[snip]

> 
> I'd like to update the database (do the equivalent of an 'ALTER TABLE'
> SQL command) without destroying all of the data I already have.  How
> do I do this?
> 
> I have already tried using:
> metadata.create_all(engine,checkfirst=True)
> 
> which can handle _new_ tables being added, it seems, but doesn't seem
> to update the tables for which I have altered the definitions using
> the Table object.
> 
> I'm using SQLAlchemy '0.5.0beta3', with Python 2.5 on Mac OS X.
> 
> Thanks,
> 
> Mike
> 

You may like to have a look at the SQLAlchemy Migrate project:

http://code.google.com/p/sqlalchemy-migrate/

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: classes in separate files

2009-02-09 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of camlost
> Sent: 09 February 2009 09:18
> To: sqlalchemy
> Subject: [sqlalchemy] classes in separate files
> 
> 
> Hi, could someone advice me, please, how to split classes into
> individual files?
> I'm studying the documentation:
> Creating Table, Class and Mapper All at Once Declaratively
>  class-and-mapper-all-at-once-declaratively>.
> If I keep all the classes in the same file, the program works fine.
> If I try to split them into individual files, I'm getting errors like
> NoReferencedTableError: Could not find table 'users' with which to
> generate a foreign key
> 
> Are there some "Best practices" of how to do this?
> 
> Thanks.
> 
> c.
> 
> PS: All the classes are in their individual modules in a package
> called database:
> database
>   /dbinit.py - common initialization (ex. Base = declarative_base())
>   /mailevt.py - class MailEvent
>   /mailogrec.py - class MailLogEvent (foreign keys to Server,
> ObjAddress, MailEvent)
>   /objaddr.py - class ObjAddress
>   /server.py - class Server
>   /user.py - class User
>   /useraddr.py - class UserAddress (foreign key to User, ObjAddress)
>   /vpnlogrec.py - class VpnLogRecord (foreign key to User, Server)
>   /weblogrec.py - class WebLogRecord (foreign key to User, Server)

Are all your classes using the same declarative_base? I think this is
necessary so that the tables use the same metadata and things like
foreign keys can be resolved.

I would probably do this by creating database/base.py that contains
something like:

from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

And then your other modules would import Base from there. Does that make
any sense?
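
For example (untested; the column definitions are invented):

# database/base.py
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

# database/user.py
from sqlalchemy import Column, Integer, String
from database.base import Base

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# database/useraddr.py
from sqlalchemy import Column, Integer, ForeignKey
from database.base import Base

class UserAddress(Base):
    __tablename__ = 'user_addresses'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('users.id'))

Because both classes share the same Base (and therefore the same MetaData),
the ForeignKey('users.id') can be resolved once both modules have been
imported.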

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: polymorphic_identity not allowed to be zero for base class?

2009-01-30 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Michael Bayer
> Sent: 30 January 2009 17:25
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: polymorphic_identity not allowed to 
> be zero for base class?
> 
> 
> 
> On Jan 30, 2009, at 12:19 PM, King Simon-NFHD78 wrote:
> 
> >
> > Would None (ie. SQL NULL) be a valid polymorphic identity?
> 
> it becomes that issue where we just need some kind of constant to  
> represent THIS_IS_NOT_DEFINED, so that we know when  
> polymorphic_identity is defined or not.   it's a straightforward  
> patch.   Although we'd also need to adapt some expressions such as  
> identity IN (x, y, z) to account for NULL, so, supporting NULL is a  
> little more work.
> 
> 

Thanks - I was only asking out of curiosity. I have no need for it
personally.

Cheers,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: polymorphic_identity not allowed to be zero for base class?

2009-01-30 Thread King Simon-NFHD78

Would None (ie. SQL NULL) be a valid polymorphic identity? 

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Michael Bayer
> Sent: 30 January 2009 17:06
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: polymorphic_identity not allowed to 
> be zero for base class?
> 
> its likely a bug.  there are some boolean tests for 
> "polymorphic_identity" in mapper.py which should be changed 
> to "is None".
> 
> 
> On Jan 30, 2009, at 11:58 AM, Gunnlaugur Thor Briem wrote:
> 
> 
>   Hi,
>   
>   [trying to send this again, seems like previous copy 
> got lost in some moderation queue]
>   
>   Messing with single-table inheritance in a declarative 
> model, with a non-abstract base class, I find that querying 
> fails if polymorphic_identity is 0 (zero). Example:
>   
>    code begins 
>   from sqlalchemy import Column, Integer, create_engine
>   from sqlalchemy.ext.declarative import declarative_base
>   from sqlalchemy.orm import sessionmaker
>   
>   Base = declarative_base()
>   class Course(Base):
>   __tablename__ = 'course'
>   id = Column(Integer, primary_key=True)
>   course_type = Column(Integer)
>   __mapper_args__ = {'polymorphic_on':course_type, 
> 'polymorphic_identity':0}
>   
>   class MultiYearCourse(Course):
>   __mapper_args__ = {'polymorphic_identity':1}
>   
>   engine = create_engine('sqlite:///:memory:')
>   Base.metadata.create_all(bind=engine)
>   session = sessionmaker(bind=engine)()
>   
>   myc = MultiYearCourse()
>   myc.name   = u"Computer Graphics"
>   c = Course()
>   c.name   = u"Sociology" 
>   session.add(c)
>   session.add(myc)
>   session.commit()
>   
>   print "MYC: %s" % myc
>   print "C:   %s" % c
>   query = session.query(Course)
>   print "Query:   %s" % query
>   print "Results: %s" % query.all()
>    code ends 
>   
>   That last line fails with an AssertionError:
>   
>    output begins 
>   MYC: <__main__.MultiYearCourse object at 0xcf7d30>
>   C:   <__main__.Course object at 0xcf7d70>
>   Query:   SELECT course.id   AS 
> course_id, course.course_type AS course_course_type 
>   FROM course
>   Traceback (most recent call last):
> File 
> "/Users/gthb/Documents/workspace/test/src/sqlalchemytest.py", 
> line 31, in 
>   print "Results: %s" % query.all()
> File 
> "/Users/gthb/Library/Python/2.5/site-packages/SQLAlchemy-0.5.2
> -py2.5.egg/sqlalchemy/orm/query.py", line 1186, in all
>   return list(self)
> File 
> "/Users/gthb/Library/Python/2.5/site-packages/SQLAlchemy-0.5.2
> -py2.5.egg/sqlalchemy/orm/query.py", line 1341, in instances
>   rows = [process[0](context, row) for row in fetch]
> File 
> "/Users/gthb/Library/Python/2.5/site-packages/SQLAlchemy-0.5.2
> -py2.5.egg/sqlalchemy/orm/query.py", line 1942, in main
>   return _instance(row, None)
> File 
> "/Users/gthb/Library/Python/2.5/site-packages/SQLAlchemy-0.5.2
> -py2.5.egg/sqlalchemy/orm/mapper.py", line 1557, in _instance
>   _instance = polymorphic_instances[discriminator]
> File 
> "/Users/gthb/Library/Python/2.5/site-packages/SQLAlchemy-0.5.2
> -py2.5.egg/sqlalchemy/util.py", line 71, in __missing__
>   self[key] = val = self.creator(key)
> File 
> "/Users/gthb/Library/Python/2.5/site-packages/SQLAlchemy-0.5.2
> -py2.5.egg/sqlalchemy/orm/mapper.py", line 1695, in 
> configure_subclass_mapper
>   raise AssertionError("No such polymorphic_identity 
> %r is defined" % discriminator)
>   AssertionError: No such polymorphic_identity 0 is defined
>    output ends 
>   
>   But if I exchange the polymorphic identities, so the 
> base class gets the 1 and the subclass gets the 0, then it 
> runs just fine!
>   
>   It seems to me that this can't be intentional - don't 
> see a reason for it, and the docs do not mention any 
> particular restrictions on values of polymorphic_identity.
>   
>   Regards,
>   
>   - Gulli
> 
> 
> 
> 
> 
> 
> > 
> 
> 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Using orderedlist with a secondary table

2009-01-30 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Nathan Harmston
> Sent: 30 January 2009 13:15
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Using orderedlist with a secondary table
> 
> Hi, 
> 
> I am currently trying to make a relation between Document and 
> Author, where there is a many to many association which is 
> dealt with by a secondary table and I am trying to store the 
> position of an author in the author list.
>

[snip]

>
> So is there an easy way of performing this functionality 
> using the ordered list or do I need to do something else?
> 
> Many thanks in advance
> 
> nathan
> 
> 

You can't use the 'secondary' mapper argument if you want to be able to
use columns in that secondary table. Instead, you need to map a class to
your secondary table directly, such that it has 1-many relations with
the Document and Author classes. You can then use the 'associationproxy'
extension to hide the details.

See the docs at
http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern and
http://www.sqlalchemy.org/docs/05/reference/ext/associationproxy.html#associationproxy
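
A rough sketch (untested; the table and column names are invented) of the
association-object pattern with a 'position' column, using the orderinglist
and associationproxy extensions:

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relation
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.orderinglist import ordering_list
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

class Author(Base):
    __tablename__ = 'authors'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))

class DocumentAuthor(Base):
    # One row per (document, author) pair, carrying the author's position.
    __tablename__ = 'document_authors'
    document_id = Column(Integer, ForeignKey('documents.id'), primary_key=True)
    author_id = Column(Integer, ForeignKey('authors.id'), primary_key=True)
    position = Column(Integer)

    author = relation(Author)

class Document(Base):
    __tablename__ = 'documents'
    id = Column(Integer, primary_key=True)
    title = Column(String(200))

    # The association rows, kept sorted by 'position'.
    author_links = relation(DocumentAuthor,
                            order_by=DocumentAuthor.position,
                            collection_class=ordering_list('position'),
                            backref='document')

    # Convenience list of Author objects, proxied through the association.
    authors = association_proxy('author_links', 'author',
                                creator=lambda a: DocumentAuthor(author=a))

With this, doc.authors.append(some_author) creates the association row for
you, and the ordering_list keeps the 'position' column up to date.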

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: the return type of conn.execute(text(""))

2009-01-28 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Faheem Mitha
> Sent: 27 January 2009 22:41
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] the return type of conn.execute(text(""))
> 
> 
> 
> Hi,
> 
> Today I attempted to serialize the return value of the form
> 
> result = conn.execute(text(""))
> 
> Till now I thought that the return type was a list of tuples, 
> while in 
> fact it is a list of objects of type  'sqlalchemy.engine.base.RowProxy'>. Hence cPickle refused to 
> serialize 
> till I did some conversion.
> 
> Just wondering what the reason for this is.
> 
> Regards, Faheem.
> 

The RowProxy object is more intelligent than a plain tuple. As well as
accessing the values by index, you can use your original column objects
or the name of the column to retrieve the values from it. You can also
use attribute access rather than indexing.

Eg.

row['your_column_name']
row[your_column]
row.your_column_name

I imagine this would make it harder to pickle.
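
One workaround (untested) is to copy each RowProxy into a plain dict or tuple
before pickling; the query here is just an example:

import cPickle
from sqlalchemy.sql import text

result = conn.execute(text("SELECT id, name FROM users"))
rows = [dict(row.items()) for row in result]
data = cPickle.dumps(rows)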

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: passing tuple argument into sqlalchemy.sql.text string

2009-01-21 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Faheem Mitha
> Sent: 20 January 2009 22:05
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] passing tuple argument into 
> sqlalchemy.sql.text string
> 
> 
> 
> Hi,
> 
> I've got a query as follows:
> 
> from sqlalchemy.sql import text
> 
> gq = text("""
> SELECT decode_genotype(cell.snpval_id, snp.allelea_id, 
> snp.alleleb_id) FROM cell JOIN snp ON snp.fid = 
> cell.snp_id WHERE cell.patient_chipid IN ('DUKE1_plateA_A10.CEL', 
> 'DUKE1_plateA_A11.CEL')
> """)
> I want to pass in the tuple as an argument, and was wondering 
> how to do 
> it.
> 
> So, I'm looking for something conceptually like
> 
> gq = text("""
> SELECT decode_genotype(cell.snpval_id, snp.allelea_id, 
> snp.alleleb_id) FROM cell JOIN snp ON snp.fid = 
> cell.snp_id WHERE cell.patient_chipid IN :plist
> """)
> 
> gq = conn.execute(gq, plist="('DUKE1_plateA_A10.CEL', 
> 'DUKE1_plateA_A11.CEL')")
> 
> Note, I want to pass in a tuple of arbitary length, so 
> changing this to 
> pass two string arguments would not do. Perhaps I'm supposed 
> to pass in 
> some bindparams too, but I don't know what type I should be using.
> 
>
> Regards, Faheem.
> 

I'm not sure you can do that in the general case. I think bind
parameters (in the DBAPI sense) are only really intended for
substituting individual query parameters, not lists.

If you are happy to regenerate your query each time you want to execute
it, you could create a function which generates a string of the form
"(:p0, :p1, :p2, :p3)" for the given tuple length, and appends that to
the query.

If you use the SQLAlchemy expression language to build that query, it'll
do that for you automatically.
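
For example (untested), sticking with the textual approach:

from sqlalchemy.sql import text

def build_genotype_query(chip_ids):
    placeholders = ', '.join(':p%d' % i for i in range(len(chip_ids)))
    params = dict(('p%d' % i, v) for i, v in enumerate(chip_ids))
    gq = text("""
        SELECT decode_genotype(cell.snpval_id, snp.allelea_id, snp.alleleb_id)
        FROM cell JOIN snp ON snp.fid = cell.snp_id
        WHERE cell.patient_chipid IN (%s)
        """ % placeholders)
    return gq, params

gq, params = build_genotype_query(['DUKE1_plateA_A10.CEL',
                                   'DUKE1_plateA_A11.CEL'])
result = conn.execute(gq, **params)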

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: how to use model.py file across different applications

2009-01-20 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of abhishek
> Sent: 20 January 2009 14:26
> To: sqlalchemy
> Subject: [sqlalchemy] how to use model.py file across 
> different applications
> 
> 
> Hello all,
> 
> I have been using SQL Alchemy for sometime now in a turbo gears
> project. Recently, i wanted to write a small piece of python code
> which has to do a few DB operations on the same DB which is used by my
> TG project. so i decided to use the existing model.py file.
> 
> to my delight, as soon as i imported model.py in my code i was able to
> use all the functions and classes in my code. but when i tried to
> execute my code it gave me this error:
> 
> --
> --
> --
> File "d:\python25\lib\site-packages\SQLAlchemy-0.4.4-py2.5.egg
> \sqlalchemy\orm\scoping.py", line 98, in do
> return getattr(self.registry(), name)(*args, **kwargs)
> 
> File "d:\python25\lib\site-packages\SQLAlchemy-0.4.4-py2.5.egg
> \sqlalchemy\util.py", line 967, in __call__
> return self.registry.setdefault(key, self.createfunc())
> 
> File "d:\python25\lib\site-packages\TurboGears-1.0.4b3-py2.5.egg
> \turbogears\database.py", line 54, in create_session
> get_engine()
> 
> File "d:\python25\lib\site-packages\TurboGears-1.0.4b3-py2.5.egg
> \turbogears\database.py", line 43, in get_engine
> dburi = alch_args.pop('dburi')
> 
> KeyError: 'pop(): dictionary is empty'
> 
> --
> --
> --
> 
> i am using TG2.0, SQLAlchemy0.4.
> if some one needs any part of code from model.py or some configuration
> from prod.cfg please feel free to ask.
> 
> Suggestions are urgently required.
> 
> Regards
> Abhishek
> 

Actually, according to that stack trace, you are using TurboGears
1.0.4b3, not 2.0. If you have both versions installed, you may want to
check that you are really importing what you think you are importing.

The error that you are getting is because you haven't loaded your
TurboGears config file, which is where the SQLAlchemy database
connection URI is defined. In TurboGears 1 I think you do that something
like this:

  import turbogears
  turbogears.update_config(configfile='prod.cfg',
                           modulename='YOUR_APP_NAME.config')

(replace YOUR_APP_NAME with your root-level package name). I don't know
what the equivalent is in TG2.

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: SQLAlchemy 0.5 Released

2009-01-09 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of a...@svilendobrev.com
> Sent: 08 January 2009 19:11
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: SQLAlchemy 0.5 Released
> 
> 
> On Thursday 08 January 2009 21:03:42 Michael Bayer wrote:
> > On Jan 8, 2009, at 12:52 PM, a...@svilendobrev.com wrote:
> > > instead of None, better do some empty method raising some error,
> > > because in the current way the error comes up in not at all
> > > obvious fashion, takes quite some head-scratching to find-out -
> > > there's no mentioning of anything set() related there.
> >
> > __hash__ = None means no hash is defined.  Its in the Python docs.
> okay, have fun with 
> TypeError: 'NoneType' object is not callable
> 

http://docs.python.org/reference/datamodel.html#object.__hash__ implies
that "__hash__ = None" only took on that special meaning in Python 2.6

In Python 2.5, I think you're probably supposed to explicitly raise a
TypeError, but in Python 2.6 that would cause isinstance(obj,
collections.Hashable) to give the wrong answer. Maybe you could do
something like this:

#
import sys

if sys.version_info < (2, 6):
    def unhashable(self):
        raise TypeError('%s objects are not hashable' %
                        type(self).__name__)
else:
    unhashable = None


def test():
    print "Testing on Python version %s" % (sys.version_info,)

    class Dummy(object):
        __hash__ = unhashable

    dummy = Dummy()

    try:
        print "FAIL: hash(dummy) = %s" % hash(dummy)
    except TypeError, e:
        print "PASS: %s" % e

    if sys.version_info >= (2, 6):
        import collections
        if isinstance(dummy, collections.Hashable):
            print "FAIL: isinstance(dummy, collections.Hashable) -> True"
        else:
            print "PASS: isinstance(dummy, collections.Hashable) -> False"

if __name__ == '__main__':
    test()

#

That basic test seems to work on Python 2.4, 2.5 and 2.6.

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Efficient dictificationof result sets

2008-12-19 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of Andreas Jung
> Sent: 19 December 2008 06:30
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: Efficient dictificationof result sets
> 
> On 19.12.2008 2:57 Uhr, Michael Bayer wrote:
> >
> > On Dec 18, 2008, at 3:04 PM, Andreas Jung wrote:
> >> Does SA contain some official API to introspect the list of defined
> >> synonyms for a particular class? The goal is to take values defined
> >> as a
> >> synonym also into account for the dictification (for backward
> >> compatiblity reasons for an existing codebase).
> >
> > the mapper's get_property() method includes a "resolve_synonyms"
> > keyword arg that indicates a given key which points to a synonym
> > should return the actual referenced property, so a recipe 
> that builds
> > upon this would look like:
> >
> > set([mapper.get_property(p.key, resolve_synonyms=True) for p in
> > mapper.iterate_properties])
> 
> However this does not apply when using the declarative layer. Any 
> options within such a context?
> 
> Andreas
> 

I haven't been following this discussion closely, so I'm probably wrong,
but that statement doesn't sound right to me. As far as I'm aware, the
declarative layer is just a convenience for setting up Tables and mapped
classes at the same time. The end result is exactly the same as if you
created the tables and classes in the traditional way. I would be very
surprised if the above expression didn't work.

You can get the mapper for a mapped class or object using the
"class_mapper" and "object_mapper" functions.

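For example (untested), Michael's recipe applied to a declaratively-mapped
class called Person (the class name is invented):

from sqlalchemy.orm import class_mapper

mapper = class_mapper(Person)
props = set(mapper.get_property(p.key, resolve_synonyms=True)
            for p in mapper.iterate_properties)
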
Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Create tables with metadata

2008-12-12 Thread King Simon-NFHD78

SQLAlchemy doesn't (directly) contain functions for altering tables. You
may be interested in the sqlalchemy-migrate project:

  http://code.google.com/p/sqlalchemy-migrate/

The first example on this page shows how to add a column:

  http://code.google.com/p/sqlalchemy-migrate/wiki/MigrateChangeset

Hope that helps,

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of 
> jarrod.ches...@gmail.com
> Sent: 12 December 2008 07:08
> To: sqlalchemy
> Subject: [sqlalchemy] Re: Create tables with metadata
> 
> 
> Doesn't get created in the database.
> How do i add columns to tables already defined in the database after i
> have reflected them into the metadata
> 
> 
> 
> On Dec 12, 12:59 am, Empty  wrote:
> > Hi,
> >
> > On Thu, Dec 11, 2008 at 8:12 AM, jarrod.ches...@gmail.com <
> >
> >
> >
> > jarrod.ches...@gmail.com> wrote:
> >
> > > Hi
> > > I've scoured the documentation and i can't find any info on how to
> > > create a column using metadata.
> >
> > > from sqlalchemy import engine
> > > from sqlalchemy import schema
> > > from sqlalchemy import types
> >
> > > _config_dbengine = engine.create_engine('sqlite:tmp/db')
> > > _config_metadata = schema.MetaData(_config_dbengine, reflect=True)
> > > table = _config_metadata.tables['table_name']
> > > table.append_column(schema.Column('id', types.Integer,
> > > primary_key=True, autoincrement=True))
> >
> > > This is the steps i'm using but the column doesn't get created.
> >
> > Doesn't get created where?  In the database?  That's not 
> going to happen.
> >  Are you saying it's not included as part of the table definition?
> >
> > Michael
> --~--~-~--~~~---~--~~

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: returning primary key of object without know what it is called.

2008-12-05 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Faheem Mitha
> Sent: 04 December 2008 20:43
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] returning primary key of object without 
> know what it is called.
> 
> 
> 
> Hi,
> 
> I'm trying to figure out how to have an object return its primary key 
> without knowing what it is called. The docs in 
> http://www.sqlalchemy.org/docs/05/sqlalchemy_orm_mapper.html look 
> relevant, for example the function identity_key_from_instance 
> (see entry 
> from docs below), but I'm not clear about usage. The 
> functions on this 
> page look like they are meant to be used as method functions 
> of a mapper 
> object, but how should I construct such a mapper object? In my schema 
> file, I have lines like
> 
> Mapper(Foo, foo_table)
> 
> should I be returning an mapper object for use with 
> functions? Ie should I 
> be doing
> 
> foo_mapper = Mapper(Foo, foo_table)
> 
> or similar? The section module sqlalchemy.orm.mapper saya
> 
> "This is a semi-private module; the main configurational API 
> of the ORM is 
> available in module sqlalchemy.orm."
> 
> Does this mean it is not meant to be used in this fashion?
> 
> Also, I don't understand what is meant by
> 
> "This value is typically also found on the instance state under the
> attribute name key."
> 
> in the docs for identity_key_from_instance below.
> 
> Please CC me on any reply.
>Thanks and regards, Faheem.
> 
> 

You can get the mapper for a given instance using the
sqlalchemy.orm.object_mapper function, and that mapper has a
'primary_key_from_instance' method. A generic primary_key function might
look like this (untested):


import sqlalchemy.orm as orm

def get_primary_key(instance):
   mapper = orm.object_mapper(instance)
   return mapper.primary_key_from_instance(instance)

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: New instance ExtraStat with identity key (...) conflicts with persistent instance ExtraStat

2008-11-28 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Doug Farrell
> Sent: 28 November 2008 18:22
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] New instance ExtraStat with identity 
> key (...) conflicts with persistent instance ExtraStat
> 
> 
> Hi all,
> 
> I'm having a problem with a new instance of a relation 
> conflicting with
> an existing instance. I'm using SA 0.5rc with Sqlite3. Here are my
> simplified classes:
> 
> class Stat(sqladb.Base):
> __tablename__ = "stats"
> name = Column(String(32), primary_key=True)
> total= Column(Integer)
> created  = Column(DateTime, default=datetime.datetime.now())
> updated  = Column(DateTime)
> states   = Column(PickleType, default={})
> extraStats   = relation("ExtraStat", backref="stat")
> 
> class ExtraStat(sqladb.Base):
> __tablename__ = "extrastats"
> name = Column(String(32), ForeignKey("stats.name"),
> primary_key=True)
> total= Column(Integer)
> created  = Column(DateTime, default=datetime.datetime.now())
> updated  = Column(DateTime)
> states   = Column(PickleType, default={})
> 
> The above Stat class has a one-to-many relationship with the ExtraStat
> class (which I think I've implemented correctly). Later in 
> the program I
> create an in memory data model that has as part of it's components two
> dictionaries that contain Stat instances. Those Stat instances have
> relationships to ExtraStat instances. My problem comes in the 
> following
> when I'm trying to update the data in those instances/tables. 
> Here is a
> section of code that throws the exception:
> 
> pressName = "press%s" % pressNum
> # add new ExtraStat instances as relations
> self._addProductStatsPress(productType, pressName)
> self._addPressStatsProduct(pressName, productType)
> try:
>   extraStat = session.query(Stat). \
>   filter(Stat.name==productType). \
>   join("extraStats"). \
>   filter(ExtraStat.name==pressName).one()
> except:
>   extraStat = ExtraStat(pressName, ExtraStat.PRESS_TYPE)
>   self.productStats[productType].extraStats.append(extraStat)
>   extraStat.states.setdefault(sstate, 0)
>   extraStat.states[sstate] += 1
>   extraStat.updated = now
>   extraStat = session.merge(extraStat)
> try:
>   extraStat = session.query(Stat). \
>   filter(Stat.name==pressName). \
>   join("extraStats"). \
>   filter(ExtraStat.name==productType).one()   < throws
> exception right here
> except:
>   extraStat = ExtraStat(productType, ExtraStat.PRODUCT_TYPE)
>   self.pressStats[pressName].extraStats.append(extraStat)
>   extraStat.states.setdefault(sstate, 0)
>   extraStat.states[sstate] += 1
>   extraStat.updated = now
> 
> The marked area is wear it throws the exception. I'm not sure 
> what to do
> here to get past this, any help or ideas would be greatly appreciated.
> 
> The exact exception is as follows:
> Sqlalchemy.orm.exc.FlushError: New instance [EMAIL PROTECTED] 
> With identity
> key (,(u'C',)) conflicts with persistent
> instance [EMAIL PROTECTED]
> 
> Thanks!
> Doug  
> 

This may not be the cause of your problem, but with your mapping above,
I think you can only have one ExtraStat per Stat, because they have the
same primary key (the 'name' column). If 'name' is the only thing that
uniquely distinguishes an ExtraStat, you can only have one per name.
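
If each Stat really can have several ExtraStats, one option (untested) is to
give ExtraStat its own surrogate key and keep the link back to Stat as a
plain foreign key column, something like:

class ExtraStat(sqladb.Base):
    __tablename__ = "extrastats"
    id        = Column(Integer, primary_key=True)
    stat_name = Column(String(32), ForeignKey("stats.name"))
    name      = Column(String(32))
    # ... total / created / updated / states as in your version ...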

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: ORM mapping with Elixir compared to raw cursor query

2008-11-21 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of SinJax
> Sent: 21 November 2008 10:43
> To: sqlalchemy
> Subject: [sqlalchemy] Re: ORM mapping with Elixir compared to 
> raw cursor query
> 

[SNIP]

> sql = """
> SELECT annotations.id AS annotations_id, annotations.user AS
> annotations_user, annotations.subject AS annotations_subject,
> annotations.field AS annotations_field, annotations.value AS
> annotations_value, users_1.id AS users_1_id, users_1.user AS
> users_1_user, users_1.pass AS users_1_pass, users_1.lab_id AS
> users_1_lab_id, users_1.email AS users_1_email, groups_1.id AS
> groups_1_id, groups_1.name AS groups_1_name, subjects_1.id AS
> subjects_1_id, subjects_1.leg_id AS subjects_1_leg_id,
> subjects_1.`displayOrder` AS `subjects_1_displayOrder`, fields_1.id AS
> fields_1_id, fields_1.name AS fields_1_name, fields_1.help AS
> fields_1_help, values_1.id AS values_1_id, values_1.`parentId` AS
> `values_1_parentId`, values_1.field AS values_1_field, values_1.name
> AS values_1_name, values_1.image AS values_1_image, values_1.`order`
> AS values_1_order
> FROM users, subjects, annotations
>   LEFT OUTER JOIN users AS users_1 ON users_1.id = 
> annotations.user
>   LEFT OUTER JOIN usergroup AS usergroup_1 ON users_1.id =
> usergroup_1.`userId`
>   LEFT OUTER JOIN groups AS groups_1 ON groups_1.id =
> usergroup_1.`groupId`
>   LEFT OUTER JOIN subjects AS subjects_1 ON subjects_1.id =
> annotations.subject
>   LEFT OUTER JOIN `fields` AS fields_1 ON fields_1.id =
> annotations.field
>   LEFT OUTER JOIN `values` AS values_1 ON values_1.id =
> annotations.value
> WHERE users.user = '%s' AND subjects.id = %s
> ORDER BY annotations.id, users_1.id, usergroup_1.`userId`,
> subjects_1.id, fields_1.id, values_1.id
> """%("msn",2)

[SNIP]

Your problem is the 'FROM users, subjects, annotations' part of that
query. There is no join condition between the three tables, so you are
getting a cartesian product, and a very large number of rows.

In your query, you need to explicitly join to the other tables before you use
them in your filter conditions.

For example, something like:

allAnn = (Annotation.query()
          .join(Annotation.user)
          .filter(User.py_user == "msn")
          .join(Annotation.subject)
          .filter(Subject.id == 2))

This page in the docs describes querying with joins:

http://www.sqlalchemy.org/docs/05/ormtutorial.html#datamapping_joins

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Info needed regarding the use of cascade

2008-11-18 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> ---"<[EMAIL PROTECTED]>"@il06exr02.mot.com
> Sent: 18 November 2008 10:04
> To: sqlalchemy
> Subject: [sqlalchemy] Re: Info needed regarding the use of cascade
> 
> 
> 
> Thank you Michael ,
> 
> > you only need a single relation() + backref(), "books<->stock".
> 
> did you mean like this ?
> 
> 
> class Stock(declarative_base):
>   __tablename__ = 'tbl_stock'
>   
>   
>   
>   pass
> 
> class Book(declarative_base):
>   __tablename__ = 'tbl_books'
>   
>   
>   stock = relation('Stock', backref=backref
> ('tbl_books',order_by=id))
> 
> 
> if so how can i retrieve all the books in a particular stock  ??
> in my case i could have done it by
> 
> >>> ins_stock = session.querry(Stock).filter(id=100).one()
> >>> print ins.stock.books
> [ ...]
>

The 'backref' of a relation is the name of a property that gets placed
on the 'other end' of the relation, pointing back at the original
object. So with the configuration that you had above, you should be able
to say:

>>> ins_stock = session.query(Stock).filter_by(id=100).one()
>>> print ins_stock.tbl_books
[ ...]

If you name your backref 'books', then you can use your Stock objects in
exactly the same way as you did before.
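
For example (untested; the extra columns are invented because they were
snipped from your message):

from sqlalchemy.orm import relation, backref

class Book(Base):
    __tablename__ = 'tbl_books'
    id = Column(Integer, primary_key=True)
    stock_id = Column(Integer, ForeignKey('tbl_stock.id'))
    stock = relation('Stock', backref=backref('books', order_by=id))

# and then:
#   ins_stock = session.query(Stock).filter_by(id=100).one()
#   print ins_stock.books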

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Using deferred with declarative

2008-11-14 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Allen Bierbaum
> Sent: 14 November 2008 16:40
> To: sqlalchemy
> Subject: [sqlalchemy] Re: Using deferred with declarative
> 
> 
> Does anyone have any ideas on this?
> 
> Does declarative simply not support the deferred property?
> 
> -Allen
> 
> On Sat, Nov 8, 2008 at 11:32 AM, Allen Bierbaum 
> <[EMAIL PROTECTED]> wrote:
> > We have been using the declarative successfully in our 
> codebase for a
> > couple months now with 0.4.x, but we have just run into a problem.
> >
> > We have a table we we want to map using declarative but we want to
> > have one of the columns be deferred because it contains binary data.
> > Unfortunately we can't figure out how to do this.
> >
> > We tried something like this:
> >
> > class Person(Base):
> >__tablename__ = 'people'
> >id = Column('id', Integer, primary_key=True)
> >data = Column('data', Binary)
> >__mapper_args__ = {'properties' : {'data' : 
> sa.orm.deferred(data)}
> >
> > The problem is that when the declarative metaclass assembles the
> > arguments to pass to the mapper creation method, this causes two
> > arguments of name 'properties' to be passed into the call.  (see
> > DeclarativeMeta.__init__)
> >
> > - Is this a bug in the implementation of declarative?  (perhaps the
> > code in DeclarativeMeta.__init__ should look for
> > __mapper_args__['properties'] and merge it with the 
> internally created
> > properties)
> > - Or is there some other way to use deferred columns with 
> declarative?
> >
> > Thanks,
> > Allen
> >
> 

Did you see Michael's reply on the same day you sent your original
message?

http://groups.google.com/group/sqlalchemy/browse_thread/thread/d548138fed9903ef

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: select where field=max(field)

2008-11-11 Thread King Simon-NFHD78
> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of John Hunter
> Sent: 11 November 2008 01:54
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: select where field=max(field)
> 
> 
> On Mon, Nov 10, 2008 at 11:10 AM, Michael Bayer
> <[EMAIL PROTECTED]> wrote:
> 
> > you need an extra tuple on the join, query.join((q1, s.s==q1.c.s))
> 
> This gets past the syntax error, but does not produce the right
> results.  I had to take some time off today to work on other problems,
> but am now returning to this query.  To better take advantage of all
> of your generous time :-) I wrote a free-standing example that
> populates a test database.  The initial query Simon suggested works
> and produces the desired output -- the goal is to replicate this with
> a sqlalchemy query.  I also include the join we were working on.  Now
> that the syntax is correct, it runs, but gives the wrong output.
> 
> Since someone proposed a bowling example earlier I decided to run with
> that since it fits my problem quite well: instead of finding the
> number of symbols per strategy where the sum(pnl)<-15, we are
> looking for the number of bowlers per league where the
> sum(frames)>200.  Example below
> 
> 

Hi John,

I had to play around with this for a while, but I got there in the end.
The problem with the version I suggested was that the subquery produces
a number of rows for each league. When we join that subquery back to the
'league' table, you get a combinatorial explosion in the number of rows
returned. Joining is generally only useful if the field that you are
joining on is unique in at least one of the tables ('league' wasn't
unique in either).

The actual solution is much simpler than I expected. I didn't realise
that session.query doesn't actually need to start with a mapped class or
attribute at all - you can ask it directly for columns in the subquery.
Here's the result:

q1 = (session.query(Frame.league, Frame.bowler, total_score)
  .group_by(Frame.league, Frame.bowler)
  .having(total_score<200)).subquery()

q2 = (session.query(q1.c.league, func.count('*'))
  .group_by(q1.c.league))

By using 'q1.c.league' instead of 'Frame.league', SA doesn't try and put
the 'league' table in the outer query. The SQL looks like this:

SELECT anon_1.league AS anon_1_league,
   count(?) AS count_1
FROM (SELECT frame.league AS league,
  frame.bowler AS bowler,
  sum(frame.score) AS sum_1
  FROM frame
  GROUP BY frame.league, frame.bowler
  HAVING sum(frame.score) < ?) AS anon_1
GROUP BY anon_1.league

Which is pretty much the query we wanted, apart from the names. I hope
it works in your original example as well!
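
If the anonymous names bother you, labelling the columns tidies them up a
bit (a sketch, assuming total_score is the same func.sum(Frame.score)
expression as in the attached script; the subquery itself still gets an
anonymous alias):

total_score = func.sum(Frame.score).label('total_score')

q1 = (session.query(Frame.league, Frame.bowler, total_score)
      .group_by(Frame.league, Frame.bowler)
      .having(total_score < 200)).subquery()

q2 = (session.query(q1.c.league, func.count('*').label('num_bowlers'))
      .group_by(q1.c.league))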

Cheers,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



bowlers.py
Description: bowlers.py


[sqlalchemy] Re: select where field=max(field)

2008-11-10 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of John Hunter
> Sent: 10 November 2008 15:29
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: select where field=max(field)
> 
> 
> On Mon, Nov 10, 2008 at 8:53 AM, King Simon-NFHD78
> <[EMAIL PROTECTED]> wrote:
> 
> > It should be fairly easy to build that query with SA's underlying
> > expression language. I'm not certain how to do it through 
> session.query,
> > but I'm sure it's possible.
> 
> The snippet you posted does do what I want when inserted directly into
> mysql database.  I am getting close on the sqlalchemy incantation::
> 
> 
> In [106]: q = session.query(Snapshot.strategy, Snapshot.symbol,
> sum_pnl).group_by(Snapshot.strategy,
> Snapshot.symbol).having(sum_pnl<-15000)
> 
> In [107]: newq = session.query(Snapshot.strategy, func.count('*'))
> 
> In [108]: print newq.select_from (q.subquery
> ()).group_by(Snapshot.strategy )
> SELECT anon_1.strategy AS anon_1_strategy, count(%s) AS count_1
> FROM (SELECT snapshot.strategy AS strategy, snapshot.symbol AS
> symbol, sum(snapshot.pnl) AS sum_1
> FROM snapshot GROUP BY snapshot.strategy, snapshot.symbol
> HAVING sum(snapshot.pnl) < %s) AS anon_1 GROUP BY 
> snapshot.strategy
> 
> The main problem is that the last "GROUP BY snapshot.strategy" is not
> what I want, since I need to be doing "GROUP BY anon_1.strategy".  Is
> there some label magic I can employ to explicitly refer to the
> Snapshot.query from the original "q" rather than the one from "newq"?
> 

I haven't used session.query to do anything more than simple queries, so
I don't honestly know. Does this section of the documentation help at
all?

http://www.sqlalchemy.org/docs/05/ormtutorial.html#datamapping_joins_aliases

Actually, the section after that (Using Subqueries) probably does
something very close to what you want. What's the result of these lines:

q1 = (session.query(Snapshot.strategy, Snapshot.symbol, sum_pnl)
  .group_by(Snapshot.strategy, Snapshot.symbol)
  .having(sum_pnl<-15000)).subquery()

q2 = (session.query(Snapshot.strategy, func.count('*'))
      .join(q1, Snapshot.strategy == q1.c.strategy)
      .group_by(Snapshot.strategy))

print q2

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: select where field=max(field)

2008-11-10 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of John Hunter
> Sent: 10 November 2008 14:07
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: select where field=max(field)
> 
> 
> On Mon, Nov 10, 2008 at 4:33 AM, King Simon-NFHD78
> <[EMAIL PROTECTED]> wrote:
> 
> > I'm no SQL expert, so please take this with a pinch of 
> salt, but as far
> > as I know, conditions in the 'WHERE' clause of an SQL statement are
> > applied BEFORE any grouping, so you can't use grouping 
> functions (such
> > as SUM, MAX etc) in the WHERE clause. Instead, you add a 'HAVING'
> > statement on the end to filter the rows AFTER the grouping.
> 
> Ahh, that helps a lot.
> 
> > BTW, I think the 'no grouping functions in WHERE clause' 
> rule is also
> > the reason why your MAX query didn't work. The  fix that 
> Mike gave you
> > turned that part of your query into a subquery that only 
> produced that
> > single value. This statement:
> 
> I see.  That is why the select worked in my first test case but not
> the second.  In the max case, there was only a single value to return.
>  In the sum case, there was a sum grouped by (strategy, symbol).
> Indeed, the having clause does what I want::
> 
>   In [39]: q = session.query(Snapshot.strategy, Snapshot.symbol,
>sum_pnl).group_by(Snapshot.strategy, 
> Snapshot.symbol).having(sum_pnl<-15000)
> 
>   In [40]: print q SELECT snapshot.strategy AS snapshot_strategy,
> snapshot.symbol AS
> snapshot_symbol, sum(snapshot.pnl) AS sum_1
> FROM snapshot GROUP BY snapshot.strategy, snapshot.symbol
> HAVING sum(snapshot.pnl) < %s
> 
> This produces a list of (strategy, symbol, sum(pnl)) as desired.
> 
> Now what I'm trying to figure out how to do is get a count over each
> strategy of the number of symbols where the sum(pnl)<-15000.  So I
> need to do one group_by over (strategy, symbol) to get the right sums,
> and then one group_by over strategy alone to get the symbol counts
> where the threshold criterion is met.
> 
> To be honest, I don't really know how to do this in pure SQL, so this
> is part sqlalachemy, part SQL question.  In the past, I have done
> naive sql queries and done the extra logic in python, so this time
> around I am trying to be a little more persistent in figuring out the
> sql way to do things.
> 

I think the query should look something like this:

SELECT strategy, COUNT(*)
FROM (SELECT snapshot.strategy AS strategy,
             snapshot.symbol AS symbol,
             sum(snapshot.pnl) AS sum_1
      FROM snapshot
      GROUP BY snapshot.strategy, snapshot.symbol
      HAVING sum(snapshot.pnl) < -15000) AS strategies
GROUP BY strategy

Run that by hand on your database and see if you get the results you
expect. The nested query gets the list of strategies that match the
original criteria, and the outer query uses that to produce the counts.
(Note that there are other ways to get the same result. For example, you
could JOIN your snapshot table to the subquery, which might be useful if
you wanted other columns from it in the outer query)

It should be fairly easy to build that query with SA's underlying
expression language. I'm not certain how to do it through session.query,
but I'm sure it's possible.
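
For what it's worth, a rough (untested) version using the expression
language directly might look like this, assuming 'snapshot' is the Table
object your Snapshot class is mapped to:

from sqlalchemy import select, func

inner = select([snapshot.c.strategy,
                snapshot.c.symbol,
                func.sum(snapshot.c.pnl).label('total_pnl')],
               group_by=[snapshot.c.strategy, snapshot.c.symbol],
               having=func.sum(snapshot.c.pnl) < -15000).alias('strategies')

outer = select([inner.c.strategy, func.count('*')],
               group_by=[inner.c.strategy])

print outer  # check the generated SQL before executing it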

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: select where field=max(field)

2008-11-10 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of John Hunter
> Sent: 08 November 2008 05:09
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: select where field=max(field)
> 

[SNIP]

> Here is a query that lists the sum(pnl) for each symbol and strategy
> in my snapshots table
> 
>   session.query(Snapshot.strategy, Snapshot.symbol,
> func.sum(Snapshot.pnl)).group_by(Snapshot.strategy,
> Snapshot.symbol).all()
> 
> That works fine.  But what if I only want to list the rows where the
> sum(pnl)<-15000 ?  I tried a few things:
> 
>   session.query(Snapshot.strategy, Snapshot.symbol,
> func.sum(Snapshot.pnl)).group_by(Snapshot.strategy,
> Snapshot.symbol).filter(func.sum(Snapshot.pnl)<-15000).all()
> 
> but got the error below
> 
> raise exc.DBAPIError.instance(statement, parameters, e,
> connection_invalidated=is_disconnect)
> ProgrammingError: (ProgrammingError) (, 'Invalid use of group
> function') u'SELECT snapshot.strategy AS snapshot_strategy,
> snapshot.symbol AS snapshot_symbol, sum(snapshot.pnl) AS sum_1 \nFROM
> snapshot \nWHERE sum(snapshot.pnl) < %s GROUP BY snapshot.strategy,
> snapshot.symbol' [-15000]
> 

I'm no SQL expert, so please take this with a pinch of salt, but as far
as I know, conditions in the 'WHERE' clause of an SQL statement are
applied BEFORE any grouping, so you can't use grouping functions (such
as SUM, MAX etc) in the WHERE clause. Instead, you add a 'HAVING'
statement on the end to filter the rows AFTER the grouping.

ie. The SQL you want is something like:

SELECT snapshot.strategy AS snapshot_strategy,
   snapshot.symbol AS snapshot_symbol,
   sum(snapshot.pnl) AS sum_1
FROM snapshot
GROUP BY snapshot.strategy, snapshot.symbol
HAVING sum(snapshot.pnl) < 15000

In SA, I think you might be able to write that as (untested):

(session.query(Snapshot.strategy,
               Snapshot.symbol,
               func.sum(Snapshot.pnl))
 .group_by(Snapshot.strategy, Snapshot.symbol)
 .having(func.sum(Snapshot.pnl) < -15000)
 .all())


BTW, I think the 'no grouping functions in WHERE clause' rule is also
the reason why your MAX query didn't work. The  fix that Mike gave you
turned that part of your query into a subquery that only produced that
single value. This statement:

func.max(Snapshot.datetime).select()

...gets turned into something like 'SELECT max(datetime) FROM snapshot'.
This then gets embedded as a subquery into your larger query. It's
probably worth printing the SQL produced by each of the queries so that
you can see the difference.
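
As a rough illustration of the difference (untested, attribute names
assumed from your snippets):

# an aggregate in the WHERE clause fails...
# session.query(Snapshot).filter(Snapshot.datetime == func.max(Snapshot.datetime))

# ...but embedded as a subquery it works:
latest = func.max(Snapshot.datetime).select()
q = session.query(Snapshot).filter(Snapshot.datetime == latest)
print q   # ... WHERE snapshot.datetime = (SELECT max(snapshot.datetime) FROM snapshot)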

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Memory leak - is session.close() sufficient?

2008-11-10 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of joelanman
> Sent: 10 November 2008 00:21
> To: sqlalchemy
> Subject: [sqlalchemy] Re: Memory leak - is session.close() sufficient?
> 
> 
> Thanks for all the advice - I've changed my unicode settings and
> upgraded Beaker, but still have something to fix.. I'll report back if
> I find it.

I'm sure it's not relevant, but for a while I was developing an
application with SQLAlchemy (0.3.something) on Python 2.4.1, and had
problems with memory leaks. I upgraded to Python 2.4.4 and the leaks
went away.

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: how to print a constructed query with it's parameters?

2008-10-15 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of alex bodnaru
> Sent: 15 October 2008 11:00
> To: SQLAlchemy
> Subject: [sqlalchemy] how to print a constructed query with 
> it's parameters?
> 
> 
> hello friends,
> 
> in order to debug my code, i wish to print my query sql.
> 
> it's in the fashion of
> query = 
> table.query().filter(table.code='XL').filter(table.name.like('
> %'+q+'%')
> with unicode parameters.
> 
> by just printing query, i get the select with ? parameters, but not
> the additional parameters list, that contains ['XL', %q-value%]. since
> it doesn't presently work ok, i'd like to print the list as well.
> 
> thanks in advance,
> alex
> 

This question comes up a lot. For example, see
http://groups.google.com/group/sqlalchemy/browse_thread/thread/a0602ede818f55c7

Firstly, if you use echo=True in your call to create_engine, all SQL
will be printed to stdout. The parameters will be displayed as a list
AFTER the SQL is printed.

Eg. (from http://www.sqlalchemy.org/docs/05/ormtutorial.html)

BEGIN
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
['ed', 'Ed Jones', 'edspassword']
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS
users_fullname, users.password AS users_password
FROM users
WHERE users.name = ?
LIMIT 1 OFFSET 0
['ed']

You can control the logging more finely using the logging module - see
http://www.sqlalchemy.org/docs/05/dbengine.html#dbengine_logging for
more details.
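
For example, something along these lines sends the engine's SQL (and the
parameter lists) through the standard logging module:

import logging

logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)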

The problem is that SQLAlchemy doesn't ever replace those '?' characters
with the actual parameter values. Those strings are passed directly to
the DBAPI driver, along with the list of parameter values. It is then up
to the DBAPI driver how it passes the query to the database. (This is
why SQLAlchemy is fairly safe from SQL Injection attacks).

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: How to turn off UPDATE on child objects when deleting parent object

2008-09-30 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Randy Syring
> Sent: 30 September 2008 07:17
> To: sqlalchemy
> Subject: [sqlalchemy] Re: How to turn off UPDATE on child 
> objects when deleting parent object
> 
> On Sep 30, 2:14 am, Randy Syring <[EMAIL PROTECTED]> wrote:
> > More details are here:
> > http://groups.google.com/group/sqlelixir/browse_thread/thread/aac5d22...
> >
> > But basically, I have a relationship between a parent 
> (Item) table and
> > child (Link) table.  When I try to delete an Item, an SQL 
> statement is
> > generated by SQLAlchemy that tries to set Link.item_id = NULL.  That
> > is invalid, because Link.item_id is a NOT NULL column and 
> also because
> > I have a FK on the column.  I have tried adjusting the 
> cascade options
> > to no avail.
> >
> 
> Sorry, one follow-up.  I can actually get this to work by setting
> cascade='all, delete-orphan'.  But since the FK will do a cascading
> delete, it is slower for SQLAlchemy to have to issue the statements.
> Can I just make SQLAlchemy not issue anything when deleting the
> parent?
> 

I think you need the 'passive_deletes' flag to tell SQLAlchemy that the 
database will delete child objects for you. It is described here:

http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_largecollections_passivedelete
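
Roughly, the mapping would look like this (names hypothetical, and it
assumes the foreign key on Link.item_id is declared with ON DELETE CASCADE
in the database):

from sqlalchemy.orm import mapper, relation

mapper(Item, items_table, properties={
    'links': relation(Link,
                      cascade='all, delete-orphan',
                      passive_deletes=True),
})

With passive_deletes=True, SQLAlchemy won't load the Link rows just to
delete them - it issues the DELETE for the Item and lets the database
cascade take care of the children.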

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: M2M relationship

2008-07-14 Thread King Simon-NFHD78

Heston wrote:
> [SNIP]
> 
> Above you talk about a global module in the application which 
> creates the
> Base and metadata, but I don't understand how these can then 
> be accessed by
> other classes around the application?
> 
> Do you have any good sample code or a link to a decent 
> tutorial? Seems all I
> can find are examples which are based on the idea that all 
> the classes are
> defined in the same module, which isn't practical.
> 

Hi Heston,

This is really a plain python question rather than an SQLAlchemy
question. You might find this section of the python tutorial useful:

http://docs.python.org/dev/tutorial/modules.html

But basically, if you put the following code in a module called
'base.py':

#-
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import MetaData

meta = MetaData()
Base = declarative_base(metadata=meta)
#-

Then in your other modules you can write:

#-
import base

class Post(base.Base):
    __tablename__ = 'post'

    # etc.

#-

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Cannot delete persisted object which is not in current session

2008-07-01 Thread King Simon-NFHD78

Tai Tran wrote:
> 
> [snip]
> 
> class PC(object):
> 
> ...
> 
> def destroySelf(self):
> db_session = Session()
> ...
> for port in self.ports:
> port.destroySelf()
> ...
> db_session.delete(self)
> 
> ...
> 
> 
> class Port(object):
> 
> ...
> 
> def destroySelf(self):
> db_session = Session()
> ...
> db_session.delete(self) 
> 
> 
> I'm using contextual/thread-local sessions, in the last statement
> "db_session.delete(self)" of Port.destroySelf(), I always get the same
> traceback as I tried to demonstrate in the last port:
> 
> "sqlalchemy.exceptions.InvalidRequestError: Instance 
> '[EMAIL PROTECTED]' is with
> key (, (2,), None) already persisted 
> with a different
> identity"
> 
> I found a solution to solve this problem by deleting the 
> object in current
> session:
> 
> def destroySelf(self):
> db_session = Session()
> ...
> obj = db_session.get(Port, self.id)
> db_session.delete(obj)
> 
> But I'm wondering whether it is the right approach?
> 

The 'object_session' function returns the session that an object is
currently associated with. I would have thought that you could write
your destroySelf method as:

def destroySelf(self):
    db_session = object_session(self)
    db_session.delete(self)


Actually, looking at the 0.5 docs, I see that object_session is now a
classmethod on Session, so perhaps it should be written like this:

def destroySelf(self):
    db_session = Session.object_session(self)
    db_session.delete(self)

See 'How can I get the Session for a certain object' in the FAQ section
of http://www.sqlalchemy.org/docs/05/session.html

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: sharding id_chooser query_chooser

2008-06-27 Thread King Simon-NFHD78

Lilo wrote:
> 
> My understanding of this query_chooser is that it's used when you want
> to execute orm's sql rather than raw sql.
> 
> I don't quite understand what is visit_binary function do from
> attribute_shard.py example.  What does it mean binary.operator,
> binary.left, binary.right.clause and query._criterion?
> 
> The sharding design behind our application is that we have a master
> lookup table and shards.  What shard to execute sql is based on
> querying master lookup table.
> 
> taken from sqlalchemy attribute_shard.py example:
> 
> def query_chooser(query):
> ids = []
> 
> # here we will traverse through the query's criterion, searching
> # for SQL constructs.  we'll grab continent names as we find them
> # and convert to shard ids
> class FindContinent(sql.ClauseVisitor):
> def visit_binary(self, binary):
> if binary.left is weather_locations.c.continent:
> if binary.operator == operators.eq:
> ids.append(shard_lookup[binary.right.value])
> elif binary.operator == operators.in_op:
> for bind in binary.right.clauses:
> ids.append(shard_lookup[bind.value])
> 
> FindContinent().traverse(query._criterion)
> if len(ids) == 0:
> return ['north_america', 'asia', 'europe', 'south_america']
> else:
> return ids
> 
> thank you.
> 

Hi,

(I'm probably going to get the details wrong here, but hopefully the
general idea will be right)

SQLAlchemy represents SQL expressions as objects, a bit like a parse
tree. For example, there are classes that represent tables, joins,
functions and so on. It uses a Visitor pattern
(http://en.wikipedia.org/wiki/Visitor_pattern) to traverse these
structures.

A binary clause is an SQL expression with an operator, a left half and a
right half. For example, in the clause 'id = 5', binary.left  is 'id',
binary.right is '5', and binary.operator is '=' (or rather,
operators.eq, which is the object that represents '=').

The query_chooser function above uses a Visitor to look through all the
SQL expressions that make up the query that is about to be executed.
Because the only overridden method is 'visit_binary', anything other
than binary clauses are ignored.

The method body could be written in long-hand as:

If the left part of the expression is 'weather_locations.continent':
    If the expression is 'continent = XXX':
        add the shard for continent XXX
    Else if the expression is 'continent IN (XXX, YYY)':
        add the shards for XXX and YYY


(operators.in_op corresponds to the 'IN' operator, and
binary.right.clauses contains the right-hand-side of that expression)

The fallback case (if len(ids) == 0) happens if the visitor failed to
find any expressions that it could handle, in which case all the shards
will be queried.

I don't understand your situation well enough to know how to adapt the
example. If your master lookup table is basically doing the same job as
the shard_lookup dictionary in the example, then you could replace
shard_lookup above with a function call that does the query.
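
For example (completely hypothetical - 'master_lookup' is assumed to be a
Table on your master engine with 'continent' and 'shard_id' columns):

from sqlalchemy import select

def lookup_shard(continent):
    # query the master lookup table instead of the in-memory dict
    row = master_engine.execute(
        select([master_lookup.c.shard_id],
               master_lookup.c.continent == continent)).fetchone()
    return row.shard_id

Inside visit_binary you would then append lookup_shard(binary.right.value)
instead of shard_lookup[binary.right.value].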

I hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: text

2008-06-11 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of jack2318
> Sent: 11 June 2008 17:50
> To: sqlalchemy
> Subject: [sqlalchemy] text
> 
> 
> I tried very simple test:
> 
> s = text("SELECT COUNT(*) FROM table WHERE field LIKE 'something'")
> count = conn.execute(s).fetchone()
> 
> and this produced error
> Unexpected error:  not enough arguments
> for format string
> 
> not big deal there is few way around but just for completeness I
> believe it should work (no parameters to expand)
> 
> regards
> -- jacek
> PS: I am using mysql


Are you sure you didn't try:

s = text("SELECT COUNT(*) FROM table WHERE field LIKE 'something%'") 

i.e. included a percent sign in the query?

I think that SQL strings get passed through Python's % substitution
function at some point in SQLAlchemy (or possibly the DBAPI driver), so
you need to use '%%' instead of '%'.
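
So if the pattern really does need a literal '%', doubling it should get
past the error:

s = text("SELECT COUNT(*) FROM table WHERE field LIKE 'something%%'")
count = conn.execute(s).fetchone()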

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: select db engine with create_session

2008-06-10 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of lilo
> Sent: 10 June 2008 17:23
> To: sqlalchemy
> Subject: [sqlalchemy] select db engine with create_session
> 
> 
> can someone tell me how session chooses the right db engine to insert
> records in this example:
> http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/exampl
> es/sharding/attribute_shard.py#L184?
> 

(I've never used the sharding support before, so this is a guess)

On line 150, the create_session factory function gets configured with
the shard_chooser, id_chooser and query_chooser functions. The
shard_chooser function is defined on line 111 and looks like this:

111 def shard_chooser(mapper, instance, clause=None):
112 if isinstance(instance, WeatherLocation):
113 return shard_lookup[instance.continent]
114 else:
115 return shard_chooser(mapper, instance.location)

So if the instance being saved is a WeatherLocation object, then the
shard_lookup dictionary is used to return the appropriate db engine.
Otherwise, the instance is assumed to be a Report object, and so it
calls itself recursively with the Report's location.

I hope that helps - I'm sure someone will correct me if I'm wrong.

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: sqlalchemy in virtualenv Instructions

2008-06-10 Thread King Simon-NFHD78

Lukasz Szybalski wrote:
> 
> 
> I have mysqldb installed in the system wide install how do I tell
> virtualenv to use it?
> I don't see a need to install it in virtualenv again so I guess I just
> have to givea right path? How, and in which file?
> 
> Thanks,
> Lucas
> 
> 
>  File "/usr/local/pythonenv/BASELINE/lib/python2.4/site-packages/SQLAlchemy-0.4.6dev_r4675-py2.4.egg/sqlalchemy/databases/mysql.py", line 1430, in dbapi
> import MySQLdb as mysql
> ImportError: No module named MySQLdb
> 

When you create the virtualenv, there is a '--no-site-packages' switch
which determines whether the system-wide site-packages directory will be
on the search path or not. If you didn't provide this switch, then
MySQLdb should be visible. (Documented at
http://pypi.python.org/pypi/virtualenv#the-no-site-packages-option)

From your python prompt, type "import sys; print '\n'.join(sys.path)" to
see your search path. If the directory containing your MySQLdb
installation is not in there, you have a problem.

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Pre-commit hooks

2008-05-14 Thread King Simon-NFHD78

Yannick Gingras wrote:
> Greetings Alchemists,
>   Is it possible to define a hook in a mapped class that will 
> be called
> to test the sanity of an instance before it gets committed?
> 
> As an example:
> 
>   class Item(object):
> def _pre_commit(self):
>   assert (self.dry_weight + self.fluids) < 50
>   mapper(Item, items_table)
> 
> I don't want to put the test mutators of dry_weight or fluids since
> it's OK to have a temporary inconsistent state as long as the state is
> consistent at commit time.
> 
> I see that some of this functionality if covered by MapperExtention
> but since the test is only related to Item I'd rather put the test in
> it.
>  

The way I've done this in the past is to define a fairly generic
MapperExtension that calls hook methods on my classes. Here's an
example:

--

import sqlalchemy as sa

class EventExtension(sa.orm.MapperExtension):
    def handle_event(self, event, mapper, connection, instance):
        name = '__%s__' % event
        handler = getattr(instance, name, None)
        if handler:
            handler()

def _add_handler(event):
    def handle_event(self, *args, **kwargs):
        self.handle_event(event, *args, **kwargs)
        return sa.orm.EXT_CONTINUE
    setattr(EventExtension, event, handle_event)

for _name in ['before_insert',
              'after_insert',
              'before_update',
              'after_update',
              'before_delete',
              'after_delete']:
    _add_handler(_name)

event_extension = EventExtension()


---

Now if your mapped class has __before_insert__, __after_insert__ etc.
methods, they will be called at the appropriate point. (Remember to
include event_extension when you map the class)
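
Using it with your example would look something like this (a sketch):

class Item(object):
    def __before_insert__(self):
        # the sanity check from your original post
        assert (self.dry_weight + self.fluids) < 50

mapper(Item, items_table, extension=event_extension)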

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: schema changes

2008-02-14 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Chris Withers
> Sent: 13 February 2008 13:51
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: schema changes
> 
> 
> Michael Bayer wrote:
> > 
> >> What if they exist but don't match the spec that SA has created?
> > 
> > just try it out...create_all() by default checks the system 
> tables for  
> > the presence of a table first before attempting to create it 
> 
> Cool,
> 
> > (same  
> > with dropping). 
> 
> When would SA drop a table?


When you ask it to via metadata.drop_all() or table.drop(). Docs are
here:

http://www.sqlalchemy.org/docs/04/metadata.html#metadata_creating


> 
> > this is controlled by a flag called "checkfirst".
> 
> This a parameter to the methods or does it need to be set in 
> some config 
> file?
> 


See the docs referenced above. SQLAlchemy on its own doesn't use config
files, but some of the frameworks that use it (eg. TurboGears and
pylons) allow some SQLAlchemy-related configuration to be done in config
files.


> > if you're concerned about people running your application against  
> > databases created from a different version and then 
> failing, I would  
> > suggest adding a "version" table to your database which 
> contains data  
> > corresponding against the version of your application in 
> some way.   
> 
> Good plan.
> 
> > There has been interest among some SA users over building a 
> generic  
> > "schema comparison" system and I think even some prototypes are  
> > available, though I think thats a fairly complicated and 
> unreliable  
> > approach to take for this particular issue.
> 
> Do the projects have a name?
> 


I don't know about generic comparisons, but the Migrate project might be
a place to start:

http://code.google.com/p/sqlalchemy-migrate/

(Note that it is only in the process of being updated for SA 0.4)

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Search in object list by field value

2008-02-08 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of maxi
> Sent: 08 February 2008 14:30
> To: sqlalchemy
> Subject: [sqlalchemy] Re: Search in object list by field value
> 
> 
> Thanks Simon,
> I was doing of that manner.
> 
> Now, is advisable implement my own custom collection ? (One what
> implement a "locate" method for example)
> I was reading Custom Collection Implementations on sqlalchemy doc, but
> I'am not very clear over how implement this.
> 
> Any help with this?
> 
> Regards.
> 

It really depends where you want your 'locate' method to appear. The
custom collection classes mentioned in the docs are used for
relationships between mapped classes.

For example, if Person has a one-to-many relationship with Address, via
a property 'addresses', then by default that 'addresses' property is a
list. By creating a custom collection class with your 'locate' method,
you could write code like this:

person = session.query(Person).first()

address = person.addresses.locate('id', 12)

If that is really what you are trying to do, then this collection class
would probably work (untested):



class searchable_list(list):
    def locate(self, attrname, value):
        for obj in self:
            if getattr(obj, attrname) == value:
                return obj



Then when you create your 'addresses' relation, you pass
collection_class=searchable_list. However, this doesn't seem massively
useful to me, so perhaps I haven't understood what you're trying to do.
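
Concretely, the mapping would be something like this (names hypothetical):

from sqlalchemy.orm import mapper, relation

mapper(Person, person_table, properties={
    'addresses': relation(Address, collection_class=searchable_list),
})

# person.addresses.locate('id', 12) would then work as described above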

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Search in object list by field value

2008-02-08 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of maxi
> Sent: 08 February 2008 13:47
> To: sqlalchemy
> Subject: [sqlalchemy] Re: Search in object list by field value
> 
> 
> On 8 feb, 09:58, svilen <[EMAIL PROTECTED]> wrote:
> > On Friday 08 February 2008 14:26:04 maxi wrote:
> > a) let SQl do it
> >   p1 = session.query(Person).filter_by(id==123).first()
> > #see .filter_by syntax
> 
> Yes, of course, I know it. But my question appoint to how search in an
> object list.
> 
> > b) get all people, then plain python:
> >  for p in people:
> >if p.id == 123: break
> >  else: p=None
> 
> Something like this, but through some build in function.
> 
> Thanks for your replay.
> Regards.

I don't think there's anything built in, but it is a very easy
stand-alone function to write:



def get_first_match(objects, attrname, value):
    """
    Return the first object where the named attribute
    has the given value.
    Returns None if no match is found.
    """
    for obj in objects:
        if getattr(obj, attrname) == value:
            return obj


# Assuming 'people' is a list of Person objects
person_one = get_first_match(people, 'id', 1)



Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: How do I Instantiate a class with an Auto-assigned Key ID

2008-02-04 Thread King Simon-NFHD78

Hi,

If your database is set up to generate auto-incrementing primary keys on
INSERT (such as a MySQL auto-incrementing column), then SA will retrieve
the new value when the object is flushed to the database. When the
object is constructed, the primary key will be None until you flush the
session, at which point the row gets inserted, the new ID gets
generated, read back, and appears on your instance.

(Note that depending on how the database is set up, and how you've told
SQLAlchemy about the column, there may be some subtlety about the
underlying operations that are performed, but that's usually invisible
to the application code)

The SQLAlchemy ORM tutorial is a good introduction to all this:

http://www.sqlalchemy.org/docs/04/ormtutorial.html

In particular, look at the end of the 'Setting up the Mapping' section,
where it talks about the 'id' attribute being None, and the 'Saving
Objects' section, where it becomes 1.
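
In code, the sequence of events looks roughly like this (a self-contained
sketch against SQLite; the class and column names are made up, and
session.save() is the 0.4 spelling - later versions use session.add()):

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import mapper, sessionmaker

metadata = MetaData()
users = Table('users', metadata,
              Column('id', Integer, primary_key=True),   # auto-incrementing
              Column('name', String(50)))

class User(object):
    def __init__(self, name):
        self.name = name

mapper(User, users)

engine = create_engine('sqlite://')
metadata.create_all(engine)
session = sessionmaker(bind=engine)()

ed = User('ed')
session.save(ed)
print ed.id      # None - no INSERT has been issued yet
session.flush()  # the INSERT happens here and the new id is read back
print ed.id      # now an integer assigned by the database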

Hope that helps,

Simon

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of jdcman
Sent: 04 February 2008 03:04
To: sqlalchemy
Subject: [sqlalchemy] Re: How do I Instantiate a class with an
Auto-assigned Key ID


Ok,   so can you please tell me how I can make a class that
AutoIncrements the ID automatically whenever I need to make a new
record  let me explain...

As I add records,  I want the ID to Auto Increment.   When you say
"define a constructor for your class which does not require an ID",
but I REQUIRE an ID,  I also require the ID to increment when I add
new record.   You went on to say "SQLALchemy has no constructor
requirements on mapped classes",  what do you mean by that?

Ok,  so can you give me some really simple SQLAlchemy code that would
allow me to increment the ID automatically?Keep in mind,  that
multiple people may be adding records,  so it has to sort all this
out.

Am I to assume that this is beyond SQLAlchemy,  and no one else would
ever want to do this?   And this was never designed to do this?

I'm a really dumb newbee here,  so please be patient with me...

Thanx

On Feb 3, 7:38 am, "Michael Bayer" <[EMAIL PROTECTED]> wrote:
> jdcman wrote:
>
> > our members table has an ID as the primary key with an auto-assigned
> > key ID.
>
> > When we create the class it is instantiated with the ID and also
> > returns an ID.
>
> > obviously if we provide an ID it conflicts with the auto-generated ID.
> > and if we don't provide an ID it says we didn't provide enough
> > variables to instantiate the class
> > so how do we define our class so that it can return an ID but doesn't
> > require one to instantiate the class?
>
> define a constructor for your class which does not require an id, or
> don't define a constructor at all.  SQLAlchemy has no constructor
> requirements on mapped classes, that error is being produced by your own
> application.



--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Two SqlAlchemy versions installed together

2008-01-23 Thread King Simon-NFHD78

If you're using easy_install to install them, you should be able to
install them with the '-m' (--multiversion) switch. See:

http://peak.telecommunity.com/DevCenter/EasyInstall#command-line-options

Choosing the version you want to use in your application is then done
like this:

import pkg_resources
pkg_resources.require('SQLAlchemy == 0.3.10')

Or

import pkg_resources
pkg_resources.require('SQLAlchemy >= 0.4')

However, I much prefer the solution provided by virtualenv
(http://pypi.python.org/pypi/virtualenv), particularly as the number of
libraries that you depend on grows.

Hope that helps,

Simon

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of maxi
Sent: 23 January 2008 13:54
To: sqlalchemy
Subject: [sqlalchemy] Two SqlAlchemy versions installed together


Can I get two SqlAlchemy versions installed together?
I need have, sqlalchemy 0.3.10 and the 0.4 last release installed
because I have two version of my application (two diferents brunches).

How can I work whit this?
In my app, how can I indicate which version I want to use?

Thanks,
M.


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Filter by year in datetime column

2008-01-18 Thread King Simon-NFHD78

Would "create_date >= '2007-01-01' and create_date < '2008-01-01'" be
acceptable?

If so, something like this should work

from sqlalchemy import and_
from datetime import date

data = Table.query().filter(and_(Mikropost.c.create_date >= date(2007, 1, 1),
                                 Mikropost.c.create_date < date(2008, 1, 1)))

Hope that helps,

Simon

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of [EMAIL PROTECTED]
Sent: 18 January 2008 13:22
To: sqlalchemy
Subject: [sqlalchemy] Filter by year in datetime column


Hello, pleas, i have begginer problem and question:

In table (database is sqlite) is colum for create date (create_date =
Field(DateTime, default = datetime.now))

I need query from table, with all item where have year of create date
2007.

Is this the right way ? (this don`t work)
data = Table.query().filter(func.year(Mikropost.c.create_date) ==
2008)



--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Schema display

2008-01-07 Thread King Simon-NFHD78

sqlalchemy_schemadisplay.py is not part of the SQLAlchemy distribution -
it's a separate script attached to that wiki page. You can download it
from the link at the bottom of the page.

Hope that helps,

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> [EMAIL PROTECTED]
> Sent: 07 January 2008 09:44
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: Schema display
> 
> 
> I tried to import boundMetadata and this has become MetaData, and I'm 
> getting an import error
> 
> ImportError: No module named sqlalchemy_schemadisplay
> 
> So I was wondering if this has moved out of MetaData or been renamed.
> 
> Morgan
> 
> Michael Bayer wrote:
> > where its always been...
> >
> > http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SchemaDisplay
> >
> >
> > On Jan 5, 2008, at 7:58 PM, [EMAIL PROTECTED] wrote:
> >
> >   
> >> Hi Guys,
> >>
> >> I was wondering where the function create_schema_graph has 
> gone, or  
> >> what
> >> it has changed to. Any assistance would be appreciated.
> >>
> >> Let me know,
> >> Morgan
> >>
> >> 
> >
> >
> > >
> >   

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Convert JOIN to WHERE for old MySQLs

2007-12-12 Thread King Simon-NFHD78

Michael Bayer wrote
> On Dec 12, 2007, at 9:04 AM, King Simon-NFHD78 wrote:
> 
> >
> > Hi,
> >
> > I'm connecting to an ancient version of MySQL (3.23.58) that (as far as
> > I can tell) doesn't support basic "JOIN <table> ON <condition>" syntax
> " syntax
> >
> > Ie. this gives a syntax error:
> >
> >  SELECT * FROM productversions pv
> >  JOIN producttypes pt ON pv.producttype = pt.producttype
> >
> > Whereas this is fine:
> >
> >  SELECT * FROM productversions pv
> >  JOIN producttypes pt
> >  WHERE pv.producttype = pt.producttype
> >
> > Can anyone think of anything I can do (perhaps by subclassing or
> > monkeypatching MySQLDialect) to automatically convert 
> queries like the
> > former into the latter? SQLAlchemy seems to be infitely flexible in
> > dealing with syntax quirks of various databases, so I've got my  
> > fingers
> > crossed...
> >
> 
> this is possible, we do it in the oracle module when the ansi=False  
> flag is set.  if you look at the visit_join code in oracle.py, you  
> could probably lift that up and stick it right in mysql.py, have it  
> activated by a dialect flag, and it would work.
> 
> however, if the database doesnt support JOIN..ON, that 
> indicates that  
> its impossible to do an OUTER JOIN.oracle used a special 
> operator  
> (+) to indicate the "outerness" of a join..does this old mysql have  
> anything for that ?   otherwise it doesnt seem much worth it to  
> support JOIN at all when you could just select from two tables.
> 

I knew SA could do it! Thanks!

This version of MySQL does support LEFT JOIN and RIGHT JOIN, so things
like eager loads work correctly (as far as I can tell). Does that mean
that in visit_join, I should defer to the normal implementation if
join.isouter is True?

This came up because I was using query.join to filter a query based on a
child's attributes, and suddenly the query stopped working on MySQL. I
was about to try and find all occurrences of query.join and change them
to explicit filter conditions, but if I can make the visit_join method
work, I would much prefer to do it that way.

I'll let you know if it works - it may be worth adding as an optional
behaviour to MySQLDialect (although if I'm the only one that's come
across the problem, maybe no-one else is using a version of MySQL that's
this old)

Thanks,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Convert JOIN to WHERE for old MySQLs

2007-12-12 Thread King Simon-NFHD78

Jason kirtland wrote:
> 
> King Simon-NFHD78 wrote:
> > Hi,
> > 
> > I'm connecting to an ancient version of MySQL (3.23.58) that (as far as
> > I can tell) doesn't support basic "JOIN <table> ON <condition>" syntax
> > 
> > Ie. this gives a syntax error:
> > 
> >   SELECT * FROM productversions pv
> >   JOIN producttypes pt ON pv.producttype = pt.producttype
> > 
> > Whereas this is fine:
> > 
> >   SELECT * FROM productversions pv
> >   JOIN producttypes pt
> >   WHERE pv.producttype = pt.producttype
> > 
> > Can anyone think of anything I can do (perhaps by subclassing or
> > monkeypatching MySQLDialect) to automatically convert 
> queries like the
> > former into the latter? SQLAlchemy seems to be infitely flexible in
> > dealing with syntax quirks of various databases, so I've 
> got my fingers
> > crossed...
> 
> This is in r3916.  It's a simple change that can be easily monkey 
> patched into any version of SA- these just need to be emitted 
> as INNER 
> JOIN .. ON instead.
> 
> -j

Thanks - that looks much easier than the oracle version. I _almost_
understand this one! Monkeypatching now...

(I also hadn't spotted that adding 'INNER' was enough to satisfy
MySQL...)

Cheers,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Column defaults in MapperExtension.after_insert

2007-12-12 Thread King Simon-NFHD78

Michael Bayer wrote:
> On Dec 11, 2007, at 10:07 AM, King Simon-NFHD78 wrote:
> 
> >
> > I assume the problem is that my date_created column isn't 
> immediately
> > available at the 'after_insert' stage, because it is 
> generated in the
> > SQL INSERT statement, but hasn't been read back from the 
> database yet.
> > Is there a more suitable hook-point than after_insert, where I can
> > safely read values like this?
> >
> 
> this is a thorny issue, while the immediate issue youre having is  
> something that can possibly be smoothed over (probably even by  
> sticking on the _instance_key yourself, not sure if i should 
> recommend  
> that though), its not going to work if say you wanted to look at the  
> foriegn key attributes in an attached collection since they havent  
> been set up yet.
> 
> also the API youre using with get_history() has changed in 
> trunk, itll  
> give you a tuple now of (added, unchanged, deleted).
> 
> there is the notion of SessionExtension which has an after_commit()  
> hook, not sure if that is a viable option here.   I cant go too nuts  
> with all these extension hooks since they all add latency to  
> everything (sorta wishing there were a lot less hooks as it is).
> 

Thanks - I've ignored your warning and stuck on the _instance_key, in
the full knowledge that when it breaks it'll be my own silly fault ;-)
I'll investigate the SessionExtension as a longer-term fix. If
after_commit runs when all attributes are completely up-to-date, and I
can still access the previous and new state via get_history, it sounds
like the right thing to use.
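
Something along these lines, perhaps (untested - and the import location
for SessionExtension differs between 0.4 and 0.5):

from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.interfaces import SessionExtension

class LoggingExtension(SessionExtension):
    def after_commit(self, session):
        # by this point the flush has completed, so server-generated
        # defaults like date_created should be safe to read
        pass  # write the log entries here

Session = sessionmaker(extension=LoggingExtension())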

I fully understand your worries about hooks slowing SQLAlchemy down, but
I also think that they are one of the features that makes the ORM so
flexible, and it would be a great shame to lose them. I wonder if there
could be a way of caching all the extension methods so that applying
them would be as fast as possible.



OK, I should have guessed. You've already done that - I see the
ExtensionCarrier checks which methods have been overridden when each
extension is added.

Thanks for your help,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: LIMIT in queries

2007-12-12 Thread King Simon-NFHD78

Felix Schwarz wrote:
> Hi,
> 
> after reading the docs [1] I thought that something like
> session.query(User).filter(User.c.id > 3)[0]
> should work even if the filter clause does not return any rows.
> 
> But - compliant with Python's behavior - SQLAlchemy raises an
> IndexError.
> (...)
>File 
> "/usr/lib/python2.4/site-packages/sqlalchemy/orm/query.py", 
> line 577, in __getitem__
>  return list(self[item:item+1])[0]
> IndexError: list index out of range
> I expected that [0] applied to query without results would 
> return None.
> 
> Did I miss another possibility for LIMIT'ing queries (using 
> sqlalchemy.orm)?
> 
> fs
> 
> [1] 
> http://www.sqlalchemy.org/docs/04/ormtutorial.html#datamapping
> _querying

I think matching Python's behaviour is definitely the right way to go. If
you want to get None specifically for index [0], you can use the 'first'
method on query.

If you are happy to get an empty list, you could use a slice and then call
.all()

Eg:

session.query(User).filter(User.c.id > 3)[:1].all()

-> []
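
And for comparison, first() on the same empty result:

session.query(User).filter(User.c.id > 3).first()

-> None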

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Convert JOIN to WHERE for old MySQLs

2007-12-12 Thread King Simon-NFHD78

Hi,

I'm connecting to an ancient version of MySQL (3.23.58) that (as far as
I can tell) doesn't support basic "JOIN <table> ON <condition>" syntax

Ie. this gives a syntax error:

  SELECT * FROM productversions pv
  JOIN producttypes pt ON pv.producttype = pt.producttype

Whereas this is fine:

  SELECT * FROM productversions pv
  JOIN producttypes pt
  WHERE pv.producttype = pt.producttype

Can anyone think of anything I can do (perhaps by subclassing or
monkeypatching MySQLDialect) to automatically convert queries like the
former into the latter? SQLAlchemy seems to be infinitely flexible in
dealing with syntax quirks of various databases, so I've got my fingers
crossed...

Thanks a lot,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Column defaults in MapperExtension.after_insert

2007-12-11 Thread King Simon-NFHD78
svilen wrote:
>
> On Tuesday 11 December 2007 13:13:37 King Simon-NFHD78 wrote:
> > Hi,
> >
> > I used to be able to iterate over mapper.properties.items() to get
> > the name of each mapped property along with the object that
> > implements it. However, in 0.4.1, trying to do this results in a
> > NotImplementedError telling me to use iterate_properties and
> > get_property instead, but I can't see a way to get the name of the
> > property through either of these methods.
> >
> > Is there a generic way that, given a mapped class, I can get a list
> > of the mapped properties with their names?
> iterate_props yields Property instances; maybe p.key is the name u
> want.

Ah yes, p.key was exactly what I was looking for - thanks.

The next part of the question is that I am using this in the
after_insert method of a MapperExtension to write a log file with all
the values that were written to the database. The code looks something
like this:


mapper = orm.object_mapper(instance)
for prop in mapper.iterate_properties:
log.info('   %s=%r',
 prop.key, getattr(instance, prop.key))


This works nicely for attributes that I set directly. However, it breaks
when it comes across a column that is defined as:

sa.Column('date_created', sa.DateTime,
  default=sa.func.current_timestamp(type=sa.DateTime))

The attached script should show the problem. The traceback is:

Traceback (most recent call last):
  File "satest.py", line 32, in after_insert
    print "  %s = %r" % (prop.key, getattr(instance, prop.key))
  File "/home/warranty/python/development/lib/python2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/orm/attributes.py", line 40, in __get__
    return self.impl.get(obj._state)
  File "/home/warranty/python/development/lib/python2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/orm/attributes.py", line 215, in get
    value = callable_()
  File "/home/warranty/python/development/lib/python2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/orm/attributes.py", line 628, in __fire_trigger
    self.trigger(instance, [k for k in self.expired_attributes if k not in self.dict])
  File "/home/warranty/python/development/lib/python2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/orm/session.py", line 1112, in load_attributes
    if object_session(instance).query(instance.__class__)._get(instance._instance_key, refresh_instance=instance, only_load_props=attribute_names) is None:
AttributeError: 'User' object has no attribute '_instance_key'

I assume the problem is that my date_created column isn't immediately
available at the 'after_insert' stage, because it is generated in the
SQL INSERT statement, but hasn't been read back from the database yet.
Is there a more suitable hook-point than after_insert, where I can
safely read values like this?

Thanks a lot,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[Attachment: satest.py]


[sqlalchemy] Iterating over mapped properties

2007-12-11 Thread King Simon-NFHD78

Hi,

I used to be able to iterate over mapper.properties.items() to get the
name of each mapped property along with the object that implements it.
However, in 0.4.1, trying to do this results in a NotImplementedError
telling me to use iterate_properties and get_property instead, but I
can't see a way to get the name of the property through either of these
methods.

Is there a generic way that, given a mapped class, I can get a list of
the mapped properties with their names?

Thanks a lot,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Matching a DateTime-field

2007-12-11 Thread King Simon-NFHD78

It may not matter to you, but I wouldn't have thought this would be a
very efficient query, because the database is going to have to call the
DATE_FORMAT function twice for every row in your table. I would have
thought a more efficient version would be one that asks for all rows
between the first of one month and the first of another month
(especially if the date column is indexed).

Something like:

from datetime import date
session.query(List).filter(
and_(List.expire >= date(2007, 12, 1),
 List.expire < date(2008, 1, 1))
).all()


Adding one month to a date is pretty easy, but if you wanted to do any
more complicated date calculations, the dateutil library is very good:

http://labix.org/python-dateutil
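
For example, a rough sketch of the month-range idea (assuming the same
List mapping and 'expire' column as above; relativedelta is the dateutil
class that handles the month arithmetic):

from datetime import date
from dateutil.relativedelta import relativedelta

start = date(2007, 12, 1)
end = start + relativedelta(months=1)   # 2008-01-01
session.query(List).filter(
    and_(List.expire >= start, List.expire < end)
).all()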

Hope that helps,

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Adam B
> Sent: 10 December 2007 19:15
> To: sqlalchemy
> Subject: [sqlalchemy] Re: Matching a DateTime-field
> 
> 
> Hello!
> 
> Thanks for the pointers.
> 
> Here is the solution for MySQL :
> session.query(List).filter(and_(func.DATE_FORMAT(List.expire,'%Y')
> ==2007 ,func.DATE_FORMAT(List.expire,"%m") == 12)).all()
> 
> 
> 
> 
> 
> On Dec 10, 6:08 pm, "Rick Morrison" <[EMAIL PROTECTED]> wrote:
> > Yeah, it was a "for instance" answer, you'll need to use 
> the correct MySql
> > syntax of course.
> >
> > On 12/10/07, Adam B <[EMAIL PROTECTED]> wrote:
> >
> >
> >
> > > On Dec 10, 1:16 am, "Rick Morrison" 
> <[EMAIL PROTECTED]> wrote:
> > > > Any query using sql expressions is going to want to use 
> correctly typed
> > > data
> > > > -- you're trying to query a date column with a string 
> value. The LIKE
> > > > operator is for string data.
> >
> > > > I'm not up on my mssql date expressions, but the answer 
> is going to
> > > resemble
> > > > something like this:
> >
> > > > .filter(and_(func.datepart('year', List.expire) == 2007,
> > > > func.datepart('month', List.expire) == the_month_number))
> >
> > > Ok, isnt this mssql specifik?  I only find datepart in various
> > > VB / .net documentation/solutions.
> 
> > 
> 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: IMPORTANT: Does SA caches objects in memory forever?

2007-12-07 Thread King Simon-NFHD78

Felix Schwarz wrote:
>
> I have a question which I think is similar enough to be asked 
> in the same 
> thread: I have a set of quite simple migration scripts which 
> us SQLAlchemy 0.4 
> and Elixir 0.4. I do extract data from the old legacy (MySQL) 
> database with 
> SQLAlchemy and put this data into new Elixir objects.
> 
> Currently, these scripts use up to 600 MB RAM. This is no 
> real problem as we 
> probably could devote a machine with 4 GB ram solely for the 
> automated 
> migration. But it would be nice to use lower-powered machines 
> for our migration 
> tasks.
> 
> What wonders me is that I do not (knowingly) keep references 
> neither to the old 
> data items nor the new elixir objects. Nevertheless memory 
> usage increases 
> during the migration. Is there any way to debug this easily 
> to see why Python 
> does need so much memory/which references prevent the objects 
> from being garbage 
> collected? Running the garbage collector manually did not 
> help much (saving only 
> about 5 MB).
> 
> fs
>

Here is a snippet that I've used before when trying to track down
objects that aren't getting cleaned up properly. I don't think it'll
find leaks of built-in types, but it should help with instances of
user-defined classes. Just call 'report_objects' every now and then.



import gc

_previous = {}
def report_objects(threshold=500):
objects = gc.get_objects()
print "Number of objects in memory: %d" % len(objects)
modules = {}
for obj in gc.get_objects():
if getattr(obj, '__module__', None) is not None:
module_parts = obj.__module__.split('.')
module = '.'.join(module_parts[:3])
modules.setdefault(module, 0)
modules[module] += 1

print "Modules with > %d objects:" % threshold
dump_modules(modules, threshold)

if _previous:
changes = {}
for module, value in modules.items():
changes[module] = value - _previous.get(module, 0)

print "Changes since last time:"
dump_modules(changes, 10)

_previous.clear()
_previous.update(modules)
print ""


def dump_modules(modules, threshold):
maxlen = max(len(m) for m in modules)
l = [(value, module) for module, value in modules.items()
 if value > threshold]
if l:
l.sort(reverse=True)
for value, module in l:
print "%*s %5d" % (maxlen+1, module, value)
else:
print "   "

-

The first time you call report_objects, you should get something like
this:

Number of objects in memory: 100794
Modules with > 500 objects:
sqlalchemy.ext.assignmapper  1935
sqlalchemy.util  1362
   sqlalchemy.types  1250
  sqlalchemy.schema  1170
 sqlalchemy.sql  1124
  sqlalchemy.orm.unitofwork  1003
  sqlalchemy.orm.strategies   956
  sqlalchemy.orm.properties   750
  sqlalchemy.orm.attributes   699
  sqlalchemy.orm.mapper   681
  testresults.define_schema   665


And then when you call it again some time later:

Number of objects in memory: 102349
Modules with > 500 objects:
sqlalchemy.ext.assignmapper  1935
sqlalchemy.util  1418
   sqlalchemy.types  1250
  sqlalchemy.schema  1204
 sqlalchemy.sql  1177
  sqlalchemy.orm.unitofwork  1004
  sqlalchemy.orm.strategies   993
  sqlalchemy.orm.properties   750
  sqlalchemy.orm.attributes   708
  sqlalchemy.orm.mapper   681
  testresults.define_schema   665
Changes since last time:
            sqlalchemy.util    56
             sqlalchemy.sql    53
 sqlalchemy.databases.mysql    49
            MySQLdb.cursors    45
  sqlalchemy.orm.strategies    37
          sqlalchemy.schema    34
        MySQLdb.connections    16
         MySQLdb.converters    11

Note that the module names are where the classes are defined, not where
they are used, but it may be enough to give you a clue.

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: concurent modification

2007-11-28 Thread King Simon-NFHD78

imgrey wrote:
> 
> Based on my observations it happens only with concurent inserts/
> updates.
> One thread :
> 
> {{{
> f_table.insert().execute()
> session.flush()
> transaction = session.begin()
> nested = session.begin_nested()
> try:
>   f_table.insert().execute()
> except IntegrityError:
>   #record is exists and we got exception corresponding to contraint
>   stuff = session.query(Path).select_from(..)
>   ..update..
> nested.commit()
> 
> transaction.commit()
> session.clear()
> }}}
> 
> Another thread:
> {{{
> f_table.delete(...)
> #or update
> }}}
> 

I'm probably completely wrong about this, but in your example above, I
don't think the statement f_table.insert().execute() necessarily uses
the same connection as the ORM-level transaction and queries.

Using SQL expressions with ORM transactions is covered in this section
of the docs:

http://www.sqlalchemy.org/docs/04/session.html#unitofwork_sql
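
As a rough sketch of what that section describes (the column name and
value are made up; the session and f_table are the ones from your
example), running the insert through session.execute() keeps it on the
same connection and transaction as the ORM queries:

transaction = session.begin()
session.execute(f_table.insert(), {'some_column': 'some_value'})
stuff = session.query(Path).all()
transaction.commit()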

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: fully qualified column names in RowProxy

2007-11-27 Thread King Simon-NFHD78

It may not help in your situation, but did you know that you can also
index the row with the Column instances themselves

Ie: row[table.c.column]
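
A tiny sketch (made-up tables t1/t2, each with an 'id' column and t2 with
a t1_id foreign key) of why Column objects make convenient keys when two
selected columns share a name:

oj = t1.join(t2, t1.c.id == t2.c.t1_id)
row = select([t1.c.id, t2.c.id], from_obj=[oj]).execute().fetchone()
print row[t1.c.id], row[t2.c.id]   # no ambiguity over the string key 'id'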

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Yuri Pimenov
> Sent: 27 November 2007 16:31
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: fully qualified column names in RowProxy
> 
> 
> I've found 'use_labels' parameter in select(). Everything is good but
> it uses _ as separator between table and column. That makes things
> not so easy because it is common practice to use _ in table and column
> names.
> Why not to use . as separator?
> 
> My other blue sky dream is about ORM and unicode column names 
> (python3.0?).
> 
> On 27/11/2007, Michael Bayer <[EMAIL PROTECTED]> wrote:
> >
> >
> > On Nov 27, 2007, at 3:04 AM, icct wrote:
> >
> > >
> > > Hi.
> > >
> > > Is there a way to make RowProxy use fully qualified 
> column names like
> > > "table.column"
> > > as keys?
> > > Even if i explicitly label column with "as" in query, 
> RowProxy chops
> > > off anything till last
> > > dot in label.
> > >
> >
> >
> > the reason for that is because in sqlite, if you say 
> "SELECT t2.col3,
> > t2.col4 FROM t2", cursor.description reports the column names as
> > u't2.col3', u't2.col4', instead of 'col3' and 'col4' like 
> every other
> > DBAPI would...so we truncate against the dot.
> >
> > I did a little testing and I'm not even sure how you'd even make a
> > label with a dot in it;  all three of sqlite, postgres, and mysql
> > raise a syntax error if you try to say "select colname AS
> > tablename.colname from tablename", so thats a pretty decent 
> clue that
> > its not supported by SQL.
> Quote it.
> 
> -- 
> Yuri Pimenov
> 
> > 
> 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: access mapped object attributes

2007-11-06 Thread King Simon-NFHD78

You may be interested in an older thread, 'How to get list of
relations':

http://groups.google.com/group/sqlalchemy/browse_thread/thread/ff03af921eb12acb/861597e8a72f5e6f
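
The short version (a sketch assuming Message is a mapped class): the
mapper's iterate_properties yields every ORM-level property, relations
included, and prop.key gives the attribute name.

from sqlalchemy.orm import class_mapper

for prop in class_mapper(Message).iterate_properties:
    print prop.key, type(prop).__name__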

Simon 

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of Christophe Alexandre
Sent: 06 November 2007 15:59
To: sqlalchemy@googlegroups.com
Subject: [sqlalchemy] Re: access mapped object attributes


Hi,

I am also interested in retrieving all the attributes resulting from the
ORM.

The loop on '.c' will list only the database columns. Is there a way to
list the object attributes?

Thanks a lot for your help,

Chris

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of Paul Johnston
Sent: Monday, November 05, 2007 8:45 PM
To: sqlalchemy@googlegroups.com
Subject: [sqlalchemy] Re: access mapped object attributes


Hi,

>Given a Message object, do I have a way to retrieve all the attributes
>that result from the database mapping? 
>
Try this:

for col in Message.c:
...

Paul




--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Please add some __len__ methods

2007-10-22 Thread King Simon-NFHD78


> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of klaus
> Sent: 22 October 2007 11:33
> To: sqlalchemy
> Subject: [sqlalchemy] Please add some __len__ methods
> 
> 
> Hi all,
> I wonder why some classes/objects implement part of a list interface -
> but without a __len__ method. Obvious examples are:
> 
> Query has __iter__ and __getitem__, both of which access the database.
> __len__ would be a nice alternative to a basic count().
> 
> Session has __iter__ and __contains__. __len__ could be used to
> indicate when the session gets too large and should be cleared.
> 
> I already suggested this some months ago but attracted no
> attention. ;-)
> 
> Best regards
>   Klaus
> 

As far as Query is concerned, SQLObject used to have this feature, but
they removed it because it was too easy for it to be called implicitly:

http://www.sqlobject.org/News.html#id31 

For example:

>>> class a(object):
...   def __len__(self):
... print "__len__"
... return 5
...   def __iter__(self):
... print "__iter__"
... return iter([1, 2, 3, 4, 5])
...
>>>
>>> b = a()
>>> list(b)
__iter__
__len__
[1, 2, 3, 4, 5]
>>>

So converting a Query to a list would generate an unnecessary COUNT
query. (I'm not sure why __iter__ gets called before __len__ - I would
have guessed it would be the other way round)

I can't think of an argument for not having __len__ on Session though.

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: one-to-many access and modification

2007-10-02 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Ken Pierce
> Sent: 29 September 2007 00:21
> To: sqlalchemy
> Subject: [sqlalchemy] one-to-many access and modification
> 
> 
> Hello all,
> 
> I just got into using sqlalchemy today and I have a question about
> accessing data in a one-to-many relationship.
> 
> I've got a simplified example below. Basically the system is for a
> group of us voting on DVDs we want to rent. There are users and films
> and for each film a user makes a yes / no choice (where ? is
> undecided).
> 
> # create tables
> users_table = Table('users', metadata,
> Column('id', Integer, primary_key=True),
> Column('name', String(40)),
> mysql_engine='InnodB')
> 
> films_table = Table('films', metadata,
> Column('fid', Integer, primary_key=True),
> Column('title', String(128)),
> mysql_engine='InnodB')
> 
> choices_table = Table('choices', metadata,
> Column('fid', Integer, ForeignKey('films.fid'), primary_key=True),
> Column('id', Integer, ForeignKey('users.id'), primary_key=True),
> Column('choice', MSEnum("'?'","'Y'","'N'")),
> mysql_engine='InnodB')
> 
> # classes here
> class User(object):
> ...
> class Film(object):
> 
> class Choice(object):
> 
> 
> # mappers
> mapper(User, users_table)
> mapper(Film, films_table, properties={'choices':relation(Choice)})
> mapper(Choice, choices_table)
> 
> So, if I retrieve a Film object f, f.choices gives me a list of Choice
> objects. From that Film object, I want to look up (and possibly
> modify) a users choice.
> 
> My question is, is there a way to set up this relationship so that I
> could access a users choice like a dictionary (either with a user id
> or User object), or do I have to write a method to search the list and
> return the object -- in which case, would the changes be reflected in
> the session / database?
> 
> Thanks in advance,
> Most impressed so far,
> 
> Ken Pierce.
> 

You may be interested in the 'Custom List Classes' section of the
documentation:



If you are using 0.4, the mechanism has changed slightly:



Regardless of whether you do this or not, if you wrote a method that
searched the list of choices and returned the appropriate one, the
changes should always be reflected in the session.

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] LIMIT syntax in old versions of MySQL

2007-09-20 Thread King Simon-NFHD78

Hi,

The ancient version of MySQL that I am connecting to (3.23.58) doesn't
support the 'LIMIT <row_count> OFFSET <offset>' syntax. Instead, it
uses 'LIMIT <offset>, <row_count>'. This is described in the docs, but they
don't say which version introduced the more standard syntax:

http://dev.mysql.com/doc/refman/5.0/en/select.html#id3376456

I'm currently monkeypatching MySQLCompiler to use 'LIMIT <offset>,
<row_count>'. Would it make sense for MySQLCompiler to be changed to always
use that syntax, since it is supported in more MySQL versions than the
standard?

Thanks,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Auto load problem with SQLAlchemy 0.3.10 and mySQL

2007-09-20 Thread King Simon-NFHD78

Noufal wrote:
> 
> 
>   I create two tables like so
> run_table = sa.Table('runs',md,
>  sa.Column('rid', sa.Integer,
> primary_key=True),
>  sa.Column('cmdline', sa.String(250)),
>  sa.Column('hostname', sa.String(20)),
>  sa.Column('workdir', sa.String(250)),
>  sa.Column('incremental', sa.Boolean),
>  sa.Column('user', sa.String(20)),
>  sa.Column('starttime', sa.TIMESTAMP),
>  sa.Column('endtime', sa.TIMESTAMP),
>  sa.Column('status',sa.String(20)),
>  sa.Column('machinetype',sa.String(20))
>  )
> run_table.create()
> 
> stats_table = sa.Table('stats',md,
>  
> sa.Column('sid',sa.Integer,primary_key=True),
>  
> sa.Column('rid',sa.Integer,sa.ForeignKey('runs.rid')),
>sa.Column('stagename',sa.String(50)),
>  
> sa.Column('description',sa.String(250)),
>sa.Column('starttime',sa.TIMESTAMP),
>sa.Column('endtime',sa.TIMESTAMP))
> stats_table.create()
> 
> Then I can actually use these tables.
> However, if I autoload them like so.
>run_table = sa.Table('runs', md, autoload=True)
>stats_table = sa.Table('stats', md, autoload=True)
> (md is the metadata)
> I get an error. The final assertion raised is like so
> 
> sqlalchemy.exceptions.ArgumentError: Error determining primary and/or
> secondary join for relationship 'Run.stages (Stats)'. If the
> underlying error cannot be corrected, you should specify the
> 'primaryjoin' (and 'secondaryjoin', if there is an association table
> present) keyword arguments to the relation() function (or for
> backrefs, by specifying the backref using the backref() function with
> keyword arguments) to explicitly specify the join conditions. Nested
> error is "Can't find any foreign key relationships between 'runs' and
> 'stats'"
> 
> 

The table reflection doesn't seem to be picking up the foreign key
definition from stats.rid to runs.rid. In 0.3.10, SA uses a regular
expression on the output of 'SHOW CREATE TABLE' to find foreign keys
(round about line 1169 of databases/mysql.py). You could run 'SHOW
CREATE TABLE stats' yourself and try and figure out why the regular
expression isn't matching.



However, you can provide your own foreign key definition while still
autoloading the table. Something like this should work:

stats_table = sa.Table('stats', md,
                       sa.ForeignKeyConstraint(['rid'], ['runs.rid']),
                       autoload=True)

Also, I believe mysql table reflection is being completely reworked in
0.4, so the issue might go away.

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Outerjoin with a subset of columns

2007-08-15 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of mc
> Sent: 15 August 2007 14:23
> To: sqlalchemy
> Subject: [sqlalchemy] Outerjoin with a subset of columns
> 
> 
> Hi,
> 
> Note the following code:
> 
> oj=outerjoin(s,f)
> r1=select([oj]).execute().fetchall()
> r2=select([oj.left.c.id, oj.right.c.status]).execute().fetchall()
> 
> The first select executes a LEFT OUTER JOIN and returns all columns.
> I wanted only 2 specific columns, so I tried the 2nd select.
> That does not execute a LEFT OUTER JOIN, though.
> 
> How do I achieve my goal?
> 

You want to use the 'from_obj' parameter for select. Something like:

select([s.c.id, f.c.status], from_obj=[oj])

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Hierachical data

2007-08-03 Thread King Simon-NFHD78

Hi,

I think this is a pretty good match for Joined Table polymorphic
inheritance, as described in the docs. Your nodes_table would correspond
to the employees table, and the 'subclass' tables such as video correspond
to the engineers table. Your foreign key problem is solved by giving the
video table a foreign key pointing back to the nodes table, instead of the
other way round.
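
A minimal sketch of that joined-table arrangement, reusing names from
your message (columns trimmed, so treat the details as illustrative):

from sqlalchemy import Table, Column, Integer, Unicode, ForeignKey, MetaData
from sqlalchemy.orm import mapper

meta = MetaData()

nodes_table = Table('nodes', meta,
    Column('id', Integer, primary_key=True),
    Column('typeof', Unicode(20), nullable=False))

videos_table = Table('videos', meta,
    # the foreign key points from the subclass table back to nodes
    Column('id', Integer, ForeignKey('nodes.id'), primary_key=True),
    Column('file_name', Unicode(255), nullable=False))

class Node(object): pass
class Video(Node): pass

mapper(Node, nodes_table,
       polymorphic_on=nodes_table.c.typeof,
       polymorphic_identity='node')
mapper(Video, videos_table, inherits=Node, polymorphic_identity='video')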

I think of adjacency lists and nested sets as more about hierarchies of
a single type of object (imagine trying to represent a family tree with
a row for each Person). I don't really think they're relevant in this
case.

Hope that helps,

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Alexandre CONRAD
> Sent: 03 August 2007 10:44
> To: sqlalchemy
> Subject: [sqlalchemy] Hierachical data
> 
> 
> Hello,
> 
> this is more a general database question rather than a SA 
> one, but maybe 
> someone has already achieve this with SA. I know there's a 
> small chapter 
> about this already in the SA docs (1), but as far as I 
> understood, this 
> is about the "Adjacency List" model and not about the "Nested Set" 
> model. This article (2) explains about both approches and the "Nested 
> Set" model would suite better my needs I think.
> 
> Maybe this should need some attention to implement in SA some API to 
> handle nodes (insert, move, remove) of herachical trees in SA the 
> "Nested Set" way.
> 
> Right now I have a playlist management application that is 
> XML-based. So 
> a playlist has many nodes that either represent media 
> elements ("video", 
> "image") or classification elements ("group"):
> 
> 
>intro.avi
>
>  foo.avi
>  bar.avi
>
>
>  weather_intro.avi
>  morning.jpg
>   stop="2359">afternoon.jpg
>  brought_to_you_by.avi
>  
>brand_A.avi
>brand_B.avi
>brand_C.avi
>  
>
>outro.avi
> 
> 
> You can understand that the playlist has the ability to store 
> different 
> types of medias (images, videos), some group elements used for 
> classification, can apply some scheduling data to any elements as 
> attributes.
> 
> So I'm looking foward to integrate this hierarchical tree in a flat 
> database.
> 
> Basicly, I'd have a "playlist" to hold the main playlist:
> 
> playlist_table = Table('playlists', meta,
>  Column('id', Integer, primary_key=True),
>  Column('name', Unicode(20), nullable=False),
>  Column('description', Unicode(40)),
> )
> 
> Then an "nodes" table to store every type of nodes (group, 
> video, image) 
> and some scheduling attributes.
> 
> nodes_table = Table('nodes', meta,
>  Column('id', Integer, primary_key=True),
>  Column('typeof', Unicode(20), nullable=False),
>  Column('foreign_id', Integer),
>  Column('lft', Integer, nullable=False), # For "Nested Set" model
>  Column('rgt', Integer, nullable=False), # For "Nested Set" model
>  Column('scheduled_from', DateTime),
>  Column('scheduled_until', DateTime),
>  Column('id_playlist', None, ForeignKey('playlists.id')),
> )
> 
> Every row would store "group", "video" or "image" in the 
> 'typeof' field 
> to identify the type of node. And the id of the media would be in the 
> 'foreign_id' field. 'typeof' and 'foreign_id' are the two 
> field I don't 
> feel clean about.
> 
> Lets finish with the "videos" table for the example:
> 
> videos_table = Table('videos', meta,
>  Column('id', Integer, primary_key=True),
>  Column('file_name', Unicode(255), nullable=False),
>  Column('category', Unicode(255), nullable=False),
>  Column('duration', Integer),
>  Column('id_codec', None, ForeignKey('codecs.id')),
> )
> 
> As the "nodes" table would need to point to different tables 
> to find the 
> corresponding media, I couldn't have a foreign key for a natural 
> relation. Or I would need to have as many foreign keys in the "nodes" 
> table (id_video, id_image, ...) and only fill in the nedded 
> column. But 
> this is not efficient.
> 
> The whole idea is about browing down my tree through python objects 
> attributes like SA already does so well when two tables are 
> related to 
> each other, but returning a different type of object depending on the 
> kind of node I hit.
> 
> class Node(object):
>  ...
> class Video(Node):
>  ...
> class Image(Node):
>  ...
> 
> Any ideas one how I could deal with that would be greatly 
> appreciated. I 
> hope this is clear enough to not scare people from reading 
> it. At least 
> if you got there, it's a good point. :)
> 
> ps: I just felt on the docs about "Concrete Inheritance, Polymorphic" 
> (3), maybe this could be something to dig for ?
> 
> (1) 
> http://www.sqlalchemy.org/docs/documentation.html#advdatamappi
> ng_selfreferential
> (2) 
> http://www.vbmysql.com/articles/database-design/managing-hiera
> rchical-data-in-my

[sqlalchemy] Re: ConcurrentModificationError: Updated rowcount 0 does not match number of objects updated 1

2007-07-31 Thread King Simon-NFHD78
databases/mysql.py has this snippet in create_connect_args (0.3.10):
 
# FOUND_ROWS must be set in CLIENT_FLAGS for to enable
# supports_sane_rowcount.
client_flag = opts.get('client_flag', 0)
if self.dbapi is not None:
try:
import MySQLdb.constants.CLIENT as CLIENT_FLAGS
client_flag |= CLIENT_FLAGS.FOUND_ROWS
except:
pass
opts['client_flag'] = client_flag

So CLIENT.FOUND_ROWS is a constant that should be passed as part of the
'client_flag' options to MySQLdb's connect method. I don't know why you
didn't need this before, though.
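
For example, a rough sketch of a creator function that sets the flag
(connection parameters are made up; FOUND_ROWS makes rowcount report
matched rows, which is what SA compares against):

import MySQLdb
from MySQLdb.constants import CLIENT

def creator():
    return MySQLdb.connect(host='localhost', db='mydb',
                           user='me', passwd='secret',
                           client_flag=CLIENT.FOUND_ROWS)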
 
Hope that helps,
 
Simon





From: sqlalchemy@googlegroups.com
[mailto:[EMAIL PROTECTED] On Behalf Of Arun Kumar PG
Sent: 31 July 2007 09:47
To: sqlalchemy@googlegroups.com
Subject: [sqlalchemy] Re: ConcurrentModificationError: Updated
rowcount 0 does not match number of objects updated 1


The MySQLdb library has CLIENT.FOUND_ROWS = 2. What value
ideally it should have ?

I am still now clear why this problem was not coming in the
earlier SA version!


On 7/31/07, Arun Kumar PG <[EMAIL PROTECTED]> wrote: 

I am using pool.QueuePool for managing connections and
this pool is fed with a creator function which returns an instance of my
own custom DBAPI class (I need this because of some logging stuff that I
am doing). This custom DBAPI class returns a Connection object returned
by MySQLdb.connect.

>>> The FOUND_ROWS client flag must be enabled on MySQL
connections to make rowcount return what you (and SA) are expecting.

This has been enabled recently on SA 3.9 ? because the
previous version was working fine. 




On 7/31/07, jason kirtland < [EMAIL PROTECTED]
 > wrote: 


Arun Kumar PG wrote:
> Looks like the problem is coming because of
the fact when we are
> updating a row in table with the same data the
rowcount returned by
> mysql is 0. Only when there is a change in
data the rowcount is returned. 

Are you creating connections outside of
SQLAlchemy?  (I seem to recall
you were using a custom creator function.)  The
FOUND_ROWS client flag
must be enabled on MySQL connections to make
rowcount return what you 
(and SA) are expecting.







-- 
Cheers,

- A 




-- 
Cheers,

- A




--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: not updated relation one-to-many

2007-07-27 Thread King Simon-NFHD78

The problem is that when you run your last query, SA sees that there is
still an existing T1 object (with the right primary key) in memory (both
in the session and in your 'a1' and 'ra1' variables). It deliberately
doesn't update them.

To get the behaviour you expect, you need to get rid of previous
references to your objects. Try adding these lines before the last
query:

# Clear the session
s.clear()
# Get rid of local references to T1 and T2 instances
del a1, a2, a22, ra1

This behaviour is documented here:

http://www.sqlalchemy.org/docs/unitofwork.html#unitofwork_identitymap

You may also be interested in the refresh/expire methods of session:

http://www.sqlalchemy.org/docs/unitofwork.html#unitofwork_api_refreshexpire

Hope that helps,

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Michal Nowikowski
> Sent: 27 July 2007 13:08
> To: sqlalchemy
> Subject: [sqlalchemy] not updated relation one-to-many
> 
> 
> Hello
> 
> I've following problem. I've two tables (t1, t2) and relation t1 (one)
> - t2 (many):
>   mapper(T1, t1, properties={"t2s": relation(T2, lazy=False)})
>   mapper(T2, t2, properties={"t1": relation(T1, lazy=False)})
> 
> When I add row to t1, then to t2, and then run query for first row in
> t1, I see one
> element in collection t2s - it is ok.
> Then when I add second row to t2, the collection in t1 object is not
> updated.
> It still contains only one element.
> 
> Example below.
> 
> Could you tell me how to refresh collection in one-to-many relation???
> 
> Regards
> Michal Nowikowski
> 
> 
> from sqlalchemy import *
> 
> md = MetaData('sqlite:///a.db', echo=False)
> t1 = Table("t1", md,
>Column("id", Integer, primary_key=True),
>Column("version", Integer))
> 
> t2 = Table("t2", md,
>Column("id", Integer, primary_key=True),
>Column("name", String),
>Column("t1_id", Integer, ForeignKey("t1.id")))
> 
> md.create_all()
> s = create_session()
> 
> class T1(object):
> pass
> class T2(object):
> pass
> 
> mapper(T1, t1, properties={"t2s": relation(T2, lazy=False)})
> mapper(T2, t2, properties={"t1": relation(T1, lazy=False)})
> 
> a1 = T1()
> s.save(a1)
> s.flush()
> 
> a2 = T2()
> a2.t1_id = a1.id
> a2.name = "AAA"
> s.save(a2)
> s.flush()
> 
> ra1 = s.query(T1).first()
> print [ a.name for a in ra1.t2s ]
> 
> a22 = T2()
> a22.t1_id = a1.id
> a22.name = "BBB"
> s.save(a22)
> s.flush()
> 
> rra1 = s.query(T1).first()
> print [ a.name for a in rra1.t2s ]
> 
> 
> > 
> 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: lazy table creation

2007-07-27 Thread King Simon-NFHD78

You want something like this:

user_table = Table('users', metadata, Column('userid', String(8)))
user_table.create(checkfirst=True)
# or 
# metadata.create_all(checkfirst=True)

Documentation is at:

http://www.sqlalchemy.org/docs/metadata.html#metadata_creating

Hope that helps,

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of samwyse
> Sent: 27 July 2007 13:23
> To: sqlalchemy
> Subject: [sqlalchemy] lazy table creation
> 
> 
> I've looked and looked, but can't find any guidance on this.  I want a
> program that, the first time it's run, creates its tables.  I'm
> guessing that I need something like this:
> 
> try:
>   user_table = Table('users', metadata, autoload=True)
> except NoSuchTableError:
>   user_table = Table('users', metadata, Column('userid', String(8)))
> 
> OTOH, I can't find any examples of anyone doing things this way.  Is
> there another way that I'm missing?  Thanks!
> 
> 
> > 
> 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Model to Dictionary

2007-07-24 Thread King Simon-NFHD78

What do you mean by a model? If you are talking about an instance of a
mapped class, you could try something like this (untested):

def model_to_dict(instance):
model_dict = {}
for propname in instance.mapper.props:
model_dict[propname] = getattr(instance, propname)
return model_dict

Note that if you have relations to other mapped classes this will put
instances of those other classes in the dict. If you don't want that,
you could check to see if the result of the getattr is a mapped class
and call the function recursively. But you'd probably also need to check
for circular references, and treat x-to-many relationships specially.
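
A variant that goes through object_mapper and iterate_properties (the
property API used in the other threads in this archive) would look
roughly like this, equally untested:

from sqlalchemy import orm

def model_to_dict(instance):
    # Build a {property name: value} dict from any mapped instance.
    mapper = orm.object_mapper(instance)
    return dict((prop.key, getattr(instance, prop.key))
                for prop in mapper.iterate_properties)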

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of HiTekElvis
> Sent: 23 July 2007 23:31
> To: sqlalchemy
> Subject: [sqlalchemy] Model to Dictionary
> 
> 
> Anybody know a way to change a model into a dictionary?
> 
> For those to whom it means anything, I'm hoping to pass that
> dictionary into a formencode.Schema.from_python method.
> 
> Any ideas?
> 
> -Josh
> 
> 
> > 
> 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Creating where clauses programatically

2007-07-06 Thread King Simon-NFHD78


SQLAlchemy Column objects override the '==' operator (and many others as
well) so that python expressions get converted to appropriate SQL
expressions.
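
A small sketch (made-up 'users' table with 'name' and 'age' columns): each
== produces a SQL clause object rather than True/False, so and_() receives
clause elements that compile into the WHERE clause.

from sqlalchemy import and_

criteria = {'name': 'bob', 'age': 30}
where = and_(*[users.c[key] == value for key, value in criteria.items()])
users.update(where).execute(age=31)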

It's mentioned (briefly) here:

http://www.sqlalchemy.org/docs/sqlconstruction.html#sql_whereclause
 

Hope that helps,

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of mc
> Sent: 06 July 2007 09:38
> To: sqlalchemy
> Subject: [sqlalchemy] Re: Creating where clauses programatically
> 
> 
> I must be missing something
> Doesn't that evaluate the == expressions and result in an expression
> like and_(True, True, False, ...) ?
> 
> 
> On Jul 4, 1:04 am, Michael Bayer <[EMAIL PROTECTED]> wrote:
> > On Jul 3, 4:06 pm, mc <[EMAIL PROTECTED]> wrote:
> >
> > > I managed to avoid this issues because for select and 
> insert I can use
> > > dictionaries as arguments of execute().
> > > For update, I must create a where clause from a list of 
> conditions I
> > > have as a dictionary.
> > > How do I do this programatically?
> >
> > and_(*[table.c[key]==mydict[key] for key in mydict])
> 
> 
> > 
> 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: advanced mapping help

2007-06-25 Thread King Simon-NFHD78

Can't you define the ContentCollection mapper without any reference to
Content, and then when you define the Content mapper, you can set up two
relations to ContentCollection with appropriate backrefs?

If that doesn't work, you can add properties to a mapper after it has
been defined, so you can create the basic mappers without the relations,
and then say something like:

class_mapper(Content).add_property('collection_children',
                                   relation(ContentCollection,
                                            backref='content'))

Simon

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of Matt
Sent: 23 June 2007 19:51
To: sqlalchemy
Subject: [sqlalchemy] Re: advanced mapping help


Thanks I think I had tried something like this before...  Anyway, now
I'm defining the relation from Content to ContentCollection and
separately from ContentCollection back to Content, but I'm running
into the chicken and the egg problem...  The mapper expects a class,
but I can't define Content before ContentCollection and
ContentCollection before Content...

thx

m

On Jun 22, 11:32 pm, [EMAIL PROTECTED] wrote:
> u may need an explicit intermediate association object, see the docs
> about "many to many" relations.
> Instead of directly getting the children, u'll get the associations
> via which u can get the children, or whatever attributes of the link.
> Then if u want a direct children-list, u can make some python property
> to do that i.e. return [x.child for x in me.children_links].
>
> > Hi, I have a relation I'm mapping through an intermediate table and
> > it seems to work fine, it looks something like:
>
> > Content -> ContentCollection -> Content
>
> > 'collection_children':relation(Content,
> >secondary = content_collection,
> >primaryjoin   = content.c.id ==
> > content_collection.c.collection_id,
> >secondaryjoin =
> > content_collection.c.content_id == content.c.id,
> >order_by  =
> > content_collection.c.priority,
> >   ),
>
> > The trouble is, I want to be able to set a couple fields on the
> > ContentCollection table, but I don't actually have a property for
> > those fields on the Content object anywhere since I'm mapping
> > through that table...  Any ideas on how I should handle this?
>
> > thx
>
> > Matt




--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---


