[sqlalchemy] Re: Using assoc proxy with a regular field, help?

2007-02-06 Thread Roger Demetrescu

Hi iain

On 2/5/07, iain duncan [EMAIL PROTECTED] wrote:
 Below is my code in case someone has time to look at it. At the moment it 
 works to get
 article.ordering as a list instead of one field. Is there a way to tell it 
 that this is only supposed to be one item?

uselist is your friend..  :)

Try this:


# mapper for PageArticleAssoc
mapper( PageArticleAssoc, page_article_table,
    primary_key=[ page_article_table.c.page_id,
                  page_article_table.c.article_id ],
    properties={
        'article' : relation(Article, lazy=False, uselist=False),
        'page' : relation(Page, lazy=False, uselist=False),
    }
)

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: new setuptools vs local SA copy

2007-02-06 Thread King Simon-NFHD78

Rick Morrison wrote:
 
 I keep two versions of SA installed here, one is a stable 
 version installed in the Python site-packages folder, and one 
 is current trunk with some local patches for testing.
 
 I used to be able to run tests and programs using the local 
 version by just inserting the local directory into the Python 
 path, and imports would then use that.
 
 I've recently upgraded to setuptools 0.6c5 and that doesn't 
 seem to work anymore -- I now always get the version from the 
 site-packages folder.
 
 Anyone running this kind of configuration out there run into 
 something like this?
 

The way I've done this is to run 'python setup.py develop' in the SVN
checkout. This puts the path to the checkout in easy-install.pth, and it
also creates an SQLAlchemy.egg-link file with the same path - I don't
know what this is used for.

To go back to the stable version I run 'easy_install -U SQLAlchemy'.
This seems to work on both Windows and Linux, but I am only on
setuptools 0.6c3.

This is probably more complicated than it needs to be - I would have
thought you can switch just by editing the easy-install.pth file.

The correct way is probably to use setuptools' --multi-version switch,
and put pkg_resources.require() somewhere in your application, but I've
not used that yet.

Another thing that I've found very useful (on Linux) is this:

 
http://peak.telecommunity.com/DevCenter/EasyInstall#creating-a-virtual-python

Particularly with fast-moving projects like SQLAlchemy and TurboGears,
trying to share a single copy of a library between multiple applications
without breaking them every time I upgraded the library was getting
tricky. There's also working-env:

 http://blog.ianbicking.org/workingenv-update.html

which I haven't tried yet, but has the advantage of working on Windows
(apparently).

Hope that helps,

Simon




[sqlalchemy] Re: SelectResults, counts and one-to-many relationships

2007-02-06 Thread King Simon-NFHD78

Michael Bayer wrote:
 
 I added distinct() to selectresults as a method and made the 
 unit test a little clearer (since i dont like relying on the 
 selectresults mod)...
 
 q = sess.query(Department)
 d = SelectResults(q)
 d = d.join_to('employees').filter(Employee.c.name.startswith('J'))
 d = d.distinct()
 d = d.order_by([desc(Department.c.name)])
 
 

...and...

 
 for the order by getting removed during the select, that 
 seemed to be an optimization that got stuck in there and 
 since this is a really fringe use case its never come up, so 
 i removed it and added your test case (only with 
 distinct=True) in rev 2301.
 

I think you're slipping - I had to wait a whole three and a half hours
for this fix ;-) Seriously, thanks again,

Simon




[sqlalchemy] inconsistent results between 2 testsuite executions (SA 0.3.4)

2007-02-06 Thread sagblmi

When I run the testsuite more than once, the results are different.
This strange behavior disappears when the testsuite is launched
with Python in optimize mode ('python -OO alltest.py').
This is particularly true with orm.inheritance5.

Very, very strange, and very much a blocker, since we can't guarantee the
result.

Any ideas?





[sqlalchemy] just how lazy are backrefs?

2007-02-06 Thread Marco Mariani

I'm working inside TurboGears, autoload everything.
Tested with 0.3.3 and Trunk.


I find this strange behaviour: when I declare a relation between a
Person and a Contract:

assign_mapper(context, Contract, tbl['contracts'], properties = {
    'responsible' : relation(Person,
        backref='contracts_responsible',
        # ambiguous... do an explicit join
        primaryjoin=(tbl['contracts'].c.uid_responsible==Person.c.uid)),
    [...]
)


Now, if I start the console, then run...

john = Person.get('johndoe')
john.contracts_responsible

gives me

AttributeError: 'Person' object has no attribute 'contracts_responsabile'


_but_ the very moment I load any contract:

cont = Contract.get('somecode')

_then_ john.contracts_responsible starts working and returns the list of
contracts that john is responsible for.

The same happens with any backref... i'm puzzled  O_o


If this is not a known feature/bug, I can try and reproduce it in a
standalone project


Thanks





[sqlalchemy] Re: just how lazy are backrefs?

2007-02-06 Thread Marco Mariani

Marco Mariani wrote:
 john.contracts_responsible

 gives me

 AttributeError: 'Person' object has no attribute 'contracts_responsabile'
   
this is a cut-n-translate-n-paste typo, should read
contracts_responsible of course






[sqlalchemy] py2exe sqlalchemy

2007-02-06 Thread Something Special

How to py2exe it?




[sqlalchemy] Re: got problems where map to selectable

2007-02-06 Thread Sébastien LELONG

If you put a refresh just before your assert, it works:

...
user = session.query(User).selectone_by(name='user1')
session.refresh(user)  # here!
assert user.post_count == 2, 'user.post_count is %s' % user.post_count

Not sure I understand why, though... I bet it's because of SA's cache
(identity_map). Since the user was already loaded before, SA doesn't
re-perform the SQL access to the db. You can see it by activating
engine.echo = True and selecting users by id (e.g. session.query(User).get(1)).
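The caching effect can be illustrated without SA at all. Below is a hypothetical sketch using the stdlib sqlite3 driver with a plain dict standing in for the identity map; none of these names come from SA:

```python
import sqlite3

# Hypothetical sketch: a dict cache playing the role of SA's identity map.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, post_count INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 0)")

cache = {}  # "identity map": one cached object per primary key

def get_user(uid):
    # only touch the database on a cache miss
    if uid not in cache:
        row = conn.execute(
            "SELECT post_count FROM users WHERE id = ?", (uid,)).fetchone()
        cache[uid] = {"id": uid, "post_count": row[0]}
    return cache[uid]

user = get_user(1)
conn.execute("UPDATE users SET post_count = 2 WHERE id = 1")  # change behind the cache's back
print(get_user(1)["post_count"])  # -> 0: stale, served from the cache

def refresh(uid):  # rough analogue of session.refresh(user)
    cache.pop(uid, None)
    return get_user(uid)

print(refresh(1)["post_count"])  # -> 2: a fresh read sees the update
```

The second SELECT only happens because the cached entry was explicitly dropped, which is what refresh() forces.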

Hope it helps.

Cheers,

Seb
-- 
Sébastien LELONG
sebastien.lelong[at]sirloon.net




[sqlalchemy] Re: Inspect and __init__ of a mapped class

2007-02-06 Thread Michael Bayer

sure, im not super thrilled with this whole issue since id rather
__init__ is just left alone, but the whole auto-add-to-the-session
behavior is pretty popular...so just submit a ticket/patch for
whatever version you want there.

On Feb 6, 7:29 am, Patrick Lewis [EMAIL PROTECTED] wrote:
 On Feb 5, 4:21 pm, [EMAIL PROTECTED] wrote:

   oh right, old_init(), sorry, didnt read the first post carefully.

   i almost think the answer here might be to not even create the
   modified __init__()/old_init() method in the general case...if
   youre calling session.save(someobject) its not really needed.

   but for now...maybe class.__init__.func_globals['oldinit'] ?

  nah, i tried that, that is the globals of the module-level; while the
  locals that are inside the func... are in the .func_code. And
  especially the outside-scope locally-bound references are
  the .func_code.co_freevars - but names only.

 That's what I'm seeing as well.



  one way is to expose class._old__init__, or if that is not acceptable
  (which is understandable), to completely copy not only name & doc,
  but also func_defaults and the args/kwargs-names from .code - see
  inspect. But i dont think the .func_code attributes are writable, so
  they have to go under the function itself.
  A really sneaky way would be to make/fake a new code obj - well, the
  doc says not for faint at heart (-:

  now i am completely offtopic.
  ciao

 If any of the following were to be included at some point, I would be
 very happy:

 1- class_._old_init = oldinit
 2- class_.__init__._sa_oldinit = oldinit
 3- class_._old_init_argspec =  inspect.getargspec(oldinit)
 4- class_.__init__._sa_old_init_argspec = inspect.getargspec(oldinit)

 #2 or #4 could even serve to replace the functionality of
 __init__._sa_mapper_init (indicating that the class has already been
 initialized)

 I'd submit a patch if Michael indicates that one of them might be
 accepted.





[sqlalchemy] Re: py2exe sqlalchemy

2007-02-06 Thread Oleg Deribas

Hello,

Something Special said the following on 06.02.2007 18:07:

 How to py2exe it?

1. Install SQLAlchemy with --always-unzip
2. Rename sqlalchemy's logging.py to log.py
3. Search/replace 'logging' with 'log' in all of sqlalchemy's sources,
except in log.py itself

-- 
Oleg





[sqlalchemy] Re: just how lazy are backrefs?

2007-02-06 Thread Michael Bayer

yeah, im not sure if i can fix that behavior: your call to
Person.c.uid is compiling the Person mapper before the Contract mapper
has been created.   therefore when you call Person.get(), no mappers
compile and Contract remains uncompiled, and your
contracts_responsible attribute remains uninitialized.

so you need to either not reference Person.c ahead of time, or call
Contract.mapper.compile(), or use 0.3.4 and call compile_mappers()
to compile all mappers.






[sqlalchemy] Re: inconsistent results between 2 testsuite executions (SA 0.3.4)

2007-02-06 Thread svilen

i mentioned this dictionary-order on my ramblings some weeks ago; then 
u didnt take it into account.

i've managed here to get repeatable testing and SQL generation with 
hack-replacing many {} with ordered ones, but i don't like the 
solution.

Why not just use util.Dict (defaulting to dict) instead of {}?

Then whoever wants, replaces that with Ordered one and voila, all 
things get repeatable, if a little slower.


 assuming this is ticket 461 which has been fixed and some extra
 testing options added to catch these better.

 if youre wondering about random, its because there is more than
 one way to produce a topological sort for many partial
 orderings...so when the initial "A is dependent on B, B is
 dependent on C" information sent to the sorter is incorrect, as it
 was here, it will produce errors only if the resulting sort happens
 to turn out in a certain way, which is generally based on python's
 dictionary ordering for that particular run.  things like -OO and
 such will change the ordering you get from dicts.

 as far as blocking, its definitely not time to use SA in any kind
 of avionics or medical devices...id wait another 25 years for that.

 On Feb 6, 6:37 am, sagblmi [EMAIL PROTECTED] wrote:
  When I run the testsuite more than once, the results are different.
  This strange behavior disappears when the testsuite is
  launched with Python in optimize mode ('python -OO alltest.py').
  This is particularly true with orm.inheritance5.
 
  Very, very strange, and very much a blocker, since we can't
  guarantee the result.
 
  Any ideas?

 




[sqlalchemy] Updating a column to an expression containing itself; Inserting data by lists or tuples

2007-02-06 Thread S.R. Larmes


Hi,

I'm trying to get into SQLAlchemy and it has been really great so far,
but there are two things I can't figure out how to do:

(both are outside of ORM)

a) How do I do:

UPDATE sometable SET value = value * 0.9 + 1 [ + optional WHERE
clause ] ?

I've tried experimenting with .update(values=..) or stuff
like .execute(value=sometable.c.value*0.9+1) but it just tries to do
strange things. (SET value=sometable.value * %(sometable_value)s + %
(literal)s ??)

b) I have an array (or tuple) with values that I'd just like to fire
off to insert() which is compiled to work on one column (the other
columns have defaults or pre-set values)... To me, this one ought to
be obvious and intuitive, and maybe I'm just stupid or blind, but I
can't find a way.

(Alternatively, if I have an array such as [(1, 'hi', 'text'), (2,
'hello', 'tuxt'), ...] where the columns match up to the columns in
the table (or a given insert())...)

.execute(*[dict(name=x) for x in array]) # to me, this is inelegant
and horribly roundabout

I'm not raising concern about any speed penalty for having to make a
dict() for every param (100% negligible), it's just that it seems so
unnecessary! (Given Python's good polymorphism and introspection.)

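For reference, the statement that (a) is aiming at can be sanity-checked with the stdlib driver. This is a sketch with a made-up table, written as raw DB-API SQL rather than SA syntax:

```python
import sqlite3

# Sketch (hypothetical table): the SQL that the update() call should generate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO sometable VALUES (?, ?)", [(1, 10.0), (2, 20.0)])

# the column appears on both sides of the assignment, plus an optional WHERE
conn.execute("UPDATE sometable SET value = value * 0.9 + 1 WHERE id = ?", (1,))

new_value = conn.execute(
    "SELECT value FROM sometable WHERE id = 1").fetchone()[0]
print(new_value)  # 10.0 * 0.9 + 1, i.e. 10.0 up to float rounding
# row 2 is untouched, since the WHERE clause excluded it
```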




[sqlalchemy] Re: new setuptools vs local SA copy

2007-02-06 Thread Michael Bayer

as I have like 12 different SA directories which id like to jump
between without going through a distinct install for each one, im a
big fan of PYTHONPATH, and after futilely arguing with PJE that plain
libraries on PYTHONPATH should take precedence over installed
.eggs (he strongly feels that .eggs take precedence in all cases,
although others agree with me), I hacked my Python install to work the
way i wanted it:

put a file "-pythonpath.pth" into your site-packages folder, containing:

import os, sys; sys.__egginsert = len(os.environ.get('PYTHONPATH', '').split(os.pathsep))

this will send a modified path over to the easy-install.pth file which
is the culprit responsible for blowing up your PYTHONPATH.  I continue
to use easy_install normally with no problems.

The SA unit test suite artificially shoves ./lib into sys.path at
startup so that running SA unit tests should generally always use the
local checkout regardless of setuptools getting in the way.
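The shove amounts to something like the following at the top of the test runner (a sketch; the exact path fiddling in SA's suite may differ):

```python
import os
import sys

# Sketch: force the checkout's ./lib ahead of anything in site-packages,
# so "import sqlalchemy" resolves against the checkout first.
checkout_lib = os.path.abspath("./lib")
sys.path.insert(0, checkout_lib)

print(sys.path[0])  # the checkout's lib directory now wins the import search
```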

if people are interested in more organized petitioning of PJE to
change his mind on this behavior, sign me up.  generally people seem
to be unaware of it.  I challenged him to name *any* scenario where an
administrator would want a local-environment-based PYTHONPATH to be
overridden by an application-wide configuration and he didnt reply to
that one.  he sees it as "if youre using .eggs, then you must accept
that PYTHONPATH only points to installation directories, not runtime
directories"...so basically breaking PYTHONPATH's documented behavior
into something repurposed is by design.







[sqlalchemy] Re: inconsistent results between 2 testsuite executions (SA 0.3.4)

2007-02-06 Thread Michael Bayer

On Feb 6, 1:13 pm, svilen [EMAIL PROTECTED] wrote:
 i mentioned this dictionary-order on my ramblings some weeks ago; then
 u didnt take it into account.

because i stop reading rambles after 50-60 words or so, and the stress
of glossing over the rest generally causes me to forget words 30-60 as
well.


 i've managed here to get repeatable testing and SQL generation with
 hack-replacing many {} with ordered ones, but i don't like the
 solution.

 Why not just use util.Dict (defaulting to dict) instead of {}?

 Then whoever wants, replaces that with Ordered one and voila, all
 things get repeatable, if a little slower.

the ordering of the data sent to the sort should not affect the
outcome of the program.  therefore its important to test all kinds of
orderings to ensure that the initial lists of dependencies are
correct.  while you can hardcode the internal datastructures to use a
deterministic ordering, that says nothing about the order in which the
calling application inserts records into the UOW, which will then
change the ordering anyway.

i think my --reversetop solution will be sufficient for now, ive seen
lots and lots of these issues in the past year and a half and a
combination of testing with straight/reverse will pretty much bring
all of them out.
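The many-valid-orderings point is easy to demonstrate with a sketch using Python's stdlib graphlib (which, to be clear, is not what 0.3's sorter uses):

```python
from graphlib import TopologicalSorter

# A and B are independent; C depends on both.  Feed the same dependency
# information in two different "dictionary orderings":
order1 = list(TopologicalSorter({"C": ["A", "B"]}).static_order())
order2 = list(TopologicalSorter({"C": ["B", "A"]}).static_order())

def valid(order):
    # a sort is correct as long as every dependency precedes its dependent
    return order.index("A") < order.index("C") and \
           order.index("B") < order.index("C")

print(valid(order1), valid(order2))  # both True
print(order1 == order2)              # may be False: both answers are right
```

Since A-before-B and B-before-A are equally correct here, any test that relied on one particular total order would pass or fail depending on input ordering, exactly the -OO symptom above.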






[sqlalchemy] Re: Using assoc proxy with a regular field, help?

2007-02-06 Thread iain duncan

On Tue, 2007-06-02 at 06:25 -0200, Roger Demetrescu wrote:
 Hi iain
 
 On 2/5/07, iain duncan [EMAIL PROTECTED] wrote:
  Below is my code in case someone has time to look at it. At the moment it 
  works to get
  article.ordering as a list instead of one field. Is there a way to tell it 
  that this is only supposed to be one item?
 
 uselist is your friend..  :)

Great, that did it. (It's actually on the ordering field that I needed
it, but it worked.) Now, after selecting an article, I can use ordering
directly as a field:

[(a.article_name, a.ordering ) for a in Article.select_by(page_id=2)]

One thing I'm still not clear on is whether I can do an order_by on an
object field, or whether that always needs to be with the select on
table notation like so:

arts = Article.select(page_table.c.page_id==2,
order_by=[page_article_table.c.ordering])

Is there some way to do the above like one of these, but with ordering
added now that ordering is accessible as either
page_article_table.c.ordering or Article.ordering:

arts = Article.select_by(page_id=2) 

arts = Page.get(2).articles   


A big thanks to the makers of association proxy and session context and
assign mapper! 

Thanks
Iain





[sqlalchemy] Re: Updating a column to an expression containing itself; Inserting data by lists or tuples

2007-02-06 Thread S.R. Larmes

Michael Bayer wrote:
 On Feb 6, 1:20 pm, S.R. Larmes [EMAIL PROTECTED] wrote:
 
  a) How do I do:
 
  UPDATE sometable SET value = value * 0.9 + 1[ + optional WHERE
  clause ] ?
 
  I've tried experimenting with .update(values=..) or stuff
  like .execute(value=sometable.c.value*0.9+1) but it just tries to do
  strange things. (SET value=sometable.value * %(sometable_value)s + %
  (literal)s ??)

 that looks correct to me...all literal values are by default converted
 to bind params.   if you execute the statement the bind values will be
 set up correctly.

Ahh, got it to work now. It doesn't seem to work with
.execute(value=table.c.value*0.9) (gives a programming error), but that
makes sense to me. :) With .update(values={..}) it works fine! Sorry
about that.

Thanks for the swift reply by the way.

  b) I have an array (or tuple) with values that I'd just like to fire
  off to insert() which is compiled to work on one column (the other
  columns have defaults or pre-set values)... To me, this one ought to
  be obvious and intuitive, and maybe I'm just stupid or blind, but I
  can't find a way.

 how did you compile an insert() to work on just one (unnamed) column ?
 like table.insert(values=['someval']) ?

No, I didn't even know about that. I just meant that all the other
columns had bound values in values={..} or as default values, so I
would expect execute() (etc.) to figure out their values automatically
without me having to mention those columns again.

  (Alternatively, if I have an array such as [(1, 'hi', 'text'), (2,
  'hello', 'tuxt'), ...] where the columns match up to the columns in
  the table (or a given insert())...)
 
  .execute(*[dict(name=x) for x in array]) # to me, this is inelegant
  and horribly roundabout

 well, SA is mostly name oriented at this time for several reasons:

 1. there is no chance of ambiguity.  for example, if you write code
 that just says table.insert().execute(1,2,3,4,5), supposing the table
 is reflected from the local database - that code might not be portable
 if executed on a different database where the table was created with a
 different column ordering, or had columns added via ALTER, etc.

That's true, but I imagine there would be a way to specify the columns
(once, when getting the insert()), and then trust subsequent execute()s
to permute the arguments into their right order...
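That permutation is a one-liner in plain Python. A sketch (the column names and rows here are made up):

```python
# Sketch: turn positional rows into the named parameters SA expects,
# given a column order that is written down exactly once.
cols = ("id", "name", "text")
rows = [(1, "hi", "text"), (2, "hello", "tuxt")]

params = [dict(zip(cols, row)) for row in rows]
print(params[0])  # {'id': 1, 'name': 'hi', 'text': 'text'}
```

So table.insert().execute(*params) would behave like the roundabout dict-per-row form, with the column list mentioned only once.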

 2. it is more database-neutral to build off named parameters, since
 its easy to convert from named param to positional, hard to convert in
 the other direction (ordering may not be consistent depending on how a
 dialect compiles a statement).  as not all DBAPIs support positional
 bind parameters anyway, SA is going to create named bind parameters in
 any case so the inelegant and roundabout behavior will still occur,
 albeit behind the scenes, if support for positional execute()
 arguments were improved.

 since its *only* inserts that positional parameters even have a chance
 of working on all DBs, SA's parameter-processing logic was written
 around a named parameter concept since positional params are of very
 limited use.

But aren't the result sets from selects also positional (AS WELL as
name-centric)?

.select().execute().fetchone() gives something which can be thought
of either as a hash or as a tuple à la (1, 'ah', 102), and I'm using
the tuple... It would thus be consistent to be able to use similar
constructs with inserts.
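That dual nature of result rows can be seen even in the stdlib driver. A sketch using sqlite3.Row (not SA's own result objects), with a made-up table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows then support both index and name access
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, score INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 'ah', 102)")

row = conn.execute("SELECT id, name, score FROM t").fetchone()
print(row[0], row[2])             # positional access: 1 102
print(row["name"], row["score"])  # named access:      ah 102
print(tuple(row))                 # -> (1, 'ah', 102)
```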

 3. lets look at the proposed feature itself.  let me preface by
 saying, i will accept this feature into SA, and i am not opposed to
 it.  if an adequate patch is provided, i will commit it.  however, its
 not a priority for me to implement this myself.  this is because it
 would be complex to implement, and the current insert() method is one
 step and is a single way to do it.  the method you are proposing
 requires two steps in most cases:

 i.e.

table.insert().execute(*[dict(name=x) for x in array])

 specifies the column names and their values in one step.  the proposed
 method requires in most cases a separate column specification step,
 currently the inelegant:

   table.insert(values=['fake value 1', 'fake value2']).execute(*array)

 but if we added a column list argument, would look like:

   insert([table.c.col1, table.c.col2]).execute(*array)

 which is still two steps.  of course if you want to insert the whole

Two steps, but, I feel, more consistent and overall better design,
without always having to go via the tiny temporary dictionary objects.

Also, I don't think it should be necessary to unpack the array at all,
since we're using Python. If we wanted to enforce strict typing we
might be better off programming in a statically-typed environment
(no?). To me, the incredible dynamic powers of introspection and
polymorphism are half of the point of using Python. If I pass an array
to insert().execute() it should be intuitively obvious that I'm
trying to insert a series of rows, each represented by one item (be
it a scalar or 

[sqlalchemy] Re: Updating a column to an expression containing itself; Inserting data by lists or tuples

2007-02-06 Thread Rick Morrison

Well, one good thing about the positional-style INSERT is that it just
might be useful for implementing INSERT INTO ... SELECT ... FROM, which
would be a pretty useful addition to the SQL API layer.


On 2/6/07, S.R. Larmes [EMAIL PROTECTED] wrote:

 Michael Bayer wrote:
  On Feb 6, 1:20 pm, S.R. Larmes [EMAIL PROTECTED] wrote:
  
   a) How do I do:
  
   UPDATE sometable SET value = value * 0.9 + 1[ + optional WHERE
   clause ] ?
  
   I've tried experimenting with .update(values=..) or stuff
   like .execute(value=sometable.c.value*0.9+1) but it just tries to do
   strange things. (SET value=sometable.value * %(sometable_value)s + %
   (literal)s ??)
 
  that looks correct to me...all literal values are by default converted
  to bind params.   if you execute the statement the bind values will be
  set up correctly.

 Ahh, got it to work now. It doesn't seem to work with
 .execute(value=table.c.value*0.9)) (gives programming error), but that
 makes sense to me. :) With .update(values={..}) it works fine! Sorry
 about that.

 Thanks for the swift reply by the way.

   b) I have an array (or tuple) with values that I'd just like to fire
   off to insert() which is compiled to work on one column (the other
   columns have defaults or pre-set values)... To me, this one ought to
   be obvious and intuitive, and maybe I'm just stupid or blind, but I
   can't find a way.
 
  how did you compile an insert() to work on just one (unnamed) column ?
  like table.insert(values=['someval']) ?

 No, I didn't even know about that? I just meant that all the other
 columns had bound values in values={..} or as default values so I
 would
 expect execute() (etc) to figure out their values automatically
 without
 me having to mention those columns again.

   (Alternatively, if I have an array such as [(1, 'hi', 'text'), (2,
   'hello', 'tuxt'), ...] where the columns match up to the columns in
   the table (or a given insert())...)
  
   .execute(*[dict(name=x) for x in array]) # to me, this is inelegant
   and horribly roundabout
 
  well, SA is mostly name oriented at this time for several reasons:
 
  1. there is no chance of ambiguity.  for example, if you write code
  that just says table.insert().execute(1,2,3,4,5), supposing the table
  is reflected from the local database - that code might not be portable
  if executed on a different database where the table was created with a
  different column ordering, or had columns added via ALTER, etc.

 That's true, but I imagine there would be a way to specify the columns
 (once, when getting the insert()), and then trust subsequent
 execute()s
 to permute the arguments into their right order...
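The portability hazard behind point 1 above is easy to demonstrate with plain DB-API calls: a positional INSERT with no column list binds values by whatever order the table happened to be created with. A sqlite3 sketch (hypothetical tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (a TEXT, b TEXT)")
conn.execute("CREATE TABLE t2 (b TEXT, a TEXT)")  # same columns, other order

stmt = "INSERT INTO %s VALUES (?, ?)"  # no column list: binds by position
conn.execute(stmt % "t1", ("x", "y"))
conn.execute(stmt % "t2", ("x", "y"))

# The identical statement put 'x' into column a on t1 but column b on t2
a1 = conn.execute("SELECT a FROM t1").fetchone()[0]
a2 = conn.execute("SELECT a FROM t2").fetchone()[0]
```

Naming the columns in the statement (as iain suggests doing once, up front on the insert()) removes the ambiguity.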

  2. it is more database-neutral to build off named parameters, since
  it's easy to convert from named param to positional, hard to convert in
  the other direction (ordering may not be consistent depending on how a
  dialect compiles a statement).  as not all DBAPIs support positional
  bind parameters anyway, SA is going to create named bind parameters in
  any case so the inelegant and roundabout behavior will still occur,
  albeit behind the scenes, if support for positional execute()
  arguments were improved.
 
  since it's *only* inserts that positional parameters even have a chance
  of working on all DBs, SA's parameter-processing logic was written
  around a named parameter concept since positional params are of very
  limited use.

 But aren't the result sets from selects also positional (AS WELL as
 name-centric):

 .select().execute().fetchone() gives something which can be thought
 of either as a hash or as a tuple à la (1, 'ah', 102), and I'm using
 the tuple... It would thus be consistent to be able to use similar
 constructs with inserts.
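The dual tuple/hash behaviour of result rows described here can be seen even at the DB-API level: sqlite3's Row factory supports both access styles, much like the row objects being discussed (a sketch with a hypothetical table, not SQLAlchemy's own row type):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows support both index and name access
conn.execute("CREATE TABLE t (id INTEGER, word TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'ah')")

row = conn.execute("SELECT id, word FROM t").fetchone()
by_position = (row[0], row[1])        # tuple-style access
by_name = (row["id"], row["word"])    # dict-style access
```

The argument being made: if reads are symmetric between positional and named access, writes (inserts) could reasonably be too.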

  3. lets look at the proposed feature itself.  let me preface by
  saying, i will accept this feature into SA, and i am not opposed to
  it.  if an adequate patch is provided, i will commit it.  however, it's
  not a priority for me to implement this myself.  this is because it
  would be complex to implement, and the current insert() method is one
  step and is a single way to do it.  the method you are proposing
  requires two steps in most cases:
 
  i.e.
 
 table.insert().execute(*[dict(name=x) for x in array])
 
  specifies the column names and their values in one step.  the proposed
  method requires in most cases a separate column specification step,
  currently the inelegant:
 
table.insert(values=['fake value 1', 'fake value2']).execute(*array)
 
  but if we added a column list argument, would look like:
 
insert([table.c.col1, table.c.col2]).execute(*array)
 
  which is still two steps.  of course if you want to insert the whole

 Two steps, but, I feel, more consistent and overall better design,
 without always having to go via the tiny temporary dictionary objects.
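The "tiny temporary dictionary" pattern versus the proposed positional style maps directly onto the two parameter shapes that DB-API's executemany() already accepts. A sqlite3 sketch of both (hypothetical table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")

array = ["a", "b", "c"]

# Named parameters: one throwaway dict per row (the style being debated)
conn.executemany("INSERT INTO t (name) VALUES (:name)",
                 [dict(name=x) for x in array])

# Positional parameters: the column list is stated once, rows are bare tuples
conn.executemany("INSERT INTO t (name) VALUES (?)", [(x,) for x in array])

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
```

Both forms insert the same three rows; the positional one avoids building a dict per row, which is the efficiency/elegance point being argued.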

 Also, I don't think it should be necessary to unpack the array at all,
 since we're using Python. If we wanted to enforce strict typing we
 

[sqlalchemy] Re: Updating a column to an expression containing itself; Inserting data by lists or tuples

2007-02-06 Thread Michael Bayer

It's all good, just not a high priority for me :).  Feel free to dig
in.  The insert/update code is already a little intense, since it
already has multiple ways to do the same thing, not to mention it's
some of the first code I wrote in SA and feels a little old, so adding
features like these might not be so straightforward.

On Feb 6, 3:46 pm, Rick Morrison [EMAIL PROTECTED] wrote:
 Well one good thing about the positional-style INSERT is that it just
 might be useful for implementing INSERT INTO SELECT FROM, which
 would be a pretty useful addition to the SQL API layer
