[sqlalchemy] Re: Is inserting new data with column_mapped_collection inconsistent?

2008-05-17 Thread Allen Bierbaum

On Fri, May 16, 2008 at 4:54 PM, jason kirtland [EMAIL PROTECTED] wrote:
[..]

 Anyway, I think this is a bit non-intuitive.  What I propose instead
 is that SA could automatically set the 'keyword' attribute of the Note
 object as part of the process of assigning it to the mapped collection
 dictionary.  This way the insert could look more like:

 item.notes['not-color'] = Note(value='blue')

 and behind the scenes SA would effectively do: new_note.keyword = 'not-color'

 Any thoughts on this?  Has anyone tried this in the past?

 MappedCollection doesn't currently have a mismatch guard on __setitem__
 (d[key] = val) or setdefault(), but easily could.  There *is* a guard
 protecting against item.notes = {'not-color': Note('color', 'blue')}, so
 that machinery is available and applying it to the other setters is
 straightforward.

 Automatically setting the value for the attribute_ and column_mapped
 dict collections would be pretty convenient and DRY.  This is a great
 time to integrate that feature, if you want to try your hand at putting
 together a patch and tests.  If it's not too disruptive to existing
 users it could slide right in as a new feature of 0.5.

I would be more than happy to look into this (I already have), but I
think my skills aren't quite up to the challenge.  Could you point me
in the general direction?

On a related note, I think it would be good to make this behavior come
through a user-customizable callback that takes the index value and the
newly assigned item as arguments.  That would let users attach whatever
additional automatic behavior they need.

For example, my current relation actually looks like this:

    'input_vars': relation(Var,
        primaryjoin=and_(script_table.c.id == var_table.c.script_id,
                         var_table.c.input_output_type == 0),
        collection_class=column_mapped_collection(var_table.c.name)),

So I would want not only to set the name automatically from the key,
but also to set input_output_type to 0 in this case.  Something like
this would be good:

def input_cb(key, item):
    item.name = key
    item.input_output_type = 0

'input_vars': relation(Var,
    primaryjoin=and_(script_table.c.id == var_table.c.script_id,
                     var_table.c.input_output_type == 0),
    collection_class=column_mapped_collection(var_table.c.name,
                                              set_cb=input_cb)),

Any thoughts and/or pointers on how to implement this?
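
To make the idea concrete, here is a rough, untested sketch of what I am
picturing.  It keys on a mapped attribute name instead of a Column just to
keep it short, and callback_mapped_collection / set_cb are made-up names --
I may well be misusing the collection instrumentation:

from sqlalchemy.orm.collections import MappedCollection, collection

def callback_mapped_collection(attr_name, set_cb=None):
    """Made-up factory: key items by the mapped attribute attr_name and
    run set_cb(key, item) whenever an item is assigned by key."""

    class CallbackMappedCollection(MappedCollection):
        def __init__(self):
            MappedCollection.__init__(
                self, keyfunc=lambda item: getattr(item, attr_name))

        # internally_instrumented: pass _sa_initiator through to the
        # base class so collection events still fire
        @collection.internally_instrumented
        def __setitem__(self, key, value, _sa_initiator=None):
            if set_cb is not None:
                set_cb(key, value)   # e.g. value.name = key, etc.
            super(CallbackMappedCollection, self).__setitem__(
                key, value, _sa_initiator)

    return CallbackMappedCollection

The relation above would then use
collection_class=callback_mapped_collection('name', set_cb=input_cb).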

-Allen




[sqlalchemy] filtering by datetime elements

2008-05-17 Thread Eric Abrahamsen

Hi there,

I'm new to this, so please be patient if I'm a little slow... I'm  
trying to filter Article objects by a datetime field ('pubdate'), and  
expected that I would be able to do something like this:

arts = sess.query(Article).filter(and_(Article.pubdate.year==year,  
Article.pubdate.month==month, Article.id==id)).one()

This gives me:
AttributeError: 'InstrumentedAttribute' object has no attribute 'year'

Apparently methods on attributes are reserved for sqlalchemy  
internals, and I'm not working with a straight Python object, as I'd  
thought. That makes sense, but can anyone suggest a simple substitute  
for what I'm trying to do here?

Thanks very much,
Eric




[sqlalchemy] Re: filtering by datetime elements

2008-05-17 Thread az

On Saturday 17 May 2008 17:53:42 Eric Abrahamsen wrote:
 Hi there,

 I'm new to this, so please be patient if I'm a little slow... I'm
 trying to filter Article objects by a datetime field ('pubdate'),
 and expected that I would be able to do something like this:

 arts = sess.query(Article).filter(and_(Article.pubdate.year==year,
 Article.pubdate.month==month, Article.id==id)).one()

 This gives me:
 AttributeError: 'InstrumentedAttribute' object has no attribute
 'year'

 Apparently methods on attributes are reserved for sqlalchemy
 internals, and I'm not working with a straight Python object, as
 I'd thought. That makes sense, but can anyone suggest a simple
 substitute for what I'm trying to do here?
it's SQL that is not working with python objects; the column 
pubdate (associated with type DateTime on the python side) has no 
attr .year or .month.

look up the messages in the group; there were some suggestions a long 
time ago, but as far as i remember one was something with strings, 
another with separate columns.

wishlist suggestion: how about type-related attributes on columns, 
which when used would autogenerate some expression off that column? 
e.g. select(mytable.c.mydate.year == '1998') to automagically yield 
something like func.substr(mytable.c.mydate, 1, 4) == '1998' 
(supposing the type of mydate stores the datetime as a 'yyyymmdd' string)
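
in the meantime a tiny hand-written helper gets close (a sketch only,
assuming that 'yyyymmdd' string storage):

from sqlalchemy import func

def year_of(col):
    # characters 1..4 of a 'yyyymmdd' string column are the year
    return func.substr(col, 1, 4)

# usage:
# select([mytable], year_of(mytable.c.mydate) == '1998')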

ciao
svil




[sqlalchemy] Re: filtering by datetime elements

2008-05-17 Thread Eric Abrahamsen


On May 17, 2008, at 11:17 PM, [EMAIL PROTECTED] wrote:
 it's SQL that is not working with python objects, and the column
 pubdate (associated with type DateTime on python side) has no
 attr .year or .month.

 lookup the messages in the group, there were some sugestions long time
 ago, but AFAIremember one was something with strings, another with
 separate columns.

Thanks svil, this is good to know. I suppose there's no reason why I  
can't pull a simpler query into Python and then filter it by date  
there. It seems like doing this in the SQL query is going to be  
hackish no matter what, particularly when it's so simple to do in  
Python...

Thanks again,
Eric




[sqlalchemy] Re: composite primary key/postgres

2008-05-17 Thread az

i have found a sort of workaround for this.
a composite primary key in postgres cannot have nulls in any of its 
columns, so there must be default_value=something. 
but because the column is also a foreign key, that something should 
point to an existing record... which means creating a bogus record 
just for the sake of the foreign key.
hmmm, maybe not a workaround then.

actually this whole mess is because... i need a unique constraint on 
that set of foreign keys, but mssql refused to have unique 
constraints other than the primary key, hence i forced it to be the 
primary key...
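
for reference, roughly what the declaration looks like with that 
workaround (a sketch only, untested; table and column names are 
placeholders, and autoincrement=False is meant to keep postgres from 
generating a sequence for the integer pk columns, if your SQLAlchemy 
version supports that flag):

from sqlalchemy import Table, Column, Integer, ForeignKey, MetaData

meta = MetaData()

# sketch: two of the link columns only.  default=0 keeps the composite
# primary key non-null, but then a bogus row with id 0 has to exist on
# the other side of each foreign key, as described above.
link = Table('link', meta,
    Column('a_id', Integer, ForeignKey('a.id'),
           primary_key=True, autoincrement=False, default=0),
    Column('b_id', Integer, ForeignKey('b.id'),
           primary_key=True, autoincrement=False, default=0),
)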

On Tuesday 13 May 2008 10:48:23 Eric Ongerth wrote:
 So part of the problem is postgresql is autoincrementing where you
 do not want it to do so?  I thought postgresql only autoincrements
 where your column is of type 'serial'.   Is that not true?  Or if
 so, you could use type 'integer' instead of 'serial'.  There is
 also the possibility that the combination of column type 'integer'
 and 'nullable=False' is handled as identical to 'serial', i.e.
 autoincrement.  I think I've seen that before but I'm not sure.

 On May 12, 2:57 pm, [EMAIL PROTECTED] wrote:
  hi.
  i have a sort-of multicolumn m2m association table, where the
  primary key is composed of all the links. At least 1 link
  (actualy, 2) is always present, but never all. so i am defining
  all of those columns with primary_key=True, nullable=True.
  which is fine in sqlite, but doesnot work in postgres - it
  autoincrements those columns without value.
 
  how can i fix this?
  would a default_value=0 - or something - work?
 
  (now as i look at it, at least as declaration, the whole primary
  key seems nullable - is this wrong?)
 
  ciao
  svil

 





[sqlalchemy] Re: filtering by datetime elements

2008-05-17 Thread Lukasz Szybalski

On Sat, May 17, 2008 at 11:20 AM, Eric Abrahamsen [EMAIL PROTECTED] wrote:


 On May 17, 2008, at 11:17 PM, [EMAIL PROTECTED] wrote:
 it's SQL that is not working with python objects, and the column
 pubdate (associated with type DateTime on python side) has no
 attr .year or .month.

 lookup the messages in the group, there were some sugestions long time
 ago, but AFAIremember one was something with strings, another with
 separate columns.

 Thanks svil, this is good to know. I suppose there's no reason why I
 can't pull a simpler query into Python and then filter it by date
 there. It seems like doing this in the SQL query is going to be
 hackish no matter what, particularly when it's so simple to do in
 Python...



You could convert your month variable to the same format as
article.pubdate (a datetime):
http://lucasmanual.com/mywiki/PythonManual#head-7b8d3475aa2baaa193b02b72fccd6eb009a1ee63

Or split the datetime into separate date and time columns.
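
Something like this (rough sketch, untested):

from datetime import datetime

# build the month boundaries from the year/month integers and compare
# pubdate against the half-open range [start, end)
start = datetime(year, month, 1)
end = datetime(year + 1, 1, 1) if month == 12 else datetime(year, month + 1, 1)

arts = (sess.query(Article)
            .filter(Article.pubdate >= start)
            .filter(Article.pubdate < end)
            .filter(Article.id == id)
            .one())
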
Lucas


-- 
Automotive Recall Database. Cars, Trucks, etc.
http://www.lucasmanual.com/recall/
TurboGears Manual-Howto
http://lucasmanual.com/pdf/TurboGears-Manual-Howto.pdf




[sqlalchemy] table without any primary_keys?

2008-05-17 Thread az

seems such a thing is disallowed, or at least Mapper complains. 
is it a SQL requirement or what?




[sqlalchemy] Re: filtering by datetime elements

2008-05-17 Thread Michael Bayer


On May 17, 2008, at 10:53 AM, Eric Abrahamsen wrote:


 Hi there,

 I'm new to this, so please be patient if I'm a little slow... I'm
 trying to filter Article objects by a datetime field ('pubdate'), and
 expected that I would be able to do something like this:

 arts = sess.query(Article).filter(and_(Article.pubdate.year==year,
 Article.pubdate.month==month, Article.id==id)).one()

 This gives me:
 AttributeError: 'InstrumentedAttribute' object has no attribute 'year'

 Apparently methods on attributes are reserved for sqlalchemy
 internals, and I'm not working with a straight Python object, as I'd
 thought. That makes sense, but can anyone suggest a simple substitute
 for what I'm trying to do here?

two approaches here:

1. use datepart functions.  currently these are specific to individual  
databases (here's the postgres version):

filter(func.date_part('year', Class.somedate) == year)

2. use comparison/BETWEEN; this has the advantage that if the  
column has an index on it, the index can be used:

filter(Class.somedate.between(date(year, 1, 1), date(year + 1, 1, 1)))






[sqlalchemy] Re: table without any primary_keys?

2008-05-17 Thread Michael Bayer


On May 17, 2008, at 9:52 PM, [EMAIL PROTECTED] wrote:


 seems such thing is disallowed, or at least Mapper complains.
 is it SQL requirement or what?

the mapper needs some set of PK columns defined for mapped classes so  
that it can identify objects.  they don't need to be actual PK columns  
in the database.

in theory there just needs to be a function that can extract a primary  
key from a row - it doesn't even necessarily have to use these  
columns, though such a feature would require some API changes.
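
to hand the mapper an explicit set of columns there's the mapper's 
primary_key argument, e.g. (a sketch, with placeholder names):

from sqlalchemy.orm import mapper

# tell the mapper which columns identify a row, even though the table
# itself declares no PRIMARY KEY constraint
mapper(MyClass, my_table,
       primary_key=[my_table.c.some_col, my_table.c.other_col])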



