[sqlalchemy] chunks of blob data...

2007-12-18 Thread Chris Withers

Hi All,

I guess this question would also be applicable to the lists of the RDB 
in question, but since SA is going to have to help too, I thought I'd ask 
here...

Okay, so if I have a table that contains a blob-ish field containing 
something like a 20MB PDF per row (I'm not sure this is a good idea, and 
I'm welcome to other suggestions ;-)), I'd like to be able to return 
chunks of that PDF in HTTP range requests (Adobe's plugin is good at 
them...)

How do I get SA and the RDB to only return a portion of the data in the 
field? Is that even possible? What sort of strategies do people have in 
this situation?

cheers,

Chris

-- 
Simplistix - Content Management, Zope & Python Consulting
- http://www.simplistix.co.uk

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Minor documentation mistake related to Unicode?

2007-12-18 Thread Felix Schwarz

Hi,

http://www.sqlalchemy.org/docs/04/types.html says:
  The Unicode type is shorthand for String with convert_unicode=True and
  assert_unicode=True.

But the constructor for String (types.py) says:
 def __init__(self, length=None, convert_unicode=False):

Using TEXT(assert_unicode=True) raises an error. I think assert_unicode was
removed in the past, correct?

fs




[sqlalchemy] Group_by 0.4 syntax

2007-12-18 Thread voltron

I am trying to retrieve unique users from a table using this:

results = sa_session.query(TempScore).group_by(TempScore.user_name)

This unfortunately causes the error below:

There was a problem: (ProgrammingError) column temp_scores.id must
appear in the GROUP BY clause or be used in an aggregate function

Could someone tell me what the SA 0.4 syntax would be for this query?
I am using Postgresql


Thanks




[sqlalchemy] Re: chunks of blob data...

2007-12-18 Thread Michael Bayer


On Dec 18, 2007, at 4:30 AM, Chris Withers wrote:


 Hi All,

 I guess this question would also be applicable to the lists of the RDB
 in question, but since SA is going to have to help too, I thought I'd ask
 here...

 Okay, so if I have a table that contains a blob-ish field containing
 something like a 20MB PDF per row (I'm not sure this is a good idea, and
 I'm welcome to other suggestions ;-)), I'd like to be able to return
 chunks of that PDF in HTTP range requests (Adobe's plugin is good at
 them...)

 How do I get SA and the RDB to only return a portion of the data in the
 field? Is that even possible? What sort of strategies do people have in
 this situation?


cx_oracle's blob object is able to stream out blobs like filehandles,
since that's how OCI wants it to be done.  I'm not sure if it only
allows sequential access or access to any range.  As far as other
DBAPIs like psycopg2 and MySQLdb, you'd have to dig into the docs,
mailing lists, and source code of those to see what resources are
available.  Another way might be some SQL function available within
the database that truncates the binary stream to a certain portion of
itself before selecting, but I'm not aware of any function which does
that (haven't checked, though).

Current SA behavior is to abstract away the streaming nature of
cx_oracle's binary objects, but there's no reason we couldn't expose this
functionality through something like a StreamingBinary type or similar.
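The "SQL function" idea can be sketched like this (a hypothetical example, not from the thread: table and column names are made up, and SQLite stands in for convenience; PostgreSQL's substring() on bytea behaves similarly):

```python
import sqlite3

# Let the database slice the blob with its substr() function so only the
# requested byte range crosses the wire, instead of fetching the whole value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, pdf BLOB)")
conn.execute("INSERT INTO documents VALUES (1, ?)",
             (b"%PDF-1.4 pretend this is a 20MB document",))

def read_chunk(doc_id, start, length):
    # substr() is 1-indexed; only `length` bytes are returned to Python
    row = conn.execute(
        "SELECT substr(pdf, ?, ?) FROM documents WHERE id = ?",
        (start + 1, length, doc_id)).fetchone()
    return row[0]

print(read_chunk(1, 0, 4))  # b'%PDF'
```

Each HTTP range request would then map to one such `read_chunk` call, so the 20MB value never has to be loaded whole.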




[sqlalchemy] Re: Minor documentation mistake related to Unicode?

2007-12-18 Thread Michael Bayer


On Dec 18, 2007, at 5:41 AM, Felix Schwarz wrote:


 Hi,

 http://www.sqlalchemy.org/docs/04/types.html says:
   The Unicode type is shorthand for String with convert_unicode=True and
   assert_unicode=True.

 But the constructor for String (types.py) says:
   def __init__(self, length=None, convert_unicode=False):

 Using TEXT(assert_unicode=True) raises an error. I think assert_unicode
 was removed in the past, correct?


assert_unicode is a new flag that hasn't been released yet (that's why
the docs say 0.4.2... a little premature, I know).  So if you're not on
trunk yet, it won't work.





[sqlalchemy] Re: Group_by 0.4 syntax

2007-12-18 Thread Michael Bayer


On Dec 18, 2007, at 9:49 AM, voltron wrote:


 I am trying to retrieve unique users from a table using this:

 results = sa_session.query(TempScore).group_by(TempScore.user_name)

 This unfortunately causes the error below:

 There was a problem: (ProgrammingError) column temp_scores.id must
 appear in the GROUP BY clause or be used in an aggregate function

 Could someone tell me what the SA 0.4 syntax would be for this query?
 I am using Postgresql



all columns that aren't in an aggregate must be in the GROUP BY, so to  
get every column in, do this:

sess.query(TempScore).group_by([c for c in TempScore.c])
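The rule can be illustrated in plain SQL (a hypothetical sketch with made-up data, not from the thread): every selected column must either appear in the GROUP BY or sit inside an aggregate. Grouping on user_name and aggregating the rest yields one row per user, which also satisfies PostgreSQL's stricter check:

```python
import sqlite3

# One row per unique user: user_name is grouped, score is aggregated,
# so no column is left "bare" to trigger PostgreSQL's GROUP BY error.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE temp_scores (id INTEGER PRIMARY KEY, user_name TEXT, score INTEGER)")
conn.executemany("INSERT INTO temp_scores (user_name, score) VALUES (?, ?)",
                 [("alice", 10), ("bob", 7), ("alice", 12)])

rows = conn.execute(
    "SELECT user_name, MAX(score) FROM temp_scores "
    "GROUP BY user_name ORDER BY user_name").fetchall()
print(rows)  # [('alice', 12), ('bob', 7)]
```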






[sqlalchemy] Querying and retrieving the position of a result

2007-12-18 Thread voltron

I have numerical values assigned to users in a table. I want to query the
table, sort it in descending order, and then return the value of a
certain user along with his position/rank in the table. Could someone
give me a leg up?


Thanks
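No answer appeared in this thread, but one common approach is a correlated subquery: a user's rank is one plus the number of users with a strictly higher value. A sketch with made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (user_name TEXT PRIMARY KEY, score INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [("alice", 30), ("bob", 50), ("carol", 40)])

def rank_of(user):
    # rank = 1 + count of users who outscore this one
    row = conn.execute(
        """SELECT score,
                  (SELECT COUNT(*) + 1 FROM scores s2 WHERE s2.score > s1.score)
           FROM scores s1 WHERE user_name = ?""", (user,)).fetchone()
    return row  # (score, rank)

print(rank_of("carol"))  # (40, 2)
```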



[sqlalchemy] Re: Group_by 0.4 syntax

2007-12-18 Thread voltron

Thanks! I'll try that out Mike

On Dec 18, 3:55 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Dec 18, 2007, at 9:49 AM, voltron wrote:



  I am trying to retrieve unique users from a table using this:

  results = sa_session.query(TempScore).group_by(TempScore.user_name)

  This unfortunately causes the error below:

  There was a problem: (ProgrammingError) column temp_scores.id must
  appear in the GROUP BY clause or be used in an aggregate function

  Could someone tell me what the SA 0.4 syntax would be for this query?
  I am using Postgresql

 all columns that aren't in an aggregate must be in the GROUP BY, so to
 get every column in, do this:

 sess.query(TempScore).group_by([c for c in TempScore.c])



[sqlalchemy] Re: how to unit test with SQ

2007-12-18 Thread Paulino

Hello!

I have a similar error with SA 0.4.1 and turbogears.

session.query(Extrato).filter_by(bank='IF5',
data=datetime.datetime.strptime('2007/07/27','%Y/%m/%d'))
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/orm/session.py", line 654, in query
    q = self._query_cls(_class_mapper(mapper_or_class, entity_name=entity_name), self, **kwargs)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/orm/mapper.py", line 1649, in class_mapper
    raise exceptions.InvalidRequestError("Class '%s' entity name '%s' has no mapper associated with it" % (class_.__name__, entity_name))
InvalidRequestError: Class 'Extrato' entity name 'None' has no mapper associated with it


But if I run the query outside the turbogears framework, it works
fine, no errors!

My model definition is:

from sqlalchemy import Table, Column, String, Integer, DateTime,
MetaData, create_engine
from sqlalchemy.orm import relation, mapper, create_session

mssql = create_engine('mssql://paulino:[EMAIL PROTECTED]/db')
metadata = MetaData()
metadata.bind = mssql
session = create_session(bind=mssql)

extratoTab = Table('ttfcmf201100', metadata,
   Column('t_bank', String(), primary_key=True),
   Column('t_nseq', Integer(), primary_key=True,
autoincrement=False),
   Column('t_dtmv', DateTime(), primary_key=True),
   Column('t_dtvl', DateTime(), primary_key=True),
   autoload=True)

class Extrato(object):
    pass

mapper(Extrato, extratoTab)

what is going on?

thank you
Paulino



[sqlalchemy] Missing Sequences

2007-12-18 Thread Jonathan Gardner

I don't know what the design decisions were regarding the Sequence
object, so bear with my ignorance for a moment here.

I played with SQLAlchemy in conjunction with Pylons. I had code that
looked like this:

  links_table = Table('links', metadata,
Column('id', types.Numeric(), schema.Sequence('link_id_seq'),
primary_key=True),
Column('title', types.Unicode(), ),
Column('url', types.Unicode(), )
  )

Of course, using SQLite, this leads to missing 'id' when inserting a
new row since, as far as I can tell, SQLite doesn't support sequences.

The interesting part is the following. If I take the sequence out and
treat it as something separate:

  id_seq = schema.Sequence('id_seq', 1000)

And then leave it out of the table defined above, then I can do
something like this:

log.info("Adding test link...")
link = model.Link()
link.id = engine.execute(model.id_seq)
log.info("link.id = %r" % link.id)

Well, except for the bit about the link.id being None in SQLite.

The following questions are more about design than implementation.
That is, there may be something about the assumptions SQLAlchemy makes
and how it handles different things.

Shouldn't SQLAlchemy throw an exception when you try to grab the next
value of a non-existent sequence?

Should SQLAlchemy implement a crude version of sequences in Python to
cover up deficient SQL implementations? I imagine it is technically
possible. We can have a table that stores the last value of the
sequence and its parameters, and then in those databases just go look
up the last state of the sequence there. But is this in the spirit of
the SQLAlchemy design?

Should sequences be first-class citizens of the schema, like
tables are? That is, why do I have to create them and drop them
separately? Why can't we have a next() method for sequences and then
bind sequences to the metadata the same way tables are?

If you are looking for development help, I have some spare time here
and there and this is actually interesting. If someone can point the
way to go, I can take it from there.

Otherwise, I am incredibly surprised at how well SQLAlchemy works in
allowing me to write truly database independent applications. You have
done something I don't think anyone else has ever done in the history
of computing, and for that you should be proud.




[sqlalchemy] Querying for Date column value

2007-12-18 Thread Justin

I have a table with a Date column. Why does a python datetime match
the value and a python date does not? For example:

# example table
institutions_table = Table('institutions', meta,
Column('id', Integer, primary_key=True),
Column('report_date', Date),
...

this works:

session.query(Institution).filter(Institution.report_date==datetime.datetime(2007,6,30)).all()

and this does not:

session.query(Institution).filter(Institution.report_date==datetime.date(2007,6,30)).all()

What am I missing?




[sqlalchemy] Re: Querying for Date column value

2007-12-18 Thread Michael Bayer

not sure, it depends on what kind of database you're using (and what
DBAPI) as well as what the actual type of the column is on the database
side.




[sqlalchemy] Re: Missing Sequences

2007-12-18 Thread Michael Bayer


On Dec 18, 2007, at 10:42 AM, Jonathan Gardner wrote:


 Shouldn't SQLAlchemy throw an exception when you try to grab the next
 value of a non-existant sequence?

for a database which supports sequences, it will.  It appears that for
DBs which don't, executing a sequence explicitly silently ignores
it; I would consider this to be a bug.  However, when a Sequence is
attached to a Column for the purpose of generating new default values
for that column, it is ignored during an INSERT on a database that
doesn't support sequences, like sqlite.  This is by design, and it is so
that a single Table construct can be used with multiple database
backends (this should be in the docs too).

 Should SQLAlchemy implement a crude version of sequences in Python to
 cover up deficient SQL implementations? I imagine it is technically
 possible. We can have a table that stores the last value of the
 sequence and its parameters, and then in those databases just go look
 up the last state of the sequence there. But is this in the spirit of
 the SQLAlchemy design?

sure, you can write such a function and attach it as the default
generator on any Column.  As far as whether such an ID generator should be
included with SQLAlchemy, I would support having some ID generators
available as optional constructs to use on Tables, and I'd like to see
generators like UUID, hilo, etc.  As of yet nobody has contributed
these.
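The "crude version of sequences" from the question could be prototyped along these lines (a sketch with made-up names, using sqlite3 directly rather than any SQLAlchemy API): a helper table remembers the last value handed out, and a small function increments and returns it.

```python
import sqlite3
import threading

# Table-backed pseudo-sequence: seq_state stores one row per sequence name.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE seq_state (name TEXT PRIMARY KEY, last_value INTEGER)")
conn.execute("INSERT INTO seq_state VALUES ('link_id_seq', 999)")
conn.commit()
lock = threading.Lock()

def next_value(name):
    # serialize callers so no two of them receive the same value
    with lock:
        conn.execute(
            "UPDATE seq_state SET last_value = last_value + 1 WHERE name = ?",
            (name,))
        val = conn.execute(
            "SELECT last_value FROM seq_state WHERE name = ?",
            (name,)).fetchone()[0]
        conn.commit()
        return val

print(next_value("link_id_seq"))  # 1000
print(next_value("link_id_seq"))  # 1001
```

Such a `next_value` could be attached as a Column's default generator; across multiple processes the locking would of course have to happen in the database (e.g. SELECT ... FOR UPDATE) rather than in Python.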


 Should sequences be sequences first-class citizens of the schema like
 tables are? That is, why do I have to create them and drop them
 separately?

They are first class citizens.  If your Sequence is sitting inside of
a Column as its default generator, it will be created and dropped
along with the Table it's associated with.  As far as being
able to associate the Sequence with MetaData directly, without the
Table serving as a host, see below; I don't have any issue with such an
enhancement.  Nobody has ever asked for this feature before, since a
Sequence usually associates with a specific table.

 Why can't we have a next() method for sequences and then
 bind sequences to the metadata the same way tables are?

We do have a next():  nextid = engine.execute(Sequence('foo')).
Sequences are bound to metadata as well, as a little experimentation
will show... the current methodology is that the Sequence is associated
with the MetaData via one or more Tables it's associated with:

>>> from sqlalchemy import *
>>> e = create_engine('postgres://scott:[EMAIL PROTECTED]/test')
>>> m = MetaData()
>>> t = Table('foo', m, Column('id', Integer, Sequence('foo_seq')))
>>> m.bind = e
>>> t.create()
>>> t.c.id.default.execute()
1L

Now it may be a nice enhancement to be able to create a Sequence which
binds to a metadata without the Table part needing to be involved,
such as:

Sequence('foo_id_seq', metadata)

This enhancement is fine with me.

 If you are looking for development help, I have some spare time here
 and there and this is actually interesting. If someone can point the
 way to go, I can take it from there.

if you can produce decent code and, almost more importantly, good unit
tests that ensure the decent code always works (else it might as well
be broken), commit access is there for the taking.  Start with trac
tickets, i.e. create some and/or fix some, hang out on irc and
sqlalchemy-devel, come up with some patches, and we go from there.
We're quite ready for many more developers.



 Otherwise, I am incredibly surprised at how well SQLAlchemy works in
 allowing me to write truly database independent applications. You have
 done something I don't think anyone else has ever done in the history
 of computing, and for that you should be proud.


thanks for the compliments!  See you at PyCon.






[sqlalchemy] Re: Querying for Date column value

2007-12-18 Thread Justin

Sorry,

Database is sqlite. From my understanding sqlite would store the value
as a string.

From running the following test it seems that the original type of
value stored (datetime vs date) is being maintained in sqlite
regardless of the Column type:

import datetime

from sqlalchemy import *
from sqlalchemy.orm import *

engine = create_engine('sqlite://', echo=False)

Session = sessionmaker(bind=engine, autoflush=True,
                       transactional=True)

meta = MetaData()

test_table = Table('test', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String),
    Column('date', Date))

class Test(object):
    def __init__(self, name, date):
        self.name = name
        self.date = date
    def __repr__(self):
        return "%s: %s" % (self.name, self.date)

mapper(Test, test_table)

if __name__ == '__main__':

    t1 = Test('date', datetime.date(2007, 12, 18))
    t2 = Test('datetime', datetime.datetime(2007, 12, 18))
    t3 = Test('control', datetime.date(2007, 1, 1))

    meta.create_all(engine)

    sess = Session()
    sess.save(t1)
    sess.save(t2)
    sess.save(t3)

    sess.flush()
    sess.clear()

    print 'All:'
    print sess.query(Test).all()
    print 'Datetime:'
    print sess.query(Test).filter_by(date=datetime.datetime(2007, 12, 18)).all()
    print 'Date:'
    print sess.query(Test).filter_by(date=datetime.date(2007, 12, 18)).all()

Result:

All:
[date: 2007-12-18, datetime: 2007-12-18, control: 2007-01-01]
Datetime:
[datetime: 2007-12-18]
Date:
[date: 2007-12-18]
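A guess at the mechanism, not confirmed in the thread: SQLite has no native date type, so values are matched as the strings the DBAPI stores, and a date and a datetime serialize to different strings:

```python
import datetime

d = datetime.date(2007, 12, 18)
dt = datetime.datetime(2007, 12, 18)
print(str(d))             # date serializes without a time part
print(str(dt))            # datetime carries "00:00:00"
print(str(d) == str(dt))  # the stored text differs, so equality misses
```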

Thanks,

- Justin


On Dec 18, 1:19 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 not sure, it depends on what kind of database you're using (and what
 DBAPI) as well as what the actual type of the column is on the database
 side.




[sqlalchemy] Re: Missing Sequences

2007-12-18 Thread Jonathan Gardner

On Dec 18, 10:39 am, Michael Bayer [EMAIL PROTECTED] wrote:
 if you can produce decent code, and almost more importantly good unit
 tests that ensure the decent code always works (else it might as well
 be broken), commit access is there for the taking. Start with trac
 tickets, i.e. create some and/or fix some, hang out on irc and
 sqlalchemy-devel, come up with some patches and we go from there.
 We're quite ready for many more developers.


I'll be lurking as best I can. When I get the courage I'll give it a
go.


 thanks for the compliments !  see you at Pycon.

Wish I could go... enjoy yourselves and let us know what happens!




[sqlalchemy] Re: Fwd: [elixir] warnings on exit

2007-12-18 Thread Ryszard Szopa



On Dec 17, 5:46 pm, Michael Bayer [EMAIL PROTECTED] wrote:

 Make sure you are closing all sessions that are created.   Also, its
 important that you show us what the exact warnings youre getting
 are.   Please send along a reproducing test case illustrating the full
 issue.

I am sorry to admit it, but I cannot reproduce the warnings. I suspect
that it may have something to do with the fact that I have dropped and
recreated the database in the meantime. Now I don't even have to close
the session explicitly and everything works fine.

Sorry for bugging you unnecessarily.

Cheers,

-- Richard





[sqlalchemy] Re: Querying for Date column value

2007-12-18 Thread Justin

I should probably add that I'm curious if this is the desired
behavior. From my (relatively inexperienced) point of view it seems
that sqlalchemy should try to return whatever the programmer is
expecting as designated by the column type.

- Justin

On Dec 18, 2:03 pm, Justin [EMAIL PROTECTED] wrote:
 Sorry,

 Database is sqlite. From my understanding sqlite would store the value
 as a string.

 From running the following test it seems that the original type of
 value stored (datetime vs date) is being maintained in sqlite
 regardless of the Column type:

 import datetime

 from sqlalchemy import *
 from sqlalchemy.orm import *

 engine = create_engine('sqlite://', echo=False)

 Session = sessionmaker(bind=engine, autoflush=True,
 transactional=True)

 meta = MetaData()

 test_table = Table('test', meta,
 Column('id', Integer, primary_key=True),
 Column('name', String),
 Column('date', Date))

 class Test(object):
     def __init__(self, name, date):
         self.name = name
         self.date = date
     def __repr__(self):
         return "%s: %s" % (self.name, self.date)

 mapper(Test, test_table)

 if __name__ == '__main__':

 t1 = Test('date', datetime.date(2007, 12, 18))
 t2 = Test('datetime', datetime.datetime(2007, 12, 18))
 t3 = Test('control', datetime.date(2007, 1, 1))

 meta.create_all(engine)

 sess = Session()
 sess.save(t1)
 sess.save(t2)
 sess.save(t3)

 sess.flush()
 sess.clear()

 print 'All:'
 print sess.query(Test).all()
 print 'Datetime:'
 print sess.query(Test).filter_by(date=datetime.datetime(2007, 12,
 18)).all()
 print 'Date:'
 print sess.query(Test).filter_by(date=datetime.date(2007, 12,
 18)).all()

 Result:

 All:
 [date: 2007-12-18, datetime: 2007-12-18, control: 2007-01-01]
 Datetime:
 [datetime: 2007-12-18]
 Date:
 [date: 2007-12-18]

 Thanks,

 - Justin

 On Dec 18, 1:19 pm, Michael Bayer [EMAIL PROTECTED] wrote:

  not sure, it depends on what kind of database youre using (and what
  DBAPI) as well as the actual type of column is is on the database
  side.




[sqlalchemy] Re: Firebird status and some testsuite issues

2007-12-18 Thread jason kirtland

Lele Gaifax wrote:
 Hi all,
 
 I was finally able to spend a little time on the Firebird backend, and
 I'm glad to say that I'm currently down to this test summary:
  [...]
  C. a few tests don't use an explicit ordering, and assume that the
 result of a select without an ORDER BY matches the insert order;
 under Firebird I notice an intermittent behaviour, and in some
 cases (expecially under load) this is not true. 

These tests should eventually all be fixed; I've been fixing them as 
they crop up for me, and I'd encourage everyone else to do the same. 
There are two approaches: add ordering to the query, or remove the 
ordering dependency from the test assertion.  Wherever reasonable I've 
been doing the latter, to eliminate side effects.

  D. to my surprise, a self-reference foreign key on a table needs
 either ON DELETE CASCADE or ON DELETE SET NULL to allow the
 testsuite to delete its content in the tearDown step with a simple
 DELETE FROM table statement; one spot of this is fixed by the
 following::

I've added a basic rewriter for these FKs in r3959.  If that needs to be 
applied more narrowly, it's easy to make it conditional on a Table test 
pragma that can be applied to only the tables that need it.  (That'll 
make sense if you're looking at the testing.schema.Table function.)

MySQL's innodb has a similar issue in the tests, but I've been away from 
mysql too long to nail a fix on the first try; working around the innodb 
bug is still a todo.




[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?

2007-12-18 Thread Smoke

Sorry I'm a bit late (work deadlines are eating up my time! :) ).
I've tried some different configurations and schema definitions, and
I've noticed that it never updates a row if I set the datetime field
as PK (never! even if I set it as the only PK). If I set
composite PKs excluding any datetime column, everything works fine (it
also works if I set a datetime as PK on the database side and
exclude it from the schema on the sqlalchemy side). Sorry about the
mess with the PKs between the former examples, but I was only trying to
say that with a changed schema everything works fine...

Here's a little piece of code just as an example to start playing...
(it works for me, but if I include "start" as PK it crashes)

import sqlalchemy as sa
import datetime, time
from sqlalchemy.orm import sessionmaker

sa_engine = sa.create_engine("mssql://sa:[EMAIL PROTECTED]/siaDB_BR",
                             echo=True)
metadata = sa.MetaData(sa_engine)
Session = sessionmaker(bind=sa_engine, autoflush=True,
                       transactional=True)
sa_session = Session()


jobs = sa.Table('jobs', metadata,
                sa.Column('identifier', sa.Numeric(18), primary_key=True),
                sa.Column('section', sa.VARCHAR(20)),
                sa.Column("start", sa.DateTime),
                sa.Column("stop", sa.DateTime),
                sa.Column("station", sa.VARCHAR(20), primary_key=True),
                autoload=False)  # ok

class Job(object):
    def __init__(self, identifier, start, station="TCHUKI"):
        self.identifier, self.start, self.station = identifier, start, station

sa.orm.mapper(Job, jobs)

j = Job(22, datetime.datetime.now())
sa_session.save(j)
sa_session.commit()
sa_session.clear()
time.sleep(1)
j1 = sa_session.query(Job).all()[0]
j1.stop = datetime.datetime.now()
sa_session.save_or_update(j1)
sa_session.commit()







On 12 Dic, 17:54, Rick Morrison [EMAIL PROTECTED] wrote:
 Hey Fabio, would you please post a full non-working copy with the new schema
 and all the PKs that you want set up? There are a few too many variants in
 this thread to see what's going on now. Your earlier versions didn't include
 'station' as a PK, but did include 'start', while this one's the opposite.




[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?

2007-12-18 Thread Michael Bayer


On Dec 18, 2007, at 6:44 PM, Smoke wrote:


 Sorry because i'm a bit late (  work deadlines are struggling my
 time! :) ).
 I've made some different configurations and schema definitions... and
 i've noticed that it never updates a row if i set the datetime field
 as PK ( never! even if i set it as the only PK .. ). If i set
 composite PKs excluding any datetime column everything works fine ( it
 also works if i set a datetime as PK at the database side and
 excluding it at the schema on sqlalchemy side.. ). Sorry about the
 mess with the PKs between the former examples but i was only trying to
 say that changing the schema results that everything works fine...

if I set "start" and "identifier" as primary key columns, the script works  
fine for both sqlite and postgres, using both 0.4.1 and trunk.  The mssql  
guys will have to try it on mssql.




[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?

2007-12-18 Thread Rick Morrison
Same here on pymssql.

I tried it with 'start' as the only PK, and with both 'identifier' and
'start' as PK. Both work fine.

Are you sure your in-database tabledef matches your declared schema?

I've attached a script that works here. This one has both 'identifier' and
'start' set as PK.

  ***---WARNING ---***:
I've added a table.drop() to the script to simplify testing and make
sure the schemas match


import sqlalchemy as sa
import datetime, time
from sqlalchemy.orm import sessionmaker
#dburi = 'mssql://driveulator:[EMAIL PROTECTED]/d2test'

sa_engine = sa.create_engine("mssql://:[EMAIL PROTECTED]/xxx", echo=True)
metadata = sa.MetaData(sa_engine)
Session = sessionmaker(bind=sa_engine, autoflush=True,
                       transactional=True)
sa_session = Session()


jobs = sa.Table('jobs', metadata,
                sa.Column('identifier', sa.Numeric(18), primary_key=True),
                sa.Column('section', sa.VARCHAR(20)),
                sa.Column("start", sa.DateTime, primary_key=True),
                sa.Column("stop", sa.DateTime),
                sa.Column("station", sa.VARCHAR(20)),
                autoload=False)  # ok

class Job(object):
    def __init__(self, identifier, start, station="TCHUKI"):
        self.identifier, self.start, self.station = identifier, start, station

sa.orm.mapper(Job, jobs)

try:
    jobs.drop()
except:
    pass
jobs.create()

j = Job(22, datetime.datetime.now())
sa_session.save(j)
sa_session.commit()
sa_session.clear()
time.sleep(1)
j1 = sa_session.query(Job).all()[0]

while True:
    j1.stop = datetime.datetime.now()
    sa_session.save_or_update(j1)
    sa_session.commit()
    time.sleep(2)