[sqlalchemy] sqlite PK autoincrement not working when you do a composite PK?

2008-05-30 Thread Russell Warren

I have a case where adding a record to the following table definition
works great (and the PK auto increments nicely):

users_table = sa.Table("users", metadata,
    sa.Column("id", sa.Integer, primary_key = True),
    sa.Column("login", sa.String(40), nullable = False),
    sa.Column("password", sa.String(40), nullable = False),
)

but then I realized I want that to be a composite PK with id and login
together, so I change the definition to this:

users_table = sa.Table("users", metadata,
    sa.Column("id", sa.Integer, primary_key = True, autoincrement = True),
    sa.Column("login", sa.String(40), primary_key = True),
    sa.Column("password", sa.String(40), nullable = False),
)

and now the autoincrement on id no longer works.  With the latter,
when I try to add a simple record that worked before, I now get:

"IntegrityError: (IntegrityError) users.id may not be NULL u'INSERT
INTO users (login, password, "

I've tried sifting through the sqlite dialect to figure out what is
going on and have even tried forcing supports_pk_autoincrement to be
true, but it rapidly became clear I hadn't a clue what I was doing in
the sqlalchemy guts.

Does anyone know why autoincrement on "id" stopped working, and how I
can fix it?
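
[A hedged workaround sketch, not from the thread: SQLite only auto-generates
ROWID values for a table whose primary key is a single INTEGER column, so a
composite PK turns off the auto increment at the database level.  One option,
assuming the (id, login) pair only needs to be unique rather than being the
actual primary key, is to keep "id" as the sole PK and add a UniqueConstraint:]

users_table = sa.Table("users", metadata,
    sa.Column("id", sa.Integer, primary_key = True),
    sa.Column("login", sa.String(40), nullable = False),
    sa.Column("password", sa.String(40), nullable = False),
    sa.UniqueConstraint("id", "login"),
)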




[sqlalchemy] Re: oracle speed problem and arraysize hack

2008-05-30 Thread Michael Bayer

the "arraysize=50" default is set up in r4827 on the 0.4 branch and  
r4828 on the trunk.   I was surprised to find the rules for "BLOB no  
longer accesssible" are different than what I had assumed they were,  
so binary results and all work just fine here.
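
[A hedged sketch, not part of the thread: until a release with the new
default ships, the MyConnection-style workaround quoted further down can be
wired into an engine through create_engine()'s "creator" hook.  The connect
arguments below are placeholders.]

import cx_Oracle
from sqlalchemy import create_engine

class ArraysizeConnection(cx_Oracle.Connection):
    def cursor(self):
        # fetch 50 rows per round trip instead of cx_Oracle's default of 1
        cursor = cx_Oracle.Cursor(self)
        cursor.arraysize = 50
        return cursor

engine = create_engine(
    "oracle://",
    creator=lambda: ArraysizeConnection("username", "password", "testaccount"),
)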


On May 27, 2008, at 5:34 PM, gniquil wrote:

>
> Hey, I did a test, included below, with this output:
>
> $ python cxotest.py
> Setting arraysize to 1 before execution yields 25.8921508181
> Setting arraysize to 500 before execution yields 0.26524348765
> Setting arraysize to 1 after execution yields 25.8829982582
> Setting arraysize to 500 after execution yields 25.8650820146
>
>
> /* sql code for the table */
> begin
>  for i in 1..1
>  Loop
>insert into testtable
>(id, name)
>values
>(i, 'abcde');
>  end loop;
> end;
>
> #python code#
>
> from cx_Oracle import connect, Cursor
>
> conn = connect('username', 'password', 'testaccount')
> curs = Cursor(conn)
>
> def fetchdata_preset(sql, arraysize=1, n=1000):
>     curs.arraysize = arraysize
>     curs.execute(sql)
>     curs.fetchmany(n)
>
> def fetchdata_postset(sql, arraysize=1, n=1000):
>     curs.arraysize = 1  # make sure this is set back to default
>     curs.execute(sql)
>     curs.arraysize = arraysize
>     curs.fetchmany(n)
>
>
> if __name__ == '__main__':
>     from timeit import Timer
>
>     sql = "\'select * from testtable\'"
>
>     t1a = Timer("fetchdata_preset(%s,1,1000)" % sql,
>                 "from __main__ import fetchdata_preset")
>     t1b = Timer("fetchdata_preset(%s,500,1000)" % sql,
>                 "from __main__ import fetchdata_preset")
>     t2a = Timer("fetchdata_postset(%s,1,1000)" % sql,
>                 "from __main__ import fetchdata_postset")
>     t2b = Timer("fetchdata_postset(%s,500,1000)" % sql,
>                 "from __main__ import fetchdata_postset")
>
>     statement = "Setting arraysize to %s %s execution yields %s"
>
>     print statement % (str(1), "before", str(t1a.timeit(1)))
>     print statement % (str(500), "before", str(t1b.timeit(1)))
>     print statement % (str(1), "after", str(t2a.timeit(1)))
>     print statement % (str(500), "after", str(t2b.timeit(1)))
>
>
>
> On May 27, 1:21 pm, Michael Bayer <[EMAIL PROTECTED]> wrote:
>> On May 27, 2008, at 3:21 PM, Waldemar Osuch wrote:
>>
>>> Quote:
>>
>>> Up to this point the default arraysize is 1 meaning that a single  
>>> row
>>> is internally fetched at a time. This has nothing to do with
>>> fetchone(), fetchmany() or fetchall(). Regardless of which of those
>>> methods is used, internally cx_Oracle fetches one row at a time. If
>>> you change the arraysize to 50, internally cx_Oracle will fetch 50
>>> rows at a time. Again, this is regardless of whether you use
>>> fetchone(), fetchmany() or fetchall(). Some of the confusion may lie
>>> in the fact that the default value for rows to fetch in  
>>> fetchmany() is
>>> the arraysize -- but that is all it is, a default value!
>>
>> see, that's a really unfortunate decision on his part to reuse
>> "arraysize" in such an arbitrary way like that, while *not* using it
>> at the point at which it is entirely reasonable, that is when you  
>> have
>> already stated you want to fetchmany(n) or fetchall().  This is
>> totally a bug in cx_oracle.
>>
>>> class MyConnection(cx_Oracle.Connection):
>>>
>>>     def cursor(self):
>>>         cursor = cx_Oracle.Cursor(self)
>>>         cursor.arraysize = 50
>>>         return cursor
>>
>>> What this does is automatically set the arraysize to 50 every time a
>>> cursor is created. This can be done to transparently set the  
>>> arraysize
>>> and should allow you to proceed with whatever code needs to assume  
>>> an
>>> arraysize of that value. Otherwise you can feel free to change it
>>> yourself after creating the cursor.
>>
>>> And as has already been noted, in the next release of cx_Oracle, the
>>> default arraysize will be 50 in order to resolve this problem
>>> "permanently". :-)
>>
>> that workaround works, and also implementing "default_arraysize"
>> within OracleDialect as follows is acceptable.   If someone can test
>> this and post a trac ticket I can commit this to 0.4/0.5:
>>
>> Index: lib/sqlalchemy/databases/oracle.py
>> ===================================================================
>> --- lib/sqlalchemy/databases/oracle.py  (revision 4819)
>> +++ lib/sqlalchemy/databases/oracle.py  (working copy)
>> @@ -213,6 +213,12 @@
>>                  self.out_parameters[name] = self.cursor.var(dbtype)
>>                  self.parameters[0][name] = self.out_parameters[name]
>>
>> +    def create_cursor(self):
>> +        cursor = self._connection.connection.cursor()
>> +        if self.dialect.default_arraysize:
>> +            cursor.arraysize = self.dialect.default_arraysize
>> +        return cursor
>> +
>>      def get_result_proxy(self):
>>          if hasattr(self, 'out_parameters'):
>>              if self.compiled_parameters is not None and len(

[sqlalchemy] Re: mapper >> 2 python property to 1 sql column table

2008-05-30 Thread Michael Bayer


On May 30, 2008, at 9:27 AM, Sed wrote:

>
> Sorry for the subject, I wasn't able to find a good one.
>
> I'd like to have 2 different Python attributes mapped to one
> SQL column.
>
> e.g.:
>
> person_table = Table('person', metadata
>   ,Column('id', Integer, primary_key=True)
> )
>
> class Person(object):
>   pass
>
> and I'd like to do something like
> mapper_person=mapper(Person, person_table, properties=
>   {'user_id':person_table.c.id
>   ,'id':person_table.c.id}
> )
>
> How can I achieve such a thing?
>

use mapper(Person, person_table,
properties={'user_id': synonym('id')}).  There are some docs at
http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_overriding
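
[A minimal sketch of that suggestion, assuming SQLAlchemy 0.4-style classical
mapping; everything except synonym() is taken from the original post.]

from sqlalchemy import Table, Column, Integer, MetaData
from sqlalchemy.orm import mapper, synonym

metadata = MetaData()

person_table = Table('person', metadata,
    Column('id', Integer, primary_key=True),
)

class Person(object):
    pass

# 'user_id' is mapped as a synonym of the 'id' property, so both names
# refer to the same person.id column.
mapper(Person, person_table, properties={
    'user_id': synonym('id'),
})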





[sqlalchemy] Re: ordering

2008-05-30 Thread Michael Bayer


On May 30, 2008, at 12:25 PM, Geoff wrote:

>
> Hi!
>
> I've noticed that a very simple query has an ordering applied to it
> even though I haven't asked for one. Is there a way to stop it doing
> that?
>
> the query:
> Session.query(User).set_shard(shard).filter_by(uuid=uuid).all()
>

order_by(None) on Query, order_by=None on mapper(), order_by=None on  
relation(),  or upgrade to 0.5; in 0.5 all the default "order_by"  
stuff is removed.
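
[Hedged fragments showing where each option from the reply goes; User,
users_table, Account, accounts_table and the session/shard setup are
placeholders carried over from the question, not runnable on their own.]

# per-query: wipe any inherited ordering for this query only
Session.query(User).set_shard(shard).filter_by(uuid=uuid).order_by(None).all()

# at mapper() configuration time
mapper(User, users_table, order_by=None)

# on a relation() that would otherwise supply a default ordering
mapper(Account, accounts_table, properties={
    'users': relation(User, order_by=None),
})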






[sqlalchemy] Re: exclude_properties in .5

2008-05-30 Thread Michael Bayer


We have a unit test for this which passes (test/orm/mapper.py  
MapperTest.test_prop_filters).  Can you create a test script which  
illustrates the issue?
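
[A hedged sketch of what such a minimal script might look like; the table,
columns and class here are made up for illustration, not taken from Justin's
code.]

from sqlalchemy import MetaData, Table, Column, Integer, String
from sqlalchemy.orm import mapper, compile_mappers

metadata = MetaData()

users = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(40)),
    Column('secret', String(40)),
)

class User(object):
    pass

# 'secret' should not become a mapped attribute
mapper(User, users, exclude_properties=['secret'])
compile_mappers()

assert hasattr(User, 'name')
assert not hasattr(User, 'secret'), "exclude_properties was ignored"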



On May 29, 2008, at 4:06 PM, Justin Tulloss wrote:

>
> Hello,
>
> I've been playing around with .5 a bit since I like the ability to
> select individual columns a lot. I've noticed, however, that the
> exclude_properties for mapper() doesn't seem to work. The mapped
> object still has the properties I've put in that list.
>
> Justin
>





[sqlalchemy] Re: ordering

2008-05-30 Thread Bobby Impollonia

You can add
.order_by(None)
to the query to remove the default ordering.

On Fri, May 30, 2008 at 12:25 PM, Geoff <[EMAIL PROTECTED]> wrote:
>
> Hi!
>
> I've noticed that a very simple query has an ordering applied to it
> even though I haven't asked for one. Is there a way to stop it doing
> that?
>
> the query:
> Session.query(User).set_shard(shard).filter_by(uuid=uuid).all()
>
> Thanks!




[sqlalchemy] ordering

2008-05-30 Thread Geoff

Hi!

I've noticed that a very simple query has an ordering applied to it
even though I haven't asked for one. Is there a way to stop it doing
that?

the query:
Session.query(User).set_shard(shard).filter_by(uuid=uuid).all()

Thanks!





[sqlalchemy] mapper >> 2 python property to 1 sql column table

2008-05-30 Thread Sed

Sorry for the subject, I wasn't able to find a good one.

I'd like to have 2 different Python attributes mapped to one
SQL column.

e.g.:

person_table = Table('person', metadata
   ,Column('id', Integer, primary_key=True)
)

class Person(object):
   pass

and I'd like to do something like
mapper_person=mapper(Person, person_table, properties=
   {'user_id':person_table.c.id
   ,'id':person_table.c.id}
)

How can I achieve such a thing?

Sed




[sqlalchemy] exclude_properties in .5

2008-05-30 Thread Justin Tulloss

Hello,

I've been playing around with .5 a bit since I like the ability to
select individual columns a lot. I've noticed, however, that the
exclude_properties for mapper() doesn't seem to work. The mapped
object still has the properties I've put in that list.

Justin




[sqlalchemy] Re: how to see table mappers and unload them if needed

2008-05-30 Thread coder_gus


I got it ... sorry for spamming. clear_mappers() does it.

The thing is that I have the model for a table whose mappers have to be 
cleared before the model is serialized and sent over the wire using Twisted's 
Perspective Broker, and the same table then has to be remapped on the server 
to a child of the model which subclasses pb.RemoteCopy. It works now ... 
sorry for asking before doing deeper research.
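
[A hedged sketch of the clear-and-remap pattern described above; the table,
class names and the pb.RemoteCopy child are placeholders, not coder_gus's
actual code.]

from sqlalchemy import MetaData, Table, Column, Integer, String
from sqlalchemy.orm import mapper, clear_mappers

metadata = MetaData()
items = Table('items', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(40)),
)

class Item(object):
    pass

mapper(Item, items)

# drop every mapper so the plain class and table can be shipped elsewhere
clear_mappers()

# ... later, on the receiving side, map the same table to a subclass
class RemoteItem(Item):
    pass

mapper(RemoteItem, items)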

Thanks.
coder_gus wrote:
> Is there a way I can get all the mappers for a Table? And if yes how can 
> I unload them?
>
> Thanks.





[sqlalchemy] how to see table mappers and unload them if needed

2008-05-30 Thread coder_gus

Is there a way I can get all the mappers for a Table? And if yes how can 
I unload them?

Thanks.





[sqlalchemy] Re: Interval and mx.TimeDelta

2008-05-30 Thread coder_gus

Probably you could get away with

import psycopg2.extensions

psycopg2.extensions.register_type(psycopg2._psycopg.MXDATETIME)
psycopg2.extensions.register_type(psycopg2._psycopg.MXINTERVAL)
psycopg2.extensions.register_type(psycopg2._psycopg.MXDATE)
psycopg2.extensions.register_type(psycopg2._psycopg.MXTIME)

assuming that you have psycopg2 compiled with the appropriate mx headers.

Hope it helps.

Michael Bayer wrote:
> my understanding is that mx is no longer supported with psycopg2.  You  
> might want to ask on their list; SA just passes through objects for  
> PG's date/time types.
>
> On May 29, 2008, at 2:27 PM, Eoghan Murray wrote:
>
>   
>> Hi,
>>
>> I'm trying to use mx.TimeDelta instead of datetime.timedelta on an
>> Interval column (with Postgres).
>> It seems to work fine until I get to intervals > a day, at which point
>> it fails with:
>>
>> ProgrammingError: (ProgrammingError) invalid input syntax for type
>> interval: "1:00:00:00.00"
>> 'INSERT INTO mytable (id, mytime) VALUES (%(id)s, %(mytime)s)'
>> {'mytime': <DateTimeDelta object for '1:00:00:00.00' at 9a5bdb8>, 'id': 105461L}
>>
>> Any idea where I could hook in to give the desired "24:00:00.00"?
>>
>> Thanks!
>>
>> Eoghan

