[sqlalchemy] Replacing a Session Engine

2008-11-24 Thread Koen Bok

Hey I was wondering if it was possible to replace an engine in a
session. It does not seem to work, but maybe I'm doing something
really stupid.

http://pastie.org/322501

Kindest regards,

Koen Bok - madebysofa.com





[sqlalchemy] Re: inserting

2008-11-24 Thread MikeCo

Oops, not quite right. str(table.c.colname) returns 'table.colname',
and that doesn't work as a dictionary key. You need the bare column
name as the dictionary key.

http://pastebin.com/fd0653b0  has some tests

The interesting question is: does SA intend for 'table.colname' to work
as a key in the dictionary?

--
Mike

On Nov 23, 8:58 am, MikeCo [EMAIL PROTECTED] wrote:
 Your dictionary key CartItemTable.c.colname is an instance of class
 Column; the dictionary keys need to be strings. Use
 str(CartItemTable.c.colname) to get the string name of the column and
 it should work.

  >>> CartItemTable.c.userId
  Column('userId', Integer(), ForeignKey('User.userId'),
         table=<CartItem>, primary_key=True, nullable=False)
  >>> str(CartItemTable.c.userId)
  'CartItem.userId'

 --
 Mike

 On Nov 23, 8:12 am, Petr Kobalíček [EMAIL PROTECTED] wrote:

  Hi devs,

  I don't understand one thing:

  I have a table:

  CartItemTable = sql.Table(
    'CartItem', meta.metadata,

    # Relations
    sql.Column('userId',    sql.Integer,
               sql.ForeignKey('User.userId'), nullable=False, primary_key=True),
    sql.Column('productId', sql.Integer,
               sql.ForeignKey('Product.productId'), nullable=False,
               primary_key=True),
    sql.Column('variantId', sql.Integer, nullable=True, default=None),

    # Count of items in shopping cart
    sql.Column('count',     sql.Integer, nullable=False, default=1)
  )

  and I want to insert multiple rows into it using SQL:

        Session().execute(
          CartItemTable.insert(),
          [{
            CartItemTable.c.userId    : self.user.userId,
            CartItemTable.c.productId : item.product.productId,
            CartItemTable.c.variantId : vid(item.variant),
            CartItemTable.c.count     : item.count
          } for item in self.items]
        )

  But this does not work and I must do it this way:

        Session().execute(
          CartItemTable.insert(),
          [{
            'userId'    : self.user.userId,
            'productId' : item.product.productId,
            'variantId' : vid(item.variant),
            'count'     : item.count
          } for item in self.items]
        )

  Why does the first syntax not work, what am I missing?

  Cheers
  - Petr



[sqlalchemy] Re: Trouble with Strings getting converted to Unicode types

2008-11-24 Thread Michael Bayer

Harish Vishwanath wrote:
 Hello,

 Thanks for your input. I consulted PySqlite's docs:

 They have something like Connection.text_factory, which by default is set
 to unicode. If it is set to str, that is,

 connection.text_factory = str

 then it won't convert strings to Unicode.

 I glanced through the SQLA code, but I couldn't find a place where I can
 pass this option to SQLA so that it can in turn use it while obtaining
 the connection from PySqlite.

 Please let me know if there is a way of achieving this. Thanks!

use the creator argument or the listeners argument to create_engine:

create_engine('sqlite://', creator=my_connect_func)

or

create_engine('sqlite://foo.bar', listeners=[MyListener()])

MyListener is a sqlalchemy.interfaces.PoolListener.
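For illustration, a minimal sketch of the creator approach might look like
this (assuming the standard library sqlite3 DBAPI; the function name is
arbitrary):

    import sqlite3
    from sqlalchemy import create_engine

    def my_connect_func():
        # the connection pool calls this whenever it needs a new raw connection
        con = sqlite3.connect(':memory:')
        con.text_factory = str   # hand back plain str instead of unicode
        return con

    engine = create_engine('sqlite://', creator=my_connect_func)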






[sqlalchemy] Re: inserting

2008-11-24 Thread Michael Bayer

the actual Column object or its key can be placed in the dict.

MikeCo wrote:

 Oops, not quite right. str(table.c.colname) returns 'table.colname,
 and that doesn't work right as dictionary key. You need col only as
 dictionary key.

 http://pastebin.com/fd0653b0  has some tests

 Interesting question is does SA intend that table.colname work in
 the dictionary definition?

 --
 Mike

 On Nov 23, 8:58 am, MikeCo [EMAIL PROTECTED] wrote:
 Your dictionary key CartItemTable.c.colname is an instance of class
 Column, The dictionary keys need to be strings. Use str
 (CartItemTable.c.colname) to get the string name of the column and it
 should work.

  CartItemTable.c.userId

 Column('userId', Integer(), ForeignKey('User.userId'),
 table=CartItem, primary_key=True, nullable=False)
 str(CartItemTable.c.userId)

 'CartItem.userId'

 --
 Mike

  On Nov 23, 8:12 am, Petr Kobalíček [EMAIL PROTECTED] wrote:

  Hi devs,

  I don't understand one thing:

  I have table:

  CartItemTable = sql.Table(
    CartItem, meta.metadata,

    # Relations
    sql.Column(userId              , sql.Integer      ,
  sql.ForeignKey(User.userId), nullable=False, primary_key=True),
    sql.Column(productId           , sql.Integer      ,
  sql.ForeignKey(Product.productId), nullable=False,
  primary_key=True),
    sql.Column(variantId           , sql.Integer      , nullable=True,
  default=None),

    # Count of items in shopping cart
    sql.Column(count               , sql.Integer      ,
  nullable=False, default=1)
  )

  and I want to insert multiple rows to it using sql:

        Session().execute(
          CartItemTable.insert(),
          [{
            CartItemTable.c.userId    : self.user.userId,
            CartItemTable.c.productId : item.product.productId,
            CartItemTable.c.variantId : vid(item.variant),
            CartItemTable.c.count     : item.count
          } for item in self.items]
        )

  But this not works and I must use this way:

        Session().execute(
          CartItemTable.insert(),
          [{
            userId    : self.user.userId,
            productId : item.product.productId,
            variantId : vid(item.variant),
            count     : item.count
          } for item in self.items]
        )

  Why is not working first syntax, what em I missing ?

  Cheers
  - Petr
 






[sqlalchemy] Re: inserting

2008-11-24 Thread MikeCo

Using 0.5.0rc4 that doesn't seem to work, or what am I doing wrong?

The test, http://pastebin.com/fd0653b0 , shows that when using the
Column object, the values inserted are all None (test 1). When the key
is the fully qualified 'table.column' string, the value inserted is
always the default value for the column (test 3). It only works correctly
when the key is the string for the unqualified column name (tests 2 and 4).


On Nov 24, 10:37 am, Michael Bayer [EMAIL PROTECTED] wrote:
 the actual Column object or its key can be placed in the dict.

 MikeCo wrote:

  Oops, not quite right. str(table.c.colname) returns 'table.colname,
  and that doesn't work right as dictionary key. You need col only as
  dictionary key.

 http://pastebin.com/fd0653b0 has some tests

  Interesting question is does SA intend that table.colname work in
  the dictionary definition?

  --
  Mike

  On Nov 23, 8:58 am, MikeCo [EMAIL PROTECTED] wrote:
  Your dictionary key CartItemTable.c.colname is an instance of class
  Column, The dictionary keys need to be strings. Use str
  (CartItemTable.c.colname) to get the string name of the column and it
  should work.

   CartItemTable.c.userId

  Column('userId', Integer(), ForeignKey('User.userId'),
  table=CartItem, primary_key=True, nullable=False)
  str(CartItemTable.c.userId)

  'CartItem.userId'

  --
  Mike

   On Nov 23, 8:12 am, Petr Kobalíček [EMAIL PROTECTED] wrote:

   Hi devs,

   I don't understand one thing:

   I have table:

   CartItemTable = sql.Table(
     CartItem, meta.metadata,

     # Relations
     sql.Column(userId              , sql.Integer      ,
   sql.ForeignKey(User.userId), nullable=False, primary_key=True),
     sql.Column(productId           , sql.Integer      ,
   sql.ForeignKey(Product.productId), nullable=False,
   primary_key=True),
     sql.Column(variantId           , sql.Integer      , nullable=True,
   default=None),

     # Count of items in shopping cart
     sql.Column(count               , sql.Integer      ,
   nullable=False, default=1)
   )

   and I want to insert multiple rows to it using sql:

         Session().execute(
           CartItemTable.insert(),
           [{
             CartItemTable.c.userId    : self.user.userId,
             CartItemTable.c.productId : item.product.productId,
             CartItemTable.c.variantId : vid(item.variant),
             CartItemTable.c.count     : item.count
           } for item in self.items]
         )

   But this not works and I must use this way:

         Session().execute(
           CartItemTable.insert(),
           [{
             userId    : self.user.userId,
             productId : item.product.productId,
             variantId : vid(item.variant),
             count     : item.count
           } for item in self.items]
         )

   Why is not working first syntax, what em I missing ?

   Cheers
   - Petr



[sqlalchemy] Re: inserting

2008-11-24 Thread Michael Bayer

oh, right.  Column objects only work when you say insert().values(**dict).
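In other words, something like the following sketch (a cut-down,
hypothetical table, not the original CartItem definition):

    from sqlalchemy import create_engine, MetaData, Table, Column, Integer

    engine = create_engine('sqlite://')
    metadata = MetaData()
    cart_items = Table('CartItem', metadata,
        Column('userId', Integer, primary_key=True),
        Column('count', Integer, nullable=False, default=1))
    metadata.create_all(engine)

    # executemany: the per-row dicts must use plain string keys
    engine.execute(cart_items.insert(),
                   [{'userId': 1, 'count': 2},
                    {'userId': 2, 'count': 5}])

    # single-row statement: Column objects are accepted as keys by values()
    engine.execute(cart_items.insert().values({cart_items.c.userId: 3,
                                               cart_items.c.count: 1}))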


MikeCo wrote:

 Using 0.5.0rc4 doesn't seem to do that. or what am I doing wrong?

 The test, http://pastebin.com/fd0653b0 , looks like when using the
 Column object, the values inserted are all None (test 1). When the key
 is the fully qualified table.column, the value inserted is always
 the default value for the column (test 3). It only works correct when
 the key is the string for the unqualified column name (tests 2 and 4).


 On Nov 24, 10:37 am, Michael Bayer [EMAIL PROTECTED] wrote:
 the actual Column object or its key can be placed in the dict.

 MikeCo wrote:

  Oops, not quite right. str(table.c.colname) returns 'table.colname,
  and that doesn't work right as dictionary key. You need col only as
  dictionary key.

 http://pastebin.com/fd0653b0 has some tests

  Interesting question is does SA intend that table.colname work in
  the dictionary definition?

  --
  Mike

  On Nov 23, 8:58 am, MikeCo [EMAIL PROTECTED] wrote:
  Your dictionary key CartItemTable.c.colname is an instance of class
  Column, The dictionary keys need to be strings. Use str
  (CartItemTable.c.colname) to get the string name of the column and it
  should work.

   CartItemTable.c.userId

  Column('userId', Integer(), ForeignKey('User.userId'),
  table=CartItem, primary_key=True, nullable=False)
  str(CartItemTable.c.userId)

  'CartItem.userId'

  --
  Mike

   On Nov 23, 8:12 am, Petr Kobalíček [EMAIL PROTECTED]
 wrote:

   Hi devs,

   I don't understand one thing:

   I have table:

   CartItemTable = sql.Table(
     CartItem, meta.metadata,

     # Relations
     sql.Column(userId              , sql.Integer      ,
   sql.ForeignKey(User.userId), nullable=False, primary_key=True),
     sql.Column(productId           , sql.Integer      ,
   sql.ForeignKey(Product.productId), nullable=False,
   primary_key=True),
     sql.Column(variantId           , sql.Integer      ,
 nullable=True,
   default=None),

     # Count of items in shopping cart
     sql.Column(count               , sql.Integer      ,
   nullable=False, default=1)
   )

   and I want to insert multiple rows to it using sql:

         Session().execute(
           CartItemTable.insert(),
           [{
             CartItemTable.c.userId    : self.user.userId,
             CartItemTable.c.productId : item.product.productId,
             CartItemTable.c.variantId : vid(item.variant),
             CartItemTable.c.count     : item.count
           } for item in self.items]
         )

   But this not works and I must use this way:

         Session().execute(
           CartItemTable.insert(),
           [{
             userId    : self.user.userId,
             productId : item.product.productId,
             variantId : vid(item.variant),
             count     : item.count
           } for item in self.items]
         )

   Why is not working first syntax, what em I missing ?

   Cheers
   - Petr
 






[sqlalchemy] Re: Replacing a Session Engine

2008-11-24 Thread Michael Bayer


that is the way to do it as far as switching engines.  your example
doesn't work because the state of p1 is persistent as opposed to
pending, so the example updates a row that isn't there.

you need to build a copy constructor on Person and make a new,
non-persistent Person object for your second engine.   the Session can't
implicitly figure out what your intent is.
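A rough sketch of that idea (the Person mapping here is hypothetical, just
to show the shape of it):

    from sqlalchemy import create_engine, MetaData, Table, Column, Integer, Text
    from sqlalchemy.orm import mapper, sessionmaker

    engine_one = create_engine('sqlite://')
    engine_two = create_engine('sqlite://')

    metadata = MetaData()
    people = Table('people', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', Text))

    class Person(object):
        def __init__(self, name=None):
            self.name = name
        def copy(self):
            # a brand new, non-persistent instance carrying the same data
            return Person(name=self.name)

    mapper(Person, people)
    metadata.create_all(engine_one)
    metadata.create_all(engine_two)

    Session = sessionmaker()

    # p1 becomes persistent in a session bound to the first engine
    s1 = Session(bind=engine_one)
    p1 = Person('Koen')
    s1.add(p1)
    s1.commit()

    # the copy is pending, so it INSERTs cleanly through the second engine
    s2 = Session(bind=engine_two)
    s2.add(p1.copy())
    s2.commit()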

Koen Bok wrote:

 Hey I was wondering if it was possible to replace an engine in a
 session. It does not seem to work, but maybe I'm doing something
 really stupid.

 http://pastie.org/322501

 Kindest regards,

 Koen Bok - madebysofa.com


 






[sqlalchemy] Re: Replacing a Session Engine

2008-11-24 Thread Koen Bok

Ok, that sounds logical, but that defeats the purpose of it for me :-)

Whenever a Checkout user logs out I clear the session. On a new login
I create a fresh session that has to re-cache a lot of data. The logins
depend on a valid postgres user. If I could find a way to replace the
engine/connection for a session, it would keep the objects around,
making login much faster the second time.

Any ideas? :-)

- Koen

On Nov 24, 4:51 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 that is the way to do it as far as switching engines.  your example
 doesn't work because the state of p1 is persistent as opposed to
 pending, so the example updates a row that isn't there.

 you need to build a copy constructor on Person and make a new,
 non-persistent Person object for your second engine.   the Session can't
 implicitly figure out what your intent is.



 Koen Bok wrote:

  Hey I was wondering if it was possible to replace an engine in a
  session. It does not seem to work, but maybe I'm doing something
  really stupid.

 http://pastie.org/322501

  Kindest regards,

  Koen Bok - madebysofa.com



[sqlalchemy] Re: Replacing a Session Engine

2008-11-24 Thread az

Can't you cache all these objects elsewhere and then sess.merge() them?
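i.e. something along these lines (a sketch; the Person mapping and the
engines are the hypothetical ones from the sketch above, and old_session /
new_engine are placeholder names):

    # objects loaded during the previous login, kept in a plain list
    cache = list(old_session.query(Person))
    old_session.close()

    # after the new login, merge the cached state into the fresh session
    new_session = Session(bind=new_engine)
    for obj in cache:
        # dont_load=True (the 0.5 keyword) copies the cached state
        # without re-querying the database for each object
        new_session.merge(obj, dont_load=True)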

On Monday 24 November 2008 22:33:53 Koen Bok wrote:
 Ok, that sounds logical, but that defeats the purpose of it for me
 :-)

 Whenever a Checkout user logs out I clear the session. On a new
 login I create fresh session that has to re-cache lot of data. The
 logins depend on a valid postgres user. If I could find a way to
 replace the engine/connection for a session it would keep the
 objects around, making login way faster the second time.

 Any ideas? :-)

 - Koen

 On Nov 24, 4:51 pm, Michael Bayer [EMAIL PROTECTED] 
wrote:
  that is the way to do it as far as switching engines.  your
  example doesn't work because the state of p1 is persistent as
  opposed to pending, so the example updates a row that isn't
  there.
 
  you need to build a copy constructor on Person and make a new,
  non-persistent Person object for your second engine.   the
  Session can't implicitly figure out what your intent is.
 
  Koen Bok wrote:
   Hey I was wondering if it was possible to replace an engine in
   a session. It does not seem to work, but maybe I'm doing
   something really stupid.
  
  http://pastie.org/322501
  
   Kindest regards,
  
   Koen Bok - madebysofa.com

 





[sqlalchemy] Re: Invalid SQL for not None on relation attributes

2008-11-24 Thread Yoann Roman

Not yet, I'm still recovering from the first one :-).

On Nov 20, 10:50 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 very nice - this fix is applied in r5314.  It's not every day someone  
 gives us a patch for strategies.py...have any more ? :)

 On Nov 20, 2008, at 9:47 PM, Yoann Roman wrote:



  I'm getting invalid SQL when I try to filter for records that have no
  matching related record in a one-to-one relationship using the not_
  function. For example, with address being a relation from User:

  >>> print User.id==None
  users.id IS NULL
  >>> print not_(User.id==None)
  users.id IS NOT NULL
  >>> print User.address==None
  users.address_id IS NULL
  >>> print not_(User.address==None)
  users.address_id != NULL

  The last expression fails to return the correct records against MySQL.
  Below is a complete test script to reproduce the above output:

  from sqlalchemy import create_engine, Table, Column, Integer, String,
  Text, \
     MetaData, ForeignKeyConstraint, not_
  from sqlalchemy.orm import mapper, relation, sessionmaker

  # initialize the engine
  engine = create_engine('sqlite:///:memory:', echo=False)
  metadata = MetaData()

  # setup the users table
  users = Table('users', metadata,
     Column('id', Integer, primary_key=True),
     Column('name', String(50)),
     Column('address_id', Integer),
     ForeignKeyConstraint(['address_id'], ['addresses.id'])
  )

  # setup the addresses table
  addresses = Table('addresses', metadata,
     Column('id', Integer, primary_key=True),
     Column('address', Text),
  )

  # create the tables
  metadata.create_all(engine)

  # define the User class
  class User(object):
     def __repr__(self):
         return 'User %s' % self.name

  # define the Address class
  class Address(object):
     def __repr__(self):
         return 'Address %s' % self.address

  # setup the mapping
  mapper(Address, addresses)
  mapper(User, users, properties={
     'address': relation(Address, backref='user')
  })

  # create the session
  Session = sessionmaker(bind=engine)

  # perform the tests
  print User.id==None
  print not_(User.id==None)
  print User.address==None
  print not_(User.address==None)

  A possible fix seems to be to add binary.negate = operators.isnot
  below lines 394 and 397 of orm/strategies.py




[sqlalchemy] Poolclass argument to create_engine

2008-11-24 Thread Iain

I was wondering if it would be possible to get the code changed to
accept the poolclass argument to create_engine (and other functions?)
optionally as a string reference to a Pool subclass rather than
requiring the class itself.

The reason I ask is that SQLAlchemy is often a component of a larger
framework, and the framework will often create the engine itself,
allowing configuration via a config file. Config files usually cannot
hold Python objects other than the basic types (strings, numbers,
lists, tuples etc.), so using the poolclass option requires changing
the source of the framework, which is just not a good place to go :)

I have made a simple, temporary change to my 0.5rc4 version of
sqlalchemy/engine/strategies.py that you're welcome to work from but
I'm not saying it is particularly safe, elegant or otherwise :)

Thanks
Iain

--- strategies.py.old   2008-11-24 13:08:45.0 +1030
+++ strategies.py       2008-11-25 10:25:42.0 +1030
@@ -82,6 +82,9 @@

             poolclass = (kwargs.pop('poolclass', None) or
                          getattr(dialect_cls, 'poolclass',
                                  poollib.QueuePool))
+            if isinstance(poolclass, str):
+                exec("pcls = %s" % poolclass) in globals(), locals()
+                poolclass = pcls
             pool_args = {}

             # consume pool arguments from kwargs, translating a few of
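A somewhat safer variant of the same idea would be to look the string up on
sqlalchemy.pool instead of exec'ing configuration text (just a sketch, not
what the patch above does):

    from sqlalchemy import pool as poollib

    def resolve_poolclass(value):
        # accept either a Pool subclass or its name as found in sqlalchemy.pool
        if isinstance(value, basestring):
            cls = getattr(poollib, value, None)
            if cls is None or not (isinstance(cls, type) and
                                   issubclass(cls, poollib.Pool)):
                raise ValueError("unknown pool class: %r" % (value,))
            return cls
        return value

    # e.g. resolve_poolclass('NullPool') returns sqlalchemy.pool.NullPool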




[sqlalchemy] Questionable query times.

2008-11-24 Thread TheShadow

http://pastebin.ca/1266308

In the link above you will see some output and code.

Some questions I have:

1) Why are these queries taking roughly 10 times longer than they
should?
2) Could it be the implementation I am using? If so, what is
recommended for using the system as you see it (I can't use the ORM,
based on application requirements)?
3) Is the connection being reset between queries?

I'd appreciate any insight. I find sqlalchemy's DB connection scheme
fits the framework of the rest of the app but if these queries can't
be sped up then I'll have to find a different solution.




[sqlalchemy] Re: Questionable query times.

2008-11-24 Thread Michael Bayer


On Nov 24, 2008, at 8:50 PM, TheShadow wrote:


 http://pastebin.ca/1266308

 In the link above you will see some output and code.

 Some questions I have:

 1) Why are these queries taking on a magnitude of 10 times slower than
 they should be?

 2) Could it be the implementation I am using? If so what is
 recommended for using the system as you see it (I can't use ORM based
 on application requirements)

 3) Is the connection being reset between queries?

 I'd appreciate any insight. I find sqlalchemy's DB connection scheme
 fits the framework of the rest of the app but if these queries can't
 be sped up then I'll have to find a different solution.

The magnitude of 10 times slower figure is questionable since you  
didn't post any alternate implementation which illustrates it running  
ten times faster, but I'm assuming that's against a system without  
autocommit.

As for the code, you are relying upon connectionless execution without
an explicit transaction, so there is a COMMIT occurring after each
INSERT and UPDATE.  This is probably what is making the operation run
slower than it seems it should be.  Connections are not reset by
SQLAlchemy, they are pooled, and in this case each execution results
in a connection pool checkout, but that's it.  If you're going for
speed you'll get much better results without the ORM.  The SQLA  
expression/execution system without the ORM might add a 10-20% latency  
over raw DBAPI but nothing severe, provided you control your  
transactional scope.

The threadlocal strategy is also not needed here unless you plan on  
calling begin()/commit() on your engine - but in this case if you did  
issue your statements within a begin()/commit() pair, the latency of  
each COMMIT would be removed and you'd also eliminate any connection  
pool traffic until the commit().
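Concretely, that pattern looks something like this (a sketch; the engine,
table and row data are placeholders, not taken from the pastebin):

    conn = engine.connect()
    trans = conn.begin()
    try:
        # all statements run on one connection, inside one transaction,
        # so only a single COMMIT is issued at the end
        for row in rows_to_insert:
            conn.execute(my_table.insert(), row)
        trans.commit()
    except:
        trans.rollback()
        raise
    finally:
        conn.close()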





[sqlalchemy] Re: Trouble with Strings getting converted to Unicode types

2008-11-24 Thread Michael Bayer

On Nov 24, 2008, at 11:47 PM, Harish Vishwanath wrote:

 Hello,

 Thanks for your reply. Please see the below implementation :

 Implementation 1:

 >>> def my_con_func():
 ...     print "My Connection func"
 ...     import sqlite3.dbapi2 as sqlite
 ...     con = sqlite.connect(":memory:")
 ...     con.text_factory = str
 ...     return con
 ...
 >>> engine = create_engine("sqlite:///", creator=my_con_func)
 >>> engine
 Engine(sqlite:///)
 >>> engine.connect()
 My Connection func
 <sqlalchemy.engine.base.Connection object at 0x01B1ADB0>
 >>> engine.connect().connection.connection.text_factory
 <type 'str'>


 Implementation 2:

 >>> anotherengine = create_engine("sqlite:///:memory:")
 >>> anotherengine.connect().connection.connection.text_factory = str

 I would like to know which one is better, since I am afraid I may be
 missing some built-in create_engine behavior by returning a custom
 connection object (in Implementation 1). Please let me know your
 suggestion.

go with this one:

from sqlalchemy.interfaces import PoolListener

class SetTextFactory(PoolListener):
    def connect(self, dbapi_con, con_record):
        dbapi_con.text_factory = str

engine = create_engine('sqlite://', listeners=[SetTextFactory()])

implementation 1 is fine, you can stay with it if you want - I just
think creator is a little clunky since you have to re-import sqlite
and connect manually.   implementation 2 will only work for a
single-threaded application and will break if the pool implementation
is changed, unless you set up text_factory every time connect() is
called (which is usually impossible).





[sqlalchemy] Re: Poolclass argument to create_engine

2008-11-24 Thread Michael Bayer


which frameworks are we talking about here ?   since yes I think if a  
framework is going to take over the job of calling create_engine()  
then they should provide proper inputs.  There's a lot of things which  
can be passed to create_engine() which don't work as strings.


On Nov 24, 2008, at 7:01 PM, Iain wrote:


 I was wondering if it would be possible to get the code changed to
 accept the poolclass argument to create_engine (and other functions?)
 optionally as a string reference to a Pool subclass rather than
 requiring the class itself.

 The reason I ask is because SQLAlchemy is often a component of a
 larger framework and the framework will often create the engine
 itself, allowing configuration by a config file which usually cannot
 handle having python objects other than the basic types strings,
 numbers, lists, tuples etc, so using the poolclass option requires
 changing the source of the framework which is just not a good place to
 go :)

 I have made a simple, temporary change to my 0.5rc4 version of
 sqlalchemy/engine/strategies.py that you're welcome to work from but
 I'm not saying it is particularly safe, elegant or otherwise :)

 Thanks
 Iain

 --- strategies.py.old   2008-11-24 13:08:45.0 +1030
 +++ strategies.py   2008-11-25 10:25:42.0 +1030
 @@ -82,6 +82,9 @@

 poolclass = (kwargs.pop('poolclass', None) or
  getattr(dialect_cls, 'poolclass',
 poollib.QueuePool))
 +if isinstance(poolclass,str):
 + exec(pcls = %s % poolclass) in globals(), locals()
 + poolclass = pcls
 pool_args = {}

 # consume pool arguments from kwargs, translating a few
 of

 





[sqlalchemy] Re: Questionable query times.

2008-11-24 Thread TheShadow

First, I appreciate your responding, but I need more information as to
why this is taking so long, because of the following:

When I started looking for a DBAPI I started with the MySQLdb module.
Executing queries against localhost was faster than against a remote
host, but even against remote hosts the queries in my sample took at
most 0.1 seconds and averaged 0.05 (this is with forcing a commit). I
decided to move to something else because it didn't support bound
variables the way I needed. It is incredibly difficult for me to accept
that simple SELECTs are taking 0.1 seconds on average when directly on
the DB server I'm seeing under 0.006. So it makes me wonder what is
happening under the hood that is eating hundreds of milliseconds. I
don't see results like this in PHP or Java, so I have to ask why, when
I'm basically bypassing all the ORM does, it takes so long. Currently
I'm seeing a 2 second round trip for 6 queries, which is just
unacceptable.

On the other side I didn't show any other implementations because
there is a severe lack of documentation/samples when it comes to
bypassing the ORM layer. I've asked in #sqlalchemy on freenode but it
doesn't appear that many people attempt what I am.
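One way to pin the overhead down is to time the same statement through the
raw DBAPI and through the SQLAlchemy expression layer side by side, for
example (a sketch; the connection parameters and table are placeholders):

    import time
    import MySQLdb
    from sqlalchemy import create_engine, MetaData, Table

    def timeit(label, fn, n=100):
        # average wall-clock time per call over n runs
        start = time.time()
        for _ in xrange(n):
            fn()
        print label, (time.time() - start) / n

    raw = MySQLdb.connect(host='dbhost', user='user', passwd='pw', db='test')
    def raw_select():
        cur = raw.cursor()
        cur.execute("SELECT * FROM my_table WHERE id = %s", (1,))
        cur.fetchall()
        cur.close()

    engine = create_engine('mysql://user:pw@dbhost/test')
    meta = MetaData()
    my_table = Table('my_table', meta, autoload=True, autoload_with=engine)
    def sqla_select():
        engine.execute(my_table.select(my_table.c.id == 1)).fetchall()

    timeit('raw dbapi:  ', raw_select)
    timeit('sqlalchemy: ', sqla_select)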

On Nov 24, 10:29 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Nov 24, 2008, at 8:50 PM, TheShadow wrote:





 http://pastebin.ca/1266308

  In the link above you will see some output and code.

  Some questions I have:

  1) Why are these queries taking on a magnitude of 10 times slower than
  they should be?

  2) Could it be the implementation I am using? If so what is
  recommended for using the system as you see it (I can't use ORM based
  on application requirements)

  3) Is the connection being reset between queries?

  I'd appreciate any insight. I find sqlalchemy's DB connection scheme
  fits the framework of the rest of the app but if these queries can't
  be sped up then I'll have to find a different solution.

 The magnitude of 10 times slower figure is questionable since you  
 didn't post any alternate implementation which illustrates it running  
 ten times faster, but I'm assuming that's against a system without  
 autocommit.

 As for the code, you are relying upon connectionless execution without  
 an explicit transaction so there is a COMMIT occuring after each  
 INSERT and UPDATE.  This is probably what is making the operation run  
 slower than it seems it should be.  Connections are not reset by  
 SQLAlchemy, they are pooled, and in this case each execution results  
 in a connection pool checkout, but that's it.    If you're going for  
 speed you'll get much better results without the ORM.  The SQLA  
 expression/execution system without the ORM might add a 10-20% latency  
 over raw DBAPI but nothing severe, provided you control your  
 transactional scope.

 The threadlocal strategy is also not needed here unless you plan on  
 calling begin()/commit() on your engine - but in this case if you did  
 issue your statements within a begin()/commit() pair, the latency of  
 each COMMIT would be removed and you'd also eliminate any connection  
 pool traffic until the commit().




[sqlalchemy] Re: Questionable query times.

2008-11-24 Thread Michael Bayer


On Nov 25, 2008, at 12:51 AM, TheShadow wrote:


 First I appreciate your responding. But I need more information as to
 why this is taking so long and this is because of the following:

 When I started looking for a DBAPI I started with the MySQLdb module
 executing queries against local host versus a remote host was faster
 but even against remote hosts running the queries in my sample still
 at most took 0.1 seconds and averaging 0.05. (This is with forcing a
 commit). I decided to move to something else because it didn't support
 binded variables the way I needed. It is incredibly difficult for me
 to accept that simple SELECT's are taking 0.1 seconds on average when
 directly on the DB server I'm looking at under 0.006. So it makes me
 wonder what is happening under the hood that is eating hundreds of
 milliseconds. I don't see results like this in PHP or Java so I'm
 going to have to ask why when I'm basically bypassing all the ORM does
 it take so long. Currently I'm showing a 2 second round trip for 6
 queries which is just unacceptable.

a 2 second round trip for 6 queries, where the SELECT statements  
return just one or two rows, sounds very slow and indicates either  
network latency or some issue with the MySQL database.   But I'm still  
not clear on what results you get with which kinds of setup.

In any case you should be running the Python profiler, preferably  
hotshot, to see where time is being spent.  It sounds like you're  
going to see most of the time spent sending/receiving over the  
cursor.  MySQLdb and SQLA expressions shouldn't add very much overhead.

 On the other side I didn't show any other implementations because
 there is a severe lack of documentation/samples when it comes to
 bypassing the ORM layer. I've asked in #sqlalchemy on freenode but it
 doesn't appear that many people attempt what I am.

I have to disagree here.  The majority of SQLAlchemy's documentation  
chapters do not deal with the ORM. The ORM layer does not need to  
be bypassed, it simply is not used.   This is described on the first  
page of the documentation at 
http://www.sqlalchemy.org/docs/05/intro.html#overview_overview 
  as well as in the following section on tutorials.

The example you did show is using SQLAlchemy, without the ORM. 
Information on how the engine deals with execution and transactions  
are here: http://www.sqlalchemy.org/docs/05/dbengine.html .

If you'd like to compare  SQLAlchemy to a pure DBAPI implementation,  
the documentation for DBAPI itself is outside of SQLAlchemy's domain.   
For that, you want to go here: http://www.python.org/dev/peps/pep-0249/

Similarly, documentation for how to profile Python programs is also  
outside of SQLAlchemy's domain.  For that, you'd want to go here:  
http://www.python.org/doc/2.5.2/lib/profile.html 
   and then here:  http://www.python.org/doc/2.5.2/lib/module-hotshot.html 
  .  We've observed that hotshot shows more accurate results.

When running the profiler, make sure you look at the total time spent  
for operations.  MySQLdb and SQLAlchemy are both going to add several  
hundred Python function calls to the profile - but despite this, they  
will mostly have extremely negligible amounts of time added - its  
usually the case that the network calls on the cursor is where more  
than half the time actually adds up.
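For reference, a minimal hotshot run might look like this, with run_queries
standing in for whatever function drives the six statements:

    import hotshot, hotshot.stats

    prof = hotshot.Profile('queries.prof')
    prof.runcall(run_queries)       # run_queries() is your own code
    prof.close()

    stats = hotshot.stats.load('queries.prof')
    stats.sort_stats('cumulative')
    stats.print_stats(25)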







[sqlalchemy] Re: Poolclass argument to create_engine

2008-11-24 Thread Iain

I was talking about TurboGears.

I must admit I had assumed that these were considered details that
most frameworks would prefer to hide away as TG does, but a quick look
over at Pylons suggests they've implemented a somewhat more
customizable way of initializing the engine, though it involves a bit
more coding (the usual story).

Given that TG will soon be based on Pylons, and Pylons is alright, maybe
it's not worth the trouble. I hadn't stopped to look at what other
options might cause problems.


On Nov 25, 3:41 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 which frameworks are we talking about here ?   since yes I think if a  
 framework is going to take over the job of calling create_engine()  
 then they should provide proper inputs.  There's a lot of things which  
 can be passed to create_engine() which don't work as strings.

 On Nov 24, 2008, at 7:01 PM, Iain wrote:



  I was wondering if it would be possible to get the code changed to
  accept the poolclass argument to create_engine (and other functions?)
  optionally as a string reference to a Pool subclass rather than
  requiring the class itself.

  The reason I ask is because SQLAlchemy is often a component of a
  larger framework and the framework will often create the engine
  itself, allowing configuration by a config file which usually cannot
  handle having python objects other than the basic types strings,
  numbers, lists, tuples etc, so using the poolclass option requires
  changing the source of the framework which is just not a good place to
  go :)

  I have made a simple, temporary change to my 0.5rc4 version of
  sqlalchemy/engine/strategies.py that you're welcome to work from but
  I'm not saying it is particularly safe, elegant or otherwise :)

  Thanks
  Iain

  --- strategies.py.old   2008-11-24 13:08:45.0 +1030
  +++ strategies.py       2008-11-25 10:25:42.0 +1030
  @@ -82,6 +82,9 @@

              poolclass = (kwargs.pop('poolclass', None) or
                           getattr(dialect_cls, 'poolclass',
  poollib.QueuePool))
  +            if isinstance(poolclass,str):
  +                 exec(pcls = %s % poolclass) in globals(), locals()
  +                 poolclass = pcls
              pool_args = {}

              # consume pool arguments from kwargs, translating a few
  of





[sqlalchemy] Selecting from a self-referential mapper: recursive joins?

2008-11-24 Thread Eric Ongerth

Below, I have attached a working testcase.  It works, yes -- but my
question is that I need to make an improved version of a particular
method on one of my classes.  The following model will probably
explain itself for the most part.  I'll let you read it first, then
offer a few explanatory notes afterward just in case.  Finally, at the
end, I will describe the difference between what the method in
question does now, and what I would like it to do.

The nature of the response I am seeking is: a description of what I
need to do to build a better version of the method I'm speaking of,
including any further insight on the practice of joining at multiple
levels of a recursive / self-referential (but loop-free) graph.


---snip---


from sqlalchemy import *
from sqlalchemy.sql import *
from sqlalchemy.orm import *

engine = create_engine('sqlite://')

metadata = MetaData(bind=engine)

itemtypes = Table('itemtypes', metadata,
Column('name', Text, primary_key=True))

itemtype_inheritance = Table('itemtype_inheritance', metadata,
Column('itemtype_name', Text, ForeignKey('itemtypes.name'),
primary_key=True),
Column('parent_name', Text, ForeignKey('itemtypes.name'),
primary_key=True))

features = Table('features', metadata,
Column('id', Integer, primary_key=True),
Column('name', Text),
Column('root_itemtype_name', Text, ForeignKey('itemtypes.name')))

feature_dependencies = Table('feature_dependencies', metadata,
Column('dependent_id', Integer, ForeignKey('features.id'),
primary_key=True),
Column('determinant_id', Integer, ForeignKey('features.id'),
primary_key=True))

metadata.drop_all()
metadata.create_all()

itemtypes.insert().execute([
{'name': 'Product'},
{'name': 'Footwear'},
{'name': 'Boot'},
{'name': 'Ski'}
])

itemtype_inheritance.insert().execute([
{'itemtype_name': 'Footwear', 'parent_name': 'Product'},
{'itemtype_name': 'Boot', 'parent_name': 'Footwear'},
{'itemtype_name': 'Ski', 'parent_name': 'Product'}
])

features.insert().execute([
{'id': 1, 'name': 'Manufacturer',
'root_itemtype_name':'Product' },
{'id': 2, 'name': 'Model', 'root_itemtype_name':'Product' },
{'id': 3, 'name': 'Year', 'root_itemtype_name':'Product' },
{'id': 4, 'name': 'Gender', 'root_itemtype_name':'Footwear' },
{'id': 5, 'name': 'US Shoe Size',
'root_itemtype_name':'Footwear' },
{'id': 6, 'name': 'Length', 'root_itemtype_name':'Ski' },
{'id': 7, 'name': 'Weight', 'root_itemtype_name':'Product' }

])

feature_dependencies.insert().execute([
{'dependent_id': 7, 'determinant_id': 1},
{'dependent_id': 7, 'determinant_id': 2},
{'dependent_id': 7, 'determinant_id': 3},
{'dependent_id': 7, 'determinant_id': 4},
{'dependent_id': 7, 'determinant_id': 5},
{'dependent_id': 7, 'determinant_id': 6}
])


class Itemtype(object):

    def __repr__(self):
        return 'Itemtype: %s' % (self.name)

    @property
    def inherited_features(self):
        # gather the features of every itemtype this one inherits from
        return reduce(lambda acc, feats: acc + feats,
                      [base_itemtype.features for base_itemtype in self.inherits],
                      [])

    @property
    def features(self):
        return list(self.own_features) + self.inherited_features

    @property
    def dependent_features(self):
        return [f for f in self.features if f.determinants]

    @property
    def independent_features(self):
        return [f for f in self.features if not f.determinants]


class Feature(object):

    def __repr__(self):
        return '%s %s' % (self.root_itemtype_name, self.name)

    def determinants_in_scope_of(self, itemtype):
        return (session.query(Feature)
                .join(FeatureDependency.determinant)
                .join(Feature.root_itemtype)
                .filter(and_(FeatureDependency.dependent_id==self.id,
                             Itemtype.name==itemtype.name))).all()


class FeatureDependency(object):

    def __repr__(self):
        return "F_D: %s depends on %s" % (self.dependent.name,
                                          self.determinant.name)



mapper(Itemtype, itemtypes, properties={
'inherits':relation(Itemtype,
secondary=itemtype_inheritance,
primaryjoin=
(itemtypes.c.name==itemtype_inheritance.c.itemtype_name),
secondaryjoin=
(itemtype_inheritance.c.parent_name==itemtypes.c.name),
backref='progeny'),
'own_features':relation(Feature,
primaryjoin=(features.c.root_itemtype_name==itemtypes.c.name),
backref=backref('root_itemtype', uselist=False))
})

mapper(Feature, features, properties={
'dependents':relation(Feature,
secondary=feature_dependencies,
primaryjoin=
(feature_dependencies.c.determinant_id==features.c.id),
secondaryjoin=
(feature_dependencies.c.dependent_id==features.c.id),
backref=backref('determinants'))
})

mapper(FeatureDependency, feature_dependencies, properties={
'dependent':relation(Feature,
uselist=False,
primaryjoin=