[sqlalchemy] I can replace an object with another in python, by changing the object's __dict__ to the other object's __dict__. How does it settle with sqlalchemy? Does it work when another mapped object points to it?

2008-12-03 Thread [EMAIL PROTECTED]

I posted this on the usage recipes page, but no one has answered yet.

The reason I'm doing this is to solve the following problem: I have an
object composed of the fields obj_id, num1, num2 and num3, where
obj_id is the primary key. I want to create a save method for the
object's class that does the following:
If the object is in the session, save it.
Otherwise, if there is another object in the db with the same num1 and
num2, use the object from the db instead of the current one, and warn
the user if num3 is different.
So it's quite like merge, but not necessarily on the primary key. Now,
I don't want "use the object from the db" to mean returning an object
after the merge (the original object if it didn't exist, the db object
if it did); I want to really replace it (self.__dict__ =
existing_object.__dict__), so one can use that object afterwards
without being confused.


I know that I can add a constraint in the DB making num1 and num2
unique so that such an object won't be created (the user will get an
exception), but that wouldn't really fix my problem, because it's
important for my users not to get that kind of exception (especially
because I have another mapped class that is the container of many of
these objects, so when they save that class they don't want to deal
with the case that one of the objects already existed).
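A minimal sketch of the save method described here (hypothetical helper; session.add() is the 0.5 spelling, session.save() in 0.4; whether swapping __dict__ like this stays consistent with the Session's identity map is exactly the open question):

import warnings

def save_or_adopt(obj, session):
    # look for an existing row with the same num1/num2 (not the primary key)
    existing = (session.query(type(obj))
                       .filter_by(num1=obj.num1, num2=obj.num2)
                       .first())
    if existing is None or existing is obj:
        session.add(obj)
        return obj
    if existing.num3 != obj.num3:
        warnings.warn("num3 differs from the row already in the database")
    # adopt the persistent object's state so callers can keep using `obj`
    obj.__dict__ = existing.__dict__
    return obj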




[sqlalchemy] Re: metadata reflecting all schemas

2008-11-30 Thread [EMAIL PROTECTED]

any sqlalchemy ways of retrieving a list of schemas?

On Nov 30, 1:23 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Nov 29, 2008, at 6:05 PM, [EMAIL PROTECTED] wrote:



  Thanks Michael

  If its just a warning and its supposed to continue past it, Why
  doesn't it finish reflecting all the tables in all the schemas instead
  of a few tables in two schemas.

 that would be a different issue.   But I would note that
 metadata.reflect() only reflects one schema at a time, either the
 tables within the default schema, or those within the schema name
 which you specify.

  I think it retrieved all the tables in the first schema which i
  specified and followed the foreign keys to retrieve the metadata for
  the second tables.

 that's what it would do, yup.

  Any suggestions on how i can reflect a list of schemas or make it
  reflect all the schemas? it didn't like '%' as the schema name.

 you have to retrieve the list of desired schemas manually, then call
 reflect() for each one.
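A minimal sketch of that approach for PostgreSQL (the catalog query is an assumption; the original post hoped to avoid it, but it is the usual route):

from sqlalchemy import create_engine, MetaData

engine = create_engine('postgres://user:pass@localhost/mydb')  # illustrative URL
metadata = MetaData()

# list the schemas, skipping PostgreSQL's internal ones
rows = engine.execute(
    "SELECT schema_name FROM information_schema.schemata "
    "WHERE schema_name NOT IN ('information_schema', 'pg_catalog', 'pg_toast')")
for (schema_name,) in rows:
    metadata.reflect(bind=engine, schema=schema_name)

for table in metadata.tables.values():
    print table.fullname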



[sqlalchemy] metadata reflecting all schemas

2008-11-29 Thread [EMAIL PROTECTED]

Hi
I was wondering if there was a way to reflect all schemas in the
metadata, or get a list of schemas in the database with out querying
the catalog for postgresql.

Regards, Jar



[sqlalchemy] Re: metadata reflecting all schemas

2008-11-29 Thread [EMAIL PROTECTED]

Or maybe this is my problem

/home/jchesney/workspace/sqlalchemy/lib/sqlalchemy/engine/base.py:
1237: SAWarning: Did not recognize type 'name' of column 'USERNAME'
  self.dialect.reflecttable(conn, table, include_columns)
/home/jchesney/workspace/sqlalchemy/lib/sqlalchemy/engine/base.py:
1237: SAWarning: Did not recognize type 'name' of column 'SCHEMA_NAME'
  self.dialect.reflecttable(conn, table, include_columns)
/home/jchesney/workspace/sqlalchemy/lib/sqlalchemy/engine/base.py:
1237: SAWarning: Did not recognize type 'name' of column 'TABLE_NAME'
  self.dialect.reflecttable(conn, table, include_columns)
/home/jchesney/workspace/sqlalchemy/lib/sqlalchemy/engine/base.py:
1237: SAWarning: Did not recognize type 'name' of column 'FIELD_NAME'
  self.dialect.reflecttable(conn, table, include_columns)

I use the 'name' column data type for my columns.
It's a postgresql database.

engine = create_engine('postgres://.')
metadata = MetaData()
metadata.reflect(engine, '')

for t in metadata.tables.values():
    print t.fullname
    for c in t.columns:
        print "\t" + c.name

I get a partial listing which has two different schemas in it. Is the
reflect bombing out when it hits the above errors?
If so, I will need to try to get sqlalchemy to accept this field
type.

Regards, Jar

On Nov 29, 10:34 pm, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
 Hi
 I was wondering if there was a way to reflect all schemas in the
 metadata, or get a list of schemas in the database with out querying
 the catalog for postgresql.

 Regards, Jar



[sqlalchemy] Re: metadata reflecting all schemas

2008-11-29 Thread [EMAIL PROTECTED]

Thanks Michael

If it's just a warning and it's supposed to continue past it, why
doesn't it finish reflecting all the tables in all the schemas instead
of a few tables in two schemas?
I think it retrieved all the tables in the first schema which I
specified and followed the foreign keys to retrieve the metadata for
the second schema's tables.

Any suggestions on how I can reflect a list of schemas or make it
reflect all the schemas? It didn't like '%' as the schema name.

On Nov 30, 4:42 am, Michael Bayer [EMAIL PROTECTED] wrote:
 the type is not recognized but the reflection operation should  
 succeed.  thats why you're only getting a warning on those.

 On Nov 29, 2008, at 8:00 AM, [EMAIL PROTECTED] wrote:



  Or maybe this is my problem

  /home/jchesney/workspace/sqlalchemy/lib/sqlalchemy/engine/base.py:
  1237: SAWarning: Did not recognize type 'name' of column 'USERNAME'
   self.dialect.reflecttable(conn, table, include_columns)
  /home/jchesney/workspace/sqlalchemy/lib/sqlalchemy/engine/base.py:
  1237: SAWarning: Did not recognize type 'name' of column 'SCHEMA_NAME'
   self.dialect.reflecttable(conn, table, include_columns)
  /home/jchesney/workspace/sqlalchemy/lib/sqlalchemy/engine/base.py:
  1237: SAWarning: Did not recognize type 'name' of column 'TABLE_NAME'
   self.dialect.reflecttable(conn, table, include_columns)
  /home/jchesney/workspace/sqlalchemy/lib/sqlalchemy/engine/base.py:
  1237: SAWarning: Did not recognize type 'name' of column 'FIELD_NAME'
   self.dialect.reflecttable(conn, table, include_columns)

  I use the 'name' column data type for my columns.
  Its a postgresql database.

     engine = create_engine('postgres://.')

     engine
     metadata = MetaData()
     metadata.reflect(engine,'')

     for t in metadata.tables.values():
         print t.fullname
         for c in t.columns:
             print \t + c.name

  I get a partial listing which as two different schemas in it. Is the
  reflect bombing out when it hits the above errors?
  If so, I will need to try and get sqlalchemy to accept this field
  type.

  Regards, Jar

  On Nov 29, 10:34 pm, [EMAIL PROTECTED]
  [EMAIL PROTECTED] wrote:
  Hi
  I was wondering if there was a way to reflect all schemas in the
  metadata, or get a list of schemas in the database with out querying
  the catalog for postgresql.

  Regards, Jar



[sqlalchemy] offline metadata configuration, change storage and metadata snapshot.

2008-11-28 Thread [EMAIL PROTECTED]

Hi All
I'm writing a program for graphically configuring database schemas.
To do this, I want to use the sqlalchemy schema.py objects to store my
metadata as it's changed and modified.

To do this, I think I will need three things:
1. Apply the metadata changes to a development database, as this is
the easiest way to ensure integrity while reconfiguring the structure.
2. Store incremental changes to the metadata.
3. Snapshot the metadata to a file so it can be restored.

Does SQL Alchemy currently have any facilities to help me achieve
items 2 and 3?

Regards, Jar




[sqlalchemy] Re: offline metadata configuration, change storage and metadata snapshot.

2008-11-28 Thread [EMAIL PROTECTED]

Thanks.
I thought of using pickle, but it says not everything can be pickled, so
I was concerned the metadata objects might be in the unlucky group.
The other problem I have with pickle is what happens when I upgrade to
a newer version of SQLAlchemy: some of the metadata objects might
change. How does pickle handle this?
My other idea instead of pickle was to write code to map it out and
store it in SQLite or XML; then I could patch it so it could be
restored into the newer version of SQLAlchemy.

Thanks heaps again for the help,
Regards, Jar

On Nov 29, 1:56 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Nov 28, 2008, at 6:58 AM, [EMAIL PROTECTED] wrote:



  Hi All
  I'm writing a program for graphically configuring database schemas.
  To do this, I want to use the sqlalchemy schema.py objects to store my
  metadata as its changed and modified.

  To do this, I think i will need three things.
  1. Apply the meta data changes to a development database, As this is
  the easiest way to ensure integrity as you are reconfiguring the
  structure.
  2. Store incremental changes to the metadata
  3. Snapshot the metadata to a file so it can be restored.

  Does SQL Alchemy currently have any facilities to help me achieve
  items 2 and 3?

 For #1 and #2, you want to use sqlalchemy-migrate at
 http://code.google.com/p/sqlalchemy-migrate/ .  For #3, the metadata
 object can be pickled, and for more sophisticated serialization of
 expression and ORM constructs there is also a serializer extension in
 0.5, documented at
 http://www.sqlalchemy.org/docs/05/plugins.html#plugins_serializer .
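A minimal sketch of the pickling part of #3 (an unbound MetaData; the filename is illustrative):

import pickle

# snapshot the schema to a file
f = open('schema_snapshot.pickle', 'wb')
pickle.dump(metadata, f)
f.close()

# ...and restore it later
f = open('schema_snapshot.pickle', 'rb')
restored_metadata = pickle.load(f)
f.close()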



[sqlalchemy] Re: New instance ExtraStat with identity key (...) conflicts with persistent instance ExtraStat

2008-11-28 Thread [EMAIL PROTECTED]

What was your justification of changing the name of my thread to a
completely different topic instead of starting a new thread?

I don't think that's good etiquette.


On Nov 29, 5:22 am, Doug Farrell [EMAIL PROTECTED] wrote:
 Hi all,

 I'm having a problem with a new instance of a relation conflicting with
 an existing instance. I'm using SA 0.5rc with Sqlite3. Here are my
 simplified classes:

 class Stat(sqladb.Base):
     __tablename__ = "stats"
     name       = Column(String(32), primary_key=True)
     total      = Column(Integer)
     created    = Column(DateTime, default=datetime.datetime.now())
     updated    = Column(DateTime)
     states     = Column(PickleType, default={})
     extraStats = relation("ExtraStat", backref="stat")

 class ExtraStat(sqladb.Base):
     __tablename__ = "extrastats"
     name    = Column(String(32), ForeignKey("stats.name"),
                      primary_key=True)
     total   = Column(Integer)
     created = Column(DateTime, default=datetime.datetime.now())
     updated = Column(DateTime)
     states  = Column(PickleType, default={})

 The above Stat class has a one-to-many relationship with the ExtraStat
 class (which I think I've implemented correctly). Later in the program I
 create an in-memory data model that has, as part of its components, two
 dictionaries that contain Stat instances. Those Stat instances have
 relationships to ExtraStat instances. My problem comes in the following
 when I'm trying to update the data in those instances/tables. Here is a
 section of code that throws the exception:

 pressName = "press%s" % pressNum
 # add new ExtraStat instances as relations
 self._addProductStatsPress(productType, pressName)
 self._addPressStatsProduct(pressName, productType)
 try:
     extraStat = session.query(Stat). \
         filter(Stat.name == productType). \
         join("extraStats"). \
         filter(ExtraStat.name == pressName).one()
 except:
     extraStat = ExtraStat(pressName, ExtraStat.PRESS_TYPE)
     self.productStats[productType].extraStats.append(extraStat)
     extraStat.states.setdefault(sstate, 0)
     extraStat.states[sstate] += 1
     extraStat.updated = now
     extraStat = session.merge(extraStat)
 try:
     extraStat = session.query(Stat). \
         filter(Stat.name == pressName). \
         join("extraStats"). \
         filter(ExtraStat.name == productType).one()   # <-- throws exception right here
 except:
     extraStat = ExtraStat(productType, ExtraStat.PRODUCT_TYPE)
     self.pressStats[pressName].extraStats.append(extraStat)
     extraStat.states.setdefault(sstate, 0)
     extraStat.states[sstate] += 1
     extraStat.updated = now
 The marked area is where it throws the exception. I'm not sure what to do
 here to get past this; any help or ideas would be greatly appreciated.

 The exact exception is as follows:
 sqlalchemy.orm.exc.FlushError: New instance [EMAIL PROTECTED] with identity
 key (class '__main__.ExtraStat', (u'C',)) conflicts with persistent
 instance [EMAIL PROTECTED]

 Thanks!
 Doug



[sqlalchemy] Re: Info needed regarding the use of cascade

2008-11-20 Thread --- [EMAIL PROTECTED] ---

I got you now
Thank you Simon



[sqlalchemy] Re: Info needed regarding the use of cascade

2008-11-18 Thread --- [EMAIL PROTECTED] ---


Thank you Michael ,

 you only need a single relation() + backref(), books <-> stock.

did you mean like this ?


class Stock(declarative_base):
    __tablename__ = 'tbl_stock'

    pass

class Book(declarative_base):
    __tablename__ = 'tbl_books'

    stock = relation('Stock',
                     backref=backref('tbl_books', order_by=id))


If so, how can I retrieve all the books in a particular stock?
In my case I could have done it by:

>>> ins_stock = session.query(Stock).filter_by(id=100).one()
>>> print ins_stock.books
[<book1 object>, <book2 object>, ...]



[sqlalchemy] Info needed regarding the use of cascade

2008-11-15 Thread --- [EMAIL PROTECTED] ---

I am a newbie and encountered a small problem;
any help will be appreciated.

class Stock(declarative_base):
    __tablename__ = 'tbl_stock'

    books = relation(Book, order_by=Book.id,
                     backref="tbl_stock")

    pass

class Book(declarative_base):
    __tablename__ = 'tbl_books'

    stock = relation('Stock', backref=backref('tbl_books',
                     order_by=id))

## created a stock and added 3 books to the stock
ins_stock = Stock()
ins_book_1 = Book('Book1')
ins_book_2 = Book('Book2')
ins_book_3 = Book('Book3')
ins_stock.books = [ins_book_1, ins_book_2, ins_book_3]

# saving ...
session.add(ins_stock)
flush()

# Later, when I update the stock, e.g. removing book1 and adding book4
# (book1 exists in the db):

ins_stock.books.remove(ins_book_1)
ins_book_4 = Book('Book4')
ins_stock.books.append(ins_book_4)

flush()

In the table, the row corresponding to book1 remains there.
What should I do to tell sqlalchemy to automatically remove orphan
books?
I have tried giving cascade='all, delete-orphan', but errors come up.
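For reference, a hedged sketch of how the delete-orphan cascade is usually set up (0.5-style declarative; the Base, column and backref names here are illustrative, not the exact models above):

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relation
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Stock(Base):
    __tablename__ = 'tbl_stock'
    id = Column(Integer, primary_key=True)

    # delete-orphan belongs on the parent-side relation; removing a Book
    # from stock.books then deletes that row at flush time
    books = relation("Book", backref="stock",
                     cascade="all, delete-orphan")

class Book(Base):
    __tablename__ = 'tbl_books'
    id = Column(Integer, primary_key=True)
    stock_id = Column(Integer, ForeignKey('tbl_stock.id'))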

thanks and regards
[EMAIL PROTECTED]
Calicut,India




[sqlalchemy] Re: DB Operational Error

2008-11-04 Thread [EMAIL PROTECTED]



On Nov 4, 3:36 am, Raoul Snyman [EMAIL PROTECTED] wrote:
 Hi Michael,

 On Nov 2, 1:07 am, Michael Bayer [EMAIL PROTECTED] wrote:

  look into the pool_recycle option described 
  athttp://www.sqlalchemy.org/docs/05/dbengine.html

 I'm also getting these errors, and I have pool_recycle, pool_size and
 max_overflow set. I'm using Pylons, SQLAlchemy 0.5 and MySQLdb 1.2.2

 # lines from my Pylons ini file
 sqlalchemy.default.pool_recycle = 3600
 sqlalchemy.default.pool_size = 32
 sqlalchemy.default.max_overflow = 1024


You have a recycle time of 1 hour (3600 seconds).  That is usually
short enough.  I've encountered situations where a firewall closed
idle connections after 30 minutes, leading to server has gone away
errors.  Could the MySQL configuration be set to an idle timeout that
is less than an hour?  Could you somehow be opening connections from
outside the pool?  (I did that to myself when trying to add some
conversion functions to the MySQLdb module.)

 I think that the reason this happens is because this is not a stateful
 app, but rather a state-less web site. Every request to the site is a
 full but isolated execution of the app, and this makes me think that
 SQLAlchemy doesn't really have a Pool to work with.


Then you'd be starting fresh every time and there would be no
opportunity for a stale connection.

 I'm rather flummoxed on this one. No one seems to have an answer other
 than pool_recycle - which is not working.

You could try cutting the recycle value down to 60 seconds.  You need a
recycle time that is less than the idle timeout.  If 60 seconds fixes
the problem, then you know that you have a short idle timeout setting
elsewhere.
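A quick sketch of the same idea applied directly to create_engine() (the URL is illustrative; these are the same options as in the ini file above, only with the shorter recycle):

from sqlalchemy import create_engine

engine = create_engine(
    "mysql://user:password@dbhost/mydb",
    pool_recycle=60,      # recycle well inside any idle timeout in effect
    pool_size=32,
    max_overflow=1024,
)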

 Kind regards,

 Raoul Snyman



[sqlalchemy] infinity with mysql backend

2008-10-09 Thread [EMAIL PROTECTED]

I have a sqlalchemy table with a float column, and I would like to be
able to store +/- infinity.  I am using numpy, and have access to
np.inf.  However, if I try and store this value, I get

  OperationalError: (OperationalError) (1054, Unknown column
'Infinity' in 'field list')

Is there a way to store infinity using sqlalchemy with a mysql
backend?

In [128]: sa.__version__
Out[128]: '0.5.0beta4'

[EMAIL PROTECTED]:~ mysql --version
mysql  Ver 12.22 Distrib 4.0.24, for pc-solaris2.10 (i386)




[sqlalchemy] Using domain object instead of Table as first argument of select function

2008-07-03 Thread [EMAIL PROTECTED]

Hi everybody,

I am working on my graduate (final :)) exam, and SQLAlchemy is the main
topic.

I was trying an example from the ORM tutorial:

>>> from sqlalchemy import select, func
>>> session.query(User).from_statement(
...     select(
...         [users_table],
...         select([func.max(users_table.c.name)]).label('maxuser') == users_table.c.name)
...     ).all()
[<User('wendy','Wendy Williams', 'foobar')>]

And this looks too ugly to me, and not only that, I think it is not
O.K. to use tables and columns in the ORM (at least I should have an
alternative way of doing it without table and column objects).

So I tried something like this (in my example User is Client):

>>> session.query(Client).from_statement(
...     select(
...         [Client],
...         select([func.max(Client.name)]).label('max_name') == Client.name)
...     ).all()

but it is not working; the problem is the first argument of select().

Then I tried Client.c as the first argument, but without success.

Only this works:

>>> session.query(Client).from_statement(
...     select(
...         Client.c._data.values(),
...         select([func.max(Client.name)]).label('max_name') == Client.name)
...     ).all()

but it is such an ugly hack.

Is there a better way of doing this?

Sorry for my bad English :)
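For what it's worth, a sketch of the same query without any table objects, assuming a much newer SQLAlchemy (1.4 or later, where Query.scalar_subquery() exists) rather than the 0.4/0.5 series discussed here:

from sqlalchemy import func

max_name = session.query(func.max(Client.name)).scalar_subquery()
clients = session.query(Client).filter(Client.name == max_name).all()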





[sqlalchemy] Searching in related tables

2008-07-02 Thread [EMAIL PROTECTED]

Hi.

I have clients(table) with address(related one-to-one) and
persons(related one-to-many)..

I want to do this:
results =
ses.query(b.Client).filter(b.Client.address_id==b.Address.id).\
filter(b.Client.id==b.Client_person.client_id).filter(or_(*cols)).all()

Using only:
results = ses.query(b.Client).filter(or_(*cols)).all()

where *cols is for example:
b.Address.name.like('%address%')
or
b.Client_person.surname.like('%surname%')
...

When I do not specify the join filter, I get all companies...
With join filter, everything works fine.

Is there a way to avoid specifying the join filters explicitly?
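A sketch of letting Query.join() supply those conditions from the mapped relations instead (assuming the relation properties are named 'address' and 'persons'; reset_joinpoint() returns the join point to Client between the two joins):

results = (ses.query(b.Client)
              .join('address')
              .reset_joinpoint()
              .join('persons')
              .filter(or_(*cols))
              .all())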

thx



[sqlalchemy] How to use a custom collection class instead of InstrumentedList?

2008-07-02 Thread [EMAIL PROTECTED]

SQLAlchemy creates the relationship as a collection on the parent
object containing instances of the child object. I think the
collection is an instance of
sqlalchemy.orm.collections.InstrumentedList.

I want to know how to use my own list-like class instead of the
InstrumentedList.
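For reference, a sketch of the usual hook for this, the collection_class argument of relation() (Parent, Child and parent_table are illustrative names, not from this post):

from sqlalchemy.orm import mapper, relation

class MyList(list):
    """plain list subclass used as the collection type"""

mapper(Parent, parent_table, properties={
    'children': relation(Child, collection_class=MyList)
})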






[sqlalchemy] Re: How to use a custom collection class instead of InstrumentedList?

2008-07-02 Thread [EMAIL PROTECTED]

I've read the section 'Alternate Collection Implementations' in the
documentation just now.
Sorry for my carelessness.

On 7月3日, 上午11时57分, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 SQLAlchemy creates the relationship as a collection on the parent
 object containing instances of the child object. I think the
 collection is an instance of
 sqlalchemy.orm.collections.InstrumentedList.

 I want to know how to use my own list-like class in stead of the
 InstrumentedList.



[sqlalchemy] Searching in all fields

2008-06-27 Thread [EMAIL PROTECTED]

Hi.

I want to write a robust algorithm for searching in tables... the
simplest example is a table with no relations:

stmt = u'SELECT * FROM '
stmt += str(b.clients.name)
stmt += ' WHERE '
for c in b.Client.c:
    stmt += str(c) + ' like \'%value%\' or '

clients = session.query(Client).from_statement(stmt).all()

There is one big problem using the '%' sign, because python uses it to
replace values in strings, like:
'Welcome %s to my site' % 'john'

Afterwards I want to search in tables with relations, like:

session.query(Client).add_entity(Address)..

Can anyone help me with this problem?
What is the sqlalchemy way to make multisearch ??

Thx in advance
m_ax



[sqlalchemy] Re: Searching in all fields

2008-06-27 Thread [EMAIL PROTECTED]

1)
multisearch... I meant that I want to create a piece of code that
will automatically search in all columns of a table,
so that I can use this function (or whatever it will be) for different
tables with no change.
For example:
I have a client (table) with address (related table, one-to-one) and
persons (related table, one-to-many) and call:
clients = my_function(clients_table, 'anna')
to return all clients from the database that 'anna' appears in.

2)
How can I generate the fields in the or_() statement?

thx

On Jun 27, 3:41 pm, [EMAIL PROTECTED] wrote:
 what is multisearch? sort of pattern matching?
 http://www.sqlalchemy.org/docs/05/ormtutorial.html#datamapping_queryi...

 query(A).filter( or_(
 A.c1.startswith(),
 A.c2.endswith(),
 A.c3.like('%alabal%'),
 ...
   ))
 you can generate the arguments of or_(...)
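A sketch of generating those or_() arguments for every column of a table (my_function is the hypothetical helper from point 1; it assumes the mapped class and its Table are both at hand):

from sqlalchemy import or_

def my_function(session, mapped_class, table, value):
    # one LIKE condition per column; bound parameters mean the literal
    # '%' characters never meet Python string interpolation
    pattern = '%' + value + '%'
    conditions = [c.like(pattern) for c in table.columns]
    return session.query(mapped_class).filter(or_(*conditions)).all()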

 On Friday 27 June 2008 16:12:57 [EMAIL PROTECTED] wrote:

  Hi.

  I want to do robust algorithm for searching in tables...the
  simplest example is with table with no relations:

  stmt = u'SELECT * FROM '
  stmt += str(b.clients.name)
  stmt += ' WHERE '
  for c in b.Client.c:
stmt += str(c)+' like \'%value%\' or '

  clients = session.query(Client).from_statement(stmt).all()

  There is one big problem using the '%' sign, because python is
  using it to replace values in string like:
  'Welcom %s to my site' % 'john'

  Afterwards I want to search in tables with relations, like:

  session.query(Client).add_entity(Address)..

  Can anyone help me with this problem?
  What is the sqlalchemy way to make multisearch ??

  Thx in advance
  m_ax



[sqlalchemy] New istance in one-to-one relationship

2008-06-25 Thread [EMAIL PROTECTED]

Hi.

I'm trying to insert new data into the db using a one-to-one relationship,
but I'm getting this error:
sqlalchemy.exceptions.OperationalError: (OperationalError) (1048,
Column 'address_id' cannot be null) u'INSERT INTO companies
(address_id, company, ico, dic, bank_account) VALUES (%s, %s, %s, %s,
%s)' [None, u'Vnet a.s.', u'2332521351', u'SK234623513',
u'132412153/0900']

Here is the code:
class Address(Template):
pass
class Client(Template):
pass

addresses = Table('addresses', metadata, autoload=True)
clients =   Table('clients', metadata,
Column('address_id', Integer,
ForeignKey('addresses.id')),
autoload=True)

orm.mapper(Client, clients, properties={
'address': orm.relation(Address, backref=backref('client',
uselist=False)) })

ses = SQLSession()
client = Client(**client_data)
address = Address(**address_data)
client.address = address
ses.save(client)
ses.commit()
ses.close()

The problem is that sqlalchemy does not set the 'address_id' column
in the 'clients' table.
What is the sqlalchemy way to do this?

I was able to do it this way:
ses = SQLSession()
client = Client(address_id=0, **client_data)
ses.save(client)
ses.commit()
ses.rollback()
client.address = b.Address(**address_data)
ses.commit()
ses.close()

Thanks
Pavel




[sqlalchemy] 2 Many to many relations with extra-columns - How to for a newb

2008-05-12 Thread [EMAIL PROTECTED]

Hello All,
I am trying to understand how to use SA and need some help.

I have several tables with 2 many-to-many relations with extra columns
and 1 only with foreign keys.
See below for the definitions of tables and mappers. I also created
classes for all tables (entities and associations).

1) For the association without extra-column (self.correspond), no
problem.
I can add questions and categories. For instance:
q=Question(question='blabla')
c=Category('cat1')
q.categories.append(c)
session.save(q)
session.commit()

2) For the 2 other which have extra-columns, I don't understand how to
manage.
For info, these 2 associations relate to both the users and the
questions tables.
For instance, how can I add a question related to a user, ie go
through the ask relation ?
I went through the excellent documentation but I have to admit that I
don't understand...

Can somebody :
- check my mappers are well defined (those with extra columns:
askMapper and answerMapper and also questMapper)
- briefly explain how to handle operations between the users and
questions tables through these mappers (see the sketch after the
mapper definitions below)
I'm hoping it is clear enough

Thanks a lot in advance for your help
Dominique


Tables and relations are as follows:

# Entities
self.users = Table('users', self.metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', Unicode(25), unique=True))

self.categories = Table('categories', self.metadata,
    Column('categ_id', Integer, primary_key=True),
    Column('categ_name', Unicode(250), unique=True))  # add unique

self.questions = Table('questions', self.metadata,
    Column('quest_id', Integer, primary_key=True),
    Column('question', Unicode(300)))

# Associations
self.correspond = Table('categories_questions', self.metadata,
    Column('quest_id', Integer, ForeignKey('questions.quest_id'), primary_key=True),
    Column('categ_id', Integer, ForeignKey('categories.categ_id'), primary_key=True))

self.ask = Table('ask', self.metadata,
    Column('user_id', Integer, ForeignKey('users.user_id'), primary_key=True),
    Column('quest_id', Integer, ForeignKey('questions.quest_id'), primary_key=True),
    Column('data1', Integer, nullable=False, default=50))

self.answer = Table('answer', self.metadata,
    Column('user_id', Integer, ForeignKey('users.user_id'), primary_key=True),
    Column('quest_id', Integer, ForeignKey('questions.quest_id'), primary_key=True),
    Column('data2', Integer),
    ForeignKeyConstraint(['user_id', 'quest_id'],
                         ['ask.user_id', 'ask.quest_id']))

# mappers
self.userMapper = mapper(User, self.users)
self.categMapper = mapper(Category, self.categories)

self.questMapper = mapper(Question, self.questions, properties={
    # ManyToMany CorrespondAssociation between questions and categories
    'categories': relation(Category, secondary=self.correspond, backref='questions'),

    # ManyToMany AskAssociation between questions and users
    'users': relation(AskAss, backref='questions'),

    # ManyToMany AnswerAssociation between questions and users
    'users': relation(AnswerAss, backref='questions')
})

self.askMapper = mapper(AskAss, self.poser, properties={
    # Ask Association between questions and users
    'users': relation(User, backref='ask')
})

self.answerMapper = mapper(AnswerAss, self.answer, properties={
    # ManyToMany AnswerAssociation between questions and users
    'users': relation(User, backref='answer')
})
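A hedged sketch of how the association-object pattern is usually driven with mappers like these. Note that questMapper defines 'users' twice, so the second entry overwrites the first; the sketch assumes the two relations are renamed to hypothetical 'asks' and 'answers' attributes on Question:

# assumes questMapper was changed to use distinct keys, e.g.
#   'asks':    relation(AskAss,    backref='question'),
#   'answers': relation(AnswerAss, backref='question'),

u = User()
u.user_name = u'dominique'

q = Question()
q.question = u'blabla'

assoc = AskAss()         # the association row carrying the extra column
assoc.data1 = 75
assoc.users = u          # many-to-one to the user, per askMapper
q.asks.append(assoc)     # attach the association to the question

session.save(q)          # 0.4-style; cascades to the association and the user
session.commit()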




[sqlalchemy] Hey, try out Flock

2008-04-29 Thread [EMAIL PROTECTED]
Hello,I'm emailing you about Flock.  I'm now using Flock as my default browser, 
and I love what it's done for my whole Web experience.  Flock is a social web 
browser that uniquely pulls together the people, photos, videos and websites I 
care about.  Check it out, I think you're really going to like it.You can 
download it for free at  http://www.flock.com/invited/1209451183 Enjoy it!





[sqlalchemy] Access to the attributes of a session object (newbie)

2008-04-11 Thread [EMAIL PROTECTED]

Hello All,

I have a session object, for instance:
my_object = session.query(Person).filter_by(name='MYSELF').first()

I know I can access to its attributes to modify the database:
my_object.name = 'YOURSELF'
my_object.town = 'PARIS'

Is there another way to access its attributes?

The point is that I have a dictionary that records all values of
several textcontrols (with wxPython):
my_dict = {'name':'YOURSELF', 'town':'PARIS'} etc

I would like to link (with a loop) the keys of this dictionary with
the attributes of the session object, without having to write out all
the attributes again (my_object.name = my_dict['name'], etc.), since
there are a lot of them.

I tried to loop like this without success:
for (key, value) in my_dict.iteritems():
    my_object.key = value   # my_object[key] = value doesn't work either
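A sketch of the usual plain-Python answer (setattr; nothing SQLAlchemy-specific, and the ORM's instrumented attributes still see the assignment):

for key, value in my_dict.iteritems():
    setattr(my_object, key, value)   # same as my_object.<attribute> = value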

Thanks in advance for any hints and sorry for the low level of my
question !
Dominique



[sqlalchemy] Re: Access to the attributes of a session object (newbie)

2008-04-11 Thread [EMAIL PROTECTED]

Magic ! It works perfectly.

Thanks a lot Michael for your help and all your work.

Dominique



[sqlalchemy] Re: invalid byte sequence for encoding:utf8 Postgresql

2008-02-14 Thread [EMAIL PROTECTED]

Solved... my fault... the file I'm reading was latin1, and I was
using the standard open().
Now I use:
self.in_file = codecs.open(self.filename, "r", "latin1")
After that it worked fine...
thanks
On 13 Feb, 16:50, Michael Bayer [EMAIL PROTECTED] wrote:
 On Feb 13, 2008, at 2:28 AM, [EMAIL PROTECTED] wrote:





  Hello, I had a postgresql database:
  CREATE DATABASE panizzolosas
   WITH OWNER = postgres
ENCODING = 'UTF8';

  and i'm using sqlalchemy 0.4.2p3.
  this is my code
  self.metadata=MetaData()

  engine = create_engine(stringaDATABASE, encoding='utf-8',
  echo=False,convert_unicode=True)

  self.metadata.bind= engine

  try:

 table_ditta=Table('tblditta', self.metadata, autoload=True)

 mapper(Ditta, table_ditta)

  except :

 print Error

  On the database I had some record with the caracter à and if I make
  some updates I receive the error

  ProgrammingError: (ProgrammingError) invalid byte sequence for
  encoding UTF8: 0xe03537
  HINT:  This error can also happen if the byte sequence does not match
  the encoding expected by the server, which is controlled by
  client_encoding.
  'UPDATE tblditta SET codice=%(codice)s WHERE tblditta.id = %
  (tblditta_id)s' {'tblditta_id': 592, 'codice': 'Cibra Publicit
  \xe0577'}

  \xe0577 is à I suppose..

 would need to see the code you're using to insert data.  Also, set
 assert_unicode=True on your create_engine() call; that will
 illustrate non-unicode strings being passed into the dialect.  When
 using convert_unicode=True at the engine level, *all* strings must be
 python unicode strings, i.e. u'somestring'.
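A sketch of that suggestion (assert_unicode and convert_unicode are 0.4-era create_engine() options; the URL is illustrative), plus the codecs.open() fix from the follow-up above:

import codecs
from sqlalchemy import create_engine

engine = create_engine('postgres://user:pass@localhost/panizzolosas',
                       encoding='utf-8',
                       convert_unicode=True,
                       assert_unicode=True)   # flag non-unicode strings hitting the dialect

# decode the latin1 input file before handing values to the ORM
in_file = codecs.open(filename, "r", "latin1")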



[sqlalchemy] invalid byte sequence for encoding:utf8 Postgresql

2008-02-13 Thread [EMAIL PROTECTED]

Hello, I had a postgresql database:
CREATE DATABASE panizzolosas
  WITH OWNER = postgres
   ENCODING = 'UTF8';

and I'm using sqlalchemy 0.4.2p3.
This is my code:

self.metadata = MetaData()

engine = create_engine(stringaDATABASE, encoding='utf-8',
                       echo=False, convert_unicode=True)

self.metadata.bind = engine

try:
    table_ditta = Table('tblditta', self.metadata, autoload=True)
    mapper(Ditta, table_ditta)
except:
    print "Error"


On the database I had some record with the caracter à and if I make
some updates I receive the error

ProgrammingError: (ProgrammingError) invalid byte sequence for
encoding UTF8: 0xe03537
HINT:  This error can also happen if the byte sequence does not match
the encoding expected by the server, which is controlled by
client_encoding.
 'UPDATE tblditta SET codice=%(codice)s WHERE tblditta.id = %
(tblditta_id)s' {'tblditta_id': 592, 'codice': 'Cibra Publicit
\xe0577'}

\xe0577 is à I suppose..

Any help would be appreciated.
Thanks..
Bye Emyr




[sqlalchemy] Re: Unique ID's

2008-01-23 Thread [EMAIL PROTECTED]

Thanks guys for your help I'm going to give Hermanns methods a go.

Morgan

Hermann Himmelbauer wrote:
 Am Montag, 21. Januar 2008 01:16 schrieb Morgan:
   
 Hi Guys,

 I have field that I want to put a unique identifier in. This unique Id i
 would like to be a composite key or simply a random number. What do you
 guys suggest for this, is there a particular method which works well for
 some of you?
 

 That's a good question, I asked myself some weeks ago, here's how I solved 
 this: 

 In my case, I have database records that have sequential numbers as primary 
 keys. These keys can be calculated by the database and are unique by design 
 (as the primary index is unique).

 This record should hold another field, which should also be unique and in
 the form of an 8-digit number. However, I'd rather not want this number to
 be sequential; it should look random. The first way would have been to
 simply generate a number via random.randint(), look into the database to
 see if it's already in, and if not, insert it. However, to guarantee that
 the number is unique, one should create a unique index on this column. In
 case the number is already there, the database will raise an error, which
 has to be caught by the application. Another way would be to lock the table
 around the select, so that the rare case where another application instance
 inserts the same number after my select is avoided. So, the algorithm could
 look like this (in pseudo code):

 # Variant 1 with exception handling
 while 1:
   num = random.randint()
   try:
     insert into db_table (col1, col2, col_num, col3, ) % num
   except UniqueNum_IndexViolated:
     continue
   else:
     break

 # Variant 2 with locking
 while 1:
   num = random.randint()
   lock db_table
   result = select * from db_table where col_num = num
   if result:
     unlock db_table
     continue
   else:
     insert into db_table (col1, col2, col_num, col3, ) % num
     unlock db_table
     break

 My problem with variant (2) was that I could not find out how to lock a
 whole table with SQLAlchemy; moreover, each insert needs a table lock and a
 select, which is bad performance-wise. The problem with (1) was that I did
 not know how to catch this specific exception, as I can't simply catch any
 database error, only this specific index violation (which may be different
 on different databases).

 My third idea, which I use now, is to calculate my random number out of my 
 sequential, unique primary index, which is generated by the database during 
 the insert. One helpful guy from #sqlalchemy helped me out with 
 the randomization of the sequential number with this algorithm:

 def mk_arb_seq(id):
     """Return an arbitrary number. This number is calculated out of
     the given id. For that, it is multiplied by the large prime number A.
     Then a modulo operation with prime M, where M < A, is applied. If A is
     chosen as a non-prime, the sequence is not very arbitrary,
     therefore a prime is recommended."""

     M = 9989
     A = 2760727302517

     return str((A * id) % M).zfill(len(str(M)))

 The last problem with this is that I have no real mathematical proof for
 the algorithm (i.e. that no two ids map to the same number). However, I
 simply tested this with a little program and it seems to work.

 If you use the ORM, don't forget to do a session.flush() after adding the 
 object to the session, as this will calculate the primary index. Then you can 
 simply set col_num = mk_arb_seq(primary_index).

 Best Regards,
 Hermann


   





[sqlalchemy] Filter by year in datetime column

2008-01-18 Thread [EMAIL PROTECTED]

Hello, please, I have a beginner problem and question:

In the table (the database is sqlite) there is a column for the create date
(create_date = Field(DateTime, default=datetime.now)).

I need to query the table for all items whose create date has the year
2007.

Is this the right way? (this doesn't work)
data = Table.query().filter(func.year(Mikropost.c.create_date) ==
2008)
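A sketch of the usual alternative, since there is no year() function on SQLite: SQLAlchemy's extract() (the Mikropost.query() call just mirrors the style used above; whether the backend compiles EXTRACT is an assumption):

from sqlalchemy import extract

data = Mikropost.query().filter(
    extract('year', Mikropost.c.create_date) == 2007).all()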




[sqlalchemy] Re: Schema display

2008-01-07 Thread [EMAIL PROTECTED]

I tried to import BoundMetaData, which has become MetaData, and I'm
getting an import error:

ImportError: No module named sqlalchemy_schemadisplay

So I was wondering if this has moved out of MetaData or been renamed.

Morgan

Michael Bayer wrote:
 where its always been...

 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SchemaDisplay


 On Jan 5, 2008, at 7:58 PM, [EMAIL PROTECTED] wrote:

   
 Hi Guys,

 I was wondering where the function create_schema_graph has gone, or  
 what
 it has changed to. Any assistance would be appreciated.

 Let me know,
 Morgan

 


 
   





[sqlalchemy] Schema display

2008-01-05 Thread [EMAIL PROTECTED]

Hi Guys,

I was wondering where the function create_schema_graph has gone, or what 
it has changed to. Any assistance would be appreciated.

Let me know,
Morgan




[sqlalchemy] Re: filter_by() related table columns

2007-12-28 Thread [EMAIL PROTECTED]

 there's a certain magical behavior in 0.3 which we've removed in
 filter_by(), which is that when you say description it searches
 downwards through orderstatus to find it.  0.4 wants you to be
 explicit and say
 session.query(PurchaseOrder).join('orderstatus').filter_by(description='Shipped').all().

Is there any way to turn this magic back on in the 0.4 release, or has
it totally been removed? We have quite a few existing queries that
search by related table columns, and it would be quite an undertaking
to change all of these. Thanks for your help.




[sqlalchemy] filter_by() related table columns

2007-12-27 Thread [EMAIL PROTECTED]

I am attempting to  upgrade from sqlalchemy 0.3.11 to current release
0.4.1 and i am getting the following error:

recs = session.query(PurchaseOrder).filter_by(description='Shipped').all()
  File "C:\Python24\Lib\site-packages\sqlalchemy\orm\query.py", line 322, in filter_by
    clauses = [self._joinpoint.get_property(key, resolve_synonyms=True).compare(operator.eq, value)
  File "C:\Python24\Lib\site-packages\sqlalchemy\orm\mapper.py", line 192, in get_property
    raise exceptions.InvalidRequestError("Mapper '%s' has no property '%s'" % (str(self), key))
InvalidRequestError: Mapper 'Mapper|PurchaseOrder|purchaseorder' has no property 'description'
I am trying to query a column in a related table with the following
query:
recs = session.query(PurchaseOrder).filter_by(description =
'Shipped').all()

This query works in 0.3.11.

Is there a new setting when defining relationships that I need to set
so that it looks at the related table columns when running a filter_by,
or am I missing something simple?

Here is my simplified example classes, table definitions, and
mappings:

class PurchaseOrder(object) : pass
class OrderStatus(object) : pass

purchaseorder_table = sqlalchemy.Table(
    'purchaseorder', metadata,
    sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True),
    sqlalchemy.Column('createdate', sqlalchemy.TIMESTAMP),
    sqlalchemy.Column('statusid', sqlalchemy.Integer,
                      sqlalchemy.ForeignKey('testing.orderstatus.statusid')),
    schema='testing')

orderstatus_table = sqlalchemy.Table(
    'orderstatus', metadata,
    sqlalchemy.Column('statusid', sqlalchemy.Integer, primary_key=True),
    sqlalchemy.Column('description', sqlalchemy.VARCHAR),
    schema='testing')

orm.mapper(PurchaseOrder,
           purchaseorder_table,
           properties={
               'orderstatus': orm.relation(OrderStatus)})

orm.mapper(OrderStatus,
           orderstatus_table)

Thanks,
Curtis




[sqlalchemy] Problem with Relationship

2007-12-17 Thread [EMAIL PROTECTED]

Please people help me

My Simple Model

class Record:
    has_field('special_number', Unicode(100))
    belongs_to('person', of_kind='Person')

class Person:
    has_field('name', Unicode(100))

=
Use this:
Record(special_number=111, person=Person(name='myname'))

But now I need to access the special_number attribute of Record
starting from Person.

I tried this and it doesn't work:
print Person.query(Record).one().special_number
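A hedged sketch of one way to get there, assuming these are Elixir-style entities that expose a query attribute (the exact spelling varies between Elixir versions), filtering the Record side by its person relation:

someone = Person.query.filter_by(name=u'myname').one()
record = Record.query.filter_by(person=someone).one()
print record.special_number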




[sqlalchemy] Hi to ALL. I just join this group.

2007-12-17 Thread [EMAIL PROTECTED]

Greetings to all http://milearmida.tripod.com/widescreen-wallpapers-naruto.html
widescreen wallpapers naruto
 http://milearmida.tripod.com/index.html naruto wallpapers
 http://milearmida.tripod.com/naruto-akatsuki-wallpapers.html naruto
akatsuki wallpapers
 http://milearmida.tripod.com/naruto-movie-1-wallpapers.html naruto
movie 1 wallpapers
 http://milearmida.tripod.com/black-and-white-naruto-wallpapers.html
black and white naruto wallpapers




[sqlalchemy] Sessionthread problem

2007-12-02 Thread [EMAIL PROTECTED]

Hello.
I have a problem with selects from the database via sqlalchemy.
The first select is OK, but the second select breaks and I get this error:

ProgrammingError: SQLite objects created in a thread can only be used
in that same thread. The object was created in thread id -1236382832
and this is thread id -1244775536

The Session is created in the class with the table and mapper definitions.
Does every select need a unique session? Or where is the problem?
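A sketch of one common workaround (an assumption on my part, not something stated in this thread): give each thread its own session via scoped_session, so SQLite connections never cross threads:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session

engine = create_engine('sqlite:///mydata.db')          # illustrative URL
Session = scoped_session(sessionmaker(bind=engine))

session = Session()   # each thread that calls this gets its own session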

Sorry for (maybe) stupid question and my bad english ..

Thanks.




[sqlalchemy] sqlalchemy,session,query problem

2007-12-02 Thread [EMAIL PROTECTED]

Hello, I have this code for pagination on my blog app (cherrypy
powered) (the code is not complete):

http://www.pastebin.cz/show/2535

I have a problem: selecting/filtering by category_id is OK, but
limit/offset is making trouble.
When I ask for the next page of results, I get this error:

ProgrammingError: SQLite objects created in a thread can only be used
in that same thread. The object was created in thread id -1236735088
and this is thread id -1253520496

Is this a problem with the session, or what? Please help, and sorry for
my bad English.




[sqlalchemy] merge with dont_load=True and lazy relations

2007-11-22 Thread [EMAIL PROTECTED]

Hi,

I'm using SQLAlchemy 0.4.1. The problem is reproducible with
the following code:
http://pastie.caboo.se/121057

As you can see, when merging an object with a lazy relation
(here 'zone'), the 'zone' property is replaced with a fixed None value
in the merge object (you can't load the property anymore by accessing
it). Worst, when you flush the session, this None value is persisted,
although the 'zone_id' property is not None.

This (kind of a) bug does not occur if I use dont_load=False.

   Regards, Pierre-yves.




[sqlalchemy] ensuring all connection-related file descriptors are closed?

2007-11-11 Thread [EMAIL PROTECTED]

If a process with a reference to an engine (and hence a connection
pool) forks, how can I be sure that the child process closes any
inherited descriptors before creating its own engine and connections?

If the child calls dispose() on an engine it has inherited, the engine
seems to 'dispose' of any underlying pool only to recreate it.  Is it
safe then to use the inherited engine (since it has a new pool of
connections), or should a new one be created?  If the latter, how can
I ensure that all resources associated with the inherited one are
freed?

Any guidance would be appreciated.

J
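
A hedged sketch of the fork pattern (the URL is made up): dispose()
throws away the pooled connections inherited from the parent, and the
same engine then opens fresh connections lazily in the child.

import os
from sqlalchemy import create_engine

engine = create_engine('postgres://user@host/dbname')   # assumed URL

pid = os.fork()
if pid == 0:
    # Child: drop every connection inherited across the fork so this process
    # never reuses the parent's sockets; new ones are opened on first use.
    engine.dispose()
    print engine.execute("select 1").scalar()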





[sqlalchemy] Re: Looking for feedback on encapsulating SA logic (newbie)

2007-10-30 Thread [EMAIL PROTECTED]

 Is this a turbogears app? or just your stand alone app?

It's a standalone (and non-web) app.





[sqlalchemy] Looking for feedback on encapsulating SA logic (newbie)

2007-10-28 Thread [EMAIL PROTECTED]

I just started experimenting with .4 last night, and I'm really
jazzed. The tutorials have been helpful, but I'm struggling to figure
out how to bridge the gap between the tutorial fragments and a real-
world application.

So far, I've got a little idea working, and I'm hoping to get some
feedback if I'm working in the right direction. I'm trying to
encapsulate my SA logic in its own module and work with these objects
in my programs.

Below is a very simplified example of what I'm trying to do. I could
be handling my sessions, metadata, and tables poorly/inefficiently,
and I'd love some feedback where it could be better.

One glaring problem is the handling of the session information. I
tried to put it into a __get_session() method of ModelHandler, but I
was having trouble getting it working when called from a subclass. (b/
c User has no method __get_session())

It also seems that the four lines under my classes (creating the
engine, metadata, mapping, etc.) could be put into the constructor of
my superclass, but I'm not sure how to refrerence it yet.

Thanks in advance - I'm really looking forward to diving deeper with
SA!

*** Models.py ***

from sqlalchemy import *
from sqlalchemy.orm import mapper
from sqlalchemy.orm import sessionmaker

class ModelHandler(object):
    def save(self):
        # This session stuff should probably be handled by a private method,
        # but I'm having trouble getting it to work when save is subclassed()
        Session = sessionmaker(bind=db, autoflush=True, transactional=True)
        session = Session()
        session.save(self)
        session.commit()
        print "Debugging Statement: Saved User"

class User(ModelHandler):
    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

# There should be some tidy place to put these in my objects
db = create_engine('postgres://apache:@localhost:5432/test')
metadata = MetaData(db)
users_table = Table('users', metadata, autoload=True)
mapper(User, users_table)


>>> from models import *
>>> new_user = User('wendy', 'Wendy Williams', 'foobar')
>>> new_user.save()
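
One possible rearrangement, purely as a sketch along the lines asked
about: build the engine, metadata and Session factory once at module
import time and let the superclass reuse them.

from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import mapper, sessionmaker

db = create_engine('postgres://apache:@localhost:5432/test')
metadata = MetaData(db)
Session = sessionmaker(bind=db, autoflush=True, transactional=True)

class ModelHandler(object):
    def save(self):
        session = Session()          # the factory is shared, sessions are not
        session.save(self)
        session.commit()

class User(ModelHandler):
    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

users_table = Table('users', metadata, autoload=True)
mapper(User, users_table)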





[sqlalchemy] 04ormtutorial: IntegrityError: (IntegrityError) 1062, Duplicate entry

2007-10-03 Thread [EMAIL PROTECTED]

Hi!

I'm using tutorial 
http://www.sqlalchemy.org/docs/04/ormtutorial.html#datamapping_manytomany
When I create and save 1st object, all works fine.

But when I save 2nd post-object with the SAME KEYWORD:

<cut lang="python">

wendy = session.query(User).filter_by(name='wendy').one()
post = BlogPost("Wendy's Blog Post #2", "This is a test #2", wendy)

post.keywords.append(Keyword('wendy'))    # <-- 'wendy' already exists

post.keywords.append(Keyword('2ndpost'))
</cut>

I got Exception:
...
sqlalchemy.exceptions.IntegrityError: (IntegrityError) (1062,
Duplicate entry 'wendy' for key 2) u'INSERT INTO keywords (keyword)
VALUES (%s)' ['wendy']

How can I avoid that?
How can I reuse the existing keyword object?
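
A hedged sketch of the usual "get or create" workaround (names taken from
the tutorial mapping above): look the keyword up first and only build a
new one when it is missing.

def get_keyword(session, name):
    kw = session.query(Keyword).filter_by(keyword=name).first()
    if kw is None:
        kw = Keyword(name)
    return kw

post.keywords.append(get_keyword(session, 'wendy'))     # reuses the existing row
post.keywords.append(get_keyword(session, '2ndpost'))   # new keyword, new row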





[sqlalchemy] Re: IntegrityError during query?

2007-10-02 Thread [EMAIL PROTECTED]

Hi Michael, sorry about the lack of information; I wasn't clear on
what you were looking for.

The failing constraint is a custom one for email addresses:

CREATE DOMAIN base.email as TEXT CHECK
(VALUE ~ '[EMAIL PROTECTED](\\.[-\\w]+)*\\.\\w{2,4}$');

Thanks again!
Mark

On Oct 1, 5:46 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Oct 1, 2007, at 2:29 PM, [EMAIL PROTECTED] wrote:





  Hi Michael,

  I'm creating the session by:

Session = sessionmaker(bind = engine,
 autoflush = True,
 transactional = True)
session = Session()

  and I'm not using any threading at all (therefore no thread-local
  storage). The only thing between the commit and the next query is some
  reporting of statistics (using sys.stdout).

  I'm getting a constraint violation IntegrityError.

 unique constraint ? PK constraint ? foreign key constraint ?are
 you doing any explicit INSERT statements of your own independent of
 the session ?

 I cant diagnose the problem any further on this end without an
 explicit example.





[sqlalchemy] Re: IntegrityError during query?

2007-10-01 Thread [EMAIL PROTECTED]

Hi again, I really must be missing something fundamental here as I
cannot seem to solve this problem:

I have a loop that queries one table (without any constraints) and
writes to a second table (with constraints). Here's what the loop
looks like in pseudo-code:

while True:
  1) query old table and create a work list
  2) while items are in the work list:
  2.1) create a new object
  2.2) save
  3) commit

I can wrap the save (2.2) and commit (3) in try/except blocks which
solves the IntegrityError exceptions at that point. The problem is
that I'm getting IntegrityError exceptions in the query section (1)
which seem to be deferred INSERTs from the commit (3).

How can I turn off the deferred inserts?

Mark
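
For what it's worth, a hedged sketch of the loop with autoflush turned
off (the mapped classes here are made up); the pending saves then reach
the database only at the explicit commit in step 3.

from sqlalchemy import exceptions
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine, autoflush=False, transactional=True)
session = Session()

while True:
    work = session.query(LogRecord).all()     # hypothetical "old table" class
    if not work:
        break
    for item in work:
        session.save(LoginTime(item))         # hypothetical "new table" class
    try:
        session.commit()                      # the INSERTs happen here only
    except exceptions.IntegrityError:
        session.rollback()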

On Sep 25, 9:27 am, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
 Hi,  I have a newbie question:

 I'm parsing a log file in order to record login-times but I'm getting
 an IntegrityError on an insert during a query. Does this make sense?
 Even though I'm doing a commit at the bottom of the loop, should I
 expect the INSERT to actually happen during a subsequent query?

 Thanks,
 Mark





[sqlalchemy] Re: IntegrityError during query?

2007-10-01 Thread [EMAIL PROTECTED]

Hi Michael,

As I tried to show in the pseudo-code, the INSERTS look like they're
happening during the query (in step 1), well after the save/commit. I
even tried to add a flush and, when I turn on echo mode, I see inserts
happening at the query step. Is this even possible?

Mark

On Oct 1, 11:49 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Oct 1, 2007, at 11:02 AM, [EMAIL PROTECTED] wrote:





  Hi again, I really must be missing something fundamental here as I
  cannot seem to solve this problem:

  I have a loop that queries one table (without any contraints) and
  writes to a second table (with constraints). Here's what the loop
  looks like in pseudo-code:

  while True:
1) query old table and create a work list
2) while items are in the work list:
2.1) create a new object
2.2) save
3) commit

  I can wrap the save (2.2) and commit (3) in try/except blocks which
  solves the IntegrityError exceptions at that point. The problem is
  that I'm getting IntegrityError exceptions in the query section (1)
  what seem to be deferred INSERTS from the commit (3).

  How can I turn off the deferred inserts?

 some sample code would be helpful here in order to get some context
 as to what youre doing.  if the problem is just that the INSERT's
 dont occur until you say session.commit(), you can issue session.flush
 () at any time which will flush all pending changes/new items to the
 database.





[sqlalchemy] Re: IntegrityError during query?

2007-10-01 Thread [EMAIL PROTECTED]

Hi Michael,

I'm creating the session by:

  Session = sessionmaker(bind = engine,
autoflush = True,
transactional = True)
  session = Session()

and I'm not using any threading at all (therefore no thread-local
storage). The only thing between the commit and the next query is some
reporting of statistics (using sys.stdout).

I'm getting a constraint violation IntegrityError.

Thanks again for any help!
Mark

On Oct 1, 12:47 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Oct 1, 2007, at 12:12 PM, [EMAIL PROTECTED] wrote:



  Hi Michael,

  As I tried to show in the pseudo-code, the INSERTS look like they're
  happening during the query (in step 1), well after the save/commit. I
  even tried to add a flush and, when I turn on echo mode, I see inserts
  happening at the query step. Is this even possible?

  Mark

 OK by code example im looking for:

 - are you on version 0.3 or 0.4 ?

 - how are you creating your session ?

 - using multiple threads ?  are you keeping each session local to a
 single thread ?

 - whats happening between steps 3 and 1 ?  depending on how the
 session is set up, yes
 a flush() can be issued right before the query executes (i.e.
 autoflush).  But, according to your
 workflow below, it should not; since you are calling a commit() at
 the end.

 - what kind of IntegrityError youre getting...duplicate row insert ?
 missing foreign key ?  no primary key ?





[sqlalchemy] ORDER_BY always in SELECT statements?

2007-09-28 Thread [EMAIL PROTECTED]

Hi,

While trying to debug my script I set echo=True and checked the SQL
statements that are generated. I noticed that all of the SELECTs
issued to the DB have the ORDER_BY clause -- even though I didn't
explicitly specify order_by() nor do I care about the order.

Is this normal? Is there any way to turn this off?

Thanks in advance,
Mark





[sqlalchemy] IntegrityError during query?

2007-09-25 Thread [EMAIL PROTECTED]

Hi,  I have a newbie question:

I'm parsing a log file in order to record login-times but I'm getting
an IntegrityError on an insert during a query. Does this make sense?
Even though I'm doing a commit at the bottom of the loop, should I
expect the INSERT to actually happen during a subsequent query?

Thanks,
Mark





[sqlalchemy] Jython and sqlalchemy

2007-07-16 Thread [EMAIL PROTECTED]

Hi all,

My name is Frank Wierzbicki and I'm the primary maintainer of Jython.
We are about to release 2.2, so I've turned my attention to post 2.2
stuff.  I tried our 2.3 alpha with sqlalchemy against mysql and found
that it wasn't that hard to get it to work for a (very) simple test.
This is by no means complete or even good, but I am keeping the code
here: 
http://jython.svn.sourceforge.net/svnroot/jython/trunk/sandbox/wierzbicki/sqlalchemy/

The three files are a simple test (sqla.py), a monkey-patched version
of database/mysql.py (mysql.py) and the svn diff as of today
(mysql.diff) so I can keep track of the changes I made to get things
to work.  mysql.py can be pasted on top of the real one, then you can
run jython (latest from the 2.3 branch) and it works, at least on my
machine :)

Obviously patching mysql.py might not be the way it should really be
designed and this was just enough change to get my simple test to
work, but I thought I would share and see what people think about
getting sqlalchemy to work from Jython.

-Frank





[sqlalchemy] sqlalchemy.exceptions.SQLError: (ProgrammingError) can't adapt

2007-06-08 Thread [EMAIL PROTECTED]

Hello.  I am receiving the error:

sqlalchemy.exceptions.SQLError: (ProgrammingError) can't adapt 'INSERT
INTO workorderlines (workorderlines_rowid) VALUES (%
(workorderlines_rowid)s)' {'workorderlines_rowid':
Sequence('workorderlines_rowid_seq',start=None,increment=None,optional=False)}

running the following simplified version of what I am working with:

from sqlalchemy import *

db = create_engine('postgres://[EMAIL PROTECTED]:5432/fleettest')

db.echo = True

metadata = BoundMetaData(db)

workorderlines_table = Table('workorderlines', metadata,
    Column('workorderlines_rowid', Numeric(10,0),
           default=Sequence('workorderlines_rowid_seq')),
    PrimaryKeyConstraint('workorderlines_rowid'),
)

class Workorder_Line(object):
    def __repr__(self):
        return "Workorder_Line: %d %d %d %d%s" % (
            self.company, self.store, self.workorder, self.line,
            self.suffix)


mapper(Workorder_Line, workorderlines_table)

def main():
    session = create_session()
    obj = Workorder_Line()
    session.save(obj)
    session.flush()

if __name__ == '__main__': main()




Primarily, I have a postgres database with a sequence setup as a
default on the workorderlines_rowid column within the database.  If I
try to write out a record without setting the workorderlines_rowid
value or without specifying a default, the SQL tries to insert it with
a NULL value.  Since I couldn't figure out how to disable that, I have
tried linking a sqlalchemy default by either explicity specifying the
database sequence as above, or by using PassiveDefault to specify
DEFAULT, but in either case, I get the above error.  Is there a way
to stop sqlalchemy from trying to insert a value for a column I
haven't specified a value for?  Is something wrong with my sequence
specification?
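
For comparison, a hedged sketch with the Sequence given positionally, so
it acts as the column's default generator instead of being passed along
as a bind value:

workorderlines_table = Table('workorderlines', metadata,
    Column('workorderlines_rowid', Numeric(10, 0),
           Sequence('workorderlines_rowid_seq'), primary_key=True),
)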





[sqlalchemy] Query generation in 0.3.8 is broken?

2007-06-06 Thread [EMAIL PROTECTED]

Hi there.

I have just migrated to 0.3.8 from 0.3.6 and got the following error in
my app:
class 'sqlalchemy.exceptions.NoSuchColumnError'

Investigation shows that the queries generated by 0.3.6 and 0.3.8
differ:

(diff; I changed all spaces to line breaks beforehand):
--- 1   2007-06-06 14:40:44.0 +0400
+++ 2   2007-06-06 14:40:50.0 +0400
@@ -240,64 +240,6 @@
 task_id

 FROM
-(SELECT
-task.id
-AS
-task_id,
-prop_c_s.task_id
-AS
-prop_c_s_task_id,
-task.updated
-AS
-task_updated
-
-FROM
-task
-JOIN
-(SELECT
-task.id
-AS
-task_id,
-count(msg.id)
-AS
-props_cnt
-
-FROM
-task
-LEFT
-OUTER
-JOIN
-msg
-ON
-task.id
-=
-msg.task_id
-GROUP
-BY
-task.id)
-AS
-prop_c_s
-ON
-task.id
-=
-prop_c_s.task_id
-
-WHERE
-(task.prj_id
-=
-%s)
-ORDER
-BY
-task.updated
-DESC
-
-
-LIMIT
-10
-OFFSET
-0)
-AS
-tbl_row_count,
 task
 JOIN
 (SELECT
@@ -439,24 +381,12 @@
 task.task_type_id

 WHERE
-task.id
-=
-tbl_row_count.task_id
-AND
-task.id
-=
-tbl_row_count.prop_c_s_task_id
-AND
-prop_c_s.task_id
-=
-tbl_row_count.task_id
-AND
-prop_c_s.task_id
+(task.prj_id
 =
-tbl_row_count.prop_c_s_task_id
+%s)
 ORDER
 BY
-tbl_row_count.task_updated
+task.updated
 DESC,
 anon_1649.id,
 anon_f48c.task_id,
@@ -470,7 +400,7 @@
 anon_3d17.task_id,
 anon_0e68.id
 2007-06-04
-19:58:33,976

The query is following:

j  = outerjoin( task_t, message_t,
task_t.c.id==message_t.c.task_id)
jj = select([ task_t.c.id.label('task_id'),
  func.count(message_t.c.id).label('props_cnt')],
  from_obj=[j],
group_by=[task_t.c.id]).alias('prop_c_s')
jjj = join(task_t, jj, task_t.c.id == jj.c.task_id)
#jjj = outerjoin(task_effort_t, jjj, task_effort_t.c.task_id
== jjj.c.task_id)

cls.mapper = mapper( cls, jjj,
order_by=[desc(task_t.c.updated)],
  properties=dict(type=relation(Task_Type,
lazy=False),
  status=relation(Task_Status,
lazy=False, uselist=False),
  
publication=relation(Task_Publication,
lazy=False, uselist=False),
 
summary=deferred(task_t.c.summary),
  progress=relation(Task_Progress,
lazy=False, uselist=False),
 
appointment=relation(Task_Appointment, lazy=False, uselist=False),
 ))

The idea of this query is that I build a mapper over a join of some
tables which already have mappers, and add some group functions.
It worked OK in 0.3.6,

called with (not exactly, but something like)

session.query(cls.mapper).limit(...).offset().list

It works pretty well without limit/offset. Does anybody have an idea
what happened?
Is it my fault, or may it be a bug in 0.3.8?

I may, of course, roll back to 0.3.6, but I do not want to, and there
is a bug with unicode rows in 0.3.6 which made me upgrade the
version.





[sqlalchemy] Re: Query generation in 0.3.8 is broken?

2007-06-06 Thread [EMAIL PROTECTED]

I have just submitted ticket #523; it contains a minimal code
snippet which reproduces the error.
I am sorry that I did not send a working example right in ticket
#592, but I could not reproduce it then.

But now I did (see below, or ticket #523):

The problem appears when mapper, relations and limit/offset come
together.

#!/usr/bin/env python
from sqlalchemy import *
import sys, datetime

#init db
#global_connect('mysql://test:[EMAIL PROTECTED]/test')
#engine = create_engine('mysql://test:[EMAIL PROTECTED]/test')

global_connect('sqlite:///tutorial.db')
engine = create_engine('sqlite:///tutorial.db')

project_t = Table('prj',
  Column('id',Integer,
primary_key=True),
  Column('title', Unicode(100),
nullable=False),
  mysql_engine='InnoDB')


task_t = Table('task',
  Column('id',Integer,
primary_key=True),
  Column('status_id', Integer,
ForeignKey('task_status.id'), nullable=False),
  Column('title', Unicode(100),
nullable=False),
  Column('task_type_id',  Integer ,
ForeignKey('task_type.id'), nullable=False),
  Column('prj_id',Integer ,
ForeignKey('prj.id'), nullable=False),
  mysql_engine='InnoDB')

task_status_t = Table('task_status',
Column('id',Integer,
primary_key=True),
mysql_engine='InnoDB')

task_type_t = Table('task_type',
Column('id',   Integer,primary_key=True),
mysql_engine='InnoDB')

message_t  = Table('msg',
Column('id', Integer,  primary_key=True),
Column('posted',DateTime, nullable=False,
index=True, default=func.current_timestamp()),
Column('type_id',   Integer,
ForeignKey('msg_type.id'), nullable=False, index=True),
Column('from_uid',  Integer, nullable=False,
index=True),
Column('to_uid',Integer, nullable=False,
index=True),
Column('task_id',   Integer,
ForeignKey('task.id'), nullable=True,  index=True),
Column('time_est_days', Integer, nullable=True),
Column('subject',   Unicode(60), nullable=True),
Column('body',  Unicode, nullable=True),
Column('new',   Boolean, nullable=False,
default=True),
Column('removed_by_sender',  Boolean,
nullable=False, default=False),
Column('removed_by_recipient',   Boolean,
nullable=False, default=False),
mysql_engine='InnoDB')

message_type_t = Table('msg_type',
Column('id',Integer,
primary_key=True),
Column('name',  Unicode(20),
nullable=False, unique=True),
Column('display_name',  Unicode(20),
nullable=False, unique=True),
mysql_engine='InnoDB')

class Task(object):pass

class Task_Type(object):pass

class Message(object):pass

class Message_Type(object):pass

tsk_cnt_join = outerjoin(project_t, task_t,
task_t.c.prj_id==project_t.c.id)

ss = select([project_t.c.id.label('prj_id'),
func.count(task_t.c.id).label('tasks_number')],
from_obj=[tsk_cnt_join],
group_by=[project_t.c.id]).alias('prj_tsk_cnt_s')
j = join(project_t, ss, project_t.c.id == ss.c.prj_id)

Task_Type.mapper = mapper(Task_Type, task_type_t)


Task.mapper = mapper( Task, task_t,
  properties=dict(type=relation(Task_Type,
lazy=False),
 ))

Message_Type.mapper = mapper(Message_Type, message_type_t)

Message.mapper = mapper(Message, message_t,
 properties=dict(type=relation(Message_Type,
lazy=False, uselist=False),
 ))

tsk_cnt_join = outerjoin(project_t, task_t,
task_t.c.prj_id==project_t.c.id)
ss = select([project_t.c.id.label('prj_id'),
func.count(task_t.c.id).label('tasks_number')],
from_obj=[tsk_cnt_join],
group_by=[project_t.c.id]).alias('prj_tsk_cnt_s')
j = join(project_t, ss, project_t.c.id == ss.c.prj_id)

j  = outerjoin( task_t, message_t, task_t.c.id==message_t.c.task_id)
jj = select([ task_t.c.id.label('task_id'),
  func.count(message_t.c.id).label('props_cnt')],
  from_obj=[j], group_by=[task_t.c.id]).alias('prop_c_s')
jjj = join(task_t, jj, task_t.c.id == jj.c.task_id)

class cls(object):pass

props =dict(type=relation(Task_Type, lazy=False))
cls.mapper = mapper( cls, jjj, properties=props)


default_metadata.engine.echo = True
default_metadata.drop_all()
default_metadata.create_all()

session = create_session()

engine.execute("INSERT INTO prj (title) values('project 1');")
engine.execute("INSERT INTO task_status (id) values(1);")
engine.execute("INSERT INTO task_type(id) values(1);")
engine.execute(INSERT INTO task (title

[sqlalchemy] Re: pymssql and encoding - I can not get \x92 to be an '

2007-04-24 Thread [EMAIL PROTECTED]

I finally got the encoding to work.  I moved from linux to windows,
and now the encoding works with both pymssql and pyodbc.
So it had to do with using FreeTDS.  I experimented with FreeTDS.conf
to use version 7.0 and 8.0 and various charsets, but could not get it
to work, so I'll man up and use windows.

db = create_engine('mssql://./test', module=pyodbc,
module_name='pyodbc')





On Apr 11, 11:50 am, Rick Morrison [EMAIL PROTECTED] wrote:
 Last I heard, pyodbc was working on any POSIX system that supports odbc
 (most likely via unixodbc or iodbc)

 http://sourceforge.net/projects/pyodbc/

 -- check out the supported platforms

 On 4/11/07, Marco Mariani [EMAIL PROTECTED] wrote:



  Rick Morrison wrote:
   ...and while I'm making this thread unnecessarily long, I should add
   that while pymssql may not understand Unicode data, the pyodbc DB-API
   interface does. Thanks to recent work by Paul Johnston, it's on
   fast-track to becoming the preferred MSSQL db-api for SA.

  Since he starts with unfortunately, we have a ms sql server at work,
  maybe he's not developing on windows, and pyodbc is windows-specific.

  I think the data could be encoded with the 1252 charset, which is
  similar to 8859-1 but has an apostrophe in chr(146)





[sqlalchemy] [PATCH]: SSL support for MySQL

2007-04-22 Thread [EMAIL PROTECTED]

A simple (too simple?) patch to support SSL connections with MySQL.

Use the following syntax to use it:

 mysql://user:[EMAIL PROTECTED]/db?ssl_ca=path-to-ca-file

Tested with SA from svn, mysql-5.0.27, python-2.4.4 and MySQL-
python-1.2.1.

 - Terje


  Index: lib/sqlalchemy/databases/mysql.py
===
--- lib/sqlalchemy/databases/mysql.py   (revisjon 2530)
+++ lib/sqlalchemy/databases/mysql.py   (arbeidskopi)
@@ -293,8 +293,21 @@
 # note: these two could break SA Unicode type
 util.coerce_kw_type(opts, 'use_unicode', bool)
 util.coerce_kw_type(opts, 'charset', str)
-# TODO: what about options like ssl, cursorclass and
conv ?
+# ssl
+ssl_opts = ['key', 'cert', 'ca', 'capath', 'cipher']
+for opt in ssl_opts:
+util.coerce_kw_type(opts, 'ssl_' + opt, str)
+# ssl_ca option is required to use ssl
+if 'ssl_ca' in opts.keys():
+# ssl arg must be a dict
+ssl = {}
+for opt in ssl_opts:
+if 'ssl_' + opt in opts.keys():
+ssl[opt] = opts['ssl_' + opt]
+del opts['ssl_' + opt]
+opts['ssl'] = ssl

+# TODO: what about options like cursorclass and conv ?
 client_flag = opts.get('client_flag', 0)
 if self.dbapi is not None:
 try:





[sqlalchemy] Re: SQLite and decimal.Decimal

2007-04-10 Thread [EMAIL PROTECTED]

It would be great. Thank you.

André

On 7 abr, 12:57, Michael Bayer [EMAIL PROTECTED] wrote:
 the thing is, we have support for 6 different databases and postgres
 is the *only* one where its DBAPI implementation decides to use
 Decimal for numeric types.  the rest return just floats.  that
 means, people who have worked with databases other than postgres will
 be totally surprised to plug in SQLAlchemy one day and all the sudden
 they arent getting their expected float types back.  So i dont think
 one DBAPI should dictate the behavior for all DBAPIs, and its
 definitely not a bug.  its a feature request, asking for a generic
 numeric type that is guaranteed to return decimal.Decimal objects
 regardless of underlying DBAPI.

 So, I would rather add a new type called DecimalType that creates
 columns using NUMERIC semantics but explicitly returns
 decimal.Decimal objects.

 On Apr 7, 2007, at 9:16 AM, [EMAIL PROTECTED] wrote:



  Hi,

  I'm using SQLite in tests and there is a problem when using
  decimal.Decimal with sqlalchemy's Numeric type:

  SQLError: (InterfaceError) Error binding parameter 5 - probably
  unsupported type.

  This is not a new issue, a similar one was posted in
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/
  300b757014c7d375/ad024f5365ab2eea

  It looks like a bug in sqlalchemy, but I'd rather discuss it here
  before creating a ticket. What I'd really like is that the Numeric
  field could work with decimal.Decimal in SQLite as it does with
  postgres, without any other external hack.

  Regards,

  André





[sqlalchemy] pymssql and encoding - I can not get \x92 to be an '

2007-04-10 Thread [EMAIL PROTECTED]

Hello all -
Unfortunately, we have a ms sql server at work.  When I get tuples
from the server they look like this:

.. (55, 26, 'Small Business and Individual Chapter 11s - The NewCode
\x92s Effect on Strategies', 'AUDIO'...

with \x92 for apostrophe etc.  I've tried putting every encoding in
the create_engine statement, including  ISO-8859-1 used by MS SQL, but
the print statements always come out like:

.. The NewCode?s Effect on ..

I also tried passing the string to unicode(string, 'ISO-8859-1'), but
this gives me:

.. UnicodeEncodeError: 'ascii' codec can't encode character u'\x96' in
position 48: ordinal not in range(128) ..

Does anyone know about MSSQL or this encoding, or how to get
apostrophes where \x92 is?
Any help would be greatly appreciated.

-Steve
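
\x92 is the right single quotation mark in the Windows cp1252 codepage
(not ISO-8859-1), so one hedged workaround is to decode the returned
byte strings with cp1252 (Python 2 sketch):

raw = 'The NewCode\x92s Effect on Strategies'
text = raw.decode('cp1252')     # -> u'The NewCode\u2019s Effect on Strategies'
print text.encode('utf-8')      # re-encode before printing to a byte stream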





[sqlalchemy] SQLite and decimal.Decimal

2007-04-07 Thread [EMAIL PROTECTED]

Hi,

I'm using SQLite in tests and there is a problem when using
decimal.Decimal with sqlalchemy's Numeric type:

SQLError: (InterfaceError) Error binding parameter 5 - probably
unsupported type.

This is not a new issue, a similar one was posted in
http://groups.google.com/group/sqlalchemy/browse_thread/thread/300b757014c7d375/ad024f5365ab2eea

It looks like a bug in sqlalchemy, but I'd rather discuss it here
before creating a ticket. What I'd really like is that the Numeric
field could work with decimal.Decimal in SQLite as it does with
postgres, without any other external hack.

Regards,

André
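
For what it's worth, a hedged sketch of a wrapper type that keeps
Decimal on the Python side and stores strings in SQLite; the method
names below are from later SQLAlchemy releases (the 0.3-era
TypeDecorator hooks are named differently):

from decimal import Decimal
from sqlalchemy import types

class SQLiteNumeric(types.TypeDecorator):
    # Store Decimals as strings in SQLite, hand Decimal objects back on load.
    impl = types.String

    def process_bind_param(self, value, dialect):
        return str(value) if value is not None else None

    def process_result_value(self, value, dialect):
        return Decimal(value) if value is not None else None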





[sqlalchemy] SQL_CALC_FOUND_ROWS and FOUND_ROWS(). how?

2007-03-07 Thread [EMAIL PROTECTED]
Hi everyone.

How can I specify the SQL_CALC_FOUND_ROWS parameter in a SELECT query (I'm
using mysql 5.0)? Is there any (engine-independent) solution to
determine how many rows were matched by the whereclauses in a complex
select query? ResultProxy.rowcount holds the number of returned rows,
limited by the LIMIT statement, which is not what I need.

Thanks, sorry for my terrible english.

___


Hi everyone.
How can I pass the SQL_CALC_FOUND_ROWS parameter to a select query
(I'm using mysql 5.0)? Or is there maybe some other (perhaps even
engine-independent) way to determine how many rows satisfy the filter
conditions of a complex query? ResultProxy.rowcount holds the number of
selected records, capped by the LIMIT parameter, which is not at all
what I need.

Thanks, and sorry for my clumsy English.
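
An engine-independent alternative, as a hedged sketch (the table and the
filter are made up, and the tables are assumed to be bound to an
engine): run the page query and a count() query with the same
whereclause.

whereclause = items.c.category_id == 5
page  = select([items], whereclause, limit=10, offset=0).execute().fetchall()
total = select([func.count(items.c.id)], whereclause).execute().scalar()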








[sqlalchemy] Support for SQL views?

2007-02-17 Thread [EMAIL PROTECTED]

I've just downloaded and played with SQLAlchemy, and I must say I
quite like it. I've always enjoyed using plain SQL, and SA lets me do
that while integrating nicely with Python. Great work!

But I was a bit disappointed when I found that I couldn't access a
view as a table (using autoload). The application I currently work on
is based on Oracle and uses views heavily. The good news is that it
uses a lot of materialized views, which do work with SA, but it would
still be really nice if SA could treat SQL views as normal tables,
just as the DBMS does. It is also common to use synonyms quite heavily
on Oracle, but SA unfortunately doesn't seem to understand those
either.

Are there any plans for adding support for this soon?
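
In the meantime, one hedged workaround sketch: describe the view by hand
as an ordinary Table (the column names and types below are invented) and
select from it like any other table, assuming a MetaData bound to the
Oracle engine.

invoices_view = Table('invoices_v', metadata,
    Column('id', Integer, primary_key=True),
    Column('customer', String(50)),
    Column('total', Numeric(12, 2)),
)

rows = select([invoices_view], invoices_view.c.total > 100).execute().fetchall()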





[sqlalchemy] Re: [Fwd: [sqlalchemy] Re: scalar select: wrong result types]

2007-01-18 Thread [EMAIL PROTECTED]




without seeing a full example, its looking like a mysql bug.  SA's
datetime implementation for MySQL doesnt do any conversion from return
value since MySQLDB handles that task.  whereas the sqlite dialect in
SA *does* do conversion since sqlite doesnt handle that.


Below is my small example (it doesn't clarify where the problem is).
I hope someone else will check it, with other databases too (postgres,
for example).
I hope it can help.

My results:
sqlite_engine (True, True)
mysql_engine (False, False)


My configuration:
debian etch
sqlite 3
sqlalchemy trunk Revision: 2212
mysql 5.0.30
python 2.4.4

Alessandro

PS: I hope this email will keep indentation. I have also put this test in
http://pastebin.com/861936



import datetime
from sqlalchemy import *

def check(engine):
    mytest = Table('mytest', engine,
                   Column('date', DateTime, primary_key=True,
                          default=datetime.datetime.now))
    try:
        mytest.drop()
    except:
        pass
    mytest.create()
    mytest.insert().execute()

    tt = mytest.alias('tt')
    selTagged = select([tt.c.date,
                        select([tt.c.date],
                               tt.c.date==mytest.c.date,
                               from_obj=[mytest],
                               limit=1, offset=0,
                               scalar=True).label('last_mod')])
    # Note: 'text' needs a typemap
    selText = text(selTagged.__str__(), engine,
                   typemap={'last_mod': types.DateTime})

    def isdatetime(sel):
        return isinstance(sel.execute().fetchone().last_mod,
                          datetime.datetime)
    return isdatetime(selTagged), isdatetime(selText)

sqlite_engine = create_engine('sqlite:///database_test.db')
mysql_engine =  create_engine('mysql://[EMAIL PROTECTED]/xxx')
print sqlite_engine, check(sqlite_engine)
print mysql_engine, check(mysql_engine)





[sqlalchemy] Re: scalar select: wrong result types

2007-01-17 Thread [EMAIL PROTECTED]




the above query you are issuing straight textual SQL.  SA has no clue
what types to return and it has no say in it - the inconsistent
behavior above originates within your database/DBAPI (which you havent
told me which one it is).


I'm using Mysql5
A very simple table that give me problems:
mytest = Table('mytest', enginedb_test,
 Column('id', Integer, primary_key=True, nullable=False),
 Column('creation_date', DateTime, default=datetime.datetime.now),
 mysql_engine='InnoDB')



you can issue textual sql using the typemap parameter to text():

s = text(some sql, typemap={'x':types.DateTime})


It doesn't work; result is a 'str' type

res = sq.text(
SELECT w2.id,
 (SELECT w2t.creation_date AS creation_date
  FROM mytest AS w2t  where w2t.id=w2.id
LIMIT 1 OFFSET 0) as last_mod
FROM mytest AS w2
, enginedb_test, typemap={'last_mod':sq.types.DateTime}).execute().fetchone()


special.  the text clause above should work better (or at least is
intended for this scenario).   also the type should be propigated
through the label() youre creating above, i thought perhaps it might
not but i added a test case in 2206 that shows it does.


Yes, the test works fine!

I tried to switch db engine: it works for sqlite, it doesn't for mysql...

#works
enginedb_test = create_engine('sqlite:///database_test.db')
= (1, datetime.datetime(2007, 1, 17, 14, 33, 21, 483043))

#doesn't work
enginedb_test =  create_engine('mysql://name:[EMAIL PROTECTED]/dbname')
= (1L, '2007-01-17 14:30:20')

Is it a mysql engine bug?


Thanks for your help

Alessandro









[sqlalchemy] Order of constraints at creation time

2007-01-01 Thread [EMAIL PROTECTED]


[This is a repost, direct mail wasn't working]

Hi all,

I spent a moment at fixing here and there to make some test work under
firebird.

I'm out of luck figuring out the proper solution for
engine.ReflectionTest.testbasic: FB2 fails here because the test tries
to create a table with a primary key and a foreign key to itself, but
the issued SQL define the foreign key sooner than that the primary key.
In other words, the statement is something like

 CREATE TABLE engine_users (
   user_id INTEGER,
   ...
   FOREIGN KEY (parent_user_id) REFERENCES engine_users (user_id)
   PRIMARY KEY (user_id)
 )

This works for example in sqlite, but AFAICS FB2 is not smart enough:
it accepts the stmt only if I swap the constraints, otherwise it
complains about a missing unique index on the target field...

Digging the issue, I was looking at AnsiSchemaGenerator.visit_table()
which, at some point, calls .get_column_specification() passing a
first_pk flag, which at first seemed what I was looking for: as the
comment suggests, if at all possible the primary key should go
earlier on the column itself, but:

a) none of the current backends make any use of it

b) even if they did, how could they omit the primary key constraint
from the final loop on the table's ones?

Ideally the PK constraint (if any) should be visited before the
remaining FK constraints, but I don't see any sensible way of doing
that, since the container is a set.

Any advice?

Thanks in advance,
and happy gnu year everybody ;)

ciao, lele.





[sqlalchemy] pooled connections with failover database

2006-12-27 Thread [EMAIL PROTECTED]


Hi,
I'm using straight kinterbasdb connections to firebird in a
multithreaded SocketServer, and currently I open a new connection at
every request. Searching for a connection pooling solution, I found that
sqlalchemy's pool.py could be great for my needs.

I wish to modify my application so it can reuse a pool of firebird
connections, but it must also be possible to connect automatically to a
second failover database if the main db is unreachable.

Passing a function to a custom pool such as:

def getconn():
    try:
        conn = kinterbasdb.connect(dsn=dns1, ...)
    except kinterbasdb.OperationalError:
        conn = kinterbasdb.connect(dsn=dns2, ...)
    return conn

I can switch to a secondary database if the main db is not working, but
this works only for new connections, existing connections in the pool
will not be automatically restored.

Am I missing something? Can this be accomplished with sqlalchemy?
DBUtils from webware can catch errors on calls to dbmodule.cursor() and
try to reconnect but lacks the custom pool construction of sqlalchemy

Ezio
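
A hedged sketch of plugging that function into one of sqlalchemy's pools
(the sizes are arbitrary). Note that connections already sitting in the
pool keep pointing at whichever server they were opened against; after a
detected failover one would call dispose() to force fresh connections.

import sqlalchemy.pool as pool

firebird_pool = pool.QueuePool(getconn, pool_size=5, max_overflow=10)

conn = firebird_pool.connect()    # checked out from the pool (or newly created)
cur = conn.cursor()
cur.execute("select 1 from rdb$database")
conn.close()                      # returned to the pool, not really closed

# after detecting that the primary went away:
firebird_pool.dispose()           # drop the stale connections to the old server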





[sqlalchemy] Partial Metadata Divination?

2006-12-21 Thread [EMAIL PROTECTED]


I'm just learning to use SQLAlchemy and loving the way it can figure
out the columns for a simple table. Now, if I need relations, do I need
to specify all tables? (Or can I get away with only specifying the
relation?)

Thanks,

A.
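
A hedged sketch of how this usually looks (the table and class names are
invented): let autoload fill in the columns of every table involved and
declare only the relation yourself; the foreign keys are picked up from
the database.

users = Table('users', metadata, autoload=True)
addresses = Table('addresses', metadata, autoload=True)

mapper(Address, addresses)
mapper(User, users, properties={
    'addresses': relation(Address, backref='user'),
})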





[sqlalchemy] dump sqlalchemy-mapped object to xml

2006-12-15 Thread [EMAIL PROTECTED]

Greetings,

I need to dump a sqlalchemy-mapped object to XML, like it is implemented
in pyxslt for SQLObject.
So, the questions are:

1) how to separate columns and properties from the other mapped
object's attributes properly -
I need columns, properties and backrefs (surprisingly, backrefs do not
work like mapper's properties). Is there any recommended way to do it,
without fear that it will break in future versions of sqlalchemy?

2) how to avoid loading the lazy properties of an object? I mean, if in
the xml dumping code I call something like this:

...
return "<tag>" + the_mapped_object.lazy_property + "</tag>"
...

it may potentially produce tons of selects...

---
Thanks,
Dmitry
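
For what it's worth, a hedged sketch of walking only the column-backed
attributes without touching lazy loaders; the names used here
(class_mapper, mapper.columns) are from later releases and may differ in
older ones.

from sqlalchemy.orm import class_mapper

def to_xml(obj):
    m = class_mapper(obj.__class__)
    tag = obj.__class__.__name__.lower()
    parts = ['<%s>' % tag]
    for col in m.columns:             # column-backed attributes only
        if col.key in obj.__dict__:   # present only if already loaded
            parts.append('  <%s>%s</%s>' % (col.key, obj.__dict__[col.key], col.key))
    parts.append('</%s>' % tag)
    return '\n'.join(parts)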





[sqlalchemy] Re: PickleType and MySQL: want mediumblob instead of blob

2006-12-14 Thread [EMAIL PROTECTED]

Yes, that was exactly what I wanted!

Thanks a lot!





[sqlalchemy] PickleType and MySQL: want mediumblob instead of blob

2006-12-13 Thread [EMAIL PROTECTED]

Hi,

Using SA 0.3.2 with Python (TurboGears) and MySQL I see that my
PickleType column is stored as a blob in the MySQL table.

Can I change the following table definition so that my 'info' column is
stored as a mediumblob in MySQL instead of a blob?

data_set_table = Table('dataset', metadata,
Column('info', PickleType(), default={})
)

Thanks,
toffe
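
A hedged sketch using the MySQL-specific blob type; the import path
below is the one from recent SQLAlchemy releases (the 0.3-era dialect
module is sqlalchemy.databases.mysql instead), and 'metadata' is the
same object as in the definition above.

from sqlalchemy import Table, Column, PickleType
from sqlalchemy.dialects.mysql import MEDIUMBLOB

class MediumPickleType(PickleType):
    # Same pickling behaviour, but the DDL side uses MySQL's MEDIUMBLOB.
    impl = MEDIUMBLOB

data_set_table = Table('dataset', metadata,
    Column('info', MediumPickleType(), default={})
)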





[sqlalchemy] Re: python ORM

2006-12-11 Thread [EMAIL PROTECTED]

hibernate didn't magically appear, it was refined with input from users
over many years. if you want to help make sqlalchemy better for your
own and others' usage, it's much better to give concrete examples that
can be used to improve it than to make grandiose negative statements.
the idea behind sa's orm is indeed to obviate the need for most common
application sql; if it fails for some reason on a given example, it
would be helpful to actually hear what the usage scenario/example is;
perhaps you can post a comparison of a simple model between hibernate
and sa.

for example.. usage complaints as relates to session, are typically
things that can be handled by app server, via app agnostic appserver
integration. sa exposes various session/txn layers for application
flexibility and for easier integration into frameworks or different
programming styles, in much the same way that hibernate does, and
arguably sa is more flexible here for different usage modes (although
hibernate's scaling/deployment options, ie. caches are much nicer).

cheers,

kapil


On Dec 9, 11:35 am, flyingfrog [EMAIL PROTECTED] wrote:
 Ok, this is my first approach with python + ORM, and i must say i can't
 do what i want with it... Before that i used java Hibernate, think you
 should know about.
 I've tried both SQLObject and SQLAlchemy but both have been a delusion
 :(, i'd like to share with you my considerations, hoping in some
 suggestions!

 SQLObject forces you to embed database code into your model classes,
 and i don't really want that. And it's really buggy! Too much for
 production usage.

 SQLAlchemy lets you define separately DB code and python classes, but
 then you hve a real duplication. And to use database functionalities
 you always need to access session objects or connections, making
 sql-like queries. And i don't want that.

 Thy're both good projects but none of them has catched the real
 objective: no more sql within the application logic. Hibernate does it,
 take a look!

 Anyway, maybe i didn't look so much into python world and there could
 be other tools, or other ways of using these tools for doing what i
 want to, please let me know if you know of any!!
 
 Bye





[sqlalchemy] working with detached objects

2006-11-28 Thread [EMAIL PROTECTED]

i'd like to detach an object from a session, modify it, and reattach it
to the session, with detection of the object's current state
(modified), and added to the session's dirty set. i thought calling
session.save_or_update, would do this but it resets the modification
status of the object (_update_impl - register_persistent -
register_clean ). such that changes aren't written on session.flush. is
there a way to attach a modified object back to a session, such that
its registered as dirty?





[sqlalchemy] Re: working with detached objects

2006-11-28 Thread [EMAIL PROTECTED]

fwiw: using private methods on the session, calling session._attach(
instance ) has the necessary effect of reattaching the object to the
session with it marked as dirty.

[EMAIL PROTECTED] wrote:
 i'd like to detach an object from a session, modify it, and reattach it
 to the session, with detection of the object's current state
 (modified), and added to the session's dirty set. i thought calling
 session.save_or_update, would do this but it resets the modification
 status of the object (_update_impl - register_persistent -
 register_clean ). such that changes aren't written on session.flush. is
 there a way to attach a modified object back to a session, such that
 its registered as dirty?





[sqlalchemy] does objectstore have a bug?

2006-11-19 Thread [EMAIL PROTECTED]

I have a model like that:

#models.py

import sqlalchemy.mods.threadlocal
from sqlalchemy import *

metadata =
BoundMetaData('mysql://root:[EMAIL PROTECTED]@localhost/django')
metadata.engine.echo = True

wikis = Table('wiki_wiki', metadata,
  Column('id', Integer, primary_key=True),
  Column('pagename', String(20), unique=True),
  Column('content', TEXT))

# These are the classes that will become our data classes
class Wiki(object):
    @classmethod
    def by_pagename(cls, pagename):
        return objectstore.context.current.query(cls).select_by(pagename=pagename)

    @classmethod
    def firstby_pagename(cls, pagename):
        return objectstore.context.current.query(cls).selectfirst_by(pagename=pagename)

    def save(self):
        return objectstore.context.current.save(self)

    def flush(self):
        return objectstore.context.current.flush([self, ])

mapper(Wiki, wikis)

then I write a web application to view/modify the wiki's contents; the code
looks like this:
#view
pages = Wiki.by_pagename(pagename)
if pages:
    return pages[0].content

#edit
pages = Wiki.by_pagename(pagename)
if pages:
    pages[0].content = content
    pages[0].flush()
I configure apache + mod_python to run the web application, and I meet a
very strange problem: I have a wiki page whose content is "test", then I
modify its content to "test2". I also print the current process id, and I
found that distinct apache processes show different results: some show
"test", some show "test2", but the right answer should be "test2". Does
objectstore have a cache, or is there some other reason?

thanks!





[sqlalchemy] redirect sqlalchemy log to stderr

2006-11-15 Thread [EMAIL PROTECTED]

Hello there,


Is there any simple way to redirect sqlalchemy log to stderr? I did not
find  any examples for this.
I'm developing a web application with sqlalchemy using web.py, and it's
quite  annoying to have debug output in stdout.

Thanks,


---
Regards,
Dmitry
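
A hedged sketch using the standard logging module, which sqlalchemy logs
through: leave echo off and attach a stderr handler to the
'sqlalchemy.engine' logger (handler and level details may vary by
version).

import logging
import sys

handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(message)s'))

logger = logging.getLogger('sqlalchemy.engine')   # SQL statement logging
logger.addHandler(handler)
logger.setLevel(logging.INFO)                     # DEBUG would also log result rows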

