[sqlalchemy] hybrid_property returning a python type

2013-07-25 Thread Matthew Pounsett

I have a class with 'start' and 'finish' attributes which are DateTime 
columns.  I'm trying to create a hybrid property 'duration' which returns 
the delta as a datetime.timedelta object.  This is working fine for the 
instance attribute, but I can't seem to get it to work for the class 
expression.

This is close, and works, except that the class expression returns an 
integer:

@hybrid_property
def duration(self):
    if self.finish:
        return self.finish - self.start
    else:
        return timedelta(0)

@duration.expression
def duration(cls):
    return func.strftime('%s', func.coalesce(cls.finish, cls.start)) - \
           func.strftime('%s', cls.start)

As soon as I try to wrap that to convert it to the python object, I get an 
exception:

@duration.expression
def duration(cls):
    return timedelta(
        func.strftime('%s', func.coalesce(cls.finish, cls.start)) -
        func.strftime('%s', cls.start)
    )

TypeError: unsupported type for timedelta days component: _BinaryExpression

Is there something I need to do to convert the return value from func() in 
order to be able to work with it?  Or, is what I'm attempting even possible?

Thanks!

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




RE: [sqlalchemy] Alembic 0.6.0 released

2013-07-25 Thread Alexey Vihorev
Hi!

In the former USSR, there is a popular commercial development framework
called 1C:Enterprise.  They too have what could be called a "code first"
approach (more like "set up through a GUI first" in their case). For many
years they have quite successfully used a freezing system: they dump
metadata into storage inside the DB the first time it is created. On a
subsequent change they compare the current app metadata with the previous one
and then display a nice GUI that shows the differences between them and allows
the user to approve/cancel the migration.  Then an appropriate DB backend
commits the changes to the DB schema (they are DB-agnostic too). It would be
nice to have a tool like this for sqlalchemy, IMO.


-Original Message-
From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On
Behalf Of Michael Bayer
Sent: Sunday, July 21, 2013 7:41 AM
To: sqlalchemy@googlegroups.com
Cc: sqlalchemy-alem...@googlegroups.com
Subject: Re: [sqlalchemy] Alembic 0.6.0 released


On Jul 20, 2013, at 11:41 AM, David Szotten davidszot...@gmail.com wrote:

 south currently uses state freezing (which always felt messy), but i
believe the new rewrite for inclusion in django core is moving to a way of
describing schema changes in a declarative way that can be introspected (so
the current state is calculated from all previous migrations instead of some
frozen state)
 
 this also allows large chunks of historic migrations to be squashed into
new shortcut migrations which sounds nice
 
 i'm not familiar with alembic (nor south) internals, but this sounds like
an interesting idea.
 
 is it something you have considered?

OK, the three systems I can see are: 1. database introspection; 2. freezing
the last known state of the database schema as described in the application;
and 3. constructing the current state of the database based on the
migrations present.

#1 is what we do now; it has the issue that there are some mismatches
between what the database can tell us about the schema versus how the
application describes the schema; information is invariably lost.  #2 is
what I've planned for, in some way.   The purpose of storing the previous
value of the metadata is so that we can answer the question, "what changes
were made to this MetaData structure?"    I guess what's messy is that we
have to store this new thing, the previous state of MetaData, but it's a
thing that is designed to store exactly the state we need to know about.  

 #3, I don't see how that could work without the requirement that the
database was 100% built from scratch from migration directives, all those
migration directives remain present, and they all correspond perfectly to
the schema as described in the application.A large number of apps
including my own were not built from scratch from migrations.
Corresponding migration directives to a schema seems really complicated and
would have lots of cases that don't work - literal SQL directives, custom
directives, directives that were hand-edited by the user to suit various
cases, directives that are in migration scripts but are not represented in
the fixed metadata.   

That is, unless the system is, that you store the previous and new state of
the fixed metadata in terms of diff directives that actually faithfully
represent exactly changes in the metadata, and are not generally edited or
mutated.   So instead of storing just the previous version of the
metadata, you actually store the observed diffs each time along with the
version id.Which might be a nice idea because then you are storing a
record of the whole thing, and also this is data that we already know how to
get since it's essentially what autogenerate works with internally anyway.
I'm not sure what actual advantages it has, I guess you could use it to
produce the state of your MetaData structure for any version, if I could
think of a reason why that's useful.

I'll think about if there are some other wins from that approach, but it
still means storing a new set of data that's independent from the
user-edited migration scripts.  Maybe South's approach is messy just
because they implemented it messily?I'm not rushing into doing this
feature because when I do it, it's going to be *perfect*.





Re: [sqlalchemy] oracle reflect with duplicated tables (schema casing)

2013-07-25 Thread Michael Bayer

On Jul 24, 2013, at 10:35 PM, Victor Olex victor.o...@vtenterprise.com wrote:

 
 Putting the weird aside, I am still puzzled why would the foreign key in 
 MIKE.customer in the above example be seen as coming from mike.address and 
 not MIKE.address? Shouldn't the schema name be consistent and thus in this 
 example, case-sensitive uppercase MIKE?

When SQLA asks the database for the foreign keys, it sees MIKE.ADDRESS from 
Oracle; it then runs this through case-insensitive conversion, which means it 
becomes mike.address.  This has no relationship to the fact that the schema 
for the owning table was named "MIKE" with case sensitivity.

That is, SQLA isn't going to try to guess its way through a mixture of 
user-specified case-sensitive and database-returned case-insensitive 
identifiers.  Much simpler for the user to just use the API correctly in the 
first place :).
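SQLAlchemy's casing convention can be seen without a database: an all-lowercase name is treated as case-insensitive and rendered unquoted, while a name containing uppercase characters is treated as case-sensitive and quoted. A minimal sketch, using the generic dialect's DDL compiler purely for illustration (the table/schema names mirror the thread):

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.schema import CreateTable

metadata = MetaData()
# lowercase schema name: case-insensitive, matches Oracle's uppercase catalog
lower = Table("address", metadata, Column("id", Integer), schema="mike")
# uppercase schema name: treated as case-sensitive, so SQLAlchemy quotes it
upper = Table("address", metadata, Column("id", Integer), schema="MIKE")

lower_ddl = str(CreateTable(lower))  # schema rendered unquoted: mike.address
upper_ddl = str(CreateTable(upper))  # schema rendered quoted: "MIKE".address
print(lower_ddl)
print(upper_ddl)
```

The two tables get distinct keys in the MetaData ("mike.address" vs. "MIKE.address"), which is exactly the duplication the original poster observed.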




 
 Thanks.
 
 On Wednesday, July 24, 2013 8:02:51 PM UTC-4, Michael Bayer wrote:
 
 On Jul 24, 2013, at 7:34 PM, mdob mike.do...@gmail.com wrote: 
 
  Hi, 
  
  I got into an interesting issue where I receive duplicated tables if I use 
  capital letters schema in reflect method. 
 
 you wouldn't want to do that unless the table were actually created using a 
 case-sensitive name, which is pretty unusual in Oracle.  If Oracle's own 
 system views show the names as ALL_UPPERCASE, that's a case-insensitive name. 
  On the SQLAlchemy side, you should use all lower case names which will be 
 similarly treated as case insensitive.  Otherwise it will see a 
 case-insensitive and a locally case-sensitive name as different, leading to 
 the kinds of issues you're seeing. 
 
  
  
  Tables were created like: CREATE TABLE "MIKE"."CUSTOMER" ..., so they should 
  be case insensitive. 
 
 I see there are quotes here, but Oracle will still log these as case 
 insensitive (I just tried).   So use all lower case on the SQLAlchemy side. 
 
 
  
  What I found in sqlalchemy code is that table mike.address is mapped and 
  added to Base.metadata.tables dictionary when table MIKE.customer is being 
  mapped. I guess that's because mike.address parent table to MIKE.customer. 
  The thing is it's added lowercase. Next, MIKE.address is added in a normal 
  way. 
 
 it sees "mike.address" in two different ways.  One, as the table 
 "MIKE.address", because you asked for the schema "MIKE", and the other, as the 
 table "mike.address", which is what "MIKE.customer" says it foreign keys out 
 to.   "MIKE" is not the same as "mike"; the former is case-sensitive on 
 SQLA's side. 
 
 
 





Re: [sqlalchemy] hybrid_property returning a python type

2013-07-25 Thread Michael Bayer

On Jul 25, 2013, at 2:44 AM, Matthew Pounsett matt.pouns...@gmail.com wrote:

 
 I have a class with 'start' and 'finish' attributes which are DateTime 
 columns.  I'm trying to create a hybrid property 'duration' which returns the 
 delta as a datetime.timedelta object.  This is working fine for the instance 
 attribute, but I can't seem to get it to work for the class expression.
 
 This is close, and works, except that the class expression returns an integer:
 
 @hybrid_property
 def duration(self):
     if self.finish:
         return self.finish - self.start
     else:
         return timedelta(0)
 
 @duration.expression
 def duration(cls):
     return func.strftime('%s', func.coalesce(cls.finish, cls.start)) - \
            func.strftime('%s', cls.start)
 
 As soon as I try to wrap that to convert it to the python object, I get an 
 exception:
 
 @duration.expression
 def duration(cls):
     return timedelta(
         func.strftime('%s', func.coalesce(cls.finish, cls.start)) -
         func.strftime('%s', cls.start)
     )
 
 TypeError: unsupported type for timedelta days component: _BinaryExpression

func.xyz() - func.qpr() doesn't produce an integer in Python; it is a SQL 
construct that can be evaluated by the database, not unlike if you just had a 
duration column on your table, where you'd say Column('duration', Interval).
Interval here is actually the SQL datatype we'd be looking to deal with.
So in theory, if relational backends were consistent about date arithmetic, 
you'd want to say:

from sqlalchemy import type_coerce, Interval

type_coerce(
    func.strftime('%s', func.coalesce(cls.finish, cls.start)) -
    func.strftime('%s', cls.start),
    Interval
)

But your backend isn't doing this; if you were using PostgreSQL, for example, it 
should be returning a timedelta() already.  So perhaps this is MySQL.   You'd 
need to make a TypeDecorator that receives this integer and does what you want 
with it.  You'd emulate the Epoch decorator currently illustrated at 
http://docs.sqlalchemy.org/en/rel_0_8/core/types.html#sqlalchemy.types.TypeDecorator:

class MyIntervalType(types.TypeDecorator):
    impl = types.Integer

    def process_bind_param(self, value, dialect):
        return value.days

    def process_result_value(self, value, dialect):
        return datetime.timedelta(days=value)

so fully:

from sqlalchemy import type_coerce, Interval

type_coerce(
    func.strftime('%s', func.coalesce(cls.finish, cls.start)) -
    func.strftime('%s', cls.start),
    MyIntervalType
)
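Putting the pieces together, here is a hedged end-to-end sketch on SQLite, where strftime('%s', ...) yields epoch seconds, so the integer difference is interpreted as seconds rather than days; the Event model and MyIntervalType definition here are illustrative, not the poster's actual code:

```python
import datetime

from sqlalchemy import (Column, DateTime, Integer, create_engine, func,
                        type_coerce, types)
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class MyIntervalType(types.TypeDecorator):
    """Turn an integer number of seconds into a timedelta on the way out."""
    impl = types.Integer
    cache_ok = True

    def process_result_value(self, value, dialect):
        if value is not None:
            return datetime.timedelta(seconds=value)

class Event(Base):
    __tablename__ = "event"
    id = Column(Integer, primary_key=True)
    start = Column(DateTime)
    finish = Column(DateTime)

    @hybrid_property
    def duration(self):
        if self.finish:
            return self.finish - self.start
        return datetime.timedelta(0)

    @duration.expression
    def duration(cls):
        # integer seconds at the SQL level, coerced to a timedelta in Python
        return type_coerce(
            func.strftime('%s', func.coalesce(cls.finish, cls.start)) -
            func.strftime('%s', cls.start),
            MyIntervalType)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Event(start=datetime.datetime(2013, 7, 25, 12, 0, 0),
                  finish=datetime.datetime(2013, 7, 25, 12, 30, 0)))
session.commit()

delta = session.query(Event.duration).scalar()
print(delta)  # 0:30:00
```

The same hybrid now returns a timedelta both for an instance and for a query against the class expression.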







 
 Is there something I need to do to convert the return value from func() in 
 order to be able to work with it?  Or, is what I'm attempting even possible?
 
 Thanks!
 





Re: [sqlalchemy] mysql error creating Table

2013-07-25 Thread Michael Bayer
is there a question here?   I'm not seeing what's unexpected.


On Jul 23, 2013, at 7:08 AM, M3nt0r3 m3nt...@gmail.com wrote:

 I am porting a schema-building system to mysql (it works on postgresql and 
 sqlite so far) but I have some issues with datetime (solved) and with (among 
 other tables) this table:
  
 
 t_listino_articolo = Table('listino_articolo', params["metadata"],
     Column('id_listino', Integer,
            ForeignKey(fk_prefix + "listino.id", onupdate="CASCADE", ondelete="CASCADE"),
            primary_key=True),
     Column('id_articolo', Integer,
            ForeignKey(fk_prefix + "articolo.id", onupdate="CASCADE", ondelete="CASCADE"),
            primary_key=True),
     Column('prezzo_dettaglio', Numeric(16, 4)),
     Column('prezzo_ingrosso', Numeric(16, 4)),
     Column('ultimo_costo', Numeric(16, 4), nullable=True),
     Column('data_listino_articolo', DateTime,
            ColumnDefault(datetime.datetime.now), nullable=False, primary_key=True),
     Column('listino_attuale', Boolean, nullable=False),
     #ForeignKeyConstraint(['id_listino', 'id_articolo'],
     #                     [fk_prefix + '.listino.id', fk_prefix + '.articolo.id'],
     #                     onupdate="CASCADE", ondelete="CASCADE"),
     CheckConstraint("prezzo_dettaglio is not NULL OR prezzo_ingrosso is not NULL"),
     schema=params["schema"]
 )
 t_listino_articolo.create(checkfirst=True)
 
 
 that's what echo says; not much: 
 
 2013-07-23 12:59:58,592 INFO sqlalchemy.engine.base.Engine DESCRIBE 
 `listino_articolo`
 2013-07-23 12:59:58,592 INFO sqlalchemy.engine.base.Engine ()
 2013-07-23 12:59:58,593 INFO sqlalchemy.engine.base.Engine ROLLBACK
 2013-07-23 12:59:58,596 INFO sqlalchemy.engine.base.Engine 
 CREATE TABLE listino_articolo (
 id_listino INTEGER NOT NULL, 
 id_articolo INTEGER NOT NULL, 
 prezzo_dettaglio NUMERIC(16, 4), 
 prezzo_ingrosso NUMERIC(16, 4), 
 ultimo_costo NUMERIC(16, 4), 
 data_listino_articolo DATETIME NOT NULL, 
 listino_attuale BOOL NOT NULL, 
 PRIMARY KEY (id_listino, id_articolo, data_listino_articolo), 
 CHECK (prezzo_dettaglio is not NULL OR prezzo_ingrosso is not NULL), 
 FOREIGN KEY(id_listino) REFERENCES listino (id) ON DELETE CASCADE ON 
 UPDATE CASCADE, 
 FOREIGN KEY(id_articolo) REFERENCES articolo (id) ON DELETE CASCADE ON 
 UPDATE CASCADE, 
 CHECK (listino_attuale IN (0, 1))
 )
 
 
 2013-07-23 12:59:58,596 INFO sqlalchemy.engine.base.Engine ()
 2013-07-23 12:59:58,607 INFO sqlalchemy.engine.base.Engine ROLLBACK
 
 thanks 
 F.
 
 





[sqlalchemy] Flask / sqlalchemy.exc.ResourceClosedError with SQLite

2013-07-25 Thread Sean Johnson
I have a Flask route (method) that assigns queries to 4 connection objects.  3 
work fine; 1 was working, but now gives me a sqlalchemy.exc.ResourceClosedError 
and I can't get it working again.  When I paste the same SQL into a 
DB manager it returns 5 records as expected.

What else could be causing this?

Here is the offending line:

progress = con.execute(
    "select goals.category, sum(transactions.amount) as sum, goals.goal, "
    "((sum(transactions.amount) / goals.goal) * 100) as progress "
    "from goals, transactions "
    "where goals.category = transactions.category and goals.email = :param "
    "group by goals.category",
    {"param": session['email']})

return render_template('home.html', progress=progress)

end of stack trace:

File "E:\Dropbox\flask\lib\site-packages\sqlalchemy\engine\base.py", line 3051, in __iter__
    row = self.fetchone()
File "E:\Dropbox\flask\lib\site-packages\sqlalchemy\engine\base.py", line 3241, in fetchone
    row = self._fetchone_impl()
File "E:\Dropbox\flask\lib\site-packages\sqlalchemy\engine\base.py", line 3158, in _fetchone_impl
    self._non_result()
File "E:\Dropbox\flask\lib\site-packages\sqlalchemy\engine\base.py", line 3182, in _non_result
    raise exc.ResourceClosedError("This result object is closed.")

ResourceClosedError: This result object is closed.


This seems to imply there was a fetchone issue.  It should be producing an 
iterable result set that my template's for loop can work through. 
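One thing worth trying (a guess, not a diagnosis): materialize the rows eagerly with .fetchall() so the template iterates a plain list rather than a live result tied to the connection's lifetime. A self-contained sketch below; the SQLite engine, throwaway schema, and sample email are assumptions, and the post's `sum` alias is renamed to `total` to avoid shadowing the SQL function:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # stand-in for the app's engine
with engine.connect() as con:
    # throwaway tables so the query below has something to run against
    con.execute(text("create table goals (category text, goal real, email text)"))
    con.execute(text("create table transactions (category text, amount real)"))
    progress = con.execute(
        text("select goals.category, sum(transactions.amount) as total, "
             "goals.goal, ((sum(transactions.amount) / goals.goal) * 100) as progress "
             "from goals, transactions "
             "where goals.category = transactions.category and goals.email = :param "
             "group by goals.category"),
        {"param": "user@example.com"},
    ).fetchall()  # a plain list: safe to iterate after the connection is gone

print(progress)  # [] with no data; a list of rows in general
```

If this works where the lazy result failed, something between the execute and the template render is closing the connection or result.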





[sqlalchemy] automated mapping - sqlalchemy or sqlsoup or something else

2013-07-25 Thread Henning Sprang
Hi,

To automatically be able to access a large legacy database without having 
to write manual mapping code, I'd like to have a tool with automated 
mapping support.

There seem to be two tools that offer something like this - sqlsoup and 
sqlasagna - but neither seems very actively supported: no commits in 
more than a year, and sqlasagna has partly wrong documentation (saying it's 
available on PyPI while it isn't, to give one example), so I'm not sure if I 
really should use one of them and whether I'm missing something.

I am aware of the automated mapping functionality I can get with 
declarative_base and DeferredReflection, but then I still have to write 
classes for all tables plus define relationships. I might be able to script 
that myself (to make it work automatically at runtime, or as a class generator 
as is available in the Django ORM), but still, I'm asking if there is another 
tool I didn't find yet.
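For reference, the DeferredReflection pattern mentioned above looks roughly like this: column definitions are pulled from the database at prepare() time, though a class per table still has to be written by hand. The table, engine URL, and class names below are made up for illustration:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.ext.declarative import DeferredReflection
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Reflected(DeferredReflection, Base):
    __abstract__ = True

class User(Reflected):
    __tablename__ = "user"   # columns come from reflection, not the class

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("create table user (id integer primary key, name varchar(50))"))

Reflected.prepare(engine)  # reflects the tables and completes the mappings
print(User.__table__.columns.keys())  # ['id', 'name']
```

The relationships still have to be declared manually, which is exactly the gap the question is about.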

Thanks,
Henning





Re: [sqlalchemy] automated mapping - sqlalchemy or sqlsoup or something else

2013-07-25 Thread Michael Bayer
check out sqlautocode: https://code.google.com/p/sqlautocode/


On Jul 25, 2013, at 12:19 PM, Henning Sprang henning.spr...@gmail.com wrote:

 Hi,
 
 To automatically be able to access a large legacy database without having to 
 write manual mapping code, I'd like to have a tool with automated mapping 
 support.
 
 There seem to be two tools that offer something like this - sqlsoup and 
 sqlasagna - but neither seems very actively supported: no commits in more 
 than a year, and sqlasagna has partly wrong documentation (saying it's 
 available on PyPI while it isn't, to give one example), so I'm not sure if I 
 really should use one of them and whether I'm missing something.
 
 I am aware of the automated mapping functionality I can get with 
 declarative_base and DeferredReflection, but then I still have to write 
 classes for all tables plus define relationships. I might be able to script 
 that myself (to make it work automatically at runtime, or as a class generator 
 as is available in the Django ORM), but still, I'm asking if there is another 
 tool I didn't find yet.
 
 Thanks,
 Henning
 





[sqlalchemy] Appending a child which already has a parent (OneToMany relationship)

2013-07-25 Thread Edouard Sioufi

Hello,

I am using SQLAlchemy 0.8 and the question is related to the ORM. It is 
about the models in memory (I think the kind of database used is not 
relevant).

To explain my issue, I'll consider the following basic OneToMany declarative 
example from the documentation, with a backref relationship (and __repr__ 
functions for debugging):

class Parent(Base):
    __tablename__ = 'parent'
    parentId = Column(Integer, primary_key=True)
    name = Column(String(90))
    children = relationship("Child", backref=backref('parent'))

    def __repr__(self):
        return "Parent('%s','%s')" % (self.name, self.children)

class Child(Base):
    __tablename__ = 'child'
    elementId = Column(Integer, primary_key=True)
    name = Column(String(90))
    parentId = Column(Integer, ForeignKey("parent.parentId"))

    def __repr__(self):
        if self.parent is not None:
            parentName = self.parent.name
        else:
            parentName = None
        return "Child('%s','%s')" % (self.name, parentName)



If the parent attribute of a child is changed from parent1 to parent2, 
not only is the child added to the children of the new parent parent2, it 
is also removed from the collection of the old parent parent1. Example:

child.parent = parent1

print parent1
print parent2
print child
print '='
child.parent = parent2
print parent1
print parent2
print child


yields the following output:

Parent('Parent 1','[Child('Child 1','Parent 1')]')
Parent('Parent 2','[]')
Child('Child 1','Parent 1')
=
Parent('Parent 1','[]')
Parent('Parent 2','[Child('Child 1','Parent 2')]')
Child('Child 1','Parent 2')


However, if the child is appended to parent2.children when it has parent1 as 
its parent, the child is added to the children collection of parent2 but not 
removed from the children collection of parent1. Example:

child.parent = parent1
print parent1
print parent2
print child
print '='
parent2.children.append(child)
print parent1
print parent2
print child

The following output shows the issue:

Parent('Parent 1','[Child('Child 1','Parent 1')]')
Parent('Parent 2','[]')
Child('Child 1','Parent 1')
=
Parent('Parent 1','[Child('Child 1','Parent 2')]')
Parent('Parent 2','[Child('Child 1','Parent 2')]')
Child('Child 1','Parent 2')


I am quite new to SQLAlchemy and I don't know if this is expected behavior. 
I nevertheless dare to say that I find the state of the objects in memory 
incoherent, as child1 no longer has parent1 as its parent while 
parent1 still thinks that it has child1 as a child. Moreover, it is not 
clear what happens when the objects are persisted to the database (what 
would the value of child1.parentId be?).
Am I missing some configuration that needs to be done to solve this issue? 
Or is this to accommodate some edge case?

I have looked at single_parent, but this forbids changing the parent, 
while the intuitive behavior would be (IMHO) exactly the same as changing the 
parent attribute, to guarantee the symmetry between a relationship and its 
backref.

Thanks in advance for your kind reply.

Edouard





[sqlalchemy] Getting the FK(s) associated with a Column?

2013-07-25 Thread Gerald Thibault
If I define a Column like so (declarative syntax):

location_id = Column(Integer, ForeignKey(Location.id))

How do I access that foreign key when I have the location_id attribute? I 
looked through the source a bit, and it kind of looks like it never 
actually creates a relationship pointing from the column to the fk, only 
the other way around (via 'parent'). Is this even possible?





Re: [sqlalchemy] Getting the FK(s) associated with a Column?

2013-07-25 Thread Michael Bayer

On Jul 25, 2013, at 3:00 PM, Gerald Thibault dieselmach...@gmail.com wrote:

 If I define a Column like so (declarative syntax):
 
 location_id = Column(Integer, ForeignKey(Location.id))
 
 How do I access that foreign key when I have the location_id attribute? I 
 looked through the source a bit, and it kind of looks like it never actually 
 creates a relationship pointing from the column to the fk, only the other way 
 around (via 'parent'). Is this even possible?


Column and ForeignKey are commonly prerequisites for the usage of 
relationship() but they do not imply the production of a relationship() object, 
unless you are using some specialized extensions which perform this task 
outside of SQLAlchemy.At the Core, the foreign key references associated 
with a Column object are in the foreign_keys collection:  
location_id.foreign_keys.
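A short sketch of that Core-level access; the model names are invented to mirror the question (the string form of ForeignKey is used here, which is equivalent):

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Location(Base):
    __tablename__ = "location"
    id = Column(Integer, primary_key=True)

class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    location_id = Column(Integer, ForeignKey("location.id"))

# foreign_keys is a set of ForeignKey objects attached to the Column
fk, = Address.__table__.c.location_id.foreign_keys
print(fk.column)        # the referenced column: location.id
print(fk.column.table)  # the Table it belongs to
```

From fk.column one can walk back to the referenced table and its mapped class if needed.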





Re: [sqlalchemy] Appending a child which already has a parent (OneToMany relationship)

2013-07-25 Thread Michael Bayer

On Jul 25, 2013, at 2:59 PM, Edouard Sioufi edsio...@gmail.com wrote:

 
 If the parent attribute of a child is changed from parent1 to parent2, not 
 only is the child added to the children of the new parent parent2, it is also 
 removed from the collection of the old parent parent1. Example:
 
 child.parent = parent1
 print parent1
 print parent2
 print child
 print '='
 child.parent = parent2
 print parent1
 print parent2
 print child
 
  yields the following output:
 
  Parent('Parent 1','[Child('Child 1','Parent 1')]')
  Parent('Parent 2','[]')
  Child('Child 1','Parent 1')
  =
  Parent('Parent 1','[]')
  Parent('Parent 2','[Child('Child 1','Parent 2')]')
  Child('Child 1','Parent 2')

in this case, associating child.parent = parent2 causes an event to occur which 
receives parent2 as the new value of child.parent and also is aware of 
parent1 being the old value.  This event is processed by the backref setup, 
such that parent2.children now receives child in its collection, and such 
that child is removed from parent1.children as a result of parent1 being 
de-associated from child.parent.These two events fire off as siblings to 
each other.


 
 However, if the child is appended to parent2.children when it has parent1 as its 
 parent, the child is added to the children collection of parent2 but not 
 removed from the children collection of parent1. Example:
 
 child.parent = parent1
 print parent1
 print parent2
 print child
 print '='
 parent2.children.append(child)
 print parent1
 print parent2
 print child
 
 The following output shows the issue:
 Parent('Parent 1','[Child('Child 1','Parent 1')]')
 Parent('Parent 2','[]')
 Child('Child 1','Parent 1')
 =
 Parent('Parent 1','[Child('Child 1','Parent 2')]')
 Parent('Parent 2','[Child('Child 1','Parent 2')]')
 Child('Child 1','Parent 2')

in this case, appending child to parent2.children causes an event to occur 
which receives child as a new member of parent2.children.   This event is 
processed by the backref setup such that child.parent is now assigned to 
parent2, replacing parent1.The event does *not* continue to propagate 
along additional backref lines - in the case of the positive reference 
(child.parent becoming parent2), this would produce an endless loop, if 
parent2.children fired off child.parent which then fired off parent2.children 
which continued indefinitely.   In the case of the negative reference 
(child.parent no longer being parent1), the event propagation also stops once 
we're already in a backref event; while it may be possible to test that the old 
collection is present in memory, the mechanics here are not quite that 
straightforward as sometimes the old collection isn't present and such and 
there are still lots of endless loop cases that come up (I just tried in tests 
and many fail if I remove this check).   The recursion checks in this area are 
fairly aggressive as there are a lot of edge cases that can occur, sometimes 
due to user error, where we can't efficiently determine at what point we need 
to stop backreffing.   I have a little bit of time at the moment so I can look 
to see if in the current codebase there might be some refinement that can be 
easily made.

This type of situation is typically not a long lasting issue because once the 
Session is committed, collections are expired and are refreshed.

 
 I am quite new to SQLAlchemy and I don't know if this is expected behavior.

it currently is.


 I nevertheless dare to say that I find the states of the objects in memory to 
 be incoherent as child1 no longer has parent1 as its parent while parent1 
 still thinks that it has child1 as a child.

the backref mechanics are a convenience feature that synchronize the state of 
collections in memory to a limited degree, without needing to access the 
database again in order to get the fully correct result.

 Moreover, it is not clear what happens when the objects are persisted to the 
 database (what would the value of child1.parentId be?)

Persistence works based on change events, so the attribute and collection 
assignments above result in the correct UPDATE statements.  The actual state of 
the collections/attributes is not that important compared to the reception 
of the change events.
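To make the two points concrete, here is a minimal runnable sketch.  The Parent/Child model is a hypothetical stand-in for the one under discussion, built against a recent SQLAlchemy and an in-memory SQLite database:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import relationship, sessionmaker
try:
    from sqlalchemy.orm import declarative_base   # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    children = relationship("Child", backref="parent")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    parent_id = Column(Integer, ForeignKey("parent.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

p1 = Parent(name="Parent 1")
p2 = Parent(name="Parent 2")
c1 = Child(name="Child 1", parent=p1)
session.add_all([p1, p2, c1])
session.commit()

p2.children.append(c1)   # backref assigns c1.parent = p2 in memory
session.commit()         # change event emits UPDATE child SET parent_id = ...

print(c1.parent_id == p2.id)   # the row now points at Parent 2
print(c1 in p1.children)       # collections refreshed after commit
```

After the commit, the expired collections are reloaded from the database, so both objects agree again on who owns the child.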


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] sorry, too many clients already

2013-07-25 Thread kris

We are running a replicated (multiple machines behind nginx) TurboGears 
2.1.5 app with SA 0.7.8 and Postgres 8.4.

After a lot of recent usage the system ground to a halt and we are receiving 
(OperationalError) FATAL: sorry, too many clients already

whenever we try to run any DBSession.query.

Any ideas what could cause this?





Re: [sqlalchemy] sorry, too many clients already

2013-07-25 Thread Claudio Freire
On Thu, Jul 25, 2013 at 7:32 PM, kris kkvilek...@gmail.com wrote:

 We are running a replicated (multiple machines behind nginx) TurboGears
 2.1.5 app with SA 0.7.8 and Postgres 8.4

 After a lot of recent usage the system ground to a halt and we are receiving
 (OperationalError) FATAL: sorry, too many clients already

 whenever we try to run any DBSession.query.

 Any ideas what could cause this?

As the message says, you have too many open connections.

When you have multiple machines, you must either provision your
postgres to be able to handle a full connection pool (max_overflow
preferably, pool_size at least) for each nginx worker on each
machine, or have SQLA connect to a pgbouncer that will do the shared
pooling for you.
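As an illustration of the per-worker arithmetic, here is a small sketch that drives SQLAlchemy's QueuePool directly over sqlite3 (a stand-in for the real DBAPI; the numbers are purely for demonstration).  Each worker process can hold up to pool_size + max_overflow real connections, so the server's max_connections must cover that figure summed over every worker on every machine:

```python
import sqlite3
from sqlalchemy.pool import QueuePool
from sqlalchemy.exc import TimeoutError as PoolTimeoutError

# A tiny pool: 2 persistent slots plus 1 overflow connection.
pool = QueuePool(lambda: sqlite3.connect(":memory:"),
                 pool_size=2, max_overflow=1, timeout=0.1)

held = [pool.connect() for _ in range(3)]   # 2 pooled + 1 overflow checked out
try:
    pool.connect()                          # 4th checkout: nothing left
except PoolTimeoutError:
    print("pool exhausted")                 # the pool-level analog of
                                            # "too many clients already"
```

With many workers each holding a pool like this, the database side fills up long before any single process notices a problem.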





Re: [sqlalchemy] sorry, too many clients already

2013-07-25 Thread kris
Thanks for the quick reply.

On Thursday, July 25, 2013 3:39:17 PM UTC-7, Klauss wrote:

 As the message says, you have too many open connections. 

 When you have multiple machines, you must either provision your 
 postgres to be able to handle a full connection pool (max_overflow 
 preferably, pool_size at least) for each nginx worker on each 
 machine, or have SQLA connect to a pgbouncer that will do the shared 
 pooling for you. 



I should have mentioned that all the connections are idle.

My postgres.conf has the parameter max_connections = 100.

I was led to believe that SA would manage a pool of connections 
with some good defaults 
(http://docs.sqlalchemy.org/en/rel_0_7/core/pooling.html?highlight=connection%20pool),
but I couldn't find the default values.  Just so I can relate the two figures 
(SA pool parameters and PG max_connections).

Also, is there a good way to get it to close some idle connections?


Thx,
Kris

 





Re: [sqlalchemy] sorry, too many clients already

2013-07-25 Thread Claudio Freire
On Thu, Jul 25, 2013 at 7:58 PM, kris kkvilek...@gmail.com wrote:

 My postgres.conf has a parameter  max_connections = 100

That's not only the default, but it's also not really recommended to
push it much higher, so only do so if you really need a big pool on
each machine, and if you're sure that pool will be mostly idle all of
the time.

 I was led to believe the SA would manage pool of connections
 to some good default
 (http://docs.sqlalchemy.org/en/rel_0_7/core/pooling.html?highlight=connection%20pool),
 but couldn't find the default values?  Just so I can relate the two figures
 (SA Pool parameters and PG max_connections)

pool_size, see create_engine[0], the default is 5

 Also Is there a good way to get it too close some idle connections?

Yes, decrease pool_size. Just how many workers do you have? With the
defaults, you'd need a little under 20 workers to start getting those
errors.

With that many workers, your only real solution is to install a
pgbouncer to manage those connections globally. SQLA cannot manage a
pool of connections across workers, only a shared pool (pgbouncer) can
do that. SQLA's pool is local to each process (worker).

[0] 
http://docs.sqlalchemy.org/en/rel_0_7/core/engines.html#sqlalchemy.create_engine
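To relate the two figures, a back-of-the-envelope sketch, assuming SQLAlchemy's defaults of pool_size=5 and max_overflow=10 per worker process:

```python
def worst_case_connections(workers, pool_size=5, max_overflow=10):
    """Maximum simultaneous DB connections that `workers` processes can
    open, each running its own SQLAlchemy QueuePool."""
    return workers * (pool_size + max_overflow)

# Steady state: each worker keeps up to pool_size connections checked in,
# so a default max_connections = 100 is saturated by 100 // 5 = 20 workers.
print(20 * 5)                       # 100: idle steady state, already at the cap
print(worst_case_connections(20))   # 300: worst case once overflow kicks in
```

This is why a shared pooler such as pgbouncer becomes necessary as the worker count grows: the per-process pools cannot coordinate with each other.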





Re: [sqlalchemy] sorry, too many clients already

2013-07-25 Thread kris


On Thursday, July 25, 2013 4:12:50 PM UTC-7, Klauss wrote:

 On Thu, Jul 25, 2013 at 7:58 PM, kris kkvil...@gmail.com javascript: 
 wrote: 
  
  My postgres.conf has a parameter  max_connections = 100 

 That's not only the default, but it's also not really recommended to 
 push it much higher, so only do so if you really need a big pool on 
 each machine, and if you're sure that pool will be mostly idle all of 
 the time. 


Hmm... I just bumped it to 200 and modified shared_buffers to be 32MB.

 

  I was led to believe the SA would manage pool of connections 
  to some good default 
  (
 http://docs.sqlalchemy.org/en/rel_0_7/core/pooling.html?highlight=connection%20pool),
  

  but couldn't find the default values?  Just so I can relate the two 
 figures 
  (SA Pool parameters and PG max_connections) 

 pool_size, see create_engine[0], the default is 5 

 Thanks..
 

  Also Is there a good way to get it too close some idle connections? 

 Yes, decrease pool_size. Just how many workers do you have? With the 
 defaults, you'd need a little under 20 workers to start getting those 
 errors. 


Here's the weird bit: I have 2 remote machines and local access.  After some 
light usage I already see 9 idle connections from the local machine and a 
few from the other machines.

It may actually be that we are using uWSGI, each instance of which has 8 
workers configured, so really I have 24 workers 
already.

 

 With that many workers, your only real solution is to install a 
 pgbouncer to manage those connections globally. SQLA cannot manage a 
 pool of connections across workers, only a shared pool (pgbouncer) can 
 do that. SQLA's pool is local to each process (worker). 



Thanks for the help.  I will check out pgbouncer at this point.

 

 [0] 
 http://docs.sqlalchemy.org/en/rel_0_7/core/engines.html#sqlalchemy.create_engine
  






Re: [sqlalchemy] Appending a child which already has a parent (OneToMany relationship)

2013-07-25 Thread Michael Bayer

On Jul 25, 2013, at 5:21 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 
 collection isn't present and such and there are still lots of endless loop 
 cases that come up (I just tried in tests and many fail if I remove this 
 check).   The recursion checks in this area are fairly aggressive as there 
 are a lot of edge cases that can occur, sometimes due to user error, where we 
 can't efficiently determine at what point we need to stop backreffing.   I 
 have a little bit of time at the moment so I can look to see if in the 
 current codebase there might be some refinement that can be easily made.

this issue is now resolved, and SQLAlchemy 0.9 will feature an enhanced system 
within the attribute event system which it uses to detect at what point it 
needs to stop propagating a backref event.  The backref handlers themselves 
have taken over the job of controlling at what point the event propagation 
should stop.  A full description of the change and the ramifications it might 
have for custom event handling situations (which is not your case here) is up 
in the migration guide at 
http://docs.sqlalchemy.org/en/latest/changelog/migration_09.html#backref-handlers-can-now-propagate-more-than-one-level-deep
 .
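For anyone using custom attribute event handlers, here is a hedged sketch of the kind of handler the migration note concerns (hypothetical model; no database or Session is needed, since attribute instrumentation works purely in memory):

```python
from sqlalchemy import Column, ForeignKey, Integer, event
from sqlalchemy.orm import relationship
try:
    from sqlalchemy.orm import declarative_base   # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    children = relationship("Child", backref="parent")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))

appended = []

@event.listens_for(Parent.children, "append")
def on_append(target, value, initiator):
    # `initiator` identifies where the chain of attribute events began;
    # the 0.9 backref handlers consult it to decide when propagation
    # must stop, rather than relying on blanket recursion checks.
    appended.append(value)

p, c = Parent(), Child()
p.children.append(c)   # fires the handler; the backref also sets c.parent
print(len(appended), c.parent is p)
```

The handler above only observes events; handlers that mutate related attributes are the ones affected by the propagation change described in the migration guide.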

