Re: [sqlalchemy] Re: Working with func in dates, No response, no error message

2023-04-05 Thread James Paul Chibole
Sorry, it is a typo; it should actually be .filter(func.strftime('%m',
Event.event_date == datetime.today().strftime('%m')))

Let me go through your latest response and will get back to you. Thank you
for the prompt response.
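As a sanity check, the month-matching comparison under discussion can be exercised directly in SQLite using only the Python standard library (a sketch; the `event` table and its columns here are illustrative stand-ins for the real model):

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (name TEXT, event_date TEXT)")
conn.executemany(
    "INSERT INTO event VALUES (?, ?)",
    [
        ("this-month", datetime.today().strftime("%Y-%m-%d")),
        ("january", "2000-01-15"),
    ],
)

# SQL equivalent of comparing strftime('%m', event_date) against
# today's month, as in the working script quoted below
current_month = datetime.today().strftime("%m")
rows = conn.execute(
    "SELECT name FROM event WHERE strftime('%m', event_date) = ?",
    (current_month,),
).fetchall()
print(rows)
```

The row dated today always matches; the January row matches only when run in January.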

On Wed, Apr 5, 2023 at 10:16 AM Lele Gaifax  wrote:

> Nancy Andeyo  writes:
>
> > However, the part with problems is this one: .filter(func.strftime('%m',
> > Event.event_date = datetime.today().strftime('%m'))) where the aim to
> > filter out events that will take place in the current month. This is the
> > section that I posted, yet I needed to post the entire query for what I
> am
> > intending to achieve is understood.
>
> I can't say if the typos are due to you rewriting the code in these
> messages, or if instead they are effectively present in the real code,
> but the above is also incorrect:
>
>   .filter(func.strftime('%m', Event.event_date =
> datetime.today().strftime('%m')))
>
> This should raise a syntax error when evaluated by Python...
>
> For comparison, the following complete script works for me:
>
>   from datetime import date
>   from pprint import pprint
>
>   from sqlalchemy import create_engine
>   from sqlalchemy import func
>   from sqlalchemy.orm import Session
>
>   from sol.models import Tourney
>
>
>   engine = create_engine('sqlite:///development.db')
>   session = Session(engine)
>
>   q = session.query(Tourney)
>   q = q.filter(func.strftime('%m', Tourney.date) ==
> date.today().strftime('%m'))
>
>   pprint([(t.description, t.date) for t in q.limit(3).all()])
>
> and emits
>
>   [('2° Torneo', datetime.date(2001, 4, 1)),
>('7° Torneo', datetime.date(2004, 4, 24)),
>('7° Torneo', datetime.date(2005, 4, 30))]
>
> Hope this helps,
> ciao, lele.
> --
> nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
> real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
> l...@metapensiero.it  | -- Fortunato Depero, 1929.
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example.  See  http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to a topic in the
> Google Groups "sqlalchemy" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/sqlalchemy/4oPfuzAjw48/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> sqlalchemy+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/sqlalchemy/87pm8ier3i.fsf%40metapensiero.it
> .
>


-- 
*James Paul Chibole*
   -In the name of God-



Re: [sqlalchemy] Working with func in dates, No response, no error message

2023-03-30 Thread James Paul Chibole

Thank you Philip for your suggestion.
On Thursday, March 30, 2023 at 9:38:08 PM UTC+3 Philip Semanchuk wrote:

>
>
> > On Mar 30, 2023, at 2:32 PM, James Paul Chibole  
> wrote:
> > 
> > Hi everyone, I am trying to retrieve deceased persons who died in the 
> current month but the output gives no result. Here is my code with query 
> done in Python Flask:
> > from datetime import datetime
> > from sqlalchemy import func
> >
> > @app.route('/user/<username>')
> > @login_required
> > def user(username):
> >     current_month = datetime.today().date().strftime("%B")
> >     monthly_anniversaries = current_user.followed_deaths().filter(
> >         Deceased.burial_cremation_dat e
> >         f_death == current_month
> >     ).order_by(Deceased.timestamp.desc())
> >     return render_template("user.html", monthly_anniversaries=monthly_anniversaries)
>
>
> Flask is an extra layer of complication here that’s getting in the way of 
> what you’re trying to debug. That’s not a knock on Flask (I use it too), 
> it’s just not germane to a SQLAlchemy problem. My suggestion is that you 
> try putting a breakpoint in your flask app so you can play with the query 
> inside the debugger. Ensure that current_month is what you think it is, 
> hardcode query params instead of passing variables to see if that changes 
> results, remove some of the filter clauses to see if the results change the 
> way you expect, etc. And of course having a look at the SQL that’s being 
> sent to the server will give you some clues too, although that can be hard 
> to get to depending on your environment.
>
>
> Hope this helps
> Philip
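Philip's advice to verify `current_month` in isolation is easy to follow with plain Python. One detail worth checking: `"%B"` yields a full month name, while SQLite's `strftime('%m', column)` yields a zero-padded month number, and the two representations never compare equal:

```python
from datetime import datetime

today = datetime.today()
month_name = today.strftime("%B")    # e.g. "April" -- what the Flask code computes
month_number = today.strftime("%m")  # e.g. "04" -- what strftime('%m', col) yields in SQLite
print(month_name, month_number)
```

So a filter that compares a `"%B"` string against a date column (or against `"%m"` output) will silently match nothing, which is consistent with the "no response, no error message" symptom.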



[sqlalchemy] Working with func in dates, No response, no error message

2023-03-30 Thread James Paul Chibole


Hi everyone, I am trying to retrieve deceased persons who died in the 
current month but the output gives no result. Here is my code with query 
done in Python Flask:
from datetime import datetime
from sqlalchemy import func

@app.route('/user/<username>')
@login_required
def user(username):
    current_month = datetime.today().date().strftime("%B")
    monthly_anniversaries = current_user.followed_deaths().filter(
        Deceased.burial_cremation_dat e
        f_death == current_month
    ).order_by(Deceased.timestamp.desc())
    return render_template("user.html", monthly_anniversaries=monthly_anniversaries)


Re: [sqlalchemy] Unexpected TypeError when using `choices` as parameter name to TypeDecorator

2021-05-06 Thread Steven James
Thank you! That makes me feel quite a bit better.

On Thursday, 6 May 2021 at 11:59:14 UTC-4 Mike Bayer wrote:

> the caching routine for datatypes is matching up the names of the 
> parameters to the internal state of the object.   any name will reproduce 
> as long as you name the parameters and internal attributes the same.
>
> On Thu, May 6, 2021, at 11:52 AM, Steven James wrote:
>
> I originally came to that conclusion, and I agree that replacing it with a 
> tuple does fix it, but I still can't explain why using a different 
> parameter name also fixes it.
>
> On Thursday, 6 May 2021 at 11:46:53 UTC-4 Mike Bayer wrote:
>
>
> this is a regression in SQLAlchemy in that the TypeDecorator is being 
> applied to caching without giving the end user any clue or API to control 
> this process.
>
> the name "choices" is not significant, it's that the parameter as given is 
> trying to be part of a cache key, so turn this into a tuple for now to 
> resolve:
>
>
> class AltType(TypeDecorator):
>     impl = Unicode(255)
>
>     def __init__(self, choices):
>         self.choices = tuple(choices)
>         super(AltType, self).__init__()
>
>
> https://github.com/sqlalchemy/sqlalchemy/issues/6436
>
>
>
> On Thu, May 6, 2021, at 11:08 AM, Steven James wrote:
>
> This one is baffling me. The following example throws an exception about 
> an unhashable type, related (I think) to the new query caching in 1.4. 
> However, if I change the parameter name to AltType.__init__ to ANYTHING 
> other than `choices`, it works fine. I can't find any reference to the name 
> `choices` in the sqlalchemy codebase.
>
> So why can't I use `choices` as a parameter name here? The answer might be 
> related to the AnnotatedColumn in the generated expression but that's about 
> as far as I've gotten.
>
> ```
> # sqlalchemy 1.4.x
>
> from sqlalchemy import Column, Integer, create_engine, TypeDecorator, Unicode
> from sqlalchemy.orm import sessionmaker, as_declarative, declared_attr
>
>
> class AltType(TypeDecorator):
>     impl = Unicode(255)
>
>     def __init__(self, choices):
>         self.choices = choices
>         super(AltType, self).__init__()
>
>
> @as_declarative()
> class Base(object):
>     @declared_attr
>     def __tablename__(cls):
>         return cls.__name__.lower()
>
>     pk = Column(Integer, primary_key=True)
>
>
> class MyClass(Base):
>     d = Column(AltType(['a', 'list', 'here']))
>
>
> if __name__ == '__main__':
>     e = create_engine('sqlite://', echo=True)
>     conn = e.connect()
>
>     Base.metadata.create_all(e)
>
>     s = sessionmaker(e)()
>     q = s.query(MyClass).filter(MyClass.d == 'search_str')
>
>     result = q.first()  # < error here
>     print(result)
> ```
>
> result:
> ```
> Traceback (most recent call last):
>   File "...\scratch_78.py", line 36, in 
> results = q.first()
>   File "...\\lib\site-packages\sqlalchemy\orm\query.py", line 2750, in 
> first
> return self.limit(1)._iter().first()
>   File "...\\lib\site-packages\sqlalchemy\orm\query.py", line 2834, in 
> _iter
> result = self.session.execute(
>   File "...\\lib\site-packages\sqlalchemy\orm\session.py", line 1675, in 
> execute
> result = conn._execute_20(statement, params or {}, execution_options)
>   File "...\\lib\site-packages\sqlalchemy\engine\base.py", line 1521, in 
> _execute_20
> return meth(self, args_10style, kwargs_10style, execution_options)
>   File "...\\lib\site-packages\sqlalchemy\sql\elements.py", line 313, in 
> _execute_on_connection
> return connection._execute_clauseelement(
>   File "...\\lib\site-packages\sqlalchemy\engine\base.py", line 1382, in 
> _execute_clauseelement
> compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
>   File "...\\lib\site-packages\sqlalchemy\sql\elements.py", line 531, in 
> _compile_w_cache
> compiled_sql = compiled_cache.get(key)
>   File "...\\lib\site-packages\sqlalchemy\util\_collections.py", line 918, 
> in get
> item = dict.get(self, key, default)
> TypeError: unhashable type: 'list'
> ```
>
> -Steven James
>
>

Re: [sqlalchemy] Unexpected TypeError when using `choices` as parameter name to TypeDecorator

2021-05-06 Thread Steven James
I originally came to that conclusion, and I agree that replacing it with a 
tuple does fix it, but I still can't explain why using a different 
parameter name also fixes it.
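Mike's diagnosis below — the constructor argument becomes part of a statement-cache key — can be reproduced with a plain dict, independent of SQLAlchemy (a minimal sketch of the failure mode):

```python
# A dict (like SQLAlchemy's compiled-statement cache) requires hashable keys.
cache = {}

tuple_key = ("AltType", ("a", "list", "here"))  # tuple of hashables: fine
cache[tuple_key] = "compiled SQL"
print(cache.get(tuple_key))

list_key = ("AltType", ["a", "list", "here"])   # embeds a list: unhashable
try:
    cache.get(list_key)
except TypeError as exc:
    print(exc)  # prints "unhashable type: 'list'"
```

The parameter's name is irrelevant to hashing itself; what matters is that the value stored under it is drawn into the cache key, which is why converting it to a tuple resolves the error.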

On Thursday, 6 May 2021 at 11:46:53 UTC-4 Mike Bayer wrote:

> this is a regression in SQLAlchemy in that the TypeDecorator is being 
> applied to caching without giving the end user any clue or API to control 
> this process.
>
> the name "choices" is not significant, it's that the parameter as given is 
> trying to be part of a cache key, so turn this into a tuple for now to 
> resolve:
>
>
> class AltType(TypeDecorator):
>     impl = Unicode(255)
>
>     def __init__(self, choices):
>         self.choices = tuple(choices)
>         super(AltType, self).__init__()
>
>
> https://github.com/sqlalchemy/sqlalchemy/issues/6436
>
>
>
> On Thu, May 6, 2021, at 11:08 AM, Steven James wrote:
>
> This one is baffling me. The following example throws an exception about 
> an unhashable type, related (I think) to the new query caching in 1.4. 
> However, if I change the parameter name to AltType.__init__ to ANYTHING 
> other than `choices`, it works fine. I can't find any reference to the name 
> `choices` in the sqlalchemy codebase.
>
> So why can't I use `choices` as a parameter name here? The answer might be 
> related to the AnnotatedColumn in the generated expression but that's about 
> as far as I've gotten.
>
> ```
> # sqlalchemy 1.4.x
>
> from sqlalchemy import Column, Integer, create_engine, TypeDecorator, Unicode
> from sqlalchemy.orm import sessionmaker, as_declarative, declared_attr
>
>
> class AltType(TypeDecorator):
>     impl = Unicode(255)
>
>     def __init__(self, choices):
>         self.choices = choices
>         super(AltType, self).__init__()
>
>
> @as_declarative()
> class Base(object):
>     @declared_attr
>     def __tablename__(cls):
>         return cls.__name__.lower()
>
>     pk = Column(Integer, primary_key=True)
>
>
> class MyClass(Base):
>     d = Column(AltType(['a', 'list', 'here']))
>
>
> if __name__ == '__main__':
>     e = create_engine('sqlite://', echo=True)
>     conn = e.connect()
>
>     Base.metadata.create_all(e)
>
>     s = sessionmaker(e)()
>     q = s.query(MyClass).filter(MyClass.d == 'search_str')
>
>     result = q.first()  # < error here
>     print(result)
> ```
>
> result:
> ```
> Traceback (most recent call last):
>   File "...\scratch_78.py", line 36, in 
> results = q.first()
>   File "...\\lib\site-packages\sqlalchemy\orm\query.py", line 2750, in 
> first
> return self.limit(1)._iter().first()
>   File "...\\lib\site-packages\sqlalchemy\orm\query.py", line 2834, in 
> _iter
> result = self.session.execute(
>   File "...\\lib\site-packages\sqlalchemy\orm\session.py", line 1675, in 
> execute
> result = conn._execute_20(statement, params or {}, execution_options)
>   File "...\\lib\site-packages\sqlalchemy\engine\base.py", line 1521, in 
> _execute_20
> return meth(self, args_10style, kwargs_10style, execution_options)
>   File "...\\lib\site-packages\sqlalchemy\sql\elements.py", line 313, in 
> _execute_on_connection
> return connection._execute_clauseelement(
>   File "...\\lib\site-packages\sqlalchemy\engine\base.py", line 1382, in 
> _execute_clauseelement
> compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
>   File "...\\lib\site-packages\sqlalchemy\sql\elements.py", line 531, in 
> _compile_w_cache
> compiled_sql = compiled_cache.get(key)
>   File "...\\lib\site-packages\sqlalchemy\util\_collections.py", line 918, 
> in get
> item = dict.get(self, key, default)
> TypeError: unhashable type: 'list'
> ```
>
> -Steven James
>
>

[sqlalchemy] Unexpected TypeError when using `choices` as parameter name to TypeDecorator

2021-05-06 Thread Steven James
This one is baffling me. The following example throws an exception about an 
unhashable type, related (I think) to the new query caching in 1.4. 
However, if I change the parameter name to AltType.__init__ to ANYTHING 
other than `choices`, it works fine. I can't find any reference to the name 
`choices` in the sqlalchemy codebase.

So why can't I use `choices` as a parameter name here? The answer might be 
related to the AnnotatedColumn in the generated expression but that's about 
as far as I've gotten.

```
# sqlalchemy 1.4.x

from sqlalchemy import Column, Integer, create_engine, TypeDecorator, Unicode
from sqlalchemy.orm import sessionmaker, as_declarative, declared_attr


class AltType(TypeDecorator):
    impl = Unicode(255)

    def __init__(self, choices):
        self.choices = choices
        super(AltType, self).__init__()


@as_declarative()
class Base(object):
    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()

    pk = Column(Integer, primary_key=True)


class MyClass(Base):
    d = Column(AltType(['a', 'list', 'here']))


if __name__ == '__main__':
    e = create_engine('sqlite://', echo=True)
    conn = e.connect()

    Base.metadata.create_all(e)

    s = sessionmaker(e)()
    q = s.query(MyClass).filter(MyClass.d == 'search_str')

    result = q.first()  # < error here
    print(result)
```

result:
```
Traceback (most recent call last):
  File "...\scratch_78.py", line 36, in <module>
    results = q.first()
  File "...\\lib\site-packages\sqlalchemy\orm\query.py", line 2750, in first
    return self.limit(1)._iter().first()
  File "...\\lib\site-packages\sqlalchemy\orm\query.py", line 2834, in _iter
    result = self.session.execute(
  File "...\\lib\site-packages\sqlalchemy\orm\session.py", line 1675, in execute
    result = conn._execute_20(statement, params or {}, execution_options)
  File "...\\lib\site-packages\sqlalchemy\engine\base.py", line 1521, in _execute_20
    return meth(self, args_10style, kwargs_10style, execution_options)
  File "...\\lib\site-packages\sqlalchemy\sql\elements.py", line 313, in _execute_on_connection
    return connection._execute_clauseelement(
  File "...\\lib\site-packages\sqlalchemy\engine\base.py", line 1382, in _execute_clauseelement
    compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
  File "...\\lib\site-packages\sqlalchemy\sql\elements.py", line 531, in _compile_w_cache
    compiled_sql = compiled_cache.get(key)
  File "...\\lib\site-packages\sqlalchemy\util\_collections.py", line 918, in get
    item = dict.get(self, key, default)
TypeError: unhashable type: 'list'
```

-Steven James



Re: [sqlalchemy] Setting Foreign Key based on natural key

2020-11-20 Thread Steven James
Thank you for taking the time to look at this. I have shied away from using 
events in the past, but now that you mention it I can see how that could 
work for this case.

I had to add slightly to my previous solution. In some cases, the 
relationship is nullable and this solution was silently giving a NULL when 
I expected a Foreign Key constraint failure. 

@parent_key.setter
def parent_key(self, val):
    non_existent_id = -999
    self.parent_id = coalesce(
        select([Parent.id_]).select_from(Parent.__table__).where(Parent.key == val).as_scalar(),
        non_existent_id,
    )

(the .as_scalar() is needed to make coalesce() happy)
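The COALESCE-around-a-scalar-subquery trick can be checked directly in SQLite with the standard library (a sketch; the schema names are illustrative, and -999 plays the role of the sentinel that trips the foreign-key constraint):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (id_ INTEGER PRIMARY KEY, key TEXT)")
conn.execute("INSERT INTO parent (id_, key) VALUES (1, 'p1')")

# Existing key: the scalar subquery resolves to the parent's id.
found = conn.execute(
    "SELECT COALESCE((SELECT id_ FROM parent WHERE key = ?), -999)", ("p1",)
).fetchone()[0]

# Missing key: the subquery yields NULL, so COALESCE falls back to the
# sentinel instead of silently writing NULL into the foreign-key column.
missing = conn.execute(
    "SELECT COALESCE((SELECT id_ FROM parent WHERE key = ?), -999)", ("nope",)
).fetchone()[0]
print(found, missing)  # 1 -999
```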

On Thursday, 19 November 2020 at 14:36:12 UTC-5 Mike Bayer wrote:

>
>
> On Thu, Nov 19, 2020, at 2:03 PM, Steven James wrote:
>
> In general I have set up foreign keys using the following pattern:
>
> p = session.query(Parent).filter(Parent.natural_key == key).one()
> new_child = Child(parent=p)
>
> This makes a SELECT and an INSERT and is fine, but it gets a little 
> cumbersome when I want to create a new child object with many relationships 
> where I am not given the primary key. It basically boils down to requiring 
> custom constructor functions for each kind of object.
>
> What I would like is the following pattern:
>
> new_child = Child(parent_key=key)
>
> where parent_key can be a kind of hybrid_property() on the Child and it 
> would generate an `INSERT INTO child_table (parent_id) VALUES ((SELECT id 
> FROM parent_table WHERE key=?))`  I've gotten it working that way actually, 
> but I wanted to check to make sure that this pattern does not already exist 
> in some easier construct.
>
>
> I think that's a fine pattern to use, it all comes down to when you want 
> to be able to access the object.since you are setting up Child() with a 
> key before the Child has any connection to any session or transaction, that 
> rules out being able to know the "parent_id" right at that moment.You 
> could, if you wanted, use events to do that lookup at different times, such 
> as using the after_attach() event to look up the "parent" where you could 
> load the full Parent object and associate it, that way child.parent would 
> be ready before you ever invoked the INSERT.  Or you could use events like 
> before_insert(), before_update() to set up that parent_id too.  but setting 
> it to the SELECT expression is more succinct and pretty clever too.
>
> there's basically  a lot of event hooks (too many at this point) that can 
> be used for this, but you've found probably the one way to do it without 
> using a hook, so should be good.
>
>
>
> Working example:
>
> from sqlalchemy import Column, Integer, String, ForeignKey, create_engine, select
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy.ext.hybrid import hybrid_property
> from sqlalchemy.orm import Session, relationship
>
> Base = declarative_base()
>
> class Parent(Base):
>     __tablename__ = 'parent'
>     id_ = Column(Integer, primary_key=True)
>     key = Column(String(4))
>     data = Column(String(64))
>
>     def __repr__(self) -> str:
>         return 'Parent(id={id_!r}, key={key!r})'.format(**self.__dict__)
>
>
> class Child(Base):
>     __tablename__ = 'child'
>     id_ = Column(Integer, primary_key=True)
>     parent_id = Column(ForeignKey('parent.id_'))
>     data = Column(String(64))
>
>     parent = relationship(Parent, backref='children', lazy='joined')
>
>     @hybrid_property
>     def parent_key(self):
>         return self.parent.key
>
>     @parent_key.expression
>     def parent_key(self):
>         return Parent.key
>
>     @parent_key.setter
>     def parent_key(self, val):
>         self.parent_id = select([Parent.id_]).select_from(Parent.__table__).where(Parent.key == val)
>
>
> if __name__ == '__main__':
>     e = create_engine('sqlite:///', echo=True)
>     Base.metadata.create_all(e)
>     s = Session(bind=e)
>
>     p1 = Parent(key='p1', data='parent1')
>     p2 = Parent(key='p2', data='parent2')
>
>     s.add_all([p1, p2])
>     s.commit()
>
>     # initialize child/parent using the "natural key"
>     ## INSERT INTO child (parent_id, data) VALUES ((SELECT parent.id_ FROM parent WHERE parent."key" = ?), ?)
>     c = Child(data='i am the child', parent_key='p1')
>     s.add(c)
>     s.commit()
>     print(c.parent)
>
>     # update relationship using the "natural key"
>     ## UPDATE child SET parent_id=(SELECT parent.id_ FROM parent WHERE parent."key" = ?) WHERE child.id_ = ?
>     c.parent_key = 'p2'
>     s.commit()
>     print(c.parent)

[sqlalchemy] Setting Foreign Key based on natural key

2020-11-19 Thread Steven James
In general I have set up foreign keys using the following pattern:

p = session.query(Parent).filter(Parent.natural_key == key).one()
new_child = Child(parent=p)

This makes a SELECT and an INSERT and is fine, but it gets a little 
cumbersome when I want to create a new child object with many relationships 
where I am not given the primary key. It basically boils down to requiring 
custom constructor functions for each kind of object.

What I would like is the following pattern:

new_child = Child(parent_key=key)

where parent_key can be a kind of hybrid_property() on the Child and it 
would generate an `INSERT INTO child_table (parent_id) VALUES ((SELECT id 
FROM parent_table WHERE key=?))`  I've gotten it working that way actually, 
but I wanted to check to make sure that this pattern does not already exist 
in some easier construct.
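The single statement the hybrid setter aims to produce can be demonstrated with plain SQLite (standard library only; the schema mirrors the Parent/Child example in this message):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id_ INTEGER PRIMARY KEY, key TEXT);
    CREATE TABLE child (id_ INTEGER PRIMARY KEY,
                        parent_id INTEGER REFERENCES parent(id_),
                        data TEXT);
    INSERT INTO parent (key) VALUES ('p1'), ('p2');
""")

# One round trip: the natural key is resolved by a scalar subquery
# inside the INSERT itself, so no prior SELECT is needed in Python.
conn.execute(
    "INSERT INTO child (parent_id, data) "
    "VALUES ((SELECT id_ FROM parent WHERE key = ?), ?)",
    ("p2", "i am the child"),
)
row = conn.execute("SELECT parent_id FROM child").fetchone()
print(row)  # (2,)
```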

Working example:

from sqlalchemy import Column, Integer, String, ForeignKey, create_engine, select
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import Session, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id_ = Column(Integer, primary_key=True)
    key = Column(String(4))
    data = Column(String(64))

    def __repr__(self) -> str:
        return 'Parent(id={id_!r}, key={key!r})'.format(**self.__dict__)


class Child(Base):
    __tablename__ = 'child'
    id_ = Column(Integer, primary_key=True)
    parent_id = Column(ForeignKey('parent.id_'))
    data = Column(String(64))

    parent = relationship(Parent, backref='children', lazy='joined')

    @hybrid_property
    def parent_key(self):
        return self.parent.key

    @parent_key.expression
    def parent_key(self):
        return Parent.key

    @parent_key.setter
    def parent_key(self, val):
        self.parent_id = select([Parent.id_]).select_from(Parent.__table__).where(Parent.key == val)


if __name__ == '__main__':
    e = create_engine('sqlite:///', echo=True)
    Base.metadata.create_all(e)
    s = Session(bind=e)

    p1 = Parent(key='p1', data='parent1')
    p2 = Parent(key='p2', data='parent2')

    s.add_all([p1, p2])
    s.commit()

    # initialize child/parent using the "natural key"
    ## INSERT INTO child (parent_id, data) VALUES ((SELECT parent.id_ FROM parent WHERE parent."key" = ?), ?)
    c = Child(data='i am the child', parent_key='p1')
    s.add(c)
    s.commit()
    print(c.parent)

    # update relationship using the "natural key"
    ## UPDATE child SET parent_id=(SELECT parent.id_ FROM parent WHERE parent."key" = ?) WHERE child.id_ = ?
    c.parent_key = 'p2'
    s.commit()
    print(c.parent)



[sqlalchemy] Re: calling stored procedure 1000 times in a row - system impact?

2020-10-21 Thread Steven James
Question 1:
I don't think there is a good fancy way of doing this built in to 
SQLAlchemy. With your constraint of using a stored proc for inserts (we 
have a similar constraint where I work), one way around the 
multiple-command overhead would be to do a bulk insert to a temporary 
"parameters" table, and then use another stored proc that will send those 
params to your original proc one set at a time, or will implement a more 
efficient bulk "insert from select." Error handling can become complex if 
there are instances where your parameters to the insert function could be 
rejected by the stored procedures.

Some engines will also let you send multiple commands in a "BEGIN ... END" 
sql block, but I have not tested this with Oracle and there is always a 
limit to how large a command like that can be.

Question 2: 
a / b : A single transaction will not span threads or connections in a 
pool. Your described approach would use a single thread with a single 
connection on the client side.
c: The only real overhead here is the 1000 separate db calls, especially if 
they go across a network. There is some overhead to each command sent to 
the db server. 1000 calls is a lot but might not be too many, so before you 
go through any optimization, do some performance testing to make sure you 
need it.
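The temporary parameters table idea can be sketched generically. Here SQLite stands in for Oracle, and a set-based "insert from select" replaces the 1000 per-row procedure calls (table and column names are illustrative):

```python
import sqlite3

rows = [(i, f"name{i}") for i in range(1000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TEMP TABLE params (a INTEGER, b TEXT)")
conn.execute("CREATE TABLE target (a INTEGER, b TEXT)")

# Stage all parameter sets in one batched round trip...
conn.executemany("INSERT INTO params VALUES (?, ?)", rows)

# ...then let the server move them in a single set-based statement,
# instead of issuing 1000 separate calls.
conn.execute("INSERT INTO target SELECT a, b FROM params")
count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # 1000
```

In the Oracle case the second step would be a stored procedure iterating over the staging table, which keeps the insert-via-procedure policy intact while eliminating the per-row network overhead.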

On Wednesday, 21 October 2020 at 13:11:06 UTC-4 thequie...@gmail.com wrote:

> In our application we are using SQLAlchemy to retrieve data from our 
> Oracle database. However, we have a policy that any INSERTS to the Oracle 
> database are done via stored procedures. We have approximately 1000 rows 
> (where each row  consists of 3-4 simple scalars such as numbers or strings) 
> that we need to insert.
>
> Question 1 - is there a sophisticated approach of creating an array of 
> these 1000 items and submitting this collection to Oracle in one shot? If 
> so, are there any references on this?
>
> Question 2 - If we adopt a naive approach of opening a transaction, making 
> 1000 calls to this stored proc that does simple inserts, and then commiting 
> to database at the end of the 1000 calls to the stored proc, does this 
> represent a bottleneck on database/computer resources of some sort, such as:
> a - will multiple threads will be created for this?
> b - presuming the default of 20 pooled connections to the database 
> , how many 
> connections from this pool will be used for the 1000 calls to the 
> inserting-stored-procedure before we commit?
> c - Would synchronous calls of this stored procedure pose a serious 
> performance/system-overhead issue? 
>
>
>
>
>



Re: [sqlalchemy] SQLAlchemy Access to Native Bulk Loaders

2020-04-19 Thread James Fennell
Hi Ben,

Have you checked out bulk operations?
https://docs.sqlalchemy.org/en/13/orm/persistence_techniques.html#bulk-operations

Bulk operations provide a DB-agnostic API for doing large numbers of
inserts and/or updates. The speed up you see, compared to session.add,
depends on your database and the database connection arguments. With
Postgres, I've needed to enable batch mode to see the fullest benefit:
https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#psycopg2-batch-mode

Personally I've seen huge performance improvements in my applications after
migrating big operations from session.add over to the bulk API.

Of course, as you say, you can do more low level SQL calls to get it even
faster, but then you run into a bunch of other issues.

James


On Sun, Apr 19, 2020, 12:46 PM Ben  wrote:

> I hope this is the right place for this... I need to load large files into
> my database. As I understand it, I can do this in one of two ways with
> SQLAlchemy Core: 1) Bring the data into Python and then write it out with
> the *add* method or, alternatively, 2) Use SQLAlchemy to issue a command
> to the DB to use its native bulk loader to read data from my file. I would
> expect this second approach to be faster, to require less code, and to
> avoid issues such as trying to put too much in memory at one time. However,
> it is DB-vendor-specific (i.e. I believe the command I send to a MySQL DB
> will differ from that I send to a Postgres DB).
>
> So,
>
>- Do I properly understand SQLAlchemy's capabilities here or am I
>missing something?
>- If I do have this right, is generic access to bulk loaders something
>that is on the upgrade / new development list?
>
> Sorry if this isn't the right place for this.
> Thanks!
> Ben
>



[sqlalchemy] How can I use bulk_save_objects when I know the primary key?

2020-03-12 Thread James Fennell
Hey all! I'm having a problem using the bulk_save_objects function.

*Background*: I'm working on an application that updates state in a
database from an external feed. The process is: I read the feed, convert
the data into SQLAlchemy objects which are detached from the session and
then call session.merge on each one. The merge can result in an insert or
an update, depending on if the entity has been seen before.

Given the number of objects I'm working with, this has turned out to be
very CPU intensive in production. I've profiled it, and most of the work is
in the merge operation. So, I want to use bulk_save_objects to speed things
up and in my case the tradeoffs (lack of cascading etc.) are 100% worth it.

*Problem*: I can't get bulk_save_objects to do updates; no matter what, it
tries to insert and this results in primary key constraint failures:

sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate
key value violates unique constraint "trip_pkey"


Having read the documentation, I'm pretty sure the problem relates to this
comment in the docs:

For each object, whether the object is sent as an INSERT or an UPDATE is
dependent on the same rules used by the Session
<https://docs.sqlalchemy.org/en/13/orm/session_api.html#sqlalchemy.orm.session.Session>
in
traditional operation; if the object has the InstanceState.key attribute
set, then the object is assumed to be “detached” and will result in an
UPDATE. Otherwise, an INSERT is used.


In all cases I'm not playing with the instance state. I'm essentially
manually stamping primary keys on detached objects, so I'm guessing
SQLAlchemy thinks it needs to insert? Any suggestions for how I can proceed?

Thanks!
James



Re: [sqlalchemy] Re: How to check for table's existance without a separate query

2019-12-04 Thread James Fennell
>  However I cannot catch for this error, I can only catch for
"sqlalchemy.exc.ProgrammingError".

Why is that?


James
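The `isinstance`-on-`err.orig` pattern discussed below can be exercised end to end with SQLite, where the DBAPI error for a missing table is `sqlite3.OperationalError` (so SQLAlchemy raises `exc.OperationalError` rather than `exc.ProgrammingError`). A self-contained sketch, with the engine URL invented:

```python
import sqlite3
import sqlalchemy as sa

# SQLite stand-in for the psycopg2 tuple; on Postgres this would be
# (psycopg2.errors.UndefinedTable,) caught under sa.exc.ProgrammingError.
ERRORS_UNDEFINED_TABLE = (sqlite3.OperationalError,)

engine = sa.create_engine("sqlite://")
with engine.connect() as conn:
    try:
        conn.execute(sa.text("SELECT * FROM no_such_table"))
    except sa.exc.OperationalError as err:
        # err.orig is the raw DBAPI exception behind SQLAlchemy's wrapper
        if isinstance(err.orig, ERRORS_UNDEFINED_TABLE):
            print("Table does not exist")
        else:
            raise
```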

On Wed, Dec 4, 2019 at 6:03 PM Jonathan Vanasco wrote:

>
> Personally, I would handle the check like this:
>
> ERRORS_UNDEFINED_TABLE = (psycopg2.errors.UndefinedTable, )
>
> try:
> res = conn.execute(stmt)
> except sa.exc.ProgrammingError as err:
> if isinstance(err.orig, ERRORS_UNDEFINED_TABLE):
> print('Table does not exist')
> raise
>
>
> This would allow you to update the `ERRORS_UNDEFINED_TABLE` tuple in a
> central place, and allow you to more easily catch this situation in other
> databases if needed.  I've used this technique in a few projects that are
> built for postgres, but support mysql and use sqlite for some tests.
>
>
> On Wednesday, December 4, 2019 at 5:52:15 PM UTC-5, Zsolt Ero wrote:
>>
>> Thanks. So is the following code correct for psycopg2 specific scenario?
>>
>> try:
>> res = conn.execute(stmt)
>> except sa.exc.ProgrammingError as err:
>> if isinstance(err.orig, psycopg2.errors.UndefinedTable):
>> print('Table does not exist')
>> else:
>> raise err
>>



[sqlalchemy] Re: Assert ResultProxy

2019-12-02 Thread Steven James
In case you want more details about the differences, you could also use 
difflib...

from difflib import SequenceMatcher

seq1 = [tuple(row.values()) for row in resultproxy1]
seq2 = [tuple(row.values()) for row in resultproxy2]

sm = SequenceMatcher(a=seq1, b=seq2, autojunk=False)
print(sm.get_opcodes())
print(f'similarity: {sm.ratio()}')

assert sm.ratio() == 1  # example to ensure results are equivalent
assert sm.ratio() == 1, sm.get_opcodes()  # pytest syntax to show the opcodes if the assertion fails

Steven James
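The same comparison can be run without a database by standing in hardcoded row tuples for the two fetched result sets (the data below is invented for illustration):

```python
from difflib import SequenceMatcher

# Hardcoded row tuples standing in for two fetched result sets.
seq1 = [(1, "alice"), (2, "bob"), (3, "carol")]
seq2 = [(1, "alice"), (2, "bobby"), (3, "carol")]

sm = SequenceMatcher(a=seq1, b=seq2, autojunk=False)
# get_opcodes() pinpoints which rows differ; ratio() == 1.0 means identical.
print(sm.get_opcodes())
print(f"similarity: {sm.ratio():.3f}")
```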

On Friday, 29 November 2019 09:13:23 UTC-5, sumau wrote:
>
> Hello
>
> I think my original question was too generic so rephrasing... Is there a 
> way in sqlalchemy to:
>
>1. Assert a ResultProxy against an expected ResultProxy (or list of 
>RowProxies against expected list of RowProxies) 
>2. Show any differences
>
> I wanted to check first before writing my own script :-)
>
> Regards
> S
>
> On Friday, 22 November 2019 10:50:54 UTC, sumau wrote:
>>
>> Hello
>>
>> I would like to assert the contents of tables in my PG schema i.e. make 
>> sure it contains the data I'm expecting
>>
>> I am aware of various options:
>>
>> 1) Compare the actual and expected tables using a sql query, orchestrated 
>> by sqlalchemy (i.e. create the actual and expected tables in DB, run the 
>> sql comparison script, return the output)
>> 2) Load the actual tables as tuples and compare them with expected tuples 
>> using something like assert_result
>>
>> https://github.com/sqlalchemy/sqlalchemy/blob/d933ddd503a1ca0a7c562c51c503139c541e707e/lib/sqlalchemy/testing/assertions.py#L465
>> 3) Load the actual tables as dataframes and compare them with expected 
>> dataframes using pandas assert_frame_equal
>>
>> https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.testing.assert_frame_equal.html
>>
>> Any recommendations / thoughts would be much appreciated, both as to the 
>> approach and the implementation :-)
>>
>



Re: [sqlalchemy] 'selectin' loading with composite keys on DB2 for i

2019-12-02 Thread Steven James
Wanted to note: this fix seems to be required to use composite keys with 
sqlite / selectin as well.

On Thursday, 27 June 2019 15:53:44 UTC-4, Steven James wrote:
>
> This Worked!
>
> @compiles(BinaryExpression, 'ibm_db_sa')
> def _comp_binary(element, compiler, **kwargs):
> text = compiler.visit_binary(element, **kwargs)
> if element.operator is operators.in_op:
> text = re.sub(r"\) IN \(", ") IN (VALUES ", text)
> return text
>
>
> Had to make a couple of changes 1) the dialect name is 'ibm_db_sa' and 
> 2) (strangely!) DB2 gave me an error that it explicitly disallows parameter 
> markers in VALUES (but only in the single-column-key case). My other tests 
> seem to indicate that this was a misleading error message but I'm not going 
> to touch it again now that it is working. 
>
> Thanks!
>
> On Thursday, 27 June 2019 15:03:01 UTC-4, Mike Bayer wrote:
>>
>>
>>
>> On Thu, Jun 27, 2019, at 2:11 PM, Steven James wrote:
>>
>> Currently, `selectin` loading with composite keys works for me on MySQL 
>> and SQLite. The documentation states that it also works with Postgres. I'm 
>> currently trying to get it working on DB2 (connecting to a DB2 for i, 7.2 
>> system).
>>
>> (the following assumes a table with the primary key consisting of two 
>> columns: a and b)
>>
>> selectin loading currently emits:
>>  `SELECT * FROM table WHERE (table.a, table.b) IN ((?, ?), (?, ?))`
>>
>> For this type of loading to work in DB2 (DB2 for i), the syntax needs to 
>> be:
>> `SELECT * FROM table WHERE (table.a, table.b) IN VALUES (?, ?), (?,?)`
>>
>> Is there any way to implement this without a core change? I'm wondering 
>> if I can override the normal operation of in_() using a custom dialect or 
>> custom default comparator.
>>
>>
>> funny thing is that I'm a Red Hat employee, so assuming RH's merger with 
>> IBM goes through I may eventually be an IBM employee, and maybe they'd like 
>> to give me DB2 things to work on :)   However, that is not the case right 
>> now and I've done only very limited work with the DB2 driver as I'm sure 
>> you're aware the database itself is a beast.
>>
>> So this is something DB2's SQLAlchemy driver will have to add support for 
>> at some point, the selectinload thing is going to become more popular and 
>> also the internal mechanism for "IN" is going to be moving entirely to a 
>> newer architecture called "expanding".   That's probably not important here 
>> though.
>>
>> For now, in order to get that "VALUES" in there, you don't need to 
>> "change" Core or work with custom datatypes, there's a variety of event 
>> hooks that can give you access to that part of the SQL more at the string 
>> level.  I'm able to make this work also on Postgresql by intercepting 
>> BinaryExpression in the compiler, see the example below.
>>
>> import re
>>
>> from sqlalchemy import Column
>> from sqlalchemy import create_engine
>> from sqlalchemy import ForeignKeyConstraint
>> from sqlalchemy import Integer
>> from sqlalchemy import String
>> from sqlalchemy.ext.compiler import compiles
>> from sqlalchemy.ext.declarative import declarative_base
>> from sqlalchemy.orm import relationship
>> from sqlalchemy.orm import selectinload
>> from sqlalchemy.orm import Session
>> from sqlalchemy.sql import operators
>> from sqlalchemy.sql.expression import BinaryExpression
>>
>>
>> @compiles(BinaryExpression, "postgresql")  # because I'm testing it here
>> @compiles(BinaryExpression, "db2")
>> def _comp_binary(element, compiler, **kw):
>> text = compiler.visit_binary(element, **kw)
>> if element.operator is operators.in_op:
>> text = re.sub(r" IN \(", " IN (VALUES ", text)
>> return text
>>
>>
>> Base = declarative_base()
>>
>>
>> class A(Base):
>> __tablename__ = "a"
>>
>> id = Column(Integer, primary_key=True)
>> id2 = Column(Integer, primary_key=True)
>> data = Column(String)
>>
>> bs = relationship("B")
>>
>>
>> class B(Base):
>> __tablename__ = "b"
>> id = Column(Integer, primary_key=True)
>>
>> a_id = Column(Integer)
>> a_id2 = Column(Integer)
>>
>> __table_args__ = (
>> ForeignKeyConstraint(["a_id", "a_id2"], ["a.id", "a.id2"]),
>> )
>>

Re: [sqlalchemy] Large RAM usage in bulk_insert_mappings

2019-11-14 Thread James Fennell
Because the memory spike was so bad (the application usually runs at 250mb
RAM, and it went up to a GB during this process), I was able to find the
problem by running htop and using print statements to discover where in the
execution the Python code was when the RAM spike happened.

I unfortunately don't have any advice on actual good tools for tracking RAM
usage in Python programs, but would love to hear if others do.



On Thu, Nov 14, 2019 at 12:41 PM Soumaya Mauthoor <
soumaya.mauth...@gmail.com> wrote:

> What did you use to profile memory usage? I've recently been investigating
> memory usage when loading data using memory_profiler and would be
> interested to find out about the best approach
>
> On Thu, 14 Nov 2019, 17:16 James Fennell wrote:
>
>> Hi all,
>>
>> Just sharing some perf insights into the bulk operation function
>> bulk_insert_mappings.
>>
>> I was recently debugging a SQL Alchemy powered web app that was crashing
>> due to out of memory issues on a small Kubernetes node. It turned out to be
>> "caused" by an over-optimistic invocation of bulk_insert_mappings.
>> Basically I'm reading a CSV file with ~500,000 entries into a list of
>> dictionaries, and then passing it into the bulk_insert_mappings function at
>> once. It turned out the SQL Alchemy work was using 750mb of RAM, which was
>> enough to OOM the small node the web app was running on.
>>
>> A simple workaround is to split the list of 500,000 entries into chunks
>> of 1000 entries each, and then call bulk_insert_mappings on each chunk.
>> When I do this, the extra memory usage is not even noticeable. But also, it
>> seems that this chunked approach is actually faster! I might benchmark that
>> to quantify that.
>>
>> Thought it was interesting. I wonder would it be worth adding to the docs
>> on bulk_insert_mappings? Given that function is motivated by performance,
>> it seems it might be relevant.
>>
>> James
>>
>>
>>
>>



[sqlalchemy] Large RAM usage in bulk_insert_mappings

2019-11-14 Thread James Fennell
Hi all,

Just sharing some perf insights into the bulk operation function
bulk_insert_mappings.

I was recently debugging a SQL Alchemy powered web app that was crashing
due to out of memory issues on a small Kubernetes node. It turned out to be
"caused" by an over-optimistic invocation of bulk_insert_mappings.
Basically I'm reading a CSV file with ~500,000 entries into a list of
dictionaries, and then passing it into the bulk_insert_mappings function at
once. It turned out the SQL Alchemy work was using 750mb of RAM, which was
enough to OOM the small node the web app was running on.

A simple workaround is to split the list of 500,000 entries into chunks of
1000 entries each, and then call bulk_insert_mappings on each chunk. When I
do this, the extra memory usage is not even noticeable. But also, it seems
that this chunked approach is actually faster! I might benchmark that to
quantify that.
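The chunking workaround can be sketched with a small stdlib helper; in the commented usage, `session`, `MyModel`, and `rows_from_csv` are placeholders, not names from the original application:

```python
from itertools import islice

def chunked(mappings, size=1000):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(mappings)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Placeholder usage -- `session` and `MyModel` are assumed to exist:
# for batch in chunked(rows_from_csv, 1000):
#     session.bulk_insert_mappings(MyModel, batch)
# session.commit()

print([len(b) for b in chunked(range(2500), 1000)])  # [1000, 1000, 500]
```

Each call to `bulk_insert_mappings` then only ever holds one batch's worth of per-row state, which caps the memory high-water mark.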

Thought it was interesting. I wonder would it be worth adding to the docs
on bulk_insert_mappings? Given that function is motivated by performance,
it seems it might be relevant.

James



[sqlalchemy] Track a "column FOO does not exist" bug.

2019-09-05 Thread Steven James
Do you have a trigger or a constraint that references that column?



[sqlalchemy] calling LENGTH() in SQLite?

2019-07-29 Thread James Hartley
Perhaps I have been up too many hours, but my syntax foo is fizzling.
Given the following class, I want to compute the string length of
"position" instead of storing it as another attribute which can get out of
sync.  eg.

class Position(Base):
__tablename__ = 'position'
id = Column(INTEGER, primary_key=True)
timestamp = Column(TIMESTAMP, nullable=False)
position = Column(TEXT, unique=True, nullable=False)

So to get all positions of string length 2, the following is not working:
from sqlalchemy.sql.expression import func
# ...
for position, in
session.query(Position.position).filter(func.length(Position.position == 2):
print(position)

Any insight offered would certainly be appreciated.

Thanks!
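The closing parenthesis appears to be the culprit: the comparison belongs outside the `func.length()` call, i.e. `func.length(Position.position) == 2` rather than `func.length(Position.position == 2)`. A trimmed, self-contained sketch (the `timestamp` column and sample data are dropped/invented for brevity):

```python
from sqlalchemy import Column, Integer, Text, create_engine, func
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Position(Base):
    __tablename__ = "position"
    id = Column(Integer, primary_key=True)  # timestamp column trimmed for brevity
    position = Column(Text, unique=True, nullable=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add_all([Position(position="ab"), Position(position="abcd")])
session.commit()

# Close func.length() before comparing: length(col) == 2, not length(col == 2)
matches = [
    p for (p,) in session.query(Position.position)
                         .filter(func.length(Position.position) == 2)
]
print(matches)  # ['ab']
```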



Re: [sqlalchemy] 'selectin' loading with composite keys on DB2 for i

2019-06-27 Thread Steven James
This Worked!

@compiles(BinaryExpression, 'ibm_db_sa')
def _comp_binary(element, compiler, **kwargs):
text = compiler.visit_binary(element, **kwargs)
if element.operator is operators.in_op:
text = re.sub(r"\) IN \(", ") IN (VALUES ", text)
return text


Had to make a couple of changes 1) the dialect name is 'ibm_db_sa' and 
2) (strangely!) DB2 gave me an error that it explicitly disallows parameter 
markers in VALUES (but only in the single-column-key case). My other tests 
seem to indicate that this was a misleading error message but I'm not going 
to touch it again now that it is working. 

Thanks!

On Thursday, 27 June 2019 15:03:01 UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Jun 27, 2019, at 2:11 PM, Steven James wrote:
>
> Currently, `selectin` loading with composite keys works for me on MySQL 
> and SQLite. The documentation states that it also works with Postgres. I'm 
> currently trying to get it working on DB2 (connecting to a DB2 for i, 7.2 
> system).
>
> (the following assumes a table with the primary key consisting of two 
> columns: a and b)
>
> selectin loading currently emits:
>  `SELECT * FROM table WHERE (table.a, table.b) IN ((?, ?), (?, ?))`
>
> For this type of loading to work in DB2 (DB2 for i), the syntax needs to 
> be:
> `SELECT * FROM table WHERE (table.a, table.b) IN VALUES (?, ?), (?,?)`
>
> Is there any way to implement this without a core change? I'm wondering if 
> I can override the normal operation of in_() using a custom dialect or 
> custom default comparator.
>
>
> funny thing is that I'm a Red Hat employee, so assuming RH's merger with 
> IBM goes through I may eventually be an IBM employee, and maybe they'd like 
> to give me DB2 things to work on :)   However, that is not the case right 
> now and I've done only very limited work with the DB2 driver as I'm sure 
> you're aware the database itself is a beast.
>
> So this is something DB2's SQLAlchemy driver will have to add support for 
> at some point, the selectinload thing is going to become more popular and 
> also the internal mechanism for "IN" is going to be moving entirely to a 
> newer architecture called "expanding".   That's probably not important here 
> though.
>
> For now, in order to get that "VALUES" in there, you don't need to 
> "change" Core or work with custom datatypes, there's a variety of event 
> hooks that can give you access to that part of the SQL more at the string 
> level.  I'm able to make this work also on Postgresql by intercepting 
> BinaryExpression in the compiler, see the example below.
>
> import re
>
> from sqlalchemy import Column
> from sqlalchemy import create_engine
> from sqlalchemy import ForeignKeyConstraint
> from sqlalchemy import Integer
> from sqlalchemy import String
> from sqlalchemy.ext.compiler import compiles
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy.orm import relationship
> from sqlalchemy.orm import selectinload
> from sqlalchemy.orm import Session
> from sqlalchemy.sql import operators
> from sqlalchemy.sql.expression import BinaryExpression
>
>
> @compiles(BinaryExpression, "postgresql")  # because I'm testing it here
> @compiles(BinaryExpression, "db2")
> def _comp_binary(element, compiler, **kw):
> text = compiler.visit_binary(element, **kw)
> if element.operator is operators.in_op:
> text = re.sub(r" IN \(", " IN (VALUES ", text)
> return text
>
>
> Base = declarative_base()
>
>
> class A(Base):
> __tablename__ = "a"
>
> id = Column(Integer, primary_key=True)
> id2 = Column(Integer, primary_key=True)
> data = Column(String)
>
> bs = relationship("B")
>
>
> class B(Base):
> __tablename__ = "b"
> id = Column(Integer, primary_key=True)
>
> a_id = Column(Integer)
> a_id2 = Column(Integer)
>
> __table_args__ = (
> ForeignKeyConstraint(["a_id", "a_id2"], ["a.id", "a.id2"]),
> )
>
>
> e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
> Base.metadata.drop_all(e)
> Base.metadata.create_all(e)
>
> s = Session(e)
>
> s.add(A(id=1, id2=1, bs=[B(), B()]))
> s.commit()
>
> s.query(A).options(selectinload(A.bs)).all()
>
>
>
>
>
>
> Thanks,
> Steven James
>
>

[sqlalchemy] Re: 'selectin' loading with composite keys on DB2 for i

2019-06-27 Thread Steven James
Sorry... made a typo there... the desired syntax is:
`SELECT * FROM table WHERE (table.a, table.b) IN (VALUES (?, ?), (?,?))`


On Thursday, 27 June 2019 14:11:16 UTC-4, Steven James wrote:
>
> Currently, `selectin` loading with composite keys works for me on MySQL 
> and SQLite. The documentation states that it also works with Postgres. I'm 
> currently trying to get it working on DB2 (connecting to a DB2 for i, 7.2 
> system).
>
> (the following assumes a table with the primary key consisting of two 
> columns: a and b)
>
> selectin loading currently emits:
>  `SELECT * FROM table WHERE (table.a, table.b) IN ((?, ?), (?, ?))`
>
> For this type of loading to work in DB2 (DB2 for i), the syntax needs to 
> be:
> `SELECT * FROM table WHERE (table.a, table.b) IN VALUES (?, ?), (?,?)`
>
> Is there any way to implement this without a core change? I'm wondering if 
> I can override the normal operation of in_() using a custom dialect or 
> custom default comparator.
>
> Thanks,
> Steven James
>
>



[sqlalchemy] 'selectin' loading with composite keys on DB2 for i

2019-06-27 Thread Steven James
Currently, `selectin` loading with composite keys works for me on MySQL and 
SQLite. The documentation states that it also works with Postgres. I'm 
currently trying to get it working on DB2 (connecting to a DB2 for i, 7.2 
system).

(the following assumes a table with the primary key consisting of two 
columns: a and b)

selectin loading currently emits:
 `SELECT * FROM table WHERE (table.a, table.b) IN ((?, ?), (?, ?))`

For this type of loading to work in DB2 (DB2 for i), the syntax needs to be:
`SELECT * FROM table WHERE (table.a, table.b) IN VALUES (?, ?), (?,?)`

Is there any way to implement this without a core change? I'm wondering if 
I can override the normal operation of in_() using a custom dialect or 
custom default comparator.

Thanks,
Steven James



Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-13 Thread James Fennell
I think Mike's suggestion was to construct the raw SQL string you want,
then reverse engineer to get the correct SQL Alchemy code, which you can
then use with your different models. For complicated SQL logic I think this
is a good practice in general.

You current question seems like a general SQL question rather than
something specific to SQL Alchemy. After you've the SQL, we could discuss
the reverse engineering.

On Mon, May 13, 2019, 10:37 AM Scheck David wrote:

> the problem is that I can't use SQL for this because this is a mixin that
> I use for several objects (tables) because they all have this status
> structure... yes it's quite difficult to do :
>
> right now I'm on this stage, but still don't work
> self.session.query(Object).outerjoin(ObjectStatus).outerjoin(Status)\
> .group_by(Object).with_entities(Object,
> func.max(Status.datum).label("status_datum")).subquery()
>
> self.session.query(Object).join((sub, sub.c.id == Status.id))\
> .filter(sub.c.statustype_id > 50)
>
> On Mon, May 13, 2019 at 4:26 PM, Mike Bayer wrote:
>
>> Hi -
>>
>> I was hoping someone could jump in on this.
>>
>> do you know the SQL that you want to emit?  E.g. plain SQL string.   I
>> can help you do that part.
>>
>>
>> On Mon, May 13, 2019 at 9:11 AM Scheck David  wrote:
>> >
>> > I think I'm near but I can't finish :
>> >
>> > for all the structure : https://dpaste.de/fek5#L
>> >
>> > and here my query :
>> >
>> > self.session.query(Object).outerjoin(ObjectStatus).outerjoin(Status)\
>> > .group_by(Object).with_entities(Object,
>> func.max(Status.datum).label("status_datum")).subquery()
>> >
>> > self.session.query(Object).join((sub, sub.c.id == Status.id))\
>> > .filter(sub.c.statustype_id > 50)
>> >
>> > but status type not reachable.
>> >
>> >
>> > --
>> > SQLAlchemy -
>> > The Python SQL Toolkit and Object Relational Mapper
>> >
>> > http://www.sqlalchemy.org/
>> >
>> > To post example code, please provide an MCVE: Minimal, Complete, and
>> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
>> description.
>> > ---
>> > You received this message because you are subscribed to the Google
>> Groups "sqlalchemy" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> an email to sqlalchemy+unsubscr...@googlegroups.com.
>> > To post to this group, send email to sqlalchemy@googlegroups.com.
>> > Visit this group at https://groups.google.com/group/sqlalchemy.
>> > To view this discussion on the web visit
>> https://groups.google.com/d/msgid/sqlalchemy/fe2da543-2710-4b44-9f88-e68c5e35faea%40googlegroups.com
>> .
>> > For more options, visit https://groups.google.com/d/optout.
>>
>
>
> --
>
> With kindest regards,
>
>
> *David SCHECK*
>
> PRESIDENT/DEVELOPER
>
> [image: Signature Logo Sphax Bleu-01.png]
>
> Phone: +32 4 87 86 70 12
> Visit our website ! https://www.sphax.org
>


Re: [sqlalchemy] Possible regression?

2019-04-28 Thread James Fennell
Thanks for the explanation Mike! Seeing it now, I actually think there's a 
decent reason to want the current backrefs:

My understanding is that with session.merge in SQLAlchemy it's possible to 
draw a very clean line between entities that are persisted (or about to be 
persisted on the next flush) and entities which will never be persisted. This 
is owing to the design choice whereby SQLAlchemy doesn't persist the entity 
you pass into merge; instead, that object is left alone and a new entity is 
created.

With this in mind, there are two ways to see Lyla’s example.

One way: as soon as the tassel_thread was related to the persisted my_head 
(persisted because of the line my_head=session.merge(my_head)) then 
tassel_thread should be seen as in the session already. In this view, the merge 
is necessary and possibly error-prone, as here.

Another way: instead of assigning my_head=session.merge(my_head), keep the 
unpersisted head around with say persisted_head = session.merge(my_head). Then 
relating the new tassel_thread to my_head won’t add it to the session. To get a 
record into the DB, then do a session.merge on it - everything works correctly 
this way.


In both cases, there is the idea of a persisted object graph and a distinct 
unpersisted object graph. Once you relate a new entity to something in the 
persisted object graph, it becomes persistent. 



[sqlalchemy] Re: Possible regression?

2019-04-28 Thread James Fennell
It seems to be related to the cascades happening recursively. The merge 
cascade goes from the tassel thread to the head, and then again down from 
the head to the tassel thread - which is kind of strange, I would expect 
the cascades to only visit each node in the object graph at most once. The 
second cascade is when the second tassel thread is created.

There are a couple of ways that I can get this to work:

   - Instead of setting head when creating the tassel thread, set head_id. 
   This avoids the first cascade being triggered.
   - Add cascade=None on the relationship in the Head to avoid the second 
   cascade being triggered. However the merge operation on the tassel thread 
   now returns a warning that the "add operation along 'Head.tassel_threads' 
   will not proceed"
   - In your second merge, instead of merging in the tassel_thread, merge 
   in the head again.

Btw, I'm using SQLAlchemy 1.2.16 and the behavior you're seeing is still 
reproducible.

On Sunday, 28 April 2019 08:56:40 UTC-4, lyla...@gmail.com wrote:
>
> Hi!
>
> I recently came across some confusing behavior in relations and cascading 
> using SQLite, and I was hoping that I might get some help explaining what 
> the behavior is here. I put together a minimum failing script here. I'm 
> trying to commit one instance of each of two classes, but what ends up 
> happening is that I commit two copies of the many part of a one-to-many 
> relation. I suspect that this has something to do with cascading, but I 
> found a bug report for similar behavior that
> claims to have been fixed several years ago, and I'm wondering if there was 
> some kind of regression? I'm running SQLAlchemy 1.3.1 on Ubuntu and I'm 
> still using SQLite at this stage of development. 
>
> from sqlalchemy import create_engine
> from sqlalchemy.orm import scoped_session, sessionmaker
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy import Column, Integer, String, Text, ForeignKey
> from sqlalchemy.orm import relationship
>
> import os
>
> db_engine = create_engine('sqlite:///sample.db', convert_unicode=True)
> db_session = scoped_session(sessionmaker(autocommit=False,
>  autoflush=False,
>  bind=db_engine))
>
> Base = declarative_base()
> Base.query = db_session.query_property()
>
> class Head(Base):
> __tablename__ = 'head'
> id = Column(String, primary_key=True)
> tassel_threads = relationship("TasselThread", back_populates="head")
> def __init__(self, id):
> self.id=id
>
> class TasselThread(Base):
> __tablename__ = 'tassel_thread'
> id = Column(Integer, primary_key=True)
> head_id = Column(Integer, ForeignKey('head.id'), nullable=False)
> head = relationship("Head", back_populates="tassel_threads")
> def __init__(self, head):
> self.head = head
>
> def init_db():
> Base.metadata.create_all(bind=db_engine)
>
>
> def do_db_work():
>
> my_head = Head(id="foobar")
> my_head = db_session.merge(my_head)
> db_session.commit()
>
> my_tassel_thread = TasselThread(head=my_head)
> db_session.merge(my_tassel_thread)
> db_session.commit()
>
>
> if os.path.exists("sample_data.db"):
> os.remove("sample_data.db")
> init_db()
> do_db_work()
> a = db_session.query(TasselThread).all()
> print(len(a))
> # output: 2, should be 1
>
> Thanks for any help you might be able to provide!
>
> -Lyla Fischer
>



[sqlalchemy] Re: What is the standard/best way to merge an object graph into the session?

2019-04-24 Thread James Fennell
Okay let me answer my own question. The problem is that my parent-child 
relationship does not have the delete-orphan cascade. So when I set the new 
children, the old child_2 loses its parent (as is expected, because it's no 
longer a child) and then there's an error because the DB has a not null 
constraint on the parent_pk coming from nullable=False.

I guess the moral of the story is that parent_pk being non-nullable 
essentially requires delete-orphan.
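A minimal sketch of that moral, reusing the Parent/Child models from the original post below with delete-orphan added to the children relationship — the merge that drops a child now deletes the orphaned row instead of trying to NULL its parent_pk:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    pk = Column(Integer, primary_key=True)
    data = Column(String)
    # delete-orphan: a Child removed from this collection is deleted,
    # rather than having parent_pk set to NULL (which would violate NOT NULL).
    children = relationship("Child", back_populates="parent",
                            cascade="all, delete-orphan")

class Child(Base):
    __tablename__ = "child"
    pk = Column(Integer, primary_key=True)
    parent_pk = Column(Integer, ForeignKey("parent.pk"), nullable=False)
    data = Column(String)
    parent = relationship("Parent", back_populates="children")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

# Previous feed update: parent with two children.
parent = Parent(pk=5, data="First")
parent.children = [Child(pk=6, data="First child"), Child(pk=7)]
session.add(parent)
session.commit()

# New feed update no longer contains child 7; merge now deletes it.
new_parent = Parent(pk=5, data="Second")
new_parent.children = [Child(pk=6, data="Second child")]
session.merge(new_parent)
session.commit()

print(sorted(c.pk for c in session.query(Child)))  # [6]
```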



[sqlalchemy] Re: What is the standard/best way to merge an object graph into the session?

2019-04-24 Thread James Fennell
Oooo the problem is not what I thought.

The problem is that in my 'new data' there is no new_child_2. This is an 
expected case, as sometimes children disappear, so I will update the post.



[sqlalchemy] What is the standard/best way to merge an object graph into the session?

2019-04-24 Thread James Fennell
I have a parent child relationship which I construct from a data feed. At 
the time of constructing the object graph I don't have access to the 
primary keys of the entities, so I build up the object graph by using the 
relationship attributes. My understanding was that I could perform a 
session.merge to get the new state of the whole object graph into the 
database, but when I try do this I get an exception.

Sample code that reproduces the problem I encounter:

from sqlalchemy import Column, Integer, ForeignKey, create_engine, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker, relationship

Base = declarative_base()

PARENT_PK = 5
CHILD_1_PK = 6
CHILD_2_PK = 7


class Parent(Base):
__tablename__ = 'parent'

pk = Column(Integer, primary_key=True)
data = Column(String)

children = relationship(
'Child',
back_populates='parent'
)


class Child(Base):
__tablename__ = 'child'

pk = Column(Integer, primary_key=True)
parent_pk = Column(Integer, ForeignKey('parent.pk'), nullable=False)
data = Column(String)

parent = relationship(
'Parent',
back_populates='children'
)


engine = create_engine('sqlite:///temp.db')
session_factory = sessionmaker(bind=engine, autoflush=False)
Session = scoped_session(session_factory)
session = Session()
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)

# Put some data in the database from some previous feed update
parent = Parent(pk=PARENT_PK, data='First')
child_1 = Child(pk=CHILD_1_PK, data='First child')
child_2 = Child(pk=CHILD_2_PK)
parent.children = [child_1, child_2]

session.add(parent)

session.commit()

# New data in the new feed update
new_parent = Parent(pk=PARENT_PK, data='Second')
new_child_1 = Child(pk=CHILD_1_PK, data='Second child')
new_parent.children = [new_child_1]

session.merge(new_parent)

session.commit()


Exception:

sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) NOT NULL constraint 
failed: child.parent_pk

[SQL: UPDATE child SET parent_pk=? WHERE child.pk = ?]

[parameters: (None, 7)]

(Background on this error at: http://sqlalche.me/e/gkpj)


Manually setting new_child_1.parent_pk before the merge doesn't do anything 
as the relationship takes precedence. To avoid the exception I need to do 
something like:

new_child_1.parent_pk = PARENT_PK
del new_parent.children
del new_child_1.parent


Is there an easier way to use session.merge for a graph - or a more 
standard method? Or do I always have to do some 'post processing' to strip 
out the relationships before using it?  




Re: [sqlalchemy] Re: Hybrid Property vs Hybrid Method Expression Names

2018-10-04 Thread james
To follow this up - what would be the best way to get these extra dragons 
in? I would be happy to submit a PR or something if that is easier.

On Friday, September 14, 2018 at 10:32:52 AM UTC+2, ja...@cryptosense.com 
wrote:
>
> Thanks for the help - I had missed the "copy vs modifying in place" 
> difference between hybrid_method and hybrid_property.
>
> I think adding another dragon would be helpful here, probably located in 
> http://docs.sqlalchemy.org/en/latest/orm/extensions/hybrid.html#defining-expression-behavior-distinct-from-attribute-behavior.
>  
> I'm happy to move this thread into a docs issue if that would be helpful.
>
> On Thursday, September 13, 2018 at 5:45:05 PM UTC+2, Mike Bayer wrote:
>>
>> On Thu, Sep 13, 2018 at 7:55 AM,   wrote: 
>> > Update: I have just found 
>> > 
>> http://docs.sqlalchemy.org/en/latest/changelog/migration_12.html#hybrid-attributes-support-reuse-among-subclasses-redefinition-of-getter
>>  
>> > which documents that getters and setters must have the same name as the 
>> > original expression. 
>> > 
>> > Can I just check that it is expected for expressions to have this 
>> > requirement? If so, is it worth opening a docs issue to add this to the 
>> main 
>> > hybrid property docs? 
>>
>> this is the mechanics of Python, when you say: 
>>
>> @mything.foobar 
>> def _myotherthing(...) 
>>
>>
>> you are assigning to the name "_myotherthing".   Since 
>> @hybrid_property now creates a copy when any modifier is called, the 
>> original hybrid you have at "mything" was not changed. 
>>
>> All the documentation examples at 
>> http://docs.sqlalchemy.org/en/latest/orm/extensions/hybrid.html 
>> indicate using the same method name for each modification.I can 
>> add more dragons to the hybrid docs as well clarifying that this 
>> naming scheme is intentional and required, if that helps. 
>>
>>
>>
>> > 
>>
>



Re: [sqlalchemy] Re: Hybrid Property vs Hybrid Method Expression Names

2018-09-14 Thread james
Thanks for the help - I had missed the "copy vs modifying in place" 
difference between hybrid_method and hybrid_property.

I think adding another dragon would be helpful here, probably located in 
http://docs.sqlalchemy.org/en/latest/orm/extensions/hybrid.html#defining-expression-behavior-distinct-from-attribute-behavior.
 
I'm happy to move this thread into a docs issue if that would be helpful.

On Thursday, September 13, 2018 at 5:45:05 PM UTC+2, Mike Bayer wrote:
>
> On Thu, Sep 13, 2018 at 7:55 AM,  > 
> wrote: 
> > Update: I have just found 
> > 
> http://docs.sqlalchemy.org/en/latest/changelog/migration_12.html#hybrid-attributes-support-reuse-among-subclasses-redefinition-of-getter
>  
> > which documents that getters and setters must have the same name as the 
> > original expression. 
> > 
> > Can I just check that it is expected for expressions to have this 
> > requirement? If so, is it worth opening a docs issue to add this to the 
> main 
> > hybrid property docs? 
>
> this is the mechanics of Python, when you say: 
>
> @mything.foobar 
> def _myotherthing(...) 
>
>
> you are assigning to the name "_myotherthing".   Since 
> @hybrid_property now creates a copy when any modifier is called, the 
> original hybrid you have at "mything" was not changed. 
>
> All the documentation examples at 
> http://docs.sqlalchemy.org/en/latest/orm/extensions/hybrid.html 
> indicate using the same method name for each modification.I can 
> add more dragons to the hybrid docs as well clarifying that this 
> naming scheme is intentional and required, if that helps. 
>
>
>
> > 
>



[sqlalchemy] Re: Hybrid Property vs Hybrid Method Expression Names

2018-09-13 Thread james
Update: I have just found 
http://docs.sqlalchemy.org/en/latest/changelog/migration_12.html#hybrid-attributes-support-reuse-among-subclasses-redefinition-of-getter
 
which documents that getters and setters must have the same name as the 
original expression.

Can I just check that it is expected for expressions to have this 
requirement? If so, is it worth opening a docs issue to add this to the 
main hybrid property docs?



[sqlalchemy] Hybrid Property vs Hybrid Method Expression Names

2018-09-13 Thread james
Hi,

I've been adding a hybrid property to a model as per the documentation at 
http://docs.sqlalchemy.org/en/latest/orm/extensions/hybrid.html. In 
particular, the behavior is complicated enough that I need to define the 
expression version of the property separate from the normal version.

My model looks like this:

class MyModel(object):
# ...

@hybrid_property
def my_property(self):
# Do some stuff at the instance level

@my_property.expression
def my_property(cls):
# Do some stuff at the class level


This works as expected, but if I change the name of the class function it 
suddenly stops working - based on the error it seems like the renamed class 
function is being ignored and the instance function is being used instead. 
Weirdly, this does not happen when using hybrid methods rather than 
properties - so the following works as expected:

class MyModel(object):
# ...

@hybrid_method
def my_property(self):
# Do some stuff at the instance level

@my_property.expression
def my_property_expression(cls):
# Do some stuff at the class level

This is confusing - from the look of the code, the two decorators seem to 
work in the same way, so I am not sure why the renaming matters to the 
property and not the method. Is this expected behaviour?

Thanks,

James






[sqlalchemy] Re: Modifying results of a KeyedTuple query?

2018-08-14 Thread James Couch
Just a quick followup...

Thanks again for the help/advice. I did what you suggested, and the whole 
query (with bulk_update_mappings) takes .16 seconds to return a result set 
of 7800 records or so.

That's down from the ~1.2 seconds it took before I did the optimizations.
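The dict-copy plus bulk_update_mappings pattern from Mike's suggestion can be sketched like this (hypothetical trimmed-down Host model; bulk_update_mappings matches rows on the primary key present in each dictionary):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Host(Base):
    __tablename__ = "host"
    id = Column(Integer, primary_key=True)
    hostname = Column(String)
    buildstatus = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Host(id=1, hostname="a", buildstatus="built"),
                 Host(id=2, hostname="b", buildstatus="built")])
session.commit()

# A column-only query returns immutable row tuples...
rows = session.query(Host.id, Host.hostname, Host.buildstatus).all()

# ...so copy each row into a dict, modify the dicts, and send them back
# in one bulk UPDATE keyed on the "id" primary key in each mapping.
mappings = [dict(row._mapping) for row in rows]
for m in mappings:
    if m["hostname"] == "b":
        m["buildstatus"] = "removed"
session.bulk_update_mappings(Host, mappings)
session.commit()

print(session.get(Host, 2).buildstatus)  # removed
```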



Re: [sqlalchemy] Modifying results of a KeyedTuple query?

2018-08-10 Thread James Couch
Oops, I missed the part about the bulk update. I suppose that would work as 
long as it's keying on the index. The documentation doesn't make it clear 
how it picks the index to query on... I assume it inspects the column 
properties and picks the best one to use behind the scenes? Normally you'd 
query on the id or something.

On Friday, August 10, 2018 at 4:21:32 PM UTC-5, James Couch wrote:
>
> I think I see what you mean. Do an inline query/update, maybe just query 
> by primary index for speed. I guess that won't add too much overhead, I'll 
> give it a shot.
>
> On Friday, August 10, 2018 at 1:43:51 PM UTC-5, Mike Bayer wrote:
>>
>> You need to copy the keyedtuples into some other data structure, like a 
>> dictionary, modify it, then send that data back into updates.   Your best 
>> bet is to use the bulk update stuff once you have those dictionaries, see 
>> http://docs.sqlalchemy.org/en/latest/orm/session_api.html?highlight=bulk#sqlalchemy.orm.session.Session.bulk_update_mappings
>>  
>> .
>>
>>
>>



Re: [sqlalchemy] Modifying results of a KeyedTuple query?

2018-08-10 Thread James Couch
I think I see what you mean. Do an inline query/update, maybe just query by 
primary index for speed. I guess that won't add too much overhead, I'll 
give it a shot.

On Friday, August 10, 2018 at 1:43:51 PM UTC-5, Mike Bayer wrote:
>
> You need to copy the keyedtuples into some other data structure, like a 
> dictionary, modify it, then send that data back into updates.   Your best 
> bet is to use the bulk update stuff once you have those dictionaries, see 
> http://docs.sqlalchemy.org/en/latest/orm/session_api.html?highlight=bulk#sqlalchemy.orm.session.Session.bulk_update_mappings
>  
> .
>
>
>



Re: [sqlalchemy] Modifying results of a KeyedTuple query?

2018-08-10 Thread James Couch
On Friday, August 10, 2018 at 4:03:06 PM UTC-5, Jonathan Vanasco wrote:
>
>
> A quick background on Mike's short answer... Tuples are immutable lists in 
> Python, and "KeyedTuple" should indicate that you can't change the values. 
> They're just a handy result storage object, not an ORM object mapped to a 
> table row.
>

Indeed. I was hoping there might be a way to convert the KeyedTuple back 
into a model class without having to do a second query just to update the 
values. 



[sqlalchemy] Modifying results of a KeyedTuple query?

2018-08-10 Thread James Couch
Hey all. Long time lurker, first time poster.

I'm using sqlalchemy ORM. We have a fairly decent sized data set, and one 
table has a pretty large number of columns, some of them with foreignkeys. 
I found that limiting a query to specific columns speeds up the time it 
takes to come back with a result by quite a bit (like 100x faster for this 
one table, talking like over a second to just a few milliseconds 
difference).

So I have a query that goes something like this:

q = dbsession.query(
Host.hostname,
Host.platform,
Host.ostype,
Host.buildstatus,
Host.created_on,
Host.created_by,
Host.modified_by,
Host.jobrunid).filter(Host.buildstatus != None).all()

I then pass that result through a few functions to validate/update some of 
the fields. Problem is, I could do this on a regular Query result (q = 
dbsession.query(Host).filter(Host.buildstatus != None).all()) but the 
results I get back from this optimized query as posted above, I can't 
modify those values, I get "AttributeError: can't set attribute".

I've been Googling this and searching through these posts, but I guess I 
don't know the right terms to search on. I can't find anything in the 
documentation talking about how to modify the results of a KeyedTuple query.

Normally I would do something like:

for x in q:
if pingresult(x.hostname) is False:
x.buildstatus = 'removed'
dbsession.commit()
dbsession.close()

But if q is a set of KeyedTuples, this won't work.

What's the speediest way to accomplish this task? Should I just stick to 
querying all the columns (query(Host)...) and just take the performance 
hit?



[sqlalchemy] Re: Airflow > Configuring A SQL Alchemy Connection String To Use Unix Sockets

2017-11-10 Thread james . lloyd
Found it!

I wasn't using the $AIRFLOW_HOME environment variable (I didn't think it 
relied on it).

As such, 
airflow initdb
must've been using its own airflow.cfg file, not the one in /airflow.
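A quick way to reproduce or rule out that failure mode is to export the variable explicitly before running `airflow initdb`; the path below is illustrative:

```shell
# Point Airflow at the intended config directory before initdb runs;
# without this it falls back to a default airflow.cfg location.
export AIRFLOW_HOME=/airflow
echo "AIRFLOW_HOME is $AIRFLOW_HOME"
```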



On Friday, 10 November 2017 07:31:54 UTC, james...@netnatives.co.uk wrote:
>
> Hello.
>
> I'm using the Cloud SQL Proxy on a Compute Engine VM instance. I'm then 
> configuring Airflow (which uses SQLAlchemy).
>
> I've set up a Unix socket like this:
>
> /opt/cloud_sql_proxy/cloud_sql_proxy 
> -instances=myproject:europe-west1:airflowinstance -dir=/cloudsql &
>
>
> I can connect to the Cloud SQL Proxy using this socket without any 
> trouble. i.e.,
>
> mysql -u airflowuser -p -S /cloudsql/myproject:europe-west1:
> airflowinstance
>
> But I can't get Airflow to connect using the below connection string:
>
> sql_alchemy_conn = mysql://airflowuser:xxx@/airflowdatabase?unix_socket=
> /cloudsql/myproject:europe-west1:airflowinstance
>
> I get a 'connection refused' error.
>
> Any help much appreciated!



[sqlalchemy] Airflow > Configuring A SQL Alchemy Connection String To Use Unix Sockets

2017-11-09 Thread james . lloyd
Hello.

I'm using the Cloud SQL Proxy on a Compute Engine VM instance. I'm then 
configuring Airflow (which uses SQLAlchemy).

I've set up a Unix socket like this:

/opt/cloud_sql_proxy/cloud_sql_proxy 
-instances=myproject:europe-west1:airflowinstance -dir=/cloudsql &


I can connect to the Cloud SQL Proxy using this socket without any trouble. 
i.e.,

mysql -u airflowuser -p -S /cloudsql/myproject:europe-west1:airflowinstance

But I can't get Airflow to connect using the below connection string:

sql_alchemy_conn = mysql://airflowuser:xxx@/airflowdatabase?unix_socket=
/cloudsql/myproject:europe-west1:airflowinstance

I get a 'connection refused' error.

Any help much appreciated!





Re: [sqlalchemy] .count() hangs indefinitely

2016-11-06 Thread James Burke
Thanks Mike for your response.

The query is run against a staging db and the table only contains some 500 
records.

But I will check the query as you have suggested to see what is going on.

Cheers




Re: [sqlalchemy] .count() hangs indefinitely

2016-11-06 Thread James Burke

Thanks for your reply Simon.

- I am using a PostgreSQL database.
- Running the SQL generated by SQLAlchemy in Postgres also hangs.
- There is no traceback.



[sqlalchemy] .count() hangs indefinitely

2016-11-04 Thread James Burke
Hi All,

I've run into an odd problem, where calling the count function hangs my code 
indefinitely. The odd thing is it was working until recently, so I'm a 
little confused.

customer = session.query(Customer).filter(
    Customer.phone_number.contains([message['metadata']['MIN']]))
logger.debug(len(customer.all()))
logger.debug('Works perfectly fine')
logger.debug(customer.count())
logger.debug('I will never see this')

Has anybody run into this problem before?
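For what it's worth, `Query.count()` wraps the whole query in a subquery, so the statement the database sees is not the same one behind `len(customer.all())`. The two shapes can be compared in raw SQL; the sketch below uses the stdlib sqlite3 driver and invented data purely to illustrate (the ORM spelling of the second form would be something like `session.query(func.count(Customer.id)).filter(...)`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, phone TEXT)")
conn.executemany("INSERT INTO customer (phone) VALUES (?)",
                 [("555-0001",), ("555-0002",), ("555-0199",)])

# Query.count() wraps the original query in a subquery:
wrapped = conn.execute(
    "SELECT count(*) FROM (SELECT * FROM customer WHERE phone LIKE '%01%')"
).fetchone()[0]

# an explicit aggregate counts directly, with no wrapping subquery,
# which can plan very differently on a real server:
direct = conn.execute(
    "SELECT count(id) FROM customer WHERE phone LIKE '%01%'"
).fetchone()[0]
```

Capturing the exact SQL emitted by `customer.count()` (e.g. with `echo=True`) and running it by hand in psql is the quickest way to see whether the wrapped form is what hangs.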



[sqlalchemy] Using SQL_CALC_FOUND_ROWS

2016-05-23 Thread James Li
Hi All,
I want to use SQL_CALC_FOUND_ROWS with my select query and use a following 
SELECT FOUND_ROWS() to get the total count in sqlalchemy. But how do I make 
sure *SELECT FOUND_ROWS()* returns the correct cached value in the case of 
*concurrent* queries of *select SQL_CALC_FOUND_ROWS* happen? Is the total 
count cached somewhere in a sqlalchemy session after *select 
SQL_CALC_FOUND_ROWS* runs? Can I control where it gets cached in my code?

Thanks!
-james
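`FOUND_ROWS()` is per-connection server state in MySQL; SQLAlchemy does not cache it in the Session, so the guarantee comes from running both statements on the same checked-out DBAPI connection (for example inside a single `engine.connect()` block), not from any session-level cache. The runnable sketch below uses SQLite's per-connection `last_insert_rowid()` as a stand-in to show why another pooled connection, however concurrent, sees nothing:

```python
import sqlite3

# Two separate connections, standing in for two pooled MySQL connections.
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")

a.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
a.execute("INSERT INTO t DEFAULT VALUES")
a.commit()

# Per-connection state (analogous to FOUND_ROWS() after a
# SELECT SQL_CALC_FOUND_ROWS): visible on the connection that
# ran the statement...
same_conn = a.execute("SELECT last_insert_rowid()").fetchone()[0]
# ...but not on any other connection.
other_conn = b.execute("SELECT last_insert_rowid()").fetchone()[0]
```

So as long as `SELECT SQL_CALC_FOUND_ROWS ...` and `SELECT FOUND_ROWS()` are issued on one held connection, concurrent queries on other connections cannot interfere.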



Re: [sqlalchemy] Session.flush() before Session.begin()

2016-02-03 Thread James Emerton
ning of that transaction, there is no ORM I'm aware of that can do 
> this any differently than SQLAlchemy does.   More simplistic ORMs like Django 
> / Peewee etc. have no concept of expiration-on-rollback, and after a 
> transaction rollback your objects have exactly the same in-memory state as 
> before.   SQLAlchemy offers this mode of operation both through the 
> _enable_transaction_accounting=False (with small bugfixes apparently) as well 
> as the use of an external transaction.

The only requirement is that attributes and instances that were dirty before 
the transaction began must be dirty after a rollback. I’m not concerned about 
instance attributes being rolled back to their previous value (we didn’t have 
this before.) If we use _enable_transaction_accounting=False and issue multiple 
calls to flush during the course of a transaction is there a way to restore the 
dirty state to the point before the transaction began?

Thanks,
  James




[sqlalchemy] confirming SQLite pragma settings?

2015-11-08 Thread James Hartley
I have successfully installed SQLAlchemy 1.0.9, & can enable foreign key
constraint support on each connection by hooking the event as specified in
the following:

http://docs.sqlalchemy.org/en/latest/dialects/sqlite.html

SQLite also allows pragma settings to be queried in the command-line shell
as:

sqlite> pragma foreign_keys;

It would be good if I could log the setting too.  Is it possible to query
the connection for such information?

Thanks!
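It should be possible to query a PRAGMA like any ordinary statement on a checked-out connection; in SQLAlchemy terms that would be something like `connection.execute(text("PRAGMA foreign_keys")).scalar()` inside the same connect event or on the pooled connection. A minimal sketch using the stdlib driver directly:

```python
import sqlite3

# Enable the pragma on this connection, then read it back for logging.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
enabled = conn.execute("PRAGMA foreign_keys").fetchone()[0]
```

Note that the setting is per-connection, so the read-back only reflects the connection it runs on, which is exactly why the event-hook approach in the linked docs is needed.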



[sqlalchemy] Support for Impala?

2015-09-02 Thread James Flint
Is this actively being worked on? 



Re: [sqlalchemy] 0.9.7: Intermittently find a dead weakref to a listens_for function while checking _stored_in_collection of a registry

2015-04-03 Thread Evan James
Michael,

Thanks - sorry to have wasted your time.  It seems I gave up on Googling my 
stack trace too soon.

Thanks,
Evan James



[sqlalchemy] 0.9.7: Intermittently find a dead weakref to a listens_for function while checking _stored_in_collection of a registry

2015-04-02 Thread Evan James
Hi folks,

While running my test suite, I hit an issue with the following stack trace:

ERROR at setup of test_track_before_delete
 

request = <SubRequest 'database_session' for <Function 
'test_track_before_delete'>>, engine = Engine(sqlite:///test.db) 


 

@pytest.fixture 
def database_session(request, engine): 
    connection = engine.connect() 
    trans = connection.begin() 

    meta.Session = scoped_session(sessionmaker(autocommit=False, 
                                               autoflush=True, bind=connection)) 
    register_session_listeners(meta.Session) 

    meta.Session.begin_nested() 

>   @event.listens_for(meta.Session, 'after_transaction_end') 
    def restart_savepoint(session, transaction): 
        if transaction.nested and not transaction._parent.nested: 
 

automated_tests/conftest.py:52:  

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/api.
py:94: in decorate 

listen(target, identifier, fn, *args, **kw) 

../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/api.
py:63: in listen 

_event_key(target, identifier, fn).listen(*args, **kw) 

../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/
registry.py:187: in listen 

self.dispatch_target.dispatch._listen(self, *args, **kw) 

../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/base.
py:184: in _listen 

event_key.base_listen(propagate=propagate, insert=insert, named=named) 

../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/
registry.py:226: in base_listen 

for_modify(target.dispatch).append(self, propagate) 

../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/attr.
py:118: in append 

registry._stored_in_collection(event_key, self) 

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

 

event_key = <sqlalchemy.event.registry._EventKey object at 0x111c6b3d0>, 
owner = <sqlalchemy.event.attr._DispatchDescriptor object at 0x10eac8590> 


 

def _stored_in_collection(event_key, owner): 
    key = event_key._key 

    dispatch_reg = _key_to_collection[key] 

    owner_ref = owner.ref 
    listen_ref = weakref.ref(event_key._listen_fn) 

    if owner_ref in dispatch_reg: 
>       assert dispatch_reg[owner_ref] == listen_ref 
E       assert <weakref at 0x111cc25d0; dead> == <weakref at 0x111cc2af8; 
to 'function' at 0x111c7f668 (restart_savepoint)> 


 

../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/
registry.py:74: AssertionError


While running the @event.listens_for decorator in the setup of a test 
fixture, SQLAlchemy throws an assertion because the ref it has in the 
dispatch registry isn't identical to the ref to the listener function. 
 We're on SQLAlchemy 0.9.7, pytest 2.6.4.  Note that the test suite is 
setting up nested transactions on SQLite; we are using the recommended 
workaround from SQLAlchemy documentation to take control of transactional 
integration from pysqlite.


Since there's an assertion in the code in _stored_in_collection(), I assume 
that there's some expectation that the refs might not match; am I doing 
something wrong in my setup which this assertion is meant to catch?  I've 
only seen this error once (while tracking down a different nondeterministic 
error in my own app's code), so I can't provide much more information than 
this, but the portion of test fixture code seen above in the stack trace is 
basically the entire reproduction case.  This fixture runs before every 
test in my suite, but I've only seen an error once across a large number of 
runs, so the error is *very* intermittent.  Because of that, I'm not 
worried too much about the error itself, but I thought I should post it 
here in case it's a symptom of something I should be worrying about.


Thanks,

Evan James



Re: [sqlalchemy] Relationship behavior change when switching from reflected to declarative styles; test case included

2015-03-04 Thread Evan James
On Tuesday, March 3, 2015 at 4:41:20 PM UTC-5, Michael Bayer wrote:

 Essentially the issue is likely because the mappings in these two examples 
 are not equivalent; the reflection based version has Widget.frobnicator 
 and 
 Frobnicator.widget communicating with each other through a backref, and 
 the 
 declarative version does not. 


Thanks, Michael.  Yes, we're on SQLAlchemy 0.9.  We used a script to 
generate the declarative models by introspecting on the reflective ones, 
and in the process it turned a relationship defined once with a backref 
into two relationships defined without using the backref keyword.  We 
missed that the backref argument is responsible for the event listeners as 
well as for creating the relationship on the other model.

Adding the back_populates argument to the model declarations fixes our 
issue.

Thanks,
Evan James
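A minimal sketch of the paired declaration (model and attribute names borrowed from the thread; the column details are invented for illustration): with `back_populates`, the two one-sided `relationship()` calls are linked just as a single `backref` would have linked them, including the in-memory synchronization events.

```python
from sqlalchemy import Column, ForeignKey, Integer
try:
    from sqlalchemy.orm import declarative_base, relationship
except ImportError:  # older SQLAlchemy releases
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

Base = declarative_base()

class Widget(Base):
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)
    # back_populates pairs the two one-sided relationships so they emit
    # the same attribute events a backref would have created implicitly.
    frobnicators = relationship("Frobnicator", back_populates="widget")

class Frobnicator(Base):
    __tablename__ = "frobnicator"
    id = Column(Integer, primary_key=True)
    widget_id = Column(Integer, ForeignKey("widget.id"))
    widget = relationship("Widget", back_populates="frobnicators")

w = Widget()
f = Frobnicator()
f.widget = w  # the collection side is kept in sync in memory
```

Without `back_populates` (or `backref`), assigning `f.widget` would leave `w.frobnicators` stale until the next flush and query, which is the behavior difference described above.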



[sqlalchemy] Relationship behavior change when switching from reflected to declarative styles; test case included

2015-03-03 Thread Evan James
Hi folks,

I'm working on a SQLAlchemy-based app where we've decided to make some 
infrastructure changes, in particular moving from reflection to declaration 
for mapping the models.

However, we're now running into issues where, after switching to 
declarative, relationships aren't populated the way we expect when 
manipulated in Python.  For example, we have code that looks like this:

class Widget(...):
    def merge(self, other_widget):
        """Merge the widgets, transferring the dependent items on the
        other widget to this one."""
        for frobnicator in other_widget.frobnicators[:]:
            frobnicator.widget = self

        meta.Session.delete(other_widget)


This code worked as hoped-for when we were reflecting on the database to 
create our mappers, but after switching to declarative, the dependent items 
are cascade-deleted on commit when other_widget is deleted, rather than 
being preserved as children of the merged widget.

It's not difficult to fix this particular issue - explicitly removing the 
frobnicators from the other_widget.frobnicators collection will prevent 
them from being deleted, and then the merged widget correctly has them - 
but we're finding we have a class of bugs where relationships aren't being 
handled the same way as before.  Unfortunately, building a comprehensive test 
suite is one of the infrastructure changes we're in the process of making 
- which means it's not done yet and we can't easily track down all the 
places we could get tripped up.  We would really prefer to resolve this by 
changing the definitions in the models, not by changing the application 
code that manipulates the membership of relationships.

I've created a reduced test case here 
https://github.com/ejames/sqlalchemy_reflection_problem_reduction which 
specifically displays the behavior we're having trouble with in minimal 
form.  If one line in the test case is commented out, the test will pass 
for reflective models and fail for declarative models; if the line is put 
back in, success and failure reverse.

My question:  How can we make relationships function identically in 
declarative syntax as they did in reflective syntax?  We thought we had 
migrated mapping styles in a way that wouldn't change anything, but here we 
are.  What are we missing?

Thanks,
Evan James



Re: [sqlalchemy] Windowed Queries breaking after a commit and emitting many, many Selects.

2014-10-27 Thread James Meneghello
I'm using a scoped session with a session generator, and I didn't want 
expire_on_commit to be False for everything, so setting it via the 
Session constructor wouldn't work properly. If a session was created prior 
to the one that needed that flag, it'd give me a ProtocolError, since it 
couldn't change the session after it'd already been created. Manually 
setting the expire_on_commit attribute on the session and setting it back 
after it was done worked fine, though, and didn't mess with the scoped 
session pool:

with db_session() as db:
    db.expire_on_commit = False
    # do stuff
    db.expire_on_commit = True



[sqlalchemy] Windowed Queries breaking after a commit and emitting many, many Selects.

2014-10-26 Thread James Meneghello
The application I'm working on operates over extremely large datasets, so 
I'm using the query windowing from here 
(https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/WindowedRangeQuery) 
to break it into manageable chunks. The query window is usually around 10k 
rows, after which it updates/deletes some rows and continues on. Simple 
breakdown is like this:

query = session.query(Item).filter(...several filters)
total_items = query.count()  # used for logging
for row in windowed_query(query, Item.id, 1):
    count += 1

    # process, determine whether to keep (and update) or delete (put in a
    # list for batch-deletion); one such example is:
    if row.group_name != regex.group_name:
        continue

    if count >= 1:
        save(items)  # items to be kept, issues updates
        deleted = db.query(Item).filter(Item.id.in_(dead_items)).delete(synchronize_session='fetch')
        session.commit()
        count = 0

This works fine until it's gone through a save/delete cycle. Once it's 
saved, it goes back to access the windowed query again and pull the next 
10k rows. This works until the following line:

if row.group_name != regex.group_name:

At which point sqla will emit a SELECT for the item of that specific ID, 
presumably because the group_name wasn't available and it had to fetch it. 
This only occurs after the commit - so I assume that committing the session 
is breaking the query. Hence, for the next 10k rows, it emits 10k queries 
(one per row).

Because the script is potentially processing so many rows, I don't want to 
let the dead_items list grow to be massive, so the deletes need to occur 
fairly regularly throughout the process.

Any idea what's causing this / how to fix it? Thanks!
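The per-row SELECTs after a commit are consistent with the Session's default `expire_on_commit=True`: every instance loaded so far is expired at `commit()`, and the next attribute access refreshes it with a fresh SELECT. A self-contained sketch of the usual remedy (model and data invented for illustration):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
try:
    from sqlalchemy.orm import declarative_base
except ImportError:  # older SQLAlchemy releases
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    group_name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# With the default expire_on_commit=True, every loaded instance is
# expired at commit(), so the next attribute access emits a SELECT per
# row -- the behaviour described above.  Turning it off keeps rows
# already loaded by the windowed query usable after periodic commits.
Session = sessionmaker(bind=engine, expire_on_commit=False)
session = Session()
session.add(Item(group_name="alt.binaries.example"))
session.commit()

item = session.query(Item).first()
session.commit()  # would expire `item` under the default setting

# no refresh SELECT is needed here because the instance was not expired
name = item.group_name
```

The trade-off is that the in-memory state may drift from the database between commits, which is usually acceptable for a forward-only batch scan like this one.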



[sqlalchemy] translating many-to-many relationship from CSV to database?

2014-10-13 Thread James Hartley
I have a CSV file with lots of redundant data which models many-to-many
relationships.  I'm needing to scrub the data as it is inserted into the
database littered with unique constraints.  Is there a way to insert the
data once without querying for each object before inserting?

I'm sure this is a common CSV translation nit, but it is new to me.

Thanks for any insight provided!
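One common approach is to dedupe in Python first (a set per entity plus a set of link pairs), then issue one bulk insert per table and let `INSERT OR IGNORE` (or the equivalent upsert on your database) absorb anything already present, so no per-object SELECT is needed. A runnable sketch with an invented author/book schema:

```python
import csv
import io
import sqlite3

# Denormalized CSV: each row repeats its author and book (sample data).
data = ("author,book\n"
        "Knuth,TAOCP\n"
        "Knuth,Concrete Mathematics\n"
        "Graham,Concrete Mathematics\n")

authors, books, links = set(), set(), set()
for row in csv.DictReader(io.StringIO(data)):
    authors.add(row["author"])
    books.add(row["book"])
    links.add((row["author"], row["book"]))

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (name TEXT PRIMARY KEY);
    CREATE TABLE book (title TEXT PRIMARY KEY);
    CREATE TABLE author_book (author TEXT, book TEXT,
                              UNIQUE (author, book));
""")
# Deduped in Python, so no per-row SELECT is needed; INSERT OR IGNORE
# guards against rows already present from an earlier load.
conn.executemany("INSERT OR IGNORE INTO author VALUES (?)",
                 [(a,) for a in authors])
conn.executemany("INSERT OR IGNORE INTO book VALUES (?)",
                 [(b,) for b in books])
conn.executemany("INSERT OR IGNORE INTO author_book VALUES (?, ?)",
                 sorted(links))
n_links = conn.execute("SELECT count(*) FROM author_book").fetchone()[0]
```

With surrogate integer keys instead of natural keys, the same idea works with one SELECT per table after the entity inserts to build a name-to-id map for the link rows.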



[sqlalchemy] Threading Queries

2014-05-08 Thread James Meneghello
A couple of questions:

I'm writing an application using concurrent.futures (by process). The 
processes themselves are fairly involved - not simple functions. I'm using 
scoped_sessions and a context manager like so:

# db.py

engine = create_engine(sqlalchemy_url)
Session = scoped_session(sessionmaker(bind=engine))

@contextmanager
def db_session():
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.remove()

Using this context manager and something like the below code:

def process():
    with db_session() as db:
        # the function is obviously more involved than this
        u = User(name='bob')
        db.add(u)

    return u

def main():
    with db_session() as db:
        g = Group(name='peeps')
        user = process()
        user.group = g

        # this line breaks
        db.add(g)

I'm guessing this is because the call to db_session() is nested inside 
another, meaning that the thread-local session is being closed inside 
process(), and so when it gets passed back to main() the session object is 
gone. Is there a recommended way to handle this?
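One recommended shape is to have inner units of work accept the session as an argument instead of entering `db_session()` themselves, so only the outermost caller owns commit and teardown and objects stay attached. A self-contained sketch of that pattern (models and names invented for illustration):

```python
from contextlib import contextmanager

from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import relationship, scoped_session, sessionmaker
try:
    from sqlalchemy.orm import declarative_base
except ImportError:  # older SQLAlchemy releases
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Group(Base):
    __tablename__ = "groups"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    group_id = Column(Integer, ForeignKey("groups.id"))
    group = relationship(Group)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))

@contextmanager
def db_session():
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        Session.remove()

# The inner unit of work receives the session rather than opening its
# own db_session(); only the outermost caller commits and tears down,
# so the returned object is still attached when main() uses it.
def process(db):
    u = User(name="bob")
    db.add(u)
    return u

def main():
    with db_session() as db:
        g = Group(name="peeps")
        user = process(db)
        user.group = g
        db.add(g)

main()
count = Session().query(User).count()
```

The same idea extends to the futures-based workers: each process opens exactly one outermost `db_session()` and threads it through its helpers.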

Along similar lines, the application (using the session/engine creation as 
above) also has to use raw_connection() at a few points to access the 
copy_expert() cursor function from psycopg2. I'm getting very strange 
errors coming out of the copy functions - I suspect due to multiple copies 
occurring at once (there's ~4 processes running at once, but rarely copying 
at the same time). The copy code looks like this:


from db import engine

conn = engine.raw_connection()
cur = conn.cursor()
cur.copy_expert("COPY parts ({}) FROM STDIN WITH CSV ESCAPE "
                "E''".format(', '.join(ordering)), s)
conn.commit()

Does raw_connection() still pull from a connection pool, or could two calls 
to it at once potentially destroy things?

Some of the errors are below (the data going in is clean, I've manually 
checked it).

Thanks!


---

Traceback (most recent call last):
  File /usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py, 
line 940, in _execute_context
context)
  File 
/usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/default.py, line 
435, in do_execute
cursor.execute(statement, parameters)
psycopg2.DatabaseError: insufficient data in D message
lost synchronization with server: got message type 5, length 808464640

...

sqlalchemy.exc.DatabaseError: (DatabaseError) insufficient data in D 
message
lost synchronization with server: got message type 5, length 808464640

...

psycopg2.InterfaceError: connection already closed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File /usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py, 
line 508, in _rollback_impl
self._handle_dbapi_exception(e, None, None, None, None)
  File /usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py, 
line 1108, in _handle_dbapi_exception
exc_info
  File /usr/local/lib/python3.3/dist-packages/sqlalchemy/util/compat.py, 
line 174, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
  File /usr/local/lib/python3.3/dist-packages/sqlalchemy/util/compat.py, 
line 167, in reraise
raise value.with_traceback(tb)
  File /usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py, 
line 506, in _rollback_impl
self.engine.dialect.do_rollback(self.connection)
  File 
/usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/default.py, line 
405, in do_rollback
dbapi_connection.rollback()
sqlalchemy.exc.InterfaceError: (InterfaceError) connection already closed 
None None

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File /usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py, 
line 233, in connection
return self.__connection
AttributeError: 'Connection' object has no attribute 
'_Connection__connection'

...

Traceback (most recent call last):
  File /usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py, 
line 940, in _execute_context
context)
  File 
/usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/default.py, line 
435, in do_execute
cursor.execute(statement, parameters)
psycopg2.DatabaseError: lost synchronization with server: got message type 

...

Traceback (most recent call last):
  File /usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py, 
line 506, in _rollback_impl
self.engine.dialect.do_rollback(self.connection)
  File 
/usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/default.py, line 
405, in do_rollback
dbapi_connection.rollback()
psycopg2.InterfaceError: connection already closed

...
psycopg2.DatabaseError: error with no message from the libpq


[sqlalchemy] Re: Bulk Inserts and Unique Constraints

2014-03-25 Thread James Meneghello
I wasn't going to bother, but I had a look at doing this just out of 
curiosity, and these were the results:

executemany():

Inserting 424 entries: 0.3362s
Inserting 20,000 segments: 14.01s

COPY:

Inserting 425 entries: 0.04s
Inserting 20,000 segments: 0.3s

So a pretty massive boost. Thanks :)
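For anyone following along, the COPY payload can be staged as an in-memory CSV buffer before handing it to psycopg2's `cursor.copy_expert("COPY parts (...) FROM STDIN WITH CSV", buf)`. Only the buffer construction is shown here, since the COPY call itself needs a live connection; the rows and column values are invented:

```python
import csv
import io

# Rows destined for the bulk load; quoting is handled by the csv module,
# so embedded quotes survive the round trip through COPY ... WITH CSV.
rows = [(1, "entry one"), (2, 'entry "two"')]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(rows)
buf.seek(0)  # rewind so COPY would read from the start

payload = buf.getvalue()
```

With the buffer built, the earlier `engine.raw_connection()` snippet applies unchanged: pass `buf` as the file argument to `copy_expert()` and commit on the raw connection.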


On Monday, 24 March 2014 23:30:32 UTC+8, Jonathan Vanasco wrote:

 Since you're using Postgres... have you considered using python to 
 generate a COPY file ?

 Sqlalchemy doesn't seem to support it natively... maybe via 'text', but 
 your underlying psycopg2 driver does.

 it's way way way faster.  i've found it significantly faster than dropping 
 fkeys and using prepared statements.




Re: [sqlalchemy] Bulk Inserts and Unique Constraints

2014-03-24 Thread James Meneghello
Thanks for the quick reply!

This seems to work pretty well. I took out the batching (as it's already 
batched at a higher level) and modified it to suit the insertion of 
children as well (and reduced the unique constraint to a single field), and it 
appears to work.

with db_session() as db:
existing_entries = dict(
((entry.subject, entry) for entry in

db.query(Entry).filter(Entry.subject.in_(entries.keys())).options(subqueryload('segments')).all()
)
)

segment_inserts = []
for subject, entry in entries.items():
existing_entry = existing_entries.get(subject, None)
if existing_entry:
segments = dict(((s.segment, s) for s in 
existing_entry.segments))
for segment_number, segment in entry['segments'].items():
if int(segment_number) not in segments:
segment['entry_id'] = existing_entry.id
segment_inserts.append(segment)
else:
entry_id = engine.execute(Entry.__table__.insert(), 
entry).inserted_primary_key
for segment in entry['segments'].values():
segment['entry_id'] = entry_id[0]
segment_inserts.append(segment)

engine.execute(Segment.__table__.insert(), segment_inserts)


For 20,000 segments, this ends up being about 45 seconds and 1650 queries - 
2 to select all the entries and segments, 1 to insert the segments and the 
rest to insert parts. From here, however, I rewrote it a bit:

with db_session() as db:
    existing_entries = dict(
        (entry.subject, entry) for entry in
        db.query(Entry).filter(Entry.subject.in_(entries.keys())).all()
    )

    entry_inserts = []
    for subject, entry in entries.items():
        existing_entry = existing_entries.get(subject, None)
        if not existing_entry:
            entry_inserts.append(entry)

    engine.execute(Entry.__table__.insert(), entry_inserts)

    existing_entries = dict(
        (entry.subject, entry) for entry in
        db.query(Entry).filter(Entry.subject.in_(entries.keys())).options(subqueryload('segments')).all()
    )

    segment_inserts = []
    for subject, entry in entries.items():
        existing_entry = existing_entries.get(subject, None)
        if existing_entry:
            segments = dict((s.segment, s) for s in existing_entry.segments)
            for segment_number, segment in entry['segments'].items():
                if int(segment_number) not in segments:
                    segment['entry_id'] = existing_entry.id
                    segment_inserts.append(segment)
        else:
            log.error("i've made a huge mistake")

    engine.execute(Segment.__table__.insert(), segment_inserts)

This ends up being about 19 seconds, 6 queries for a clean dump, and a bit 
less if the table is already populated. Removing the unique indexes on both 
the entries and segments tables and replacing them with standard indexes 
saves about a second in a full dump, and about 6 seconds for an update. I'm 
pretty happy with where it is now, and I suspect most of the time (aside 
from the two insert calls) is being spent in Python. That said, if you have 
any tips for improvements I'd be all ears.

Thanks for the help!


Re: [sqlalchemy] Bulk Inserts and Unique Constraints

2014-03-24 Thread James Meneghello
Oops, I should add - the reason I can't use an itertools counter to 
pre-assign IDs is because the table is potentially being dumped to by 
multiple scripts, which is why I have to commit the parts prior to the 
segments (since engine.execute can't return multiple insert_ids).


[sqlalchemy] Re: Bulk Inserts and Unique Constraints

2014-03-24 Thread James Meneghello
That's effectively what I'm doing now. I'm not sure there's much I can 
speed up at this point - the SELECTs take about 0.05s, it's just the 
INSERTs taking a bulk of the time - 11-15s depending on the number of rows. 
That said, I'm still running on development and there'll be a significant 
boost once it's on proper hardware.

On Monday, 24 March 2014 22:44:09 UTC+8, Jonathan Vanasco wrote:

 The data comes in unordered and sometimes contains duplicates, so there's 
 a UniqueConstraint on Entry on sub, division, created.

 Have you tried pre-processing the list first?

 I've had similar situations when dealing with browser, user and app 
 analytics.

 I normally do a first pass to restructure the raw log file and note any 
 'selects' I might need to associate the records to; then I lock tables, 
 precache the selects, and do all the inserts. The speed pickups have been 
 great.
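A rough sketch of that first pass in plain Python, assuming the rows arrive as dicts and using the same key columns as the UniqueConstraint discussed above (sub, division, created); the field names are illustrative:

```python
def dedupe(rows):
    # Collapse in-memory duplicates keyed on the constraint columns;
    # setdefault keeps the first occurrence of each key.
    seen = {}
    for row in rows:
        key = (row['sub'], row['division'], row['created'])
        seen.setdefault(key, row)
    return list(seen.values())

rows = [
    {'sub': 'a', 'division': 1, 'created': 'c1', 'body': 'x'},
    {'sub': 'a', 'division': 1, 'created': 'c1', 'body': 'y'},  # duplicate key
    {'sub': 'b', 'division': 2, 'created': 'c2', 'body': 'z'},
]
print(len(dedupe(rows)))  # 2
```

After this pass the batch contains no internal duplicates, so a single bulk insert cannot trip the constraint against itself.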


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Bulk Inserts and Unique Constraints

2014-03-23 Thread James Meneghello
I'm having a few issues with unique constraints and bulk inserts. The 
software I'm writing takes data from an external source (a lot of it, 
anywhere from 1,000 rows per minute to 100-200k+), crunches it down into 
its hierarchy and saves it to the DB, to be aggregated in the background. 
The function handling the initial DB save is designed to work with about 
20-50k rows at a time - very little modification takes place, it's pretty 
much just grabbed and thrown into the table. Obviously the amount of data 
being saved somewhat excludes the use of the ORM in this particular table, 
but there are a number of other tables that benefit from using the ORM. 
Hence, the small stuff uses the ORM and the big stuff uses the Core.

The main problem I'm having is with the initial save. The data comes in 
unordered and sometimes contains duplicates, so there's a UniqueConstraint 
on Entry on sub, division, created. Unfortunately, this hampers the bulk 
insert - if there's a duplicate, it rolls back the entire insert and hence 
the entries aren't available to be referenced by the segments later. 
Obviously, capturing it in a try/catch would skip the whole block as well. 
Both Entry and Segment have the same problem - there are often duplicate 
segments. Since there's a large amount of data being pushed through it, I 
assume it's impractical to insert the elements individually - while there's 
only 100-200 entries per block, there's usually 20-50k segments.

Is there any way of forcing the engine to skip over duplicates and not 
rollback the transaction on exception? Code's below. Using Postgres, with 
psycopg2 as the driver.


engine.execute(Entry.__table__.insert(), entries)

segment_list = []
for sub, entry in entries.items():
    segments = entry.pop('segments')

    e = db.query(Entry)\
        .filter(Entry.sub == entry['sub'])\
        .filter(Entry.division == entry['division'])\
        .filter(Entry.created == entry['created']).first()

    for segment in segments:
        segment['entry_id'] = e.id
        segment_list.append(segment)

engine.execute(Segment.__table__.insert(), segment_list)

In addition, is there some way to pre-fetch data? Rather than query for 
each Entry, it'd be nice to pre-load all entries and save a couple hundred 
queries.
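The skip-duplicates question has a direct answer in software newer than this thread: PostgreSQL 9.5 added `INSERT ... ON CONFLICT DO NOTHING`, which SQLAlchemy 1.1+ exposes on its PostgreSQL dialect as `on_conflict_do_nothing()`. A compile-only sketch against an illustrative table (no database needed to see the generated SQL):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
entry = Table(
    'entry', metadata,
    Column('id', Integer, primary_key=True),
    Column('sub', String),
    Column('division', String),
    Column('created', String),
)

# Rows that collide on the unique key are silently skipped instead of
# aborting the whole batch.
stmt = insert(entry).on_conflict_do_nothing(
    index_elements=['sub', 'division', 'created'])
print(stmt.compile(dialect=postgresql.dialect()))
```

Executing such a statement with a list of parameter dicts gives the duplicate-tolerant bulk insert the original post asks for.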

Thanks!



Re: [sqlalchemy] Unit testing, mocking and dependency injection with SA Declarative.

2013-09-12 Thread James C
Regarding the Postgres in production and SQLite in testing differences, 
we've previously had problems with Postgres's Array - which doesn't exist 
in SQLite. Also watch out for the differences between how they interpret 
GROUP BY and DISTINCT - I've had this problem only today where a 
complicated query I wrote and tested in unittest (SQLite) doesn't work on 
development localhost (Postgres).

Alex, thanks very much for this blog post - very helpful. I was just 
talking about this strategy with my colleague today. I've previously switch 
over to Postgres in test to address problems with Array - however I had to 
bring the project back over again because of speed issues - it was taking 
far too long to do a set up and tear down of the DB for each test. It looks 
like your solution will be much quicker than our previous one because of 
your strategy with transactions.
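The transaction-per-test strategy mentioned above can be sketched as follows (names are illustrative, not from the thread): the schema is built once, each test runs on a connection with an externally opened transaction, and tearDown rolls that transaction back so no test's writes persist.

```python
import unittest
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)  # schema created once, not per test

class TransactionalTest(unittest.TestCase):
    def setUp(self):
        self.conn = engine.connect()
        self.trans = self.conn.begin()
        self.session = Session(bind=self.conn)

    def tearDown(self):
        self.session.close()
        self.trans.rollback()  # discard everything the test wrote
        self.conn.close()

    def test_insert_is_rolled_back(self):
        self.session.add(Widget(name='w'))
        self.session.flush()
        self.assertEqual(self.session.query(Widget).count(), 1)
```

Because only the outer transaction is rolled back, the fixed cost per test is a connection checkout rather than a full schema create/drop.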

Cheers,

James



On Wednesday, September 11, 2013 6:12:07 PM UTC+1, Alex Grönholm wrote:

 I wrote a blog post on this very topic recently: 
 http://alextechrants.blogspot.fi/2013/08/unit-testing-sqlalchemy-apps.html


 tiistai, 10. syyskuuta 2013 19.43.35 UTC+3 Toph Burns kirjoitti:

  Could you use an in-memory, sqlite db for your testing?  For our 
 applications, we have an initialization function that loads the database 
 connection strings from a config (.ini) file, passing those on to 
 create_engine.  In production it's a postgresql connection string, for 
 test, it's 'sqlite:///:memory:'


   Toph Burns | Software Engineer
 5885 Hollis St.  Suite 100
 Emeryville, CA 94608
 510-597-4797
 bu...@amyris.com
   --
 *From:* sqlal...@googlegroups.com [sqlal...@googlegroups.com] on behalf 
 of Michel Albert [exh...@gmail.com]
 *Sent:* Tuesday, September 10, 2013 1:46 AM
 *To:* sqlal...@googlegroups.com
 *Subject:* [sqlalchemy] Unit testing, mocking and dependency injection 
 with SA Declarative.

   I am trying to wrap my head around how to do Dependency Injection with 
 SQLAlchemy and I am walking in circles. 

  I want to be able to mock out SA for most of my tests. I trust SA and 
 don't want to test serialisation into the DB. I just want to test my own 
 code. So I was thinking to do dependency injection, and mock out SA during 
 testing.

  But I don't know what to mock out, how and when to set up the session 
 properly, without doing it at the module level (which causes unwanted 
 side-effects only by importing the module).

  The only solution which comes to mind is to have one singleton which 
 deals with that. But that feels very unpythonic to me and I am wondering if 
 there's a better solution.

  I also saw that create_engine has an optional module kwarg, which I 
 could mock out. But then SA begins complaining that the return types are 
 not correct. And I don't want to specify return values for every possible 
 db-module call. That's way out of scope of my tests. I am not calling 
 anything on the db-module. That's SA's job, and, as said, I already trust 
 SA.

  Whenever I work on this I always run into the session_maker 
 initialisation as well. The examples do this at the module level, which 
 really makes me feel uneasy.

  Any tips? Just prodding myself in the right direction might help me out 
 enough.
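One common shape for this, sketched in plain Python with made-up names: pass the session factory in as a constructor argument instead of creating it at import time, so tests can hand in a stub and never touch SQLAlchemy at all.

```python
class ReportService:
    def __init__(self, session_factory):
        # Injected dependency: in production this is sessionmaker(bind=engine),
        # in tests it is any callable returning a session-like object.
        self._session_factory = session_factory

    def row_count(self, model):
        session = self._session_factory()
        try:
            return session.query(model).count()
        finally:
            session.close()

# A hand-rolled stub stands in for the whole ORM during tests:
class StubQuery:
    def count(self):
        return 3

class StubSession:
    def query(self, model):
        return StubQuery()

    def close(self):
        pass

print(ReportService(lambda: StubSession()).row_count(object))  # 3
```

This avoids both the module-level side effects and the need to mock out the DBAPI module that `create_engine(module=...)` would require.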
  




Re: [sqlalchemy] views declarative?

2013-04-19 Thread James Hartley
On Wed, Apr 17, 2013 at 2:59 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 James Hartley jjhart...@gmail.com writes:
  Is it possible to map Table instances back to classes defined through
  declarative_base()?

 the typical form is:

 Base = declarative_base()

 some_table = Table('some_table', Base.metadata, Column('id', Integer,
 primary_key=True))

 class SomeClass(Base):
__table__ = some_table


Thanks all for the responses.

The Wiki recipe for views:

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/Views

...creates & drops the defined view on-the-fly.  This raises two
questions.  Is there a way to allow the view created in Python code to
persist?  Likewise, is there a way to take advantage of an existing view
defined at the database level?  An obvious workaround is to create a
duplicate view with a different name, but I'm curious as to whether the two
can be merged.
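For the second question, one pattern that works is to describe the existing view's columns in a Table and point `__table__` at it; SQLAlchemy will SELECT from a view as though it were a table. A sketch with illustrative names (the Table here is only a description of the view, and nothing is emitted for it unless it is passed to a create call):

```python
from sqlalchemy import Column, Integer, Table
from sqlalchemy.orm import configure_mappers, declarative_base

Base = declarative_base()

# Describe the columns of a view that already exists in the database;
# the ORM needs at least one column marked as a key.
word_stats = Table(
    'word_stats', Base.metadata,
    Column('word_id', Integer, primary_key=True),
    Column('length', Integer),
)

class WordStats(Base):
    __table__ = word_stats

configure_mappers()
print(WordStats.__table__.name)  # word_stats
```

Queries against `WordStats` then run against the database-level view with no duplicate definition needed.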

Thanks again for the insight shared.





[sqlalchemy] views declarative?

2013-04-17 Thread James Hartley
Starting with the Wiki article on implementing views:

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/Views

Is it possible to map Table instances back to classes defined through
declarative_base()?  I'm using SQLAlchemy 0.7.1.

Thanks.





Re: [sqlalchemy] Re: views declarative?

2013-04-17 Thread James Hartley
On Wed, Apr 17, 2013 at 6:20 AM, Lele Gaifax l...@metapensiero.it wrote:

 James Hartley jjhart...@gmail.com writes:
  Is it possible to map Table instances back to classes defined through
  declarative_base()?

 ...I assume you are asking whether you can map a view onto a
 Python class using the declarative layer. If so, yes, SQLAlchemy does
 not care about how the underlying table is implemented, it will issue
 the same SQL in either cases.


Thanks for your reply.  Any hints on how to tie a class inheriting from
Base to a Table?  I have been unsuccessful in bridging the two.





Re: [sqlalchemy] implementing one-to-many relationship?

2013-04-14 Thread James Hartley
On Thu, Apr 4, 2013 at 1:39 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 the requery is due to the default expire_on_commit of session.commit():
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#committing.  Feel
 free to disable this feature if you don't need it.

 as far as one-to-many, I don't see the use of relationship() here, you'd
 likely find it easier to use rather than assigning primary key identities
 to foreign key attributes directly:

 http://docs.sqlalchemy.org/en/rel_0_8/orm/tutorial.html#building-a-relationship



 On Apr 3, 2013, at 2:49 PM, James Hartley jjhart...@gmail.com wrote:

I am finding it curious in the following output that once the subordinate
 tuples are committed, SQLAlchemy is querying the database once again to
 retrieve the primary keys of the second table.  Am I performing too much
 work in client code?

 Thanks, Michael!  expire_on_commit=False was exactly what I needed to stop
redundant queries.

I now need to go back one step, & ask about sqlalchemy.orm.relationship.
Below are the table/class definitions I am using:

=8---
class Heartbeat(Base):
    __tablename__ = 'heartbeat'

    id = Column(Integer, primary_key=True)
    timestamp = Column(DateTime, unique=True, nullable=False)
    elapsed_time = Column(Float, CheckConstraint('elapsed_time > 0'),
        nullable=False)

    def __init__(self, elapsed_time):
        """Constructor."""
        self.timestamp = datetime.datetime.now()
        self.elapsed_time = elapsed_time

    def __repr__(self):
        """Overload."""
        return 'Heartbeat({},{},{})'.format(self.id,
            self.timestamp, self.elapsed_time)

class Platform(Base):
    __tablename__ = 'platforms'

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    #first_heartbeat_id = Column(Integer, ForeignKey('heartbeat.id'),
    #    nullable=False)

    first_heartbeat_id = Column(Integer,
        CheckConstraint('first_heartbeat_id != last_heartbeat_id'),
        ForeignKey('heartbeat.id'), nullable=False)
    last_heartbeat_id = Column(Integer,
        CheckConstraint('last_heartbeat_id != first_heartbeat_id'),
        ForeignKey('heartbeat.id'), nullable=True)

    UniqueConstraint('name', 'first_heartbeat_id')

    first_heartbeat = relationship('Heartbeat')
    #last_heartbeat = relationship('Heartbeat')

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return 'Platform({},{},{},{})'.format(self.id, self.name,
            self.first_heartbeat_id, self.last_heartbeat_id)
=8---

Numerous foreign keys in various tables refer back to the timestamp
maintained in the heartbeat table, however in the case of the platforms
table, timestamps are used to designate when the platform was originally
detected & deactivated.  If I don't specify a relationship() & work
directly with the primary key ID values, my code works without specifying a
relationship().  If I use the code above, which abstracts away the direct
use of assigning to first_heartbeat_id & last_heartbeat_id in favor of
dealing with class instances, I get the following error upon creating any
Heartbeat instance:

ArgumentError: Could not determine join condition between parent/child
tables on relationship Platform.first_heartbeat.  Specify a 'primaryjoin'
expression.  If 'secondary' is present, 'secondaryjoin' is needed as well.

I am guessing that this arises from relationship()'s focus on types.  Since
I have identical foreign keys when it comes to where the foreign keys
point, I am surmising that the underlying code cannot determine that
first_heartbeat actually maps to first_heartbeat_id.  Is this correct?
If this is correct, I am not lobbying for SQLAlchemy to address this
strange edge case.  I'm only trying to understand how to leverage the
mapping of Python objects to the underlying SQL tables better.
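That guess is essentially right: with two foreign keys pointing at the same parent table, relationship() cannot pick a join path on its own, so each relationship must name its column via `foreign_keys=`. A stripped-down sketch (illustrative, not the full model from above):

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import configure_mappers, declarative_base, relationship

Base = declarative_base()

class Heartbeat(Base):
    __tablename__ = 'heartbeat'
    id = Column(Integer, primary_key=True)

class Platform(Base):
    __tablename__ = 'platforms'
    id = Column(Integer, primary_key=True)
    first_heartbeat_id = Column(Integer, ForeignKey('heartbeat.id'))
    last_heartbeat_id = Column(Integer, ForeignKey('heartbeat.id'))

    # Each relationship names the FK column it should join along, which
    # resolves the "Could not determine join condition" ArgumentError.
    first_heartbeat = relationship(Heartbeat,
                                   foreign_keys=[first_heartbeat_id])
    last_heartbeat = relationship(Heartbeat,
                                  foreign_keys=[last_heartbeat_id])

configure_mappers()  # would raise ArgumentError without foreign_keys=
```

A `primaryjoin=` expression works too, but `foreign_keys=` is the shorter fix when the ambiguity is only about which of several FK columns to follow.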

Thanks for your insight.





[sqlalchemy] implementing one-to-many relationship?

2013-04-03 Thread James Hartley
I have implemented a (simplified) one-to-many relationship which works, but
I suspect I am reimplementing functionality in a suboptimal fashion which
is already done by SQLAlchemy.  The following short example:

8---
#!/usr/bin/env python

import datetime
from sqlalchemy import create_engine, Column, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship, backref
from sqlalchemy.schema import UniqueConstraint
from sqlalchemy.dialects.sqlite import INTEGER, TEXT, DATETIME

Base = declarative_base()
Session = sessionmaker()
engine = create_engine('sqlite:///test.db', echo=True)

class Subordinate(Base):
    __tablename__ = 'subordinate'

    id = Column(INTEGER, primary_key=True)
    name = Column(TEXT, unique=True, nullable=False)
    discovered = Column(DATETIME, nullable=False)
    discontinued = Column(DATETIME, nullable=True)

    def __init__(self, name):
        """constructor"""
        self.name = name
        self.discovered = datetime.datetime.now()

    def __repr__(self):
        """string representation overload"""
        return 'subordinate(%d,%s,%s)' % (self.id, self.discovered,
            self.discontinued)

class Record(Base):
    __tablename__ = 'record'

    id = Column(INTEGER, primary_key=True)
    subordinate_id = Column(INTEGER, ForeignKey('subordinate.id'),
        nullable=False)
    timestamp = Column(DATETIME, nullable=False)
    UniqueConstraint('subordinate_id', 'timestamp', name='occurrence')

    def __init__(self, subordinate):
        """constructor"""
        self.subordinate_id = subordinate.id
        self.timestamp = datetime.datetime.now()

    def __repr__(self):
        """string representation overload"""
        return 'Snapshot(%s,%s,%s)' % (self.id,
            self.subordinate_id, self.timestamp)

if __name__ == '__main__':
    Session.configure(bind=engine)
    session = Session()

    Base.metadata.create_all(engine)

    d = {'subordinates': {}, 'records': {}}

    lst = []
    for p in ('abc', 'ijk', 'xyz'):
        d['subordinates'][p] = Subordinate(p)
        lst.append(d['subordinates'][p])
    session.add_all(lst)
    session.commit()

    lst = []
    for p in ('abc', 'ijk', 'xyz'):
        d['records'][p] = Record(d['subordinates'][p])
        lst.append(d['records'][p])
    session.add_all(lst)
    session.commit()
8---
I am finding it curious in the following output that once the subordinate
tuples are committed, SQLAlchemy is querying the database once again to
retrieve the primary keys of the second table.  Am I performing too much
work in client code?

Any insight shared would be appreciated.

8---
2013-04-03 13:35:38,291 INFO sqlalchemy.engine.base.Engine ()
2013-04-03 13:35:38,293 INFO sqlalchemy.engine.base.Engine COMMIT
2013-04-03 13:35:38,297 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2013-04-03 13:35:38,299 INFO sqlalchemy.engine.base.Engine INSERT INTO
subordinate (name, discovered, discontinued) VALUES (?, ?, ?)
2013-04-03 13:35:38,300 INFO sqlalchemy.engine.base.Engine ('abc',
'2013-04-03 13:35:38.296111', None)
2013-04-03 13:35:38,301 INFO sqlalchemy.engine.base.Engine INSERT INTO
subordinate (name, discovered, discontinued) VALUES (?, ?, ?)
2013-04-03 13:35:38,301 INFO sqlalchemy.engine.base.Engine ('ijk',
'2013-04-03 13:35:38.296223', None)
2013-04-03 13:35:38,302 INFO sqlalchemy.engine.base.Engine INSERT INTO
subordinate (name, discovered, discontinued) VALUES (?, ?, ?)
2013-04-03 13:35:38,302 INFO sqlalchemy.engine.base.Engine ('xyz',
'2013-04-03 13:35:38.296309', None)
2013-04-03 13:35:38,303 INFO sqlalchemy.engine.base.Engine COMMIT
2013-04-03 13:35:38,305 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2013-04-03 13:35:38,306 INFO sqlalchemy.engine.base.Engine SELECT
subordinate.id AS subordinate_id, subordinate.name AS subordinate_name,
subordinate.discovered AS subordinate_discovered, subordinate.discontinued
AS subordinate_discontinued
FROM subordinate
WHERE subordinate.id = ?
2013-04-03 13:35:38,306 INFO sqlalchemy.engine.base.Engine (1,)
2013-04-03 13:35:38,309 INFO sqlalchemy.engine.base.Engine SELECT
subordinate.id AS subordinate_id, subordinate.name AS subordinate_name,
subordinate.discovered AS subordinate_discovered, subordinate.discontinued
AS subordinate_discontinued
FROM subordinate
WHERE subordinate.id = ?
2013-04-03 13:35:38,309 INFO sqlalchemy.engine.base.Engine (2,)
2013-04-03 13:35:38,311 INFO sqlalchemy.engine.base.Engine SELECT
subordinate.id AS subordinate_id, subordinate.name AS subordinate_name,
subordinate.discovered AS subordinate_discovered, subordinate.discontinued
AS subordinate_discontinued
FROM subordinate
WHERE subordinate.id = ?
2013-04-03 13:35:38,311 INFO sqlalchemy.engine.base.Engine (3,)
2013-04-03 13:35:38,313 INFO sqlalchemy.engine.base.Engine INSERT INTO
record (subordinate_id, timestamp) VALUES (?, ?)
2013-04-03 13:35:38,313 INFO sqlalchemy.engine.base.Engine (1, '2013-04-03
13:35:38.308225')

[sqlalchemy] Re: Dialect for Vertica db connectivity ?

2013-04-01 Thread James Casbon
I put a rework of the code posted by Bo into a package 
https://pypi.python.org/pypi/vertica-sqlalchemy/0.1

Selects, joins, and table introspection work.   Let me know if you can use it.

Does anyone have an email for Bo so I can attribute him and check the 
license?

thanks,
James


On Saturday, 16 March 2013 22:44:56 UTC, Femi Anthony wrote:

 Jonathan, thanks a lot. I'll test it out using the PostgreSQL dialect.

 Femi

 On Friday, March 15, 2013 4:06:33 PM UTC-4, Jonathan Vanasco wrote:

 @Femi - 

 I did a quick search online, but couldn't find any current ( since HP 
 acquisition ) documentation. 

 HOWEVER -- all of the old documentation and Q&As that are still online 
 talk about Vertica reimplementing the PostgreSQL syntax and 
 functions.  That's in line with what I remembered earlier, where the 
 psql client was even their recommended command-line interface. 
 ( Also, it was invented/started by the same guy who started 
 PostgreSQL ) 

 It's possible that things have changed, but I would try treating it as 
 PostgreSQL.  Unless they did a HUGE 360 pivot, I think that should 
 work. 







[sqlalchemy] max() min() string lengths?

2013-01-03 Thread James Hartley
Embarrassingly, I've gotten lost in calling SQL functions in SQLAlchemy
0.7.1.

I can boil the problem down to the following table structure:

CREATE TABLE words (
    id INTEGER NOT NULL,
    timestamp DATETIME NOT NULL,
    word TEXT NOT NULL,
    PRIMARY KEY (id),
    UNIQUE (word)
);

...where I would like to find the maximum & minimum stored string lengths,
e.g.

SELECT MAX(LENGTH(word)), MIN(LENGTH(word)) FROM words;

The code below constructs & populates the table correctly, but translating
the above SQL into something more Pythonic is eluding me.  Any suggestions
would be welcomed, as I'm in a rut.

Thanks.

#8

#!/usr/bin/env python

from datetime import datetime

from sqlalchemy import create_engine, Column, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.dialects.sqlite import INTEGER, DATETIME, TEXT

Base = declarative_base()

def get_dbname():
    return 'test.db'

class Word(Base):
    __tablename__ = 'words'

    id = Column(INTEGER, primary_key=True)
    timestamp = Column(DATETIME, nullable=False, default=datetime.now())
    word = Column(TEXT, nullable=False, unique=True)

    def __init__(self, word):
        self.word = word

    def __repr__(self):
        return 'Word(%d, %s, %s)' % (self.id, self.timestamp, self.word)

if __name__ == '__main__':
    engine = create_engine('sqlite:///' + get_dbname(), echo=True)
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    session = Session()

    words = ('THE OF AND TO A IN THAT IS WAS HE FOR IT WITH AS HIS ON BE AT '
             'SAME ANOTHER KNOW WHILE LAST').split()

    for w in words:
        session.add(Word(w))
    session.commit()

    print 'total words = %d' % session.query(Word).count()

    # minimum length = ?
    # maximum length = ?
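The two placeholder comments above can be answered by wrapping `func.length()` in `func.min()`/`func.max()` within a single query. A self-contained sketch (a trimmed-down model, in-memory SQLite):

```python
from sqlalchemy import create_engine, Column, Integer, String, func
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Word(Base):
    __tablename__ = 'words'
    id = Column(Integer, primary_key=True)
    word = Column(String, unique=True, nullable=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all(Word(word=w) for w in ['A', 'THE', 'ANOTHER'])
session.commit()

# Both aggregates come back in one round trip as a single row.
shortest, longest = session.query(
    func.min(func.length(Word.word)),
    func.max(func.length(Word.word)),
).one()
print(shortest, longest)  # 1 7
```

This emits the same `SELECT MIN(LENGTH(word)), MAX(LENGTH(word)) FROM words` the question sketches in raw SQL.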




Re: [sqlalchemy] fractional second percision- mysql

2012-07-17 Thread James
I have created a feature request ticket for MySQLdb to add fractional 
second support:

http://sourceforge.net/tracker/?func=detail&aid=3545195&group_id=22307&atid=374935

Currently, I am still using my patched version of MySQLdb/times.py however 
I did notice a slight formatting issue with my original patch. To fix the 
issue, the return statement of 'def format_TIMEDELTA(v)' now reads:

return '%d %d:%d:%d.%06d' % (v.days, hours, minutes, seconds, microseconds)
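A quick pure-Python check of the patched formatter with the corrected `%06d` microsecond padding (no MySQLdb needed to verify the string shape):

```python
from datetime import timedelta

def format_TIMEDELTA(v):
    # Mirrors the patched MySQLdb/times.py formatter discussed above.
    microseconds = v.microseconds
    seconds = float(v.seconds) % 60
    minutes = int(v.seconds / 60) % 60
    hours = int(v.seconds / 3600) % 24
    return '%d %d:%d:%d.%06d' % (v.days, hours, minutes, seconds,
        microseconds)

print(format_TIMEDELTA(
    timedelta(days=1, hours=2, minutes=3, seconds=4, microseconds=50)))
# 1 2:3:4.000050
```

The zero-padded `%06d` matters: without it, 50 microseconds would render as `.50` and be misparsed as half a second.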

Hopefully, MySQLdb will add support on their end, so you can proceed with 
committing the hypothetical changes that you suggested, which work for 
me.

Thank you for your help.




Re: [sqlalchemy] fractional second precision - mysql

2012-07-13 Thread James


 Yeah, I kind of suspected MySQLdb might have problems here.  You need to 
 file a bug report with the DBAPI's bug tracker:
 http://sourceforge.net/tracker/?group_id=22307&atid=374932


I didn't see anything on the subject of fractional second support, but I 
ended up getting the insert working by modifying the format_TIMEDELTA function 
in MySQLdb/times.py to process microseconds:

def format_TIMEDELTA(v):
    microseconds = v.microseconds
    seconds = float(v.seconds) % 60
    minutes = int(v.seconds / 60) % 60
    hours = int(v.seconds / 3600) % 24

    return '%d %d:%d:%d.%d' % (v.days, hours, minutes, seconds,
                               microseconds)

Thank you for your help with the SQLalchemy side of things, redefining how 
the DDL is emitted for the type and whatnot. Hopefully we can see these 
changes in future releases of the 0.7 series.

--James




Re: [sqlalchemy] fractional second precision - mysql

2012-07-10 Thread James



 I have not made any changes, have only proposed some hypothetical changes 
 for the 0.7 series.  I don't have this newest version of MySQL installed, 
 so I was asking you to test the workaround I gave and/or the patch, to 
 ensure it solves all the problems fully.  This testing would also establish 
 that the MySQL DBAPI is properly receiving/returning the microseconds field.

 If the workarounds I gave solve your problem fully, then I can commit the 
 patch to SQLAlchemy and resolve that we are doing all that's needed for 
 fractional time support.


I have tried implementing your suggestions. Unfortunately, your workaround 
did not solve my problem fully. I successfully created the FracTime type, 
which extends Time, and it shows a field for fractional seconds. However, 
when I try to insert values into the FracTime column, I am still unable 
to specify the fractional-seconds part. Even after modifying Time's process 
method in lib/sqlalchemy/dialects/mysql/base.py to accept microseconds, I 
was still unable to get it to work. 

To ensure that it was not a problem with sqlalchemy, I turned on logging to 
look at what was being passed to the DBAPI. What I see is:

...
INFO:sqlalchemy.engine.base.Engine.0x...a3d0:(4L, 7.48, 0.27, 
datetime.timedelta
(0, 8, 26), 93, 0, 0, 62032, 0, 0, 19, 15573, 0) 
...

It looks like datetime.timedelta is holding the microseconds correctly 
(that is, 0 minutes, 8 seconds and 26 microseconds), and sqlalchemy is 
correctly passing the microsecond value on to the DBAPI. Combined with 
your workaround, this seems to confirm that the problem lies in my 
DBAPI's (MySQLdb's) communication with the database.

Please let me know if you have any more ideas. Thank you for your 
suggestions.

-James 




Re: [sqlalchemy] fractional second precision - mysql

2012-07-08 Thread James
Michael,

Thank you for your response. Your help is greatly appreciated. Just to be 
clear, are these changes that you have made and that I can access if I 
update to 0.7 or later? And would I simply need to specify the new FracTime 
type instead of Time?

Example:
meta_timings = Table('meta_timings', metadata,
    ...
    Column('elapsed', FracTime, nullable=False),
    ...)

--James




[sqlalchemy] fractional second precision - mysql

2012-07-03 Thread James
Support,

I recently updated our MySQL database to version 5.6.5 with hopes of
using the newly added fractional second support for the Time datatype.
Using SQLalchemy version 0.6.5 to create our table definitions, I add
the fractional second precision parameter to the time type as shown in
the MySQL documentation:

meta_timings = Table('meta_timings', metadata,
    ...
    Column('elapsed', Time(2), nullable=False),
    ...)

However, when I build the database with these new table definitions,
sqlalchemy seems to ignore the new time parameters and creates the
table using the default time command without fractional second
precision. The resulting table definition is:

CREATE TABLE `meta_timings` (
...
 `elapsed` time NOT NULL,
...
) ;

When I attempt to bypass this problem by manually creating this table
definition by using `session.execute(''' table definition '''), the
MySQL database renders the time in the correct format with two
fractional seconds places after the seconds place (00:00:00.00 instead
of 00:00:00). This is a step closer, however, when I use timedelta to
insert the data, everything except the fractional seconds is parsed
and put into the time datatype correctly. The result is that every
time has .00 fractional seconds (example xx:xx:xx.00).

Here is how I am parsing our data using timedelta:

# Match time values
time_re = re.match(r'(\d{1,2}):(\d{1,2})\.(\d{1,2})', v)
if time_re:
    minutes, seconds, milliseconds = [int(x) for x in time_re.groups()]
    td = timedelta(minutes=minutes, seconds=seconds,
                   milliseconds=milliseconds)
    return td

Does sqlalchemy support this type of operation?
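The parsing step above can be exercised stand-alone. This sketch keeps the original's milliseconds interpretation of the two-digit fractional field (whether the data really means milliseconds or hundredths of a second is an assumption):

```python
import re
from datetime import timedelta

def parse_elapsed(v):
    # match mm:ss.ff-style values, as in the snippet above
    time_re = re.match(r'(\d{1,2}):(\d{1,2})\.(\d{1,2})', v)
    if not time_re:
        return None
    minutes, seconds, milliseconds = [int(x) for x in time_re.groups()]
    return timedelta(minutes=minutes, seconds=seconds,
                     milliseconds=milliseconds)

print(parse_elapsed('12:34.56').total_seconds())  # 754.056
```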




Re: [sqlalchemy] defining foreign keys?

2011-10-26 Thread James Hartley
On Wed, Oct 26, 2011 at 10:15 AM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Oct 26, 2011, at 1:04 PM, James Hartley wrote:

 On Wed, Oct 26, 2011 at 2:22 AM, Stefano Fontanelli 
 s.fontane...@asidev.com wrote:


 Hi James,
 you cannot define two mapper properties that use the same name.

 If you wish to locate classes based on their string name as you are doing
 in relationship('User') here, the calling class (Address) must share the
 same registry of names that the desired class (User) does.  This registry is
 part of the Base.   Therefore your entire application needs to have
 exactly one usage of declarative_base(), where all descending classes use
 the same Base object,  and not one usage per file.


This is what I had missed.  Moving the call to declarative_base() to its own
module and importing it as needed has taken care of all remaining problems.
Thank you Michael & Stefano for taking the time to clear this up.  I
sincerely appreciate it.
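The single-registry rule can be illustrated without a database. The toy `make_base` below is a hypothetical stand-in for declarative_base(); it shows why a class registered under one Base cannot be resolved by string name from another:

```python
def make_base():
    # each call creates an independent name registry, just as each
    # declarative_base() call does in SQLAlchemy
    registry = {}

    class Base(object):
        def __init_subclass__(cls, **kw):
            super().__init_subclass__(**kw)
            registry[cls.__name__] = cls

        @staticmethod
        def resolve(name):
            return registry[name]

    return Base

BaseA = make_base()
BaseB = make_base()

class User(BaseA): pass
class Address(BaseB): pass

print(BaseA.resolve('User'))        # found: same registry
try:
    BaseA.resolve('Address')        # defined against a different Base
except KeyError:
    print('Address not in BaseA registry')
```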

Jim




[sqlalchemy] defining foreign keys?

2011-10-25 Thread James Hartley
I suspect this is user error, but I am not ferreting out my mistake.

I'm porting some older code to SQLAlchemy 0.7.1 on top of Python 2.7.1.  Code
which had originally implemented foreign keys without using REFERENCES
clauses in CREATE TABLE statements previously ran fine.  Now, adding formal
foreign keys isn't working.  I have boiled this down to the following
variant on the example found in the SQLAlchemy Documentation:

=8--
#!/usr/bin/env python


from sqlalchemy import create_engine, Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship, backref

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)

    addresses = relationship('Address', order_by='Address.id',
                             backref='user')

    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

    def __repr__(self):
        return "User('%s', '%s', '%s', '%s')" % (self.id, self.name,
                                                 self.fullname, self.password)

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String, nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'))

    user = relationship('User', backref=backref('addresses', order_by=id))

    def __init__(self, email):
        self.email_address = email

    def __repr__(self):
        return "Address('%s', '%s', '%s')" % (self.id, self.email_address,
                                              self.user_id)

if __name__ == '__main__':
    engine = create_engine('sqlite:///:memory:', echo=True)
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    session = Session()

    user = User('jdoe', 'John Doe', 'password')
    print user
    session.add(user)
    session.commit()
=8--

Execution yields the following traceback:

=8--
Traceback (most recent call last):
  File "./test.py", line 51, in <module>
    user = User('jdoe', 'John Doe', 'password')
  File "<string>", line 2, in __init__
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/instrumentation.py", line 309, in _new_state_if_none
    state = self._state_constructor(instance, self)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 432, in __get__
    obj.__dict__[self.__name__] = result = self.fget(obj)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/instrumentation.py", line 157, in _state_constructor
    self.dispatch.first_init(self, self.class_)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/event.py", line 274, in __call__
    fn(*args, **kw)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/mapper.py", line 2787, in _event_on_first_init
    configure_mappers()
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/mapper.py", line 2719, in configure_mappers
    mapper._post_configure_properties()
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/mapper.py", line 1035, in _post_configure_properties
    prop.init()
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/interfaces.py", line 121, in init
    self.do_init()
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/properties.py", line 905, in do_init
    self._generate_backref()
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/properties.py", line 1376, in _generate_backref
    self, mapper))
sqlalchemy.exc.ArgumentError: Error creating backref 'user' on relationship
'User.addresses': property of that name exists on mapper
'Mapper|Address|addresses'
=8--

Any insight shared would be greatly appreciated.  Thanks.




[sqlalchemy] support of regular expressions?

2011-09-11 Thread James Hartley
I'm needing to extract domain information from stored email addresses --
something akin to the following:

SELECT DISTINCT (REGEXP_MATCHES(email, '@(.+)$'))[1] AS domain
FROM tablename
WHERE email ~ '@.+$'

While I was able to gather the information through session.execute(), I
didn't find an equivalent filter (?) in the code for regular expression
related functions.  Is this too database specific, or did I miss something?
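Regular-expression operators were indeed too database-specific for the core API at the time (a portable `Column.regexp_match` operator only arrived much later, in SQLAlchemy 1.4), so session.execute() with dialect-specific SQL is a reasonable answer. As a stand-alone sketch of the same filtering, the stdlib sqlite3 module lets you register a REGEXP function and extract the domain client-side (the table and addresses here are made up):

```python
import re
import sqlite3

conn = sqlite3.connect(':memory:')
# SQLite's REGEXP operator delegates to a user-defined function
conn.create_function('regexp', 2,
                     lambda pattern, value: re.search(pattern, value) is not None)
conn.execute('CREATE TABLE tablename (email TEXT)')
conn.executemany('INSERT INTO tablename VALUES (?)',
                 [('a@example.com',), ('b@example.com',),
                  ('c@other.org',), ('not-an-address',)])

rows = conn.execute("SELECT DISTINCT email FROM tablename WHERE email REGEXP '@.+$'")
# SQLite has no REGEXP_MATCHES, so split out the domain in Python
domains = sorted({email.split('@', 1)[1] for (email,) in rows})
print(domains)  # ['example.com', 'other.org']
```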

Thanks, and thank you for SQLAlchemy.

Jim




Re: [sqlalchemy] SQLAlchemy 0.7.2 Released

2011-08-02 Thread James Studdart
I'd just like to echo Martin's statement, thank you very much. Just your 
responses to this list seem like a full-time job, let alone the 
development of SQLAlchemy, which continues to surprise and impress me 
with its features and support.


James.

On 08/02/2011 09:28 AM, Martijn Moeling wrote:

Thank you for all the effort you put in.
SQLAlchemy has been a proven tool for me and as it seems for many others.




On Aug 1, 2011, at 02:17 , Michael Bayer wrote:

   

SQLAlchemy version 0.7.2 is now available.

A lot has been going on leading up to this release, and there was actually a 
bunch more I've wanted to do;  but as we went about six weeks since the last 
release we've accumulated at least twenty-five bug fixes, and it's time for 
them to go out.  Work continues towards the next release.

This release features a relatively big change to the mechanics of joined and subquery 
eager loading, which is that when invoked from a Query (as opposed to from a lazy load), 
the eager loader will traverse the graph of objects fully regardless of collections and 
attributes that are already loaded, populating any expired or not-yet-loaded attributes 
all the way down the hierarchy.   Previously it tried to save time by not descending into 
already loaded subtrees.   This is to better support the use case of using eager loading 
in order to fully populate a tree, such that it can be detached and sent to a cache in a 
fully loaded state.  It is also behaviorally closer to the spirit of "I asked for X, 
I should get X", i.e. if you say subqueryload(), you'll get your subquery no matter 
what.

Other than that there were a *lot* of ORM fixes, most of which have been also applied to the 
0.6 branch and will be in 0.6.9.  Also some additional 0.6-0.7 regressions fixed, and 
some fixes to the new Mutable extension including one which was kind of a show 
stopper.

Download SQLAlchemy 0.7.2 at:

http://www.sqlalchemy.org/download.html

Changelog follows.

0.7.2
=
- orm
  - Feature enhancement: joined and subquery
loading will now traverse already-present related
objects and collections in search of unpopulated
attributes throughout the scope of the eager load
being defined, so that the eager loading that is
specified via mappings or query options
unconditionally takes place for the full depth,
populating whatever is not already populated.
Previously, this traversal would stop if a related
object or collection were already present leading
to inconsistent behavior (though would save on
loads/cycles for an already-loaded graph). For a
subqueryload, this means that the additional
SELECT statements emitted by subqueryload will
invoke unconditionally, no matter how much of the
existing graph is already present (hence the
controversy). The previous behavior of stopping
is still in effect when a query is the result of
an attribute-initiated lazyload, as otherwise an
N+1 style of collection iteration can become
needlessly expensive when the same related object
is encountered repeatedly. There's also an
as-yet-not-public generative Query method
_with_invoke_all_eagers()
which selects old/new behavior [ticket:2213]

  - A rework of replacement traversal within
the ORM as it alters selectables to be against
aliases of things (i.e. clause adaption) includes
a fix for multiply-nested any()/has() constructs
against a joined table structure.  [ticket:2195]

  - Fixed bug where query.join() + aliased=True
from a joined-inh structure to itself on
relationship() with join condition on the child
table would convert the lead entity into the
joined one inappropriately.  [ticket:2234]
Also in 0.6.9.

  - Fixed regression from 0.6 where Session.add()
against an object which contained None in a
collection would raise an internal exception.
Reverted this to 0.6's behavior which is to
accept the None but obviously nothing is
persisted.  Ideally, collections with None
present or on append() should at least emit a
warning, which is being considered for 0.8.
[ticket:2205]

  - Load of a deferred() attribute on an object
where row can't be located raises
ObjectDeletedError instead of failing later
on; improved the message in ObjectDeletedError
to include other conditions besides a simple
delete. [ticket:2191]

  - Fixed regression from 0.6 where a get history
operation on some relationship() based attributes
would fail when a lazyload would emit; this could
trigger within a flush() under certain conditions.
[ticket:2224]  Thanks to the user who submitted
the great test for this.

  - Fixed bug apparent only in Python 3 whereby
sorting of persistent + pending objects during
flush would produce an illegal comparison,
if the persistent object primary key
is not a single integer.  [ticket:2228]
Also in 0.6.9

  - Fixed bug

Re: [sqlalchemy] hybrid, relationships and inheritance.

2011-07-11 Thread James Studdart

Hi Michael,
 thanks for your reply. I did get it working with the 
@property/hybrid_property, but it turns out what I really needed was a 
custom collection class. At least, that seems to be working in a cleaner 
fashion.


I'm still fumbling around in the dark a little bit, so we'll see how it 
goes. SQL Alchemy keeps surprising me with features, it's very cool.


Cheers,
 James.



Assuming you don't need that (class level behavior), you don't really need 
@hybrid_property either.   You can just use Python's standard @property.

If you *did* want that, it would be a little tricky, probably would need a 
custom comparator.


   





[sqlalchemy] hybrid, relationships and inheritance.

2011-07-07 Thread James Studdart

Hi all,
 I've got a question regarding hybrid properties and how to use them 
with single table inheritance.


I've got a class hierarchy like this (in semi-pseudo code):

class MyBase(object):
    # This has the __tablename__ declared_attr, id as primary key, generic
    # table args, etc.

class Person(MyBase, Base):
    children = relationship('Children')

class SpecialPerson(Person):
    partner = relationship('Person')

Okay, so what I want is for SpecialPerson to return both its own and its 
partner's children. But if I add to the list of children of a special 
person, it only adds to its local children list. Does that make sense?


This is what I've got now, I'm stabbing around in the dark a little bit, 
so I'm hoping for some guidance in the correct  way to do this with SQL 
Alchemy.


class Person(MyBase, Base):
    _children = relationship('Children')

    @hybrid_property
    def children(self):
        return self._children

class SpecialPerson(Person):
    partner = relationship('Person')

    @hybrid_property
    def children(self):
        return self._children + self.partner._children

    @children.setter
    def children(self, value):
        self._children = value
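The read side of the sketch above can be exercised with plain @property objects, no ORM involved (class and attribute names are illustrative only; the write path is what a custom collection class, as mentioned in the reply, handles better):

```python
class Person(object):
    def __init__(self, children=None, partner=None):
        self._children = list(children or [])
        self.partner = partner

    @property
    def children(self):
        return list(self._children)

class SpecialPerson(Person):
    @property
    def children(self):
        # combine own children with the partner's, as described above
        combined = list(self._children)
        if self.partner is not None:
            combined += self.partner._children
        return combined

partner = Person(children=['a'])
sp = SpecialPerson(children=['b'], partner=partner)
print(sp.children)  # ['b', 'a']
```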

Thank you for your time.

Cheers,
 James.







Re: [sqlalchemy] Declarative Class registry ?

2011-03-08 Thread James Mills
On Wed, Mar 9, 2011 at 2:16 AM, Michael Bayer mike...@zzzcomputing.com wrote:
 there's no registry of tables to mappers.   You'd need to track that 
 yourself, or otherwise scan through all mappers (non-public attribute 
 sqlalchemy.orm._mapper_registry)  looking for tables (each mapper has a 
 .local_table attribute).   Note that many mappers can be created against a 
 single table.

 To track yourself:

 from sqlalchemy.orm import mapper as _mapper
 import collections

 my_registry_of_tables = collections.defaultdict(set)
 def mapper(cls, table=None, *arg, **kw):
    my_registry_of_tables[table].add(cls)
    return _mapper(cls, table, *arg, **kw)

 from sqlalchemy.ext.declarative import declarative_base
 Base = declarative_base(mapper=mapper)
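Michael's wrapper can be exercised without a database by stubbing out the real mapper(); only the registry bookkeeping is shown here (the stub `_mapper` is a hypothetical stand-in for sqlalchemy.orm.mapper):

```python
import collections

def _mapper(cls, table=None, *arg, **kw):
    # stub standing in for sqlalchemy.orm.mapper so the sketch runs anywhere
    return cls

my_registry_of_tables = collections.defaultdict(set)

def mapper(cls, table=None, *arg, **kw):
    # record which classes were mapped against which table
    my_registry_of_tables[table].add(cls)
    return _mapper(cls, table, *arg, **kw)

class User(object):
    pass

mapper(User, 'users_table')
print(my_registry_of_tables['users_table'])
```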

Thanks Michael.

cheers
James

-- 
-- James Mills
--
-- Problems are solved by method




[sqlalchemy] Declarative Class registry ?

2011-03-07 Thread James Mills
Hello,

Given a scenario where you're using declarative_base(...) and defining 
classes

Is there a way to ask SA what the mapper class (declarative) is for a given 
table
by inspecting something in metadata[table_name] ?

cheers
James





[sqlalchemy] finding if a table is already joined in a query

2010-11-30 Thread James Neethling
Hi all,

We have a small function that helps us create a simple search query by
automatically joining on required relations if needed.

For example, consider an employee ORM that has a 1:M relationship with
addresses (for postal/physical). We can do something like:

query = Employee().search('streetname', [Employee.name,
Address.street1])

We have that working, but when we add a second search field on Address:
query = Employee.search('streetname', [Employee.name, Address.street1,
Address.street2])
our method fails with: table name "address" specified more than once

We need to be able to identify if the query already has a join on
'address'

I've tried getting details on the query object (it
has ._from, ._from_obj, ._from_alias and .from_statement) that looked
interesting, but they don't appear to give us what we need.


Here is a cut down sample implementation that will hopefully remove any
confusion... Note the TODO: in Employee.search()

---8---8---8

from sqlalchemy import create_engine, Column, ForeignKey, or_
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker, relationship,
joinedload
from sqlalchemy.types import Integer, String, Text
from sqlalchemy.sql.expression import cast

engine = create_engine('sqlite:///:memory:', echo=True)
Base = declarative_base(bind=engine)
Session = scoped_session(sessionmaker(bind=engine))


class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    name = Column(String)

    def search(self, value, columns):
        query = Session.query(Employee)
        for i, column in enumerate(columns):
            model = column.parententity.class_
            if Employee is not model:
                #TODO: Are we already joined from Employee onto model?
                query = query.outerjoin(model)
        args = [cast(c, Text).ilike('%%%s%%' % value) for c in columns]
        return query.filter(or_(*args))

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    employee_id = Column(Integer, ForeignKey(Employee.id))
    street1 = Column(String(50))
    street2 = Column(String(50))
    employee = relationship(Employee)

Base.metadata.create_all()


#e = Employee(name='Bob')
#a = Address(employee=e, street1='street1', street2='street2')
#Session.add(a)
#Session.commit()


q = Employee().search('stree', [Employee.name, Address.street1,
Address.street2])
print q

SELECT employee.id AS employee_id, employee.name AS employee_name 
FROM employee LEFT OUTER JOIN address ON employee.id =
address.employee_id LEFT OUTER JOIN address ON employee.id =
address.employee_id 
WHERE lower(CAST(employee.name AS TEXT)) LIKE lower(?) OR
lower(CAST(address.street1 AS TEXT)) LIKE lower(?) OR
lower(CAST(address.street2 AS TEXT)) LIKE lower(?)

---8---8---8

TIA
Jim
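The TODO in the sample boils down to bookkeeping over join targets. A stand-alone sketch of that deduplication (the strings are hypothetical stand-ins for the mapped classes that column.parententity.class_ would yield):

```python
def dedup_join_targets(root, parents):
    # keep each non-root target once, in first-seen order
    already_joined = set()
    targets = []
    for model in parents:
        if model == root or model in already_joined:
            continue
        already_joined.add(model)
        targets.append(model)
    return targets

parents = ['Employee', 'Address', 'Address']
print(dedup_join_targets('Employee', parents))  # ['Address']
```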





Re: [sqlalchemy] finding if a table is already joined in a query

2010-11-30 Thread James Neethling
On Tue, 2010-11-30 at 11:52 -0500, Michael Bayer wrote:
 On Nov 30, 2010, at 11:13 AM, James Neethling wrote:
 
  Hi all,
  
  We have a small function that helps us create a simple search query by
  automatically joining on required relations if needed.
  
  For example, consider an employee ORM that has a 1:M relationship with
  addresses (for postal/physical). We can do something like:
  
  query = Employee().search('streetname', [Employee.name,
  Address.street1])
  
  We have that working, but when we add a second search field on Address:
  query = Employee.search('streetname', [Employee.name, Address.street1,
  Address.street2])
  our method fails with: table name address specified more than once
  
  We need to be able to identify if the query already has a join on
  'address'
  
  I've tried getting details on the query object (it
  has ._from, ._from_obj, ._from_alias and .from_statement) that looked
  interesting, but they don't appear to give us what we need.
 
 if you would like multiple references to Address to all work from the same 
 join, your routine needs to track which entities have already been joined as 
 a destination in a separate collection:
 
 
 def search(columns):
   already_joined = set()
 ...
 if class_ not in already_joined:
   q = q.join(destination)
   already_joined.add(class_)

Hi Michael,

Thank you for the quick response.

Unfortunately we don't always know where this query comes from (my
example was a little contrived :( )

Is there any way to get the tables that are currently in the join for a
query?

  
  Here is a cut down sample implementation that will hopefully remove any
  confusion... Note the TODO: in Employee.search()
  
  ---8---8---8
  
  from sqlalchemy import create_engine, Column, ForeignKey, or_
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.orm import scoped_session, sessionmaker, relationship,
  joinedload
  from sqlalchemy.types import Integer, String, Text
  from sqlalchemy.sql.expression import cast
  
  engine = create_engine('sqlite:///:memory:', echo=True)
  Base = declarative_base(bind=engine)
  Session = scoped_session(sessionmaker(bind=engine))
  
  
  class Employee(Base):
 __tablename__ = 'employee'
 id = Column(Integer, primary_key=True)
 name = Column(String)
  
 def search(self, value, columns):
  query = Session.query(Employee)
 for i, column in enumerate(columns):
 model = column.parententity.class_
 if Employee is not model:
  #TODO: Are we already joined from Employee onto model?
 query = query.outerjoin(model)
 args = [cast(c, Text).ilike('%%%s%%' % value) for c in columns]
 return query.filter(or_(*args))
  
  class Address(Base):
 __tablename__ = 'address'
 id = Column(Integer, primary_key=True)
 employee_id = Column(Integer, ForeignKey(Employee.id))
 street1 =  Column(String(50))
 street2 =  Column(String(50))
 employee = relationship(Employee)
  
  Base.metadata.create_all()
  
  
  #e = Employee(name='Bob')
  #a = Address(employee=e, street1='street1', street2='street2')
  #Session.add(a)
  #Session.commit()
  
  
  q = Employee().search('stree', [Employee.name, Address.street1,
  Address.street2])
  print q
  
  SELECT employee.id AS employee_id, employee.name AS employee_name 
  FROM employee LEFT OUTER JOIN address ON employee.id =
  address.employee_id LEFT OUTER JOIN address ON employee.id =
  address.employee_id 
  WHERE lower(CAST(employee.name AS TEXT)) LIKE lower(?) OR
  lower(CAST(address.street1 AS TEXT)) LIKE lower(?) OR
  lower(CAST(address.street2 AS TEXT)) LIKE lower(?)
  
  ---8---8---8
  
  TIA
  Jim
  
  
 


-- 
James Neethling
Development Manager
XO Africa Safari
(t) +27 21 486 2700 (ext. 127)
(e) jam...@xoafrica.com



Re: [sqlalchemy] session.execute(sql_statement) does not flush the session in a autoflush=True session ?

2010-11-30 Thread James Neethling
On Fri, 2010-11-26 at 15:41 -0500, Michael Bayer wrote:
 I wouldn't say its a bug since its intentional.   But I'll grant the 
 intention is up for debate.   I've always considered usage of execute() to 
 mean, you're going below the level of the ORM and would like to control the 
 SQL interaction directly, not to mention with as minimal overhead as 
 possible, which is why it works that way currently.   It might be just as 
 surprising to many users if execute() issued a whole series of insert/update 
 statements as much as it was apparently surprising to you that it did not.


I agree with the current behaviour. I've never actually thought of
execute as doing anything other than *exactly* what I want it to do -
and only that.



  Hi,
  
  In a session which has some dirty objects, doing
  session.execute(sql_statement) doesn't not flush the dirty objects to
  the database before executing the sql_statement query.
  
  The session was initialized with  autoflush=True.
  
  Is it the expected behaviour ? Is it a bug ?
  
  Thanks
  





[sqlalchemy] NamedTuple error in multiple join?

2010-10-30 Thread James Hartley
I'm using SQLAlchemy 0.6.4 on top of OpenBSD, utilizing PostgreSQL 8.4.4.
As a first project, I am gathering statistics on the availability of another
Open Source project.  The schema is normalized, and the following SQL query
(which works at the console) to find the latest snapshot is giving me fits
when I translate it to Python:

SELECT s.id
FROM snapshots s
WHERE s.cron_id = (
SELECT ce.id FROM cron_events ce
WHERE ce.timestamp = (
SELECT MAX(ce.timestamp)
FROM snapshots s
JOIN cron_events ce ON ce.id = s.cron_id
JOIN platforms p ON p.id = s.platform_id
WHERE p.name = 'amd64'))

Aside from the nested subqueries, I'm stuck at implementing the innermost
SELECT, which finds the latest recorded snapshot:

SELECT MAX(ce.timestamp)
FROM snapshots s
JOIN cron_events ce ON ce.id = s.cron_id
JOIN platforms p ON p.id = s.platform_id
WHERE p.name = 'amd64'

The class structure is as follows:

class Cron(Base):
    """collect various timed values here"""

    __tablename__ = CRON_TABLENAME

    id = Column(Integer, Sequence(CRON_TABLENAME + '_id_seq'),
                primary_key=True)
    timestamp = Column(DateTime, nullable=False, unique=True)
    ftp_time = Column(Interval, nullable=False)
    db_time = Column(Interval, nullable=True)

    platforms = relationship('Platform', order_by='Platform.id',
                             backref='cron')
    snapshots = relationship('Snapshot', order_by='Snapshot.id',
                             backref='cron')

    def __init__(self, timestamp, ftp_time):
        self.timestamp = timestamp
        self.ftp_time = ftp_time

    def __repr__(self):
        return "Cron('%s','%s','%s','%s')" % (self.id, self.timestamp,
                                              self.ftp_time, self.db_time)


class Platform(Base):
    """abstraction of platform name & first occurrence"""

    __tablename__ = PLATFORM_TABLENAME

    id = Column(Integer, Sequence(PLATFORM_TABLENAME + '_id_seq'),
                primary_key=True)
    cron_id = Column(Integer, ForeignKey(CRON_TABLENAME + '.id'),
                     nullable=False)
    name = Column(String(32), nullable=False, unique=True)

    def __init__(self, cron_id, name):
        self.cron_id = cron_id
        self.name = name

    def __repr__(self):
        return "Platform('%s','%s','%s','%s')" % (self.id, self.cron_id,
                                                  self.cron.timestamp,
                                                  self.name)


class Snapshot(Base):
    """abstraction of individual platform snapshot"""

    __tablename__ = SNAPSHOT_TABLENAME

    id = Column(Integer, Sequence(SNAPSHOT_TABLENAME + '_id_seq'),
                primary_key=True)
    cron_id = Column(Integer, ForeignKey(CRON_TABLENAME + '.id'),
                     nullable=False)
    platform_id = Column(Integer, ForeignKey(PLATFORM_TABLENAME + '.id'),
                         nullable=False)

    def __init__(self, cron_id, platform_id):
        self.cron_id = cron_id
        self.platform_id = platform_id

    def __repr__(self):
        return "Snapshot('%s','%s','%s','%s')" % (self.id, self.cron_id,
                                                  self.cron.timestamp,
                                                  self.platform_id)

The following Python code:

for t in session.query(func.max(Cron.timestamp)).\
         join((Snapshot, Cron.id == Snapshot.cron_id),
              (Platform, Platform.id == Snapshot.platform_id)).\
         filter(Platform.id == platform_id):
    print t

...or variations such as:

t = session.query(func.max(Cron.timestamp)).\
    join((Snapshot, Cron.id == Snapshot.cron_id),
         (Platform, Platform.id == Snapshot.platform_id)).\
    filter(Platform.id == platform_id).one()

...all are giving me the following error.  I suspect I am missing something
obvious.  Any insight shared would certainly be appreciated.

Thanks.

2010-10-30 14:47:43,783 INFO sqlalchemy.engine.base.Engine.0x...dccL SELECT
max(cron_events.timestamp) AS max_1
FROM cron_events JOIN snapshots ON cron_events.id = snapshots.cron_id JOIN
platforms ON platforms.id = snapshots.platform_id
WHERE platforms.id = %(id_1)s
2010-10-30 14:47:43,790 INFO sqlalchemy.engine.base.Engine.0x...dccL
{'id_1': (1,)}
Traceback (most recent call last):
  File "./snapshots.py", line 138, in <module>
    snapshot_id = get_latest_snapshot(cron_id, platform_id, name)
  File "./snapshots.py", line 110, in get_latest_snapshot
    filter(Platform.id == platform_id):
  File "/usr/local/lib/python2.6/site-packages/sqlalchemy/orm/query.py", line 1451, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python2.6/site-packages/sqlalchemy/orm/query.py", line 1456, in _execute_and_instances
    mapper=self._mapper_zero_or_none())
  File "/usr/local/lib/python2.6/site-packages/sqlalchemy/orm/session.py", line 737, in execute
    clause, params or {})
  File "/usr/local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1109, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/usr/local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1186, in _execute_clauseelement
    return self.__execute_context(context)
  File "/usr/local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1215, in __execute_context
    context.parameters[0], context=context)
  File 
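
For reference, the nested query in the post can be expressed with today's Core API using scalar subqueries. This is a self-contained sketch: the schema is trimmed to the relevant columns and the rows are invented for illustration. (It may also be worth checking the logged bind parameter `{'id_1': (1,)}` — `platform_id` appears to be a one-element tuple rather than an integer.)

```python
import datetime

from sqlalchemy import (Column, DateTime, ForeignKey, Integer, MetaData,
                        String, Table, create_engine, func, select)

metadata = MetaData()
cron_events = Table('cron_events', metadata,
    Column('id', Integer, primary_key=True),
    Column('timestamp', DateTime, nullable=False))
platforms = Table('platforms', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(32)))
snapshots = Table('snapshots', metadata,
    Column('id', Integer, primary_key=True),
    Column('cron_id', Integer, ForeignKey('cron_events.id')),
    Column('platform_id', Integer, ForeignKey('platforms.id')))

engine = create_engine('sqlite://')
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(cron_events.insert(), [
        {'id': 1, 'timestamp': datetime.datetime(2010, 1, 1)},
        {'id': 2, 'timestamp': datetime.datetime(2010, 6, 1)}])
    conn.execute(platforms.insert(), [{'id': 1, 'name': 'amd64'}])
    conn.execute(snapshots.insert(), [
        {'id': 10, 'cron_id': 1, 'platform_id': 1},
        {'id': 20, 'cron_id': 2, 'platform_id': 1}])

# innermost SELECT: latest timestamp recorded for the platform
latest_ts = (select(func.max(cron_events.c.timestamp))
             .select_from(snapshots
                 .join(cron_events, cron_events.c.id == snapshots.c.cron_id)
                 .join(platforms, platforms.c.id == snapshots.c.platform_id))
             .where(platforms.c.name == 'amd64')
             .scalar_subquery())

# middle SELECT: the cron event carrying that timestamp
latest_cron = (select(cron_events.c.id)
               .where(cron_events.c.timestamp == latest_ts)
               .scalar_subquery())

# outermost SELECT: the snapshot for that cron event
with engine.connect() as conn:
    snap_id = conn.execute(
        select(snapshots.c.id).where(snapshots.c.cron_id == latest_cron)
    ).scalar()
print(snap_id)  # 20
```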

[sqlalchemy] Passing model objects to other threads

2010-10-01 Thread James
Hi all,
i'm using SQLAlchemy in a TG2 web app which has long-running jobs
kicked off by user POSTs.

I'm planning on writing the pending jobs to the DB, then kick off a
threading.Thread at the end of the web app method to actually do the
work.

Question - I'll need to pass the job description (a SA object) to the
child thread; but to use the description, I need to detach it from the
old TG2 thread's SA session, and reattach it to the child thread's SA
session, right?

How best to do that? I don't see a way of getting the current session
from a live SA object...

Thanks!
James
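
One approach (a sketch, names assumed): keep attributes loaded with `expire_on_commit=False`, expunge the object from the request's session, and `merge()` it into the worker thread's own session.

```python
import threading

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy.pool import StaticPool

Base = declarative_base()

class Job(Base):
    __tablename__ = 'job'
    id = Column(Integer, primary_key=True)
    state = Column(String(20))

# StaticPool/check_same_thread only make the sqlite demo thread-friendly;
# a real server database would not need them.
engine = create_engine('sqlite://',
                       connect_args={'check_same_thread': False},
                       poolclass=StaticPool)
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine, expire_on_commit=False)

# "web request" thread: persist the job, then detach it
web_session = Session()
job = Job(state='pending')
web_session.add(job)
web_session.commit()
web_session.expunge(job)      # detached: safe to hand to another thread
web_session.close()

def run_job(detached):
    session = Session()               # the worker's own session
    local = session.merge(detached)   # re-attach: state is copied onto a
    local.state = 'done'              # session-owned instance
    session.commit()
    session.close()

t = threading.Thread(target=run_job, args=(job,))
t.start()
t.join()

check = Session()
print(check.query(Job).one().state)  # done
```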




[sqlalchemy] Re: Passing model objects to other threads

2010-10-01 Thread James
Hi,
Yes I was going to have TGScheduler run at intervals to complete any
jobs that fail the first time round (as they might, for reasons I
can't control).

However, the initial job run is reasonably time-sensitive, so I would
like it to get started right as the user POSTs the instructions; but I
was thinking that having every web app instance polling the DB at
short intervals would be a lot of overhead.

That said, if TGScheduler handles the session initiation gracefully,
and I'm going to have it running anyway, I could get away with polling
a few times a minute and the user wouldn't realistically notice...

Thanks for the suggestion! If all else fails, I'll bastardise the
session initialisation code from TGScheduler to roll my own :)

James

On Oct 1, 3:11 pm, NiL nicolas.laura...@gmail.com wrote:
 Hi,

 have you considered using TGScheduler ?




[sqlalchemy] multi Table FKs - ORM mapping

2010-05-21 Thread James Neethling
Hi All,

We're looking to add tags to a number of 'entities' within our
application. A simplified data structure would be:

Image:
id
file
title
description

Article:
id
text

Tag:
id
value


Entity_tags:
id
entity_type  ('image' or 'article')
entity_id    (PK of the image/article table)
tag_id

We've got the following (highly simplified) structure:

image_table = Table('image', meta.metadata,
    Column('id', types.Integer, primary_key=True),
)

article_table = Table('article', meta.metadata,
    Column('id', types.Integer, primary_key=True),
)

tag_table = Table('tag', meta.metadata,
    Column('id', types.Integer, primary_key=True),
    Column('name', types.Unicode(50), unique=True),
)

entity_tag_table = Table('entity_tag', meta.metadata,
    Column('tag_id', types.Integer, ForeignKey(tag_table.c.id)),
    Column('entity_type', types.String, nullable=False),
    Column('entity_id', types.Integer, nullable=False),
)

# And the ORM Mappings:

class Image(object): pass
class Article(object): pass
class Tag(object): pass
class EntityTag(object): pass

orm.mapper(Image, image_table, properties={
    'tags': orm.relation(Tag, secondary=entity_tag_table,
        primaryjoin=and_(image_table.c.id==EntityTag.entity_id,
                         EntityTag.entity_type=='image'),
        secondaryjoin=EntityTag.tag_id==Tag.id,
        foreign_keys=[EntityTag.entity_id, EntityTag.tag_id], ),
})

orm.mapper(Article, article_table, properties={
    'tags': orm.relation(Tag, secondary=entity_tag_table,
        primaryjoin=and_(article_table.c.id==EntityTag.entity_id,
                         EntityTag.entity_type=='article'),
        secondaryjoin=EntityTag.tag_id==Tag.id,
        foreign_keys=[EntityTag.entity_id, EntityTag.tag_id], ),
})


When we append to the image.tags collection, the entity_tag table needs
to know that the entity_type is 'image', but we can't seem to set that.

What is the standard way of dealing with this problem?

Is there the concept of a generic foreign key in SQLAlchemy?

Does anyone know if this database pattern has a formal name?

TIA,
Jim
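
One common workaround (a hedged sketch, not the only option): map the association row as a full class so that entity_type can be stamped in its constructor, and let an association_proxy with a `creator` keep the convenient `image.tags` API. Only Image is shown; Article would mirror it with its own discriminator value. Written with the current declarative API.

```python
from sqlalchemy import (Column, ForeignKey, Integer, String, and_,
                        create_engine)
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Tag(Base):
    __tablename__ = 'tag'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True)

class EntityTag(Base):
    __tablename__ = 'entity_tag'
    id = Column(Integer, primary_key=True)
    tag_id = Column(Integer, ForeignKey('tag.id'))
    entity_type = Column(String(20), nullable=False)
    entity_id = Column(Integer, nullable=False)
    tag = relationship(Tag)

    def __init__(self, tag, entity_type):
        self.tag = tag
        self.entity_type = entity_type   # the discriminator gets set here

class Image(Base):
    __tablename__ = 'image'
    id = Column(Integer, primary_key=True)
    entity_tags = relationship(
        EntityTag,
        primaryjoin=lambda: and_(Image.id == EntityTag.entity_id,
                                 EntityTag.entity_type == 'image'),
        foreign_keys=[EntityTag.entity_id])
    # image.tags exposes Tag objects; the creator stamps entity_type
    tags = association_proxy('entity_tags', 'tag',
                             creator=lambda tag: EntityTag(tag, 'image'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

img = Image()
img.tags.append(Tag(name='sunset'))
session.add(img)
session.commit()
print(session.query(EntityTag).one().entity_type)  # image
```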






[sqlalchemy] Multiple relationships in table to other kinds of tables

2010-03-11 Thread Noel James

Hello,

I have a question about using multiple polymorphic tables with different 
parents which relate to the same parent table but a different polymorphic 
child.


I have two tables staff (class Staff) and contract (class Contract).
The Staff table has an identity manager (class Manager) and the contract 
table has an identity peon (class Peon).
I have a third table called payments. Payments has two identities, 
StatusReport and BillableHours.


I want to have StatusReport have a relationship to Peon 
(ForeignKey('contract.id')) and BillableHours have a relationship to 
Manager (ForeignKey('staff.id')).


The relationships look like this:
staff.id = payments.user_id
contract.id = payments.user_id
payments.user_id = staff.id
payments.user_id = contract.id

I am using SQLAlchemy v0.5.8 on Python v2.6

Here is an *example* of one of the tests I have tried. I hope it gives 
a better idea of what I am trying to do (and is not just confusing).
I get failures about specifying foreign_keys, or "Could not locate any 
equated, locally mapped column pairs..."

--

from   sqlalchemy import MetaData, orm, schema
from   sqlalchemy.types  import Integer, String
from   sqlalchemy.schema import Column, Sequence, ForeignKey
import sqlalchemy as sa
from   sqlalchemy.orm import sessionmaker, relation
from   sqlalchemy.ext.declarative import declarative_base

engine   = sa.create_engine( 'sqlite://' )
metadata = MetaData(   )
Base = declarative_base( metadata = metadata )
Session  = sessionmaker(  )
Session.configure( bind = engine )

def create( ):
Base.metadata.create_all( engine )

class Staff( Base ):
__tablename__ = 'staff'
id  = Column( Integer, Sequence( 'user_seq' ),
  nullable=False, primary_key=True )
name = Column( String, nullable=False )
type = Column( String, nullable=False )
__mapper_args__ = {'polymorphic_on': type }

class Manager( Staff ):
__mapper_args__ = {'polymorphic_identity': 'manager'}
Billables   = relation( 'BillableHours' )


class Contract( Base  ):
__tablename__ = 'contract'
id  = Column( Integer, Sequence( 'contract_seq' ),
  nullable=False, primary_key=True )
type = Column( String, nullable=False )
__mapper_args__ = {'polymorphic_on': type }

class Peon( Contract ):
__mapper_args__ = {'polymorphic_identity': 'peon'}
StatusReports   = relation( 'StatusReport' )


class Payments( Base ):
__tablename__ = 'payments'
id  = Column( Integer, Sequence( 'payments_seq' ),
  nullable=False, primary_key=True )
type = Column( String,  nullable=False )
__mapper_args__ = {'polymorphic_on': type }

class StatusReport( Payments ):
__mapper_args__ = {'polymorphic_identity': 'status'}
user = Column( Integer, ForeignKey('contract.id'),  nullable=False )
    job  = Column( String, nullable=False, default='offshore' )
hrs  = Column( Integer, nullable=False, default=0 )
Peons = relation( 'Peon' )

class BillableHours( Payments ):
__mapper_args__ = {'polymorphic_identity': 'billable'}
user = Column( Integer, ForeignKey('staff.id'),  nullable=False )
job  = Column( String, nullable=False  )
hrs  = Column( Integer, nullable=False, default=8 )
Managers = relation( 'Manager' )
--

Thanks.
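
One way out, sketched below with the modern declarative API (an assumption-laden sketch, not the only design): because single-table inheritance puts every subclass column on the one payments table, the two identities cannot both name their foreign-key column `user`. Giving each target its own distinctly named column on the base, and naming it in `foreign_keys=` on the relation, resolves the "could not locate equated column pairs" ambiguity.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Staff(Base):
    __tablename__ = 'staff'
    id = Column(Integer, primary_key=True)
    type = Column(String(20), nullable=False)
    __mapper_args__ = {'polymorphic_on': type,
                       'polymorphic_identity': 'staff'}

class Manager(Staff):
    __mapper_args__ = {'polymorphic_identity': 'manager'}

class Contract(Base):
    __tablename__ = 'contract'
    id = Column(Integer, primary_key=True)
    type = Column(String(20), nullable=False)
    __mapper_args__ = {'polymorphic_on': type,
                       'polymorphic_identity': 'contract'}

class Peon(Contract):
    __mapper_args__ = {'polymorphic_identity': 'peon'}

class Payments(Base):
    __tablename__ = 'payments'
    id = Column(Integer, primary_key=True)
    type = Column(String(20), nullable=False)
    # one distinctly named FK column per target, declared once on the
    # base so the single-table subclasses don't collide on 'user'
    contract_id = Column(Integer, ForeignKey('contract.id'), nullable=True)
    staff_id = Column(Integer, ForeignKey('staff.id'), nullable=True)
    __mapper_args__ = {'polymorphic_on': type,
                       'polymorphic_identity': 'payment'}

class StatusReport(Payments):
    __mapper_args__ = {'polymorphic_identity': 'status'}
    peon = relationship(Peon, foreign_keys=[Payments.contract_id])

class BillableHours(Payments):
    __mapper_args__ = {'polymorphic_identity': 'billable'}
    manager = relationship(Manager, foreign_keys=[Payments.staff_id])

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

report = StatusReport(peon=Peon())
session.add(report)
session.commit()
print(type(report.peon).__name__)  # Peon
```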




[sqlalchemy] Generic ODBC connection (4D anyone?)

2010-02-19 Thread James
Anyone heard of 4D? Probably not, but I would love to work with
SQLAlchemy and this database.

How hard is it to write a new dialect?

Anyone had luck using generic odbc (ie not mysql moduled to pyodbc) to
connect to various unsupported databases?

I've tried a couple connection strings, the biggest problem is 4D
doesn't have a database name.

# connect to the actual database
from sqlalchemy import create_engine
#using DSN
engine = create_engine('mysql+pyodbc://4D_v11_Dev/DEFAULT_SCHEMA')
#using URL
engine = create_engine('mysql://user:p...@127.0.0.1', module='pyodbc')
#another dialect with DSN = ERROR: AttributeError: 'str' object has
no attribute 'paramstyle'
engine = create_engine('mssql://4D_v11_Dev', module='pyodbc')
# yet another try
engine = create_engine('mysql+pyodbc://4D_v11_Dev')
# show me output
engine.echo = True

None of those work, I have some stack traces, but the gist is this:
# when used without a database name
sqlalchemy.exc.DBAPIError: (Error) ('08004', '[08004] Server rejected
the connection:\nFailed to parse statement.\r (1301)
(SQLExecDirectW)') 'SELECT DATABASE()' ()

# when I try to specify a name
sqlalchemy.exc.DBAPIError: (Error) ('0', '[0] [iODBC][Driver
Manager]dlopen({MySQL}, 6): image not found (0) (SQLDriverConnectW)')
None None

But connection directly via pyodbc does work
import pyodbc
cnxn = pyodbc.connect("DSN=4D_v11_Dev;UID=user;PWD=pass")
cursor = cnxn.cursor()
cursor.execute('select * from ODBCTest')
a=cursor.fetchall()
print 'pyodbc',a
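
On the missing database name: `create_engine()` accepts a `creator` callable, so the working pyodbc connection above can be handed to SQLAlchemy directly while a closest-match dialect handles SQL rendering. The sketch below uses the stdlib `sqlite3` as a stand-in DBAPI so it is runnable; for 4D the callable would be (untested assumption) `lambda: pyodbc.connect("DSN=4D_v11_Dev;UID=user;PWD=pass")`, and a real dialect would still be needed for 4D-specific SQL.

```python
import sqlite3

from sqlalchemy import create_engine, text

def connect():
    # stand-in DBAPI connection; for 4D this would be the pyodbc call
    return sqlite3.connect(':memory:')

# `creator` bypasses URL-based connecting entirely; the URL now only
# selects the dialect used to render SQL.
engine = create_engine('sqlite://', creator=connect)
with engine.connect() as conn:
    value = conn.execute(text('SELECT 1')).scalar()
print(value)  # 1
```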




[sqlalchemy] OperationalError: (OperationalError) no such table:

2010-01-29 Thread James Sathre
I’ve been stuck trying to get the pylons application to connect to my
database.  I was able to connect to the database through a python
shell in the “virtualenv” as you can see below.  The app acts like it
can connect to the database, but not to the table.

I admit this is my first pylons project and I'm a little confused as
to where to start looking for a problem.  There seems to be a lot of
outdated doc's on the web and I don't know what to believe is the
current way of doing things.

>>> import sqlalchemy as sa
>>> engine = sa.create_engine(login-info)
>>> from pwi import model
>>> model.init_model(engine)
>>> engine.has_table("pwi_wildcard")
True


OperationalError: (OperationalError) no such table: pwi_wildcard
u'SELECT pwi_wildcard.wildcard_id AS pwi_wildcard_wildcard_id,
pwi_wildcard.priority AS pwi_wildcard_priority, pwi_wildcard.name AS
pwi_wildcard_name, pwi_wildcard.shot AS pwi_wildcard_shot,
pwi_wildcard.scene AS pwi_wildcard_scene, pwi_wildcard.created_by AS
pwi_wildcard_created_by, pwi_wildcard.expires AS pwi_wildcard_expires
\nFROM pwi_wildcard' []

thanks in advance,

James
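
One hedged first check: a relative sqlite path in the config resolves against the process's working directory, so the shell and the Pylons server can silently end up on two different database files. Printing the URL and table list of the engine the app actually uses makes that visible (an empty in-memory engine stands in below):

```python
from sqlalchemy import create_engine, inspect

# Stand-in engine; in the app, import the engine the model was
# initialised with and print the same two things.
engine = create_engine('sqlite://')
print(engine.url)                         # exactly which database is this?
tables = inspect(engine).get_table_names()
print(tables)                             # []
```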




[sqlalchemy] mysql + query.execute memory usage

2009-11-18 Thread James Casbon
Hi,

I'm using sqlalchemy to generate a query that returns lots of data.
The trouble is that query.execute(), instead of returning the
ResultProxy straight away and letting me fetch rows as I like,
blocks while memory usage grows to gigabytes, until the process is
killed for using too much memory.  This looks to me like
execute is prefetching the entire result.

Is there any way to prevent query.execute loading the entire result?

Thanks,
James





[sqlalchemy] Re: mysql + query.execute memory usage

2009-11-18 Thread James Casbon


On Nov 18, 3:01 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Nov 18, 2009, at 9:57 AM, James Casbon wrote:

  Hi,

  I'm using sqlalchemy to generate a query that returns lots of data.
  The trouble is, when calling query.execute() instead of returning
  the resultproxy straight away and allowing me to fetch data as I would
  like, query.execute blocks and the memory usage grows to gigabytes
  before getting killed for too much memory.  This looks to me like
  execute is prefetching the entire result.

  Is there any way to prevent query.execute loading the entire result?

 for ORM look into using yield_per() or applying limit()/offset().  without 
 the ORM no rows are buffered on the SQLA side.  Note however that MySQLdb is 
 likely prefetching the entire result set in any case (this is psycopg2's 
 behavior but I haven't confirmed for MySQLdb).

Thanks, but not using the ORM.

Looks like you have to specify a server side cursor - see SSCursor in
http://mysql-python.sourceforge.net/MySQLdb.html

I don't recall any way of forcing sqlalchemy to use a particular
cursor?

James
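
On the cursor question: SQLAlchemy does not pick cursors itself, but `connect_args` forwards keyword arguments straight to the DBAPI's `connect()` call, so MySQLdb's server-side `SSCursor` can be requested there. An untested configuration sketch (URL and credentials are placeholders):

```python
# Untested sketch: ask MySQLdb for a server-side cursor so rows stream
# from the server instead of being buffered client-side.
import MySQLdb.cursors
from sqlalchemy import create_engine

engine = create_engine(
    'mysql://user:passwd@localhost/mydb',   # placeholder URL
    connect_args={'cursorclass': MySQLdb.cursors.SSCursor},
)
```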





[sqlalchemy] no such table OperationalError despite CREATE TABLE being issued

2009-11-07 Thread James

Hi, this concerns running functional tests in TurboGears2, using SA
0.5.1.

As part of the functional test set up, all the model's tables are
CREATEd, and DROPped as part of the tear down.

However, despite seeing the expected sequence of CREATE, 1st test,
DROP, CREATE, 2nd test, DROP, the second test fails with "no such
table" errors.
Condensed INFO level logging of sqlalchemy.engine:
...
INFO PRAGMA table_info(tg_user)
INFO ()
CREATE TABLE tg_user (
user_id INTEGER NOT NULL,
user_name VARCHAR(16) NOT NULL,
email_address VARCHAR(255) NOT NULL,
display_name VARCHAR(255),
password VARCHAR(80),
created TIMESTAMP,
PRIMARY KEY (user_id),
 UNIQUE (user_name),
 UNIQUE (email_address)
)
INFO ()
INFO COMMIT
...
INFO BEGIN
[DB interactions for first test]
INFO COMMIT
...
INFO PRAGMA table_info(tg_user)
INFO ()
...
DROP TABLE tg_user
INFO ()
INFO COMMIT
...
INFO PRAGMA table_info(tg_user)
INFO ()
...
CREATE TABLE tg_user (
user_id INTEGER NOT NULL,
user_name VARCHAR(16) NOT NULL,
email_address VARCHAR(255) NOT NULL,
display_name VARCHAR(255),
password VARCHAR(80),
created TIMESTAMP,
PRIMARY KEY (user_id),
 UNIQUE (user_name),
 UNIQUE (email_address)
)
INFO ()
INFO COMMIT
...

Result in this stack trace when trying to interact with the tg_user
table during the second test:
Traceback (most recent call last):
  ...
  File "/Users/james/virtual/unit_tests/UnitTests/unit_tests/websetup.py", line 54, in setup_app
    model.DBSession.flush()
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/scoping.py", line 121, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/session.py", line 1347, in flush
    self._flush(objects)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/session.py", line 1417, in _flush
    flush_context.execute()
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 243, in execute
    UOWExecutor().execute(self, tasks)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 706, in execute
    self.execute_save_steps(trans, task)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 721, in execute_save_steps
    self.save_objects(trans, task)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 712, in save_objects
    task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/mapper.py", line 1346, in _save_obj
    c = connection.execute(statement.values(value_params), params)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/engine/base.py", line 824, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/engine/base.py", line 874, in _execute_clauseelement
    return self.__execute_context(context)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/engine/base.py", line 896, in __execute_context
    self._cursor_execute(context.cursor, context.statement,
    context.parameters[0], context=context)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/engine/base.py", line 950, in _cursor_execute
    self._handle_dbapi_exception(e, statement, parameters, cursor,
    context)
  File "/Users/james/virtual/unit_tests/lib/python2.5/site-packages/SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/engine/base.py", line 931, in _handle_dbapi_exception
    raise exc.DBAPIError.instance(statement, parameters, e,
    connection_invalidated=is_disconnect)
OperationalError: (OperationalError) no such table: tg_user u'INSERT
INTO tg_user (user_name, email_address, display_name, password,
created) VALUES (?, ?, ?, ?, ?)' [u'manager',
u'mana...@somedomain.com', u'Example manager',
u'276e4c1a24e5c8005f71dc8a2a86912347355f4dae87891e37ccea9c5fdc2753c49549168c8d558e',
'2009-11-07 10:51:40.039211']


Can anyone see why a new created table wouldn't be found by
SQLAlchemy? What more information could I give that would be useful?

This is using sqlalchemy.url = sqlite:///:memory:

Thanks!
James
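
A probable cause (hedged): every new pooled connection to `sqlite:///:memory:` gets a brand-new, empty database, so tables created on one connection vanish whenever the pool hands out another. Pinning a single connection keeps one in-memory database alive across checkouts; a sketch with the current API:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import StaticPool

# StaticPool reuses one and the same DBAPI connection for every
# checkout, so the :memory: database survives between them.
engine = create_engine('sqlite://',
                       connect_args={'check_same_thread': False},
                       poolclass=StaticPool)

with engine.begin() as conn:
    conn.execute(text('CREATE TABLE tg_user (user_id INTEGER PRIMARY KEY)'))

with engine.connect() as conn:   # a "second" checkout: same memory DB
    count = conn.execute(text('SELECT count(*) FROM tg_user')).scalar()
print(count)  # 0
```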

[sqlalchemy] Re: SQLAlchemy and unit tests

2009-05-11 Thread James Brady
Perfect, thanks Michael - I'll pass this on to the TG list

2009/5/11 Michael Bayer mike...@zzzcomputing.com



 clear out the session (or make a new one) between tests.   while the
 session attempts to weak reference its contents, it usually ends up
 holding onto a lot of stuff due to a particular reference cycle
 created by backrefs (I'm thinking of ways to eliminate that behavior).


 On May 11, 2009, at 10:54 PM, James wrote:

 
  Hi all,
  I'm trying to track down an error where running a full TurboGears unit
  test suite fails with a SQLAlchemy error, while running the single
  (failing) unit test passes OK.
 
  The SA error is of the form:
   FlushError: New instance terms...@0x2b15590 with identity key (<class
   'yel.model.select_source.TermSite'>, (1, 1), None) conflicts with
   persistent instance terms...@0x2b8ad50
 
  TermSite is an association table for a many-to-many relationship,
  which has a multi-column primary key:
   term_site_table = Table('term_site', metadata,
       Column('term_id', Integer, ForeignKey("term.id",
              ondelete="CASCADE"), primary_key=True),
       Column('site_id', Integer, ForeignKey("site.id",
              ondelete="CASCADE"), primary_key=True),
       Column('weight', Integer, default=1, nullable=False),
   )
 
  The error seems to signify that SA thinks there is already an object
  waiting to be flushed with the same term_id and site_id. The fact that
  the test only fails when a full test suite is run implies that there
  is some state left over in between unit test executions.
 
  Abbreviated setUp and tearDown methods:
  def setUp(self):
      cherrypy.root = root.Root()
      turbogears.startup.startTurboGears()
      metadata.create_all()

  def tearDown(self):
      metadata.drop_all()
      turbogears.startup.stopTurboGears()
 
  TurboGears provides a sqlalchemy_cleanup method (here:
  http://svn.turbogears.org/branches/1.0/turbogears/testutil.py), but
  that seemed to be too aggressive, producing errors of the form:
  InvalidRequestError: Class 'Visit' entity name 'None' has no mapper
  associated with it
 
  So:
  - what is the recommended database initialisation / cleanup strategy
  for unit tests involving SA?
  - can anyone suggest how ORM state could be hanging around between
  unit tests (I'm using an in-memory DB)?
  - is there a convenient way to check on objects in the ORM, waiting to
  be flushed?
 
  This is SA 0.4.3 and TG 1.0.8 running against an in-memory sqlite DB
 
  Thanks!
  James
 
  (Cross-posted to TG list)
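
A minimal sketch of the "clear out the session between tests" advice for a scoped (thread-local) session, with names assumed: calling Session.remove() in tearDown discards the session and everything it holds, so each test starts clean.

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite://')
Session = scoped_session(sessionmaker(bind=engine))

s1 = Session()          # session used during one test
# ... test body would add/flush objects here ...

Session.remove()        # tearDown: close and discard the session
s2 = Session()          # the next test gets a brand-new session
print(s1 is s2)  # False
```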
  


 





[sqlalchemy] Advice on many-to-many schema

2009-05-06 Thread James

Hello all,
I'm creating a SA model of a many to many relationship, where the
association has a particular weight.

I've always used straightforward association tables for this task
before (with an extra column in the association table for the weight
in this case), but was wondering if the associationproxy plugin might
be useful in this case?

I've not used it before, and its suggested uses don't seem to tally
too closely with what I'm doing, but advice on whether it would fit
this situation would be much appreciated.

The model represents weighted association between 'left' and 'right'
objects. So I'll be inserting new 'left' and 'right' objects if
they've not been seen before, inserting new associations if we've
never linked particular 'left' and 'right' objects before, and
finally, incrementing the association weight if we see a new case of
already associated 'left' and 'right' objects.

Many thanks,
James
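
Whether or not associationproxy is added on top, the weighted link itself maps naturally as an association object: the weight lives on the association row and can be incremented in place. A sketch with illustrative names:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Left(Base):
    __tablename__ = 'left_item'
    id = Column(Integer, primary_key=True)
    associations = relationship('Association', back_populates='left')

class Right(Base):
    __tablename__ = 'right_item'
    id = Column(Integer, primary_key=True)

class Association(Base):
    __tablename__ = 'association'
    left_id = Column(Integer, ForeignKey('left_item.id'), primary_key=True)
    right_id = Column(Integer, ForeignKey('right_item.id'), primary_key=True)
    weight = Column(Integer, nullable=False, default=1)
    left = relationship(Left, back_populates='associations')
    right = relationship(Right)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# first observation of a pair: create the association (weight defaults to 1)
assoc = Association(left=Left(), right=Right())
session.add(assoc)
session.commit()

# pair observed again: just bump the weight on the association row
assoc.weight += 1
session.commit()
print(assoc.weight)  # 2
```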



[sqlalchemy] Re: Allowing orphaned children

2009-02-07 Thread James

Oh, of course - thanks Michael!

On Feb 7, 1:51 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 cascade="all" includes the "delete" cascade: any Hat objects attached to  
 User will be deleted when the User is deleted.  To resolve, leave  
 the cascade argument out; it defaults to "save-update, merge", which  
 is enough for most use cases.

 On Feb 6, 2009, at 11:05 PM, James wrote:



  Hi, I'm trying to set up a model where child objects are allowed to
  not have parents. At present, I can't get SA to leave the children
  intact, despite having ondelete=SET NULL and no delete-orphans.

  This is with SA 0.4.3.

  To demonstrate my confusion, can someone explain why this code deletes
  all my hats:

  import sys, time
  from datetime import datetime
  from sqlalchemy import Table, Column, ForeignKey, MetaData,
  create_engine
  from sqlalchemy.orm import relation, sessionmaker, mapper, backref
  from sqlalchemy import String, Unicode, Integer, DateTime

   metadata=MetaData()
   engine = create_engine("sqlite:///:memory:")

  users_table = Table('tg_user', metadata,
     Column('user_id', Integer, primary_key=True),
  )

  hat_table = Table('hat', metadata,
     Column('id', Integer, primary_key=True),
     Column('user_id', Integer, ForeignKey('tg_user.user_id',
  ondelete='SET NULL')),
  )

  metadata.create_all(engine)

  class User(object):
     pass

  class Hat(object):
     pass

  mapper(User, users_table)

  mapper(Hat, hat_table,
     properties = {
          'user': relation(User, backref=backref("hats",
   cascade="all")),
     }
  )

  Session = sessionmaker(bind=engine, autoflush=False,
  transactional=True)
  session = Session()

  me = User()
  me.hats.extend([Hat(), Hat(), Hat()])
  session.save(me)
  session.flush()

   print session.query(Hat).count(), "hats"
  session.delete(me)
  session.flush()
   print session.query(Hat).count(), "hats"

  Thank you!
  James
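
Michael's fix, sketched end-to-end with the current declarative API (same model, cascade argument simply omitted): deleting the user now nulls out hat.user_id instead of deleting the hats.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import backref, declarative_base, relationship, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'tg_user'
    user_id = Column(Integer, primary_key=True)

class Hat(Base):
    __tablename__ = 'hat'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer,
                     ForeignKey('tg_user.user_id', ondelete='SET NULL'))
    # no cascade="all": the default "save-update, merge" leaves orphans alone
    user = relationship(User, backref=backref('hats'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

me = User()
me.hats.extend([Hat(), Hat(), Hat()])
session.add(me)
session.flush()

session.delete(me)
session.flush()          # hats get user_id = NULL instead of a DELETE
remaining = session.query(Hat).count()
print(remaining)  # 3
```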



[sqlalchemy] Allowing orphaned children

2009-02-06 Thread James

Hi, I'm trying to set up a model where child objects are allowed to
not have parents. At present, I can't get SA to leave the children
intact, despite having ondelete=SET NULL and no delete-orphans.

This is with SA 0.4.3.

To demonstrate my confusion, can someone explain why this code deletes
all my hats:

import sys, time
from datetime import datetime
from sqlalchemy import Table, Column, ForeignKey, MetaData,
create_engine
from sqlalchemy.orm import relation, sessionmaker, mapper, backref
from sqlalchemy import String, Unicode, Integer, DateTime

metadata=MetaData()
engine = create_engine("sqlite:///:memory:")

users_table = Table('tg_user', metadata,
Column('user_id', Integer, primary_key=True),
)

hat_table = Table('hat', metadata,
Column('id', Integer, primary_key=True),
Column('user_id', Integer, ForeignKey('tg_user.user_id',
ondelete='SET NULL')),
)

metadata.create_all(engine)

class User(object):
pass

class Hat(object):
pass

mapper(User, users_table)

mapper(Hat, hat_table,
properties = {
'user': relation(User, backref=backref("hats",
cascade="all")),
}
)

Session = sessionmaker(bind=engine, autoflush=False,
transactional=True)
session = Session()

me = User()
me.hats.extend([Hat(), Hat(), Hat()])
session.save(me)
session.flush()

print session.query(Hat).count(), "hats"
session.delete(me)
session.flush()
print session.query(Hat).count(), "hats"

Thank you!
James



[sqlalchemy] Re: Equivalent of UPDATE ... WHERE ... in ORM?

2009-01-30 Thread James

Perfect - thanks Mike!

As I'm using 0.4 (sorry, neglected to mention that) I've gone for the
table.update(...).execute() option and it works a treat.

On Jan 25, 12:25 am, Michael Bayer mike...@zzzcomputing.com wrote:
 the update() method on Query accomplishes this.  Make sure you read  
 the docstring for it which describes some various behaviors you'll  
 want to be aware of.

 alternatively, any SQL expression, like table.update() or "UPDATE
 table", can be issued within the ORM's transaction using
 session.execute().
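
[A sketch of the Query.update() approach Michael describes, written in a
later SQLAlchemy spelling (1.4+, an assumption; the thread is on 0.4), with
the hypothetical MyClass/prop names from the question:]

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class MyClass(Base):
    __tablename__ = 'my_class'
    id = Column(Integer, primary_key=True)
    prop = Column(String)


engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)

session = Session(engine)
session.add_all([MyClass(prop='some value'), MyClass(prop='other value')])
session.flush()

# Emits a single UPDATE my_class SET prop=? WHERE prop=? statement;
# no objects are fetched into the session. The return value is the
# number of rows matched.
matched = (
    session.query(MyClass)
    .filter(MyClass.prop == 'some value')
    .update({'prop': 'new value'}, synchronize_session=False)
)
print(matched)
```

[synchronize_session=False is one of the behaviors the docstring warns
about: already-loaded objects in the session are not refreshed to reflect
the bulk UPDATE.]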

 On Jan 24, 2009, at 7:14 PM, James wrote:



  Hi,
  Is there a way to update a large number of objects without looping
  through each one, using SA's ORM?

  E.g.
  I want to achieve the following:

  for o in session.query(MyClass).filter_by(prop='some value'):
     o.prop = 'new value'
     session.update(o)

  Without fetching and saving each object from the database. I.e.
  something which would produce SQL like this:

  UPDATE `my_class` SET `prop`='new value' WHERE `prop` = 'some value'

  If not, am I safe mixing and matching ORM operations with SQL
  operations like:
  u = my_classes.update(my_class.c.prop=='some value',
  values={'prop':'new value'})?

  Thanks!
  James



[sqlalchemy] Equivalent of UPDATE ... WHERE ... in ORM?

2009-01-24 Thread James

Hi,
Is there a way to update a large number of objects without looping
through each one, using SA's ORM?

E.g.
I want to achieve the following:

for o in session.query(MyClass).filter_by(prop='some value'):
    o.prop = 'new value'
    session.update(o)

Without fetching and saving each object from the database. I.e.
something which would produce SQL like this:

UPDATE `my_class` SET `prop`='new value' WHERE `prop` = 'some value'

If not, am I safe mixing and matching ORM operations with SQL
operations like:
u = my_classes.update(my_class.c.prop=='some value',
values={'prop':'new value'})?

Thanks!
James



[sqlalchemy] Re: Conflict between SQLAlchemy = 0.4.4 and coverage.py?

2009-01-18 Thread James

Ah, ok thanks Michael - I'd found that bug from some googling, but
Doug's comment here:
http://groups.google.com/group/turbogears/msg/26d74c947dec400e

implies that it was fixed in Python 2.5.2 (I'm using 2.5.4 - sorry I
forgot to include that). However, the comments on the official Roundup
bug say it can't be backported from 2.6 after all...

Looks like I'll be using 0.4.3 until TG supports 2.6!

On Jan 18, 2:37 am, Michael Bayer mike...@zzzcomputing.com wrote:
 this is a known Python issue fixed in 2.6. See
 http://www.sqlalchemy.org/trac/ticket/1138 for details.

 On Jan 17, 2009, at 9:23 PM, James wrote:



  I'm using SA underneath a TurboGears 1.0 app. Upgrading SA from 0.4.3
  to 0.4.4 causes previously passing unit tests to fail when run in
  conjunction with nose's coverage plugin -- I've included an example
  stack trace below.

  The unit tests run just fine when not using nose's --with-coverage
  option.

  The only other useful information I have is that some basic logging
  shows that more of my model code is being run under 0.4.4 than under
  0.4.3, when using coverage.

  I don't have a small reproduction, nor have I looked at this in lots
  of detail - before I do, is this a known issue, or can anyone
  suggest a good place for me to start debugging?

  Thanks!
  James

  Stacktrace:

  Traceback (most recent call last):
   File "/Users/james/virtual/queue/src/pull_client/tests/test_core_agent.py", line 29, in setUp
     self.search_server = model.SearchServer('test_internal', 'test_external', 'test_directory')
   File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/attributes.py", line 1226, in init
     extra_init(class_, oldinit, instance, args, kwargs)
   File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/mapper.py", line 733, in extra_init
     self.compile()
   File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/mapper.py", line 237, in compile
     mapper.__initialize_properties()
   File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/mapper.py", line 257, in __initialize_properties
     prop.init(key, self)
   File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/interfaces.py", line 370, in init
     self.do_init()
   File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/properties.py", line 181, in do_init
     sessionlib.register_attribute(class_, self.key, uselist=False, proxy_property=self.instrument, useobject=False, comparator=comparator)
   File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 85, in register_attribute
     return attributes.register_attribute(class_, key, *args, **kwargs)
   File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/attributes.py", line 1291, in register_attribute
     inst = proxy_type(key, proxy_property, comparator)
   File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/attributes.py", line 123, in __init__
     self.descriptor = self.user_prop = descriptor
  AttributeError: can't set attribute



[sqlalchemy] Conflict between SQLAlchemy = 0.4.4 and coverage.py?

2009-01-17 Thread James

I'm using SA underneath a TurboGears 1.0 app. Upgrading SA from 0.4.3
to 0.4.4 causes previously passing unit tests to fail when run in
conjunction with nose's coverage plugin -- I've included an example
stack trace below.

The unit tests run just fine when not using nose's --with-coverage
option.

The only other useful information I have is that some basic logging
shows that more of my model code is being run under 0.4.4 than under
0.4.3, when using coverage.

I don't have a small reproduction, nor have I looked at this in lots
of detail - before I do, is this a known issue, or can anyone suggest
a good place for me to start debugging?

Thanks!
James

Stacktrace:

Traceback (most recent call last):
  File "/Users/james/virtual/queue/src/pull_client/tests/test_core_agent.py", line 29, in setUp
    self.search_server = model.SearchServer('test_internal', 'test_external', 'test_directory')
  File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/attributes.py", line 1226, in init
    extra_init(class_, oldinit, instance, args, kwargs)
  File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/mapper.py", line 733, in extra_init
    self.compile()
  File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/mapper.py", line 237, in compile
    mapper.__initialize_properties()
  File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/mapper.py", line 257, in __initialize_properties
    prop.init(key, self)
  File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/interfaces.py", line 370, in init
    self.do_init()
  File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/properties.py", line 181, in do_init
    sessionlib.register_attribute(class_, self.key, uselist=False, proxy_property=self.instrument, useobject=False, comparator=comparator)
  File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 85, in register_attribute
    return attributes.register_attribute(class_, key, *args, **kwargs)
  File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/attributes.py", line 1291, in register_attribute
    inst = proxy_type(key, proxy_property, comparator)
  File "/Users/james/virtual/queue/lib/python2.5/site-packages/SQLAlchemy-0.4.4-py2.5.egg/sqlalchemy/orm/attributes.py", line 123, in __init__
    self.descriptor = self.user_prop = descriptor
AttributeError: can't set attribute
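
[Aside: the final "AttributeError: can't set attribute" is Python's
generic error for assigning through a property that has no setter; a
minimal, self-contained illustration of the symptom (hypothetical Proxy
class, unrelated to SQLAlchemy's actual internals):]

```python
class Proxy(object):
    # A read-only property: there is a getter but no setter defined.
    @property
    def descriptor(self):
        return None


raised = False
try:
    # Assigning through the setter-less property raises AttributeError
    # ("can't set attribute" on older Pythons).
    Proxy().descriptor = 42
except AttributeError:
    raised = True
print(raised)
```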


