Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-04 Thread Ladislav Lenart
Hello.

I responded yesterday but it seems the mail did not make it. Here it goes and
sorry for the duplicate...


On 3.3.2016 21:09, Ladislav Lenart wrote:
> Hello.
> 
> 
> On 3.3.2016 19:52, Brian Cherinka wrote:
>> Yeah, that might ultimately be the best way to go if things get too
>> complicated. I think people might not want to re-run several lines of code
>> to change some parameters but that could be a thing I make them just live with.
> 
> I think that the two concerns, namely:
> * update SQLAlchemy query / rebuild it from scratch
> * execute query manually / automatically
> are independent.
> 
> You could implement the following background task:
> * parse the code snippet
> * build the query from it / report errors to the user
> * execute the query
> * show / update results to the user
> and (re)schedule it in the near future whenever the user edits her code 
> snippet
> (like an incremental search).
> 
> But this starts to get complicated, i.e. the usual threading caveats apply.
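The "(re)schedule it in the near future whenever the user edits" idea above can be sketched as a small debouncer built on threading.Timer. This is only an illustrative sketch: the class name Debouncer and the callback wiring are my assumptions, not anything from the thread, and the threading caveats mentioned above still apply (the callback runs on a timer thread, not the UI thread).

```python
import threading

class Debouncer:
    """Run a callback once the user has been idle for `delay` seconds.

    Each call to trigger() cancels the previously scheduled run, so the
    task only fires after the edits stop (like an incremental search).
    """
    def __init__(self, delay, callback):
        self.delay = delay
        self.callback = callback
        self._timer = None
        self._lock = threading.Lock()

    def trigger(self, *args):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.delay, self.callback, args)
            self._timer.start()

# Usage sketch: wire trigger() to the editor's "text changed" event, with a
# hypothetical rebuild_and_run_query(snippet) doing parse/build/execute/show.
# debouncer = Debouncer(0.5, rebuild_and_run_query)
# debouncer.trigger(snippet_text)
```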
> 
> HTH,
> 
> Ladislav Lenart
> 
> 
>> On Thursday, March 3, 2016 at 3:06:57 AM UTC-5, Ladislav Lenart wrote:
>>
>> Hello.
>>
>> I think it would be (much) easier to simply rebuild the query from scratch
>> before each run. IMHO the time to build the query is not that big a factor
>> to justify the added source code complexity.
>>
>> HTH,
>>
>> Ladislav Lenart
>>
>>
>> On 3.3.2016 05:47, Brian Cherinka wrote:
>> >
>> >
>> > well you need a list of names so from a mapped class you can get:
>> >
>> > for name in inspect(MyClass).column_attrs.keys():
>> > if name in :
>> > q = q.filter_by(name = bindparam(name))
>> >
>> > though I'd think if you're dynamically building the query you'd have the
>> > values already, not sure how it's working out that you need bindparam()
>> > at that stage...
>> >
>> > Ok.  I'll try this out. This looks like it could work.  I think I need it
>> > for the cases where a user specifies a query with condition e.g. X < 10,
>> > runs it, gets results.  Then they want to change the condition to X < 5 and
>> > rerun the query.  As far as I know, if condition 2 gets added into the
>> > filter, you would have both X < 10 and X < 5 in your filter expression,
>> > rather than a single updated X < 5.
>> >
>> > What would be even more awesome is if there was a way to also update the
>> > operator in place as well.  So changing X < 10 to X > 10.
> 

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-03 Thread Ladislav Lenart
Hello.

I think it would be (much) easier to simply rebuild the query from scratch
before each run. IMHO the time to build the query is not that big a factor to
justify the added source code complexity.

HTH,

Ladislav Lenart


On 3.3.2016 05:47, Brian Cherinka wrote:
> 
> 
> well you need a list of names so from a mapped class you can get:
> 
> for name in inspect(MyClass).column_attrs.keys():
> if name in :
> q = q.filter_by(name = bindparam(name))
> 
> though I'd think if you're dynamically building the query you'd have the
> values already, not sure how it's working out that you need bindparam()
> at that stage...
> 
>  
> Ok.  I'll try this out. This looks like it could work.  I think I need it for
> the cases where a user specifies a query with condition e.g. X < 10, runs it,
> gets results.  Then they want to change the condition to X < 5 and rerun the
> query.  As far as I know, if condition 2 gets added into the filter, you would
> have both X < 10 and X < 5 in your filter expression.  Rather than a single
> updated X < 5. 
> 
> What would be even more awesome is if there was a way to also update the
> operator in place as well.  So changing X < 10 to X > 10.  
> 
> 
> 
> 

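The rebuild-from-scratch approach discussed in this thread can be sketched by keeping the user's conditions as plain data and constructing a fresh Query on every run, so a changed condition (X < 10 becoming X < 5, or < becoming >) simply replaces the old one. The model class, column, and operator table below are illustrative assumptions, not code from the thread.

```python
import operator

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class MyClass(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, primary_key=True)
    x = Column(Integer)

# Map the user's operator symbols to Python operators, so changing
# X < 10 to X > 10 is just a different dict entry.
OPS = {'<': operator.lt, '>': operator.gt, '==': operator.eq}

def build_query(session, conditions):
    """Build a fresh Query from (column_name, op_symbol, value) tuples."""
    q = session.query(MyClass)
    for name, op, value in conditions:
        q = q.filter(OPS[op](getattr(MyClass, name), value))
    return q

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([MyClass(x=3), MyClass(x=7), MyClass(x=12)])
session.commit()

# First run: X < 10 matches two rows.
print(build_query(session, [('x', '<', 10)]).count())  # 2
# The user edits the condition to X < 5: rebuild instead of patching the
# old filter, so the stale X < 10 clause cannot linger.
print(build_query(session, [('x', '<', 5)]).count())   # 1
```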


Re: [sqlalchemy] ORM: walk() related objects

2015-09-10 Thread Ladislav Lenart
On 10.9.2015 16:30, Mike Bayer wrote:
> On 9/10/15 10:26 AM, Mike Bayer wrote:
>>
>>
>> On 9/10/15 10:13 AM, Ladislav Lenart wrote:
>>> Hello.
>>>
>>> Just a really, really tiny and pedantic correction... The stack 
>>> variable in the
>>> code is in fact a queue. This could potentially surprise some users / 
>>> readers.
>>>
>>> To fix, please do one of the following:
>>> * Rename stack local var to queue.
>>> * Use stack.pop() to pop the last element from the stack.
>>
>> there really should be some other name, because all the time, I have 
>> routines like this where even dynamically within the routine, the 
>> pop(0) is a pop() or vice-versa, because we'd like the iteration to 
>> work one way or another.  Changing pop(0) to pop() here means we are 
>> doing depth-first instead of breadth-first. So.  Some word that means 
>> "collection of things to operate upon"? I've seen "work" used.
>> "buffer" ?  eh.
>>
> 
> renamed to "deque".

Well, for me the name 'stack' automatically means depth-first whereas the name
'queue' means breadth-first. And when the name and the code do not match, I am
not sure what is the original intent. However, deque (or maybe dequeue?) is a
fine name for me.

BTW I've noticed that you use the term 'seen' for the set of already visited
objects. Perhaps you will like the name 'visit' (or 'to_visit' or
'not_visited')? At least that's what I use when I implement a visitor / iterator
pattern.

HTH,

Ladislav Lenart


>>> HTH,
>>>
>>> Ladislav Lenart
>>>
>>>
>>> On 10.9.2015 15:48, Mike Bayer wrote:
>>>>
>>>> On 9/10/15 9:35 AM, Mike Bayer wrote:
>>>>>
>>>>> On 9/10/15 8:48 AM, Pavel S wrote:
>>>>>> Let's say, I have declarative classes A, B, C, D.
>>>>>>
>>>>>> A is the parent
>>>>>> B has FK to A
>>>>>> C has FK to B,
>>>>>> D has FK to C etc...
>>>>>>
>>>>>> I'd like to implement _generic method_ walk(obj) which will 
>>>>>> recursively yield
>>>>>> dependent/related objects of obj (which is instance of A).
>>>>>>
>>>>>> I know that there is introspection interface inspect(), however I don't
>>>>>> really understand how to use it properly in my use case.
>>>>>>
>>>>>> Shall I do inspect(obj) or rather inspect(obj.__class__) and then 
>>>>>> somehow
>>>>>> apply inspection to obj?
>>>>>>
>>>>>> Are there any examples and best practices?
>>>>> right now you can kind of get this effect using cascade_iterator:
>>>>> http://docs.sqlalchemy.org/en/rel_1_0/orm/mapping_api.html?highlight=cascade_iterator#sqlalchemy.orm.mapper.Mapper.cascade_iterator
>>>>>  
>>>>>
>>>>>
>>>>> the limitation is that right now its based on relationship cascade 
>>>>> settings,
>>>>> as that's what it was intended for, so you'd probably want to use 
>>>>> "save-update":
>>>>>
>>>>> insp = inspect(my_object)
>>>>> for obj in insp.mapper.cascade_iterator("save-update", insp):
>>>>> # ...
>>>>>
>>>>> to implement your own system, the graph of objects is strictly 
>>>>> based on
>>>>> relationship.   so walk() is pretty simple:
>>>>>
>>>>> def walk(obj):
>>>>>  yield obj
>>>>>  insp = inspect(obj)
>>>>>  for relationship in insp.mapper.relationships:
>>>>>  related = getattr(obj, relationship.key)
>>>>>  if relationship.uselist:
>>>>>  for collection_member in related:
>>>>>  for walk_related in walk(collection_member):
>>>>>  yield walk_related
>>>>>  elif related is not None:
>>>>>  for walk_related in walk(related):
>>>>>  yield walk_related
>>>> here's one im putting in the FAQ for now, which solves recursion 
>>>> depth as well
>>>> as cycles:
>>>>
>>>>  def walk(obj):
>>>>  stack = [obj]
>>>>
>>>>  seen = set()
>>>>
>>

Re: [sqlalchemy] ORM: walk() related objects

2015-09-10 Thread Ladislav Lenart
Hello.

Just a really, really tiny and pedantic correction... The stack variable in the
code is in fact a queue. This could potentially surprise some users / readers.

To fix, please do one of the following:
* Rename stack local var to queue.
* Use stack.pop() to pop the last element from the stack.

HTH,

Ladislav Lenart


On 10.9.2015 15:48, Mike Bayer wrote:
> 
> 
> On 9/10/15 9:35 AM, Mike Bayer wrote:
>>
>>
>> On 9/10/15 8:48 AM, Pavel S wrote:
>>> Let's say, I have declarative classes A, B, C, D.
>>>
>>> A is the parent
>>> B has FK to A
>>> C has FK to B,
>>> D has FK to C etc...
>>>
>>> I'd like to implement _generic method_ walk(obj) which will recursively 
>>> yield
>>> dependent/related objects of obj (which is instance of A).
>>>
>>> I know that there is introspection interface inspect(), however I don't
>>> really understand how to use it properly in my use case.
>>>
>>> Shall I do inspect(obj) or rather inspect(obj.__class__) and then somehow
>>> apply inspection to obj?
>>>
>>> Are there any examples and best practices?
>> right now you can kind of get this effect using cascade_iterator:
>> http://docs.sqlalchemy.org/en/rel_1_0/orm/mapping_api.html?highlight=cascade_iterator#sqlalchemy.orm.mapper.Mapper.cascade_iterator
>>
>> the limitation is that right now its based on relationship cascade settings,
>> as that's what it was intended for, so you'd probably want to use 
>> "save-update":
>>
>> insp = inspect(my_object)
>> for obj in insp.mapper.cascade_iterator("save-update", insp):
>># ...
>>
>> to implement your own system, the graph of objects is strictly based on
>> relationship.   so walk() is pretty simple:
>>
>> def walk(obj):
>> yield obj
>> insp = inspect(obj)
>> for relationship in insp.mapper.relationships:
>> related = getattr(obj, relationship.key)
>> if relationship.uselist:
>> for collection_member in related:
>> for walk_related in walk(collection_member):
>> yield walk_related
>> elif related is not None:
>> for walk_related in walk(related):
>> yield walk_related
> 
> here's one im putting in the FAQ for now, which solves recursion depth as well
> as cycles:
> 
> def walk(obj):
> stack = [obj]
> 
> seen = set()
> 
> while stack:
> obj = stack.pop(0)
> if obj in seen:
> continue
> else:
> seen.add(obj)
> yield obj
> insp = inspect(obj)
> for relationship in insp.mapper.relationships:
> related = getattr(obj, relationship.key)
> if relationship.uselist:
> stack.extend(related)
> elif related is not None:
> stack.append(related)
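The naming point in this thread (stack vs. queue vs. a neutral "collection of things to operate upon") can be illustrated without SQLAlchemy: the loop above is breadth-first with pop(0)/popleft() and depth-first with pop(), and only the pop position differs. The toy graph here is an illustrative assumption.

```python
from collections import deque

# A tiny directed graph standing in for the object/relationship graph.
graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['e'], 'd': [], 'e': []}

def walk(start, depth_first):
    work = deque([start])   # neutral name; neither "stack" nor "queue"
    seen = set()            # guards against cycles, as in the FAQ version
    while work:
        node = work.pop() if depth_first else work.popleft()
        if node in seen:
            continue
        seen.add(node)
        yield node
        work.extend(graph[node])

print(list(walk('a', depth_first=False)))  # ['a', 'b', 'c', 'd', 'e']  (BFS)
print(list(walk('a', depth_first=True)))   # ['a', 'c', 'e', 'b', 'd']  (DFS)
```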
> 
> 
> 
> 
>>
>>
>>
>>
>>
>>
>>
> 




Re: [sqlalchemy] mapper_configured event, class fully configured?

2015-08-28 Thread Ladislav Lenart
On 27.8.2015 22:13, Douglas Russell wrote:
 Ok, thanks, that makes sense.
 
 Generally everything I see about SQLAlchemy referencing is using backref and
 it is convenient, but knowing that you actually think back_populates would be
 your preferred way to implement things is quite nice. I quite prefer that as
 I think it makes my code much more readable to have the relationship
 properties declared where they will exist.

+100

I came to the same conclusion a long time ago and have never looked back :-)

Just my 2c,

Ladislav Lenart


 Cheers,
 
 Douglas
 
 On Monday, 24 August 2015 21:54:17 UTC-4, Michael Bayer wrote:
 
 
 
 On 8/24/15 9:21 PM, Mike Bayer wrote:


 On 8/24/15 7:41 PM, Douglas Russell wrote:

 def print_stuff(mapper, class_):
     print getattr(class_, 'books')

 event.listen(mapper, 'mapper_configured', print_stuff)

 author = Author(name='Chuck Paluhniuk')

 When I run this code, the result is that when Author is first used, the
 'mapper_configured' event will fire and my print_stuff function callback
 will run.

 The results is this:

 Traceback (most recent call last):
   File marshmallow_experiment.py, line 37, in module
 author = Author(name='Chuck Paluhniuk')
   File string, line 2, in __init__
   File
 
 /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sqlalchemy/orm/instrumentation.py,
 line 347, in _new_state_if_none
 state = self._state_constructor(instance, self)
   File
 
 /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py,
 line 747, in __get__
 obj.__dict__[self.__name__] = result = self.fget(obj)
   File
 
 /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sqlalchemy/orm/instrumentation.py,
 line 177, in _state_constructor
 self.dispatch.first_init(self, self.class_)
   File
 
 /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sqlalchemy/event/attr.py,
 line 258, in __call__
 fn(*args, **kw)
   File
 
 /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sqlalchemy/orm/mapper.py,
 line 2792, in _event_on_first_init
 configure_mappers()
   File
 
 /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sqlalchemy/orm/mapper.py,
 line 2691, in configure_mappers
 mapper, mapper.class_)
   File
 
 /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sqlalchemy/event/attr.py,
 line 218, in __call__
 fn(*args, **kw)
   File
 
 /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sqlalchemy/orm/events.py,
 line 527, in wrap
 fn(*arg, **kw)
   File marshmallow_experiment.py, line 33, in print_stuff
 print getattr(class_, 'books')
 AttributeError: type object 'Author' has no attribute 'books'

 Should not the backref be configured at that point? What I'm trying to 
 do
 is use the mapper_configured event in order to autocreate
 marshamallow-sqlalchemy schemas, which works fine except that because
 books doesn't exist at the point when the callback function runs, the
 resulting schema is also missing the books part.

 If I do this instead:

 author = Author(name='Chuck Paluhniuk')
 print_stuff(None, Author)

 Then the result is:

 Author.books
 so you can see what's happening here.   Author.books is being set up, but
 not within the event.  In this case Author is being configured before
 Book, and as far as it knows, that's it, it's done.  The event is 
 called. 
 Book hasn't been touched so therefore neither has books.   Only outside
 of the whole thing, where Book also got set up, does your inspection 
 work.

 The solution is to either use back_populates instead of backref, which is
 probably the only way the ORM would have done it today if starting again,
 or use the after_configured() event
 
  http://docs.sqlalchemy.org/en/rel_1_0/orm/events.html#sqlalchemy.orm.events.MapperEvents.after_configured
 which is intentionally here so that you can do things that require all
 known mappers to be fully set up, which is the case here.

 As is so often the case I'm horrified the API docs don't have cross
 linking here.   doing that..
 I've pushed up new descriptions
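The after_configured() suggestion above can be sketched as follows: unlike mapper_configured, which fires per mapper (possibly before a backref from a not-yet-configured class exists), after_configured fires once all known mappers are set up, so cross-class attributes like Author.books are safe to inspect. The Author/Book mapping below uses back_populates, as recommended in the thread; the listener body is an illustrative assumption standing in for schema auto-generation.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, event
from sqlalchemy.orm import (Mapper, configure_mappers, declarative_base,
                            relationship)

Base = declarative_base()

class Author(Base):
    __tablename__ = 'author'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    books = relationship('Book', back_populates='author')

class Book(Base):
    __tablename__ = 'book'
    id = Column(Integer, primary_key=True)
    author_id = Column(Integer, ForeignKey('author.id'))
    author = relationship('Author', back_populates='books')

seen = []

@event.listens_for(Mapper, 'after_configured')
def build_schemas():
    # All mappers are fully configured here, so both sides of the
    # relationship pair are guaranteed to exist.
    seen.append(hasattr(Author, 'books') and hasattr(Book, 'author'))

configure_mappers()
print(seen)  # [True]
```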

Re: [sqlalchemy] how to auto set foreignkey column?

2015-08-11 Thread Ladislav Lenart
Hello.

Please take a look at official examples:

http://docs.sqlalchemy.org/en/rel_1_0/orm/backref.html?highlight=back_populates

The first two examples use your classes (User and Address). Note the use of
backref kw arg in the relationship definition (first example) and back_populates
kw arg in the second example. I personally prefer back_populates because each
ORM definition is then complete on its own (i.e. both User and Address will each
define the appropriate relationship connected to the other with back_populates 
kw).


HTH,

Ladislav Lenart


On 11.8.2015 09:59, Sean Lin wrote:
 Dear all,
 
 I have a question about how to automatically set the foreign key column?
 
 
 we have two classes, User and Address:
 
 class User(Base):
     __tablename__ = 'user'
     id = Column(Integer, primary_key=True)
     name = Column(String)
     addresses = relationship(Address)
     def __init__(self, id, name):
         self.id = id
         self.name = name
 
 class Address(Base):
     __tablename__ = 'address'
     id = Column(Integer, primary_key=True)
     user_id = Column(Integer, ForeignKey('user.id'))
     def __init__(self, id, user_id):
         self.id = id
         self.user_id = user_id
 
 if we create a user and then add an address:
 
 User1 = User(1, 'Tom')
 Address1 = Address(1, None)
 User1.addresses.append(Address1)
 print User1.addresses[0].user_id
 
 after that we will get a None result...
 
 How can user_id be assigned automatically when we add an address to a user?
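Following the linked back_populates examples, the key point is that the ORM fills in the foreign key during flush when the objects are linked through a relationship; no manual user_id assignment is needed. This is a hedged sketch of that pattern (the in-memory SQLite engine and flush() call are my additions for a runnable demo):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship('Address', back_populates='user')

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'))
    user = relationship('User', back_populates='addresses')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

u = User(name='Tom')
u.addresses.append(Address())   # note: append(), not a call
session.add(u)
session.flush()                 # the FK is populated during flush
print(u.addresses[0].user_id == u.id)  # True
```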




Re: [sqlalchemy] how fast can pure sqlalchemy perform?

2015-08-07 Thread Ladislav Lenart
Hello.

ORM is certainly slower. How much depends A LOT on your workload. For example
bulk operations with ORM are an order of magnitude slower than raw SQL. On the
other hand, SQLAlchemy Core lets you write generative SQL queries without ORM
features which are as performant as raw SQL. Overall SQLAlchemy is an excellent
library to work with!

For some numbers, see:

http://docs.sqlalchemy.org/en/latest/faq/performance.html
http://docs.sqlalchemy.org/en/latest/orm/examples.html#examples-performance
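The bulk-operation gap mentioned above can be measured directly by inserting the same rows once through the ORM unit of work and once through a Core executemany insert. This is a minimal sketch, not one of the linked official benchmarks; absolute numbers depend entirely on the machine, driver, and database (SQLite in memory here as an assumption).

```python
import time

from sqlalchemy import Column, Integer, String, create_engine, insert
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Customer(Base):
    __tablename__ = 'customer'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

n = 10_000

# ORM path: full unit of work, identity map, per-object bookkeeping.
t0 = time.perf_counter()
with Session(engine) as session:
    session.add_all([Customer(name=f'c{i}') for i in range(n)])
    session.commit()
orm_secs = time.perf_counter() - t0

# Core path: one executemany INSERT, no ORM overhead.
t0 = time.perf_counter()
with engine.begin() as conn:
    conn.execute(insert(Customer.__table__),
                 [{'name': f'c{i}'} for i in range(n)])
core_secs = time.perf_counter() - t0

print(f'ORM:  {orm_secs:.3f}s')
print(f'Core: {core_secs:.3f}s')
```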


HTH,

Ladislav Lenart


On 7.8.2015 11:16, kk wrote:
 Dear all,
 I am planning to totally revamp my rdbms model and totally migrate to 
 sqlalchemy.
 Right now we have a lot of stored procedures for obvious performance 
 benefits.  However it is becoming more and more difficult to maintain 
 the system and also difficult to migrate existing users when there are 
 major changes to our software.
 Basically our stored procedures get created when the database is created 
 in the deploy phase.
 So I wish to know how much performance I will lose if I totally switch 
 to using ORM, specifically SQLAlchemy.
 I am also planning to use something like JSON columns in PostgreSQL 
 tables, so will it really matter with an ORM?
 Happy hacking.
 Krishnakant.




Re: [sqlalchemy] query with HAVING COUNT doesn't work as expected

2015-08-03 Thread Ladislav Lenart
On 3.8.2015 17:24, jurie.horne...@gmail.com wrote:
 Thanks! That solved it.
 
 I'm a JOIN newbie, so I didn't realize left join and outer join were the same 
 thing.

Well, in that case this might be of some use to you...

SQL:
* JOIN is short for INNER JOIN.
* LEFT JOIN is short for LEFT OUTER JOIN.
* RIGHT JOIN is short for RIGHT OUTER JOIN.
* FULL JOIN is short for FULL OUTER JOIN.
For a detailed explanation see e.g.:
http://www.postgresql.org/docs/9.4/static/queries-table-expressions.html

SQLAlchemy ORM supports only:
* INNER JOIN with join().
* LEFT OUTER JOIN with outerjoin().

You can simulate RIGHT OUTER JOIN with LEFT JOIN if you change the order of the
tables in the query.
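The inner vs. outer join difference in this thread can be shown on two toy tables: join() emits INNER JOIN and silently drops rows with no match, while outerjoin() emits LEFT OUTER JOIN and keeps them (with COUNT over the joined column coming out as 0). The toy Recording/Frame models mirror the tables discussed above but are my own sketch.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, func
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Recording(Base):
    __tablename__ = 'recordings'
    id = Column(Integer, primary_key=True)

class Frame(Base):
    __tablename__ = 'recording_frames'
    id = Column(Integer, primary_key=True)
    recording_id = Column(Integer, ForeignKey('recordings.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Recording(id=1))                 # has no frames at all
session.add(Recording(id=2))
session.add_all([Frame(recording_id=2) for _ in range(3)])
session.commit()

# INNER JOIN drops recording 1 entirely:
inner = (session.query(Recording.id)
         .join(Frame)
         .group_by(Recording.id)
         .order_by(Recording.id)
         .all())
print(inner)   # [(2,)]

# LEFT OUTER JOIN keeps it, with a frame count of 0:
outer = (session.query(Recording.id, func.count(Frame.id))
         .outerjoin(Frame)
         .group_by(Recording.id)
         .order_by(Recording.id)
         .all())
print(outer)   # [(1, 0), (2, 3)]
```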

HTH,

Ladislav Lenart


 J.
 
 
 
 On Monday, August 3, 2015 at 5:08:44 PM UTC+2, Ladislav Lenart wrote:
 
 Hello.
 
 pgAdmin query uses LEFT JOIN whereas SQLAlchemy query uses (inner) JOIN.
 Replace
 .join(...) with .outerjoin(...) in your SQLAlchemy query.
 
 HTH,
 
 Ladislav Lenart
 
 
  On 3.8.2015 16:48, jurie.h...@gmail.com wrote:
   I am using Python 2.7, SQLAlchemy 0.9.3, and PostgreSQL 9.4.
  
   This:
  
   session.query(self.db.recording_table.c.id).\
   join(self.db.frame_table, self.db.recording_table.c.id ==
   self.db.frame_table.c.recording_id).\
   group_by(self.db.recording_table.c.id).\
   having(func.coalesce(func.count(self.db.frame_table.c.frame_nr), 0) > 2).\
   all()
  
   generates the following SQL:
  
   SELECT recordings.id AS recordings_id
   FROM recordings JOIN recording_frames ON recordings.id =
   recording_frames.recording_id GROUP BY recordings.id
   HAVING coalesce(count(recording_frames.frame_nr), %(param_1)s) > %(coalesce_1)s
   {'param_1': 0, 'coalesce_1': 2}
  
   and returns an empty list.
  
   If I execute this in pgAdmin:
  
   SELECT recordings.id
   FROM recordings
   LEFT JOIN recording_frames
   ON recordings.id = recording_frames.recording_id
   GROUP BY recordings.id
   HAVING (coalesce(count(recording_frames.frame_nr), 0) > 2)
  
   I get a list with 2 IDs in the recordings table, which is the expected result.
  
   I have no idea why SQLAlchemy won't give me the same result. If I change the
   criterion to > 0 I get the same result as with raw SQL. It seems SQLAlchemy,
   with the code above, somehow filters out the rows where COUNT returns nothing,
   despite the COALESCE.
  
   What am I doing wrong?
 




Re: [sqlalchemy] query with HAVING COUNT doesn't work as expected

2015-08-03 Thread Ladislav Lenart
Hello.

pgAdmin query uses LEFT JOIN whereas SQLAlchemy query uses (inner) JOIN. Replace
.join(...) with .outerjoin(...) in your SQLAlchemy query.

HTH,

Ladislav Lenart


On 3.8.2015 16:48, jurie.horne...@gmail.com wrote:
 I am using Python 2.7, SQLAlchemy 0.9.3, and PostgreSQL 9.4.
 
 This:
 
 session.query(self.db.recording_table.c.id).\
 join(self.db.frame_table, self.db.recording_table.c.id == 
 self.db.frame_table.c.recording_id).\
 group_by(self.db.recording_table.c.id).\
 having(func.coalesce(func.count(self.db.frame_table.c.frame_nr), 0) > 2).\
 all()
 
 
 generates the following SQL:
 
 SELECT recordings.id AS recordings_id
 FROM recordings JOIN recording_frames ON recordings.id =
 recording_frames.recording_id GROUP BY recordings.id
 HAVING coalesce(count(recording_frames.frame_nr), %(param_1)s) > %(coalesce_1)s
 {'param_1': 0, 'coalesce_1': 2}
 
 and returns an empty list.
 
 If I execute this in pgAdmin:
 
 SELECT recordings.id
 FROM recordings
 LEFT JOIN recording_frames
 ON recordings.id = recording_frames.recording_id
 GROUP BY recordings.id
 HAVING (coalesce(count(recording_frames.frame_nr), 0) > 2)
 
 I get a list with 2 IDs in the recordings table, which is the expected result.
 
 I have no idea why SQLAlchemy won't give me the same result. If I change the
 criterion to > 0 I get the same result as with raw SQL. It seems SQLAlchemy,
 with the code above, somehow filters out the rows where COUNT returns nothing,
 despite the COALESCE.
 
 What am I doing wrong?
 




Re: [sqlalchemy] Bizzarre insert behavior: NULL constraint violation with non-null value, column changes every run.

2015-07-29 Thread Ladislav Lenart
Hello.

I don't know why but the problematic version uses bytes as keys in the params
dictionary (e.g. b'batch_id') whereas the working version uses strings (e.g.
'batch_id'). I am not a Python 3 expert but I think that the two types are
distinct and thus the search for a string fails. This would also explain why the
column in the error changes - because dictionary iteration order is
nondeterministic. Whatever key is searched for first will become the culprit.

Note however that these are just my assumptions...

HTH,

L.
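The bytes-vs-str guess above is easy to demonstrate: in Python 3 the two key types never compare equal, so a lookup with the "wrong" type fails even though the key "looks" identical. The params dict below is an illustrative fragment modeled on the log output in this thread, not the driver's actual internals.

```python
# A params dict keyed by bytes, as in the problematic log output.
params = {b'batch_id': 99, b'label': 'Testing insert'}

print(b'batch_id' in params)   # True
print('batch_id' in params)    # False: str 'batch_id' != bytes b'batch_id'

# A consumer expecting str keys therefore hits a missing-value path:
try:
    params['batch_id']
except KeyError:
    print('KeyError - roughly what the driver would see')
```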


On 28.7.2015 18:08, Bob Ternosky wrote:
 That did it.
 
 2015-07-28 12:06:16,283 INFO sqlalchemy.engine.base.Engine INSERT INTO
 corp.test_table (batch_id, batch_type, scheduled_date, status, emp_id, 
 bill_per,
 label, log_file, debug_file, completed_date) VALUES (:batch_id, :batch_type,
 :scheduled_date, :status, :emp_id, :bill_per, :label, :log_file, :debug_file,
 :completed_date)
 2015-07-28 12:06:16,283 INFO sqlalchemy.engine.base.Engine {'log_file': None,
 'batch_id': 99, 'debug_file': None, 'scheduled_date':
 datetime.datetime(2015, 7, 28, 12, 6, 16, 282779), 'label': 'Testing insert',
 'batch_type': 1, 'bill_per': 201501, 'status': 0, 'completed_date': None,
 'emp_id': 8329}
 2015-07-28 12:06:16,311 INFO sqlalchemy.engine.base.Engine COMMIT
 
 You just saved my sanity.
 
 Thanks a million!
 
 And thanks to Jonathan Vanasco, your suggestions didn't work, but I learned a
 few new settings.
 
 On Tuesday, July 28, 2015 at 11:54:14 AM UTC-4, Michael Bayer wrote:
 
 Just curious, can you try out cx_Oracle 5.1.3?   I've seen some problems
 reported with Py3K and cx_Oracle 5.2.
 
 On 7/28/15 11:17 AM, Bob Ternosky wrote:
 I'm new to SQLAlchemy and have been playing with it for a week. I've got
 many SELECT based pieces working and exercising properly, but have hit a
 huge wall when trying to test inserts. Worse, what's happening makes no
 sense at all.
 This will hopefully contain the full set of information needed. Any help
 would be greatly appreciated.

 The summary: When running a simple test insert it will fail with the 
 error:

cx_Oracle.IntegrityError: ORA-01400: cannot insert NULL into
 (CORP.TEST_TABLE.XXX)

 where XXX changes just about every time I run the insert statement - with
 identical data.

 My machine is running Linux Mint 17.2 64-bit

 Software:
 * Python 3.4.0 (with virtualenv)
 * SQLAlchemy 1.0.7
 * cx_Oracle 5.2

 Connecting to an Oracle 9i (9.2.0.6.0) database (joys of legacy systems).
 Using Oracle Instant Client 11.2.0 libraries

 An Oracle description of the table (the table is empty):

  desc test_table
 BATCH_ID   NOT NULL NUMBER(10,0)
 BATCH_TYPE NOT NULL NUMBER(2,0) 
 SCHEDULED_DATE NOT NULL DATE
 STATUS NOT NULL NUMBER(1,0) 
 EMP_ID NOT NULL NUMBER(10,0)
 BILL_PER   NOT NULL NUMBER(6,0) 
 LABEL  NOT NULL VARCHAR2(128)
 LOG_FILE   NULL VARCHAR2(256)
 DEBUG_FILE NULL VARCHAR2(256)
 COMPLETED_DATE NULL DATE

 The table resides in the CORP schema.



 Test script named: isolated.py

 
 # isolated.py
 import argparse
 import datetime

 from sqlalchemy import Column, create_engine, DateTime, insert, MetaData,
 Numeric, String, Table

 # Declare insert test table
 metadata = MetaData()
 t_test_table = Table(
 'test_table', metadata,
 Column('batch_id', Numeric(9, 0, asdecimal=False), primary_key=True),
 Column('batch_type', Numeric(2, 0, asdecimal=False), nullable=False),
 Column('scheduled_date', DateTime, nullable=False),
 Column('status', Numeric(1, 0, asdecimal=False), nullable=False),
 Column('emp_id', Numeric(10, 0, asdecimal=False), nullable=False),
 Column('bill_per', Numeric(6, 0, asdecimal=False), nullable=False),
 Column('label', String(128), nullable=False),
 Column('log_file', String(256)),
 Column('debug_file', String(256)),
 Column('completed_date', DateTime),
 schema='corp'
 )

 # Oracle Credentials
 USER   = 'REDACTED'
 PASSWD = 'REDACTED'
 SID= 'REDACTED'

 ###
 # Main
 ###
 parser = argparse.ArgumentParser(description='Test SQLAlchemy Insert')
 parser.add_argument('-c', '--cxoracle', dest='cxoracle', action='store_true',
 required=False, default=False,
 help='Use oracle+cx_oracle engine')
 parser.add_argument('-o', '--oracle', dest='oracle', action='store_true',
 required=False, default=False,
 help='Use oracle only engine')

 args = parser.parse_args()
 if not args.cxoracle and not args.oracle:
 parser.error('You must provide one of: [-c] [-o]')

 # Pick an Oracle connection method
 if args.cxoracle:
 LIBRARY = 
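The script is truncated here in the archive. As a hedged aside (not from the original post): the insert can be compiled against the Oracle dialect without any connection, which is a quick way to confirm that every NOT NULL column gets a matching named bind parameter. The table definition is repeated from the script above:

```python
from sqlalchemy import Column, DateTime, MetaData, Numeric, String, Table
from sqlalchemy.dialects import oracle

metadata = MetaData()
t_test_table = Table(
    'test_table', metadata,
    Column('batch_id', Numeric(9, 0, asdecimal=False), primary_key=True),
    Column('batch_type', Numeric(2, 0, asdecimal=False), nullable=False),
    Column('scheduled_date', DateTime, nullable=False),
    Column('status', Numeric(1, 0, asdecimal=False), nullable=False),
    Column('emp_id', Numeric(10, 0, asdecimal=False), nullable=False),
    Column('bill_per', Numeric(6, 0, asdecimal=False), nullable=False),
    Column('label', String(128), nullable=False),
    Column('log_file', String(256)),
    Column('debug_file', String(256)),
    Column('completed_date', DateTime),
    schema='corp',
)

# Compile the INSERT against the Oracle dialect with no connection at all;
# each column should appear once, paired with a named bind parameter.
sql = str(t_test_table.insert().compile(dialect=oracle.dialect()))
print(sql)
```

If a NOT NULL column is missing from the compiled statement (or binds are matched up by position rather than by name in the driver), that would be consistent with the "random" ORA-01400 above.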

Re: [sqlalchemy] Unlogged Postgresql Table

2014-12-17 Thread Ladislav Lenart
Hello.


On 17.12.2014 16:30, Danish Abdullah wrote:
 What's the recommended way to create, assuming there is a way, an unlogged
 postgresql table through the ORM?

This might be of help:

http://stackoverflow.com/questions/13455207/sqlalchemy-temporary-table-with-declarative-base

Just replace TEMPORARY with UNLOGGED.
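Following the linked recipe, the simplest version does not even need a custom compiler: the `prefixes` argument of `Table` is emitted verbatim between CREATE and TABLE (table name here is illustrative):

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.schema import CreateTable

metadata = MetaData()
# 'UNLOGGED' is passed through verbatim, producing
# CREATE UNLOGGED TABLE events (...)
events = Table(
    'events', metadata,
    Column('id', Integer, primary_key=True),
    prefixes=['UNLOGGED'],
)

ddl = str(CreateTable(events))
print(ddl)
```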


HTH,

Ladislav Lenart


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] [Bug?][v0.9.8][postgres] Table.create(): checkfirst=True does NOT work for temporary tables

2014-12-04 Thread Ladislav Lenart
Hello.

The following code crashes:

# db init...
meta = MetaData()
foo = Table('tmp_foo', meta,
Column('id', Integer, primary_key=True),
prefixes=['TEMPORARY'],
)
conn = session.connection()
foo.create(conn, checkfirst=True)
foo.create(conn, checkfirst=True)


This is because the 'check-first' logic emits the following SQL:

SELECT relname
FROM
pg_class c
JOIN pg_namespace n ON n.oid=c.relnamespace
WHERE
n.nspname=CURRENT_SCHEMA()
AND relname='tmp_foo'


The culprit is the call to CURRENT_SCHEMA() because temporary tables in
postgres reside in a special schema.

Is there a fix/workaround for this?


Thank you in advance,

Ladislav Lenart



Re: [sqlalchemy] [Bug?][v0.9.8][postgres] Table.create(): checkfirst=True does NOT work for temporary tables

2014-12-04 Thread Ladislav Lenart
Wow! Thank you!

I guess this is what near-light-speed support looks like in practice! :-)

I stumbled upon this issue while I was trying to figure out how to work with
temporary tables in SQLAlchemy. Final version of my code does not use the
checkfirst flag at all, because I know when to create and when to drop the temp
table.

However it is nice to know that it will work correctly in all circumstances.

Please, keep up the excellent work, because SQLAlchemy is probably the best
library I have ever worked with! :-)


Thank you,

Ladislav Lenart


On 4.12.2014 18:03, Michael Bayer wrote:
 
 On Dec 4, 2014, at 9:36 AM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 The following code crashes:

 # db init...
 meta = MetaData()
 foo = Table('tmp_foo', meta,
Column('id', Integer, primary_key=True),
prefixes=['TEMPORARY'],
 )
 conn = session.connection()
 foo.create(conn, checkfirst=True)
 foo.create(conn, checkfirst=True)


 This is because the 'check-first' logic emits the following SQL:

 SELECT relname
 FROM
pg_class c
JOIN pg_namespace n ON n.oid=c.relnamespace
 WHERE
n.nspname=CURRENT_SCHEMA()
AND relname='tmp_foo'


 The culprit is the call to CURRENT_SCHEMA() because temporary tables in
 postgres reside in a special schema.

 Is there a fix/workaround for this?
 
 issue 
 https://bitbucket.org/zzzeek/sqlalchemy/issue/3264/has_table-in-postgresql-doesnt-work-for
  has been created and fixed for 1.0.   
 
 For now, I’d avoid using “checkfirst” for a temporary table.  As this is only 
 local to a transaction it should be straightforward just to make sure the 
 code is only calling create() once.  If this is unavoidable, then place the 
 call to create() within a conditional that runs the new query: 
 
 select relname from pg_class c join pg_namespace n on 
 n.oid=c.relnamespace where 
 pg_catalog.pg_table_is_visible(c.oid) 
 and relname=:name
 
 PG’s behavior unfortunately allows a non-temporary table to silently 
 overwrite a temporary one, so this change is a significant behavioral change 
 to the checkfirst flag.




[sqlalchemy] [Q][postgres] Temporary table with ON COMMIT

2014-12-04 Thread Ladislav Lenart
Hello.

How can I specify ON COMMIT... for a TEMP table in SA?

I.e. the following Python code

meta = MetaData()
foo = Table('tmp_foo', meta,
Column('id', Integer, primary_key=True),
Column('val', Integer, nullable=False),
prefixes=['TEMPORARY'],
)
conn = session.connection()
foo.create(conn)


will issue the following SQL

CREATE TEMPORARY TABLE tmp_foo(
id integer PRIMARY KEY,
val integer NOT NULL
);


but what if I want to issue this instead

CREATE TEMPORARY TABLE tmp_foo(
id integer PRIMARY KEY,
val integer NOT NULL
) ON COMMIT DROP; -- e.g. this line


I was unable to figure this out on my own. Is this even possible?

Fortunately for me, I don't need this feature (yet), the default is good enough
for my purposes.
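One possible approach (a sketch, not a built-in feature of the versions discussed here): hook the compilation of CreateTable for PostgreSQL via the `@compiles` extension and read a custom marker out of `Table.info`. The `'on_commit'` info key is an invented convention, not a SQLAlchemy API:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import CreateTable

@compiles(CreateTable, 'postgresql')
def _create_table_with_on_commit(element, compiler, **kw):
    # element.element is the Table; look for our custom marker in its .info dict
    sql = compiler.visit_create_table(element, **kw)
    on_commit = element.element.info.get('on_commit')
    if on_commit:
        sql = sql.rstrip() + ' ON COMMIT %s\n' % on_commit
    return sql

meta = MetaData()
foo = Table(
    'tmp_foo', meta,
    Column('id', Integer, primary_key=True),
    Column('val', Integer, nullable=False),
    prefixes=['TEMPORARY'],
    info={'on_commit': 'DROP'},  # hypothetical marker consumed by the hook above
)

sql = str(CreateTable(foo).compile(dialect=postgresql.dialect()))
print(sql)
```

Compiling with the PostgreSQL dialect (no connection needed) should now append `ON COMMIT DROP` after the closing parenthesis.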


Thank you in advance,

Ladislav Lenart



[sqlalchemy] [Q] Relative speed of different querying methods

2014-11-12 Thread Ladislav Lenart
Hello.

I am in the process of eliminating bottlenecks in our application.

I have measured relative speed of the following querying methods:
limit = 1
q_orm = session.query(Foo).limit(limit)
q_id = session.query(Foo.id).limit(limit) # primary key
stmt = q_id.statement
sql = "SELECT foo.id FROM foo LIMIT {}".format(limit)

Execution:
r = q_orm.all()
r = q_id.all()
r = session.execute(stmt).fetchall()
r = session.execute(sql).fetchall()

I have found the following relative speeds of the above methods:
q_orm   1x  (slowest, expected)
q_id   13x  (relatively slow, NOT expected)
stmt   70x  (fastest, expected)
sql70x  (fastest, expected)

I am curious about a relative slowness of the second method (q_id). What is it
doing? I thought there should be NO ORM overhead given that the query does NOT
return ORM instances.

Given the above results, is the following style reliable?

# Imagine a rather complex query with several join-s and
# filter conditions. Specifically, the query HAS various
# literal arguments (i.e. Foo.bar == bar).
q = session.query(Foo.id).distinct().join(...).filter(...)
r = session.execute(q.statement).fetchall()
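A self-contained sqlite sketch of the two execution styles (class name hypothetical; this is not a benchmark, and versions since 1.4 have narrowed the gap considerably):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Foo() for _ in range(1000)])
    session.commit()

    q_id = session.query(Foo.id).limit(1000)
    # Query.all() still runs each row through ORM-level result processing,
    # which is where the observed overhead of the q_id case comes from.
    rows_query = q_id.all()
    # Executing the underlying statement skips that per-row machinery.
    rows_stmt = session.execute(q_id.statement).fetchall()

print(len(rows_query), len(rows_stmt))
```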


Thank you in advance,

Ladislav Lenart



Re: [sqlalchemy] [Q] Semantic difference in not equal operator

2014-11-03 Thread Ladislav Lenart
Hello.


On 31.10.2014 18:33, Simon King wrote:
 At a guess, I would say that the Python code Foo.bar != bar means 
 that you are looking for Foo objects that aren't related to bar.
 This includes Foos that are related to a *different* Bar instance,
 as well as Foos that aren't related to *any* bar instance. In sql
 terms, foo.bar_id != bar.id OR foo.bar_id is NULL.
 
 Whereas in the other instance, you've asked for something more
 specific, so that's what SA has given you.

Thank you for the explanation. Seems logical. I have just never looked at it
this way. The generated SQL has surprised me because in my model:
* Foo.bar cannot be NULL by definition and
* bar instance (the query parameter used in the filter()) already has an id.

Also, the SQL SELECT with an OR condition might confuse the DB query planner into
choosing a suboptimal plan.

I have learned new lesson: do not use SA object references when the ultimate
goal is to create a SQL SELECT.
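Simon's explanation can be checked without touching a database by compiling both filter styles (Foo/Bar mapping below is a hypothetical reconstruction of the one discussed):

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Bar(Base):
    __tablename__ = 'bar'
    id = Column(Integer, primary_key=True)

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    bar_id = Column(Integer, ForeignKey('bar.id'))
    bar = relationship(Bar)

bar = Bar(id=1)
session = Session()

# plain column comparison: only the inequality
q_col = session.query(Foo).filter(Foo.bar_id != bar.id)
# relationship comparison: also admits rows where the FK is NULL
q_rel = session.query(Foo).filter(Foo.bar != bar)

print(q_col)
print(q_rel)  # the second WHERE clause gains "OR foo.bar_id IS NULL"
```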


Thank you once again to both of you,

Ladislav Lenart

PS: Sorry for the delay. I was offline during the weekend.


 On 31 Oct 2014, at 16:36, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 I have just noticed (by accident) a semantic difference of the not-equal
 operator (!=), depending on the style used. For example a query

session.query(Foo).filter(Foo.bar_id != bar.id)

 produces SQL like (expected)

SELECT *
FROM foo
WHERE foo.bar_id != %(bar_id_1)s

 whereas a query

session.query(Foo).filter(Foo.bar != bar)

 produces SQL like (NOT expected)

SELECT *
FROM foo
WHERE (
foo.bar_id != %(bar_id_1)s
OR foo.bar_id IS NULL
)

 I thought that

Foo.bar != bar

 is just a shortcut for

   Foo.bar_id != bar.id

 Is this difference intentional?
 May I know why (i.e. the motivation)?


 Thanks in advance,

 Ladislav Lenart




[sqlalchemy] [Q] Semantic difference in not equal operator

2014-10-31 Thread Ladislav Lenart
Hello.

I have just noticed (by accident) a semantic difference of the not-equal
operator (!=), depending on the style used. For example a query

session.query(Foo).filter(Foo.bar_id != bar.id)

produces SQL like (expected)

SELECT *
FROM foo
WHERE foo.bar_id != %(bar_id_1)s

whereas a query

session.query(Foo).filter(Foo.bar != bar)

produces SQL like (NOT expected)

SELECT *
FROM foo
WHERE (
foo.bar_id != %(bar_id_1)s
OR foo.bar_id IS NULL
)

I thought that

Foo.bar != bar

is just a shortcut for

   Foo.bar_id != bar.id

Is this difference intentional?
May I know why (i.e. the motivation)?


Thanks in advance,

Ladislav Lenart



Re: [sqlalchemy] how to tell if a relationship was loaded or not ?

2014-06-12 Thread Ladislav Lenart
Hello.

Out of curiosity. Do you want this during (unit) testing or in the application
logic itself?

Ladislav Lenart


On 11.6.2014 20:17, Jonathan Vanasco wrote:
 I can't find this in the API or by using `inspect` on an object.
 
 I'm trying to find out how to tell if a particular relationship was loaded or 
 not.
 
 ie, I loaded Foo from the ORM, and want to see if foo.bar was loaded.
 
 I thought it might have been the `.attrs[column].state` , which is an
 InstanceState, but it doesn't appear to be so.
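For reference, the loaded/unloaded state is exposed via inspect() on the instance through `InstanceState.unloaded`; a minimal sqlite sketch with a hypothetical Foo/Bar mapping:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, inspect
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Bar(Base):
    __tablename__ = 'bar'
    id = Column(Integer, primary_key=True)

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    bar_id = Column(Integer, ForeignKey('bar.id'))
    bar = relationship(Bar)  # default lazy='select'

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
session.add(Foo(bar=Bar()))
session.commit()

foo = session.query(Foo).first()
# InstanceState.unloaded lists attribute names not yet loaded
assert 'bar' in inspect(foo).unloaded
foo.bar  # touching the attribute fires the lazy load
assert 'bar' not in inspect(foo).unloaded
```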
 
 




Re: [sqlalchemy] [Q] How to fix a circular dependency error?

2014-05-28 Thread Ladislav Lenart
On 27.5.2014 22:09, Michael Bayer wrote:
 some improvement was made to this formatting in 0.8 or so, though even now 
 it's not very easy to read directly.   
 
 Typically I try to look at the classes and relationships being mentioned in 
 the error and then inspect the mappings.  In this case it appears like the 
 cycle is just on the Partner class and the bazaar relationship alone.   
 Almost as though maybe you set a Partner.bazaar to be self, e.g. pointing 
 to itself.

Thank you for the explanation.

I suspected that the cycles attribute tells me everything I need to know.

You are also right about the following assertion:

# p is a Partner.
p.bazaar is p # True

However, the operation in question does not modify this attribute at all. The
assertion is True before, during and after the operation. Does the error mean
that the code is in fact trying to modify the bazaar attribute? Or is the error
triggered simply by the fact that a cycle exists, no matter whether the
particular relationship is being updated or not?
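A minimal sketch of the self-cycle and the post_update fix (the Partner mapping here is a hypothetical reduction of the real one):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Partner(Base):
    __tablename__ = 'partner'
    id = Column(Integer, primary_key=True)
    bazaar_id = Column(Integer, ForeignKey('partner.id'))
    # post_update tells the unit of work to emit the self-referential FK
    # as a separate UPDATE after the INSERT, breaking the flush cycle
    bazaar = relationship('Partner', remote_side=[id], post_update=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
p = Partner()
p.bazaar = p      # the row points at itself: this is the cycle
session.add(p)
session.commit()  # without post_update, this flush raises CircularDependencyError
print(p.id, p.bazaar_id)
```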


Thank you in advance,

Ladislav Lenart


 On May 27, 2014, at 5:48 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 A specific DML request crashed with a CircularDependencyError on our test 
 env. I
 know that I should set post_update=True to the problematic relationship. The
 thing is I do not see the cycle and the error description is quite big. Could
 you please provide some guidance as to how to find the cycle from the 
 provided
 error description?

 The error has the following two attributes:
 * cycles = set([
  SaveUpdateState(Partner at 0x7f2be28113d0),
  ProcessState(
  ManyToOneDP(Partner.bazaar),
  Partner at 0x7f2be28113d0,
  delete=False
  )
  ])
 * all_edges = set([...]) # about a text page of data

 but I do not understand them.

 If it matters, I am still using SA v0.7.

 If you need more info, just ask.


 Thank you in advance,

 Ladislav Lenart

 




[sqlalchemy] [Q] How to fix a circular dependency error?

2014-05-27 Thread Ladislav Lenart
Hello.

A specific DML request crashed with a CircularDependencyError on our test env. I
know that I should set post_update=True to the problematic relationship. The
thing is I do not see the cycle and the error description is quite big. Could
you please provide some guidance as to how to find the cycle from the provided
error description?

The error has the following two attributes:
* cycles = set([
  SaveUpdateState(Partner at 0x7f2be28113d0),
  ProcessState(
  ManyToOneDP(Partner.bazaar),
  Partner at 0x7f2be28113d0,
  delete=False
  )
  ])
* all_edges = set([...]) # about a text page of data

but I do not understand them.

If it matters, I am still using SA v0.7.

If you need more info, just ask.


Thank you in advance,

Ladislav Lenart



Re: [sqlalchemy] a [probably] stupid query question

2014-03-20 Thread Ladislav Lenart
Hello.


On 20.3.2014 12:13, Richard Gerd Kuesters wrote:
 thanks again, AM. no need to go to this extra trouble, as I was just asking this
 as a matter of curiosity. I thought it was something rather simple as hell :)

I think the following:

q = session.query(A, B).filter(A.b_id == B.id)
print q
# for eachA, eachB in q...

should produce what you want:

SELECT A.*, B.* FROM A, B WHERE A.b_id = B.id
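Compiled end to end (A/B mapping below is a minimal sketch), querying two entities at once puts both tables in the FROM list, with the filter supplying the join condition:

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    b_id = Column(Integer, ForeignKey('b.id'))

# two entities -> both appear in the FROM list;
# the filter supplies the implicit join condition
q = Session().query(A, B).filter(A.b_id == B.id)
print(q)
```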


HTH,

Ladislav Lenart



Re: [sqlalchemy] [SA0.7] Possible feature request for subqueryload?

2013-09-26 Thread Ladislav Lenart
Hello.


On 25.9.2013 17:15, Michael Bayer wrote:
 
 On Sep 25, 2013, at 10:11 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 Would it be possible to make these two forms

session.query(cls).options(
subqueryload(cls.foos),
subqueryload(cls.foos, Foo.bar),
)

 and

session.query(cls).options(
subqueryload_all(cls.foos, Foo.bar)
)

 completely equivalent, i.e. subqueryload_all being just a shortcut of the 
 former
 if the paths have the above pattern (i.e. if one subqueryload is subsumed by
 another)?
 
 unless there's some bug I'm unaware of, those two forms are exactly 
 equivalent.   XYZload_all(a.b.c) is a shortcut to XYZload(a), 
 XYZLoad(a.b), XYZLoad(a.b.c).In 0.9 I'm working out a new system 
 backing the loader options that will allow more intuitive patterns, e.g. 
 load(cls).subqueryload(foos).subqueryload(bar).

Hmm, that's not what I see here. I captured SQL Alchemy debug log (echo=True) of
one such query and modified it to be easier to read, added corresponding python
code and also my remarks/questions. Please look at it, though it's rather 
long...


First, a query that returns ids of ALL Partner instances of the result (this is
how I do windowed queries).

[python]
session.query(distinct(Partner.id)).filter(
Partner.visible == True,
Partner.sponsor_id == self.id,
)

[sql]
SELECT DISTINCT partner.id AS anon_1
FROM partner
WHERE partner.visible = true AND partner.sponsor_id = %(sponsor_id_1)s

This result list of (partner) ids is then divided to windows of size 100. Each
window is queried individually. Following are the queries to completely populate
one such window.


Main query to populate data of max 100 partners (one window).

[python]
# win is a list of 100 ids.
session.query(Partner).filter(
Partner.id.in_(win)
).options(
joinedload(Partner.subject, innerjoin=True),
joinedload(Partner.subject, Subject.subject_personal, innerjoin=True),
joinedload(Partner.subject, Subject.subject_personal, 
SubjectPersonal.address),
subqueryload(Partner.subject, Subject.contacts),
subqueryload(Partner.subject, Subject.contacts, Contact.contact_personal),
subqueryload(Partner.subject, Subject.contacts, Contact.contact_personal,
ContactPersonal.contact_address),
subqueryload(Partner.subject, Subject.contacts, Contact.contact_personal,
ContactPersonal.permanent_address),
subqueryload_all(Partner.subject, Subject.tag_subjects, TagSubject.tag),
subqueryload_all(Partner.partner_regions),
)

[sql]
SELECT
partner.state AS partner_state,
partner.position AS partner_position,
partner.own_points AS partner_own_points,
partner.net_points AS partner_net_points,
partner.net_admin AS partner_net_admin,
partner.team_leader AS partner_team_leader,
partner.bazaar_owner AS partner_bazaar_owner,
partner.bazaar_extractor AS partner_bazaar_extractor,
partner.desired_team_leader AS partner_desired_team_leader,
partner.desired_bazaar_owner AS partner_desired_bazaar_owner,
partner.desired_bazaar_extractor AS partner_desired_bazaar_extractor,
partner.visible AS partner_visible,
partner.sms_credits AS partner_sms_credits,
partner.id AS partner_id,
partner.external_id AS partner_external_id,
partner.subject_id AS partner_subject_id,
partner.sponsor_id AS partner_sponsor_id,
partner.team_id AS partner_team_id,
partner.bazaar_id AS partner_bazaar_id,
partner.higher_bazaar_id AS partner_higher_bazaar_id
FROM partner
WHERE partner.id IN (...)


[python]
joinedload(Partner.subject, innerjoin=True),
joinedload(Partner.subject, Subject.subject_personal, innerjoin=True),
joinedload(Partner.subject, Subject.subject_personal, SubjectPersonal.address,
innerjoin=True),

corresponds to

[sql]
SELECT
subject.created_at AS subject_created_at,
subject.id AS subject_id,
subject.subject_personal_id AS subject_subject_personal_id,
subject.subject_corporate_id AS subject_subject_corporate_id,
subject.creator_id AS subject_creator_id,
anon_1.partner_subject_id AS anon_1_partner_subject_id,
address_1.street AS address_1_street,
address_1.postal_code AS address_1_postal_code,
address_1.city AS address_1_city,
address_1.country AS address_1_country,
address_1.id AS address_1_id,
subject_personal_1.title_prefix AS subject_personal_1_title_prefix,
subject_personal_1.title_suffix AS subject_personal_1_title_suffix,
subject_personal_1.first_name AS subject_personal_1_first_name,
subject_personal_1.last_name AS subject_personal_1_last_name,
subject_personal_1.identification_number AS
subject_personal_1_identification_number,
subject_personal_1.birth_date AS subject_personal_1_birth_date,
subject_personal_1.mobile_personal AS subject_personal_1_mobile_personal,
subject_personal_1.phone_personal AS subject_personal_1_phone_personal,
subject_personal_1.phone_home AS subject_personal_1_phone_home

Re: [sqlalchemy] [SA0.7] Possible feature request for subqueryload?

2013-09-26 Thread Ladislav Lenart
Thank you for pointing this out!

I did not realize that it had such consequences, though it is perfectly logical.
My bad. All joinedloads are part of the main query now, as they should be.

Also, ignore my remark about Tag info not being loaded. When there actually are
any tags, they get loaded as expected (though I still don't understand why the
SQL is different in the two cases).

The only thing remaining is that the following subqueryloads are loaded in
isolation:

 subqry -> /subject/contacts/
 subqry -> /subject/contacts/contact_personal/
 subqry -> /subject/contacts/contact_personal/contact_address/
 subqry -> /subject/contacts/contact_personal/permanent_address/

And I just realized / understood why! Because subqueryload uses JOIN and all
above relations can be NULL. Is this correct?

I have two types of contacts, personal and corporate. The above loads:
* ALL Subject.contacts (i.e. personal AND CORPORATE)
* The rest only work with the personal.


Thank you very much! It has been enlightening as always,

Ladislav Lenart


On 26.9.2013 16:13, Michael Bayer wrote:
 
 On Sep 26, 2013, at 9:30 AM, Ladislav Lenart lenart...@volny.cz wrote:
 

 Your thoughts on this? Do you see anything suspicious?

 BTW the new system for 0.9 looks very nice!
 
 OK I only started reading the first few queries, but immediately what I think 
 is causing confusion here is that your loading options are conflicting with 
 each other, so the joinedload() you're looking for at the top is being 
 partially cancelled out:
 
 session.query(Partner).filter(
Partner.id.in_(win)
 ).options(
joinedload(Partner.subject, innerjoin=True),
joinedload(Partner.subject, Subject.subject_personal, innerjoin=True),
joinedload(Partner.subject, Subject.subject_personal, 
 SubjectPersonal.address),
subqueryload(Partner.subject, Subject.contacts),
subqueryload(Partner.subject, Subject.contacts, Contact.contact_personal),
subqueryload(Partner.subject, Subject.contacts, Contact.contact_personal, 
 ContactPersonal.contact_address),
subqueryload(Partner.subject, Subject.contacts, Contact.contact_personal, 
 ContactPersonal.permanent_address),
subqueryload_all(Partner.subject, Subject.tag_subjects, TagSubject.tag),
subqueryload_all(Partner.partner_regions),
 )
 
 
 the directive subqueryload_all(Partner.subject, Subject.tag_subjects, 
 TagSubject.tag) is after the joinedloads you have above, so the 
 joinedload(Partner.subject) is cancelled out by that, hence all the 
 additional joinedload() that build on top of that is pushed out to be part of 
 the subqueryloads.   
 
 you can read these directives just like paths.  The path is a unique key in 
 a dictionary.  as the loading traverses along each relationship, it looks up 
 the path, and the kind of loader called for.  the above is the equivalent of:
 
 joined -> /subject/
 joined -> /subject/subject_personal/
 joined -> /subject/subject_personal/address/
 subqry -> /subject/contacts/
 subqry -> /subject/contacts/contact_personal/
 subqry -> /subject/contacts/contact_personal/contact_address/
 subqry -> /subject/contacts/contact_personal/permanent_address/
 subqry -> /subject/  (overrides the joined)
 subqry -> /subject/tag_subjects/
 subqry -> /subject/tag_subjects/tag/
 subqry -> /partner_regions/
 
 




[sqlalchemy] [SA0.7] Possible feature request for subqueryload?

2013-09-25 Thread Ladislav Lenart
([
subqueryload_all(*prefix + [alias.tag_subjects, 
TagSubject.tag]),
])
if fetch_all:
options.extend([
subqueryload(*prefix + [alias.contacts, Contact.phones]),
subqueryload(*prefix + [alias.contacts, Contact.emails]),
])
return options


Thank you,

Ladislav Lenart



Re: [sqlalchemy] Unit testing, mocking and dependency injection with SA Declarative.

2013-09-11 Thread Ladislav Lenart
However, be aware of differences between PostgreSQL and sqlite. For example
sqlite does not support recursive CTEs. But I am sure there's more.


Ladislav Lenart
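Toph's suggestion below, loading the connection string from a config file so tests can point at sqlite, can be sketched like this (the .ini section and key names are assumptions):

```python
import configparser
import os
import tempfile

from sqlalchemy import create_engine

def engine_from_ini(path, section='database'):
    """Build an engine from connection settings kept in a .ini file."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return create_engine(cfg[section]['url'])

# a test config pointing at in-memory sqlite (assumed layout)
with tempfile.NamedTemporaryFile('w', suffix='.ini', delete=False) as f:
    f.write('[database]\nurl = sqlite:///:memory:\n')
    path = f.name

engine = engine_from_ini(path)
os.unlink(path)
print(engine.url)  # sqlite:///:memory:
```

In production the same function would read a PostgreSQL URL from the deployed config; nothing else in the application changes.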


On 10.9.2013 18:43, Toph Burns wrote:
 Could you use an in-memory, sqlite db for your testing?  For our applications,
 we have an initialization function that loads the database connection strings
 from a config (.ini) file, passing those on to create_engine.  In production
 it's a postgresql connection string, for test, it's a sqlite:///:memory:'
 
 
 Toph Burns | Software Engineer
 5885 Hollis St.  Suite 100
 Emeryville, CA 94608
 510-597-4797
 bu...@amyris.com
 
 *From:* sqlalchemy@googlegroups.com [sqlalchemy@googlegroups.com] on behalf of
 Michel Albert [exh...@gmail.com]
 *Sent:* Tuesday, September 10, 2013 1:46 AM
 *To:* sqlalchemy@googlegroups.com
 *Subject:* [sqlalchemy] Unit testing, mocking and dependency injection with SA
 Declarative.
 
 I am trying to wrap my head around how to do Dependency Injection with
 SQLAlchemy and I am walking in circles.
 
 I want to be able to mock out SA for most of my tests. I trust SA and don't 
 want
 to test serialisation into the DB. I just want to test my own code. So I was
 thinking to do dependency injection, and mock out SA during testing.
 
 But I don't know what to mock out, how and when to set up the session 
 properly,
 without doing it at the module level (which causes unwanted side-effects only 
 by
 importing the module).
 
 The only solution which comes to mind is to have one singleton which deals
 with that. But that feels very unpythonic to me and I am wondering if there's 
 a
 better solution.
 
 I also saw that create_engine has an optional module kwarg, which I could
 mock out. But then SA begins complaining that the return types are not 
 correct.
 And I don't want to specify return values for every possible db-module call.
 That's way out of scope of my tests. I am not calling anything on the 
 db-module.
 That's SA's job, and, as said, I already trust SA.
 
 Whenever I work on this I always run into the session_maker initialisation as
 well. The examples do this at the module level, which really makes me feel
 uneasy.
 
 Any tips? Just prodding myself in the right direction might help me out 
 enough.
 




Re: [sqlalchemy] Occasional IntegrityError when identifying model not by its ID

2013-09-03 Thread Ladislav Lenart
On 3.9.2013 14:28, Simon King wrote:
 OK, I agree that doesn't sound like a race as such. It sounds more
 like some connection state is being shared between requests.

That might be it. We had a very similar bug: we stored ORM objects at the module
level and did not immediately realize that these are shared among web requests
and db sessions.

Ladislav Lenart


 I haven't used Tornado before and don't know much about it. Its main
 feature is that it uses asynchronous IO, unlike most other python web
 servers that use threads for request handling. How do you make sure
 that a single request uses a single SQLAlchemy session, and that the
 session gets closed properly at the end of the request?
 
 On Tue, Sep 3, 2013 at 1:08 PM, Ofir Herzas herz...@gmail.com wrote:
 I agree, but when there is very little load on the system (e.g. 5 logged-in
 users), what are the odds that the same user will get the same error 10
 times?

 On 3 Sep, 2013 3:05 PM, Simon King si...@simonking.org.uk wrote:

 Race conditions can happen at any time, not just when the system is
 under heavy load. You only need 2 requests to arrive at approximately
 the same time to trigger a race condition.

 On Tue, Sep 3, 2013 at 12:49 PM, herzaso herz...@gmail.com wrote:
 Another important thing - I don't think it's a race condition - the load
 on
 the system isn't that high and users getting this error continue to get
 it
 all the time


 On Tuesday, September 3, 2013 2:47:26 PM UTC+3, herzaso wrote:

 I've added session.close in my get method just to see if it solves the
 issue and it doesn't!
 I'm so frustrated with this issue... not only do I feel helpless, I
 don't
 have any clue on how to get around it ...

 What if I make the change without the session? Would the session pick
 up
 the changes on its first query?

 On Monday, September 2, 2013 3:58:02 PM UTC+3, Simon King wrote:

 I'm no expert on isolation levels - I was just trying to help you
 understand what the problem was :-)

 Fixing it really depends on how your application is supposed to work
 in the face of concurrent requests. For this specific part of the
 application, you probably want to be able to see the Foo object that
 was created by the other transaction. Reducing the transaction
 isolation is probably the easiest way to do that, but might have
 knock-on effects in your application, so you ought to think carefully
 before doing it.

 The alternative is to discard the existing session state when you get
 into this situation (via session.close) and start a new transaction.
 However, it wouldn't be appropriate to do this inside your get
 method - session lifecycle operations like this really belong at an
 outer scope, so making a change like this may require a certain amount
 of restructuring.

 Basically, dealing with concurrent operations is hard, and SQLAlchemy
 isn't going to magically make it any easier I'm afraid.

 Simon
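For reference, the isolation level Simon mentions can be set when the engine is
created. A small sketch; SQLite stands in here only so it runs anywhere (on
MySQL the same keyword accepts e.g. 'READ COMMITTED'):

```python
from sqlalchemy import create_engine, text

# isolation_level is applied to every connection the engine hands out.
engine = create_engine('sqlite://', isolation_level='SERIALIZABLE')

with engine.connect() as conn:
    value = conn.execute(text('SELECT 1')).scalar()
```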

 On Mon, Sep 2, 2013 at 1:40 PM, herzaso her...@gmail.com wrote:
 I'm sorry, it was a misunderstanding on my part regarding the
 transactions.
 So what are you saying? that I should replace the transaction
 isolation
 level?


 On Monday, September 2, 2013 3:29:25 PM UTC+3, Simon King wrote:

 What exactly do you mean by not using transactions? The Session
 always
 works within a transaction:




 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#managing-transactions

 I assume you are also using InnoDB tables.

 On Mon, Sep 2, 2013 at 1:19 PM, herzaso her...@gmail.com wrote:
 I do have it set as REPEATABLE READ.
 However, I don't use transactions in sqlalchemy


 On Monday, September 2, 2013 3:08:58 PM UTC+3, Simon King wrote:

 Do you know what transaction isolation level you are running at?
 The
 default apparently is REPEATABLE READ:





 http://dev.mysql.com/doc/refman/5.6/en/set-transaction.html#isolevel_repeatable-read

 The important sentence in that link is:

   All consistent reads within the same transaction read the
 snapshot
 established by the first read

 When you query the database for the first time, to see if the
 entity
 already exists, you are setting that initial snapshot. If you
 run
 the
 same query again (such as in your exception handler), you will
 get
 the
 same results, whether or not another connection has inserted a
 matching row in the meantime.

 Simon
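The retry pattern Simon describes can be sketched as follows; this is a hedged
illustration with a toy Foo model, not the poster's actual code. The point is
that rollback() ends the current transaction, so the retry query reads a fresh
snapshot:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

def get_or_create(session, **kwargs):
    obj = session.query(Foo).filter_by(**kwargs).first()
    if obj is not None:
        return obj
    try:
        obj = Foo(**kwargs)
        session.add(obj)
        session.commit()
        return obj
    except IntegrityError:
        # Another transaction won the race. rollback() ends this
        # transaction, so the retry query sees a fresh snapshot that
        # includes the row the other transaction inserted.
        session.rollback()
        return session.query(Foo).filter_by(**kwargs).one()

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
```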

 On Mon, Sep 2, 2013 at 12:54 PM, herzaso her...@gmail.com
 wrote:
 I'm not sure what to make of the results:
 On the first connection, I ran BEGIN and INSERT and both were
 successful,
 but when I tried the INSERT statement on the second
 connection, I
 got
 ERROR
 1205 (HY000): Lock wait timeout exceeded; try restarting
 transaction.
 Running the same query on the first connection produced the
 required
 result
 which is ERROR 1062 (23000): Duplicate entry
 After the ROLLBACK on the first connection, the INSERT
 statement
 worked
 well
 on the second connection

 Regarding your second remark, the answer is yes, the error was
 due to
 the
 unique constraint

Re: [sqlalchemy] How to make a SQLAlchemy query using exists with a correlated subquery containing joins?

2013-08-08 Thread Ladislav Lenart
Hello.

This is the correct solution. I think you HAVE to specify the correlation
between the exists() subquery and the main query. But I am no SA expert.

I would suggest to add id (a sequence) as your primary key and use it as FK
everywhere. You can turn the current composite primary key into a composite
UNIQUE constraint to preserve the integrity check.
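A sketch of the suggested schema change, with hypothetical table and column
names; the old composite primary key becomes a UNIQUE constraint:

```python
from sqlalchemy import Column, Integer, UniqueConstraint, create_engine, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Catalogitem(Base):
    __tablename__ = 'catalogitem'
    # surrogate integer primary key, used as the FK target everywhere
    id = Column(Integer, primary_key=True)
    plant_id = Column(Integer, nullable=False)
    article_id = Column(Integer, nullable=False)
    # the former composite primary key survives as a UNIQUE constraint
    __table_args__ = (UniqueConstraint('plant_id', 'article_id'),)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

pk_cols = inspect(engine).get_pk_constraint('catalogitem')['constrained_columns']
```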

HTH,

Ladislav Lenart


On 8.8.2013 09:25, Etienne Rouxel wrote:
 I have managed to add the missing SQL part with the following query:
 
 e = session.query(Plant).\
 join(Article, Plant.articles).\
 join(Catalogitem, Article.catalogitems).\
 filter(
 Catalogitem.marketingseason == marketingseason, 
 Plant.taxon_id == Taxon.id 
 ).\
 exists()
 
 taxon = session.query(Taxon).filter(e)
 
 
 However, I would like (if possible) to fully take advantage of SQLAlchemy and
 avoid writing the test with the columns explicitly. Indeed, I have composite
 primary keys with 4 columns in some of my other real case scenario so that 
 would
 be great if I could say something like: Plant.taxon == Taxon of the enclosing 
 query.
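One relationship-based alternative (assuming the relationships are configured on
the models) is the any() operator, which emits the correlated EXISTS itself, so
the join columns never appear in application code. A cut-down runnable sketch
with just two of the levels from the thread:

```python
from sqlalchemy import create_engine, Column, Integer, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Taxon(Base):
    __tablename__ = 'taxon'
    id = Column(Integer, primary_key=True)
    plants = relationship('Plant', back_populates='taxon')

class Plant(Base):
    __tablename__ = 'plant'
    id = Column(Integer, primary_key=True)
    taxon_id = Column(Integer, ForeignKey('taxon.id'))
    taxon = relationship('Taxon', back_populates='plants')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

t1, t2 = Taxon(), Taxon()
session.add_all([t1, t2, Plant(taxon=t1)])
session.commit()

# Taxon.plants.any() renders a correlated
# "EXISTS (SELECT 1 FROM plant WHERE taxon.id = plant.taxon_id)";
# deeper conditions (e.g. on catalog items) can nest the same way.
taxa_with_plants = session.query(Taxon).filter(Taxon.plants.any()).all()
```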
 
 Le jeudi 8 août 2013 09:02:34 UTC+2, Etienne Rouxel a écrit :
 
 Hello
 
 I tried the use of where(e) but it fires the following error:
 AttributeError: 'Query' object has no attribute 'where'
 
 I tried with filter(e) instead and the query does not fire any error but 
 the
 result is not exactly what is expected. Here is the query:
 
 e = session.query(Plant).\
 join(Article, Plant.articles).\
 join(Catalogitem, Article.catalogitems).\
 filter(Catalogitem.marketingseason == marketingseason).\
 exists()
 taxon = session.query(Taxon).filter(e)
 
 Here is the SQL output:
 
 SELECT botany.taxon.id AS botany_taxon_id
 FROM botany.taxon
 WHERE EXISTS (SELECT 1
 FROM botany.plant
 JOIN product.article ON botany.plant.id = product.article.plant_id
 JOIN catalog.catalogitem ON product.article.plant_id =
 catalog.catalogitem.plant_id AND product.article.article_id =
 catalog.catalogitem.article_id
 WHERE :param_1 = catalog.catalogitem.marketingseason_id
 )
 
 As we can see, the subquery is not correlated to the enclosing query via 
 the relationship Taxon.plant (the AND botany.plant.taxon_id =
 botany.taxon.id is missing in the subquery).
 Is it possible to do that?
 
 I join a new file (example2.py) with the new query.
 
 Le mercredi 7 août 2013 20:56:43 UTC+2, Michael Bayer a écrit :
 
 
 On Aug 7, 2013, at 11:58 AM, Etienne Rouxel rouxel@gmail.com 
 wrote:
 
 Hello

 Long story cut short, I would like to know if it is possible to
 generate a query with SQLAlchemy ORM such as the one below and how.

 SELECT botany.taxon.id AS botany_taxon_id
 FROM botany.taxon
 WHERE EXISTS (SELECT 1
 FROM botany.plant
 JOIN product.article ON botany.plant.id = product.article.plant_id
 JOIN catalog.catalogitem ON product.article.plant_id =
 catalog.catalogitem.plant_id AND product.article.article_id =
 catalog.catalogitem.article_id
 WHERE :param_1 = catalog.catalogitem.marketingseason_id
 AND botany.plant.taxon_id = botany.taxon.id
 )

 Put differently, it is like the regular use of EXISTS
 (http://docs.sqlalchemy.org/en/rel_0_8/orm/tutorial.html#using-exists) but
 with a more complex subquery that contains JOINs.

 Is it possible to do such a query?
 
 
 Query has an exists() method that will turn the SELECT you've
 constructed into an EXISTS:
 http://docs.sqlalchemy.org/en/rel_0_8/orm/query.html?highlight=query.exists#sqlalchemy.orm.query.Query.exists
 So you say e = query(Plant).join(..).join(..).filter(...).exists();
 query(Taxon).where(e).
 
 before we had that method you also could construct the joins using
 orm.join() and then use sqlalchemy.exists() to produce a select, but 
 the
 Query method is a shortcut on that.
 

Re: [sqlalchemy] Rolling back all changes between unit tests

2013-07-24 Thread Ladislav Lenart
Hello.

I am not familiar with the usage of nested transactions, though I am pretty sure
what you want is possible.

I would like to suggest a simpler approach. If you implement all your functions
without commit and move the commit to one place (e.g. to the end of a web
request processing), you can write tests like this:

 *# my_db.py*
 session =  # setup connection & session to postgres

 *# my_app.py*
 from .my_db import session

 def add_stuff():
 session.add(stuff)

 *# test.py*
 from .my_db import session
 from .my_app import add_stuff

 class MyTest(unittest.TestCase):

 def tearDown(self):
  session.rollback()

 def test_app(self):
  add_stuff()
  session.flush()
  # assert against db

Recap:
* Do not use session.commit() in your code. Move it to one place outside the app
logic.
* Call session.flush() in your test whenever you want to assert something
against the database.
* Use rollback() at the end of a unit test.
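The recap above can be condensed into a runnable sketch (in-memory SQLite, toy
Stuff model standing in for the real app code):

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Stuff(Base):
    __tablename__ = 'stuff'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

def add_stuff():
    # app logic: no commit() here
    session.add(Stuff())

def run_test():
    add_stuff()
    session.flush()                # changes visible inside this transaction
    count_in_tx = session.query(Stuff).count()
    session.rollback()             # tearDown: discard everything
    count_after = session.query(Stuff).count()
    return count_in_tx, count_after
```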


HTH,

Ladislav Lenart


On 23.7.2013 18:58, Ib Lundgren wrote:
 Hey all,
 
 I'd like to have a clean DB between tests, one approach that works is to drop
 tables and recreate fixtures. This can be quite slow and I'd rather have
 something more lean. 
 
 If possible I'd like to create a snapshot before each test and rollback to the
 snapshot after each test. From browsing the docs it seems like SQLAlchemy 
 might
 be able to achieve this using nested transactions or subtransactions. FWIW I 
 am
 using Postgres 9.1+.
 
 I had a quick go at this and ended up with something similar to
 
 *# my_db.py*
 session =  # setup connection & session to postgres
 
 *# my_app.py*
 from .my_db import session
 
 def add_stuff():
 session.add(stuff)
 session.commit()
 
 *# test.py*
 from .my_db import session
 from .my_app import add_stuff
 
 class MyTest(unittest.TestCase):
 
 def setUp(self):
  session.begin_nested()
 
 def tearDown(self):
  session.rollback()
 
 def test_app(self):
  add_stuff()
 
 but that does not work. The commit in add_stuff is not reverted by the 
 rollback
 in tearDown. Am I going in the right direction?
 
 Cheers!
 




Re: [sqlalchemy] flask-sqlalchemy pysybase connections

2013-06-17 Thread Ladislav Lenart
Hello.

This will probably be completely off-topic, but we have recently solved a
similar issue. In our case it was cherrypy's fault, because it uses the
'implicit' sessions.locking mode by default. It acquires the web session's lock
at the beginning of a web request's processing and releases it at the end. If
more web requests use the same web session, they will be serialized (e.g. more
tabs in one browser, more AJAX calls on the page). We solved this by using
'explicit' instead and locking the web session manually only when we need to via
web_session.acquire_lock() and web_session.release_lock().
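For illustration, the relevant CherryPy configuration looks roughly like this
(key names follow CherryPy's sessions tool; treat it as a sketch rather than a
drop-in config):

```python
# Switch session locking from the default 'implicit' to 'explicit' so that
# concurrent requests sharing one web session are no longer serialized.
config = {
    '/': {
        'tools.sessions.on': True,
        'tools.sessions.locking': 'explicit',
    }
}

# Inside a handler that actually mutates the web session:
#     cherrypy.session.acquire_lock()
#     try:
#         cherrypy.session['key'] = value
#     finally:
#         cherrypy.session.release_lock()
```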


HTH,

Ladislav Lenart


On 16.6.2013 19:10, Michael Bayer wrote:
 
 On Jun 16, 2013, at 12:55 PM, Kevin S kevinrst...@gmail.com wrote:
 
 I can try to get another dbapi installed later this week and see if that
 works. However, I had to jump through some hoops just to get pysybase working
 in the first place, so I'm not terribly looking forward to trying to tackle
 another one.

 I don't know much about how sessions are managed (I believe flask creates
 scoped-session objects). Could it be something that is just not implemented 
 in
 the pysybase sqlalchemy dialect, but available in the dbapi? I'm not sure
 exactly what to look for.
 
 
 not really.  The DBAPI is a very simple API, it's pretty much mostly 
 execute(),
 rollback(), and commit().   We have a test suite that runs against pysybase as
 well, it certainly has a lot of glitches, not the least of which is that
 pysybase last time I checked could not handle non-ASCII data in any way. 
 
 If pysybase is halting the entire intepreter on a query, there's nothing about
 the DBAPI in the abstract which refers to that.   It sounds like pysybase
 probably grabs the GIL on execute() while waiting for results, which would be
 pretty bad.   Perhaps it has settings, either run time or compile time, which
 can modify its behavior in this regard.   
 
 If it were me, I'd probably seek some way to not produce a web application
 directly against a Sybase database, as the severe lack of driver support will
 probably lead to many unsolvable scaling issues.  I'd look to mirror the 
 Sybase
 data in some other more modern system, either another RDBMS or a more 
 cache-like
 system like Redis.
 
 
 
 
 
 

 On Saturday, June 15, 2013 3:33:36 PM UTC-4, Michael Bayer wrote:


 On Jun 14, 2013, at 3:18 PM, Kevin S kevin...@gmail.com wrote:

  I am running into a problem while developing a flask application using
 flask-sqlalchemy. Now, I'm not even 100% sure my problem is sqlalchemy
 related, but I don't know how to debug this particular issue.
 
  To start, I have a sybase database that I want to see if I can build a
 report generating application for. The reports will all be custom SQL
 queries that are requested by our users, and they will be able to refresh
 throughout the day as they edit and clean up their data (we focus on a 
 lot
 of data curation). We plan to do other things that merit the use of an
 ORM, and we have a lot of complex relationships. Anyway, that's why I'm
 first trying to get this to work in our flask + sqlalchemy stack. And it
 does work in fact.
 
  Now the problem is, my current application is not scalable, because any
 time I do a long query (say several seconds or more), flask will not
 accept any additional requests until that query finishes. (Note: I am
 running the application through cherrypy). I have tested various things 
 to
 ensure that the application can handle multiple incoming requests. If I
 have it just loop through a big file, or even just sleep instead of doing
 a query, then I can bang away at it all I want from other browser 
 windows,
 and it's fine.
 
  We also have a copy of our database that is in postgres (this is only
 for testing, and can't be a final solution, because it gets updated only
 once a week). So, I've found that if I hook the application up to the
 postgres version, I don't have this problem. I can initiate a long query
 in one browser tab, and any other page requests in subsequent windows 
 come
 back fine. The problem is only when using Sybase. We have other
 applications that are not flask or sqlalchemy, and they don't seem to 
 have
 this limitation. As far as I can tell, I've narrowed it down to as soon 
 as
 it executes a query. The entire app will wait until that query finishes,
 not allowing any new connections. I have log statements in my request
 handlers, and even in my before_request method, and those will not print 
 a
 thing until the moment that first query returns.
 
  Additional info: I am using Sybase 15 with the pysybase driver.
  I initiate the raw SQL queries like this:
 
  con = db.session.connection()
  results = con.execute(query)
 
  But I also see the same

Re: [sqlalchemy] [ANN] Modified WindowedRangeQuery recipe

2013-06-17 Thread Ladislav Lenart
Hello.

This is the current version of our windowed query:


from sqlalchemy.sql.expression import distinct


def windowed_query(query, column, options_or_callback, window_size=100):
    """Perform (a correct) yield_per() operation. See
    WindowedQuery.yield_per() for more.

    EXAMPLE:
        q = session.query(Foo).filter(Foo.name.like(u'%foo%'))
        wq = windowed_query(q, Foo.id, [subqueryload(Foo.bars)])
        for each_foo in wq:
            print each_foo.name
            for each_bar in each_foo.bars:
                print each_bar
    """
    return WindowedQuery(
        query,
        column,
        options_or_callback
    ).yield_per(window_size)


class WindowedQuery(object):
    """Perform (a correct) yield_per() operation."""

    def __init__(self, query, column, options_or_callback):
        """
        The query MUST have NO options(...) and NO order_by(...). It MUST
        contain all necessary join() and filter() to limit the result set as
        desired.

        The column is the id column of the main result ORM class. It is used
        to divide the results into windows of equal size.

        The options_or_callback can be a list of Query.options(...) such as
        subqueryload(). If so, the following query is created to fetch data
        of each window:

            q = session.query(self._column.class_).options(*self._options)
            q = q.filter(self._column.in_(each_window))

        The options_or_callback can be a one-argument function responsible
        for complete processing of one window. Its only argument is the list
        of ids of the window. It MUST return an iterable over results. It is
        called once for each window.
        """
        self._query = query
        self._column = column
        self._session = query.session
        if isinstance(options_or_callback, list):
            self._options = options_or_callback
            self._callback = None
        else:
            self._options = None
            self._callback = options_or_callback

    def yield_per(self, window_size):
        """Process results in chunks (windows).

        Steps:
        * Obtain ids of ALL result rows via slightly modified self._query.
        * Divide ids to chunks of equal size and perform ONE query for EACH
          chunk to fetch the data.

        A chunk is determined by the test q.filter(self._column.in_(chunk)).
        This is the only way that works in presence of the read-committed
        isolation level.
        """
        if self._options is not None:
            return self._yield_per_options(window_size)
        if self._callback is not None:
            return self._yield_per_callback(window_size)

    def _yield_per_options(self, window_size):
        """Deduce data query from self._column and self._options."""
        q = self._session.query(self._column.class_).options(*self._options)
        for each_window in self._windows(window_size):
            for each_result in q.filter(self._column.in_(each_window)):
                yield each_result

    def _yield_per_callback(self, window_size):
        """Use a callback function responsible for obtaining the results:

            def callback(win):
                # Equivalent to the use of self._options.
                q = session.query(Foo).options(...)
                q = q.filter(Foo.id.in_(win))
                return q
        """
        for each_window in self._windows(window_size):
            for each_result in self._callback(each_window):
                yield each_result

    def _windows(self, window_size):
        win = []
        win_size = 0
        for each in self._q_column():
            if win_size < window_size:
                win.append(each)
                win_size += 1
            if win_size == window_size:
                yield win
                win = []
                win_size = 0
        if win_size > 0:
            yield win

    def _q_column(self):
        """distinct() ensures that each id is returned at most once despite
        a possible multiplying effect of a join().
        """
        return self._query.with_entities(distinct(self._column))


We have been using it in production for a few days now. Perhaps someone will
find it useful too.
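The windowing step itself can be illustrated in isolation; this is a
pure-Python rendering of what _windows() does, with the last window allowed to
be short:

```python
def windows(ids, window_size):
    # Accumulate ids into fixed-size chunks; each full chunk is yielded
    # immediately, and a final short chunk is yielded at the end if any
    # ids remain.
    win = []
    for each in ids:
        win.append(each)
        if len(win) == window_size:
            yield win
            win = []
    if win:
        yield win
```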


Ladislav Lenart


On 7.6.2013 20:03, Ladislav Lenart wrote:
 Hello.
 
 Resending because the original e-mail does not seem to make it to the mailing
 list. Apologise for any duplicates. Here we go...
 
 
 Hello.
 
 I modified the recipe at
 
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery
 
 to better suit my needs. Perhaps someone else will find this useful:
 
 
 
  CODE BEGIN 
 
 
 from sqlalchemy.orm import subqueryload
 from sqlalchemy.sql.expression import distinct
 
 def windowed_query(filter_query, data_query, column, window_size):
 Perform (a correct) yield_per() operation. See 
 WindowedQuery.yield_per()
 for more.
 
 EXAMPLE:
 gen = windowed_query

Re: [sqlalchemy] SA0.7.9 - poor query performance (full scan instead of index scan because of joined table inheritance)

2013-06-17 Thread Ladislav Lenart
Hello.

Here is the fully self-contained regression of the issue, including the
workaround for SA 0.7.9. Thank you again, because I wouldn't figure it out
without your help (the select_from part). I haven't tried it on SA 0.9.

If you have any questions, please ask.

HTH,

Ladislav Lenart


On 14.6.2013 11:18, Ladislav Lenart wrote:
 Hello.
 
 
 On 13.6.2013 18:44, Michael Bayer wrote:

 On Jun 13, 2013, at 11:03 AM, Ladislav Lenart lenart...@volny.cz wrote:

 Unfortunately migrating to SA 0.9 is not an option for me at the moment due 
 to
 severe time constraints.

 I was pretty sure you'd say that, though I'm really looking to verify that 
 my fixes are going to hold up under real world usage.   The issues you're 
 having are real issues, and they've been fixed.
 
 I will make a 1:1 self-containted test case for this issue during the next 
 week,
 I promise!
 
 
 Could you please help me write SA query for 0.7.9 that uses index scan and 
 also
 loads all the necessary relations? It must be possible with a proper use of
 from_statement(), contains_eager() and/or other SA features. It is just 
 that toy
 examples in the documentation don't help me much with this complex beast.

 you use the SQL expression language in conjunction with .join()/outerjoin(), 
 pass to query.select_from(), then use contains_eager():

 j = 
 Foo.__table__.outerjoin(Bar.__table__.outerjoin(BarA.__table__).outerjoin(Data.__table__))

 q = s.query(Foo).\
 select_from(j).\
 filter(Foo.id.in_([1, 2, 3])).\
 options(
 contains_eager(Foo.bar),
 contains_eager(Foo.bar.of_type(BarA), BarA.data)
 )
 
 Thank you very much! I am going to try it :-)
 
 
 Ladislav Lenart



# coding=utf-8

"""
Standalone regression of the full table-scan problem in combination with joined
table inheritance. Applies to SA 0.7.9 at least.

See: Issue, Workaround, main_issue, main_workaround and main_fill.

The rest is support code and/or ORM definitions.
"""

from sqlalchemy import Column, ForeignKey, UniqueConstraint, CheckConstraint,\
Integer, Unicode, String, Date
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.engine import create_engine
from sqlalchemy.sql.expression import distinct
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.orm import relationship, subqueryload_all, subqueryload,\
joinedload_all, contains_eager
from sqlalchemy.orm.util import aliased


Base = declarative_base()
session = None


class Contact(Base):
__tablename__ = 'Contact'
id = Column(Integer(), primary_key=True)
type = Column(Integer(), nullable=False) # 1 - personal, 2 - corporate
parent_id = Column(Integer(), ForeignKey('Partner.id', ondelete='CASCADE'), nullable=False)
parent = relationship('Partner', back_populates='contacts', primaryjoin='Contact.parent_id == Partner.id', remote_side='Partner.id', uselist=False)
contact_tags = relationship('ContactTag', back_populates='contact', primaryjoin='Contact.id == ContactTag.contact_id', cascade='all, delete-orphan', passive_updates=True, passive_deletes=True)
phones = relationship('Phone', back_populates='contact', primaryjoin='Contact.id == Phone.contact_id', cascade='all, delete-orphan', passive_updates=True, passive_deletes=True)
emails = relationship('Email', back_populates='contact', primaryjoin='Contact.id == Email.contact_id', cascade='all, delete-orphan', passive_updates=True, passive_deletes=True)
__mapper_args__ = {'polymorphic_on': type}
__table_args__ = (CheckConstraint(u'type in (1, 2)'),)

class ContactTag(Base):
__tablename__ = 'ContactTag'
id = Column(Integer(), primary_key=True)
tag_id = Column(Integer(), ForeignKey('Tag.id', ondelete='CASCADE'), nullable=False)
tag = relationship('Tag', primaryjoin='ContactTag.tag_id == Tag.id', remote_side='Tag.id', uselist=False)
contact_id = Column(Integer(), ForeignKey('Contact.id', ondelete='CASCADE'), nullable=False)
contact = relationship('Contact', back_populates='contact_tags', primaryjoin='ContactTag.contact_id == Contact.id', remote_side='Contact.id', uselist=False)
__table_args__ = (UniqueConstraint('contact_id', 'tag_id'),)

class Tag(Base):
__tablename__ = 'Tag'
id = Column(Integer(), primary_key=True)
partner_id = Column(Integer(), ForeignKey('Partner.id', ondelete='CASCADE'), nullable=False, index=True)
partner = relationship('Partner', back_populates='tags', primaryjoin='Tag.partner_id == Partner.id', remote_side='Partner.id', uselist=False)
label

Re: [sqlalchemy] SA0.7.9 - poor query performance (full scan instead of index scan because of joined table inheritance)

2013-06-17 Thread Ladislav Lenart
Hello.

I ended up with the following query:

    @classmethod
    def _find_contacts_fetch_window(cls, contact_cls, win):
        """Special data-fetching query for contacts and all their related
        info including tags, partner, client,...

        NOTE: We build the FROM part entirely by hand, because SA generates
        bad SQL for postgres. It does a FULL SCAN of client and
        personal_client / corporate_client even though it reads at most
        window_size rows from them. All this because SA inheritance creates
        a subselect which leads to the full scan.
        """
        # win .. list of ids
        # contact_cls .. PersonalContact / CorporateContact
        client_cls = contact_cls.client_cls()  # PersonalClient / CorporateClient
        data_cls = client_cls.data_cls()  # PersonalData / CorporateData

        # We need TABLEs to build the FROM part by hand.
        # We reference PersonalData/CorporateData and Address twice, hence
        # we need to alias them.
        # We also need their aliased ORM classes for contains_eager() to work.
        contact_table = Contact.__table__
        contact_subtable = contact_cls.__table__
        client_table = Client.__table__
        personal_client_table = client_cls.__table__
        partner_table = Partner.__table__
        partner_data_table = PersonalData.__table__.alias(name='partner_data')
        partner_address_table = Address.__table__.alias(name='partner_address')
        client_data_table = data_cls.__table__.alias(name='client_data')
        client_address_table = Address.__table__.alias(name='client_address')
        partner_data = aliased(PersonalData, partner_data_table)
        partner_address = aliased(Address, partner_address_table)
        client_data = aliased(data_cls, client_data_table)
        client_address = aliased(Address, client_address_table)
        select_from = contact_table.join(
            contact_subtable,
            contact_table.c.id == contact_subtable.c.id
        ).outerjoin(
            client_table,
            contact_subtable.c.client_id == client_table.c.id
        ).outerjoin(
            personal_client_table,
            client_table.c.id == personal_client_table.c.id
        ).outerjoin(
            client_data_table,
            personal_client_table.c.data_id == client_data_table.c.id
        ).outerjoin(
            client_address_table,
            client_data_table.c.address_id == client_address_table.c.id
        )
        options = [
            subqueryload_all(contact_cls.contact_tags, ContactTag.tag),
            subqueryload(contact_cls.phones),
            subqueryload(contact_cls.emails),
            contains_eager(contact_cls.client),
            contains_eager(contact_cls.client, client_cls.data,
                alias=client_data),
            contains_eager(contact_cls.client, client_cls.data,
                client_data.address, alias=client_address),
        ]
        if contact_cls is PersonalContact:
            select_from = select_from.outerjoin(
                partner_table,
                contact_subtable.c.partner_id == partner_table.c.id
            ).outerjoin(
                partner_data_table,
                partner_table.c.personal_data_id == partner_data_table.c.id
            ).outerjoin(
                partner_address_table,
                partner_data_table.c.address_id == partner_address_table.c.id
            )
            options.extend([
                contains_eager(contact_cls.partner),
                contains_eager(contact_cls.partner, Partner.personal_data,
                    alias=partner_data),
                contains_eager(contact_cls.partner, Partner.personal_data,
                    partner_data.address, alias=partner_address),
            ])
        q = session.query(contact_cls).select_from(select_from)
        q = q.filter(contact_cls.id.in_(win))
        q = q.options(*options)
        return q


It works great though it took me quite a while to get right, especially the
aliasing between tables and classes. Also, I wouldn't figure it out without your
help (I mean the part with select_from).

Thank you again,

Ladislav Lenart


On 13.6.2013 18:44, Michael Bayer wrote:
 
 On Jun 13, 2013, at 11:03 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Unfortunately migrating to SA 0.9 is not an option for me at the moment due 
 to
 severe time constraints.
 
 I was pretty sure you'd say that, though I'm really looking to verify that my 
 fixes are going to hold up under real world usage.   The issues you're having 
 are real issues, and they've been fixed.
 

 Could you please help me write SA query for 0.7.9 that uses index scan and 
 also
 loads all the necessary relations? It must be possible with a proper use of
 from_statement(), contains_eager() and/or other SA features. It is just that 
 toy
 examples in the documentation don't help me much with this complex beast.
 
 you use the SQL expression language in conjunction with .join()/outerjoin(), 
 pass to query.select_from(), then use contains_eager

Re: [sqlalchemy] SA0.7.9 - poor query performance (full scan instead of index scan because of joined table inheritance)

2013-06-17 Thread Ladislav Lenart
Hello.

I ended up with the following query:

@classmethod
def _find_contacts_fetch_window(cls, contact_cls, win):
    """Special data-fetching query for contacts and all their related info
    including tags, partner, client, ...

    NOTE: We build the FROM part entirely by hand, because SA generates bad
    SQL for postgres. It does a FULL SCAN of client and personal_client /
    corporate_client even though it reads at most window_size rows from
    them. All this because SA inheritance creates a subselect which leads
    to the full scan.
    """
    # win .. list of ids
    # contact_cls .. PersonalContact / CorporateContact
    client_cls = contact_cls.client_cls()  # PersonalClient / CorporateClient
    data_cls = client_cls.data_cls()  # PersonalData / CorporateData

    # We need TABLEs to build the FROM part by hand.
    # We reference PersonalData/CorporateData and Address twice, hence we
    # need to alias them.
    # We also need their aliased ORM classes for contains_eager() to work.
    contact_table = Contact.__table__
    contact_subtable = contact_cls.__table__
    client_table = Client.__table__
    personal_client_table = client_cls.__table__
    partner_table = Partner.__table__
    partner_data_table = PersonalData.__table__.alias(name='partner_data')
    partner_address_table = Address.__table__.alias(name='partner_address')
    client_data_table = data_cls.__table__.alias(name='client_data')
    client_address_table = Address.__table__.alias(name='client_address')
    partner_data = aliased(PersonalData, partner_data_table)
    partner_address = aliased(Address, partner_address_table)
    client_data = aliased(data_cls, client_data_table)
    client_address = aliased(Address, client_address_table)
    select_from = contact_table.join(
        contact_subtable,
        contact_table.c.id == contact_subtable.c.id
    ).outerjoin(
        client_table,
        contact_subtable.c.client_id == client_table.c.id
    ).outerjoin(
        personal_client_table,
        client_table.c.id == personal_client_table.c.id
    ).outerjoin(
        client_data_table,
        personal_client_table.c.data_id == client_data_table.c.id
    ).outerjoin(
        client_address_table,
        client_data_table.c.address_id == client_address_table.c.id
    )
    options = [
        subqueryload_all(contact_cls.contact_tags, ContactTag.tag),
        subqueryload(contact_cls.phones),
        subqueryload(contact_cls.emails),
        contains_eager(contact_cls.client),
        contains_eager(contact_cls.client, client_cls.data,
                       alias=client_data),
        contains_eager(contact_cls.client, client_cls.data,
                       client_data.address, alias=client_address),
    ]
    if contact_cls is PersonalContact:
        select_from = select_from.outerjoin(
            partner_table,
            contact_subtable.c.partner_id == partner_table.c.id
        ).outerjoin(
            partner_data_table,
            partner_table.c.personal_data_id == partner_data_table.c.id
        ).outerjoin(
            partner_address_table,
            partner_data_table.c.address_id == partner_address_table.c.id
        )
        options.extend([
            contains_eager(contact_cls.partner),
            contains_eager(contact_cls.partner, Partner.personal_data,
                           alias=partner_data),
            contains_eager(contact_cls.partner, Partner.personal_data,
                           partner_data.address, alias=partner_address),
        ])
    q = session.query(contact_cls).select_from(select_from)
    q = q.filter(contact_cls.id.in_(win))
    q = q.options(*options)
    return q


It works great though it took me quite a while to get right, especially the
aliasing between tables and classes. Also, I wouldn't figure it out without your
help (I mean the part with select_from).
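The aliasing trick used above -- the same table joined twice, with contains_eager() told which alias feeds which relationship -- can be shown in miniature. This is a sketch against the modern (1.4+) SQLAlchemy API, where `of_type(aliased_entity)` replaces the older `alias=` keyword; the Person/Address model is invented for illustration:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (Session, aliased, contains_eager,
                            declarative_base, relationship)

Base = declarative_base()

class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    city = Column(String)

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    home_id = Column(Integer, ForeignKey("address.id"))
    work_id = Column(Integer, ForeignKey("address.id"))
    home = relationship(Address, foreign_keys=[home_id])
    work = relationship(Address, foreign_keys=[work_id])

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(Person(home=Address(city="Prague"), work=Address(city="Brno")))
    s.commit()

    # address is referenced twice, so each join needs its own alias, and
    # contains_eager() must know which alias populates which relationship.
    home_a = aliased(Address)
    work_a = aliased(Address)
    q = (
        s.query(Person)
        .outerjoin(home_a, Person.home)
        .outerjoin(work_a, Person.work)
        .options(
            contains_eager(Person.home.of_type(home_a)),
            contains_eager(Person.work.of_type(work_a)),
        )
    )
    p = q.one()
    print(p.home.city, p.work.city)
```

Both relationships are populated from the one SELECT with two flat LEFT OUTER JOINs, with no lazy loads afterwards.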

Thank you again,

Ladislav Lenart


On 13.6.2013 18:44, Michael Bayer wrote:
 
 On Jun 13, 2013, at 11:03 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Unfortunately migrating to SA 0.9 is not an option for me at the moment due to
 severe time constraints.
 
 I was pretty sure you'd say that, though I'm really looking to verify that my 
 fixes are going to hold up under real world usage.   The issues you're having 
 are real issues, and they've been fixed.
 

 Could you please help me write SA query for 0.7.9 that uses index scan and also
 loads all the necessary relations? It must be possible with a proper use of
 from_statement(), contains_eager() and/or other SA features. It is just that toy
 examples in the documentation don't help me much with this complex beast.
 
 you use the SQL expression language in conjunction with .join()/outerjoin(), 
 pass to query.select_from(), then use contains_eager

Re: [sqlalchemy] SA0.7.9 - poor query performance (full scan instead of index scan because of joined table inheritance)

2013-06-14 Thread Ladislav Lenart
Hello.


On 13.6.2013 18:44, Michael Bayer wrote:
 
 On Jun 13, 2013, at 11:03 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Unfortunately migrating to SA 0.9 is not an option for me at the moment due 
 to
 severe time constraints.
 
 I was pretty sure you'd say that, though I'm really looking to verify that my 
 fixes are going to hold up under real world usage.   The issues you're having 
 are real issues, and they've been fixed.

I will make a 1:1 self-contained test case for this issue during the next week,
I promise!


 Could you please help me write SA query for 0.7.9 that uses index scan and 
 also
 loads all the necessary relations? It must be possible with a proper use of
 from_statement(), contains_eager() and/or other SA features. It is just that 
 toy
 examples in the documentation don't help me much with this complex beast.
 
 you use the SQL expression language in conjunction with .join()/outerjoin(), 
 pass to query.select_from(), then use contains_eager():
 
 j = Foo.__table__.outerjoin(
     Bar.__table__.outerjoin(BarA.__table__).outerjoin(Data.__table__))
 
 q = s.query(Foo).\
     select_from(j).\
     filter(Foo.id.in_([1, 2, 3])).\
     options(
         contains_eager(Foo.bar),
         contains_eager(Foo.bar.of_type(BarA), BarA.data)
     )

Thank you very much! I am going to try it :-)


Ladislav Lenart

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] SA0.7.9 - poor query performance (full scan instead of index scan because of joined table inheritance)

2013-06-13 Thread Ladislav Lenart
Hello.

I have a query that does a full scan of an inherited table with more than a
million rows even though I need only 100 of them (on postgres 9.1). This is a
real bummer! Please help me rewrite the SA query or instruct postgres not to do
this stupidity.


I have the following setup (only the interesting relations):

Contact
    contact_tags -> ContactTag (collection)
    phones -> Phone (collection)
    emails -> Email (collection)

ContactTag
    tag -> Tag (cannot be NULL)

PersonalContact (Contact subclass)
    partner -> Partner (can be NULL)
    client -> PersonalClient (can be NULL)

CorporateContact (Contact subclass)
    client -> CorporateClient (can be NULL)

Client

PersonalClient (Client subclass)
    data -> PersonalData (cannot be NULL)

CorporateClient (Client subclass)
    data -> CorporateData (cannot be NULL)


I have the following query that loads data of one window:

# window (input argument) is a list of id values.
q = session.query(PersonalContact).filter(PersonalContact.id.in_(window))
q = q.options(
    subqueryload_all(PersonalContact.contact_tags, ContactTag.tag),
    subqueryload(PersonalContact.phones),
    subqueryload(PersonalContact.emails),
    joinedload_all(
        PersonalContact.partner,
        Partner.personal_data,
        PersonalData.address,
    ),
    joinedload_all(
        PersonalContact.client,
        PersonalClient.data,
        PersonalData.address,
    ),
)


(Note that I have a similar query for CorporateContact. In fact, I generate them
both in the same method.)

It produces SQL like this (the problematic part is emphasized):

SELECT *
FROM
contact
JOIN personal_contact ON contact.id = personal_contact.id
-- ***
LEFT OUTER JOIN (
SELECT *
FROM
client
JOIN personal_client ON client.id = personal_client.id
) AS anon_1 ON personal_contact.client_id = anon_1.personal_client_id
-- ***
LEFT OUTER JOIN partner AS partner_1
ON personal_contact.partner_id = partner_1.id
LEFT OUTER JOIN personal_data AS personal_data_1
ON partner_1.personal_data_id = personal_data_1.id
LEFT OUTER JOIN address AS address_1
ON personal_data_1.address_id = address_1.id
LEFT OUTER JOIN personal_data AS personal_data_2
ON anon_1.personal_client_data_id = personal_data_2.id
LEFT OUTER JOIN address AS address_2
ON personal_data_2.address_id = address_2.id
WHERE personal_contact.id IN (...)


The inner select directly corresponds to joinedload of PersonalContact.client, a
PersonalClient instance (and a Client subclass).

Postgres does a full scan of the client and personal_client tables even though I
need at most 100 rows from each one.

However, if I rewrite the problematic part by hand like this:

LEFT OUTER JOIN client
ON personal_contact.client_id = client.id
 LEFT OUTER JOIN personal_client
ON client.id = personal_client.id

it works like a charm.
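The flat-join form above is exactly what the explicit select_from()/contains_eager() approach discussed later in this thread produces. A self-contained sketch on the simplified Foo/Bar model (modern SQLAlchemy 1.4+ API rather than 0.7.9; BarA carries its value column directly to keep the example short):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (Session, contains_eager, declarative_base,
                            relationship)

Base = declarative_base()

class Bar(Base):
    __tablename__ = "bar"
    id = Column(Integer, primary_key=True)
    type = Column(String)
    __mapper_args__ = {"polymorphic_on": type, "polymorphic_identity": "bar"}

class BarA(Bar):
    # Joined-table inheritance: bar_a JOINs back to bar on id.
    __tablename__ = "bar_a"
    id = Column(Integer, ForeignKey("bar.id"), primary_key=True)
    value = Column(String)
    __mapper_args__ = {"polymorphic_identity": "bar_a"}

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    bar_id = Column(Integer, ForeignKey("bar.id"))
    bar = relationship(Bar)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(Foo(bar=BarA(value="hello")))
    s.commit()

    # Build flat joins by hand: foo LEFT JOIN bar LEFT JOIN bar_a,
    # with no nested subselect for the inheritance hierarchy.
    j = Foo.__table__.outerjoin(Bar.__table__).outerjoin(BarA.__table__)
    q = (
        s.query(Foo)
        .select_from(j)
        .options(contains_eager(Foo.bar.of_type(BarA)))
    )
    foo = q.one()
    print(foo.bar.value)
```

The emitted SQL joins the plain tables directly, so the planner can use the primary-key indexes instead of materializing the `(client JOIN personal_client)` subselect.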

Unfortunately I don't know how to write such a query in SA. I am really stuck so
any help is much appreciated.


Thank you,

Ladislav Lenart






Re: [sqlalchemy] SA0.7.9 - poor query performance (full scan instead of index scan because of joined table inheritance)

2013-06-13 Thread Ladislav Lenart
Unfortunately migrating to SA 0.9 is not an option for me at the moment due to
severe time constraints.

Could you please help me write an SA query for 0.7.9 that uses an index scan and
also loads all the necessary relations? It must be possible with a proper use of
from_statement(), contains_eager() and/or other SA features. It is just that the
toy examples in the documentation don't help me much with this complex beast.

Here is a simplified version of my problem:

Foo
    bar -> Bar (can be NULL)

Bar

BarA (subclass of Bar)
    data -> Data (cannot be NULL)

Data
    value (string)

I need to rewrite this query:

q = session.query(Foo).filter(Foo.id.in_(...))
q = q.options(
joinedload_all(Foo.bar, BarA.data)
)

in such a way that it does NOT perform any full scan and also populates
Foo.bar.data.value of each returned Foo. The new query can return whatever it
pleases as long as it returns Foo instances with properly populated relations.

Please help me write it (or tell me that it is not possible in SA 0.7.9, though
I highly doubt that).


Thank you,

Ladislav Lenart


On 13.6.2013 15:51, Michael Bayer wrote:
 Please try out 0.9 from the git master which fixes the issue of the nested 
 SELECT on the right side of a join. 
 
 Sent from my iPhone
 
 On Jun 13, 2013, at 9:18 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 I have a query that does a full scan of an inherited table with more than
 million rows even though I need only 100 of them (on postgres 9.1). This is a
 real bummer! Please help me rewrite the SA query or instruct postgres to not 
 do
 this stupidity.


 I have the following setup (only the interesting relations):

Contact
    contact_tags -> ContactTag (collection)
    phones -> Phone (collection)
    emails -> Email (collection)

ContactTag
    tag -> Tag (cannot be NULL)

PersonalContact (Contact subclass)
    partner -> Partner (can be NULL)
    client -> PersonalClient (can be NULL)

CorporateContact (Contact subclass)
    client -> CorporateClient (can be NULL)

Client

PersonalClient (Client subclass)
    data -> PersonalData (cannot be NULL)

CorporateClient (Client subclass)
    data -> CorporateData (cannot be NULL)


 I have the following query that loads data of one window:

# window (input argument) is a list of id values.
q = session.query(PersonalContact).filter(PersonalContact.id.in_(window))
q = q.options(
subqueryload_all(PersonalContact.contact_tags, ContactTag.tag),
subqueryload(PersonalContact.phones),
subqueryload(PersonalContact.emails),
joinedload_all(
PersonalContact.partner,
Partner.personal_data,
PersonalData.address,
   ),
joinedload_all(
PersonalContact.client,
PersonalClient.data,
PersonalData.address
),
)


 (Note that I have a similar query for CorporateContact. In fact, I generate them
 both in the same method.)

 It produces SQL like this (the problematic part is emphasized):

SELECT *
FROM
contact
JOIN personal_contact ON contact.id = personal_contact.id
-- ***
LEFT OUTER JOIN (
SELECT *
FROM
client
JOIN personal_client ON client.id = personal_client.id
) AS anon_1 ON personal_contact.client_id = anon_1.personal_client_id
-- ***
LEFT OUTER JOIN partner AS partner_1
ON personal_contact.partner_id = partner_1.id
LEFT OUTER JOIN personal_data AS personal_data_1
ON partner_1.personal_data_id = personal_data_1.id
LEFT OUTER JOIN address AS address_1
ON personal_data_1.address_id = address_1.id
LEFT OUTER JOIN personal_data AS personal_data_2
ON anon_1.personal_client_data_id = personal_data_2.id
LEFT OUTER JOIN address AS address_2
ON personal_data_2.address_id = address_2.id
WHERE personal_contact.id IN (...)


 The inner select directly corresponds to joinedload of 
 PersonalContact.client, a
 PersonalClient instance (and a Client subclass).

 The postgres does a full scan of tables Client and PersonalClient even 
 though I
 will need at most 100 rows from each one.

 However, if I rewrite the problematic part by hand like this:

LEFT OUTER JOIN client
ON personal_contact.client_id = client.id
 LEFT OUTER JOIN personal_client
ON client.id = personal_client.id

 it works like a charm.

 Unfortunately I don't know how to write such a query in SA. I am really 
 stuck so
 any help is much appreciated.


 Thank you,

 Ladislav Lenart



Re: [sqlalchemy] SA0.7.9 - poor query performance (full scan instead of index scan because of joined table inheritance)

2013-06-13 Thread Ladislav Lenart
Hello.

Just a minor correction:

The new query can return whatever it pleases as long as it *also*
returns Foo instances with properly populated relations.

The rest of the e-mail is the same.


Please help me,

Ladislav Lenart


On 13.6.2013 17:03, Ladislav Lenart wrote:
 Unfortunately migrating to SA 0.9 is not an option for me at the moment due to
 severe time constraints.
 
 Could you please help me write SA query for 0.7.9 that uses index scan and 
 also
 loads all the necessary relations? It must be possible with a proper use of
 from_statement(), contains_eager() and/or other SA features. It is just that 
 toy
 examples in the documentation don't help me much with this complex beast.
 
 Here is a simplified version of my problem:
 
 Foo
 bar (can be NULL)
 
 Bar
 
 BarA (subclass of Bar)
 data - Data (cannot be NULL)
 
 Data
 value (string)
 
 I need to rewrite this query:
 
 q = session.query(Foo).filter(Foo.id.in_(...))
 q = q.options(
 joinedload_all(Foo.bar, BarA.data)
 )
 
 in such a way that it does NOT perform any full scan and also populates
 Foo.bar.data.value of each returned Foo. The new query can return whatever it
 pleases as long as it returns Foo instances with properly populated relations.
 
 Please help me write it (or tell me that it is not possible in SA 0.7.9, 
 though
 I highly doubt that).
 
 
 Thank you,
 
 Ladislav Lenart
 
 
 On 13.6.2013 15:51, Michael Bayer wrote:
 Please try out 0.9 from the git master which fixes the issue of the nested 
 SELECT on the right side of a join. 

 Sent from my iPhone

 On Jun 13, 2013, at 9:18 AM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 I have a query that does a full scan of an inherited table with more than
 million rows even though I need only 100 of them (on postgres 9.1). This is 
 a
 real bummer! Please help me rewrite the SA query or instruct postgres to 
 not do
 this stupidity.


 I have the following setup (only the interesting relations):

Contact
contact_tags - ContactTag (collection)
phones - Phone (collection)
emails - Email (collection)

ContactTag
tag - Tag (cannot be NULL)

PersonalContact (Contact subclass)
partner - Partner (can be NULL)
client - PersonalClient (can be NULL)

CorporateContact (Contact subclass)
client - CorporateClient (can be NULL)

Client

PersonalClient (Client subclass)
data - PersonalData (cannot be NULL)

CorporateClient (Client subclass)
data - CorporateData (cannot be NULL)


 I have the following query that loads data of one window:

# window (input argument) is a list of id values.
q = session.query(PersonalContact).filter(PersonalContact.id.in_(window))
q = q.options(
subqueryload_all(PersonalContact.contact_tags, ContactTag.tag),
subqueryload(PersonalContact.phones),
subqueryload(PersonalContact.emails),
joinedload_all(
PersonalContact.partner,
Partner.personal_data,
PersonalData.address,
   ),
joinedload_all(
PersonalContact.client,
PersonalClient.data,
PersonalData.address
),
)


 (Note that I have similar query for CorporateContact. Infact, I generate 
 them
 both in the same method.)

 It produces SQL like this (the problematic part is emphasized):

SELECT *
FROM
contact
JOIN personal_contact ON contact.id = personal_contact.id
-- ***
LEFT OUTER JOIN (
SELECT *
FROM
client
JOIN personal_client ON client.id = personal_client.id
) AS anon_1 ON personal_contact.client_id = anon_1.personal_client_id
-- ***
LEFT OUTER JOIN partner AS partner_1
ON personal_contact.partner_id = partner_1.id
LEFT OUTER JOIN personal_data AS personal_data_1
ON partner_1.personal_data_id = personal_data_1.id
LEFT OUTER JOIN address AS address_1
ON personal_data_1.address_id = address_1.id
LEFT OUTER JOIN personal_data AS personal_data_2
ON anon_1.personal_client_data_id = personal_data_2.id
LEFT OUTER JOIN address AS address_2
ON personal_data_2.address_id = address_2.id
WHERE personal_contact.id IN (...)


 The inner select directly corresponds to joinedload of 
 PersonalContact.client, a
 PersonalClient instance (and a Client subclass).

 The postgres does a full scan of tables Client and PersonalClient even 
 though I
 will need at most 100 rows from each one.

 However, if I rewrite the problematic part by hand like this:

LEFT OUTER JOIN client
ON personal_contact.client_id = client.id
 LEFT OUTER JOIN personal_client
ON client.id = personal_client.id

 it works like a charm.

 Unfortunately I don't know how to write

Re: [sqlalchemy] [Q][0.7.9] All scoped sessions use ONE DB connection

2013-06-12 Thread Ladislav Lenart
Correction. It was cherrypy's fault. It uses 'implicit' locking of web sessions
by default ('tools.sessions.locking'). We changed it to 'explicit' and call
web_session.acquire_lock() / web_session.release_lock() manually when we need
to. This works like a charm!

HTH,

Ladislav Lenart


On 11.6.2013 10:58, Ladislav Lenart wrote:
 Hello.
 
 Problem solved! It was a false alarm. My Firefox simply serializes all requests
 to one domain. (BTW Chrome does that too.) This is strange however, because I
 have more connections enabled in Firefox about:config (network.http.* and
 friends). Has any of you solved this?
 
 
 Thank you and sorry for the noise,
 
 Ladislav Lenart
 
 
 On 11.6.2013 09:08, Ladislav Lenart wrote:
 Hello.


 On 10.6.2013 18:41, Michael Bayer wrote:

 On Jun 10, 2013, at 11:46 AM, Ladislav Lenart lenart...@volny.cz wrote:


 I have no idea what is wrong. Please help me diagnose this! Note that OS 
 monitor
 shows several cherrypy threads, each gets its share, so the cherrypy setup 
 seems
 to be OK.
 class _SessionContext(object):
def __init__(self):
self._level = 0

def __enter__(self):
if self._level == 0:
self._session = _Session()
self._level += 1
return self._session

def __exit__(self, type_, value, traceback):
self._level -= 1
if self._level == 0:
self._session.close()
self._session = None

 _thread_local = local()

 def get_session():
ctx = getattr(_thread_local, 'session_context', None)
if ctx:
return ctx
ctx = _SessionContext()
_thread_local.session_context = ctx
return ctx


 what's all this thread local stuff for?   The problem is likely there, a 
 scoped_session() already uses a threading.local.   A very simple usage 
 pattern should work fine.   Just put a session.remove() type of thing to 
 occur at the end of each request.   guidelines are here: 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#using-thread-local-scope-with-web-applications
  

 Well, this is our way to call Session.remove() at the end of the request 
 processing.

 get_session() creates _SessionContext instance (a python context manager)
 responsible for closing the session at the end. We use scoped_session() and 
 we
 wanted to support nesting like this:

 with get_session():
 with get_session():
 ...

 Hence we create only one _SessionContext per thread. It increments its 
 internal
 counter whenever its __enter__() is called and decrements it whenever its
 __exit__() is called. It closes the session when the counter reaches zero.

 The decorator @session.use wraps the function in the call

 with get_session():

 We mark every method of our handler that needs to access a database with this
 decorator. It is always the top level decorator, i.e.:

 @session.use
 @other_decorator
 def foo():
 pass

 I will investigate further and let you know.


 Thank you,

 Ladislav Lenart





Re: [sqlalchemy] [Q][0.7.9] All scoped sessions use ONE DB connection

2013-06-11 Thread Ladislav Lenart
Hello.


On 10.6.2013 18:41, Michael Bayer wrote:
 
 On Jun 10, 2013, at 11:46 AM, Ladislav Lenart lenart...@volny.cz wrote:
 

 I have no idea what is wrong. Please help me diagnose this! Note that OS 
 monitor
 shows several cherrypy threads, each gets its share, so the cherrypy setup 
 seems
 to be OK.
 class _SessionContext(object):
def __init__(self):
self._level = 0

def __enter__(self):
if self._level == 0:
self._session = _Session()
self._level += 1
return self._session

def __exit__(self, type_, value, traceback):
self._level -= 1
if self._level == 0:
self._session.close()
self._session = None

 _thread_local = local()

 def get_session():
ctx = getattr(_thread_local, 'session_context', None)
if ctx:
return ctx
ctx = _SessionContext()
_thread_local.session_context = ctx
return ctx
 
 
 what's all this thread local stuff for?   The problem is likely there, a 
 scoped_session() already uses a threading.local.   A very simple usage 
 pattern should work fine.   Just put a session.remove() type of thing to 
 occur at the end of each request.   guidelines are here: 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#using-thread-local-scope-with-web-applications
  

Well, this is our way to call Session.remove() at the end of the request 
processing.

get_session() creates a _SessionContext instance (a Python context manager)
responsible for closing the session at the end. We use scoped_session() and we
wanted to support nesting like this:

with get_session():
with get_session():
...

Hence we create only one _SessionContext per thread. It increments its internal
counter whenever its __enter__() is called and decrements it whenever its
__exit__() is called. It closes the session when the counter reaches zero.

The decorator @session.use wraps the function in the call

with get_session():

We mark every method of our handler that needs to access a database with this
decorator. It is always the top level decorator, i.e.:

@session.use
@other_decorator
def foo():
pass

I will investigate further and let you know.


Thank you,

Ladislav Lenart





Re: [sqlalchemy] [Q][0.7.9] All scoped sessions use ONE DB connection

2013-06-11 Thread Ladislav Lenart
Hello.

Problem solved! It was a false alarm. My Firefox simply serializes all requests
to one domain. (BTW Chrome does that too.) This is strange however, because I
have more connections enabled in Firefox about:config (network.http.* and
friends). Has any of you solved this?


Thank you and sorry for the noise,

Ladislav Lenart


On 11.6.2013 09:08, Ladislav Lenart wrote:
 Hello.
 
 
 On 10.6.2013 18:41, Michael Bayer wrote:

 On Jun 10, 2013, at 11:46 AM, Ladislav Lenart lenart...@volny.cz wrote:


 I have no idea what is wrong. Please help me diagnose this! Note that OS 
 monitor
 shows several cherrypy threads, each gets its share, so the cherrypy setup 
 seems
 to be OK.
 class _SessionContext(object):
def __init__(self):
self._level = 0

def __enter__(self):
if self._level == 0:
self._session = _Session()
self._level += 1
return self._session

def __exit__(self, type_, value, traceback):
self._level -= 1
if self._level == 0:
self._session.close()
self._session = None

 _thread_local = local()

 def get_session():
ctx = getattr(_thread_local, 'session_context', None)
if ctx:
return ctx
ctx = _SessionContext()
_thread_local.session_context = ctx
return ctx


 what's all this thread local stuff for?   The problem is likely there, a 
 scoped_session() already uses a threading.local.   A very simple usage 
 pattern should work fine.   Just put a session.remove() type of thing to 
 occur at the end of each request.   guidelines are here: 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#using-thread-local-scope-with-web-applications
  
 
 Well, this is our way to call Session.remove() at the end of the request 
 processing.
 
 get_session() creates _SessionContext instance (a python context manager)
 responsible for closing the session at the end. We use scoped_session() and we
 wanted to support nesting like this:
 
 with get_session():
 with get_session():
 ...
 
 Hence we create only one _SessionContext per thread. It increments its 
 internal
 counter whenever its __enter__() is called and decrements it whenever its
 __exit__() is called. It closes the session when the counter reaches zero.
 
 The decorator @session.use wraps the function in the call
 
 with get_session():
 
 We mark every method of our handler that needs to access a database with this
 decorator. It is always the top level decorator, i.e.:
 
 @session.use
 @other_decorator
 def foo():
 pass
 
 I will investigate further and let you know.
 
 
 Thank you,
 
 Ladislav Lenart





[sqlalchemy] [Q][0.7.9] All scoped sessions use ONE DB connection

2013-06-10 Thread Ladislav Lenart
Hello.

We run our web application in cherrypy. I have just found that all our threads
share *one* DB connection! A web page contains several AJAX calls, each of which
queries the database. However, only one runs at a time because the threads share
a single DB connection.

Notes to the code below:
* We run postgres 9.1 via psycopg2.
* db.init() creates engine with pool_size of 5 by default.
* scoped_session() ensures that each thread gets its own session object.
* See our module session and its use.

I have no idea what is wrong. Please help me diagnose this! Note that OS monitor
shows several cherrypy threads, each gets its share, so the cherrypy setup seems
to be OK.


Thank you in advance,

Ladislav Lenart



# Snippet of db.py #


def init(conn_string, echo=False, base=None, pool_size=5):
    engine = create_engine(conn_string, echo=echo, pool_size=pool_size)
    session.init(engine)
    if base is None:
        base = Base
    _import_model()
    base.metadata.bind = engine
    return engine


#
# Snippet of session.py #
#

from threading import local

from sqlalchemy.orm import scoped_session, sessionmaker

_Session = None

def init(engine):
    global _Session
    _Session = scoped_session(sessionmaker(bind=engine, autoflush=False))


class _SessionContext(object):
    def __init__(self):
        self._level = 0
        self._session = None

    def __enter__(self):
        if self._level == 0:
            self._session = _Session()
        self._level += 1
        return self._session

    def __exit__(self, type_, value, traceback):
        self._level -= 1
        if self._level == 0:
            self._session.close()
            self._session = None

_thread_local = local()

def get_session():
    ctx = getattr(_thread_local, 'session_context', None)
    if ctx:
        return ctx
    ctx = _SessionContext()
    _thread_local.session_context = ctx
    return ctx


def use(fun):
    """Decorator that creates SQLAlchemy session to use in inner method and
    cleans it up afterwards. It is safe for nested calls.
    Session is just closed, any commits or rollbacks must be handled
    explicitly in a decorated method.
    """
    def wrapper(*args, **kwargs):
        with get_session():
            return fun(*args, **kwargs)
    return wrapper


# DB session wrappers for standard session functions. Add more as you see fit.

def query(*args, **kwargs):
    return _Session().query(*args, **kwargs)

def add(*args, **kwargs):
    return _Session().add(*args, **kwargs)

def add_all(*args, **kwargs):
    return _Session().add_all(*args, **kwargs)

def delete(*args, **kwargs):
    """NOTE: DO NOT USE THIS DIRECTLY! USE obj.delete() instead."""
    return _Session().delete(*args, **kwargs)

def execute(*args, **kwargs):
    return _Session().execute(*args, **kwargs)

def flush(*args, **kwargs):
    return _Session().flush(*args, **kwargs)

def commit(*args, **kwargs):
    return _Session().commit(*args, **kwargs)

def rollback(*args, **kwargs):
    return _Session().rollback(*args, **kwargs)

def close(*args, **kwargs):
    return _Session().close(*args, **kwargs)


#
# USAGE #
#

@session.use
def handle_ajax1(...):
    session.query(...)
    session.commit()

@session.use
def handle_ajax2(...):
    session.query(...)
    session.commit()
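The nesting behaviour of _SessionContext -- one session created on the first __enter__, closed only when the outermost __exit__ runs -- can be demonstrated without SQLAlchemy at all. This is a stdlib-only sketch with a stand-in session class:

```python
class FakeSession:
    """Stand-in for a SQLAlchemy session; counts close() calls."""
    closed = 0

    def close(self):
        FakeSession.closed += 1


class SessionContext:
    """Reference-counted context manager, mirroring session.py above."""
    def __init__(self):
        self._level = 0
        self._session = None

    def __enter__(self):
        if self._level == 0:          # outermost entry creates the session
            self._session = FakeSession()
        self._level += 1
        return self._session

    def __exit__(self, exc_type, exc, tb):
        self._level -= 1
        if self._level == 0:          # only the outermost exit closes it
            self._session.close()
            self._session = None


ctx = SessionContext()
with ctx as outer:
    with ctx as inner:
        assert outer is inner          # nested blocks share one session
print(FakeSession.closed)              # closed exactly once
```

This is why `@session.use` is safe to stack on handlers that call each other: the inner `with get_session():` blocks are no-ops until the outermost one unwinds.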





Re: [sqlalchemy] [Q][0.7.9] problem with with_entities(), over() and label()

2013-06-07 Thread Ladislav Lenart
On 6.6.2013 19:33, Michael Bayer wrote:
 
 On Jun 6, 2013, at 12:56 PM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 I have already solved the issue by using subquery:

 SELECT
     t.id AS t_id,
     t.rownum AS t_rownum
 FROM (
     SELECT
         foo.id AS id,
         row_number() OVER (ORDER BY foo.id) AS rownum
     FROM foo
 ) AS t
 WHERE rownum % 50 = 1

 I have just tried your suggestion about using HAVING instead of WHERE, but 
 that
 fails with the same error. Thus a label cannot be used inside a query.

 However, I am still curious whether the original WindowedRangeQuery recipe at

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery

 works or also has this error.
 
 the recipe as you noted uses from_self(), which means, wrap myself in a 
 subquery, so that's where the necessary subquery is applied.

That explains it :-)
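The working subquery form quoted above -- wrap the `row_number()` window in a subquery so the label can be filtered on -- looks roughly like this in the SQLAlchemy expression language (a sketch in the modern 1.4+ core API, not the 0.7.9 one; the `foo` table is illustrative):

```python
from sqlalchemy import (Column, Integer, MetaData, Table, create_engine,
                        func, select)

metadata = MetaData()
foo = Table("foo", metadata, Column("id", Integer, primary_key=True))

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(foo.insert(), [{"id": i} for i in range(1, 101)])

    # Inner select attaches the row number; the outer select may then
    # filter on the label, which a flat WHERE (or HAVING) cannot do.
    inner = select(
        foo.c.id,
        func.row_number().over(order_by=foo.c.id).label("rownum"),
    ).subquery("t")
    stmt = select(inner.c.id, inner.c.rownum).where(inner.c.rownum % 50 == 1)
    rows = [tuple(r) for r in conn.execute(stmt)]
    print(rows)  # first row of each 50-row window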


Thank you,

Ladislav Lenart





[sqlalchemy] [ANN] Modified WindowedRangeQuery recipe

2013-06-07 Thread Ladislav Lenart
Hello.

I modified the recipe at

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery

to better suit my needs. Perhaps someone else will find this useful:



 CODE BEGIN 


from sqlalchemy.orm import subqueryload
from sqlalchemy.sql.expression import distinct

def windowed_query(filter_query, data_query, column, window_size):
    """Perform (a correct) yield_per() operation. See
    WindowedQuery.yield_per() for more.

    EXAMPLE:
        gen = windowed_query(
            filter_query=session.query(Foo).filter(Foo.name.like(u'%foo%')),
            data_query=session.query(Foo).options(subqueryload(Foo.bars)),
            column=Foo.id,
            window_size=50,
        )
        for each_foo in gen:
            print each_foo.name
            for each_bar in each_foo.bars:
                print each_bar
    """
    return WindowedQuery(filter_query, data_query, column).yield_per(window_size)

class WindowedQuery(object):
    """Perform (a correct) yield_per() operation."""
    def __init__(self, filter_query, data_query, column):
        # A query with NO options(...) and NO order_by(...). MUST contain all
        # necessary join() and filter() to limit the result set as desired.
        self._filter_query = filter_query
        # A simple query with options() to fetch the actual data.
        self._data_query = data_query
        # id column of the (main) result ORM class.
        self._column = column

    def yield_per(self, window_size):
        """Process results in chunks.

        Steps:
        * Obtain ids of ALL result rows via self._filter_query.
        * Divide ids to chunks of equal size and perform ONE query for EACH
          chunk to fetch the data via self._data_query.

        A chunk is determined by the test q.filter(self._column.in_(chunk)).
        This is the only way that works in the presence of the read-committed
        isolation level.
        """
        q = self._data_query
        for each_window in self._windows(window_size):
            for each_result in q.filter(each_window):
                yield each_result

    def _windows(self, window_size):
        chunk = []
        chunk_size = 0
        for each in self._q_column():
            if chunk_size < window_size:
                chunk.append(each)
                chunk_size += 1
            if chunk_size == window_size:
                yield self._window_for_chunk(chunk)
                chunk = []
                chunk_size = 0
        if chunk_size > 0:
            yield self._window_for_chunk(chunk)

    def _q_column(self):
        # distinct() ensures that each id is returned at most once despite
        # a possible multiplying effect of a join().
        return self._filter_query.with_entities(distinct(self._column))

    def _window_for_chunk(self, chunk):
        return self._column.in_(chunk)

#### CODE END ####
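The chunking in _windows() above is plain iterator logic; isolated from the SQLAlchemy parts it can be sketched and checked on its own (the function name `chunks` is made up for illustration):

```python
def chunks(ids, window_size):
    # Collect ids into lists of at most window_size, yielding each full
    # chunk as soon as it is complete, plus a trailing partial chunk.
    # This mirrors WindowedQuery._windows() without the IN(...) clause.
    chunk = []
    for each in ids:
        chunk.append(each)
        if len(chunk) == window_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

print(list(chunks(range(1, 8), 3)))  # [[1, 2, 3], [4, 5, 6], [7]]
```

Each yielded chunk would then be turned into a `column.in_(chunk)` criterion for one data query.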


MOTIVATION: I have learned recently that Query.yield_per() does not work nicely
in combination with subqueryload(). The above recipe fixes that. Unfortunately
its usage is not as elegant and simple as q.yield_per(...).

If you have any idea how to accomplish the same with ONE query only (in SA 
0.7.9):

def windowed_query(query, column, window_size):
    query --magic-- filter_query
    query --magic-- data_query
    ...

I would very much like to hear about it.


PERFORMANCE: My first tests suggest that it might be one order of magnitude
better than the Query.yield_per() we use now.

Note also that yield_per() with subqueryload() was still about twice as fast as
the same query without yield_per(). But this will be highly dependent on the
query I guess.


WARNING: We do not use this in the production yet. Use at your own risk.


Happy SA hacking,

Ladislav Lenart









[sqlalchemy] [Q][0.7.9] problem with with_entities(), over() and label()

2013-06-06 Thread Ladislav Lenart
Hello.

I am trying to adapt WindowedRangeQuery recipe

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery

I have just stumbled upon the following issue:

def _q_windows(self, window_size):
    q = self._query.with_entities(
        self._column,
        func.row_number().over(order_by=self._column).label('rownum'),
    )
    if window_size > 1:
        q = q.filter("rownum % {} = 1".format(window_size))
    return q

where

self._query = session.query(Foo)
self._column = Foo.id
window_size = 2

The returned query q produces the following SQL:

SELECT
foo.id AS foo_id,
row_number() OVER (ORDER BY foo.id) AS rownum
FROM foo
WHERE rownum % 2 = 1

When executed, it generates the following error:

sqlalchemy.exc.ProgrammingError:
(ProgrammingError) column "rownum" does not exist

I am kind of lost here, because something very similar should work according to
the original recipe. Note that I apply with_entities() on the original query
whereas the recipe uses from_self() on the column.

Do you have any idea what should I do to fix this?


Thank you in advance,

Ladislav Lenart





Re: [sqlalchemy] [Q][0.7.9] problem with with_entities(), over() and label()

2013-06-06 Thread Ladislav Lenart
Hello.

I have already solved the issue by using subquery:

SELECT
    t.id AS t_id,
    t.rownum AS t_rownum
FROM (
    SELECT
        foo.id AS id,
        row_number() OVER (ORDER BY foo.id) AS rownum
    FROM foo
) AS t
WHERE rownum % 50 = 1

I have just tried your suggestion about using HAVING instead of WHERE, but that
fails with the same error. Thus a label defined in the SELECT list cannot be
referenced elsewhere in the same query.
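The working subquery form can be reproduced outside SQLAlchemy; a minimal sketch using Python's stdlib sqlite3 (hypothetical `foo` table; requires SQLite 3.25+ for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO foo (id) VALUES (?)",
                 [(i,) for i in range(1, 11)])

# rownum is defined in the inner SELECT; the outer WHERE may reference it
# because, to the outer query, it is an ordinary column of the derived
# table t. Referencing it in the WHERE of the same SELECT level fails.
rows = conn.execute("""
    SELECT t.id, t.rownum
    FROM (
        SELECT foo.id AS id,
               row_number() OVER (ORDER BY foo.id) AS rownum
        FROM foo
    ) AS t
    WHERE t.rownum % 5 = 1
""").fetchall()
print(rows)  # [(1, 1), (6, 6)]
```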

However, I am still curious whether the original WindowedRangeQuery recipe at

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery

works or also has this error.


Thank you,

Ladislav Lenart


On 6.6.2013 18:26, Charlie Clark wrote:
 Am 06.06.2013, 16:30 Uhr, schrieb Ladislav Lenart lenart...@volny.cz:
 
 The returned query q produces the following SQL:
 SELECT
     foo.id AS foo_id,
     row_number() OVER (ORDER BY foo.id) AS rownum
 FROM foo
 WHERE rownum % 2 = 1
 When executed, it generates the following error:
 sqlalchemy.exc.ProgrammingError:
 (ProgrammingError) column "rownum" does not exist
 
 Just a guess but the error is coming from Postgres. You'll need to  
 doublecheck the docs but I don't think rownum is available for the WHERE  
 clause because OVER works like the aggregate functions. You can either try  
 wrapping the SELECT in correlated subquery as the example shows or simply  
 use HAVING rownum % 2 = 1 (I think this is best approach but I'm far from  
 an expert). Try the SQL directly in the DB to see which works.
 
 Charlie





Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-05 Thread Ladislav Lenart
Hello.

I have one more question about my approach to WindowedRangeQuery:

 * Suppose a complex query q with join(...), filter(...) and options(...).
 * I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has select with the over by magic taken from WindowedRangeQuery recipe.
* I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original q and
 execute it in a loop.

Suppose I have a complex query (without options() / order_by()) to obtain the
objects I want. I use this query to calculate window intervals. I don't have to
use it again to fetch the objects, because I already have their id intervals. Am
I right?


Thank you,

Ladislav Lenart


On 4.6.2013 19:15, Ladislav Lenart wrote:
 On 4.6.2013 18:49, Michael Bayer wrote:

 On Jun 4, 2013, at 12:28 PM, Ladislav Lenart lenart...@volny.cz wrote:


 Absolutely, you should do whatever you have to in order to get the range
 you want, in fact the recipe even says this...

 Ok. What I want to do is basically the following:
 * Suppose a complex query q with join(...), filter(...) and options(...).
 * I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has select with the over by magic taken from WindowedRangeQuery 
 recipe.
* I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original q and
 execute it in a loop.

 Can this strategy work?

 there's not a publicly supported feature to reset the options right now so 
 you'd probably need to apply them after you get your window ranging query.   
You can probably remove the effect of eager loaders if you were to just 
 assign a blank _attributes dictionary to the new query though (just no long 
 term guarantees).

 other than that, I use the windowing concept extensively and it works very 
 well.
 
 Ok, I will try it.
 
 Thank you very much for your invaluable insights,
 
 Ladislav Lenart





Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-05 Thread Ladislav Lenart
Hello.

I've tried an experiment to verify that yield_per() with subqueryload() behaves
as badly as you described, but according to my practical observation, it issues
ONE subqueryload() and everything behaves as I would expect. It emits two
SELECTs, one to fetch the objects and the second one to fetch the related data
and that's about it, see below. So back to square one, what is wrong with it?


Base = declarative_base()

class Contact(Base):
    # Just a sketch, I hope you get the picture.
    id,
    name,
    phones = relationship('Phone') # many

class Phone(Base):
    # Just a sketch, I hope you get the picture.
    id,
    number,
    contact = relationship('Contact') # one

# Setup engine with echo set to True.

phones = ['123456789', '987654321', '555777999']
for i in range(1, 11):
    c = Contact(name=u' '.join([u'Contact', unicode(i)]))
    session.add(c)
    session.add_all(Phone(contact=c, number=e) for e in phones)
session.flush()
session.expunge_all()

q = session.query(Contact).options(subqueryload(Contact.phones))
for each in q.yield_per(2):
    print each.name
    for e in each.phones:
        print e

The output is like this:
SA info about all the inserts after session.flush().
SA info about select for contacts.
SA info about select for their phones.
Contact 10
123456789
987654321
555777999
Contact 9
123456789
987654321
555777999
Contact 8
123456789
987654321
555777999
Contact 7
123456789
987654321
555777999
Contact 6
123456789
987654321
555777999
Contact 5
123456789
987654321
555777999
Contact 4
123456789
987654321
555777999
Contact 3
123456789
987654321
555777999
Contact 2
123456789
987654321
555777999
Contact 1
123456789
987654321
555777999


Thank you,

Ladislav Lenart


On 5.6.2013 11:26, Ladislav Lenart wrote:
 Hello.
 
 I have one more question about my approach to WindowedRangeQuery:
 
 * Suppose a complex query q with join(...), filter(...) and options(...).
 * I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has select with the over by magic taken from WindowedRangeQuery recipe.
* I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original q and
 execute it in a loop.
 
 Suppose I have a complex query (without options() / order_by()) to obtain the
 objects I want. I use this query to calculate window intervals. I don't have 
 to
 use it again to fetch the objects, because I already have their id intervals. 
 Am
 I right?
 
 
 Thank you,
 
 Ladislav Lenart
 
 
 On 4.6.2013 19:15, Ladislav Lenart wrote:
 On 4.6.2013 18:49, Michael Bayer wrote:

 On Jun 4, 2013, at 12:28 PM, Ladislav Lenart lenart...@volny.cz wrote:


 Absolutely, you should do whatever you have to in order to get the range
 you want, in fact the recipe even says this...

 Ok. What I want to do is basically the following:
 * Suppose a complex query q with join(...), filter(...) and options(...).
 * I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has select with the over by magic taken from WindowedRangeQuery 
 recipe.
* I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original q 
 and
 execute it in a loop.

 Can this strategy work?

 there's not a publicly supported feature to reset the options right now 
 so you'd probably need to apply them after you get your window ranging 
 query.  You can probably remove the effect of eager loaders if you were 
 to just assign a blank _attributes dictionary to the new query though (just 
 no long term guarantees).

 other than that, I use the windowing concept extensively and it works very 
 well.

 Ok, I will try it.

 Thank you very much for your invaluable insights,

 Ladislav Lenart





Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-05 Thread Ladislav Lenart
Hello.

One more note. I've just tried the below experiment with joinedload() instead of
subqueryload() and that does NOT work just as you expected. One contact is
returned several times and the first occurrences have incomplete phones.

However my experiments with subqueryload() suggest that it works just fine with
yield_per(). Can you elaborate on that one please?


Thank you,

Ladislav Lenart


On 5.6.2013 14:03, Ladislav Lenart wrote:
 Hello.
 
 I've tried an experiment to verify that yield_per() with subqueryload() 
 behaves
 as badly as you described, but according to my practical observation, it 
 issues
 ONE subqueryload() and everything behaves as I would expect. It emits two
 SELECTs, one to fetch the objects and the second one to fetch the related data
 and that's about it, see below. So back to square one, what is wrong with it?
 
 
 Base = declarative_base()
 
 class Contact(Base):
 # Just a sketch, I hope you get the picture.
 id,
 name,
 phones = relationship(Phone) # many
 
 class Phone(Base):
 # Just a sketch, I hope you get the picture.
 id,
 number,
 contact = relationship(Contact) # one
 
 # Setup engine with echo set to True.
 
 phones = ['123456789', '987654321', '555777999']
 for i in range(1, 11):
 c = Contact(name=u' '.join([u'Contact', unicode(i)]))
 session.add(c)
 session.add_all(Phone(contact=c, number=e) for e in phones)
 session.flush()
 session.expunge_all()
 
 q = session.query(Contact).options(subqueryload(Contact.phones))
 for each in q.yield_per(2):
 print each.last_name
 for e in each.phones:
 print e
 
 The output is like this:
 SA info about all the inserts after session.flush().
 SA info about select for contacts.
 SA info about select for their phones.
 Contact 10
 123456789
 987654321
 555777999
 Contact 9
 123456789
 987654321
 555777999
 Contact 8
 123456789
 987654321
 555777999
 Contact 7
 123456789
 987654321
 555777999
 Contact 6
 123456789
 987654321
 555777999
 Contact 5
 123456789
 987654321
 555777999
 Contact 4
 123456789
 987654321
 555777999
 Contact 3
 123456789
 987654321
 555777999
 Contact 2
 123456789
 987654321
 555777999
 Contact 1
 123456789
 987654321
 555777999
 
 
 Thank you,
 
 Ladislav Lenart
 
 
 On 5.6.2013 11:26, Ladislav Lenart wrote:
 Hello.

 I have one more question about my approach to WindowedRangeQuery:

 * Suppose a complex query q with join(...), filter(...) and options(...).
 * I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has select with the over by magic taken from WindowedRangeQuery 
 recipe.
* I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original q and
 execute it in a loop.

 Suppose I have a complex query (without options() / order_by()) to obtain the
 objects I want. I use this query to calculate window intervals. I don't have 
 to
 use it again to fetch the objects, because I already have their id 
 intervals. Am
 I right?


 Thank you,

 Ladislav Lenart


 On 4.6.2013 19:15, Ladislav Lenart wrote:
 On 4.6.2013 18:49, Michael Bayer wrote:

 On Jun 4, 2013, at 12:28 PM, Ladislav Lenart lenart...@volny.cz wrote:


 Absolutely, you should do whatever you have to in order to get the range
 you want, in fact the recipe even says this...

 Ok. What I want to do is basically the following:
 * Suppose a complex query q with join(...), filter(...) and options(...).
 * I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has select with the over by magic taken from WindowedRangeQuery 
 recipe.
* I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original q 
 and
 execute it in a loop.

 Can this strategy work?

 there's not a publicly supported feature to reset the options right now 
 so you'd probably need to apply them after you get your window ranging 
 query.  You can probably remove the effect of eager loaders if you 
 were to just assign a blank _attributes dictionary to the new query though 
 (just no long term guarantees).

 other than that, I use the windowing concept extensively and it works very 
 well.

 Ok, I will try it.

 Thank you very much for your invaluable insights,

 Ladislav Lenart





Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-05 Thread Ladislav Lenart
Ok, so yield_per() is useless when subqueryload() is used.

Thank you,

Ladislav Lenart


On 5.6.2013 16:27, Michael Bayer wrote:
 the Query emitted by subqueryload does not use yield_per.  so if your total 
 result set is 1000 rows, and the total rows represented by all the 
 collections is far larger, the first time that query is emitted, all of those 
 rows will be fully loaded and processed into memory at once.  This would occur 
 typically somewhere in the first few rows of your 50-row yield_per batch.   
 So the intent of yield_per, which is to conserve memory and upfront loading 
 overhead, would be defeated entirely by this.
 
 The subqueryload is emitting once if I recall correctly because I probably at 
 some point have the query result being memoized in the query context to 
 prevent it from being emitted many times in a yield_per scenario.
 
 
 On Jun 5, 2013, at 10:20 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 One more note. I've just tried the below experiment with joinedload() 
 instead of
 subqueryload() and that does NOT work just as you expected. One contact is
 returned several times and the first occurrences have incomplete phones.

 However my experiments with subqueryload() suggest that it works just fine 
 with
 yield_per(). Can you elaborate on that one please?


 Thank you,

 Ladislav Lenart


 On 5.6.2013 14:03, Ladislav Lenart wrote:
 Hello.

 I've tried an experiment to verify that yield_per() with subqueryload() 
 behaves
 as badly as you described, but according to my practical observation, it 
 issues
 ONE subqueryload() and everything behaves as I would expect. It emits two
 SELECTs, one to fetch the objects and the second one to fetch the related 
 data
 and that's about it, see below. So back to square one, what is wrong with 
 it?


 Base = declarative_base()

 class Contact(Base):
# Just a sketch, I hope you get the picture.
id,
name,
phones = relationship(Phone) # many

 class Phone(Base):
# Just a sketch, I hope you get the picture.
id,
number,
contact = relationship(Contact) # one

 # Setup engine with echo set to True.

 phones = ['123456789', '987654321', '555777999']
 for i in range(1, 11):
c = Contact(name=u' '.join([u'Contact', unicode(i)]))
session.add(c)
session.add_all(Phone(contact=c, number=e) for e in phones)
 session.flush()
 session.expunge_all()

 q = session.query(Contact).options(subqueryload(Contact.phones))
 for each in q.yield_per(2):
print each.last_name
for e in each.phones:
print e

 The output is like this:
 SA info about all the inserts after session.flush().
 SA info about select for contacts.
 SA info about select for their phones.
 Contact 10
 123456789
 987654321
 555777999
 Contact 9
 123456789
 987654321
 555777999
 Contact 8
 123456789
 987654321
 555777999
 Contact 7
 123456789
 987654321
 555777999
 Contact 6
 123456789
 987654321
 555777999
 Contact 5
 123456789
 987654321
 555777999
 Contact 4
 123456789
 987654321
 555777999
 Contact 3
 123456789
 987654321
 555777999
 Contact 2
 123456789
 987654321
 555777999
 Contact 1
 123456789
 987654321
 555777999


 Thank you,

 Ladislav Lenart


 On 5.6.2013 11:26, Ladislav Lenart wrote:
 Hello.

 I have one more question about my approach to WindowedRangeQuery:

 * Suppose a complex query q with join(...), filter(...) and options(...).
 * I need to create q2 from q such that:
 * It has no options.
   * Can I reset the options with q = q.options(None)?
 * It has select with the over by magic taken from WindowedRangeQuery 
 recipe.
   * I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original q 
 and
 execute it in a loop.

 Suppose I have a complex query (without options() / order_by()) to obtain 
 the
 objects I want. I use this query to calculate window intervals. I don't 
 have to
 use it again to fetch the objects, because I already have their id 
 intervals. Am
 I right?


 Thank you,

 Ladislav Lenart


 On 4.6.2013 19:15, Ladislav Lenart wrote:
 On 4.6.2013 18:49, Michael Bayer wrote:

 On Jun 4, 2013, at 12:28 PM, Ladislav Lenart lenart...@volny.cz wrote:


 Absolutely, you should do whatever you have to in order to get the 
 range
 you want, in fact the recipe even says this...

 Ok. What I want to do is basically the following:
 * Suppose a complex query q with join(...), filter(...) and 
 options(...).
 * I need to create q2 from q such that:
 * It has no options.
   * Can I reset the options with q = q.options(None)?
 * It has select with the over by magic taken from WindowedRangeQuery 
 recipe.
   * I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original 
 q and
 execute it in a loop.

 Can this strategy work?

 there's not a publicly supported feature to reset the options right 
 now so you'd probably need

Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Ladislav Lenart
Hello.

 assuming cls.foo is a many-to-one, it will produce the correct result,
 but will be far worse in terms of memory and performance, as the
 subqueryload() call will be invoked for each distinct batch of 50 rows,
 across the *full* result set.  So if your result has 1000 rows, and the
 number of bars total is far larger, you will load all of those bar rows
 again for *each* yield_per batch of 50.

Ok, I think I get it. Is there a way to make it all work without the performance
penalty of subqueryload? For example, what will happen if I replace it with
joinedload(cls.bars)?

Ladislav Lenart


On 4.6.2013 16:19, Michael Bayer wrote:
 
 On Jun 4, 2013, at 8:18 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 I have a hard time to understand the following comment for Query.yield_per():
 
  Yield only ``count`` rows at a time.

WARNING: use this method with caution; if the same instance is present
in more than one batch of rows, end-user changes to attributes will be
overwritten.

In particular, it's usually impossible to use this setting with
eagerly loaded collections (i.e. any lazy='joined' or 'subquery')
since those collections will be cleared for a new load when
encountered in a subsequent result batch.   In the case of 'subquery'
loading, the full result for all rows is fetched which generally
defeats the purpose of :meth:`~sqlalchemy.orm.query.Query.yield_per`.

Also note that many DBAPIs do not stream results, pre-buffering
all rows before making them available, including mysql-python and
psycopg2.  :meth:`~sqlalchemy.orm.query.Query.yield_per` will also
set the ``stream_results`` execution
option to ``True``, which currently is only understood by psycopg2
and causes server side cursors to be used.

 Suppose I have a code like this:

 q = session.query(cls).filter(...)
 q = q.options(
     joinedload(cls.foo),
     subqueryload(cls.bars),
 )
 for each in q.yield_per(50):
     # access each.foo or each.bars

 Does it work? Is so, why? If not, why?
 
 assuming cls.foo is a many-to-one, it will produce the correct result, but 
 will be far worse in terms of memory and performance, as the subqueryload() 
 call will be invoked for each distinct batch of 50 rows, across the *full* 
 result set.  So if your result has 1000 rows, and the number of bars total 
 is far larger, you will load all of those bar rows again for *each* yield_per batch of 50.
 






Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Ladislav Lenart
Hello.

 You will then get the wrong results.   The docstring tries to explain this -
 a joinedload uses a JOIN.  For each cls instance, there are many rows, one
 for each bar.  If you cut off the results in the middle of populating that
 collection, the collection is incomplete, you'll see the wrong collection on
 your cls.bars.  On the next load, cls.bars will be wiped out and populated
 with the remaining bar objects.

Ok, I think I understand this too.
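The collection wipe-out described above can be modeled without a database; a toy sketch (all names invented) that rebuilds collections per batch the way yield_per forces an eager loader to:

```python
from itertools import islice

# One row per (contact, phone) pair, the shape a joinedload JOIN returns.
rows = [("C1", "p1"), ("C1", "p2"), ("C2", "p1"), ("C2", "p2")]

def collections_per_batch(rows, size):
    # Rebuild every contact's phone list from scratch inside each batch,
    # mimicking how yield_per re-populates eagerly loaded collections.
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            break
        seen = {}
        for contact, phone in batch:
            seen.setdefault(contact, []).append(phone)
        yield seen

result = list(collections_per_batch(rows, 3))
print(result)  # [{'C1': ['p1', 'p2'], 'C2': ['p1']}, {'C2': ['p2']}]
```

C2's rows straddle the batch boundary, so the first batch sees an incomplete `C2` collection and the second batch starts it over, which is exactly the wrong-results scenario with joinedload.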

I've tried WindowedRangeQuery. It looked promising at first but it is (much)
slower than yield_per() with all its quirks, at least for my usecase. If I
understand the WindowedRangeQuery recipe correctly, it does a full scan of the
target table first to read all the ids and calculate the bounds of all the
windows. I don't want to do it like this. I am working with relatively large
datasets but it is still far less than all rows in the table. Something like
10-5 rows from a table with 1-2 million rows. The windowed query iterates over
many completely empty windows.

Can I modify the recipe so it preserves the filtering and creates windows only
for the interesting subset of the table?
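One way to preserve the filtering is to compute row numbers over the filtered subset only, so window boundaries fall on interesting rows; a sketch in plain SQL via stdlib sqlite3 (hypothetical `flag` column standing in for the real filter; SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, flag INTEGER)")
conn.executemany("INSERT INTO foo VALUES (?, ?)",
                 [(i, 1 if i % 3 == 0 else 0) for i in range(1, 101)])

# Number the rows AFTER filtering, so the windows cover only the
# interesting subset (flag = 1) instead of the whole table.
window_size = 10
starts = [r[0] for r in conn.execute("""
    SELECT id FROM (
        SELECT id, row_number() OVER (ORDER BY id) AS rownum
        FROM foo
        WHERE flag = 1
    )
    WHERE rownum % ? = 1
""", (window_size,))]
print(starts)  # [3, 33, 63, 93]
```

Here 33 rows match the filter, so only four windows are produced; numbering the unfiltered table would instead scatter them across 100 rows.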

Thank you,

Ladislav Lenart


On 4.6.2013 17:18, Michael Bayer wrote:
 
 On Jun 4, 2013, at 10:45 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

  assuming cls.foo is a many-to-one, it will produce the correct result,
  but will be far worse in terms of memory and performance, as the
  subqueryload() call will be invoked for each distinct batch of 50 rows,
  across the *full* result set.  So if your result has 1000 rows, and the
  number of bars total is far larger, you will load all of those bar rows
  again for *each* yield_per batch of 50.

 Ok, I think I get it. Is there a way to make it all work without the 
 performance
 penalty of subqueryload? For example, what will happen if I replace it with
 joinedload(cls.bars)?
 
 You will then get the wrong results.   The docstring tries to explain this - 
 a joinedload uses a JOIN.  For each cls instance, there are many rows, one 
 for each bar.  If you cut off the results in the middle of populating that 
 collection, the collection is incomplete, you'll see the wrong collection on 
 your cls.bars.  On the next load, cls.bars will be wiped out and populated 
 with the remaining bar objects.
 
 Don't use yield_per.  Use windowing instead, see 
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery.





Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Ladislav Lenart
Hello.

 OK, but with yield_per() you want to use eager loading also, so yield_per() is
 not fast enough either, I guess

No. I use yield_per() on complex queries with join(), filter() and both
joinedload() and subqueryload(). It is possible that they sometimes return
wrong results because of yield_per(). I am not sure about that, but it is
definitely much faster than the original WindowedRangeQuery recipe. I can only
speculate that postgres caches subqueryload results...


 Absolutely, you should do whatever you have to in order to get the range
 you want, in fact the recipe even says this...

Ok. What I want to do is basically the following:
* Suppose a complex query q with join(...), filter(...) and options(...).
* I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has a SELECT with the row_number() OVER (ORDER BY ...) magic taken from the WindowedRangeQuery recipe.
* I know I can use with_entities() for this.
* I will use q2 to obtain the window ranges.
* I will iterate over the window ranges and apply each to the original q and
execute it in a loop.

Can this strategy work?
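
The windowing half of this strategy can be sketched in plain Python (no SQLAlchemy here; `window_ranges` is a made-up helper, not an API): the cheap id-only query yields the ordered ids that survive the filter, and every Nth id becomes a window boundary to apply back to the full query.

```python
def window_ranges(ordered_ids, window_size):
    """Yield (lo, hi) pairs covering ordered_ids; hi is exclusive, None = open-ended."""
    anchors = ordered_ids[::window_size]
    for i, lo in enumerate(anchors):
        hi = anchors[i + 1] if i + 1 < len(anchors) else None
        yield (lo, hi)

# Ids returned by the filtered id-only query (already sorted):
ids = [3, 7, 8, 20, 21, 40, 41]
ranges = list(window_ranges(ids, 3))
# [(3, 20), (20, 41), (41, None)] -- each range would then be applied
# back to the original query as a filter on the id column.
```

Because the windows are computed only from ids that matched the filter, no completely empty windows are produced.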


Thank you,

Ladislav Lenart


On 4.6.2013 17:57, Michael Bayer wrote:
 
 On Jun 4, 2013, at 11:41 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 You will then get the wrong results.   The docstring tries to explain this -
 a joinedload uses a JOIN.  For each cls instance, there are many rows, one
 for each bar.  If you cut off the results in the middle of populating that
 collection, the collection is incomplete, you'll see the wrong collection on
 your cls.bars.  On the next load, cls.bars will be wiped out and populated
 with the remaining bar objects.

 Ok, I think I understand this too.

 I've tried WindowedRangeQuery. It looked promising at first but it is (much)
 slower than yield_per() with all its quirks, at least for my usecase.
 
 OK, but with yield_per() you want to use eager loading also, so yield_per() is
 not fast enough either, I guess
 
 
 If I
 understand the WindowedRangeQuery recipe, it does a full scan of the target
 table first to read all the ids and calculate the bounds of all the windows. 
 I
 don't want to do it like this. I am working with relatively large datasets but
 it
 is still far less than all rows in the table. Something like 10-5 rows 
 from
 a table with 1-2 million rows. The windowed query iterates over many 
 completely
 empty windows.

 Can I modify the recipe so it preserves the filtering and creates windows 
 only
 for the interesting subset of the table?
 
 Absolutely, you should do whatever you have to in order to get the range you 
 want, in fact the recipe even says this:
 
  Enhance this yourself !  Add a where argument
 so that windows of just a subset of rows can
 be computed.
 
 if your situation is even simpler than that, such as just querying from PKs 
 50-1000, you could just make up your own integer ranges within those two 
 endpoints.   
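 
 That simplest case — fabricating fixed-width integer ranges between two known PK endpoints — might look like this (a plain-Python sketch; `pk_ranges` is illustrative, not a SQLAlchemy API):
 
 ```python
 def pk_ranges(lo, hi, step):
     """Fixed-width [start, stop) ranges covering the PK span lo..hi."""
     start = lo
     while start < hi:
         yield (start, min(start + step, hi))
         start += step
 
 windows = list(pk_ranges(50, 1000, 400))
 # [(50, 450), (450, 850), (850, 1000)]
 ```
 
 Each (start, stop) pair would then be applied as a BETWEEN-style filter on the primary key, with no preliminary full-table scan needed.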





Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Ladislav Lenart
On 4.6.2013 18:49, Michael Bayer wrote:
 
 On Jun 4, 2013, at 12:28 PM, Ladislav Lenart lenart...@volny.cz wrote:
 

 Absolutely, you should do whatever you have to in order to get the range
 you want, in fact the recipe even says this...

 Ok. What I want to do is basically the following:
 * Suppose a complex query q with join(...), filter(...) and options(...).
 * I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has a SELECT with the row_number() OVER (ORDER BY ...) magic taken from the WindowedRangeQuery recipe.
* I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original q and
 execute it in a loop.

 Can this strategy work?
 
 there's not a publicly supported feature to reset the options right now so 
 you'd probably need to apply them after you get your window ranging query.
   You can probably remove the effect of eager loaders if you were to just 
 assign a blank _attributes dictionary to the new query though (just no long 
 term guarantees).
 
 other than that, I use the windowing concept extensively and it works very 
 well.

Ok, I will try it.

Thank you very much for your invaluable insights,

Ladislav Lenart





Re: [sqlalchemy] nested inheritance / polymorphic relationships

2013-06-04 Thread Ladislav Lenart
Hello.

 It seems like I have to choose, D can either be a polymorphic child or it can 
 be
 a parent - it can't be both.

 Do I have any options here?

I am almost sure you are correct. This is not possible in SA so you have to
flatten your hierarchy.

I don't know about experiences with inheritance of others on this list, but mine
in SA0.7.9 is not that good. I encountered various limitations along the way. I
guess it depends on the complexity of the queries. We plan to get rid of it
eventually in our app. However, if I am not mistaken, SA0.8 addresses all the
quirks.


Ladislav Lenart


On 4.6.2013 07:55, Amir Elaguizy wrote:
 Hi there,
 
 I have a tree that looks like this, reflected via polymorphic inheritance:
 
 
         A
       / | \
      B  C  D
 
 That works great, like:
 
 
 class BaseModel(db.Model): # Table A in diagram
 __tablename__ = 'entities'
 
 id = db.Column(db.BigInteger, primary_key=True, nullable=False,
 server_default=func.nextval('guid_seq'))
 type_id = db.Column(db.SmallInteger, db.ForeignKey(EntityTypesModel.id))
 
 __mapper_args__ = {
 'polymorphic_identity':'entity',
 'polymorphic_on':type_id,
 'with_polymorphic':'*'
 }
 
 class BrandModel(BaseModel):   # Table B, C, D in diagram
 __tablename__ = 'brands'
 
 id = db.Column(db.BigInteger, db.ForeignKey(StufffModel.id),
 primary_key=True, nullable=False)
 name = db.Column(db.String, nullable=False)
 
 __mapper_args__ = {
 'polymorphic_identity':ET_BRAND,
 }
 
 
 The problem is I need to reflect something more like this:
 
         A
       / | \
      B  C  D
           / \
          E   F
 
 Where D is not only a polymorphic child of A but also the polymorphic parent
 of E and F.
 
 It seems like I have to choose, D can either be a polymorphic child or it can 
 be
 a parent - it can't be both.
 
 Do I have any options here?
 





Re: [sqlalchemy] [Q][0.7.9] How to issue apply_labels() on an ORM query?

2013-05-31 Thread Ladislav Lenart
Glad I could help, but I don't understand what is going on. Neither ticket
description nor the patch itself helped me. Sorry.

What is select_from() good for when it generates a cartesian query?

What MIGHT help ME (a lot in fact) is a couple of DOs and DON'Ts examples in one
place for all these three constructs

OR

A 'highlevel user-friendly step-by-step description' about what each construct
does, so that I can see where they differ.


Anyway, keep the excellent work!

Ladislav Lenart


On 31.5.2013 01:12, Michael Bayer wrote:
 this is very helpful because you are here running into an older feature that 
 I think is not very applicable to modern usage, not to mention not 
 terrifically documented, so I've added 
 http://www.sqlalchemy.org/trac/attachment/ticket/2736/ to address changing 
 the role of select_from() to be more what it says, and to offload a 
 particular magic aspect of it into a new method called select_entity_from().  
  
 
 if you look at the docstrings added to the patch there, you may get a sense 
 for what's been going on.   I'll probably commit this soon and those docs 
 will be up on the site.  There will be an upgrade path for users in this case.
 
 
 
 
 On May 30, 2013, at 1:10 PM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 Sorry for the long delay. I finally had enough time to produce a minimal
 self-contained regression. The attached file produces the following SQL:

 WITH RECURSIVE
 q_cte(partner_id, max_depth) AS (
SELECT
partner.id AS partner_id,
1 AS max_depth
FROM partner
WHERE partner.sponsor_id IS NULL
UNION ALL
SELECT
partner_alias.id AS partner_id,
max_depth + 1 AS max_depth
FROM
partner AS partner_alias,
q_cte AS q_cte_alias
WHERE partner_alias.sponsor_id = q_cte_alias.partner_id
 )
 SELECT
q_cte.partner_id AS q_cte_partner_id, -- suspicious
partner.sponsor_id AS partner_sponsor_id
 FROM
q_cte
JOIN partner ON q_cte.partner_id = q_cte.partner_id -- BUG


 I would expect this (q_cte part is correct):

 WITH RECURSIVE
 q_cte(partner_id, max_depth) AS (
...
 )
 SELECT
partner.id AS partner_id,
partner.sponsor_id AS partner_sponsor_id
 FROM
q_cte
JOIN partner ON q_cte.partner_id = partner.id


 The bug is somehow related to the use of select_from().


 Hope this helps,

 Ladislav Lenart


 On 10.5.2013 22:04, Michael Bayer wrote:

 On May 10, 2013, at 3:03 PM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 My main concern was that the query creates a cartesian product and I 
 thought the
 warning might have something to do with it. It hasn't. The problem is 
 related
 to the use of select_from():

   q = session.query(cls, PersonalContact).select_from(q_cte_union)
   q = q.join(cls, cls.id == q_cte_union.c.partner_id)

 This part renders the following SQL:

 SELECT *
 FROM
   q_cte
   JOIN partner ON q_cte.partner_id = q_cte.partner_id

 As you can see, the JOIN condition is wrong though I have no idea why. The
 following works, however:

   q = session.query(cls)
   q = q.add_entity(PersonalContact)
   q = q.join(q_cte_union, cls.id == q_cte_union.c.partner_id)

 Is my usage of select_from() bad for some reason? Do you know what's going 
 on? I
 can provide you a fully runnable test case if you still need it (on 
 Monday).
 Note also that I use SA 0.7.9. Is this fixed in later versions?

 I've no idea why you're getting it without more context - if everything is 
 generated from a query(), typically the labels are already applied, with 
 the exception of some operations such as subquery() (maybe cte? not sure, 
 I'd need to try it, hence an already existing example saves me lots of time 
 tinkering around trying to guess what you're doing).

 Things have been improved in 0.8 regarding same-named columns but that only 
 applies when the labels are already present.



 select_from_bug.py





Re: [sqlalchemy] [Q][0.7.9] How to issue apply_labels() on an ORM query?

2013-05-31 Thread Ladislav Lenart
Hello.

I've read the patch and the new documentation and i've learned about the
existence of select_entity_from(). I was trying to say that the new
documentation does not help me to understand the meaning / preferred usage of
these constructs (i.e. select_from, select_entity_from and aliased). I simply
don't understand when should I use which and why.

I have already solved my original problem by replacing select_from() with
add_entity() and join(). Take a look at older posts in this thread for more
details if you're interested, though the original query is a bit more involved
than the supplied regression.

Ladislav Lenart


On 31.5.2013 12:06, Andrija Zarić wrote:
 On Friday, May 31, 2013 11:46:46 AM UTC+2, Ladislav Lenart wrote:
 
 Glad I could help, but I don't understand what is going on. Neither ticket
 description nor the patch itself helped me. Sorry.
 
 What is select_from() good for when it generates a cartesian query?
 
  What MIGHT help ME (a lot in fact) is a couple of DOs and DON'Ts examples 
 in one
 place for all these three constructs
 
 OR
 
 A 'highlevel user-friendly step-by-step description' about what each 
 construct
 does, so that I can see where they differ.
 
  
 Hi, Lenart!
 
 In the patch you can find doc for the new method Query.select_entity_from. As 
 I
 read it, it replaces Query.select_from.
 
 Have you tried simply replacing old method with the new one (after applying 
 the
 patch)?
  
 a.





Re: [sqlalchemy] [Q][0.7.9] How to issue apply_labels() on an ORM query?

2013-05-31 Thread Ladislav Lenart
Thank you for the excellent description. The replace in bold did the trick for
me :-)

Ladislav Lenart


On 31.5.2013 16:31, Michael Bayer wrote:
 
 
 On Friday, May 31, 2013 6:31:25 AM UTC-4, Ladislav Lenart wrote:
 
 Hello.
 
 I've read the patch and the new documentation and i've learned about the
 existence of select_entity_from(). I was trying to say that the new
 documentation does not help me to understand the meaning / preferred 
 usage of
 these constructs (i.e. select_from, select_entity_from and aliased). I 
 simply
 don't understand when should I use which and why.
 
 
 use select_from() as you have been, and you're done.  It now does what you 
 expect.
 
 A more detailed explanation is here:
 http://docs.sqlalchemy.org/en/latest/changelog/migration_09.html#query-select-from-no-longer-applies-the-clause-to-corresponding-entities
 
 
  
 
 
 I have already solved my original problem by replacing select_from() with
 add_entity() and join(). Take a look at older posts in this thread for 
 more
 details if you're interested, though the original query is a bit more 
 involved
 than the supplied regression.
 
 Ladislav Lenart
 
 
 On 31.5.2013 12:06, Andrija Zarić wrote:
  On Friday, May 31, 2013 11:46:46 AM UTC+2, Ladislav Lenart wrote:
 
  Glad I could help, but I don't understand what is going on. Neither
 ticket
  description nor the patch itself helped me. Sorry.
 
  What is select_from() good for when it generates a cartesian query?
 
   What MIGHT help ME (a lot in fact) is a couple of DOs and DON'Ts
 examples in one
  place for all these three constructs
 
  OR
 
  A 'highlevel user-friendly step-by-step description' about what each
 construct
  does, so that I can see where they differ.
 
   
  Hi, Lenart!
 
  In the patch you can find doc for the new method 
 Query.select_entity_from.
 As I
  read it, it replaces Query.select_from.
 
  Have you tried simply replacing old method with the new one (after
 applying the
  patch)?
   
  a.
 





[sqlalchemy] [Q][0.7.9] DetachedInstanceError

2013-05-30 Thread Ladislav Lenart
Hello.

I occasionally see DetachedInstanceError in apache error log of my web app.

According to

http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#expunging

an object is detached when:

1. It is expunged.
2. The session is closed.

I use expunge() only in some of my unit tests, never in the production code.
Thus the first option is out. The error occurs on various places but I think the
session *should* still be open when it happens. Can an object become detached
any other way?

Do you have any idea how can I diagnose this problem given that:
* It happens sparingly and in random places?
* I am unable to reproduce it at will?

The web app is running cherrypy under apache.


Thank you in advance,

Ladislav Lenart





Re: [sqlalchemy] [Q][0.7.9] How to issue apply_labels() on an ORM query?

2013-05-30 Thread Ladislav Lenart
Hello.

Sorry for the long delay. I finally had enough time to produce a minimal
self-contained regression. The attached file produces the following SQL:

WITH RECURSIVE
q_cte(partner_id, max_depth) AS (
SELECT
partner.id AS partner_id,
1 AS max_depth
FROM partner
WHERE partner.sponsor_id IS NULL
UNION ALL
SELECT
partner_alias.id AS partner_id,
max_depth + 1 AS max_depth
FROM
partner AS partner_alias,
q_cte AS q_cte_alias
WHERE partner_alias.sponsor_id = q_cte_alias.partner_id
)
SELECT
q_cte.partner_id AS q_cte_partner_id, -- suspicious
partner.sponsor_id AS partner_sponsor_id
FROM
q_cte
JOIN partner ON q_cte.partner_id = q_cte.partner_id -- BUG


I would expect this (q_cte part is correct):

WITH RECURSIVE
q_cte(partner_id, max_depth) AS (
...
)
SELECT
partner.id AS partner_id,
partner.sponsor_id AS partner_sponsor_id
FROM
q_cte
JOIN partner ON q_cte.partner_id = partner.id


The bug is somehow related to the use of select_from().


Hope this helps,

Ladislav Lenart


On 10.5.2013 22:04, Michael Bayer wrote:
 
 On May 10, 2013, at 3:03 PM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 My main concern was that the query creates a cartesian product and I thought 
 the
 warning might have something to do with it. It hasn't. The problem is 
 related
 to the use of select_from():

q = session.query(cls, PersonalContact).select_from(q_cte_union)
q = q.join(cls, cls.id == q_cte_union.c.partner_id)

 This part renders the following SQL:

 SELECT *
 FROM
q_cte
JOIN partner ON q_cte.partner_id = q_cte.partner_id

 As you can see, the JOIN condition is wrong though I have no idea why. The
 following works, however:

q = session.query(cls)
q = q.add_entity(PersonalContact)
q = q.join(q_cte_union, cls.id == q_cte_union.c.partner_id)

 Is my usage of select_from() bad for some reason? Do you know what's going 
 on? I
 can provide you a fully runnable test case if you still need it (on Monday).
 Note also that I use SA 0.7.9. Is this fixed in later versions?
 
 I've no idea why you're getting it without more context - if everything is 
 generated from a query(), typically the labels are already applied, with 
 the exception of some operations such as subquery() (maybe cte? not sure, I'd 
 need to try it, hence an already existing example saves me lots of time 
 tinkering around trying to guess what you're doing).
 
 Things have been improved in 0.8 regarding same-named columns but that only 
 applies when the labels are already present.



# coding=utf-8
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql.expression import literal_column
from sqlalchemy.orm.util import aliased
from sqlalchemy.engine import create_engine
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.orm import relationship


Base = declarative_base()
session = None


class Partner(Base):
__tablename__ = 'partner'
id = Column(Integer(), primary_key=True)
sponsor_id = Column(Integer(), ForeignKey('partner.id', ondelete='SET NULL'))
sponsor = relationship('Partner', primaryjoin='Partner.sponsor_id == Partner.id', remote_side='Partner.id', uselist=False)

def find_subtree(self, max_depth=None):
cls = self.__class__
i0 = literal_column('1').label('max_depth')
q_base = session.query(cls.id.label('partner_id'), i0).filter(cls.sponsor_id == self.id)
q_cte = q_base.cte(name='q_cte', recursive=True)
q_cte_alias = aliased(q_cte, name='q_cte_alias')
partner_alias = aliased(cls, name='partner_alias')
i1 = literal_column('max_depth + 1').label('max_depth')
q_rec = session.query(partner_alias.id.label('partner_id'), i1)
q_rec = q_rec.filter(partner_alias.sponsor_id == q_cte_alias.c.partner_id)
if max_depth is not None:
q_rec = q_rec.filter(q_cte_alias.c.max_depth < max_depth)
q_cte_union = q_cte.union_all(q_rec)
q = session.query(cls).select_from(q_cte_union)
q = q.join(cls, cls.id == q_cte_union.c.partner_id)
# q = q.join(q_cte_union, cls.id == q_cte_union.c.partner_id)
return q


def main():
global session
conn_string = 'postgresql+psycopg2://lada:heslo@localhost:5433/zfp_xxx'
engine = create_engine(conn_string, echo=True)
session = sessionmaker(bind=engine, autoflush=False)()
Base.metadata.bind = engine
Base.metadata.create_all()
root

[sqlalchemy] [Q][0.7.9] How to issue apply_labels() on an ORM query?

2013-05-10 Thread Ladislav Lenart
Hello.

I get a warning like this:

usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py:2276: SAWarning:
Column 'id' on table <sqlalchemy.sql.expression.Select at 0xa3e19cc; Select
object> being replaced by another column with the same key.  Consider use_labels
for select() statements.

use_labels() or apply_labels() should help but both work only on the core
constructs. How can I do this on a Query level?

Also, can this alone cause a cartesian product in a query?

My entire (recursive) query:

def find_subtree(self, max_depth=None, eager=False):
cls = self.__class__
i0 = literal_column('1').label('max_depth')
q_base = session.query(cls.id.label('partner_id'),
i0).filter(cls.sponsor_id == self.id)
q_cte = q_base.cte(name='q_cte', recursive=True)
q_cte_alias = aliased(q_cte, name='q_cte_alias')
partner_alias = aliased(cls, name='partner_alias')
i1 = literal_column('max_depth + 1').label('max_depth')
q_rec = session.query(partner_alias.id.label('partner_id'), i1)
q_rec = q_rec.filter(partner_alias.sponsor_id == 
q_cte_alias.c.partner_id)
if max_depth is not None:
q_rec = q_rec.filter(q_cte_alias.c.max_depth < max_depth)
q_cte_union = q_cte.union_all(q_rec)
if eager:
q = session.query(cls, PersonalContact).select_from(q_cte_union)
q = q.join(cls, cls.id == q_cte_union.c.partner_id)
q = q.join(PersonalContact, cls.id == PersonalContact.partner_id)
q = q.options(
subqueryload_all(cls.partner_regions),
joinedload_all(cls.personal_data, PersonalData.address,
innerjoin=True),
subqueryload_all(PersonalContact.contact_tags, ContactTag.tag),
)
else:
q = session.query(cls)
q = q.join(q_cte_union, cls.id == q_cte_union.c.partner_id)
return q


Thank you,

Ladislav Lenart





Re: [sqlalchemy] [Q][0.7.9] How to issue apply_labels() on an ORM query?

2013-05-10 Thread Ladislav Lenart
Hello.

My main concern was that the query creates a cartesian product and I thought the
warning might have something to do with it. It hasn't. The problem is related
to the use of select_from():

q = session.query(cls, PersonalContact).select_from(q_cte_union)
q = q.join(cls, cls.id == q_cte_union.c.partner_id)

This part renders the following SQL:

SELECT *
FROM
q_cte
JOIN partner ON q_cte.partner_id = q_cte.partner_id

As you can see, the JOIN condition is wrong though I have no idea why. The
following works, however:

q = session.query(cls)
q = q.add_entity(PersonalContact)
q = q.join(q_cte_union, cls.id == q_cte_union.c.partner_id)

Is my usage of select_from() bad for some reason? Do you know what's going on? I
can provide you a fully runnable test case if you still need it (on Monday).
Note also that I use SA 0.7.9. Is this fixed in later versions?


Thank you,

Ladislav Lenart


On 10.5.2013 17:17, Michael Bayer wrote:
 The Query usually does apply_labels automatically.  if you are getting that 
 warning with your query below, there's too much going on there for me to 
 identify by sight where that might be happening, I would need actual code 
 which I can run in order to diagnose.


 On May 10, 2013, at 11:10 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 I get a warning like this:

 usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py:2276: SAWarning:
 Column 'id' on table <sqlalchemy.sql.expression.Select at 0xa3e19cc; Select
 object> being replaced by another column with the same key.  Consider 
 use_labels
 for select() statements.

 use_labels() or apply_labels() should help but both work only on the core
 constructs. How can I do this on a Query level?

 Also, can this alone cause a cartesian product in a query?

 My entire (recursive) query:

def find_subtree(self, max_depth=None, eager=False):
cls = self.__class__
i0 = literal_column('1').label('max_depth')
q_base = session.query(cls.id.label('partner_id'),
 i0).filter(cls.sponsor_id == self.id)
q_cte = q_base.cte(name='q_cte', recursive=True)
q_cte_alias = aliased(q_cte, name='q_cte_alias')
partner_alias = aliased(cls, name='partner_alias')
i1 = literal_column('max_depth + 1').label('max_depth')
q_rec = session.query(partner_alias.id.label('partner_id'), i1)
q_rec = q_rec.filter(partner_alias.sponsor_id == 
 q_cte_alias.c.partner_id)
if max_depth is not None:
q_rec = q_rec.filter(q_cte_alias.c.max_depth < max_depth)
q_cte_union = q_cte.union_all(q_rec)
if eager:
q = session.query(cls, PersonalContact).select_from(q_cte_union)
q = q.join(cls, cls.id == q_cte_union.c.partner_id)
q = q.join(PersonalContact, cls.id == PersonalContact.partner_id)
q = q.options(
subqueryload_all(cls.partner_regions),
joinedload_all(cls.personal_data, PersonalData.address,
 innerjoin=True),
subqueryload_all(PersonalContact.contact_tags, 
 ContactTag.tag),
)
else:
q = session.query(cls)
q = q.join(q_cte_union, cls.id == q_cte_union.c.partner_id)
return q


 Thank you,

 Ladislav Lenart






[sqlalchemy] [Q] WindowedRangeQuery recipe

2013-04-26 Thread Ladislav Lenart
Hello.

I have found this recipe:

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery

I think it fits perfectly for my usecase: process potentially large result set
of ORM instances in chunks of predefined size to limit memory consumption.

I have few questions / remarks:
* I think there is a typo in the line (the comma should not be there, right?):
intervals = [id for id, in q]
* The column supplied to windowed_query() function should be the primary key of
the table that represents the ORM instances, right?
* When are the ORM instances from the previous chunk reclaimed? I know the
session keeps them in an identity map.


Thank you,

Ladislav Lenart





Re: [sqlalchemy] [Q] WindowedRangeQuery recipe

2013-04-26 Thread Ladislav Lenart
Hello.

I think I understand it all now.

Thank you,

Ladislav Lenart


On 26.4.2013 13:22, Gunnlaugur Thor Briem wrote:
 No, the comma is supposed to be there; it's for tuple unpacking. The iterable
 q yields tuples (which in this case are of length one, because the result set
 has only one column).
 
 The column should be whatever attribute of the ORM instances you want to sort
 by, not necessarily the primary key.
 
 The ORM instances are referenced by the session, so they will not be reclaimed
 until the session is closed (or they are expunged from it).
 
 Regards,
 
 Gulli
 
 
 
 On Fri, Apr 26, 2013 at 10:06 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.
 
 I have found this recipe:
 
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery
 
 I think it fits perfectly for my usecase: process potentially large 
 result set
 of ORM instances in chunks of predefined size to limit memory consumption.
 
 I have few questions / remarks:
 * I think there is a typo in the line (the comma should not be there, 
 right?):
 intervals = [id for id, in q]
 * The column supplied to windowed_query() function should be the primary 
 key of
 the table that represents the ORM instances, right?
 * When are the ORM instances from the previous chunk reclaimed? I know the
 session keeps them in an identity map.
 
 
 Thank you,
 
 Ladislav Lenart
 
 
 
 





[sqlalchemy] [Q][SA 07.9] - Trouble with EXISTS that references main SELECT

2013-04-18 Thread Ladislav Lenart
Hello.

The following SA code:

# cls = Advice
q0 = session.query(Deal).join(ImportedPartnerShare).filter(
Deal.main_deal_ext_id == cls.main_deal_ext_id,
ImportedPartnerShare.partner_id == partner.id,
)
q = session.query(cls).filter(exists(q0.with_entities('1').statement))
q = q.options(
joinedload(cls.advice_type, innerjoin=True),
joinedload_all(cls.responsible_partner, Partner.personal_data),
)


produces this SQL:


SELECT *
FROM
advice
JOIN advice_type AS advice_type_1 ON
advice.advice_type_id = advice_type_1.id
LEFT OUTER JOIN partner AS partner_1 ON
advice.responsible_partner_id = partner_1.id
LEFT OUTER JOIN personal_data AS personal_data_1 ON
partner_1.personal_data_id = personal_data_1.id
WHERE
EXISTS (
SELECT 1
FROM
advice, -- <-- How can I get rid of this?
deal
JOIN imported_partner_share ON deal.id = 
imported_partner_share.deal_id
WHERE
deal.main_deal_ext_id = advice.main_deal_ext_id
AND imported_partner_share.partner_id = %(partner_id_1)s
)


but I want to reference the advice table from the top query in EXISTS like this:


SELECT *
FROM
advice
JOIN advice_type ON advice.advice_type_id = advice_type.id
LEFT OUTER JOIN partner ON advice.responsible_partner_id = partner.id
LEFT OUTER JOIN personal_data AS personal_data ON
partner.personal_data_id = personal_data.id
WHERE
EXISTS (
SELECT 1
FROM
deal
JOIN imported_partner_share ON
deal.id = imported_partner_share.deal_id
WHERE
deal.main_deal_ext_id = advice.main_deal_ext_id
AND imported_partner_share.partner_id = %(partner_id_1)s
)


How can I do that?


Thank you,

Ladislav Lenart






Re: [sqlalchemy] [Q][SA 07.9] - Trouble with EXISTS that references main SELECT

2013-04-18 Thread Ladislav Lenart
Hello.

I have found a solution to my problem: correlate(). The following code produces
SQL I want (see below):

q0 = session.query(Deal).join(ImportedPartnerShare).filter(
Deal.main_deal_ext_id == cls.main_deal_ext_id,
ImportedPartnerShare.partner_id == partner.id,
)
q = session.query(cls).filter(
exists(q0.with_entities('1').statement.correlate(cls.__table__))
)
q = q.options(
joinedload(cls.advice_type, innerjoin=True),
joinedload_all(cls.responsible_partner, Partner.personal_data),
)
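The effect of correlate() — the inner SELECT naming only `deal` in its FROM list, while `advice` resolves against the enclosing query — can be illustrated with plain SQL against the stdlib's sqlite3 (the rows and a reduced two-table schema are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE advice (id INTEGER PRIMARY KEY, main_deal_ext_id INTEGER);
    CREATE TABLE deal   (id INTEGER PRIMARY KEY, main_deal_ext_id INTEGER);
    INSERT INTO advice VALUES (1, 100), (2, 200);
    INSERT INTO deal   VALUES (10, 100);
""")

# The subquery's FROM list names only `deal`; the reference to
# advice.main_deal_ext_id is resolved against the enclosing SELECT,
# which is the behaviour correlate() forces on the SA statement.
rows = conn.execute("""
    SELECT advice.id
    FROM advice
    WHERE EXISTS (
        SELECT 1
        FROM deal
        WHERE deal.main_deal_ext_id = advice.main_deal_ext_id
    )
""").fetchall()
```

Only advice rows with a matching deal survive; without correlation, the subquery would re-scan `advice` on its own and the EXISTS would match every row.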


Ladislav Lenart


On 18.4.2013 18:54, Ladislav Lenart wrote:
 Hello.
 
 The following SA code:
 
 # cls = Advice
 q0 = session.query(Deal).join(ImportedPartnerShare).filter(
 Deal.main_deal_ext_id == cls.main_deal_ext_id,
 ImportedPartnerShare.partner_id == partner.id,
 )
 q = session.query(cls).filter(exists(q0.with_entities('1').statement))
 q = q.options(
 joinedload(cls.advice_type, innerjoin=True),
 joinedload_all(cls.responsible_partner, Partner.personal_data),
 )
 
 
 produces this SQL:
 
 
 SELECT *
 FROM
 advice
 JOIN advice_type AS advice_type_1 ON
 advice.advice_type_id = advice_type_1.id
 LEFT OUTER JOIN partner AS partner_1 ON
 advice.responsible_partner_id = partner_1.id
 LEFT OUTER JOIN personal_data AS personal_data_1 ON
 partner_1.personal_data_id = personal_data_1.id
 WHERE
 EXISTS (
 SELECT 1
 FROM
 advice, -- <-- How can I get rid of this?
 deal
 JOIN imported_partner_share ON deal.id = 
 imported_partner_share.deal_id
 WHERE
 deal.main_deal_ext_id = advice.main_deal_ext_id
 AND imported_partner_share.partner_id = %(partner_id_1)s
 )
 
 
 but I want to reference the advice table from the top query in EXISTS like 
 this:
 
 
 SELECT *
 FROM
 advice
 JOIN advice_type ON advice.advice_type_id = advice_type.id
 LEFT OUTER JOIN partner ON advice.responsible_partner_id = partner.id
 LEFT OUTER JOIN personal_data AS personal_data ON
 partner.personal_data_id = personal_data.id
 WHERE
 EXISTS (
 SELECT 1
 FROM
 deal
 JOIN imported_partner_share ON
 deal.id = imported_partner_share.deal_id
 WHERE
 deal.main_deal_ext_id = advice.main_deal_ext_id
 AND imported_partner_share.partner_id = %(partner_id_1)s
 )
 
 
 How can I do that?
 
 
 Thank you,
 
 Ladislav Lenart
 
 






Re: [sqlalchemy] Aggregating results of a union

2013-03-21 Thread Ladislav Lenart
Hello.

Perhaps something like this is what you are looking for?

def q_union():
s = session
q1 = s.query(
Movement.customer_id.label('customer_id'),
Movement.document_id.label('document_id'),
Movement.debt.label('debt_moved'),
literal(0.00).label('debt_total')
)
q2 = s.query(
Debt.customer_id.label('customer_id'),
Debt.document_id.label('document_id'),
literal(0.00).label('debt_moved'),
Debt.debt.label('debt_total'),
)
q3 = q1.union(q2).subquery()
q4 = session.query(
q3.c.customer_id,
q3.c.document_id,
func.sum(q3.c.debt_moved),
func.sum(q3.c.debt_total),
).group_by(q3.c.customer_id, q3.c.document_id)
return q4

Call to `print q_union()` prints the following:

SELECT
anon_1.customer_id AS anon_1_customer_id,
anon_1.document_id AS anon_1_document_id,
sum(anon_1.debt_moved) AS sum_1,
sum(anon_1.debt_total) AS sum_2
FROM (
SELECT
anon_2.customer_id AS customer_id,
anon_2.document_id AS document_id,
anon_2.debt_moved AS debt_moved,
anon_2.debt_total AS debt_total
FROM (
SELECT
movement.customer_id AS customer_id,
movement.document_id AS document_id,
movement.debt AS debt_moved,
%(param_1)s AS debt_total
FROM movement
UNION
SELECT
debt.customer_id AS customer_id,
debt.document_id AS document_id,
%(param_2)s AS debt_moved,
debt.debt AS debt_total
FROM debt
) AS anon_2
) AS anon_1
GROUP BY anon_1.customer_id, anon_1.document_id
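The shape of that generated SQL — constant placeholders padding each branch of the union so both sides match, with the aggregation applied to the wrapping subquery — can be checked end-to-end with the stdlib's sqlite3 (sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE movement (customer_id INT, document_id INT, debt INT);
    CREATE TABLE debt     (customer_id INT, document_id INT, debt INT);
    INSERT INTO movement VALUES (1, 1, 30), (1, 1, 20);
    INSERT INTO debt     VALUES (1, 1, 100);
""")

# Constant placeholders pad each branch so both sides of the union have the
# same shape; the outer query aggregates per (customer_id, document_id).
rows = conn.execute("""
    SELECT customer_id, document_id, SUM(debt_moved), SUM(debt_total)
    FROM (
        SELECT customer_id, document_id, debt AS debt_moved, 0 AS debt_total
        FROM movement
        UNION ALL
        SELECT customer_id, document_id, 0 AS debt_moved, debt AS debt_total
        FROM debt
    )
    GROUP BY customer_id, document_id
""").fetchall()
```

Note the sketch uses UNION ALL; plain UNION, as in the SA code above, would additionally collapse fully duplicate rows before the SUM, which may or may not be what you want.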

I have attached a complete python script that you can run to verify if it's 
correct.


HTH,

Ladislav Lenart


On 21.3.2013 09:07, Alexey Vihorev wrote:
 Hi!
 
  
 
 I have this query:
 
 q1 = (s.query()
     .add_columns(
         Movement.customer_id,
         Movement.document_id,
         func.sum(Movement.debt).label('debt_moved'),
         literal(0.00).label('debt_total')
     )
     .group_by(Movement.customer_id, Movement.document_id)
 )
 
 q2 = (s.query()
     .add_columns(
         Debt.customer_id,
         Debt.document_id,
         literal(0.00).label('debt_moved'),
         Debt.debt.label('debt_total'),
     )
 )
 
 q3 = q1.union(q2)
 
 The problem is that I need to SUM() the resulting query (columns 'debt_moved'
 and 'debt_total' should be summed, grouped by customer_id and document_id), but
 so far all my attempts to use add_columns() and group_by() on q3 have failed.
 How should I handle this?
 






Re: [sqlalchemy] Aggregating results of a union

2013-03-21 Thread Ladislav Lenart
As usual, I forgot to add the attachment...

Ladislav Lenart


On 21.3.2013 13:57, Ladislav Lenart wrote:
 Hello.
 
 Perhaps something like this is what you are looking for?
 
 def q_union():
 s = session
 q1 = s.query(
 Movement.customer_id.label('customer_id'),
 Movement.document_id.label('document_id'),
 Movement.debt.label('debt_moved'),
 literal(0.00).label('debt_total')
 )
 q2 = s.query(
 Debt.customer_id.label('customer_id'),
 Debt.document_id.label('document_id'),
 literal(0.00).label('debt_moved'),
 Debt.debt.label('debt_total'),
 )
 q3 = q1.union(q2).subquery()
 q4 = session.query(
 q3.c.customer_id,
 q3.c.document_id,
 func.sum(q3.c.debt_moved),
 func.sum(q3.c.debt_total),
 ).group_by(q3.c.customer_id, q3.c.document_id)
 return q4
 
 Call to `print q_union()` prints the following:
 
 SELECT
 anon_1.customer_id AS anon_1_customer_id,
 anon_1.document_id AS anon_1_document_id,
 sum(anon_1.debt_moved) AS sum_1,
 sum(anon_1.debt_total) AS sum_2
 FROM (
 SELECT
 anon_2.customer_id AS customer_id,
 anon_2.document_id AS document_id,
 anon_2.debt_moved AS debt_moved,
 anon_2.debt_total AS debt_total
 FROM (
 SELECT
 movement.customer_id AS customer_id,
 movement.document_id AS document_id,
 movement.debt AS debt_moved,
 %(param_1)s AS debt_total
 FROM movement
 UNION
 SELECT
 debt.customer_id AS customer_id,
 debt.document_id AS document_id,
 %(param_2)s AS debt_moved,
 debt.debt AS debt_total
 FROM debt
 ) AS anon_2
 ) AS anon_1
 GROUP BY anon_1.customer_id, anon_1.document_id
 
 I have attached a complete python script that you can run to verify if it's 
 correct.
 
 
 HTH,
 
 Ladislav Lenart
 
 
 On 21.3.2013 09:07, Alexey Vihorev wrote:
 Hi!

  

 I have this query:
 
 q1 = (s.query()
     .add_columns(
         Movement.customer_id,
         Movement.document_id,
         func.sum(Movement.debt).label('debt_moved'),
         literal(0.00).label('debt_total')
     )
     .group_by(Movement.customer_id, Movement.document_id)
 )
 
 q2 = (s.query()
     .add_columns(
         Debt.customer_id,
         Debt.document_id,
         literal(0.00).label('debt_moved'),
         Debt.debt.label('debt_total'),
     )
 )
 
 q3 = q1.union(q2)
 
 The problem is that I need to SUM() the resulting query (columns 'debt_moved'
 and 'debt_total' should be summed, grouped by customer_id and document_id), but
 so far all my attempts to use add_columns() and group_by() on q3 have failed.
 How should I handle this?




# coding=utf-8
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer
from sqlalchemy.engine import create_engine
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.sql.expression import func, literal


Base = declarative_base()

class Movement(Base):
__tablename__ = 'movement'
id = Column(Integer(), primary_key=True)
customer_id = Column(Integer())
document_id = Column(Integer())
debt = Column(Integer())

class Debt(Base):
__tablename__ = 'debt'
id = Column(Integer(), primary_key=True)
customer_id = Column(Integer())
document_id = Column(Integer())
debt = Column(Integer())


session = None

def configure():
global session
conn_string = 'postgresql+psycopg2://lada:heslo@localhost:5433/sandbox'
engine = create_engine(conn_string, echo=True)
session = scoped_session(sessionmaker(bind

Re: [sqlalchemy] Placeholder values

2013-03-20 Thread Ladislav Lenart
Hello.

I think you want this:

http://docs.sqlalchemy.org/en/rel_0_7/core/expression_api.html#sqlalchemy.sql.expression.literal_column

Basic usage example:

from sqlalchemy.sql.expression import literal_column

# Connect...
print Session.query(literal_column('42')) # prints: 'SELECT 42'


HTH,

Ladislav Lenart


On 20.3.2013 18:39, Alexey Vihorev wrote:
 Hi!
 
 I've got to unite two tables while preserving some of their columns as
 distinct; to comply with the "same number, same type" requirement of the UNION
 operator I use placeholder values, like this:
 
  
 
 SELECT united.customer_id,
        united.document_id,
        SUM(united.debt_moved) AS debt_moved,
        SUM(united.debt_total) AS debt_total,
        SUM(debt_total - debt_moved) AS delta
 FROM
     (SELECT debt.customer_id,
             debt.document_id,
             debt.debt AS debt_total,
             0.00 AS debt_moved -- placeholder value
      FROM debt
      UNION ALL
      SELECT debt_movement.customer_id,
             debt_movement.document_id,
             debt_movement.debt AS debt_moved,
             0.00 AS debt_total -- placeholder value
      FROM debt_movement
      WHERE debt_movement.date_time > '04/03/2013') AS united
 GROUP BY customer_id,
          document_id
 
  
 
 But I found no obvious way to do that in SA. Any thoughts? Thanks!





Re: [sqlalchemy] [Q] SA 0.7.9 on postgres 9.1 - problem with COPY FROM command

2013-03-14 Thread Ladislav Lenart
Hello.

Please ignore my previous post. The cause of the UPDATE problem is still unknown
to me, but it is NOT related to the COPY FROM command in any way. I was just told
that we have been unable to perform the initial UPDATE (i.e. from an empty postgres
DB to a full one) for two weeks now. The UPDATE code hasn't changed at all. We
only added a couple of indexes. Could these cause such catastrophic behaviour?

Ladislav Lenart


On 14.3.2013 12:04, Ladislav Lenart wrote:
 Hello.
 
 DISCLAIMER: This is NOT about SQL Alchemy per se. I just seek opinions and 
 ideas
 from people that are familiar with relational databases, especially postgres.
 Sorry for the longish post...
 
 Someone posted a link to the following presentation a few days back:
 
 https://speakerdeck.com/rwarren/a-brief-intro-to-profiling-in-python
 
It inspired me to try pg's COPY FROM command instead of compiled INSERT
command(s) to copy a large amount of data from a remote MS SQL database via
pyodbc to the local postgres instance (via psycopg2). According to the
presentation, this should be approx. 2-3 times faster than my current solution.
 
 ALGORITHM:
* FILL (populate local postgres tables from the remote MS SQL database):
   * Drop the tables.
   * Create them a new.
   * For each such table:
 * Query MS SQL db and insert result rows to the table.
   * COMMIT.
 * UPDATE (application downtime period):
   * Now, all data from the remote MS SQL are present in local postgres tables.
   * Execute various insert/update scripts that put local data in sync.
   * COMMIT.
 
Note that the pg_dump of the entire postgres DB in SQL format is about 1GB:
500MB for the temporary tables with copies of remote data and 500MB for the
regular tables.
 
 This algorithm works for several weeks now on a daily basis with only minor
 issues mostly related to network outages.
 
 I replaced compiled INSERTs with COPY FROM command. I use os.pipe. One end 
 reads
 MS SQL query rows, converts them to CSV and writes them to output. The other 
 end
 is a COPY FROM command that processes the CSV lines. It runs in a separate
 thread (threading.Thread). The interesting portion of the code is this:
 
 
 # Snippet of...
class ExtDbTable:
"""Logic to populate one local postgres table with data from a remote
MS SQL database."""

def fill(self, cursor):
"""Perform a SQL query to the external DB and populate the table with
its results.
The cursor argument represents a (pyodbc) connection to the external DB.
The table is guaranteed to be empty.
Perform the query to the external DB and insert its results via the
postgres COPY FROM command as one huge (but streamlined) CSV.
"""
 t0 = datetime.utcnow()
 gen = self._fetch(cursor)
 self._insert(gen)
 t1 = datetime.utcnow()
 if self._echo:
info(u"{}.fill\ntotal: {}\n", self.name, t1 - t0)
 
def _fetch(self, cursor):
"""Generator that fetches one result row at a time from a remote
MS SQL database."""
 sql = self._query
 if self._echo:
info(u"\n\n SQL \nTable: {}\n{}", self.name, sql)
 t0 = datetime.utcnow()
 cursor.execute(sql)
 t1 = datetime.utcnow()
 if self._echo:
info(u"{}._fetch\ntotal: {}", self.name, t1 - t0)
 each = cursor.fetchone()
 while each:
 yield each
 each = cursor.fetchone()
 # TODO: Remove the obsolete code.
 # yield dict(zip([each_desc[0] for each_desc in
 each.cursor_description], each))
 
def _insert(self, gen):
"""Insert rows to the ext table via the postgres COPY FROM command:
* Open a pipe.
* Start a new thread that reads CSV lines from the pipe and inserts them
into the table via the COPY FROM command. The thread is necessary because
cursor.copy_expert() (= COPY FROM command) reads till EOF. This would
block because the pipe has no EOF (yet).
* Read SQL result rows from the input generator (one at a time),
convert each to CSV format and write it to the pipe.

NOTE: This should be approx. 2-3x faster than the previous solution
that used large INSERT statement(s) escaped with cursor.mogrify().
Maybe even more, because the current implementation SHOULD be using
much less memory.

NOTE: The real measurement shows that the new implementation takes
70% of the old time (1h50m vs. 1h15m). HOWEVER, the following update
consumes ALL AVAILABLE MEMORY AND DOES NOT FINISH.
"""
 
 t0 = datetime.utcnow()
 rd, wd = os.pipe() # d-descriptor
 rf, wf = os.fdopen(rd), os.fdopen(wd, 'w') # f-file
 # NOTE: Session = scoped_session(sessionmaker(bind=engine, 
 autoflush=False))
 cursor = Session().connection().connection.cursor()
 null = '_?_'
 
 def _convert_value(v):
 if v
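The pipe-plus-thread arrangement described in the docstring above can be sketched with only the stdlib, using a hypothetical consume() function in place of cursor.copy_expert() (it likewise reads its input until EOF):

```python
import csv
import os
import threading

def consume(fileobj, sink):
    # Stand-in for cursor.copy_expert(): keeps reading CSV lines until EOF.
    for row in csv.reader(fileobj):
        sink.append(row)

rows_in = [(1, "alice"), (2, "bob")]
received = []

rd, wd = os.pipe()
rf = os.fdopen(rd, "r", newline="")
wf = os.fdopen(wd, "w", newline="")

# The consumer must run in its own thread: it blocks until the write end is
# closed, which is exactly the EOF problem the docstring describes.
t = threading.Thread(target=consume, args=(rf, received))
t.start()

writer = csv.writer(wf)
for row in rows_in:
    writer.writerow(row)
wf.close()  # closing the write end delivers EOF to the consumer
t.join()
rf.close()
```

With psycopg2 the consumer thread would instead call copy_expert() on the read end of the pipe, and the producer would stream the MS SQL result rows into the write end.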

[sqlalchemy] [Q] Struggle with exists

2013-03-01 Thread Ladislav Lenart
Hello.

I don't know how to write the following query in SA (using constructs such as
query, select, exists):

SELECT EXISTS(
SELECT 1
FROM
imported_client_share
JOIN imported_partner_share ON imported_client_share.deal_id =
imported_partner_share.deal_id
JOIN deal ON imported_client_share.deal_id = deal.id
WHERE
imported_client_share.client_id = :client_id
AND imported_partner_share.partner_id = :partner_id
AND deal.external_id IS NULL
) OR EXISTS(
SELECT 1
FROM
client_share
JOIN partner_share ON client_share.deal_id = partner_share.deal_id
JOIN deal ON client_share.deal_id = deal.id
WHERE
client_share.client_id = :client_id
AND partner_share.partner_id = :partner_id
AND deal.external_id IS NULL
)

Can you help me?

(I have read the tutorial and the API documentation several times but I still
don't get it.)


Thank you in advance,

Ladislav Lenart





Re: [sqlalchemy] [Q] Struggle with exists

2013-03-01 Thread Ladislav Lenart
Thank you!

I was missing the following bit(s):

q = session.query(
exists(q1.with_entities('1').statement)
| exists(q2.with_entities('1').statement)
)

I knew about Query.statement but I did not figure out how to combine that with
OR. It did not occur to me that I can write session.query(or_(...)) directly.
The with_entities() construct is also new to me, though I presume that SQL
engines optimize the SELECT list inside an EXISTS automatically.

I must admit that I did not understand your example the first time I saw it. But
once I ran it in the debugger, everything became clear and logical:

session.query( # renders top-level SELECT
or_(
# q.exists() is a core construct and thus cannot accept
# a query object. q.statement returns select represented
# by the query, which IS a core construct.
# q.with_entities('1') replaces q's SELECT... part.
exists(q1.with_entities('1').statement),
exists(q2.with_entities('1').statement),
)
)
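The final statement this produces — a SELECT of a boolean expression with no FROM clause — can be seen in plain SQL with the stdlib's sqlite3 (tables, rows, and ids here are invented, and the schema is trimmed down):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE client_share  (client_id INT, deal_id INT);
    CREATE TABLE partner_share (partner_id INT, deal_id INT);
    INSERT INTO client_share  VALUES (1, 7);
    INSERT INTO partner_share VALUES (2, 7);
""")

# A SELECT of a boolean expression with no FROM clause -- the shape that
# session.query(exists(...) | exists(...)) renders.
(found,) = conn.execute("""
    SELECT EXISTS(
        SELECT 1
        FROM client_share
        JOIN partner_share ON client_share.deal_id = partner_share.deal_id
        WHERE client_share.client_id = ? AND partner_share.partner_id = ?
    ) OR EXISTS(
        SELECT 1 FROM client_share WHERE client_share.client_id = ?
    )
""", (1, 2, 99)).fetchone()
```

The whole statement returns a single row with a single truth value: 1 as soon as either branch matches, 0 otherwise.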

One unrelated question: What is the difference between Query.add_column() and
Query.add_entity()?


Thank you again,

Ladislav Lenart


On 1.3.2013 18:01, Michael Bayer wrote:
 we should probably add a method to Query called exists() that just turns any 
 query into EXISTS (SELECT 1), here's how to make it work for now
 
 from sqlalchemy import exists
 
 q1 = session.query(ImportedClientShare)
 q1 = q1.join(ImportedPartnerShare,
 ImportedClientShare.deal_id == 
 ImportedPartnerShare.deal_id)
 q1 = q1.join(Deal, ImportedClientShare.deal_id == Deal.id)
 q1 = q1.filter(
 ImportedClientShare.client_id == client_id,
 ImportedPartnerShare.partner_id == partner_id,
 Deal.external_id != None,
 )
 
 q2 = session.query(ClientShare)
 q2 = q2.join(PartnerShare, ClientShare.deal_id == PartnerShare.deal_id)
 q2 = q2.join(Deal, ClientShare.deal_id == Deal.id)
 
 q2 = q2.filter(
 ClientShare.client_id == client_id,
 PartnerShare.partner_id == partner_id,
 Deal.external_id == None,
 )
 
 q = session.query(exists(q1.with_entities('1').statement) | 
 exists(q2.with_entities('1').statement))
 
 
 
 On Mar 1, 2013, at 7:41 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 SELECT EXISTS(
SELECT 1
FROM
imported_client_share
JOIN imported_partner_share ON imported_client_share.deal_id =
 imported_partner_share.deal_id
JOIN deal ON imported_client_share.deal_id = deal.id
WHERE
imported_client_share.client_id = :client_id
AND imported_partner_share.partner_id = :partner_id
AND deal.external_id IS NULL
 ) OR EXISTS(
SELECT 1
FROM
client_share
JOIN partner_share ON client_share.deal_id = partner_share.deal_id
JOIN deal ON client_share.deal_id = deal.id
WHERE
client_share.client_id = :client_id
AND partner_share.partner_id = :partner_id
AND deal.external_id IS NULL
 )





Re: [sqlalchemy] [Q] SA 0.7.9 - missing from when using union_all and joinedload_all

2013-02-28 Thread Ladislav Lenart
Thank you very much for your valuable time, Michael!

Your example code seems correct. The only differences I found are:
* Missing FK on ImportedPartnerShare.partner_id:
class ImportedPartnerShare(Base):
    deal_id = Column(Integer, ForeignKey('deal.id'))
    partner_id = Column(Integer, ForeignKey('partner.id')) # ForeignKey() is
    # missing in your example script
* All FKs in the example should have nullable=False and ondelete='CASCADE'.

But I suppose none of this makes any difference. As you wrote and confirmed,
this issue (and many others) was resolved in SA 0.7.9. I have just verified that
I am using SA 0.7.8 at the moment (the version from the debian distro). I
apologize for the erroneous subject line. I did not check the version when I
wrote it, I just assumed.


Thank you again,

Ladislav Lenart


On 28.2.2013 04:12, Michael Bayer wrote:
 OK, I've reconstructed mappings which correspond directly to your Query as 
 given, and it produces the identical SQL.  I've inserted a bunch of rows into 
 all the tables so that a polymorphic result comes back, so that we can in 
 fact verify that the ORM reads the client_id column correctly.
 
 Your issue exists from SQLAlchemy 0.7.8 and backwards, and was fixed as of 
 0.7.9, (we're up to 0.7.10 as well as 0.8 betas).  0.7.9 had many bug fixes 
 for the CTE feature as it had only been introduced in 0.7.6.
 
 Script is attached.
 
 
 On Feb 27, 2013, at 4:11 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 

 On Feb 27, 2013, at 3:12 PM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 Thank you for your prompt answer. I will try to create a working example 
 that
 demonstrates the issue. Though it will take me a couple of days, maybe 
 weeks (my
 regular work followed by a vacation).

 I have another problem. I rephrased the SQL, because postgres's planner had
 issues with EXISTS and thought it is a great idea to perform full scans of
 several huge tables in order to return several hundreds result rows. Enter 
 CTEs...

 Short-short intro:
 * Client is the joined table inheritance root.
 * PersonalClient and CorporateClient are its subclasses.
 * Partner is a salesman.
 * Deal is a contract signed between salesman(s) and client(s).
 * ImportedClientShare and ClientShare are M:N relationships between clients 
 and
 deals.
 * ImportedPartnerShare and PartnerShare are M:N relationships between 
 partners
 and deals.
 * We import deals from an external DB. Those are called imported. Imported 
 deal
 has external_id and Imported*Share apply.
 * However, a user of our system (a partner) can create a new deal locally. 
 Such
 a deal does not have an external id (yet) and local *Share apply to it.

 The following code should return all clients of a given partner via
 ImportedClientShare or via ClientShare:

   q1 = session.query(ImportedClientShare.client_id.label('client_id'))
   q1 = q1.join(ImportedPartnerShare, ImportedClientShare.deal_id ==
 ImportedPartnerShare.deal_id)
   q1 = q1.join(Deal, ImportedClientShare.deal_id == Deal.id)
   q1 = q1.filter(
   ImportedPartnerShare.partner_id == partner_id, # input argument
   Deal.external_id != None,
   )
   q2 = session.query(ClientShare.client_id.label('client_id'))
   q2 = q2.join(PartnerShare, ClientShare.deal_id == 
 PartnerShare.deal_id)
   q2 = q2.join(Deal, ClientShare.deal_id == Deal.id)
   q2 = q2.filter(
   PartnerShare.partner_id == partner_id, # input argument
   Deal.external_id == None,
   )
   client_ids = q1.union(q2).cte('client_ids')
   q = session.query(Client).with_polymorphic([PersonalClient,
 CorporateClient])
   q = q.join(client_ids, Client.id == client_ids.c.client_id)

 NoSuchColumnError: Could not locate column in row for column 'client.id'

 I also tried it without with_polymorphic() and the result is the same.

 Do you have any idea what is going on / what am I doing wrong and how I can 
 fix
 this?

 Again, this is a very complex query, even more complex than the previous 
 one, and it boils down to limits in what the ORM can currently handle.  It's 
 probably a bug, though there may be workarounds that allow it to work, 
 however it's the kind of issue that typically takes me many hours to 
 diagnose and fix or at least work around, given code that I can run and pdb 
 in order to debug.  This is not something you'd have much luck resolving on 
 your own unless you wanted to become deeply familiar with SQLAlchemy 
 internals.  

 I would recommend again making sure all these issues remain in the latest 
 0.8 version and attempting to produce a rudimentary series of test classes 
 which I can run in order to reproduce your results.   I can reconstitute 
 these models looking just at your queries, though sometimes after all that 
 effort the issue isn't reproduced, due to some quirk in the mappings that's 
 also required.


Re: [sqlalchemy] [Q] SA 0.7.10 - issues with union, with_polymorphic, joinedload / subqueryload

2013-02-28 Thread Ladislav Lenart
Hello again.

I have successfully installed SA 0.7.10 and the query as-is works. However,
joinedload_all / subqueryload_all options on the query do NOT work. I have
slightly extended your cte_example, which now demonstrates the issues:
joinedload_all does not crash, but there is undesired SQL activity after the
main query, which renders the joinedload useless; subqueryload_all crashes.

I would be glad for any words of advice or idea(s) for possible workaround(s),

Ladislav Lenart


On 28.2.2013 10:26, Ladislav Lenart wrote:
 Thank you very much for your valuable time, Michael!
 
 Your example code seems correct. The only differences I found are:
 * Missing FK on ImportedPartnerShare.partner_id:
 class ImportedPartnerShare(Base):
 deal_id = Column(Integer, ForeignKey('deal.id'))
 partner_id = Column(Integer, ForeignKey('partner.id')) # ForeignKey() is
 missing in your example script
 * All FKs in the example should have nullable=False and ondelete='CASCADE'.
 
 But I suppose none of this makes any difference. As you wrote and confirmed,
 this issue (and many others) was resolved in SA 0.7.9. I have just verified 
 that
 I am using SA 0.7.8 at the moment (version from debian distro). I apologize 
 for
 the erroneous subject. I did not check it when I wrote that line, I just assumed.
 
 
 Thank you again,
 
 Ladislav Lenart
 
 
 On 28.2.2013 04:12, Michael Bayer wrote:
 OK, I've reconstructed mappings which correspond directly to your Query as 
 given, and it produces the identical SQL.  I've inserted a bunch of rows 
 into all the tables so that a polymorphic result comes back, so that we can 
 in fact verify that the ORM reads the client_id column correctly.

 Your issue exists from SQLAlchemy 0.7.8 and backwards, and was fixed as of 
 0.7.9, (we're up to 0.7.10 as well as 0.8 betas).  0.7.9 had many bug fixes 
 for the CTE feature as it had only been introduced in 0.7.6.

 Script is attached.


 On Feb 27, 2013, at 4:11 PM, Michael Bayer mike...@zzzcomputing.com wrote:


 On Feb 27, 2013, at 3:12 PM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 Thank you for your prompt answer. I will try to create a working example 
 that
 demonstrates the issue. Though it will take me a couple of days, maybe 
 weeks (my
 regular work followed by a vacation).

 I have another problem. I rephrased the SQL, because postgres's planner had
 issues with EXISTS and thought it was a great idea to perform full scans of
 several huge tables in order to return several hundred result rows. Enter
 CTEs...

 Short-short intro:
 * Client is the joined table inheritance root.
 * PersonalClient and CorporateClient are its subclasses.
 * Partner is a salesman.
 * Deal is a contract signed between salesman(s) and client(s).
 * ImportedClientShare and ClientShare are M:N relationships between 
 clients and
 deals.
 * ImportedPartnerShare and PartnerShare are M:N relationships between 
 partners
 and deals.
 * We import deals from an external DB. Those are called imported. Imported 
 deal
 has external_id and Imported*Share apply.
 * However, a user of our system (a partner) can create a new deal locally. 
 Such
 a deal does not have an external id (yet) and local *Share apply to it.

 The following code should return all clients of a given partner via
 ImportedClientShare or via ClientShare:

   q1 = session.query(ImportedClientShare.client_id.label('client_id'))
   q1 = q1.join(ImportedPartnerShare, ImportedClientShare.deal_id ==
 ImportedPartnerShare.deal_id)
   q1 = q1.join(Deal, ImportedClientShare.deal_id == Deal.id)
   q1 = q1.filter(
   ImportedPartnerShare.partner_id == partner_id, # input argument
   Deal.external_id != None,
   )
   q2 = session.query(ClientShare.client_id.label('client_id'))
   q2 = q2.join(PartnerShare, ClientShare.deal_id == 
 PartnerShare.deal_id)
   q2 = q2.join(Deal, ClientShare.deal_id == Deal.id)
   q2 = q2.filter(
   PartnerShare.partner_id == partner_id, # input argument
   Deal.external_id == None,
   )
   client_ids = q1.union(q2).cte('client_ids')
   q = session.query(Client).with_polymorphic([PersonalClient,
 CorporateClient])
   q = q.join(client_ids, Client.id == client_ids.c.client_id)

 NoSuchColumnError: Could not locate column in row for column 'client.id'

 I also tried it without with_polymorphic() and the result is the same.

 Do you have any idea what is going on / what am I doing wrong and how I 
 can fix
 this?

 Again, this is a very complex query, even more complex than the previous 
 one, and it boils down to limits in what the ORM can currently handle.  
 It's probably a bug, though there may be workarounds that allow it to work, 
 however it's the kind of issue that typically takes me many hours to 
 diagnose and fix or at least work around, given code that I can run and pdb 
 in order to debug.  This is not something you'd have much luck resolving on 
 your own unless you wanted to become

Re: [sqlalchemy] [Q] SA 0.7.10 - issues with union, with_polymorphic, joinedload / subqueryload

2013-02-28 Thread Ladislav Lenart
Hello.

I have new insights / a new bug to report.

Even when I reformat the query like this:

q = session.query(Client, PersonalClient, CorporateClient)
q = q.outerjoin(PersonalClient, Client.id == PersonalClient.id)
q = q.outerjoin(CorporateClient, Client.id == CorporateClient.id)
if eager:
    q = q.options(
        joinedload_all(PersonalClient.data, PersonalData.address),
        joinedload_all(CorporateClient.data, CorporateData.address),
    )

joinedload_all / subqueryload_all still do not work.

The only thing that works is to separate the query into two completely
independent ones:

q = session.query(Client, PersonalClient)
q = q.join(PersonalClient, Client.id == PersonalClient.id)
if eager:
    q = q.options(
        joinedload_all(PersonalClient.data, PersonalData.address),
    )

This will also be my solution for the foreseeable future.
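For reference, the chained eager-load in that workaround can be sketched as a reduced, runnable example. The model below is illustrative, not the thread's real schema, and note that joinedload_all is the 0.7-era spelling; modern SQLAlchemy chains joinedload() calls instead:

```python
# Reduced runnable sketch of the split-query workaround's eager loading
# (illustrative model; joinedload_all is the 0.7-era spelling -- modern
# SQLAlchemy chains joinedload() calls instead).
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, Session, relationship, joinedload

Base = declarative_base()

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    city = Column(String)

class PersonalData(Base):
    __tablename__ = 'personal_data'
    id = Column(Integer, primary_key=True)
    address_id = Column(Integer, ForeignKey('address.id'))
    address = relationship(Address)

class PersonalClient(Base):
    __tablename__ = 'personal_client'
    id = Column(Integer, primary_key=True)
    data_id = Column(Integer, ForeignKey('personal_data.id'))
    data = relationship(PersonalData)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add(PersonalClient(data=PersonalData(address=Address(city='Prague'))))
session.commit()
session.expunge_all()

# joinedload_all(PersonalClient.data, PersonalData.address) in 0.7 terms:
client = session.query(PersonalClient).options(
    joinedload(PersonalClient.data).joinedload(PersonalData.address)).one()
session.close()  # detached: the attributes below must already be loaded
print(client.data.address.city)
```

Accessing client.data.address after session.close() only works because the joined eager load populated the whole chain in the one SELECT.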


Thank you,

Ladislav Lenart


On 28.2.2013 11:29, Ladislav Lenart wrote:
 Hello again.
 
 I have successfully installed SA 0.7.10 and the query as-is works. However
 joinedload_all / subqueryload_all options on the query do NOT work. I have
 slightly extended your cte_example which now demonstrates the issues.
 joinedload_all does not crash but there is undesired sql activity after the 
 main
 query which renders the joinedload useless. subqueryload_all crashes.
 
 I would be glad for any words of advice or idea(s) for possible workaround(s),
 
 Ladislav Lenart
 
 
 On 28.2.2013 10:26, Ladislav Lenart wrote:
 Thank you very much for your valuable time, Michael!

 Your example code seems correct. The only differences I found are:
 * Missing FK on ImportedPartnerShare.partner_id:
 class ImportedPartnerShare(Base):
 deal_id = Column(Integer, ForeignKey('deal.id'))
 partner_id = Column(Integer, ForeignKey('partner.id')) # ForeignKey() is
 missing in your example script
 * All FKs in the example should have nullable=False and ondelete='CASCADE'.

 But I suppose none of this makes any difference. As you wrote and confirmed,
 this issue (and many others) was resolved in SA 0.7.9. I have just verified 
 that
 I am using SA 0.7.8 at the moment (version from debian distro). I apologize 
 for
 the erroneous subject. I did not check it when I wrote that line, I just assumed.


 Thank you again,

 Ladislav Lenart


 On 28.2.2013 04:12, Michael Bayer wrote:
 OK, I've reconstructed mappings which correspond directly to your Query as 
 given, and it produces the identical SQL.  I've inserted a bunch of rows 
 into all the tables so that a polymorphic result comes back, so that we can 
 in fact verify that the ORM reads the client_id column correctly.

 Your issue exists from SQLAlchemy 0.7.8 and backwards, and was fixed as of 
 0.7.9, (we're up to 0.7.10 as well as 0.8 betas).  0.7.9 had many bug fixes 
 for the CTE feature as it had only been introduced in 0.7.6.

 Script is attached.


 On Feb 27, 2013, at 4:11 PM, Michael Bayer mike...@zzzcomputing.com wrote:


 On Feb 27, 2013, at 3:12 PM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 Thank you for your prompt answer. I will try to create a working example 
 that
 demonstrates the issue. Though it will take me a couple of days, maybe 
 weeks (my
 regular work followed by a vacation).

 I have another problem. I rephrased the SQL, because postgres's planner had
 issues with EXISTS and thought it was a great idea to perform full scans of
 several huge tables in order to return several hundred result rows.
 Enter CTEs...

 Short-short intro:
 * Client is the joined table inheritance root.
 * PersonalClient and CorporateClient are its subclasses.
 * Partner is a salesman.
 * Deal is a contract signed between salesman(s) and client(s).
 * ImportedClientShare and ClientShare are M:N relationships between 
 clients and
 deals.
 * ImportedPartnerShare and PartnerShare are M:N relationships between 
 partners
 and deals.
 * We import deals from an external DB. Those are called imported. 
 Imported deal
 has external_id and Imported*Share apply.
 * However, a user of our system (a partner) can create a new deal 
 locally. Such
 a deal does not have an external id (yet) and local *Share apply to it.

 The following code should return all clients of a given partner via
 ImportedClientShare or via ClientShare:

   q1 = session.query(ImportedClientShare.client_id.label('client_id'))
   q1 = q1.join(ImportedPartnerShare, ImportedClientShare.deal_id ==
 ImportedPartnerShare.deal_id)
   q1 = q1.join(Deal, ImportedClientShare.deal_id == Deal.id)
   q1 = q1.filter(
   ImportedPartnerShare.partner_id == partner_id, # input argument
   Deal.external_id != None,
   )
   q2 = session.query(ClientShare.client_id.label('client_id'))
   q2 = q2.join(PartnerShare, ClientShare.deal_id == 
 PartnerShare.deal_id)
   q2 = q2.join(Deal, ClientShare.deal_id == Deal.id)
   q2 = q2

Re: [sqlalchemy] [Q] SA 0.7.10 - issues with union, with_polymorphic, joinedload / subqueryload

2013-02-28 Thread Ladislav Lenart
Hello again, Michael.

Just a follow-up to my previous post. The following code does not work as I
would like:

 q = session.query(Client, PersonalClient)
 q = q.join(PersonalClient, Client.id == PersonalClient.id)
 if eager:
     q = q.options(
         joinedload_all(PersonalClient.data, PersonalData.address),
     )

It generates inner subqueries for the joins, which I don't want.

The following does not work either:

q = session.query(Client).with_polymorphic([PersonalClient])

It generates outer joins instead of inner joins.

BUT the following DOES the right thing and is by far the simplest and most 
elegant:

q = session.query(PersonalClient)
# I can reference Client columns via PersonalClient (e.g.
PersonalClient.inherited_column) in filter and such.

I wasted almost a whole day figuring this out. I have no idea why I didn't try it
(much) sooner. I guess I was stuck on with_polymorphic() and did not expect
that PersonalClient would magically join to Client as well.
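That behaviour can be shown with a minimal runnable sketch (illustrative model using the modern declarative API, not the thread's real schema): querying the joined-table subclass directly emits the join to the base table automatically, so inherited columns are usable in filters without with_polymorphic():

```python
# Minimal sketch: querying the joined-table-inheritance subclass joins
# to the base table automatically (illustrative model).
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Client(Base):
    __tablename__ = 'client'
    id = Column(Integer, primary_key=True)
    type = Column(String)
    external_id = Column(String)  # an inherited column we filter on below
    __mapper_args__ = {'polymorphic_on': type, 'polymorphic_identity': 'client'}

class PersonalClient(Client):
    __tablename__ = 'personal_client'
    id = Column(Integer, ForeignKey('client.id'), primary_key=True)
    nickname = Column(String)
    __mapper_args__ = {'polymorphic_identity': 'personal'}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add(PersonalClient(external_id='X1', nickname='joe'))
session.commit()

# Client.external_id is reachable through PersonalClient; SQLAlchemy
# joins personal_client to client behind the scenes.
pc = session.query(PersonalClient).filter(
    PersonalClient.external_id == 'X1').one()
print(pc.nickname)
```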


Thank you,

Ladislav Lenart



On 28.2.2013 11:48, Ladislav Lenart wrote:
 Hello.
 
 I have new insights / a new bug to report.
 
 Even when I reformat the query like this:
 
 q = session.query(Client, PersonalClient, CorporateClient)
 q = q.outerjoin(PersonalClient, Client.id == PersonalClient.id)
 q = q.outerjoin(CorporateClient, Client.id == CorporateClient.id)
 if eager:
     q = q.options(
         joinedload_all(PersonalClient.data, PersonalData.address),
         joinedload_all(CorporateClient.data, CorporateData.address),
     )
 
 joinedload_all / subqueryload_all still do not work.
 
 The only thing that works is to separate the query into two completely
 independent ones:
 
 q = session.query(Client, PersonalClient)
 q = q.join(PersonalClient, Client.id == PersonalClient.id)
 if eager:
     q = q.options(
         joinedload_all(PersonalClient.data, PersonalData.address),
     )
 
 This will also be my solution for the foreseeable future.
 
 
 Thank you,
 
 Ladislav Lenart
 
 
 On 28.2.2013 11:29, Ladislav Lenart wrote:
 Hello again.

 I have successfully installed SA 0.7.10 and the query as-is works. However
 joinedload_all / subqueryload_all options on the query do NOT work. I have
 slightly extended your cte_example which now demonstrates the issues.
 joinedload_all does not crash but there is undesired sql activity after the 
 main
 query which renders the joinedload useless. subqueryload_all crashes.

 I would be glad for any words of advice or idea(s) for possible 
 workaround(s),

 Ladislav Lenart


 On 28.2.2013 10:26, Ladislav Lenart wrote:
 Thank you very much for your valuable time, Michael!

 Your example code seems correct. The only differences I found are:
 * Missing FK on ImportedPartnerShare.partner_id:
 class ImportedPartnerShare(Base):
 deal_id = Column(Integer, ForeignKey('deal.id'))
 partner_id = Column(Integer, ForeignKey('partner.id')) # ForeignKey() is
 missing in your example script
 * All FKs in the example should have nullable=False and ondelete='CASCADE'.

 But I suppose none of this makes any difference. As you wrote and confirmed,
 this issue (and many others) was resolved in SA 0.7.9. I have just verified 
 that
 I am using SA 0.7.8 at the moment (version from debian distro). I apologize 
 for
 the erroneous subject. I did not check it when I wrote that line, I just assumed.


 Thank you again,

 Ladislav Lenart


 On 28.2.2013 04:12, Michael Bayer wrote:
 OK, I've reconstructed mappings which correspond directly to your Query as 
 given, and it produces the identical SQL.  I've inserted a bunch of rows 
 into all the tables so that a polymorphic result comes back, so that we 
 can in fact verify that the ORM reads the client_id column correctly.

 Your issue exists from SQLAlchemy 0.7.8 and backwards, and was fixed as of 
 0.7.9, (we're up to 0.7.10 as well as 0.8 betas).  0.7.9 had many bug 
 fixes for the CTE feature as it had only been introduced in 0.7.6.

 Script is attached.


 On Feb 27, 2013, at 4:11 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:


 On Feb 27, 2013, at 3:12 PM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 Thank you for your prompt answer. I will try to create a working example 
 that
 demonstrates the issue. Though it will take me a couple of days, maybe 
 weeks (my
 regular work followed by a vacation).

 I have another problem. I rephrased the SQL, because postgres's planner had
 issues with EXISTS and thought it was a great idea to perform full scans of
 several huge tables in order to return several hundred result rows.
 Enter CTEs...

 Short-short intro:
 * Client is the joined table inheritance root.
 * PersonalClient and CorporateClient are its subclasses.
 * Partner is a salesman.
 * Deal is a contract signed between salesman(s) and client(s).
 * ImportedClientShare and ClientShare

Re: [sqlalchemy] [Q] SA 0.7.10 - issues with union, with_polymorphic, joinedload / subqueryload

2013-02-28 Thread Ladislav Lenart
Cool!

I will switch to it once it is officially released.

Thank you,

Ladislav Lenart


On 28.2.2013 19:04, Michael Bayer wrote:
 All of that has been fixed in 0.8.   With 0.8 both versions work fine.
 
 
 On Feb 28, 2013, at 5:29 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello again.

 I have successfully installed SA 0.7.10 and the query as-is works. However
 joinedload_all / subqueryload_all options on the query do NOT work. I have
 slightly extended your cte_example which now demonstrates the issues.
 joinedload_all does not crash but there is undesired sql activity after the 
 main
 query which renders the joinedload useless. subqueryload_all crashes.

 I would be glad for any words of advice or idea(s) for possible 
 workaround(s),

 Ladislav Lenart


 On 28.2.2013 10:26, Ladislav Lenart wrote:
 Thank you very much for your valuable time, Michael!

 Your example code seems correct. The only differences I found are:
 * Missing FK on ImportedPartnerShare.partner_id:
 class ImportedPartnerShare(Base):
deal_id = Column(Integer, ForeignKey('deal.id'))
partner_id = Column(Integer, ForeignKey('partner.id')) # ForeignKey() is
 missing in your example script
 * All FKs in the example should have nullable=False and ondelete='CASCADE'.

 But I suppose none of this makes any difference. As you wrote and confirmed,
 this issue (and many others) was resolved in SA 0.7.9. I have just verified 
 that
 I am using SA 0.7.8 at the moment (version from debian distro). I apologize 
 for
 the erroneous subject. I did not check it when I wrote that line, I just assumed.


 Thank you again,

 Ladislav Lenart


 On 28.2.2013 04:12, Michael Bayer wrote:
 OK, I've reconstructed mappings which correspond directly to your Query as 
 given, and it produces the identical SQL.  I've inserted a bunch of rows 
 into all the tables so that a polymorphic result comes back, so that we 
 can in fact verify that the ORM reads the client_id column correctly.

 Your issue exists from SQLAlchemy 0.7.8 and backwards, and was fixed as of 
 0.7.9, (we're up to 0.7.10 as well as 0.8 betas).  0.7.9 had many bug 
 fixes for the CTE feature as it had only been introduced in 0.7.6.

 Script is attached.


 On Feb 27, 2013, at 4:11 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:


 On Feb 27, 2013, at 3:12 PM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 Thank you for your prompt answer. I will try to create a working example 
 that
 demonstrates the issue. Though it will take me a couple of days, maybe 
 weeks (my
 regular work followed by a vacation).

 I have another problem. I rephrased the SQL, because postgres's planner had
 issues with EXISTS and thought it was a great idea to perform full scans of
 several huge tables in order to return several hundred result rows.
 Enter CTEs...

 Short-short intro:
 * Client is the joined table inheritance root.
 * PersonalClient and CorporateClient are its subclasses.
 * Partner is a salesman.
 * Deal is a contract signed between salesman(s) and client(s).
 * ImportedClientShare and ClientShare are M:N relationships between 
 clients and
 deals.
 * ImportedPartnerShare and PartnerShare are M:N relationships between 
 partners
 and deals.
 * We import deals from an external DB. Those are called imported. 
 Imported deal
 has external_id and Imported*Share apply.
 * However, a user of our system (a partner) can create a new deal 
 locally. Such
 a deal does not have an external id (yet) and local *Share apply to it.

 The following code should return all clients of a given partner via
 ImportedClientShare or via ClientShare:

  q1 = session.query(ImportedClientShare.client_id.label('client_id'))
  q1 = q1.join(ImportedPartnerShare, ImportedClientShare.deal_id ==
 ImportedPartnerShare.deal_id)
  q1 = q1.join(Deal, ImportedClientShare.deal_id == Deal.id)
  q1 = q1.filter(
  ImportedPartnerShare.partner_id == partner_id, # input argument
  Deal.external_id != None,
  )
  q2 = session.query(ClientShare.client_id.label('client_id'))
  q2 = q2.join(PartnerShare, ClientShare.deal_id == 
 PartnerShare.deal_id)
  q2 = q2.join(Deal, ClientShare.deal_id == Deal.id)
  q2 = q2.filter(
  PartnerShare.partner_id == partner_id, # input argument
  Deal.external_id == None,
  )
  client_ids = q1.union(q2).cte('client_ids')
  q = session.query(Client).with_polymorphic([PersonalClient,
 CorporateClient])
  q = q.join(client_ids, Client.id == client_ids.c.client_id)

 NoSuchColumnError: Could not locate column in row for column 
 'client.id'

 I also tried it without with_polymorphic() and the result is the same.

 Do you have any idea what is going on / what am I doing wrong and how I 
 can fix
 this?

 Again, this is a very complex query, even more complex than the previous 
 one, and it boils down to limits in what the ORM can currently handle.  
 It's probably a bug, though there may be workarounds that allow

[sqlalchemy] [Q] SA 0.7.9 - missing from when using union_all and joinedload_all

2013-02-27 Thread Ladislav Lenart
Hello.

Suppose I have the following mapped classes, A and B, that have two distinct M:N
relationships, AB1 and AB2. If A.x is null, only relations in AB1 apply. If it
is not null, only relations in AB2 apply. A also has 1:N relationship to C (one
A can have more Cs). Finally, A is in fact a joined-table inheritance superclass
with two subclasses, A1 and A2. I want to select all As for a given B via AB1 or
AB2. I also want to prefetch A.cs of the results using joinedload. I use the
code like this:

q1 = session.query(A).with_polymorphic([A1, A2])
q1 = q1.filter(exists().where(and_(
    A.x == None,
    AB1.a_id == A.id,
    AB1.b_id == b_id, # input argument
)))

q2 = session.query(A).with_polymorphic([A1, A2])
q2 = q2.filter(exists().where(and_(
    A.x != None,
    AB2.a_id == A.id,
    AB2.b_id == b_id, # input argument
)))

q = q1.union_all(q2)
q = q.options(
    joinedload(A.cs),
)

return q

This creates the following SQL:

SELECT 
FROM (
SELECT... -- via AB1
UNION ALL
SELECT ... -- via AB2
) anon_1
LEFT OUTER JOIN c ON c.a_id = a.id -- error line

This fails with a "missing FROM-clause entry for table a" error. The column a.id is
actually anon_1.a_id.

What am I doing wrong? / How can I fix this?


Thank you in advance,

Ladislav Lenart

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] [Q] SA 0.7.9 - missing from when using union_all and joinedload_all

2013-02-27 Thread Ladislav Lenart
Hello.

Thank you for your prompt answer. I will try to create a working example that
demonstrates the issue. Though it will take me a couple of days, maybe weeks (my
regular work followed by a vacation).

I have another problem. I rephrased the SQL, because postgres's planner had
issues with EXISTS and thought it is a great idea to perform full scans of
several huge tables in order to return several hundreds result rows. Enter 
CTEs...

Short-short intro:
* Client is the joined table inheritance root.
* PersonalClient and CorporateClient are its subclasses.
* Partner is a salesman.
* Deal is a contract signed between salesman(s) and client(s).
* ImportedClientShare and ClientShare are M:N relationships between clients and
deals.
* ImportedPartnerShare and PartnerShare are M:N relationships between partners
and deals.
* We import deals from an external DB. Those are called imported. Imported deal
has external_id and Imported*Share apply.
* However, a user of our system (a partner) can create a new deal locally. Such
a deal does not have an external id (yet) and local *Share apply to it.

The following code should return all clients of a given partner via
ImportedClientShare or via ClientShare:

q1 = session.query(ImportedClientShare.client_id.label('client_id'))
q1 = q1.join(ImportedPartnerShare, ImportedClientShare.deal_id ==
    ImportedPartnerShare.deal_id)
q1 = q1.join(Deal, ImportedClientShare.deal_id == Deal.id)
q1 = q1.filter(
    ImportedPartnerShare.partner_id == partner_id, # input argument
    Deal.external_id != None,
)
q2 = session.query(ClientShare.client_id.label('client_id'))
q2 = q2.join(PartnerShare, ClientShare.deal_id == PartnerShare.deal_id)
q2 = q2.join(Deal, ClientShare.deal_id == Deal.id)
q2 = q2.filter(
    PartnerShare.partner_id == partner_id, # input argument
    Deal.external_id == None,
)
client_ids = q1.union(q2).cte('client_ids')
q = session.query(Client).with_polymorphic([PersonalClient,
    CorporateClient])
q = q.join(client_ids, Client.id == client_ids.c.client_id)


It produces the following SQL:

WITH client_ids AS (
SELECT anon_1.client_id AS client_id
FROM (
SELECT imported_client_share.client_id AS client_id
FROM
imported_client_share
JOIN imported_partner_share ON 
imported_client_share.deal_id =
imported_partner_share.deal_id
JOIN deal ON imported_client_share.deal_id = deal.id
WHERE
imported_partner_share.partner_id = 610556
AND deal.external_id IS NOT NULL
UNION
SELECT client_share.client_id AS client_id
FROM
client_share
JOIN partner_share ON client_share.deal_id = 
partner_share.deal_id
JOIN deal ON client_share.deal_id = deal.id
WHERE
partner_share.partner_id = 610556
AND deal.external_id IS NULL
) AS anon_1
)
SELECT
client.type AS client_type,
client.id AS client_id,
client.external_id AS client_external_id,
personal_client.id AS personal_client_id,
personal_client.data_id AS personal_client_data_id,
corporate_client.id AS corporate_client_id,
corporate_client.data_id AS corporate_client_data_id
FROM
client
LEFT OUTER JOIN personal_client ON client.id = personal_client.id
LEFT OUTER JOIN corporate_client ON client.id = corporate_client.id
JOIN client_ids ON client.id = client_ids.client_id


The SQL IS correct, returns expected results when executed in pgadmin3 for
example, but SA's ORM layer crashes with the error:

  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2348, in instances
    rows = [process[0](row, None) for row in fetch]
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/mapper.py", line 2011, in _instance
    return _instance(row, result)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/mapper.py", line 2024, in _instance
    tuple([row[column] for column in pk_cols])
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2835, in _key_fallback
    expression._string_or_unprintable(key))
NoSuchColumnError: Could not locate column in row for column 'client.id'

I also tried it without with_polymorphic() and the result is the same.

Do you have any idea what is going on / what am I doing wrong and how I can fix
this?
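For reference, the union-into-CTE shape itself maps cleanly; a minimal runnable sketch of the same pattern, reduced to plain illustrative tables (ShareA/ShareB stand in for the two share tables, no inheritance involved):

```python
# Minimal runnable sketch of the union-into-CTE pattern: two queries
# producing client ids are UNIONed, wrapped in a named CTE, and joined
# back against Client (illustrative tables, not the thread's schema).
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Client(Base):
    __tablename__ = 'client'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class ShareA(Base):
    __tablename__ = 'share_a'
    id = Column(Integer, primary_key=True)
    client_id = Column(Integer)

class ShareB(Base):
    __tablename__ = 'share_b'
    id = Column(Integer, primary_key=True)
    client_id = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Client(id=1, name='c1'), Client(id=2, name='c2'),
                 ShareA(client_id=1), ShareB(client_id=2)])
session.commit()

q1 = session.query(ShareA.client_id.label('client_id'))
q2 = session.query(ShareB.client_id.label('client_id'))
client_ids = q1.union(q2).cte('client_ids')

q = session.query(Client).join(client_ids, Client.id == client_ids.c.client_id)
names = sorted(c.name for c in q)
print(names)
```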


Thanks,

Ladislav Lenart


On 27.2.2013 19:05, Michael Bayer wrote:
 
 On Feb 27, 2013, at 12:21 PM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 Suppose I have the following mapped classes, A and B, that have two distinct 
 M:N
 relationships, AB1 and AB2. If A.x is null, only relations in AB1

Re: [sqlalchemy] [Q] Move elements in one-to-many relationship to a new owner

2012-11-13 Thread Ladislav Lenart
Hello.

I was unable to reproduce the problem on your test example. I THINK it does
exactly what my problematic code does. It seems that either the problem is more
elaborate (but I don't see it) or I have a bug somewhere in my code. However my
simple workaround works, so I consider the problem solved, at least for now. I
will try to dig deeper when I have more time.


Ladislav Lenart


On 9.11.2012 23:22, Michael Bayer wrote:
 there's no mapping or code example provided here, which makes a condition 
 like this extremely difficult to diagnose, however I've prepared a test 
 script that takes its best guess as to configuration, that is, a 
 client_products collection with a client backref, and a cascade of "all,
 delete-orphan" on the one-to-many which maximizes the conditions under which 
 a ClientProduct might be deleted.   The test case below does not exhibit this 
 behavior; please alter it such that it reproduces the condition you are 
 seeing so that we can diagnose this fully and ensure you aren't coming across 
 any new SQLAlchemy bugs:
 
 
 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base
 
 Base = declarative_base()
 
 class Client(Base):
     __tablename__ = 'a'

     id = Column(Integer, primary_key=True)
     data = Column(String)
     client_products = relationship('ClientProduct',
         backref='client',
         cascade='all, delete-orphan')


 class ClientProduct(Base):
     __tablename__ = 'b'

     id = Column(Integer, primary_key=True)
     data = Column(String)
     a_id = Column(Integer, ForeignKey('a.id'))

 e = create_engine('sqlite://', echo=True)
 
 Base.metadata.create_all(e)
 
 s = Session(e)
 
 
 s.add_all([
     Client(data='c1', client_products=[
         ClientProduct(data='cp1'),
         ClientProduct(data='cp2'),
         ClientProduct(data='cp3'),
     ]),
     Client(data='c2', client_products=[
         ClientProduct(data='cp4'),
         ClientProduct(data='cp5'),
         ClientProduct(data='cp6'),
     ]),
 ])
 s.commit()
 
 
 c1 = s.query(Client).filter_by(data='c1').one()
 c2 = s.query(Client).filter_by(data='c2').one()
 
 cp1, cp2 = c1.client_products[0:2]
 
 cp1.client = c2
 cp2.client = c2
 s.delete(c1)
 s.commit()
 
 assert s.query(Client.data).all() == [('c2',)]
 assert s.query(ClientProduct.data).all() == [('cp1', ), ('cp2', ), ('cp4', ), 
 ('cp5', ), ('cp6', )]
 
 
 
 
 On Nov 8, 2012, at 11:30 AM, Ladislav Lenart wrote:
 
 Hello.

 I have a client which has a collection of ClientProduct-s (ClientProduct has 
 a
 FK to Client). The following code:

# Move some client products from a duplicate to the original.
# Remove duplicate clients afterwards (in cascade).
#
# Note that client_map is a dict from a duplicate to its original.
for each_duplicate, each_client in client_map.iteritems():
    for each_cp in each_duplicate.client_products:
        if some_condition(each_cp):
            each_cp.client = each_client
    session.delete(each_duplicate)
session.flush()

 deletes a client product that was moved from each_duplicate to each_client in
 the inner loop. Why? What can I do to prevent it?


 Thank you in advance,

 Ladislav Lenart




Re: [sqlalchemy] [Q] Move elements in one-to-many relationship to a new owner

2012-11-09 Thread Ladislav Lenart
Hello.

I managed to solve my problem. I use session.execute(...) instead of
session.delete(), because the second form deletes more things than it should.
The working code is:

# Move some client products from a duplicate to the original.
# Remove duplicate clients afterwards (in cascade).
#
# Note that client_map is a dict from a duplicate to its original.
for each_duplicate, each_client in client_map.iteritems():
    for each_cp in each_duplicate.client_products:
        if some_condition(each_cp):
            each_cp.client = each_client
session.flush()
table = Client.__table__
duplicate_ids = [each.id for each in client_map.iterkeys()]
q = table.delete().where(table.c.id.in_(duplicate_ids))
session.execute(q)

I still want to know whether this is expected behaviour and if so why:
 1. ClientProduct has FK to Client. Client has relationship client_products with
ON DELETE CASCADE. Thus if a client is deleted, all its client_products are
deleted too.
 2. Suppose I have a client with two client_products.
 3. I move one of them to a different client: client.client_products[0].client =
other_client.
 4. I delete the client: session.delete(client)
 5. session.deleted now contains TWO ClientProduct instances instead of ONE even
though I moved one of them to a completely different client.
 6. Why?! Can I do anything to prevent this, e.g. insert call to
session.flush(), session.expunge(), session.refresh() or some such somewhere?
All my attempts with session.flush() failed (had no effect).


Thank you,

Ladislav Lenart


On 8.11.2012 17:30, Ladislav Lenart wrote:
 Hello.
 
 I have a client which has a collection of ClientProduct-s (ClientProduct has a
 FK to Client). The following code:
 
 # Move some client products from a duplicate to the original.
 # Remove duplicate clients afterwards (in cascade).
 #
 # Note that client_map is a dict from a duplicate to its original.
 for each_duplicate, each_client in client_map.iteritems():
     for each_cp in each_duplicate.client_products:
         if some_condition(each_cp):
             each_cp.client = each_client
     session.delete(each_duplicate)
 session.flush()
 
 deletes a client product that was moved from each_duplicate to each_client in
 the inner loop. Why? What can I do to prevent it?
 
 
 Thank you in advance,
 
 Ladislav Lenart




Re: [sqlalchemy] [Q] Transform a select part of a query

2012-10-25 Thread Ladislav Lenart
Cool!

That is what I was looking for.
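A minimal self-contained sketch of the with_entities() transformation (in-memory SQLite; the Person model here is illustrative, not from the thread):

```python
from sqlalchemy import Column, Integer, String, create_engine, func
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    age = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Person(name="a", age=30), Person(name="b", age=10)])
session.flush()

def people_older_than(age):
    return session.query(Person).order_by(Person.name).filter(Person.age > age)

# with_entities() swaps the SELECT list while keeping the filters;
# order_by(None) drops the ordering, which is irrelevant for a count.
count = (people_older_than(21)
         .with_entities(func.count(Person.id))
         .order_by(None)
         .scalar())
print(count)  # 1
```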

Thank you,

Ladislav Lenart


On 24.10.2012 20:00, Michael Bayer wrote:
 with_entities() will give you this:
 
 http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html#sqlalchemy.orm.query.Query.with_entities
 
 
 On Oct 24, 2012, at 11:21 AM, Ladislav Lenart wrote:
 
 Oh, you are right of course. This works, however I would still like to 
 transform
 the query programmatically (mainly out of curiosity) from

q = session.query(Person).order_by(Person.name)
q = q.filter(Person.age > age)
return q

 to

q = session.query(func.count(Person.id))
q = q.filter(Person.age > age)
return q

 (I am not sure if the order_by part can remain or not.)

 Is this possible somehow? I am pretty sure I read something along these lines
 but cannot find it anywhere.


 Ladislav Lenart


 On 24.10.2012 17:10, Simon King wrote:
 On Wed, Oct 24, 2012 at 3:59 PM, Ladislav Lenart lenart...@volny.cz wrote:
 Hello.

 Suppose I have the following query:

 def people_older_than(age):
q = session.query(Person).order_by(Person.name)
 q = q.filter(Person.age > age)
return q

 It returns a subset of Person instances. How can I augment the query so it
 returns only their count? I.e.:

 def number_of_people_older_than(age):
     q = people_older_than(21)
     q_count = q  # Apply some magic here.
     return q_count.scalar()


 Thank you in advance,

 Ladislav Lenart


 I don't think any magic is required. Does this work for you:

 def number_of_people_older_than(age):
     q = people_older_than(age)
     return q.count()

 See also http://docs.sqlalchemy.org/en/rel_0_7/orm/tutorial.html#counting

 Simon


 





Re: [sqlalchemy] Problem writing a query with distinct

2012-10-24 Thread Ladislav Lenart
Hello.

I haven't used distinct() yet, but if you modify your query:

 query = dbSession.query( Candidate )\
 .join( Vote )\
 .filter(\
 model.core.Candidate.id == model.core.Vote.candidate_id ,
 model.core.Vote.yay == True ,
 )\
 .order_by(\
 model.core.Vote.timestamp.desc()
 )\
 .limit(limit)\
 .offset(offset)\
 .all()

it will return instances of the Candidate class, not a list of their ids. In the
process, deduplication is automatic, i.e. the result will contain each candidate
at most once. However, I am not sure whether this works as expected in
combination with limit and offset.
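If exact items-per-page matters, one alternative sketch (illustrative models matching the thread, run against in-memory SQLite; not from the original reply) is to group on the candidate id so deduplication happens before LIMIT/OFFSET are applied:

```python
import datetime

from sqlalchemy import (Boolean, Column, DateTime, ForeignKey, Integer,
                        String, create_engine, func)
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Candidate(Base):
    __tablename__ = "candidate"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Vote(Base):
    __tablename__ = "vote"
    id = Column(Integer, primary_key=True)
    candidate_id = Column(Integer, ForeignKey("candidate.id"))
    yay = Column(Boolean)
    timestamp = Column(DateTime)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
now = datetime.datetime(2012, 10, 24)
session.add_all([
    Candidate(id=1, name="a"), Candidate(id=2, name="b"),
    Vote(candidate_id=1, yay=True, timestamp=now),
    Vote(candidate_id=1, yay=True, timestamp=now),   # duplicate yay vote
    Vote(candidate_id=2, yay=False, timestamp=now),
])
session.flush()

# Grouping collapses duplicate votes *before* LIMIT/OFFSET, so each page
# really contains `limit` distinct candidates.
q = (session.query(Vote.candidate_id)
     .filter(Vote.yay == True)
     .group_by(Vote.candidate_id)
     .order_by(func.max(Vote.timestamp).desc()))
page = [cid for (cid,) in q.limit(10).offset(0)]
print(page)  # [1]
```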


HTH,

Ladislav Lenart


On 24.10.2012 00:59, Jonathan Vanasco wrote:
 I'm trying to write a query against Postgres and it's driving me a bit
 mad.  Hoping someone here can help.
 
 I'm make the example in something topical... voting!
 
 Given:
Candidate
id
name
Vote
id
candidate_id (fkey on Candidate)
yay (bool)
 
 With this structure, I'm trying to find the distinct Candidates IDs
 that have voted 'Yay'
 
 I thought i would accomplish it by querying the Candidate.id, joining
 the Vote and filtering/sorting on that.
 
 That sort of works.
 
 The problem with the query, is that it seems to be giving me distinct
 votes -- ie, distinct on the join of the 2 tables
 
 Is there a way to get a distinct limit on the Candidate.id field
 only ?
 
 I tried passing Candidate.id into the distinct() option, but then
 Postgres wants me to fill it with all the different query params that
 are used - which is normal in raw postgres, but a little scary in an
 ORM where you're not necessarily concerned with ever knowing/seeing
 the raw sql
 
 anyone have an idea on how to proceed ?
 
 ( fwiw, my interim fix is to just turn the returned ids into a set in
 python , that works but gives unreliable items-per-page in
 pagination )
 
 
 query = dbSession.query( Candidate.id )\
 .distinct()\
 .join( Vote )\
 .filter(\
 model.core.Candidate.id == model.core.Vote.candidate_id ,
 model.core.Vote.yay == True ,
 )\
 .order_by(\
 model.core.Vote.timestamp.desc()
 )\
 .limit(limit)\
 .offset(offset)\
 .all()




[sqlalchemy] [Q] Transform a select part of a query

2012-10-24 Thread Ladislav Lenart
Hello.

Suppose I have the following query:

def people_older_than(age):
    q = session.query(Person).order_by(Person.name)
    q = q.filter(Person.age > age)
    return q

It returns a subset of Person instances. How can I augment the query so it
returns only their count? I.e.:

def number_of_people_older_than(age):
    q = people_older_than(21)
    q_count = q  # Apply some magic here.
    return q_count.scalar()


Thank you in advance,

Ladislav Lenart




Re: [sqlalchemy] [Q] Transform a select part of a query

2012-10-24 Thread Ladislav Lenart
Oh, you are right of course. This works, however I would still like to transform
the query programmatically (mainly out of curiosity) from

 q = session.query(Person).order_by(Person.name)
 q = q.filter(Person.age > age)
 return q

to

 q = session.query(func.count(Person.id))
 q = q.filter(Person.age > age)
 return q

(I am not sure if the order_by part can remain or not.)

Is this possible somehow? I am pretty sure I read something along these lines
but cannot find it anywhere.


Ladislav Lenart


On 24.10.2012 17:10, Simon King wrote:
 On Wed, Oct 24, 2012 at 3:59 PM, Ladislav Lenart lenart...@volny.cz wrote:
 Hello.

 Suppose I have the following query:

 def people_older_than(age):
 q = session.query(Person).order_by(Person.name)
  q = q.filter(Person.age > age)
 return q

 It returns a subset of Person instances. How can I augment the query so it
 returns only their count? I.e.:

  def number_of_people_older_than(age):
      q = people_older_than(21)
      q_count = q  # Apply some magic here.
      return q_count.scalar()


 Thank you in advance,

 Ladislav Lenart

 
 I don't think any magic is required. Does this work for you:
 
  def number_of_people_older_than(age):
      q = people_older_than(age)
      return q.count()
 
 See also http://docs.sqlalchemy.org/en/rel_0_7/orm/tutorial.html#counting
 
 Simon




[sqlalchemy] [Q] SqlAlchemy via pyodbc to MSSQL from Linux

2012-10-10 Thread Ladislav Lenart
Hello Michael.

Just a question. When I first posted about my problem of how to connect to MSSQL
from Linux/Debian, you mentioned that you are running similar setup in
production, or at least that is how I understood it. Am I correct? If so, is it
similar to this:
 * Connect from Linux/Debian
 * To MSSQL
 * Via pyodbc
 * using freetds
?

Are there some differences?

Thank you,

Ladislav Lenart




Re: [sqlalchemy] SOLVED [Q] SQLSoup and PYODBC from Linux

2012-10-10 Thread Ladislav Lenart
Hello!

I finally managed to solve the problem (with the great help of my colleagues)!
The culprit: python-pyodbc package in Debian/testing is nearly THREE years old
now (version 2.1.7). I removed it and easy-installed the version 3.0.6 which
does not have the encoding bug anymore. Now everything works like a charm,
including SqlSoup. I even receive unicode objects directly for both VARCHAR and
NVARCHAR columns!


For future reference: If you want to connect to MSSQL server from a Linux
(Debian) machine, you need to...

[1] Install unixodbc:

aptitude install unixodbc

I also have unixodbc-dev installed, but I am not sure if it is necessary.


[2] Install freetds and the driver:

aptitude install freetds-bin tdsodbc


[3] Edit /etc/odbcinst.ini (note: DO NOT indent lines in the odbc config 
files!):

[FreeTDS]
# See dpkg -L tdsodbc | grep libtdsodbc
Driver = /path/to/libtdsodbc.so
UsageCount = 1


[4] Edit /etc/odbc.ini:

[ODBC Data Sources]
zfp = test

[zfp]
Driver = FreeTDS # The section name from /etc/odbcinst.ini
Description = test
Servername = zfp # The section name from /etc/freetds/freetds.conf (see below)
# Do NOT use option 'Server'.
TDS_Version = 8.0   # Important


[5] Edit /etc/freetds/freetds.conf:

[zfp]
host = 10.230.128.140
port = 1433
asa database = ZFP_CRM  # Database name
tds version = 8.0   # Important
client charset = UTF-8  # Important
text size = 5000


[6] Install python2.7-dev which is needed for the next step, because pyodbc
compiles some stuff and thus needs header files:

aptitude install python2.7-dev


[7] Install pyodbc manually (i.e. without aptitude):

easy_install pyodbc


[8] Verify that freetds works (zfp is the section name in
/etc/freetds/freetds.conf):

tsql -S zfp -U username

Try to select some strings to see that they display correctly.


[9] Verify that unixodbc works (zfp here is the section name in /etc/odbc.ini):

isql zfp username password

Again, try to select some strings to see that they display correctly.


[10] Verify that pyodbc works:

import pyodbc
cnxn = pyodbc.connect("DSN=zfp;UID=username;PWD=password", unicode_results=True)
cursor = cnxn.cursor()
# Select some VARCHAR column.
(x,) = cursor.execute("select ...").fetchone()
# Should print a meaningful unicode representation of the result string.
print repr(x)
# Repeat the same for NVARCHAR column. The result should be the same, i.e.
# proper unicode representation).
(x,) = cursor.execute("select ...").fetchone()
print repr(x)


[11] Connect via SQLAlchemy:

from sqlalchemy.engine import create_engine
from sqlalchemy.ext.sqlsoup import SqlSoup

# zfp here is the section name in /etc/odbc.ini
engine = create_engine(
    "mssql+pyodbc://username:password@zfp",
    convert_unicode=True,
    echo='debug'
)
db = SqlSoup(engine)
# Same as in step 10
db.execute(...)
# OR
x = db.some_table.filter(...).one()


Enjoy!

Ladislav Lenart





Re: [sqlalchemy] [Q] SqlAlchemy via pyodbc to MSSQL from Linux

2012-10-10 Thread Ladislav Lenart
Hello.

Thank you for the information. I am using freetds 0.91 without issues. But note
that on Debian/testing, the driver file libtdsodbc.so is NOT part of any
freetds-* package. It is part of the package tdsodbc. See my previous e-mail for
detailed installation instructions.

Ladislav Lenart


On 10.10.2012 17:17, Michael Bayer wrote:
 
 On Oct 10, 2012, at 6:01 AM, Ladislav Lenart wrote:
 
 Hello Michael.

 Just a question. When I first posted about my problem of how to connect to 
 MSSQL
 from Linux/Debian, you mentioned that you are running similar setup in
 production, or at least that is how I understood it. Am I correct? If so, is 
 it
 similar to this:
 * Connect from Linux/Debian
 * To MSSQL
 * Via pyodbc
 * using freetds
 ?

 Are there some differences?
 
 Production is CentOs but otherwise that's what we do, yup.  I also can run 
 unicode-aware tests against a SQL Server instance from a local fedora 
 instance as well as OSX.  
 
 I will note that we have stayed on FreeTDS 0.82 as opposed to 0.91.  I have 
 tested 0.91 on linux and we've repaired some issues for people using 0.91, 
 however from OSX there are 0.91 unicode issues I haven't been able to 
 overcome.We've kept our CentOS production environment on 0.82 as well 
 just because it already works.




Re: [sqlalchemy] [Q] SQLSoup and pymssql from Linux

2012-10-09 Thread Ladislav Lenart
Hello.

No, I was not able to connect via pymssql. Furthermore, I would like to use
pyodbc if it is the preferred way. I just did not know that when I started with
pymssql.

It really is a misconfigured character encoding issue in pyodbc / freetds on my
part. I am just clueless as to what should I set to what to make it all work.

The code:

cnxn = pyodbc.connect('DRIVER={FreeTDS};SERVER=10.230.128.140;PORT=1433;DATABASE=ZFP_CRM;UID=efractal;PWD=efR@cZFP13;TDS_VERSION=8.0')
cursor = cnxn.cursor()
cursor.execute("select prijmeni from osoba where id_osoba = 462493")
row = cursor.fetchone()
print row

prints:

('Ne?asov', )

The correct value is 'Nečasová'. Ideally I would like to see the following:

(u'Nečasová', )

On a side note, the returned value is one character shorter. I would have expected:

('Ne?asov?', )


However, when I connect via tsql, it works correctly:

tsql -S zfp -U efractal
Password:
locale is cs_CZ.UTF-8
locale charset is UTF-8
using default charset utf8
1 select prijmeni from osoba where id_osoba = 462493
2 go
prijmeni
Nečasová
(1 row affected)
1


I guess I am on my own now. Anyway, thank you for your kind assistance.

Ladislav Lenart


On 8.10.2012 19:06, Michael Bayer wrote:
 if you didnt have this problem with pymssql then please apply the patch I 
 sent previously.
 
 However, I'm going to bet the problem remains as it seems something is not 
 right with how your database and/or client is configured.  I'm not familiar 
 with the encoding pattern seen in your information_schema queries.  you may 
 have to reproduce the issue using plain SELECT statements against 
 information_schema and then seek outside help.
 
 
 
 diff -r 17cab4ad55d5 lib/sqlalchemy/dialects/mssql/pymssql.py
 --- a/lib/sqlalchemy/dialects/mssql/pymssql.pyThu Oct 04 18:26:55 
 2012 -0400
 +++ b/lib/sqlalchemy/dialects/mssql/pymssql.pyFri Oct 05 18:46:01 
 2012 -0400
 @@ -80,7 +80,7 @@
 def _get_server_version_info(self, connection):
 vers = connection.scalar("select @@version")
 m = re.match(
 -r"Microsoft SQL Server.*? - (\d+).(\d+).(\d+).(\d+)", vers)
 +r"\s*Microsoft SQL Server.*? - (\d+).(\d+).(\d+).(\d+)", vers)
 if m:
 return tuple(int(x) for x in m.group(1, 2, 3, 4))
 else:
 
 On Oct 8, 2012, at 12:10 PM, Ladislav Lenart wrote:
 
 Hello.

 I adjusted the ODBC/FreeTDS configuration according to your suggestions but
 still get the NoSuchTableError: mlm_spol.

 freetds.conf:

 [zfp]
 host = 10.230.128.140
 port = 1433
 tds version = 8.0
 asa database = ZFP_CRM
 client charset = utf8
 text size = 5000


 odbc.ini:

 [ODBC Data Sources]
 zfp = test

 [zfp]
 Driver = /usr/lib/libtdsodbc.so
 Description = test
 Trace = No
 Servername = zfp


 odbcinst.ini (for the sake of completeness):

 [FreeTDS]
 Driver = /usr/lib/libtdsodbc.so
 UsageCount = 2


 The code:

 from sqlalchemy.engine import create_engine
 from sqlalchemy.ext.sqlsoup import SqlSoup

 if __name__ == '__main__':
 engine = create_engine("mssql://efractal:efR@cZFP13@zfp", echo='debug')
db = SqlSoup(engine)
x = db.mlm_spol.fetchone()
print x


 still fails with NoSuchTableError: mlm_spol

 Its debug output is the same as in my previous email which used different
 connect style.

 And the code that uses db.execute('select * from mlm_spol').fetchone() still
 works...

 Any other ideas?

 Ladislav Lenart


 On 8.10.2012 17:45, Michael Bayer wrote:
 your freetds datasource should be configured with CLIENT_CHARSET=utf8 as I 
 illustrated earlier:

 [ms_2005]
 host = 172.16.248.128
 port = 1213
 tds version = 8.0
 client charset = UTF8
 text size = 5000




Re: [sqlalchemy] [Q] SQLSoup and pymssql from Linux

2012-10-09 Thread Ladislav Lenart
Also, this works correctly:

isql -v zfp efractal efR@cZFP13
+---+
| Connected!|
|   |
| sql-statement |
| help [tablename]  |
| quit  |
|   |
+---+
SQL select prijmeni from osoba where id_osoba = 462493
prijmeni


  Nečasová


  SQLRowCount returns 1
1 rows fetched
SQL


This does not:

iusql -v zfp efractal efR@cZFP13
+---+
| Connected!|
|   |
| sql-statement |
| help [tablename]  |
| quit  |
|   |
+---+
SQL select prijmeni from osoba where id_osoba = 462493
prijmeni


  asov�


 SQLRowCount returns 1
1 rows fetched
SQL


Ladislav Lenart


On 9.10.2012 12:03, Ladislav Lenart wrote:
 Hello.
 
 No, I was not able to connect via pymssql. Furthermore, I would like to use
 pyodbc if it is the preferred way. I just did not know that when I started
 with pymssql.
 
 It really is a misconfigured character encoding issue in pyodbc / freetds on 
 my
 part. I am just clueless as to what should I set to what to make it all work.
 
 The code:
 
 cnxn = pyodbc.connect('DRIVER={FreeTDS};SERVER=10.230.128.140;PORT=1433;DATABASE=ZFP_CRM;UID=efractal;PWD=efR@cZFP13;TDS_VERSION=8.0')
 cursor = cnxn.cursor()
 cursor.execute("select prijmeni from osoba where id_osoba = 462493")
 row = cursor.fetchone()
 print row
 
 prints:
 
 ('Ne?asov', )
 
 The correct value is 'Nečasová'. Ideally I would like to see the following:
 
 (u'Nečasová', )
 
 On a side note, the returned value is one character shorter. I would have expected:
 
 ('Ne?asov?', )
 
 
 However, when I connect via tsql, it works correctly:
 
 tsql -S zfp -U efractal
 Password:
 locale is cs_CZ.UTF-8
 locale charset is UTF-8
 using default charset utf8
 1 select prijmeni from osoba where id_osoba = 462493
 2 go
 prijmeni
 Nečasová
 (1 row affected)
 1
 
 
 I guess I am on my own now. Anyway, thank you for your kind assistance.
 
 Ladislav Lenart
 
 
 On 8.10.2012 19:06, Michael Bayer wrote:
 if you didnt have this problem with pymssql then please apply the patch I 
 sent previously.

 However, I'm going to bet the problem remains as it seems something is not 
 right with how your database and/or client is configured.  I'm not familiar 
 with the encoding pattern seen in your information_schema queries.  you may 
 have to reproduce the issue using plain SELECT statements against 
 information_schema and then seek outside help.



 diff -r 17cab4ad55d5 lib/sqlalchemy/dialects/mssql/pymssql.py
 --- a/lib/sqlalchemy/dialects/mssql/pymssql.py   Thu Oct 04 18:26:55 
 2012 -0400
 +++ b/lib/sqlalchemy/dialects/mssql/pymssql.py   Fri Oct 05 18:46:01 
 2012 -0400
 @@ -80,7 +80,7 @@
 def _get_server_version_info(self, connection):
 vers = connection.scalar("select @@version")
 m = re.match(
 -r"Microsoft SQL Server.*? - (\d+).(\d+).(\d+).(\d+)", vers)
 +r"\s*Microsoft SQL Server.*? - (\d+).(\d+).(\d+).(\d+)", vers)
 if m:
 return tuple(int(x) for x in m.group(1, 2, 3, 4))
 else:

 On Oct 8, 2012, at 12:10 PM, Ladislav Lenart wrote:

 Hello.

 I adjusted the ODBC/FreeTDS configuration according to your suggestions but
 still get the NoSuchTableError: mlm_spol.

 freetds.conf:

 [zfp]
 host = 10.230.128.140
 port = 1433
 tds version = 8.0
 asa database = ZFP_CRM
 client charset = utf8
 text size = 5000


 odbc.ini:

 [ODBC Data Sources]
 zfp = test

 [zfp]
 Driver = /usr/lib/libtdsodbc.so
 Description = test
 Trace = No
 Servername = zfp


 odbcinst.ini (for the sake of completeness):

 [FreeTDS]
 Driver = /usr/lib/libtdsodbc.so
 UsageCount = 2


 The code:

 from sqlalchemy.engine import create_engine
 from sqlalchemy.ext.sqlsoup import SqlSoup

 if __name__ == '__main__':
 engine = create_engine("mssql://efractal:efR@cZFP13@zfp", echo='debug')
db = SqlSoup(engine)
x = db.mlm_spol.fetchone()
print x


 still fails with NoSuchTableError: mlm_spol

 Its debug output is the same as in my previous email which used different
 connect style.

 And the code that uses db.execute('select * from mlm_spol').fetchone() still
 works...

 Any other ideas?

 Ladislav Lenart


 On 8.10.2012 17:45, Michael Bayer wrote:
 your freetds datasource should be configured with CLIENT_CHARSET=utf8 as I 
 illustrated earlier:

 [ms_2005]
 host = 172.16.248.128
 port = 1213
 tds version = 8.0
 client charset = UTF8
 text size = 5000


Re: [sqlalchemy] [Q] SQLSoup and pymssql from Linux

2012-10-09 Thread Ladislav Lenart
Hello.

I made some progress. I have client charset in my freetds config file. I am also
certain that the config and the charset is used by pyodbc / freetds combo.
Without it I get 'Ne?...'. With it I get back a str encoded in utf-8. I also
enabled freetds logging where I clearly see two conversions being prepared:

iconv.c:351:preparing iconv for UTF-8 - UCS-2LE conversion
iconv.c:391:preparing iconv for ISO-8859-1 - UCS-2LE conversion

If I comment the client charset line in the config, I see:

iconv.c:391:preparing iconv for ISO-8859-1 - UCS-2LE conversion
iconv.c:391:preparing iconv for ISO-8859-1 - UCS-2LE conversion

I believe the second one has to do with description_encoding which is 'latin-1'
by default (in MSDialect_pyodbc.__init__). However I tried to set it to utf-8,
utf-16le and whatnot without any effect (in debugger).

I also found out that my problem is not in information_schema per se but with
all nvarchar values / columns. For example this

u'\U0073006f\U0062006f'

is actually a garbled string 'osob':

u'\U0073006f\U0062006f'
u'\U006f0073\U006f0062'
u'\u006f\u0073\u006f\u0062' = u'osob'

though I have no idea how and where it comes into existence. Furthermore the
correct value is 'osoba' (a table name). I guess the conversion stripped out the
odd character.
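The garbled code points are consistent with the UTF-16LE bytes being consumed in 4-byte units. A small sketch reproducing the observed values (pure byte arithmetic, no database needed):

```python
import struct

# "osoba" encoded as the wire format the server sends (UCS-2 / UTF-16LE).
raw = "osoba".encode("utf-16-le")   # bytes: 6f 00 73 00 6f 00 62 00 61 00
# A decoder that wrongly consumes 4-byte units pairs up adjacent UTF-16
# code units; the trailing odd unit cannot form a pair and is dropped,
# which is why the final character goes missing.
usable = raw[: len(raw) - len(raw) % 4]
units = [hex(u) for u in struct.unpack("<%dI" % (len(usable) // 4), usable)]
print(units)  # ['0x73006f', '0x62006f']
```

These are exactly the \U0073006f and \U0062006f values seen above.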

Current state of nvarchar processing in my dev env:
tsql OK
isql OK
pyodbc KO


Ladislav Lenart


On 9.10.2012 16:19, Michael Bayer wrote:
 
 On Oct 9, 2012, at 6:03 AM, Ladislav Lenart wrote:
 
 Hello.

 No, I was not able to connect via pymssql. Furthermore, I would like to use
 pyodbc if it is the preferred way. I just did not know that when I started
 with pymssql.

 It really is a misconfigured character encoding issue in pyodbc / freetds on 
 my
 part. I am just clueless as to what should I set to what to make it all work.

 The code:

 cnxn = pyodbc.connect('DRIVER={FreeTDS};SERVER=10.230.128.140;PORT=1433;DATABASE=ZFP_CRM;UID=efractal;PWD=efR@cZFP13;TDS_VERSION=8.0')
 cursor = cnxn.cursor()
 cursor.execute("select prijmeni from osoba where id_osoba = 462493")
 row = cursor.fetchone()
 print row

 prints:

('Ne?asov', )
 
 you definitely, definitely need client charset to be part of your FreeTDS 
 config, either in that URL string up there (not sure if that's supported), or 
 preferably in your freetds.conf.   pyodbc is pretty sensitive to this.
 
 However, when I connect via tsql, it works correcly:
 
 yeah, the whole FreeTDS story is awful, I don't understand any of it either.
 I'm still at the "wave a dead chicken" stage with FreeTDS
 (http://dictionary.reference.com/browse/wave+a+dead+chicken).
 
 

tsql -S zfp -U efractal
Password:
locale is cs_CZ.UTF-8
locale charset is UTF-8
using default charset utf8
1 select prijmeni from osoba where id_osoba = 462493
2 go
prijmeni
Nečasová
(1 row affected)
1


 I guess I am on my own now. Anyway, thank you for your kind assistance.

 Ladislav Lenart


 On 8.10.2012 19:06, Michael Bayer wrote:
 if you didnt have this problem with pymssql then please apply the patch I 
 sent previously.

 However, I'm going to bet the problem remains as it seems something is not 
 right with how your database and/or client is configured.  I'm not familiar 
 with the encoding pattern seen in your information_schema queries.  you may 
 have to reproduce the issue using plain SELECT statements against 
 information_schema and then seek outside help.



 diff -r 17cab4ad55d5 lib/sqlalchemy/dialects/mssql/pymssql.py
 --- a/lib/sqlalchemy/dialects/mssql/pymssql.py  Thu Oct 04 18:26:55 
 2012 -0400
 +++ b/lib/sqlalchemy/dialects/mssql/pymssql.py  Fri Oct 05 18:46:01 
 2012 -0400
 @@ -80,7 +80,7 @@
def _get_server_version_info(self, connection):
vers = connection.scalar("select @@version")
m = re.match(
-r"Microsoft SQL Server.*? - (\d+).(\d+).(\d+).(\d+)", vers)
+r"\s*Microsoft SQL Server.*? - (\d+).(\d+).(\d+).(\d+)", vers)
if m:
return tuple(int(x) for x in m.group(1, 2, 3, 4))
else:

 On Oct 8, 2012, at 12:10 PM, Ladislav Lenart wrote:

 Hello.

 I adjusted the ODBC/FreeTDS configuration according to your suggestions 
 but
 still get the NoSuchTableError: mlm_spol.

 freetds.conf:

 [zfp]
 host = 10.230.128.140
 port = 1433
 tds version = 8.0
 asa database = ZFP_CRM
 client charset = utf8
 text size = 5000


 odbc.ini:

 [ODBC Data Sources]
 zfp = test

 [zfp]
 Driver = /usr/lib/libtdsodbc.so
 Description = test
 Trace = No
 Servername = zfp


 odbcinst.ini (for the sake of completeness):

 [FreeTDS]
 Driver = /usr/lib/libtdsodbc.so
 UsageCount = 2


 The code:

 from sqlalchemy.engine import create_engine
 from sqlalchemy.ext.sqlsoup import SqlSoup

 if __name__ == '__main__':
   engine = create_engine("mssql://efractal:efR@cZFP13@zfp", echo='debug')
   db = SqlSoup(engine)
   x = db.mlm_spol.fetchone()
   print x


 still fails

Re: [sqlalchemy] [Q] SQLSoup and pymssql from Linux

2012-10-08 Thread Ladislav Lenart
Hello.

 otherwise, I'd consider using pyodbc for which the dialect and DBAPI are
production quality.   I use pyodbc with FreeTDS on unix platforms in production.

Ok, I can use pyodbc if it is the preferred choice. However, I cannot make it
work either. I suspect that I am supplying a bad connection string but am a
little lost in the docs...

The code:

from sqlalchemy.ext.sqlsoup import SqlSoup

if __name__ == '__main__':
    conn_string = 'mssql+pyodbc://username:pass\@word@10.230.128.140:1433/ZFP_CRM'
    db = SqlSoup(conn_string)
    x = db.zfp_mlm_spol.first()


fails (see the traceback below). Note the password contains the character '@'.
The preceding '\' is my attempt to escape it.

What connection string should I use to connect to MSSQL via pyodbc using 
freetds?
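One common fix (an assumption here, not from the thread) is to percent-encode the password so the '@' no longer splits the URL, e.g. with urllib's quote_plus (urllib.parse on Python 3, plain urllib on Python 2):

```python
from urllib.parse import quote_plus

password = "pass@word"   # hypothetical password containing '@'
url = "mssql+pyodbc://username:%s@10.230.128.140:1433/ZFP_CRM" % quote_plus(password)
print(url)  # mssql+pyodbc://username:pass%40word@10.230.128.140:1433/ZFP_CRM
```

The encoded URL contains exactly one '@', so the host portion parses unambiguously.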

Thank you,

Ladislav Lenart


THE TRACEBACK:

Traceback (most recent call last):
  File
/home/lada/.eclipse/org.eclipse.platform_3.8_155965261/plugins/org.python.pydev_2.6.0.2012062818/pysrc/pydevd.py,
line 1392, in module
debugger.run(setup['file'], None, None)
  File
/home/lada/.eclipse/org.eclipse.platform_3.8_155965261/plugins/org.python.pydev_2.6.0.2012062818/pysrc/pydevd.py,
line 1085, in run
pydev_imports.execfile(file, globals, locals) #execute the script
  File /home/lada/mine/devel/python/ZFP/zfp_connect.py, line 11, in module
x = db.zfp_mlm_spol.first()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/ext/sqlsoup.py, line 807,
in __getattr__
return self.entity(attr)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/ext/sqlsoup.py, line 804,
in entity
return self.map_to(attr, tablename=attr, schema=schema)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/ext/sqlsoup.py, line 684,
in map_to
schema=schema or self.schema)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/schema.py, line 318, in 
__new__
table._init(name, metadata, *args, **kw)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/schema.py, line 381, in 
_init
self._autoload(metadata, autoload_with, include_columns)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/schema.py, line 397, in
_autoload
self, include_columns, exclude_columns
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 2424,
in run_callable
conn = self.contextual_connect()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 2490,
in contextual_connect
self.pool.connect(),
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 224, in 
connect
return _ConnectionFairy(self).checkout()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 387, in 
__init__
rec = self._connection_record = pool._do_get()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 741, in 
_do_get
con = self._create_connection()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 188, in
_create_connection
return _ConnectionRecord(self)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 270, in 
__init__
self.connection = self.__connect()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 330, in 
__connect
connection = self.__pool._creator()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py, line
80, in connect
return dialect.connect(*cargs, **cparams)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line
281, in connect
return self.dbapi.connect(*cargs, **cparams)
sqlalchemy.exc.DBAPIError: (Error) ('IM002', '[IM002] [unixODBC][Driver
Manager]Data source name not found, and no default driver specified (0)
(SQLDriverConnectW)') None None


On 6.10.2012 00:47, Michael Bayer wrote:
 what I can do for the moment is this patch, if you want to try it:
 
 diff -r 17cab4ad55d5 lib/sqlalchemy/dialects/mssql/pymssql.py
 --- a/lib/sqlalchemy/dialects/mssql/pymssql.pyThu Oct 04 18:26:55 
 2012 -0400
 +++ b/lib/sqlalchemy/dialects/mssql/pymssql.pyFri Oct 05 18:46:01 
 2012 -0400
 @@ -80,7 +80,7 @@
  def _get_server_version_info(self, connection):
 vers = connection.scalar("select @@version")
 m = re.match(
 -r"Microsoft SQL Server.*? - (\d+).(\d+).(\d+).(\d+)", vers)
 +r"\s*Microsoft SQL Server.*? - (\d+).(\d+).(\d+).(\d+)", vers)
  if m:
  return tuple(int(x) for x in m.group(1, 2, 3, 4))
  else:
 
 
 
 otherwise, I'd consider using pyodbc for which the dialect and DBAPI are 
 production quality.   I use pyodbc with FreeTDS on unix platforms in 
 production.
 
 
 
 
 On Oct 5, 2012, at 1:40 PM, lenart...@volny.cz wrote:
 
 Hello.

 I dont have easy access to pymssql here so can you fully define what 
 fails means ?  stack trace ?

 I don't have access to my development environment during the weekend, so I 
 cannot provide you with a stacktrace, but I try to better describe the issue:

def _get_server_version_info(self, connection)
vers = connection.scalar(select @@version)
m = re.match(rMicrosoft SQL

Re: [sqlalchemy] [Q] SQLSoup and pymssql from Linux

2012-10-08 Thread Ladislav Lenart
Hello again.

It turned out that I was missing some ODBC-related packages and also needed to
configure freetds to work with unixodbc. I managed to finally do it, though it
was by no means easy for me (trial and error of several tutorials).

The following code works now:

import pyodbc

cnxn = pyodbc.connect('DRIVER={FreeTDS};SERVER=1.2.3.4;PORT=1433;DATABASE=ZFP_CRM;UID=username;PWD=pass@cword;TDS_VERSION=8.0')
cursor = cnxn.cursor()
cursor.execute("select * from mlm_spol")
row = cursor.fetchone()
print row


However SqlSoup does not work. The code:

import pyodbc
from sqlalchemy.engine import create_engine
from sqlalchemy.ext.sqlsoup import SqlSoup

def connect():
    return pyodbc.connect('DRIVER={FreeTDS};SERVER=10.230.128.140;PORT=1433;DATABASE=ZFP_CRM;UID=efractal;PWD=efR@cZFP13;TDS_VERSION=8.0;')

engine = create_engine('mssql+pyodbc://', creator=connect)
db = SqlSoup(engine)
row = db.mlm_spol.first()
print row


fails with:

Traceback (most recent call last):
  File "/home/lada/mine/devel/python/ZFP/zfp_connect.py", line 16, in <module>
    x = db.mlm_spol.first()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/ext/sqlsoup.py", line 807, in __getattr__
    return self.entity(attr)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/ext/sqlsoup.py", line 804, in entity
    return self.map_to(attr, tablename=attr, schema=schema)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/ext/sqlsoup.py", line 684, in map_to
    schema=schema or self.schema)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 318, in __new__
    table._init(name, metadata, *args, **kw)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 381, in _init
    self._autoload(metadata, autoload_with, include_columns)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 397, in _autoload
    self, include_columns, exclude_columns
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2426, in run_callable
    return conn.run_callable(callable_, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1969, in run_callable
    return callable_(self, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 260, in reflecttable
    return insp.reflecttable(table, include_columns, exclude_columns)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/reflection.py", line 412, in reflecttable
    raise exc.NoSuchTableError(table.name)
sqlalchemy.exc.NoSuchTableError: mlm_spol


Any ideas?


Thanks,

Ladislav Lenart


On 8.10.2012 11:11, Ladislav Lenart wrote:
 Hello.
 
 otherwise, I'd consider using pyodbc for which the dialect and DBAPI are
 production quality.   I use pyodbc with FreeTDS on unix platforms in 
 production.
 
 Ok, I can use pyodbc if it is the preferred choice. However I cannot make it
 work either. I suspect that I supply bad connection string but am a little 
 lost
 in the docs...
 
 The code:
 
 from sqlalchemy.ext.sqlsoup import SqlSoup

 if __name__ == '__main__':
     conn_string = 'mssql+pyodbc://username:pass\@word@10.230.128.140:1433/ZFP_CRM'
     db = SqlSoup(conn_string)
     x = db.zfp_mlm_spol.first()
 
 
 fails (see the traceback below). Note the password contains the character '@'.
 The preceding '\' is my attempt to escape it.
 
 What connection string should I use to connect to MSSQL via pyodbc using 
 freetds?
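 
 (One option for the '@' problem, sketched here with made-up credentials, is to
 percent-encode the password before building the URL instead of backslash-escaping
 it. This is an illustration, not something confirmed in the thread.)

 ```python
 # Percent-encode a password containing '@' before embedding it in a
 # SQLAlchemy URL; the credentials below are hypothetical.
 try:
     from urllib.parse import quote_plus  # Python 3
 except ImportError:
     from urllib import quote_plus        # Python 2

 password = "pass@word"
 url = "mssql+pyodbc://username:%s@10.230.128.140:1433/ZFP_CRM" % quote_plus(password)
 print(url)  # ...username:pass%40word@10.230.128.140:1433/ZFP_CRM
 ```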
 
 Thank you,
 
 Ladislav Lenart
 
 
 THE TRACEBACK:
 
 Traceback (most recent call last):
   File "/home/lada/.eclipse/org.eclipse.platform_3.8_155965261/plugins/org.python.pydev_2.6.0.2012062818/pysrc/pydevd.py", line 1392, in <module>
     debugger.run(setup['file'], None, None)
   File "/home/lada/.eclipse/org.eclipse.platform_3.8_155965261/plugins/org.python.pydev_2.6.0.2012062818/pysrc/pydevd.py", line 1085, in run
     pydev_imports.execfile(file, globals, locals) #execute the script
   File "/home/lada/mine/devel/python/ZFP/zfp_connect.py", line 11, in <module>
     x = db.zfp_mlm_spol.first()
   File "/usr/lib/python2.7/dist-packages/sqlalchemy/ext/sqlsoup.py", line 807, in __getattr__
     return self.entity(attr)
   File "/usr/lib/python2.7/dist-packages/sqlalchemy/ext/sqlsoup.py", line 804, in entity
     return self.map_to(attr, tablename=attr, schema=schema)
   File "/usr/lib/python2.7/dist-packages/sqlalchemy/ext/sqlsoup.py", line 684, in map_to
     schema=schema or self.schema)
   File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 318, in __new__
     table._init(name, metadata, *args, **kw)
   File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 381, in _init
     self._autoload(metadata, autoload_with, include_columns)
   File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 397, in _autoload
     self, include_columns, exclude_columns
   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2424, in run_callable
     conn = self.contextual_connect()
   File "/usr/lib/python2.7/dist-packages/sqlalchemy

Re: [sqlalchemy] [Q] SQLSoup and pymssql from Linux

2012-10-08 Thread Ladislav Lenart
Hello.

UPDATE: The raw SqlSoup.execute() works:

import pyodbc
from sqlalchemy.engine import create_engine
from sqlalchemy.ext.sqlsoup import SqlSoup

def connect():
    return pyodbc.connect('DRIVER={FreeTDS};SERVER=10.230.128.140;PORT=1433;DATABASE=ZFP_CRM;UID=efractal;PWD=efR@cZFP13;TDS_VERSION=8.0;')

engine = create_engine('mssql+pyodbc://', creator=connect)
db = SqlSoup(engine)
x = db.execute("select * from mlm_spol").fetchone()
print x


Any ideas what I have to do to make SqlSoup's ORM work? I am out of ideas.

Ladislav Lenart



Re: [sqlalchemy] [Q] SQLSoup and pymssql from Linux

2012-10-08 Thread Ladislav Lenart
Ok, I will give it yet another try, but please note that the following works:

import pyodbc
from sqlalchemy.engine import create_engine
from sqlalchemy.ext.sqlsoup import SqlSoup

def connect():
    return pyodbc.connect('DRIVER={FreeTDS};SERVER=10.230.128.140;PORT=1433;DATABASE=ZFP_CRM;UID=efractal;PWD=efR@cZFP13;TDS_VERSION=8.0;')

engine = create_engine('mssql+pyodbc://', creator=connect)
db = SqlSoup(engine)
x = db.execute("select * from mlm_spol").fetchone()
print x


i.e. raw SqlSoup.execute() sees the table and returns a valid result.


I regard this as a proof that the connection was established successfully and
the problem lies elsewhere. Am I wrong?


Ladislav Lenart


On 8.10.2012 16:20, Michael Bayer wrote:
 no such table usually means you're not connected to the database that you 
 think you are.
 
 I'd strongly suggest configuring an ODBC datasource within FreeTDS, and using 
 standard connection techniques.   Hostname, port, tds version go into 
 freetds.conf, and database names go into odbc.ini.
 
 In freetds conf for example I have:
 
 [ms_2005]
 host = 172.16.248.128
 port = 1213
 tds version = 8.0
 client charset = UTF8
 text size = 5000
 
 
 and on my mac in /Library/ODBC/odbc.ini I have:
 
 [ODBC Data Sources]
 ms_2005 = test
 
 [ms_2005]
 Driver  = /usr/local/lib/libtdsodbc.so
 Description = test
 Trace   = No
 Servername  = ms_2005
 
 
 I then connect with SQLAlchemy as:
 
 create_engine("mssql://scott:tiger@ms_2005")
 
 see http://freetds.schemamania.org/userguide/prepodbc.htm for freetds' docs 
 on all this.
 
 
 

Re: [sqlalchemy] [Q] SQLSoup and pymssql from Linux

2012-10-08 Thread Ladislav Lenart
Hello.


 perhaps.  Maybe the connection doesn't have correct access to the information
schema tables, as SQLSoup relies upon table reflection.  you'd need to run with
echo='debug' on your engine to see exactly what queries are being emitted and
the rows being returned, and determine why an information schema row for
mlm_spol isn't being returned.

I did it and it seems that there is a str/unicode problem when dealing with the
information schema (see below, it's rather long).

I tried adding charset=UTF8 to the pyodbc.connect string. I also tried various
combinations of the create_engine options supports_unicode_binds and
convert_unicode but none of them worked.

Ladislav Lenart


CODE:

import pyodbc
from sqlalchemy.engine import create_engine
from sqlalchemy.ext.sqlsoup import SqlSoup

def connect():
    return pyodbc.connect('DRIVER={FreeTDS};SERVER=10.230.128.140;PORT=1433;DATABASE=ZFP_CRM;UID=efractal;PWD=efR@cZFP13;TDS_VERSION=8.0;')

engine = create_engine('mssql+pyodbc://', creator=connect, echo='debug')
db = SqlSoup(engine)
x = db.execute("select * from mlm_spol").fetchone()
print x


ITS DEBUG OUTPUT:

pydev debugger: starting
2012-10-08 17:19:53,239 INFO sqlalchemy.engine.base.Engine SELECT user_name() as
user_name;
2012-10-08 17:19:53,240 INFO sqlalchemy.engine.base.Engine ()
2012-10-08 17:19:53,246 DEBUG sqlalchemy.engine.base.Engine Col ('user_name',)
2012-10-08 17:19:53,247 DEBUG sqlalchemy.engine.base.Engine Row
(u'\U00660065\U00610072\U00740063\U006c0061', )
2012-10-08 17:19:53,253 INFO sqlalchemy.engine.base.Engine
SELECT default_schema_name FROM
sys.database_principals
WHERE name = ?
AND type = 'S'

2012-10-08 17:19:53,253 INFO sqlalchemy.engine.base.Engine
(u'\U00660065\U00610072\U00740063\U006c0061',)
2012-10-08 17:19:53,259 DEBUG sqlalchemy.engine.base.Engine Col
('default_schema_name',)
2012-10-08 17:19:53,286 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2012-10-08 17:19:53,288 INFO sqlalchemy.engine.base.Engine select * from 
mlm_spol
2012-10-08 17:19:53,289 INFO sqlalchemy.engine.base.Engine ()
2012-10-08 17:19:53,304 DEBUG sqlalchemy.engine.base.Engine Col ('id_mlm_spol',
'id_mlm_spol_master', 'id_mlm_spol_ridici', 'id_mlm_spol_reditel',
'id_mlm_spol_predstav', 'id_mlm_spol_ppz', 'id_mlm_spol_psedm',
'id_mlm_spol_gar_cz', 'id_mlm_spol_gar_sk', 'id_mlm_spol_reditel_cz',
'id_mlm_spol_reditel_sk', 'id_mlm_spol_predstav_cz', 'id_mlm_spol_predstav_sk',
'id_mlm_spol_ridici_cz', 'id_mlm_spol_ridici_sk', 'id_mlm_spol_psedm_cz',
'id_mlm_spol_psedm_sk', 'id_mlm_firma_subjekt', 'pocet_primi', 'pocet_celkem',
'typ_spol', 'info', 'info_extended', 'os_cislo', 'os_cislo_sk', 'alt_cislo_1',
'alt_cislo_2', 'alt_cislo_3', 'pozice', 'dosahnuti_pozice', 'stav',
'dosahnuti_stavu', 'datum_dosahnuti_stavu', 'smlouva', 'prvni_aplikace_smlouvy',
'vznik_vykonu', 'rfp', 'body_vlastni', 'body_vlastni_alt',
'body_vlastni_alt_trans', 'body_skup', 'body_skup_alt', 'body_skup_alt_trans',
'body_top_struktury', 'kod_banky', 'cislo_uctu', 'spec_symbol', 'eu_banka',
'rod_cis', 'ico', 'pozice_puv', 'postup', 'os_cislo_reditel', 'os_cislo_ridici',
'os_cislo_sponzor', 'os_cislo_predstav', 'typoval', 'zmena', 'poznamka',
'pokus', 'os_cislo_sponzor_new', 'body_celkem', 'blokovany_postup',
'evidencni_cislo', 'body_celkem_200501', 'cislo_slev_karty', 'reg_cislo_mf',
'reg_cislo_uft', 'datum_prid_slev_karty', 'obdobi_prid_slev_karty',
'id_mlm_spol_gar_orig_cz', 'id_mlm_spol_gar_orig_sk', 'zmena_gar_cz',
'zmena_gar_sk', 'duvod_zmeny_gar_cz', 'duvod_zmeny_gar_sk', 'datum_last_cz',
'datum_last_sk', 'id_mlm_smlouva_last_cz', 'id_mlm_smlouva_last_sk',
'id_mlm_strukt_man_cz', 'id_mlm_strukt_man_sk', 'id_mlm_spol_ros_cz',
'id_mlm_spol_ros_sk', 'tisk_ps_1', 'tisk_ps_2', 'tisk_ps_3', 'tisk_ps_4',
'tisk_ps_1_rucni', 'tisk_ps_3_rucni', 'tisk_ps_5', 'tisk_ps_5_rucni',
'id_mlm_spol_psedm_akt_cz', 'id_mlm_spol_psedm_akt_sk', 'id_mlm_spol_hrk_cz',
'id_mlm_spol_hrk_sk', 'provizni_stav', 'menit_automaticky_pstav',
'zda_uplatnovat_kredit', 'perioda_dosahnuti_pozice', 'firma')
2012-10-08 17:19:53,306 DEBUG sqlalchemy.engine.base.Engine Row (277581, 268893,
37494, 37494, 20198, None, 37494, 119835, 18954, 37494, 18954, 20198, 18954,
37494, 18996, 37494, 18996, None, 0, 0, 'O', None, None, '414712', None, None,
None, None, '1', None, 1, '201208', datetime.datetime(2012, 8, 25, 0, 0), None,
None, None, None, Decimal('0.00'), Decimal('0.00'), Decimal('0.00'),
Decimal('0.00'), Decimal('0.00'), Decimal('0.00'),
Decimal('0.00'), None, None, None, None, None, None, None, None, None, None,
None, None, None, None, None, None, None, Decimal('0.00'), None, None, None,
None, None, None, None, None, None, None, None, None, None, None, None, None,
None, None, 37494, 18954, 18996, 19009, 'A', None, None, None, 'N', None, None,
None, 37494, 20198, 20198, 18954, False, True, True, 0, None)
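
An aside on the strange user_name value in the log above: it looks like UTF-16
data read back as 32-bit characters, with each 32-bit unit packing two
little-endian UTF-16 code units. A sketch decoding the four reported code points
(stdlib only; the interpretation is my assumption):

```python
import struct

# The four 32-bit values reported in the debug output above.
codepoints = [0x00660065, 0x00610072, 0x00740063, 0x006c0061]

# Re-serialize them as little-endian 32-bit units, then read the same
# bytes back as UTF-16-LE -- i.e. undo the suspected mis-decoding.
data = b''.join(struct.pack('<I', cp) for cp in codepoints)
print(data.decode('utf-16-le'))  # efractal -- the UID from the connect string
```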


CODE:

import pyodbc
from sqlalchemy.engine import create_engine
from sqlalchemy.ext.sqlsoup

Re: [sqlalchemy] [Q] SQLSoup and pymssql from Linux

2012-10-08 Thread Ladislav Lenart
Hello.

I adjusted the ODBC/FreeTDS configuration according to your suggestions but
still get the NoSuchTableError: mlm_spol.

freetds.conf:

[zfp]
host = 10.230.128.140
port = 1433
tds version = 8.0
asa database = ZFP_CRM
client charset = utf8
text size = 5000


odbc.ini:

[ODBC Data Sources]
zfp = test

[zfp]
Driver = /usr/lib/libtdsodbc.so
Description = test
Trace = No
Servername = zfp


odbcinst.ini (for the sake of completeness):

[FreeTDS]
Driver = /usr/lib/libtdsodbc.so
UsageCount = 2


The code:

from sqlalchemy.engine import create_engine
from sqlalchemy.ext.sqlsoup import SqlSoup

if __name__ == '__main__':
    engine = create_engine("mssql://efractal:efR@cZFP13@zfp", echo='debug')
    db = SqlSoup(engine)
    x = db.mlm_spol.fetchone()
    print x


still fails with NoSuchTableError: mlm_spol

Its debug output is the same as in my previous email, which used a different
connect style.

And the code that uses db.execute('select * from mlm_spol').fetchone() still
works...

Any other ideas?

Ladislav Lenart


On 8.10.2012 17:45, Michael Bayer wrote:
 your freetds datasource should be configured with CLIENT_CHARSET=utf8 as I 
 illustrated earlier:
 
 [ms_2005]
 host = 172.16.248.128
 port = 1213
 tds version = 8.0
 client charset = UTF8
 text size = 5000

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] [Q] SQLSoup and pymssql from Linux

2012-10-05 Thread Ladislav Lenart
Hello.

I am trying to access a Microsoft SQL database from Linux (Debian testing):

from sqlalchemy.ext.sqlsoup import SqlSoup

conn_string = 'mssql+pymssql://user:pass@freetds_name'
db = SqlSoup(conn_string)
v = db.some_table.first()
print v

freetds_name is the section name from /etc/freetds/freetds.conf

[freetds_name]
host = ...
port = 1433
tds version = 7.1
asa database = DB

The above script fails in pymssql on line 83, because line 81 sets vers to None:

def _get_server_version_info(self, connection):
    vers = connection.scalar("select @@version")
    m = re.match(
        r'Microsoft SQL Server.*? - (\d+).(\d+).(\d+).(\d+)', vers)
    if m:
        return tuple(int(x) for x in m.group(1, 2, 3, 4))
    else:
        return None

But the following works in tsql:

1> select @@version
2> go

Microsoft SQL Server 2008 R2 (RTM) - 10.50.1617.0 (X64)
Apr 22 2011 19:23:43
Copyright (c) Microsoft Corporation
Workgroup Edition (64-bit) on Windows NT 6.1 X64
(Build 7601: Service Pack 1) (Hypervisor)

(1 row affected)


Any idea what is wrong?


Thank you,

Ladislav Lenart




[sqlalchemy] [Q] Why the following emits SQL for slot access?

2012-09-27 Thread Ladislav Lenart
Hello again.

I have the following test query:

    def test_slot_access_after_query(self):
        q = self.session.query(Foo, Bar).with_labels()
        q = q.filter(Foo.bar_id == Bar.id)
        rows = q.all()
        assert len(rows) > 0
        with self.assert_no_sql_while():
            for (each_foo, _each_bar) in rows:
                each.bar.data # -- FAILS HERE

SQLAlchemy emits SQL to fetch each.bar's data. I do not understand why. I
expected that:
* Bar is already cached in the session.
* each.bar first checks if a bar with the appropriate id is present in the
session and if so, return it.

What am I doing wrong?


Thank you,

Ladislav Lenart




Re: [sqlalchemy] [Q] Why the following emits SQL for slot access?

2012-09-27 Thread Ladislav Lenart
Hello.

 not sure, is each.bar a typo and you meant _each_bar?

No, I mean each.bar.


 or are you trying to fetch a relationship along Foo -> Bar?  If it's a
relationship and the linkage is not a simple many-to-one, then it emits SQL.

Yes, I am trying to fetch the Foo -> Bar relationship. It is a one-way many-to-one
relationship: Foo has a bar_id FK to bar(id).

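For what it's worth, the "a simple many-to-one is served from the identity map"
behaviour can be demonstrated standalone. A minimal sketch against an in-memory
SQLite database (modern SQLAlchemy API, made-up Foo/Bar schema, not the original
SqlSoup setup):

def demo():  # wrapped only to keep names local; see note below
    pass

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, event
try:
    from sqlalchemy.orm import declarative_base, relationship, sessionmaker
except ImportError:  # older SQLAlchemy places declarative_base elsewhere
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class Bar(Base):
    __tablename__ = "bar"
    id = Column(Integer, primary_key=True)
    data = Column(String)

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    bar_id = Column(Integer, ForeignKey("bar.id"))
    bar = relationship(Bar)  # plain many-to-one, default lazy="select"

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Foo(id=1, bar=Bar(id=2, data="x")))
session.commit()

rows = session.query(Foo, Bar).filter(Foo.bar_id == Bar.id).all()

# Count SQL statements emitted from this point on.
statements = []
event.listen(engine, "before_cursor_execute",
             lambda conn, cur, stmt, params, ctx, many: statements.append(stmt))

for foo, bar in rows:
    assert foo.bar.data == "x"  # served from the identity map

print(len(statements))  # 0 -- no SQL emitted for the attribute access
```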

I still don't understand what I see, but I figured out the cause.

This works:

    def test_access_ok(self):
        q = self.session.query(Foo, Bar).with_labels()
        q = q.filter(Foo.bar_id == Bar.id)
        rows = q.all() # IMPORTANT LINE
        assert len(rows) > 0
        with self.assert_no_sql_while():
            for (each_foo, _each_bar) in rows:
                each.bar.data

This does not:

    def test_access_ko(self):
        q = self.session.query(Foo, Bar).with_labels()
        q = q.filter(Foo.bar_id == Bar.id)
        rows = list(unwrap(q)) # IMPORTANT LINE
        assert len(rows) > 0
        with self.assert_no_sql_while():
            for each in rows:
                each.bar.data # -- FAILS HERE

unwrap is defined as

def unwrap(query):
    for each in query:
        if isinstance(each, tuple) and len(each) > 0:
            yield each[0]
        else:
            yield each

I wanted to use unwrap to minimize the impact on the queries I have to modify
slightly to overcome the SA 0.7 limitation of of_type() in combination with
table inheritance (we spoke about this in another thread). To make SA fetch
everything I need, the query must return more results (tuples of instances
instead of the instances directly), so I can joinedload from them too. With
unwrap in place, the rest of the query processing can remain intact.

Do you have any idea why the unwrap version does not work?

Thank you,

Ladislav Lenart


On 27.9.2012 20:06, Michael Bayer wrote:
 
 On Sep 27, 2012, at 1:55 PM, Ladislav Lenart wrote:
 
 Hello again.

 I have the following test query:

    def test_slot_access_after_query(self):
        q = self.session.query(Foo, Bar).with_labels()
        q = q.filter(Foo.bar_id == Bar.id)
        rows = q.all()
        assert len(rows) > 0
        with self.assert_no_sql_while():
            for (each_foo, _each_bar) in rows:
                each.bar.data # -- FAILS HERE

 SQLAlchemy emits SQL to fetch each.bar's data. I do not understand why. I
 expected that:
 * Bar is already cached in the session.
 * each.bar first checks if a bar with the appropriate id is present in the
 session and if so, return it.

 What am I doing wrong?
 
 not sure, is each.bar a typo and you meant _each_bar?  Or are you 
 trying to fetch a relationship along Foo -> Bar?  If it's a relationship and 
 the linkage is not a simple many-to-one, then it emits SQL.
 
 




