[sqlalchemy] Auto-generate SQLAlchemy models from a YAML file or dicts

2022-07-03 Thread David Boateng Adams
I want to be able to auto-generate a SQLAlchemy model from a schema. I have the
schema in YAML format. From that file, I load the columns, their data types, and
the schema name into a dict. I want to use that dictionary to create a SQLAlchemy
model, not just a normal Python class.

I have done some research and I see you can use SQLAlchemy's automap feature
to do what I want, but that requires supplying a DB connection URI. Likewise
sqlacodegen.
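
For illustration, a minimal sketch of one way to do this (an assumption on my
part, not an official SQLAlchemy feature): build the mapped class dynamically
with type(), using a declarative Base. The schema dict layout, TYPE_MAP and
make_model() are invented names; the import path assumes SQLAlchemy 1.4+.

from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# hypothetical mapping from the YAML type names to SQLAlchemy types
TYPE_MAP = {"integer": Integer, "string": String(255), "datetime": DateTime}


def make_model(name, schema):
    """Build a declarative model from e.g.
    {"tablename": "person", "columns": {"id": "integer", "name": "string"}}."""
    attrs = {"__tablename__": schema["tablename"]}
    for col_name, type_name in schema["columns"].items():
        attrs[col_name] = Column(TYPE_MAP[type_name],
                                 primary_key=(col_name == "id"))
    # type() creates a real subclass of Base, so the result is a mapped
    # declarative model, not just a plain Python class
    return type(name, (Base,), attrs)


Person = make_model("Person", {"tablename": "person",
                               "columns": {"id": "integer", "name": "string"}})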



[sqlalchemy] Inheritance Hierarchies: Loading a model created with parent class in same session

2022-06-19 Thread David Bücher
To avoid writing a function that selects the correct class for a 'type' value,
I would like to create models using the parent class ('Employee') and 'type'
instead of using the child class ('Manager', 'Engineer') in the first place.
I noticed that the resulting object is not of the respective child class
(e.g. 'Manager'), even after re-querying it with the child class or with
with_polymorphic. It only works out in a new session. Is there a way to achieve
this without opening a new session?

Here is my example script - I am using sqlalchemy version 1.4.37:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

engine = create_engine('sqlite:///:memory:', echo=True)
Base = declarative_base()


class Employee(Base):
    __tablename__ = "employee"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    type = Column(String(50))

    __mapper_args__ = {
        "polymorphic_identity": "employee",
        "polymorphic_on": type,
        'with_polymorphic': '*'
    }


class Manager(Employee):
    manager_data = Column(String(50))

    __mapper_args__ = {
        "polymorphic_identity": "manager"
    }


Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
manager = Employee(name='name of Manager', type='manager')
session.add(manager)
session.flush()
manager_id = manager.id
session.commit()

# doesn't work:
m0 = session.query(Employee).filter_by(id=manager_id).first()
print(">>", m0.__mapper_args__['polymorphic_identity'])
try:
    print(m0.manager_data)
except:
    print(">> didn't work")

# doesn't work:
m = session.query(Manager).filter_by(id=manager_id).first()
print(">>", m.__mapper_args__['polymorphic_identity'])
try:
    print(m.manager_data)
except:
    print(">> didn't work")

# works:
session.close()
session2 = Session()
m1 = session2.query(Employee).filter_by(id=manager_id).first()
print(">>", m1.__mapper_args__['polymorphic_identity'])
print(m1.manager_data)

# works:
m = session2.query(Manager).filter_by(id=manager_id).first()
print(">>", m.__mapper_args__['polymorphic_identity'])
print(m.manager_data)

session2.close()
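
If the goal is to stay in the same session, a hedged guess (based on the
identity map being what keeps handing back the original Employee instance, not
on anything confirmed in the docs) is that detaching the stale instance before
re-querying has the same effect as opening a new session:

# hypothetical variant of the script above: instead of session.close(),
# detach the stale object and re-query within the same session
session.expunge(m0)   # m0, m and manager are all the same identity-mapped object
m2 = session.query(Manager).filter_by(id=manager_id).first()
print(">>", type(m2).__name__)   # expected to print 'Manager'
print(m2.manager_data)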

Thanks,
David

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/313009f4-c066-4746-ae8a-a285526c3e7an%40googlegroups.com.


[sqlalchemy] garbage collection of instances() generator causing psycopg2.ProgrammingError: named cursor isn't valid anymore

2022-02-16 Thread David Vitek
Hi all,

I have a situation where we are using transactions and postgres' server side 
cursors.  The troublesome sequence is something like:

x = s.query(...)
next(x)
if bail_out:
    s.commit()
    del x

I'm not entirely sure that this simple example can reproduce the problem; the 
true code is more complex, but hopefully this helps paint a picture of the 
general control flow.

The problem I'm running into has to do with what python does to garbage collect 
an ongoing coroutine.  The coroutine is ongoing since we have not exhausted x 
(assume the query returns many rows).  Python causes the coroutine to 
internally raise GeneratorExit when it is destroyed, which in this case 
triggers an except block that attempts to close the cursor.  The coroutine is 
instances() from sqlalchemy's loading.py.  The del x line causes the 
coroutine's refcount to drop to 0, which causes the GeneratorExit to raise and 
get caught by:

# from loading.py
def instances(cursor, context):
    ...
    except Exception:
        with util.safe_reraise():
            cursor.close()

Upon attempting to close the cursor, postgres complains 
"psycopg2.ProgrammingError: named cursor isn't valid anymore" because it was 
closed when the enclosing transaction closed.

We consider it a good thing that the cursor closes when the transaction closes, 
but we find the exception raised when doing GC undesirable.

I'm wondering if there is any way to make cursors more cognizant of the 
transactions that they live in, so that they might realize they are already 
closed when their surrounding transaction has closed?

In the meantime, we are working around this problem by checking whether the 
session in the query object inside instances() is already dead:

# from loading.py
def instances(cursor, context):
    ...
    except Exception:
        with util.safe_reraise():
            if query.session.is_active:  # <--- new line
                cursor.close()

The main shortcoming of this solution is that it only helps this particular 
cursor.

While we could try to do things like move the "del x" line up before the 
commit(), this isn't a real fix since python doesn't promise to GC things 
promptly.  Explicitly closing the cursor would be another option, but it's not 
readily available to the sqlalchemy client when using the ORM interface.  Even 
if it were, I imagine we'd forget to do it from time to time.
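
To make the timing concrete, a small SQLAlchemy-free sketch of the mechanism
described above; Cursor and rows() are stand-ins for the server-side cursor and
loading.instances(), and the only point is that a generator's cleanup runs
either at explicit close() time or at garbage-collection time, and only the
former is guaranteed to happen before the transaction ends:

class Cursor:
    """Stand-in for a psycopg2 named (server-side) cursor."""

    def __init__(self):
        self.valid = True

    def close(self):
        if not self.valid:
            raise RuntimeError("named cursor isn't valid anymore")
        print("cursor closed while still valid")


def rows(cursor):
    """Stand-in for loading.instances(): closes the cursor on early exit."""
    try:
        for i in range(1000):
            yield i
    except GeneratorExit:
        cursor.close()
        raise


cursor = Cursor()
it = rows(cursor)
next(it)

it.close()            # cleanup runs now, while the cursor is still usable
cursor.valid = False  # commit() would invalidate the server-side cursor here

# Had the last reference instead been dropped after the invalidation
# ("del it"), the GeneratorExit-driven cleanup would have run too late and
# raised the same kind of error as the ProgrammingError above.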

Any thoughts?




[sqlalchemy] Re: PostgreSQL: interval columns, values measured in months or years

2020-07-20 Thread David Feinzeig
Hi,

I think this is the first time I've cared about psycopg2 mangling intervals,
and this post looks super helpful. I was wondering if you have any updated
suggestions, since it is a few years old?

Thanks!
David

On Tuesday, December 24, 2013 at 6:27:06 PM UTC-5, David Bolen wrote:
>
> Sibylle Koczian writes:
>
> > Exactly, that's it. The stackoverflow discussion was very instructive, 
> > thank you! I'll try out psycopg2-dateutils. 
>
> I'm a big fan of dateutil.  If you do use it, you may also choose to 
> bypass the use of timedelta entirely, since as you've seen it can fail 
> to accurately represent the database value (timedelta only has days, 
> minutes and seconds as components). 
>
> For example, I use the code below to map PostgreSQL interval columns 
> directly to dateutil's relativedelta subclass. 
>
> Any returned queries involving interval values will use the new 
> Interval type automatically, and will accurately round-trip.  You'll 
> have to use the Interval class explicitly when creating values to go 
> the other way. 
>
> The mapping occurs purely at the psycopg2 level. 
>
> -- David 
>
>
> # ----------------------------------------------------------------------
> #  Interval Mapping
> #
> #  Override normal psycopg2 mapping so Interval types get mapped into a
> #  relativedelta since it can more accurately represent them.  We use our own
> #  relativedelta subclass as we need it to be a new style class for psycopg2
> #  adaptation to work.  Other code must use our Interval class if trying to
> #  store such data back into the database.
> #
> #  In this case there's nothing to do at the SQLAlchemy layer because for
> #  PostgreSQL, it just uses the data as provided by psycopg2.
> # ----------------------------------------------------------------------
>
> # imports needed by the snippet (implied in the original post)
> from dateutil.relativedelta import relativedelta
>
>
> class Interval(relativedelta, object):
>     pass
>
>
> import psycopg2
> from psycopg2 import InterfaceError
> from psycopg2._psycopg import INTERVAL as psycopg2_INTERVAL
> from psycopg2.extensions import new_type, register_type, register_adapter, AsIs
> import re
>
> interval_re = re.compile(' *'
>                          '((?P<years>\d+) y\D*)?'
>                          '((?P<months>\d+) m\D*)?'
>                          '((?P<days>\d+) d\D*)?'
>                          '((?P<hours>\d+):(?P<minutes>\d+):(?P<seconds>\d+))?'
>                          '(\.(?P<subseconds>\d+))?$')
>
>
> # Configure handling for supplying an Interval to store in the database
>
> def adapt_interval(interval):
>     adapter = AsIs("'%d years %d months %d days %02d:%02d:%02d.%06d'" %
>                    (interval.years, interval.months, interval.days,
>                     interval.hours, interval.minutes, interval.seconds,
>                     interval.microseconds))
>     return adapter
>
> register_adapter(Interval, adapt_interval)
>
>
> # Configure handling upon receiving an Interval from the database
>
> def cast_interval(value, cur):
>     if value is None:
>         return None
>
>     m = interval_re.match(value)
>     if m:
>         vals = m.groupdict(0)
>         # Most everything is direct, but subseconds has a varying precision,
>         # so force it to be microseconds if we had a value
>         microseconds = vals['subseconds']
>         if microseconds:
>             microseconds = (microseconds + '00')[:6]
>         return Interval(years=int(vals['years']),
>                         months=int(vals['months']),
>                         days=int(vals['days']),
>                         hours=int(vals['hours']),
>                         minutes=int(vals['minutes']),
>                         seconds=int(vals['seconds']),
>                         microseconds=int(microseconds))
>     else:
>         raise InterfaceError("Bad interval representation: %r" % value)
>
> INTERVAL = new_type(psycopg2_INTERVAL.values, "INTERVAL", cast_interval)
> register_type(INTERVAL)
>
>
>
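
For what it's worth, a hypothetical usage sketch of the registrations above;
the Job model and its "duration" interval column are my own stand-ins, not part
of the original post:

# after register_adapter()/register_type(), interval columns come back as
# Interval (a relativedelta subclass) and Interval values can be written directly
job = Job(duration=Interval(years=1, months=2, days=3, hours=4))
session.add(job)
session.commit()

refreshed = session.query(Job).filter_by(id=job.id).one()
print(refreshed.duration.years, refreshed.duration.months)  # 1 2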



Re: Alembic: Change the order of the columns for a table

2020-03-05 Thread David Siller
Hello Mike,

sorry for not having provided the table definition initially. You are 
absolutely right, I used to define the tables and columns in a declarative 
way and had the uniqueness-constraints defined on the columns.

As you proposed, setting unique=False and index=False before invoking the
CreateTableOp does indeed solve the problem.
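
For readers assembling the pieces of this thread, a minimal sketch of the
combined approach under discussion (copy the columns as in the cookbook entry
linked in the quoted reply below, clear unique/index on the copies so
single-column UniqueConstraints are not duplicated, and return a fresh
CreateTableOp). The ordering rule and names are illustrative only:

from operator import itemgetter

from alembic.autogenerate import rewriter
from alembic.operations import ops
from sqlalchemy import Column

writer = rewriter.Rewriter()


@writer.rewrites(ops.CreateTableOp)
def order_columns(context, revision, op):
    keyed = []
    for position, col in enumerate(op.columns):
        if not isinstance(col, Column):
            keyed.append((2000, col))      # constraints/indexes stay at the end
            continue
        copied = col.copy()                # detach from the mapped Table
        copied.unique = False              # the named UniqueConstraint is already
        copied.index = False               # present among op.columns
        key = -100 if copied.name == "id" and copied.primary_key else position
        keyed.append((key, copied))

    columns = [col for _, col in sorted(keyed, key=itemgetter(0))]
    return ops.CreateTableOp(op.table_name, columns, schema=op.schema)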

So thanks a lot once more!

Best regards
David

On Wednesday, March 4, 2020 at 17:54:30 UTC+1, Mike Bayer wrote:
>
>
>
> On Wed, Mar 4, 2020, at 10:27 AM, David Siller wrote:
>
> Hello Mike,
>
> and thank you very much for the solution. It is working flawlessly for 
> ordering the columns.
>
> For others finding this thread: Mike created an entry in the Alembic 
> cookbook 
> <https://alembic.sqlalchemy.org/en/latest/cookbook.html#apply-custom-sorting-to-table-columns-within-create-table>
> .
>
> There is just one (minor) flaw, where I currently don't know how to fix 
> it, but I can adjust the generated script manually:
> I'm using the naming schema for constraints and indexes as described in 
> the documentation <https://alembic.sqlalchemy.org/en/latest/naming.html>. 
> While the names for e.g. foreign keys, indexes or primary key constraints 
> are kept with your solution, UniqueConstraints are strangely duplicated, 
> once with the provided name and once just with the column name. This 
> happens just for UniqueConstraints. So you have e.g. something like the 
> following in the generated script:
>
> sa.UniqueConstraint("iso_name"),
> sa.UniqueConstraint("iso_name", name=op.f("uq_country_iso_name")),
>
> So "uq_country_iso_name" in this case is the name provided by the naming 
> schema, while the first UniqueConstraint is generated as duplication.
>
> Maybe any hint on what I'm doing wrong? Or maybe a bug in the 
> Column.copy-method 
> (although the Column.constraints-set on the copied column seems to be 
> empty; also the copied columns-list as in your example contains the 
> correct UniqueConstraint with the correct name; so maybe it happens in 
> the CreateTableOp)?
>
>
> are you using the unique=True flag on the Column itself?  that would be 
> why.   It would probably work to set col.unique=False, col.index=False 
> before applying the Column to the new operation so that these don't double 
> up for the constraints that we are already getting from the table.  let me 
> know if that works and we can update the demo.
>
>
>
> Thanks again for your solution
> David
>
>
>
> On Tuesday, March 3, 2020 at 15:54:17 UTC+1, Mike Bayer wrote:
>
>
>
> On Tue, Mar 3, 2020, at 3:36 AM, David Siller wrote:
>
> Hello,
>
> first and foremost: thank you for SQLAlchemy and Alembic. I've worked with 
> a lot of ORMs, but only with these two I feel very comfortable and I'm 
> doing a lot of crazy stuff with it.
>
> The current problem that I have: I'm currently creating a lot of tables 
> with a lot of mixins, as most of them have e.g. an id-column or columns 
> for created_at-dates etc. However it seems that I can't control the order 
> of the columns, resulting in e.g. the primary-key-id-column to show up in 
> the middle of the other columns or the created_at-column at the beginning 
> and the created_by-column at the end, especially when I autogenerate the 
> versioning scripts.
>
>
> Declarative with mixins would hopefully be able to do a little better than 
> that, if the issue is just amongst a small handful of columns maybe there's 
> a way to fix it at that level. This ordering is actually controlled by 
> an attribute on every column called "_creation_order".   It gets set when 
> the column is first created based on a global counter and that is how the 
> sorting of the columns works within Declarative; it sorts on that attribute 
> and sends the Column objects to Table in that order.   
>
> also, I don't know how the column ordering that comes from the mappings 
> would be different if you rendered metadata.create_all() versus using 
> alembic autogenerate, it's the same Table objects.
>
>
>
>
>
>
> But alembic has also a method in this case (as always ;-)). So I created a 
> Rewriter-method for the CreateTableOps in the env.py-file, re-order the 
> columns in the correct way and reassign this ordered list of columns to 
> op.columns. Unfortunately this doesn't work. Somehow it either uses 
> whatever is already the to_table()-method (?) or ... something else. So I 
> tried to create a new operation in the rewriter with the ordered list and 
> returned this operation instead. But then I get an the error: 
> sqlalchemy.exc.ArgumentError: 
> Column object 'id' already assigned to Table 'user'.
>
>
>

Re: Alembic: Change the order of the columns for a table

2020-03-04 Thread David Siller
An addition: It keeps the correct name if the UniqueConstraint involves 
multiple columns. The duplication only happens if the constraint checks the 
uniqueness on a single column.

Also the CreateTableOp does not seem to duplicate the UniqueConstraints, so 
it must happen later on.

Best regards
David

On Wednesday, March 4, 2020 at 16:27:26 UTC+1, David Siller wrote:
>
> Hello Mike,
>
> and thank you very much for the solution. It is working flawlessly for 
> ordering the columns.
>
> For others finding this thread: Mike created an entry in the Alembic 
> cookbook 
> <https://alembic.sqlalchemy.org/en/latest/cookbook.html#apply-custom-sorting-to-table-columns-within-create-table>
> .
>
> There is just one (minor) flaw, where I currently don't know how to fix 
> it, but I can adjust the generated script manually:
> I'm using the naming schema for constraints and indexes as described in 
> the documentation <https://alembic.sqlalchemy.org/en/latest/naming.html>. 
> While the names for e.g. foreign keys, indexes or primary key constraints 
> are kept with your solution, UniqueConstraints are strangely duplicated, 
> once with the provided name and once just with the column name. This 
> happens just for UniqueConstraints. So you have e.g. something like the 
> following in the generated script:
>
> sa.UniqueConstraint("iso_name"),
> sa.UniqueConstraint("iso_name", name=op.f("uq_country_iso_name")),
>
> So "uq_country_iso_name" in this case is the name provided by the naming 
> schema, while the first UniqueConstraint is generated as duplication.
>
> Maybe any hint on what I'm doing wrong? Or maybe a bug in the 
> Column.copy-method 
> (although the Column.constraints-set on the copied column seems to be 
> empty; also the copied columns-list as in your example contains the 
> correct UniqueConstraint with the correct name; so maybe it happens in 
> the CreateTableOp)?
>
> Thanks again for your solution
> David
>
>
>
> On Tuesday, March 3, 2020 at 15:54:17 UTC+1, Mike Bayer wrote:
>>
>>
>>
>> On Tue, Mar 3, 2020, at 3:36 AM, David Siller wrote:
>>
>> Hello,
>>
>> first and foremost: thank you for SQLAlchemy and Alembic. I've worked 
>> with a lot of ORMs, but only with these two I feel very comfortable and I'm 
>> doing a lot of crazy stuff with it.
>>
>> The current problem that I have: I'm currently creating a lot of tables 
>> with a lot of mixins, as most of them have e.g. an id-column or columns 
>> for created_at-dates etc. However it seems that I can't control the 
>> order of the columns, resulting in e.g. the primary-key-id-column to 
>> show up in the middle of the other columns or the created_at-column at 
>> the beginning and the created_by-column at the end, especially when I 
>> autogenerate the versioning scripts.
>>
>>
>> Declarative with mixins would hopefully be able to do a little better 
>> than that, if the issue is just amongst a small handful of columns maybe 
>> there's a way to fix it at that level. This ordering is actually 
>> controlled by an attribute on every column called "_creation_order".   It 
>> gets set when the column is first created based on a global counter and 
>> that is how the sorting of the columns works within Declarative; it sorts 
>> on that attribute and sends the Column objects to Table in that order.   
>>
>> also, I don't know how the column ordering that comes from the mappings 
>> would be different if you rendered metadata.create_all() versus using 
>> alembic autogenerate, it's the same Table objects.
>>
>>
>>
>>
>>
>>
>> But alembic has also a method in this case (as always ;-)). So I created 
>> a Rewriter-method for the CreateTableOps in the env.py-file, re-order 
>> the columns in the correct way and reassign this ordered list of columns to 
>> op.columns. Unfortunately this doesn't work. Somehow it either uses 
>> whatever is already the to_table()-method (?) or ... something else. So 
>> I tried to create a new operation in the rewriter with the ordered list and 
>> returned this operation instead. But then I get an the error: 
>> sqlalchemy.exc.ArgumentError: 
>> Column object 'id' already assigned to Table 'user'.
>>
>>
>>
>> so the CreateTableOp and a lot of the other ops have an "_orig" attribute 
>> that points to the "original" object being autogenerated.   For this 
>> object, it's "_orig_table".  These objects did not anticipate being 
>> rewritten at the level at which you are attempting to do.   So to make the

Re: Alembic: Change the order of the columns for a table

2020-03-04 Thread David Siller
Hello Mike,

and thank you very much for the solution. It is working flawlessly for 
ordering the columns.

For others finding this thread: Mike created an entry in the Alembic 
cookbook 
<https://alembic.sqlalchemy.org/en/latest/cookbook.html#apply-custom-sorting-to-table-columns-within-create-table>
.

There is just one (minor) flaw, where I currently don't know how to fix it, 
but I can adjust the generated script manually:
I'm using the naming schema for constraints and indexes as described in the 
documentation <https://alembic.sqlalchemy.org/en/latest/naming.html>. While 
the names for e.g. foreign keys, indexes or primary key constraints are 
kept with your solution, UniqueConstraints are strangely duplicated, once 
with the provided name and once just with the column name. This happens 
just for UniqueConstraints. So you have e.g. something like the following 
in the generated script:

sa.UniqueConstraint("iso_name"),
sa.UniqueConstraint("iso_name", name=op.f("uq_country_iso_name")),

So "uq_country_iso_name" in this case is the name provided by the naming 
schema, while the first UniqueConstraint is generated as duplication.

Maybe any hint on what I'm doing wrong? Or maybe a bug in the 
Column.copy-method 
(although the Column.constraints-set on the copied column seems to be 
empty; also the copied columns-list as in your example contains the correct 
UniqueConstraint 
with the correct name; so maybe it happens in the CreateTableOp)?

Thanks again for your solution
David



On Tuesday, March 3, 2020 at 15:54:17 UTC+1, Mike Bayer wrote:
>
>
>
> On Tue, Mar 3, 2020, at 3:36 AM, David Siller wrote:
>
> Hello,
>
> first and foremost: thank you for SQLAlchemy and Alembic. I've worked with 
> a lot of ORMs, but only with these two I feel very comfortable and I'm 
> doing a lot of crazy stuff with it.
>
> The current problem that I have: I'm currently creating a lot of tables 
> with a lot of mixins, as most of them have e.g. an id-column or columns 
> for created_at-dates etc. However it seems that I can't control the order 
> of the columns, resulting in e.g. the primary-key-id-column to show up in 
> the middle of the other columns or the created_at-column at the beginning 
> and the created_by-column at the end, especially when I autogenerate the 
> versioning scripts.
>
>
> Declarative with mixins would hopefully be able to do a little better than 
> that, if the issue is just amongst a small handful of columns maybe there's 
> a way to fix it at that level. This ordering is actually controlled by 
> an attribute on every column called "_creation_order".   It gets set when 
> the column is first created based on a global counter and that is how the 
> sorting of the columns works within Declarative; it sorts on that attribute 
> and sends the Column objects to Table in that order.   
>
> also, I don't know how the column ordering that comes from the mappings 
> would be different if you rendered metadata.create_all() versus using 
> alembic autogenerate, it's the same Table objects.
>
>
>
>
>
>
> But alembic has also a method in this case (as always ;-)). So I created a 
> Rewriter-method for the CreateTableOps in the env.py-file, re-order the 
> columns in the correct way and reassign this ordered list of columns to 
> op.columns. Unfortunately this doesn't work. Somehow it either uses 
> whatever is already the to_table()-method (?) or ... something else. So I 
> tried to create a new operation in the rewriter with the ordered list and 
> returned this operation instead. But then I get an the error: 
> sqlalchemy.exc.ArgumentError: 
> Column object 'id' already assigned to Table 'user'.
>
>
>
> so the CreateTableOp and a lot of the other ops have an "_orig" attribute 
> that points to the "original" object being autogenerated.   For this 
> object, it's "_orig_table".  These objects did not anticipate being 
> rewritten at the level at which you are attempting to do.   So to make the 
> columns-in-place get recognized, I have to delete _orig_table:
>
> op._orig_table = None
>
> then it tries to build the Table again, as we want it to do.  but then we 
> have the same problem which is that these columns are from your existing 
> mapping and they are already associated with a table.  
>
> There's a bunch of ways to go here but none of them would be something 
> people could normally figure out without asking me here.  I  think the 
> cleanest way is to copy the columns using their copy() method, and then to 
> avoid dealing with _orig_table, make a new CreateTableOp:
>
> @writer.rewrites(ops.CreateTableOp)
> def order_columns(context, revision, op):
>
>     special_names = {"id"

Alembic: Change the order of the columns for a table

2020-03-03 Thread David Siller
Hello,

first and foremost: thank you for SQLAlchemy and Alembic. I've worked with 
a lot of ORMs, but only with these two I feel very comfortable and I'm 
doing a lot of crazy stuff with it.

The current problem that I have: I'm currently creating a lot of tables with a
lot of mixins, as most of them have e.g. an id column or columns for created_at
dates etc. However, it seems that I can't control the order of the columns,
resulting in e.g. the primary-key id column showing up in the middle of the
other columns, or the created_at column at the beginning and the created_by
column at the end, especially when I autogenerate the versioning scripts. But
Alembic also has a method for this case (as always ;-)). So I created a
Rewriter function for the CreateTableOps in the env.py file, re-ordered the
columns in the correct way and reassigned this ordered list of columns to
op.columns. Unfortunately this doesn't work. Somehow it either uses whatever is
already in the to_table() method (?) or ... something else. So I tried to
create a new operation in the rewriter with the ordered list and returned this
operation instead. But then I get the error:
sqlalchemy.exc.ArgumentError: Column object 'id' already assigned to Table 'user'.

The code I'm using is the following:

from operator import itemgetter
from alembic.autogenerate import rewriter
from alembic.operations.ops import CreateTableOp
from sqlalchemy.sql.schema import Column


writer = rewriter.Rewriter()


@writer.rewrites(CreateTableOp)
def order_columns(context, revision, op):
    """Reorder the columns before creating a table."""
    preordered = []
    for col in op.columns:
        k = 0  # key used for ordering later on
        if not isinstance(col, Column):
            k = 99  # all constraints or indexes should stay at the end of the definition
        elif col.primary_key and col.name == 'id':
            k = 1
            # + a lot of other ordering constraints for other columns
        else:
            k = 2  # all non-id columns
        preordered.append((k, col))  # creating my ordered list

    # Now sorting the list and extracting only the column objects.
    # This list is indeed correctly sorted, just what I want.
    ordered_column_list = [itm[1] for itm in sorted(preordered, key=itemgetter(0))]

    # Creating a new operation and returning it is not working; it results in:
    # ArgumentError: Column object '...' already assigned to Table '...'
    # new_op = CreateTableOp(op.table_name, ordered_column_list, schema=op.schema)
    # return new_op

    # Reassigning the ordered column list is not working either; it seems to be ignored:
    op.columns = ordered_column_list
    return op

[...]

def run_migrations_online():
    [...]
    with connectable.connect() as connection:
        context.configure(
            [...]
            process_revision_directives=writer
        )
    [...]


The problem is similar to Altering the behavior of AddColumn 
<https://groups.google.com/forum/#!searchin/sqlalchemy-alembic/order|sort:date/sqlalchemy-alembic/izYq2EMYotI/gMISQpjkAwAJ>,
 
but I need to order multiple columns. The ordering works perfectly fine, 
the rewriter is also invoked (tested it e.g. by returning [] instead or 
debugging), just the reordered list of columns is then totally ignored.

Any hint to point me in the right direction of what I'm doing wrong or any 
other possibility to do the reordering?

Thank you in advance and thanks again

David



Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-20 Thread Scheck David
Ok, I just identified the issue. It seems that there is a conflict between two
subqueries.

In my model I have this:

statussen = relationship(
    "PersoonStatus",
    order_by="desc(PersoonStatus.status_datum)",
    backref='persoon',
    cascade='all, delete, delete-orphan',
    lazy='subquery'
)

and if I do this query:

last_statuses = aliased(
    statussen_table_name,
    self.session.query(
        getattr(statussen_table_name, issue_id_field),
        statussen_table_name.status_id)
    .join(Status).order_by(Status.datum.desc())
    .limit(1).subquery().lateral())

there is a conflict between these two subqueries, which both target the same table.

Is there a workaround to prevent this conflict?

any idea?
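
One hedged idea (an assumption on my part, not a verified fix): since the
relationship above is configured with lazy='subquery', the ORM issues its own
eager-load subquery against statussen in addition to the hand-built lateral
subquery, so disabling that eager load for this one query would leave only the
explicit subquery. Names follow the ones used earlier in this thread;
string-based loader options as shown work on the 1.3 series:

from sqlalchemy.orm import lazyload

results = (
    self.session.query(self.cls)
    .options(lazyload("statussen"))   # skip the subquery eager load here
    .join(last_statuses)
    .filter(Status.statustype_id > 50)
    .all()
)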



Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-17 Thread Scheck David
I really don't know why this query returns this; it's totally mysterious.

On Thursday, May 16, 2019 at 16:27:50 UTC+2, Scheck David wrote:
>
> I finished by a query like that : 
>
>
> last_statuses = aliased(statussen_table_name, 
> self.session.query(getattr(statussen_table_name, object_name),
> 
> statussen_table_name.status_id).join(Status).order_by(Status.datum.desc()).limit(1).subquery().lateral())
> return 
> self.session.query(self.cls).join(last_statuses).filter(Status.statustype_id 
> > 50).all()
>
> the problem is that there is a problem in the query : 
>
> sqlalchemy.exc.InvalidRequestError: Select statement 'SELECT 
> statussen.datum 
>
> FROM (SELECT personen.id AS personen_id 
>
> FROM statussen, personen JOIN LATERAL (SELECT 
> personen_statussen.persoon_id AS persoon_id, personen_statussen.status_id 
> AS status_id 
>
> FROM personen_statussen JOIN statussen ON statussen.id = 
> personen_statussen.status_id ORDER BY statussen.datum DESC
>
>  LIMIT :param_1) AS anon_2 ON personen.id = anon_2.persoon_id 
>
> WHERE statussen.statustype_id > :statustype_id_1) AS anon_1, 
> personen_statussen 
>
> WHERE statussen.id = personen_statussen.status_id' returned no FROM 
> clauses due to auto-correlation; specify correlate() to control 
> correlation manually.
>



Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-16 Thread Scheck David
I finished with a query like this:

last_statuses = aliased(
    statussen_table_name,
    self.session.query(
        getattr(statussen_table_name, object_name),
        statussen_table_name.status_id)
    .join(Status).order_by(Status.datum.desc())
    .limit(1).subquery().lateral())
return (
    self.session.query(self.cls)
    .join(last_statuses)
    .filter(Status.statustype_id > 50)
    .all())

The problem is that the query raises an error:

sqlalchemy.exc.InvalidRequestError: Select statement 'SELECT statussen.datum
FROM (SELECT personen.id AS personen_id
FROM statussen, personen JOIN LATERAL (SELECT personen_statussen.persoon_id AS persoon_id, personen_statussen.status_id AS status_id
FROM personen_statussen JOIN statussen ON statussen.id = personen_statussen.status_id ORDER BY statussen.datum DESC
 LIMIT :param_1) AS anon_2 ON personen.id = anon_2.persoon_id
WHERE statussen.statustype_id > :statustype_id_1) AS anon_1, personen_statussen
WHERE statussen.id = personen_statussen.status_id' returned no FROM clauses
due to auto-correlation; specify correlate() to control correlation manually.





Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-15 Thread Scheck David
For the full state of what I have: https://dpaste.de/vV8k

The goal is to convert the SQL query to SQLAlchemy.

Thanks in advance for any help



Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-14 Thread Scheck David
In fact, I will use this as a method on a data manager used for different
objects. In this SQL the object is erfgoedobjecten, but it will change.

On Tue, May 14, 2019 at 18:06, Jonathan Vanasco wrote:

>
>
> On Tuesday, May 14, 2019 at 10:29:58 AM UTC-4, Scheck David wrote:
>>
>> I'm near the result of sql:
>>
>> https://dpaste.de/1XYa#
>>
>>>
>>>>
> Do you want this to be an attribute/relationship on the class or to run
> this as a separate query?
>


Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-14 Thread Scheck David
I'm getting close to the SQL result I want:

https://dpaste.de/1XYa#

On Monday, May 13, 2019 at 17:46:29 UTC+2, Mike Bayer wrote:
>
>
>
> On Mon, May 13, 2019 at 10:37 AM Scheck David  > wrote:
>
>> the problem is that I can't use SQL for this because this is a mixins 
>> that I use for several objects (tables) because they all have this status 
>> structure... yes it's quite difficult to do :
>>
>
> I don't know what it is you want to do so if you could illustrate the SQL 
> that resembles what you want, I'd have some idea.   If you are trying to 
> build a Query object, then you are working with SQL and you'd need to 
> understand the structure you're looking for.
>
>
>  
>
>>
>> right now I'm on this stage, but still don't work
>> self.session.query(Object).outerjoin(ObjectStatus).outerjoin(Status)\
>> .group_by(Object).with_entities(Object, 
>> func.max(Status.datum).label("status_datum")).subquery()
>>
>> self.session.query(Object).join((sub, sub.c.id == Status.id))\
>> .filter(sub.c.statustype_id > 50)
>>
>> On Mon, May 13, 2019 at 16:26, Mike Bayer wrote:
>>
>>> Hi -
>>>
>>> I was hoping someone could jump in on this.
>>>
>>> do you know the SQL that you want to emit?  E.g. plain SQL string.   I
>>> can help you do that part.
>>>
>>>
>>> On Mon, May 13, 2019 at 9:11 AM Scheck David >> > wrote:
>>> >
>>> > I think I'm near but I can't finish :
>>> >
>>> > for all the structure : https://dpaste.de/fek5#L
>>> >
>>> > and here my query :
>>> >
>>> > self.session.query(Object).outerjoin(ObjectStatus).outerjoin(Status)\
>>> > .group_by(Object).with_entities(Object, 
>>> func.max(Status.datum).label("status_datum")).subquery()
>>> >
>>> > self.session.query(Object).join((sub, sub.c.id == Status.id))\
>>> > .filter(sub.c.statustype_id > 50)
>>> >
>>> > but status type not reachable.
>>> >
>>> >

Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-14 Thread Scheck David
Here is the latest version of my SQL query:

https://dpaste.de/8UhP



Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-14 Thread Scheck David

Here is the SQL generated for this query: https://dpaste.de/bJsc

On Monday, May 13, 2019 at 17:46:29 UTC+2, Mike Bayer wrote:
>
>
>
> On Mon, May 13, 2019 at 10:37 AM Scheck David  > wrote:
>
>> the problem is that I can't use SQL for this because this is a mixins 
>> that I use for several objects (tables) because they all have this status 
>> structure... yes it's quite difficult to do :
>>
>
> I don't know what it is you want to do so if you could illustrate the SQL 
> that resembles what you want, I'd have some idea.   If you are trying to 
> build a Query object, then you are working with SQL and you'd need to 
> understand the structure you're looking for.
>
>
>  
>
>>
>> right now I'm on this stage, but still don't work
>> self.session.query(Object).outerjoin(ObjectStatus).outerjoin(Status)\
>> .group_by(Object).with_entities(Object, 
>> func.max(Status.datum).label("status_datum")).subquery()
>>
>> self.session.query(Object).join((sub, sub.c.id == Status.id))\
>> .filter(sub.c.statustype_id > 50)
>>
>> On Mon, May 13, 2019 at 16:26, Mike Bayer wrote:
>>
>>> Hi -
>>>
>>> I was hoping someone could jump in on this.
>>>
>>> do you know the SQL that you want to emit?  E.g. plain SQL string.   I
>>> can help you do that part.
>>>
>>>
>>> On Mon, May 13, 2019 at 9:11 AM Scheck David >> > wrote:
>>> >
>>> > I think I'm near but I can't finish :
>>> >
>>> > for all the structure : https://dpaste.de/fek5#L
>>> >
>>> > and here my query :
>>> >
>>> > self.session.query(Object).outerjoin(ObjectStatus).outerjoin(Status)\
>>> > .group_by(Object).with_entities(Object, 
>>> func.max(Status.datum).label("status_datum")).subquery()
>>> >
>>> > self.session.query(Object).join((sub, sub.c.id == Status.id))\
>>> > .filter(sub.c.statustype_id > 50)
>>> >
>>> > but status type not reachable.
>>> >
>>> >

Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-14 Thread Scheck David

Here is my SQL for this case:

SELECT DISTINCT erfgoed.id AS id,
       statussen.statustype_id AS statype_id,
       statussen.datum AS datum,
       erfgoed.naam AS naam
FROM erfgoedobjecten AS erfgoed
JOIN erfgoedobjecten_statussen AS erfgoedobjectstatus
  ON erfgoed.id = erfgoedobjectstatus.erfgoedobject_id
JOIN statussen
  ON statussen.id = (
      SELECT id
      FROM statussen
      WHERE statussen.id = erfgoedobjectstatus.status_id
      ORDER BY datum DESC
  )
WHERE statussen.statustype_id > 50
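
For comparison, a hedged SQLAlchemy sketch of the same "latest status per
object" filter using a plain group-by/join-back pattern rather than the lateral
approach tried earlier in the thread. The mapped class names (Erfgoedobject,
ErfgoedobjectStatus, Status) and the assumption that datum is unique per object
are mine, not the original poster's:

from sqlalchemy import func

# most recent status datum per erfgoedobject
latest = (
    session.query(
        ErfgoedobjectStatus.erfgoedobject_id.label("obj_id"),
        func.max(Status.datum).label("max_datum"),
    )
    .join(Status, Status.id == ErfgoedobjectStatus.status_id)
    .group_by(ErfgoedobjectStatus.erfgoedobject_id)
    .subquery()
)

# join back to the status row carrying that maximum datum, then filter on its type
query = (
    session.query(Erfgoedobject)
    .join(latest, latest.c.obj_id == Erfgoedobject.id)
    .join(ErfgoedobjectStatus,
          ErfgoedobjectStatus.erfgoedobject_id == Erfgoedobject.id)
    .join(Status,
          (Status.id == ErfgoedobjectStatus.status_id)
          & (Status.datum == latest.c.max_datum))
    .filter(Status.statustype_id > 50)
)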




Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-14 Thread Scheck David
OK, I'll try to build this query in SQL. Thanks :)

On Mon, May 13, 2019 at 17:43, James Fennell wrote:

> I think Mike's suggestion was to construct the raw SQL string you want,
> then reverse engineer it to get the correct SQLAlchemy code, which you can
> then use with your different models. For complicated SQL logic I think this
> is a good practice in general.
>
> Your current question seems like a general SQL question rather than
> something specific to SQLAlchemy. After you've got the SQL, we could discuss
> the reverse engineering.
>
> On Mon, May 13, 2019, 10:37 AM Scheck David 
>> the problem is that I can't use SQL for this because this is a mixins
>> that I use for several objects (tables) because they all have this status
>> structure... yes it's quite difficult to do :
>>
>> right now I'm on this stage, but still don't work
>> self.session.query(Object).outerjoin(ObjectStatus).outerjoin(Status)\
>> .group_by(Object).with_entities(Object,
>> func.max(Status.datum).label("status_datum")).subquery()
>>
>> self.session.query(Object).join((sub, sub.c.id == Status.id))\
>> .filter(sub.c.statustype_id > 50)
>>
>> On Mon, May 13, 2019 at 16:26, Mike Bayer wrote:
>>
>>> Hi -
>>>
>>> I was hoping someone could jump in on this.
>>>
>>> do you know the SQL that you want to emit?  E.g. plain SQL string.   I
>>> can help you do that part.
>>>
>>>
>>> On Mon, May 13, 2019 at 9:11 AM Scheck David  wrote:
>>> >
>>> > I think I'm near but I can't finish :
>>> >
>>> > for all the structure : https://dpaste.de/fek5#L
>>> >
>>> > and here my query :
>>> >
>>> > self.session.query(Object).outerjoin(ObjectStatus).outerjoin(Status)\
>>> > .group_by(Object).with_entities(Object,
>>> func.max(Status.datum).label("status_datum")).subquery()
>>> >
>>> > self.session.query(Object).join((sub, sub.c.id == Status.id))\
>>> > .filter(sub.c.statustype_id > 50)
>>> >
>>> > but status type not reachable.
>>> >
>>> >

Re: [sqlalchemy] Re: Query last relation created and attributes

2019-05-13 Thread Scheck David
The problem is that I can't use raw SQL for this, because it is a mixin that
I use for several objects (tables); they all have this status structure...
yes, it's quite difficult to do.

Right now I'm at this stage, but it still doesn't work:

sub = (
    self.session.query(Object)
    .outerjoin(ObjectStatus)
    .outerjoin(Status)
    .group_by(Object)
    .with_entities(Object, func.max(Status.datum).label("status_datum"))
    .subquery())

self.session.query(Object).join((sub, sub.c.id == Status.id))\
    .filter(sub.c.statustype_id > 50)

On Mon, May 13, 2019 at 16:26, Mike Bayer wrote:

> Hi -
>
> I was hoping someone could jump in on this.
>
> do you know the SQL that you want to emit?  E.g. plain SQL string.   I
> can help you do that part.
>
>
> On Mon, May 13, 2019 at 9:11 AM Scheck David  wrote:
> >
> > I think I'm near but I can't finish :
> >
> > for all the structure : https://dpaste.de/fek5#L
> >
> > and here my query :
> >
> > self.session.query(Object).outerjoin(ObjectStatus).outerjoin(Status)\
> > .group_by(Object).with_entities(Object,
> func.max(Status.datum).label("status_datum")).subquery()
> >
> > self.session.query(Object).join((sub, sub.c.id == Status.id))\
> > .filter(sub.c.statustype_id > 50)
> >
> > but status type not reachable.
> >
> >
>


-- 

With kindest regards,


*David SCHECK*

PRESIDENT/DEVELOPER

[image: Signature Logo Sphax Bleu-01.png]

Phone: +32 4 87 86 70 12
Visit our website ! https://www.sphax.org

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/CAOPG6scR1rWJTBN%3DEArX2buPqiVsGaPiQq4wYGU58Renqae0kA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Query last relation created and attributes

2019-05-13 Thread Scheck David
I think I'm close, but I can't finish it:

for all the structure : https://dpaste.de/fek5#L

and here is my query:

sub = self.session.query(Object).outerjoin(ObjectStatus).outerjoin(Status)\
    .group_by(Object).with_entities(Object,
        func.max(Status.datum).label("status_datum")).subquery()

self.session.query(Object).join((sub, sub.c.id == Status.id))\
    .filter(sub.c.statustype_id > 50)

but the status type is not reachable.


-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/fe2da543-2710-4b44-9f88-e68c5e35faea%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Query last relation created and attributes

2019-05-10 Thread Scheck David
I did this:

self.session.query(Object)\
    .join(Object.statussen)\
    .filter(Status.id == self.session.query(Status)
            .order_by(desc(Status.datum)).first().id)\
    .filter(Statustype.id > 50).all()

but I still don't get the result I expect
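
(For reference: the .first() call above runs immediately and returns the single
newest Status in the whole table, not the newest one per Object. An untested
sketch of a correlated scalar subquery that keeps the comparison per object;
the ObjectStatus association and its column names are assumptions:)

from sqlalchemy import func

# newest datum among the statuses attached to *this* Object row
newest_datum = (
    self.session.query(func.max(Status.datum))
    .join(ObjectStatus, ObjectStatus.status_id == Status.id)
    .filter(ObjectStatus.object_id == Object.id)
    .correlate(Object)
    .as_scalar()  # .scalar_subquery() on newer SQLAlchemy versions
)

q = (
    self.session.query(Object)
    .join(ObjectStatus, ObjectStatus.object_id == Object.id)
    .join(Status, ObjectStatus.status_id == Status.id)
    .filter(Status.datum == newest_datum)
    .filter(Status.statustype_id > 50)  # FK filter, same idea as Statustype.id > 50
)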

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/99179cb4-a6b0-409f-a4b1-06fc5abf7fb8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Query last relation created and attributes

2019-05-09 Thread Scheck David
Hi,

I have to perform a query that filters on an attribute of the most recently
created relation, but I don't know how to do it. I checked the docs and I
don't get it.

class Statustype(Base):
    __tablename__ = 'statustypes'
    id = Column(Integer, nullable=False, primary_key=True)
    naam = Column(String(50), nullable=False)

class Status(Base):
    __tablename__ = 'statussen'
    id = Column(Integer, nullable=False, primary_key=True)
    statustype_id = Column(Integer, ForeignKey('statustypes.id'),
                           nullable=False)
    datum = Column(DateTime(timezone=True), nullable=False,
                   default=func.now())

    status = relationship('Statustype')

class Object(Base):
    __tablename__ = 'aanduidingsobjecten'
    naam = Column(String(255), nullable=False)
    type_id = Column(Integer, ForeignKey('aanduidingsobjecttypes.id'),
                     nullable=False)
    statussen = relationship(
        "AanduidingsobjectStatus",
        order_by='desc(AanduidingsobjectStatus.status_datum)',
        backref='aanduidingsobject',
        cascade='all, delete, delete-orphan',
        lazy='subquery'
    )

Here is the query I've got so far:
session.query(Object)\
    .join(Object.statussen)\
    .filter(Statustype.id > 50).all()

But I don't see how to perform the check on the latest status, and how to do
it all in only one query. Do you have an idea?
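
(A note in passing: the query above never joins Status or Statustype, so the
Statustype.id filter ends up against an implicit cross join. An untested sketch
of the missing join chain, assuming the association class
AanduidingsobjectStatus (not shown) exposes a `status` relationship to Status;
it still matches objects having *any* status of that type, not only the latest
one:)

q = (
    session.query(Object)
    .join(Object.statussen)                # -> AanduidingsobjectStatus
    .join(AanduidingsobjectStatus.status)  # -> Status (assumed relationship name)
    .join(Status.status)                   # -> Statustype (relationship named `status` above)
    .filter(Statustype.id > 50)
    .all()
)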

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/99bc3fc7-d2fb-4629-b81e-b1558e578669%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: can't update images

2019-03-29 Thread Scheck David
OK, I solved it on the PostgreSQL side, thanks.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] can't update images

2019-03-27 Thread Scheck David
It's OK, it's saving, but what is strange is that my query doesn't find these
records:

This is my SQL:

sql = (
    'SELECT ss.id, ss.name, ss.image_metadata'
    ' FROM (SELECT image.id, image.image_metadata, image.name,'
    '       json_array_elements(image.uri_reference)::text AS uri_ref'
    '       FROM image) ss'
    ' WHERE ss.uri_ref = :uri ORDER BY ss.id LIMIT 5'
)
result = self.session.execute(text(sql).params(uri=uri))
return result

On Wed, Mar 27, 2019 at 16:04, Scheck David wrote:

> Hi everyone,
>
> I try to update an attribute but it appears that it doesn't save. Would
> you know why ? it's very strange because the print is good. but when I
> query again this uri_reference it finds Nonetype.
>
> here is the command :
>
> ```
>
> @contextlib.contextmanager
> def db_session(settings):
> engine = engine_from_config(settings, 'sqlalchemy.')
> session_maker = sessionmaker(bind=engine)
> session = session_maker()
> try:
> yield session
> session.commit()
> except Exception:
> session.rollback()
> finally:
> session.close()
>
>
> def process(settings):
> """
> Command to migrate urls to a single JSON field for performance issues
> """
> with db_session(settings) as session:
> print ("")
> print ("SELECTING ALL IMAGES")
> print ("")
> for image in session.query(Image).limit(40).all():
> urls = _list_uri_references_for_data(image.image_metadata)
> urls.append(image.state.actor_uri)
> image.uri_reference = urls
> print ("{}: SAVING -> {}".format(image.name,
> image.uri_reference))
> print ("")
> print ("CLOSING SCRIPT")
> print ("")
> ```
>
> Thanks in advance.
>
>


-- 

With kindest regards,


*David SCHECK*

PRESIDENT/DEVELOPER

[image: Signature Logo Sphax Bleu-01.png]

Phone: +32 4 87 86 70 12
Visit our website ! https://www.sphax.org

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] can't update images

2019-03-27 Thread Scheck David
Hi everyone,

I'm trying to update an attribute, but it appears that it doesn't save. Would
you know why? It's very strange, because the printed value looks right, but
when I query this uri_reference again it comes back as None.

here is the command : 

```

@contextlib.contextmanager
def db_session(settings):
    engine = engine_from_config(settings, 'sqlalchemy.')
    session_maker = sessionmaker(bind=engine)
    session = session_maker()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
    finally:
        session.close()


def process(settings):
    """
    Command to migrate urls to a single JSON field for performance issues
    """
    with db_session(settings) as session:
        print ("")
        print ("SELECTING ALL IMAGES")
        print ("")
        for image in session.query(Image).limit(40).all():
            urls = _list_uri_references_for_data(image.image_metadata)
            urls.append(image.state.actor_uri)
            image.uri_reference = urls
            print ("{}: SAVING -> {}".format(image.name,
                                             image.uri_reference))
        print ("")
        print ("CLOSING SCRIPT")
        print ("")
```

Thanks in advance.
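
(One thing worth noting in passing, not necessarily the cause of the problem:
as written, db_session() swallows every exception with a bare rollback, so a
failed commit would look exactly like a successful run. A small sketch of the
same context manager that re-raises, so such failures become visible:)

@contextlib.contextmanager
def db_session(settings):
    engine = engine_from_config(settings, 'sqlalchemy.')
    session = sessionmaker(bind=engine)()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise  # surface the real error instead of silently rolling back
    finally:
        session.close()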

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] RAW SQL working on Postgres but not in SQLAlchemy

2019-03-27 Thread Scheck David
Thanks for the answer, Mike, I'll follow your advice. It was indeed a problem
of a wrong database config.

On Tue, Mar 26, 2019 at 19:29, Piyush Nalawade wrote:

> Big thanks. Learned something new.
>
> On Tue, Mar 26, 2019, 11:24 PM Jonathan Vanasco 
> wrote:
>
>>
>>
>> On Tuesday, March 26, 2019 at 1:46:37 PM UTC-4, Piyush Nalawade wrote:
>>>
>>> Hi Mike,
>>>
>>> In the above example does the text and params help to protect from SQL
>>> injection attacks?
>>>
>>
>> yes.
>>
>> see
>> https://docs.sqlalchemy.org/en/latest/core/sqlelement.html?#sqlalchemy.sql.expression.text
>>  on
>> how the raw text is handled
>>
>> in terms of params,
>> https://docs.sqlalchemy.org/en/latest/core/sqlelement.html?#sqlalchemy.sql.expression.ClauseElement.params
>>
>> passing the values in via `params` invokes the database support for bind
>> parameters, which is what protects you from sql injection in values passed
>> in.
>>
>


-- 

With kindest regards,


*David SCHECK*

PRESIDENT/DEVELOPER

[image: Signature Logo Sphax Bleu-01.png]

Phone: +32 4 87 86 70 12
Visit our website ! https://www.sphax.org

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] RAW SQL working on Postgres but not in SQLAlchemy

2019-03-26 Thread Scheck David
Hi,

I have a simple query like this:

def count_references(self, uri):
    sql = (
        'SELECT count(*) FROM (SELECT image.id,'
        ' json_array_elements(image.uri_reference)::text AS uri_ref'
        ' FROM image) ss'
        ' WHERE ss.uri_ref = \'"{0}"\''
    ).format(uri)
    result = self.session.execute(text(sql))

I tested it in pgAdmin and everything works fine, but SQLAlchemy is throwing
an error:

sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation 
"image" does not exist

LINE 1: ...ements(image.uri_reference)::text as uri_ref FROM image) ss ...
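
(As noted in the follow-up, the error itself came down to a wrong database
configuration rather than the SQL. Independently of that, a hedged sketch of
the same query with a bound parameter instead of str.format();
json_array_elements(...)::text keeps the JSON double quotes, so they are added
around the value on the Python side:)

from sqlalchemy import text

def count_references(self, uri):
    sql = text(
        'SELECT count(*) FROM ('
        ' SELECT image.id,'
        '        json_array_elements(image.uri_reference)::text AS uri_ref'
        ' FROM image) ss'
        ' WHERE ss.uri_ref = :uri'
    )
    # wrap in double quotes because ::text of a JSON string element is quoted
    return self.session.execute(sql, {'uri': '"{0}"'.format(uri)}).scalar()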

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Index on nested JSONB field

2019-03-21 Thread Scheck David
@Mike, a real thank you for your patience. I'm sorry I was lost for a while; I
understood what you said and that's what I'm doing now. Thanks for the tips,
and sorry for taking up your time.

On Thu, Mar 21, 2019 at 17:17, Scheck David wrote:

> CREATE INDEX img_createdby_uri_index ON image ((image_metadata ->
> 'systemfields' -> 'created_by' ->> 'uri'));
>
> that's what I want. thank you
>
> Le jeu. 21 mars 2019 à 14:25, Mike Bayer  a
> écrit :
>
>> (noting again, you need the CREATE INDEX statement that PG wants.
>> SQLAlchemy part is only a bonus which isn't strictly needed here).
>>
>> On Thu, Mar 21, 2019 at 9:24 AM Mike Bayer 
>> wrote:
>>
>>> can you go onto some Postgresql forums and ask there?  this is a
>>> Postgresql-specific issue.
>>>
>>> On Thu, Mar 21, 2019 at 8:44 AM Scheck David  wrote:
>>>
>>>> In fact it's a function in postgresql which catch all urls and store it
>>>> in a field. but it's not what I'm searching for. I just have to create an
>>>> index gin and I still don't know how to. I'm just stuck because I don't
>>>> understand how to create index on a nested field.
>>>>
>>>> op.create_index('ix_law_search_vector', 'law', ['search_vector'],
>>>> unique=False, postgresql_using='gin')
>>>>
>>>> my model is like
>>>> Image
>>>> metadata = JSONB
>>>>
>>>> JSONB field :
>>>> { 'updated_by': {'uri': 'https://alink', 'description': 'Beeldbank'}},
>>>>  'akey': [{'uri': 'alink'}], 'aanduidingsobjecten': []
>>>> }
>>>>
>>>> And I need to optimise the search through all the keys "uri".
>>>>
>>>> because right now we do something like that :
>>>>
>>>> filter(
>>>> or_(
>>>>
>>>> Image.image_metadata['systemfields']['updated_by']['uri'].astext == uri,
>>>>     Image.image_metadata.contains({'akey': [{'uri': uri}]}),
>>>> ImageState.actor_uri == uri
>>>> ))
>>>>
>>>> and it takes 70s on a database with 27 records. and that's just not
>>>> possible.
>>>>
>>>> I've another query where it's 2 sec (I don't understand that) :
>>>> filter(Image.image_metadata.contains({'akey': [{'uri': uri}]}))
>>>>
>>>>
>>>> --
>>>>
>>>> With kindest regards,
>>>>
>>>>
>>>> *David SCHECK*
>>>>
>>>> PRESIDENT/DEVELOPER
>>>>
>>>> [image: Signature Logo Sphax Bleu-01.png]
>>>>
>>>> Phone: +32 4 87 86 70 12
>>>> Visit our website ! https://www.sphax.org
>>>>
>>
>
>
> --
>
> With kindest regards,
>
>
> *David SCHECK*
>
> PRESIDENT/DEVELOPER
>
> [image: Signature Logo Sphax Bleu-01.png]
>
> Phone: +32 4 87 86 70 12
> Visit our website ! https://www.sphax.org
>


-- 

With kindest regards,


*David SCHECK*

PRESIDENT/DEVELOPER

[image: Signature Logo Sphax Bleu-01.png]

Phone: +32 4 87 86 70 12
Visit our website ! https://www.sphax.org

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Index on nested JSONB field

2019-03-21 Thread Scheck David
CREATE INDEX img_createdby_uri_index ON image ((image_metadata ->
'systemfields' -> 'created_by' ->> 'uri'));

that's what I want. thank you
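
(A minimal Alembic sketch of emitting exactly that DDL with op.execute(), as
suggested earlier in the thread; the downgrade simply drops the same index
name:)

from alembic import op

def upgrade():
    op.execute(
        "CREATE INDEX img_createdby_uri_index ON image "
        "((image_metadata -> 'systemfields' -> 'created_by' ->> 'uri'))"
    )

def downgrade():
    op.execute("DROP INDEX img_createdby_uri_index")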

On Thu, Mar 21, 2019 at 14:25, Mike Bayer wrote:

> (noting again, you need the CREATE INDEX statement that PG wants.
> SQLAlchemy part is only a bonus which isn't strictly needed here).
>
> On Thu, Mar 21, 2019 at 9:24 AM Mike Bayer 
> wrote:
>
>> can you go onto some Postgresql forums and ask there?  this is a
>> Postgresql-specific issue.
>>
>> On Thu, Mar 21, 2019 at 8:44 AM Scheck David  wrote:
>>
>>> In fact it's a function in postgresql which catch all urls and store it
>>> in a field. but it's not what I'm searching for. I just have to create an
>>> index gin and I still don't know how to. I'm just stuck because I don't
>>> understand how to create index on a nested field.
>>>
>>> op.create_index('ix_law_search_vector', 'law', ['search_vector'],
>>> unique=False, postgresql_using='gin')
>>>
>>> my model is like
>>> Image
>>> metadata = JSONB
>>>
>>> JSONB field :
>>> { 'updated_by': {'uri': 'https://alink', 'description': 'Beeldbank'}},
>>>  'akey': [{'uri': 'alink'}], 'aanduidingsobjecten': []
>>> }
>>>
>>> And I need to optimise the search through all the keys "uri".
>>>
>>> because right now we do something like that :
>>>
>>> filter(
>>> or_(
>>>
>>> Image.image_metadata['systemfields']['updated_by']['uri'].astext == uri,
>>> Image.image_metadata.contains({'akey': [{'uri': uri}]}),
>>> ImageState.actor_uri == uri
>>> ))
>>>
>>> and it takes 70s on a database with 27 records. and that's just not
>>> possible.
>>>
>>> I've another query where it's 2 sec (I don't understand that) :
>>> filter(Image.image_metadata.contains({'akey': [{'uri': uri}]}))
>>>
>>>
>>> --
>>>
>>> With kindest regards,
>>>
>>>
>>> *David SCHECK*
>>>
>>> PRESIDENT/DEVELOPER
>>>
>>> [image: Signature Logo Sphax Bleu-01.png]
>>>
>>> Phone: +32 4 87 86 70 12
>>> Visit our website ! https://www.sphax.org
>>>
>


-- 

With kindest regards,


*David SCHECK*

PRESIDENT/DEVELOPER

[image: Signature Logo Sphax Bleu-01.png]

Phone: +32 4 87 86 70 12
Visit our website ! https://www.sphax.org

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Index on nested JSONB field

2019-03-21 Thread Scheck David
In fact it's a function in PostgreSQL which catches all URLs and stores them
in a field, but it's not what I'm looking for. I just have to create a GIN
index, and I still don't know how. I'm just stuck because I don't understand
how to create an index on a nested field.

op.create_index('ix_law_search_vector', 'law', ['search_vector'],
unique=False, postgresql_using='gin')

my model is like
Image
metadata = JSONB

JSONB field:
{
    'updated_by': {'uri': 'https://alink', 'description': 'Beeldbank'},
    'akey': [{'uri': 'alink'}],
    'aanduidingsobjecten': []
}

And I need to optimise the search through all the keys "uri".

because right now we do something like that :

filter(
    or_(
        Image.image_metadata['systemfields']['updated_by']['uri'].astext == uri,
        Image.image_metadata.contains({'akey': [{'uri': uri}]}),
        ImageState.actor_uri == uri
    ))

and it takes 70 seconds on a database with 27 records, and that's just not
acceptable.

I have another query where it takes 2 seconds (I don't understand why):
filter(Image.image_metadata.contains({'akey': [{'uri': uri}]}))
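
(A side note, untested: .contains() on a JSONB column compiles to the @>
containment operator, which a plain GIN index on the whole column can serve,
while the ['...']['uri'].astext comparison is what needs an expression index
like the one discussed in this thread. A sketch of the former as a migration,
with ix_image_metadata_gin as a placeholder name:)

from alembic import op

def upgrade():
    # GIN index over the whole JSONB column; serves the @> containment queries
    op.execute(
        "CREATE INDEX ix_image_metadata_gin ON image USING gin (image_metadata)"
    )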


-- 

With kindest regards,


*David SCHECK*

PRESIDENT/DEVELOPER

[image: Signature Logo Sphax Bleu-01.png]

Phone: +32 4 87 86 70 12
Visit our website ! https://www.sphax.org

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Index on nested JSONB field

2019-03-21 Thread Scheck David
The only problem is when I query with SQLAlchemy against a field like
"uri,uri,uri": is there a simple query to extract this uri? Something like a
contains?
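
(If that aggregated field ends up as a plain text column, an untested sketch;
Image.uri_reference_text is a placeholder name for whatever column holds the
"uri,uri,uri" string, and .contains() on a String column simply renders a
LIKE '%...%' match:)

matches = (
    session.query(Image)
    .filter(Image.uri_reference_text.contains(uri))  # LIKE '%<uri>%'
    .all()
)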

On Thu, Mar 21, 2019 at 09:35, Scheck David wrote:

> I found this which could make the work. because it seems that it index all
> in one field and this will improve my performances. what do you think?
>
> https://stackoverflow.com/questions/40106609/indexing-nested-json-with-postgres
>
> I think this could do the trick
>
> Le mer. 20 mars 2019 à 17:03, Mike Bayer  a
> écrit :
>
>> On Wed, Mar 20, 2019 at 11:04 AM david scheck 
>> wrote:
>> >
>> > Hi everyone,
>> >
>> > I'm trying to implement a nested JSONB index on a nested field in the
>> json file. I searched through internet and came at the conclusion that I
>> had to create it manually.
>> >
>> > so that's where I am.
>> >
>> > op.create_index('ix_law_search_vector', 'law', ['search_vector'],
>> unique=False, postgresql_using='gin')
>> >
>> >
>> > first of all, I don't understand the 3rd argument. what does this
>> represent? the name of the index?
>>
>> the first argument is the name of the index.All the remaining
>> positional arguments are SQL expressions.  So the ['search_vector'] is
>> probably wrong.
>>
>> >
>> > this is what should be. but my problem is that my desire index is very
>> deep in the JSONB.
>>
>> >
>> >
>> > that's what I've to target :
>> Image.image_metadata['afield']['anotherfield']['fieldindex']
>> >
>> >
>> > I want to create an index on fieldindex.
>> >
>> >
>> > but if I do :
>> >
>> > op.create_index('uri', ['search_vector'], unique=False,
>> postgresql_using='gin')
>> >
>> >
>> > as there is a lot of field 'uri' in the json, will it index all this
>> fields in 1 ?
>> >
>> >
>> > (I'm new to sqlalchemy, I usually use the django orm, so sorry in
>> advance).
>>
>> Right so in SQLAlchemy, you don't have to figure any of this out.
>> Just get the Postgresql syntax you want exactly and run it,
>> op.execute("CREATE INDEX whatever").if you show me the syntax I
>> can see how to make Index() do it too but to just emit a very specific
>> CREATE INDEX, you can just type it out.
>>
>>
>>
>> >
>> >
>> > Thanks for answers
>> >
>> >
>>
>
>
> --
>
> With kindest regards,
>
>
> *David SCHECK*
>
> PRESIDENT/DEVELOPER
>
> [image: Signature Logo Sphax Bleu-01.png]
>
> Phone: +32 4 87 86 70 12
> Visit our website ! https://www.sphax.org
>


-- 

With kindest regards,


*David SCHECK*

PRESIDENT/DEVELOPER

[image: Signature Logo Sphax Bleu-01.png]

Phone: +32 4 87 86 70 12
Visit our website ! https://www.sphax.org

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Index on nested JSONB field

2019-03-21 Thread Scheck David
I found this, which could do the job, because it seems that it indexes
everything in one field and this will improve my performance. What do you think?
https://stackoverflow.com/questions/40106609/indexing-nested-json-with-postgres

I think this could do the trick

On Wed, Mar 20, 2019 at 17:03, Mike Bayer wrote:

> On Wed, Mar 20, 2019 at 11:04 AM david scheck 
> wrote:
> >
> > Hi everyone,
> >
> > I'm trying to implement a nested JSONB index on a nested field in the
> json file. I searched through internet and came at the conclusion that I
> had to create it manually.
> >
> > so that's where I am.
> >
> > op.create_index('ix_law_search_vector', 'law', ['search_vector'],
> unique=False, postgresql_using='gin')
> >
> >
> > first of all, I don't understand the 3rd argument. what does this
> represent? the name of the index?
>
> the first argument is the name of the index.All the remaining
> positional arguments are SQL expressions.  So the ['search_vector'] is
> probably wrong.
>
> >
> > this is what should be. but my problem is that my desire index is very
> deep in the JSONB.
>
> >
> >
> > that's what I've to target :
> Image.image_metadata['afield']['anotherfield']['fieldindex']
> >
> >
> > I want to create an index on fieldindex.
> >
> >
> > but if I do :
> >
> > op.create_index('uri', ['search_vector'], unique=False,
> postgresql_using='gin')
> >
> >
> > as there is a lot of field 'uri' in the json, will it index all this
> fields in 1 ?
> >
> >
> > (I'm new to sqlalchemy, I usually use the django orm, so sorry in
> advance).
>
> Right so in SQLAlchemy, you don't have to figure any of this out.
> Just get the Postgresql syntax you want exactly and run it,
> op.execute("CREATE INDEX whatever").if you show me the syntax I
> can see how to make Index() do it too but to just emit a very specific
> CREATE INDEX, you can just type it out.
>
>
>
> >
> >
> > Thanks for answers
> >
> >
>


-- 

With kindest regards,


*David SCHECK*

PRESIDENT/DEVELOPER

[image: Signature Logo Sphax Bleu-01.png]

Phone: +32 4 87 86 70 12
Visit our website ! https://www.sphax.org

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Index on nested JSONB field

2019-03-20 Thread david scheck
Hi everyone,

I'm trying to implement a JSONB index on a nested field inside the JSON
document. I searched the internet and came to the conclusion that I had to
create it manually.

so that's where I am. 

op.create_index('ix_law_search_vector', 'law', ['search_vector'], 
unique=False, postgresql_using='gin')


First of all, I don't understand the third argument. What does it represent?
The name of the index?

This is how it should look, but my problem is that the index I want is very
deep in the JSONB.


that's what I've to target : 
Image.image_metadata['afield']['anotherfield']['fieldindex']


I want to create an index on fieldindex.


but if I do :

op.create_index('uri', ['search_vector'], unique=False, 
postgresql_using='gin')


as there are a lot of 'uri' fields in the JSON, will it index all of these
fields as one?


(I'm new to sqlalchemy, I usually use the django orm, so sorry in advance).


Thanks for answers

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] What is the best way to cite/reference sqlalchemy in a paper?

2018-11-15 Thread David Hoese
You are typically assigned one when you publish a paper and the publisher 
makes it available online. Another common thing I've seen lately with 
libraries I work with is putting them on zenodo (https://zenodo.org/). I 
know for github projects you can link your repository and any time you make 
a release on github, zenodo will automatically update the version number 
and create a new DOI. I'm not sure if they have anything that works with 
bitbucket. Here's an example with the vispy library I work 
on: https://zenodo.org/record/1112438


On Thursday, November 15, 2018 at 7:10:55 PM UTC-6, Mike Bayer wrote:
>
> On Thu, Nov 15, 2018 at 2:03 PM David Hoese  > wrote: 
> > 
> > My team is writing a paper that uses sqlalchemy. We would like to 
> reference sqlalchemy and use a DOI if one is available. How would the 
> SQLAlchemy project prefer to be cited? Thanks. 
>
> you can... reference the website?   I just googled what DOI is and I'm 
> not familiar with that.Where would I get one of those? 
>
>
>
> > 
> > Dave 
> > 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] What is the best way to cite/reference sqlalchemy in a paper?

2018-11-15 Thread David Hoese
My team is writing a paper that uses sqlalchemy. We would like to reference 
sqlalchemy and use a DOI if one is available. How would the SQLAlchemy 
project prefer to be cited? Thanks.

Dave

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Can't get ORM to correlate when the query has a union

2018-07-16 Thread 'David C' via sqlalchemy
I'll try exists().where(). Thanks for the bug fix!

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Can't get ORM to correlate when the query has a union

2018-07-14 Thread 'David C' via sqlalchemy
I can't find any way to get SqlAlchemy to correlate a table referenced in a 
computed column when the query has a union.

Simplified test case:

Base = declarative.declarative_base()


class MyTable(Base):
__tablename__ = 'my_table'
score = Column(Integer, primary_key=True)


my_alias = aliased(MyTable, name="my_alias")

As a silly query, suppose I wanted to list each score along with an 
indicator of whether a higher score exists in the table. This works fine:

query_1 = Query(MyTable)
query_1 = query_1.add_column(Query(my_alias).filter(my_alias.score > MyTable
.score).exists())

and produces the query:

SELECT my_table.score AS my_table_score, EXISTS (SELECT 1 
FROM my_table AS my_alias 
WHERE my_alias.score > my_table.score) AS anon_1 
FROM my_table

However, if the query has a union:

query_2 = Query(MyTable).union(Query(MyTable))
query_2 = query_2.add_column(Query(my_alias).filter(my_alias.score > MyTable
.score).exists())

then SqlAlchemy refuses to correlate the MyTable in the added column with 
the MyTable in the outer query:

SELECT anon_1.my_table_score AS anon_1_my_table_score, EXISTS (SELECT 1 
FROM my_table AS my_alias, my_table  -- How do I get rid of the second 
my_table here?
WHERE my_alias.score > my_table.score) AS anon_2 
FROM (SELECT my_table.score AS my_table_score 
FROM my_table UNION SELECT my_table.score AS my_table_score 
FROM my_table) AS anon_1

I tried adding an explicit correlate(), but it does not help:

query_2 = query_2.add_column(Query(my_alias).filter(my_alias.score > MyTable
.score).correlate(MyTable).exists())

How can I force the MyTable in the added column to correlate with the outer 
query?
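
(For the record, an untested sketch of the exists().where() direction
mentioned in the follow-up: select from the union as an aliased entity and
build the EXISTS against that alias, so there is no second my_table left for
the subquery to pull in:)

from sqlalchemy import exists
from sqlalchemy.orm import Query, aliased

union_q = Query(MyTable).union(Query(MyTable)).subquery()
u = aliased(MyTable, union_q, name="u")

query_3 = Query(u).add_column(
    exists().where(my_alias.score > u.score)
)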

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Is this normal...

2018-02-17 Thread David A
I should note this is with cx-Oracle 6.1 and SQLAlchemy 1.2.3

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Is this normal...

2018-02-17 Thread David A
Noticed something I haven't seen before (as I don't spend a lot of time with
echo=True set...)


class Test(Base):
__tablename__ = 'TEST'

id = Column(Integer, primary_key=True)
test = Column(String)


engine = create_engine('oracle://user:p...@xxx.xxx.xxx.xxx:1521/ORCL', echo=
True, )

Session = sessionmaker(bind=engine)
session = Session()

new = Test(test='foo')

session.add(new)

session.commit()




2018-02-17 20:10:09,309 INFO sqlalchemy.engine.base.Engine SELECT USER FROM 
DUAL
2018-02-17 20:10:09,309 INFO sqlalchemy.engine.base.Engine {}
2018-02-17 20:10:09,337 INFO sqlalchemy.engine.base.Engine SELECT 
CAST('test plain returns' AS VARCHAR(60 CHAR)) AS anon_1 FROM DUAL
2018-02-17 20:10:09,337 INFO sqlalchemy.engine.base.Engine {}
2018-02-17 20:10:09,358 INFO sqlalchemy.engine.base.Engine SELECT 
CAST('test unicode returns' AS NVARCHAR2(60)) AS anon_1 FROM DUAL
2018-02-17 20:10:09,358 INFO sqlalchemy.engine.base.Engine {}
2018-02-17 20:10:09,401 INFO sqlalchemy.engine.base.Engine select value 
from nls_session_parameters where parameter = 'NLS_NUMERIC_CHARACTERS'
2018-02-17 20:10:09,401 INFO sqlalchemy.engine.base.Engine {}
2018-02-17 20:10:09,422 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2018-02-17 20:10:09,424 INFO sqlalchemy.engine.base.Engine INSERT INTO 
"TEST" (test) VALUES (:test) RETURNING "TEST".id INTO :ret_0
*2018-02-17 20:10:09,424 INFO sqlalchemy.engine.base.Engine {'test': 'foo', 
'ret_0': }*
2018-02-17 20:10:09,448 INFO sqlalchemy.engine.base.Engine COMMIT


What is up with the None, None, None?   Is this normal?

Table is as follows.


CREATE TABLE FOO.TEST
(
  ID    INTEGER GENERATED ALWAYS AS IDENTITY ( START WITH 21 MAXVALUE 
 MINVALUE 1 CYCLE CACHE 20 ORDER KEEP) NOT NULL,
  TEST  VARCHAR2(10 BYTE)
)
TABLESPACE TAB_LIVE
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE(
INITIAL  1M
NEXT 1M
MINEXTENTS   1
MAXEXTENTS   UNLIMITED
PCTINCREASE  0
BUFFER_POOL  DEFAULT
   )
LOGGING 
NOCOMPRESS 
NOCACHE
MONITORING;


CREATE OR REPLACE PUBLIC SYNONYM TEST FOR FOO.TEST;

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Is accessing the ResultProxy like objects a supported feature?

2018-02-08 Thread David Michael Gang
Hi,

As explained here:
https://stackoverflow.com/a/31275607/2343743

The result set of fetchone can be accessed:

row = fetchone()

col1 = row[0] # access via integer position

col2 = row['col2'] # access via name

col3 = row[mytable.c.mycol] # access via Column object.


I accessed it like an object

row.col2 

and it worked.

Is this way supported? Or could it be deprecated in a future version?


Thanks

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] pyodbc and is_disconnect

2017-09-28 Thread David Moore
Fantastic, thanks so much for the help, Mike.

Kind regards,


David Moore

Support Technical Lead

j5 Software South Africa

Skype:

Phone:

Email:

Web:

davidm.j5int

+27 21 762 1440

dav...@j5int.com <em...@j5int.com>

www.j5int.com

[image: j5InternationalLogo_small.png]

--

This message is confidential. It may also be privileged or otherwise
protected by work product immunity or other legal rules. If you have
received it by mistake, please let us know by e-mail reply and delete it
from your system; you may not copy this message or disclose its contents to
anyone. Please send us by fax any message containing deadlines as incoming
e-mails are not screened for response deadlines. The integrity and security
of this message cannot be guaranteed on the Internet.You should carry out
your own virus checks before opening any attachments. Opinions, conclusions
and other information that do not relate to the official business of the
company are neither given nor endorsed by it.


On Thu, Sep 28, 2017 at 3:46 PM, Mike Bayer <mike...@zzzcomputing.com>
wrote:

>
>
> On Thu, Sep 28, 2017 at 3:58 AM, David Moore <dav...@j5int.com> wrote:
>
>> Hi,
>>
>> I've recently had an issue with pyodbc not correctly identifying a
>> disconnect exception when connected to a replicating SQL Server cluster
>> with failover. As far as I've been able to ascertain, what happened is that
>> the database failed over, leaving all connections in a weird state. Since
>> sqlalchemy didn't correctly identify the errors as disconnect exceptions,
>> it kept trying to use the connections which never became usable again. We
>> recycle connections after an hour, but that left us with an hour of no
>> database functionality. Production SQLAlchemy version is 1.0.6, but there
>> is no relevant change I can see on latest master.
>>
>> So, I went digging into how sqlalchemy classifies these errors. It seems
>> the only disconnect condition that pyodbc takes special note of is [08S01]
>> (in lib/sqlalchemy/connectors/pyodbc.py). Back before we used
>> sqlalchemy, we collected a more comprehensive set of errors which imply a
>> disconnection event, and I'd love to see these in sqlalchemy. These
>> are '01002', '08003', '08007', '08S01','08S02', '08001', 'HYT00' and
>> 'HY010'.
>>
>> So, two questions (assuming that these additions will be welcome):
>>   * I notice 08S01 is handled in lib/sqlalchemy/connectors/pyodbc.py,
>> whereas pymssql looks for disconnect errors in
>> lib/sqlalchemy/dialects/pymssql.py. Where is the correct place to patch
>> this? The former, or lib/sqlalchemy/dialects/pyodbc.py?
>>
>
> I think the presence of 08S01 in connectors/pyodbc.py is a bug -this error
> code is specific to SQL Server, so should be in dialects/mssql/pyodbc.
> For that and adding the error codes https://bitbucket.org/zzzeek/
> sqlalchemy/issues/4095/sql-server-close-connection-codes-for is added
> should be pushed today.
>
>
>
>>   * Is there a place I can hook or extend or override our current setup
>> to get this detection into production without a full sqlalchemy upgrade
>> testing cycle?
>>
>
> big time, there's an exception catch hook with deep functionality:
>
> http://docs.sqlalchemy.org/en/latest/core/events.html?
> highlight=handle_error#sqlalchemy.events.ConnectionEvents.handle_error
>
> per the example we can illustrate your codes:
>
> @event.listens_for(Engine, "handle_error")
> def handle_exception(context):
>     if isinstance(context.original_exception, pyodbc.Error):
>         for code in ('08S01', '01002', '08003', '08007',
>                      '08S02', '08001', 'HYT00', 'HY010'):
>             if code in str(context.original_exception):
>                 context.is_disconnect = True
>
>
>
>
>>
>> Kind regards,
>>
>> David Moore
>>
>> Support Technical Lead
>>
>> j5 Software South Africa
>>
>> Skype:
>>
>> Phone:
>>
>> Email:
>>
>> Web:
>>
>> davidm.j5int
>>
>> +27 21 762 1440 <+27%2021%20762%201440>
>>
>> dav...@j5int.com <em...@j5int.com>
>>
>> www.j5int.com
>>
>> [image: j5InternationalLogo_small.png]
>>
>> --
>>
>> This message is confidential. It may also be privileged or otherwise
>> protected by work product immunity or other legal rules. If you have
>> received it by mistake, please let us know by e-mail reply and delete it
>> from your system; you may not copy this message or disclose its contents to
>> anyone. Please send us by fax any message containing 

[sqlalchemy] pyodbc and is_disconnect

2017-09-28 Thread David Moore
Hi,

I've recently had an issue with pyodbc not correctly identifying a
disconnect exception when connected to a replicating SQL Server cluster
with failover. As far as I've been able to ascertain, what happened is that
the database failed over, leaving all connections in a weird state. Since
sqlalchemy didn't correctly identify the errors as disconnect exceptions,
it kept trying to use the connections which never became usable again. We
recycle connections after an hour, but that left us with an hour of no
database functionality. Production SQLAlchemy version is 1.0.6, but there
is no relevant change I can see on latest master.

So, I went digging into how sqlalchemy classifies these errors. It seems
the only disconnect condition that pyodbc takes special note of is [08S01]
(in lib/sqlalchemy/connectors/pyodbc.py). Back before we used sqlalchemy,
we collected a more comprehensive set of errors which imply a disconnection
event, and I'd love to see these in sqlalchemy. These are '01002', '08003',
'08007', '08S01','08S02', '08001', 'HYT00' and 'HY010'.

So, two questions (assuming that these additions will be welcome):
  * I notice 08S01 is handled in lib/sqlalchemy/connectors/pyodbc.py,
whereas pymssql looks for disconnect errors in
lib/sqlalchemy/dialects/pymssql.py. Where is the correct place to patch
this? The former, or lib/sqlalchemy/dialects/pyodbc.py?
  * Is there a place I can hook or extend or override our current setup to
get this detection into production without a full sqlalchemy upgrade
testing cycle?

Kind regards,

David Moore

Support Technical Lead

j5 Software South Africa

Skype:

Phone:

Email:

Web:

davidm.j5int

+27 21 762 1440

dav...@j5int.com <em...@j5int.com>

www.j5int.com

[image: j5InternationalLogo_small.png]

--

This message is confidential. It may also be privileged or otherwise
protected by work product immunity or other legal rules. If you have
received it by mistake, please let us know by e-mail reply and delete it
from your system; you may not copy this message or disclose its contents to
anyone. Please send us by fax any message containing deadlines as incoming
e-mails are not screened for response deadlines. The integrity and security
of this message cannot be guaranteed on the Internet.You should carry out
your own virus checks before opening any attachments. Opinions, conclusions
and other information that do not relate to the official business of the
company are neither given nor endorsed by it.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] session.add() neglecting some of my objects to be added

2017-07-18 Thread David Laredo Razo
Thanks Mike, indeed, that was the problem. I solved it using an "ad hoc"
copy function. This is the solution that worked for me, in case somebody
else runs into the same mistake I did:

from sqlalchemy.orm import class_mapper   # import needed by the helper below

def copy_sqla_object(obj, omit_fk=True):
    """Given an SQLAlchemy object, creates a new object (FOR WHICH THE OBJECT
    MUST SUPPORT CREATION USING __init__() WITH NO PARAMETERS), and copies
    across all attributes, omitting PKs, FKs (by default), and relationship
    attributes."""
    cls = type(obj)
    mapper = class_mapper(cls)
    newobj = cls()  # not: cls.__new__(cls)
    pk_keys = set([c.key for c in mapper.primary_key])
    rel_keys = set([c.key for c in mapper.relationships])
    prohibited = pk_keys | rel_keys
    if omit_fk:
        fk_keys = set([c.key for c in mapper.columns if c.foreign_keys])
        prohibited = prohibited | fk_keys
    for k in [p.key for p in mapper.iterate_properties
              if p.key not in prohibited]:
        try:
            value = getattr(obj, k)
            setattr(newobj, k, value)
        except AttributeError:
            pass
    return newobj
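
A minimal usage sketch (SomeModel is a hypothetical mapped class whose
__init__() works with no arguments, as the docstring requires):

    original = session.query(SomeModel).first()
    clone = copy_sqla_object(original)   # PKs and FKs are omitted, so new ones can be set
    session.add(clone)
    session.commit()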



On Friday, July 14, 2017 at 2:47:45 PM UTC-5, Mike Bayer wrote:
>
> On Fri, Jul 14, 2017 at 1:24 AM, David Laredo Razo 
> <davidl...@gmail.com > wrote: 
>
>
> this code is the problem: 
>
> > 
> > new_object = copy.copy(reading) 
>
> copy() will copy the _sa_instance_state and prevent the session from 
> tracking the object correctly. 
>
> Correct pattern should be: 
>
> new_object = ThermafuserReading(None, componentId) 
>
> Only when you call the constructor (e.g. ThermafuserReading.__init__) 
> do you get a new InstanceState object dedicated to that object. So 
> don't use copy(). 
>
> There *are* ways to use copy() here instead but they are non-obvious 
> and not necessary for a simple case like this. 
>
>
>
>
> > new_object.timestamp = timestamp 
> > 
> > readings.append(new_object) 
> > 
> > #print(new_object, mapper.identity_key_from_instance(new_object)) 
> > #session.add(new_object) 
> > 
> > row_format = "{:>15}" * (len(header) + 1) 
> > 
> > print("Before adding to the session") 
> > print(row_format.format("", *header)) 
> > for reading in readings: 
> > insp = inspect(reading) 
> > row = [hex(id(reading)), insp.transient, insp.pending, insp.persistent, 
> > insp.detached, insp.deleted, reading in session] 
> > print(row_format.format("", *row)) 
> > 
> > session.add_all(readings) 
> > 
> > print("\n#Elements in the session") 
> > print(session) 
> > for element in session: 
> > print(element) 
> > 
> > print("\nAfter adding to the session") 
> > print(row_format.format("", *header)) 
> > for reading in readings: 
> > insp = inspect(reading) 
> > row = [hex(id(reading)), insp.transient, insp.pending, insp.persistent, 
> > insp.detached, insp.deleted, reading in session] 
> > print(row_format.format("", *row)) 
> > 
> > These are some results I obtained by comparing wheter the objects in my 
> list 
> > are in the session or not 
> > 
> > 
> > 
> > 
> > As you can observe, according to the results above the objects are 
> indeed 
> > inside the session but for some reason when I try to print whats 
> contained 
> > in the session by doing 
> > 
> > for element in session: 
> >print(element) 
> > 
> > I just get a None, what am I doing wrong? I dont see anything wrong in 
> my 
> > code, I hope you can help me clarify this. Thanks in advance. 
> > 
> > I will attach both my code and the tests data in case you want to try it 
> by 
> > yourself. 
> > 
> > 
> > 
> > 
> > 
> > On Thursday, July 13, 2017 at 8:27:04 AM UTC-5, Mike Bayer wrote: 
> >> 
> >> On Thu, Jul 13, 2017 at 12:31 AM, David Laredo Razo 
> >> <davidl...@gmail.com> wrote: 
> >> > Hello, I am using SQLAlchemy version 1.2.0b1 
> >> > 
> >> > 
> >> > 
> >> > So far so go, the problem arises when I add readings to the session 
> via 
> >> > session.add_all(readings). I only get the last element in my list 
> added, 
> >> > e.g. 
> >> 
> >> there's no reason at all that would happen, other than what's in 
> >> "readings" is not what you'd expect. 
> >> 
> >> try iterating through every element in "readings" after the add_all(), 
> >> and do "obj in session". 
> >> 
> >> If some of these objects were from a different session, then they may 
> >> be "detached" as you put them in in 

Re: [sqlalchemy] session.add() neglecting some of my objects to be added

2017-07-13 Thread David Laredo Razo
i did as you instructed me but the error persists. 

This is the example code im talking about

# imports assumed by this example (the ThermafuserReading model and a bound
# engine come from the project's own modules and are not shown here)
import copy
import csv

from dateutil.parser import parse
from sqlalchemy import inspect
from sqlalchemy.orm import sessionmaker

Session = sessionmaker()
session = Session()

mapper = inspect(ThermafuserReading)
readings = list()

header = ["hex(id(object))", "is transient", "is pending", "is persistent",
          "is detached", "is deleted", "is in session"]

#Open the csv file
csvFilePath = "/Users/davidlaredorazo/Box Sync/Data/Zone4/1C1A/1C1A 2016-12-31.csv"
with open(csvFilePath, 'r') as csvfile:

    reader = csv.reader(csvfile)
    componentId = 1
    count = 0

    reading = ThermafuserReading(None, componentId)

    for row in reader:

        if count == 0:
            count += 1
            continue

        #print(row)
        timestamp = parse(row[0], None, ignoretz = True)

        reading.timestamp = timestamp
        new_object = copy.copy(reading)
        new_object.timestamp = timestamp

        readings.append(new_object)

        #print(new_object, mapper.identity_key_from_instance(new_object))
        #session.add(new_object)

row_format = "{:>15}" * (len(header) + 1)

print("Before adding to the session")
print(row_format.format("", *header))
for reading in readings:
    insp = inspect(reading)
    row = [hex(id(reading)), insp.transient, insp.pending, insp.persistent,
           insp.detached, insp.deleted, reading in session]
    print(row_format.format("", *row))

session.add_all(readings)

print("\n#Elements in the session")
print(session)
for element in session:
    print(element)

print("\nAfter adding to the session")
print(row_format.format("", *header))
for reading in readings:
    insp = inspect(reading)
    row = [hex(id(reading)), insp.transient, insp.pending, insp.persistent,
           insp.detached, insp.deleted, reading in session]
    print(row_format.format("", *row))

These are some results I obtained by comparing whether the objects in my
list are in the session or not:

<https://lh3.googleusercontent.com/-88hJ0sjxu2M/WWhUYMhRSiI/Ag0/kXliLRlFjZU-ZYwpMhUD5w_tVgNTOf9dgCLcBGAs/s1600/session_error1.png>

<https://lh3.googleusercontent.com/-Vvuk4avT75o/WWhUj98bouI/Ag4/FoPlU2HC3lk-GOC6qD1jihZKWIC-0SqXwCLcBGAs/s1600/session_error2.png>

<https://lh3.googleusercontent.com/-mkaRplvoUes/WWhUoF74sqI/Ag8/uztkwWPac-I1WxRDdj8-mQK2mgi7Fu9mACLcBGAs/s1600/session_error3.png>

As you can observe, according to the results above the objects are indeed
inside the session, but for some reason when I try to print what's contained
in the session by doing

for element in session:
   print(element)

I just get a None. What am I doing wrong? I don't see anything wrong in my
code; I hope you can help me clarify this. Thanks in advance.

I will attach both my code and the tests data in case you want to try it by 
yourself.


 


On Thursday, July 13, 2017 at 8:27:04 AM UTC-5, Mike Bayer wrote:
>
> On Thu, Jul 13, 2017 at 12:31 AM, David Laredo Razo 
> <davidl...@gmail.com > wrote: 
> > Hello, I am using SQLAlchemy version 1.2.0b1 
> > 
> > 
> > 
> > So far so go, the problem arises when I add readings to the session via 
> > session.add_all(readings). I only get the last element in my list added, 
> > e.g. 
>
> there's no reason at all that would happen, other than what's in 
> "readings" is not what you'd expect. 
>
> try iterating through every element in "readings" after the add_all(), 
> and do "obj in session". 
>
> If some of these objects were from a different session, then they may 
> be "detached" as you put them in in which case they'd go into 
> session.identity_map, not session.new. 
>
>
>
>
> > 
> > for new in session.new: 
> >print(new, mapper.identity_key_from_instance(new_object)) 
> > 
> > <ThermafuserReading(thermafuserId = '1', timestamp = '2017-01-01 
> 00:00:00')> 
> > (, (datetime.datetime(2017, 1, 1, 
> 0, 
> > 0), 1)) 
> > 
> > 
> > Why is this behavior? I have a test code and the test data in case its 
> > needed to reproduce this behavior 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> > Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> > description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> Groups 
> > "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an 
> > email to sqlalchemy+...@googlegroups.com . 
> > To post to this group, send email to sqlal...@googlegroups.com 
&g

[sqlalchemy] Re: session.add() neglecting some of my objects to be added

2017-07-12 Thread David Laredo Razo
Sorry, I made a mistake. When printing what's inside the session I do it this
way:

for new_object in session.new:
    print(new_object, mapper.identity_key_from_instance(new_object))

On Wednesday, July 12, 2017 at 11:31:06 PM UTC-5, David Laredo Razo wrote:
>
> Hello, I am using SQLAlchemy version 1.2.0b1
>
> I created some mapped objects using the declarative style in SQLAlchemy. I 
> have a mapping called ThermafuserReading which has a composed primary key 
> made up of the Time_stamp column which is DateTime and ThermafuserId column 
> which is an Integer and also acts as a Foreign Key to another table called 
> Thermafuser. This is the definition of the class
>
> class ThermafuserReading(Base):
> """Class to map to the Thermafuser Readings table in the HVAC DB"""
>
> __tablename__ = 'Thermafuser_Reading'
>
> _timestamp = Column('Time_stamp', DateTime, primary_key = True)
> _thermafuserId = Column('ThermafuserId', Integer, 
> ForeignKey("Thermafuser.ThermafuserId"), primary_key = True)
> _roomOccupied = Column('RoomOccupied', Boolean)
> _zoneTemperature = Column('ZoneTemperature', Float)
> _supplyAir = Column('SupplyAir', Float, nullable=True)
> _airflowFeedback = Column('AirflowFeedback', Float, nullable=True)
> _CO2Input = Column('CO2Input', Float, nullable=True)
> _maxAirflow = Column('MaxAirflow', Float, nullable=True)
> _minAirflow = Column('MinAirflow', Float, nullable=True)
> _unoccupiedHeatingSetpoint = Column('UnoccupiedHeatingSetpoint', Float, 
> nullable=True)
> _unoccupiedCoolingSetpoint = Column('UnoccupiedCoolingSetpoint', Float, 
> nullable=True)
> _occupiedCoolingSetpoint = Column('OccupiedCoolingSetpoint', Float, 
> nullable=True)
> _occupiedHeatingSetpoint = Column('OccupiedHeatingSetpoint', Float, 
> nullable=True)
> _terminalLoad = Column('TerminalLoad', Float, nullable=True)
>
> #Relationship between Thermafuser Reading and Thermafuser
> _thermafuser = relationship("Thermafuser", back_populates = 
> "_thermafuserReadings",  cascade = "all, delete-orphan", single_parent = True)
>
>
> I am creating a session in the following way
>
> sqlengine = 
> sqlalchemy.create_engine("mysql+mysqldb://user:password@localhost:3306/HVAC")
> Session = sessionmaker(bind=sqlengine)
> session = Session()
>
>
> At some point in my code I am creating a list called readings of 
> Thermafuser Readings and adding such list the session via 
> session.add_all(readings)
>
> This are some example elements printed from the list readings:
>
> for reading in readings:
> print(reading, mapper.identity_key_from_instance(reading))
>
> <ThermafuserReading(thermafuserId = '1', timestamp = '2016-12-31 23:40:00')> 
> (, (datetime.datetime(2016, 12, 31, 23, 
> 40), 1))
>
> <ThermafuserReading(thermafuserId = '1', timestamp = '2016-12-31 23:45:00')> 
> (, (datetime.datetime(2016, 12, 31, 23, 
> 45), 1))
>
> <ThermafuserReading(thermafuserId = '1', timestamp = '2016-12-31 23:50:00')> 
> (, (datetime.datetime(2016, 12, 31, 23, 
> 50), 1))
>
> <ThermafuserReading(thermafuserId = '1', timestamp = '2016-12-31 23:55:00')> 
> (, (datetime.datetime(2016, 12, 31, 23, 
> 55), 1))
>
> <ThermafuserReading(thermafuserId = '1', timestamp = '2017-01-01 00:00:00')> 
> (, (datetime.datetime(2017, 1, 1, 0, 0), 
> 1))
>
>
> So far so go, the problem arises when I add readings to the session via 
> session.add_all(readings). I only get the last element in my list added, 
> e.g.
>
> for new in session.new:
>print(new, mapper.identity_key_from_instance(new_object))
>
> <ThermafuserReading(thermafuserId = '1', timestamp = '2017-01-01 
> 00:00:00')> (, 
> (datetime.datetime(2017, 1, 1, 0, 0), 1))
>
>
> Why is this behavior? I have a test code and the test data in case its 
> needed to reproduce this behavior
>
>
>
>
>
>
>
>
>
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] session.add() neglecting some of my objects to be added

2017-07-12 Thread David Laredo Razo
Hello, I am using SQLAlchemy version 1.2.0b1

I created some mapped objects using the declarative style in SQLAlchemy. I 
have a mapping called ThermafuserReading which has a composed primary key 
made up of the Time_stamp column which is DateTime and ThermafuserId column 
which is an Integer and also acts as a Foreign Key to another table called 
Thermafuser. This is the definition of the class

class ThermafuserReading(Base):
    """Class to map to the Thermafuser Readings table in the HVAC DB"""

    __tablename__ = 'Thermafuser_Reading'

    _timestamp = Column('Time_stamp', DateTime, primary_key = True)
    _thermafuserId = Column('ThermafuserId', Integer,
                            ForeignKey("Thermafuser.ThermafuserId"), primary_key = True)
    _roomOccupied = Column('RoomOccupied', Boolean)
    _zoneTemperature = Column('ZoneTemperature', Float)
    _supplyAir = Column('SupplyAir', Float, nullable=True)
    _airflowFeedback = Column('AirflowFeedback', Float, nullable=True)
    _CO2Input = Column('CO2Input', Float, nullable=True)
    _maxAirflow = Column('MaxAirflow', Float, nullable=True)
    _minAirflow = Column('MinAirflow', Float, nullable=True)
    _unoccupiedHeatingSetpoint = Column('UnoccupiedHeatingSetpoint', Float, nullable=True)
    _unoccupiedCoolingSetpoint = Column('UnoccupiedCoolingSetpoint', Float, nullable=True)
    _occupiedCoolingSetpoint = Column('OccupiedCoolingSetpoint', Float, nullable=True)
    _occupiedHeatingSetpoint = Column('OccupiedHeatingSetpoint', Float, nullable=True)
    _terminalLoad = Column('TerminalLoad', Float, nullable=True)

    #Relationship between Thermafuser Reading and Thermafuser
    _thermafuser = relationship("Thermafuser", back_populates = "_thermafuserReadings",
                                cascade = "all, delete-orphan", single_parent = True)


I am creating a session in the following way

sqlengine = 
sqlalchemy.create_engine("mysql+mysqldb://user:password@localhost:3306/HVAC")
Session = sessionmaker(bind=sqlengine)
session = Session()


At some point in my code I am creating a list called readings of
Thermafuser Readings and adding that list to the session via
session.add_all(readings)

These are some example elements printed from the list readings:

for reading in readings:
    print(reading, mapper.identity_key_from_instance(reading))

<ThermafuserReading(thermafuserId = '1', timestamp = '2016-12-31 23:40:00')>
(, (datetime.datetime(2016, 12, 31, 23, 40), 1))

<ThermafuserReading(thermafuserId = '1', timestamp = '2016-12-31 23:45:00')>
(, (datetime.datetime(2016, 12, 31, 23, 45), 1))

<ThermafuserReading(thermafuserId = '1', timestamp = '2016-12-31 23:50:00')>
(, (datetime.datetime(2016, 12, 31, 23, 50), 1))

<ThermafuserReading(thermafuserId = '1', timestamp = '2016-12-31 23:55:00')>
(, (datetime.datetime(2016, 12, 31, 23, 55), 1))

<ThermafuserReading(thermafuserId = '1', timestamp = '2017-01-01 00:00:00')>
(, (datetime.datetime(2017, 1, 1, 0, 0), 1))


So far so good; the problem arises when I add readings to the session via
session.add_all(readings). I only get the last element in my list added,
e.g.

for new in session.new:
    print(new, mapper.identity_key_from_instance(new_object))

<ThermafuserReading(thermafuserId = '1', timestamp = '2017-01-01 00:00:00')>
(, (datetime.datetime(2017, 1, 1, 0, 0), 1))


Why is this the behavior? I have test code and test data available in case
it's needed to reproduce this behavior.









-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Sequences in Oracle RAC out of order

2017-07-03 Thread David Moore
Hello,

I just chased down a bug in our application while using SQLAlchemy 1.0.6 in 
an Oracle RAC environment, which should probably be in SQLAlchemy 
documentation somewhere, and possibly motivate a change to the options for 
sequence creation on Oracle.

Basically, our application assumes a column we attach a sqlalchemy.Sequence 
to will always increment, and we use that to process a set of changes in 
strict order.  However, if you read the Oracle docs on SEQUENCE creation 
(here: 
https://docs.oracle.com/cd/B12037_01/server.101/b10759/statements_6014.htm), 
in an RAC environment, that is not true by default - each node running your 
database grabs a cache of sequence numbers to serve out equal to CACHE size 
(default is 20) and whichever node processes your insert will assign a 
sequence number from its cache.

I understand this is done for performance reasons, and in the very common 
case that all you want from your Sequence is to autogenerate a unique id, 
it's absolutely fine.  However, it would be nice to have an option to 
specify the ORDER keyword when creating your SEQUENCE if you really need a 
Sequence in sequential order, and a documentation note that this is an 
oddity of Oracle RAC.
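
One workaround along these lines might be a DDL compilation hook that appends
Oracle's ORDER keyword to generated sequences. A rough sketch, not tested
against a real RAC setup:

    from sqlalchemy.ext.compiler import compiles
    from sqlalchemy.schema import CreateSequence

    @compiles(CreateSequence, "oracle")
    def _ordered_sequence(element, compiler, **kw):
        # emit the normal CREATE SEQUENCE statement and append ORDER
        return compiler.visit_create_sequence(element) + " ORDER"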

Kind regards,

Dave Moore

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Bulk Lazy Loader for relationships

2017-05-03 Thread David Chanin
How do objects get registered into the aggressive loader? Does it happen 
automatically when they're initially loaded via query or cache? Ideally we 
wanted to group items together when they're loaded and do bulk lazy loading 
on that group rather than on all models in the session, but couldn't figure 
out a good way to accomplish that. Ex if we run users = 
session.query(User).limit(10).all() then we'd just want to do bulk lazy 
loading within the users from that query rather than all users that happen 
to be in the session, but it seems like we'd need to do some pretty 
invasive changes to SQLAlchemy to keep track of which models were initially 
loaded together.

On Thursday, May 4, 2017 at 12:45:36 AM UTC+8, Jonathan Vanasco wrote:
>
> We have a somewhat tangential solution that was developed when building a 
> read-through cache that backs into SqlAlchemy.
>
> Instead of working on the Session, we register items for 'aggressive 
> loading' into a custom class instance that handles the coordination.
>
> Our system works like this:
>
> * objects are registered with the aggressive loader
> * the aggressive loader is triggered:
> ** the registered objects are recursively analyzed to find primary key 
> relationships that should be loaded
> ** for each object type, the unloaded primary keys are loaded
> ** some relationships are specified to run on a second pass
>
> Aside from the eagerloading advantages, this improved performance for a 
> few more reasons:
>
> * only the necessary data was requested/loaded. this eased the database 
> and wire traffic (i.e. at the db level a joined object is calculated once, 
> not once per relationship)
> * the queries were easier on the DB, as simple "get by primary key" 
> requests - not wrapped joins.
>
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: Bulk Lazy Loader for relationships

2017-05-03 Thread David Chanin
Interesting - so it will be possible in 1.2 to do more custom relationship 
loading in a "post load" hook?

Thanks for the feedback! That definitely makes sense - I didn't fully 
understand all the logic in _emit_lazyload() initially so I wasn't sure 
what was OK to remove and what wasn't. I made a PR with those changes here: 
https://github.com/operator/sqlalchemy_bulk_lazy_loader/pull/1.

One thing I still don't fully understand is the passive variable. What does 
passive mean, and is it set per-model or per-relationship?

Thanks a lot Mike!

On Wednesday, May 3, 2017 at 10:14:03 PM UTC+8, Mike Bayer wrote:
>
>
> Related note, in 1.2 I'm adding a new mechanism for loading things which 
> is a "post load" hook, that is, a bunch of objects are loaded in a 
> query, and then afterwards, more loaders can run on a batch of completed 
> objects.   The effect looks similar to how "subqueryload" works right 
> now, except "subqueryload" fires off within the initial population step 
> of the objects.  The two kinds of loaders using this are the "select IN" 
> loading, which is like subqueryloading but more or less better in most 
> ways (also seems you're doing "select IN" here), and "select IN" loading 
> of subclass attributes on a joined inheritance mapping. 
>
> The hook you have here would be a third kind of hook, a "lazyload that 
> works across the Session" hook.   Definitely something new.   I'm not 
> sure all of the logic that's been copied into _emit_lazyload() really 
> applies though; the top logic is all related to the specific object that 
> has triggered the load, like if its pending or not, if it had any query 
> option set up, the state.load_path, etc.   You can't assume any of that 
> stuff applies to all the other states if you are going across the whole 
> result.It's probably better, since this is a very different kind of 
> loader, to make it just load for all the states in the same way without 
> looking at any of their options or things like that. 
>
>
>
> On 05/03/2017 08:52 AM, David Chanin wrote: 
> > Ack, thanks Simon! That is definitely a bug :). I just pushed a fix. 
> > 
> > Thanks for the feedback! 
> > David 
> > 
> > On Wednesday, May 3, 2017 at 5:47:54 PM UTC+8, David Chanin wrote: 
> > 
> > Hi Everyone, 
> > 
> > We just open-sourced a custom lazy loader for SQLAlchemy that does 
> > bulk lazy loading of relations - essentially a lazy subqueryload. 
> > The idea is that whenever a relation is lazy-loaded on a model, the 
> > loader will look for all similar models in the session that haven't 
> > had that relation populated yet and will issue a single SQL query to 
> > populate them all in bulk. I'm really curious to hear any feedback 
> > you may have on this idea / implementation, pitfalls that we're 
> > overlooking, or ideas for improvement. The repo is at 
> > https://github.com/operator/sqlalchemy_bulk_lazy_loader 
> > <https://github.com/operator/sqlalchemy_bulk_lazy_loader>. Hope 
> it's 
> > useful to others as well. 
> > 
> > Thanks so much! 
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> > Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> > description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> > Groups "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> > an email to sqlalchemy+...@googlegroups.com  
> > <mailto:sqlalchemy+unsubscr...@googlegroups.com >. 
> > To post to this group, send email to sqlal...@googlegroups.com 
>  
> > <mailto:sqlal...@googlegroups.com >. 
> > Visit this group at https://groups.google.com/group/sqlalchemy. 
> > For more options, visit https://groups.google.com/d/optout. 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Bulk Lazy Loader for relationships

2017-05-03 Thread David Chanin
Ack, thanks Simon! That is definitely a bug :). I just pushed a fix.

Thanks for the feedback!
David

On Wednesday, May 3, 2017 at 5:47:54 PM UTC+8, David Chanin wrote:
>
> Hi Everyone,
>
> We just open-sourced a custom lazy loader for SQLAlchemy that does bulk 
> lazy loading of relations - essentially a lazy subqueryload. The idea is 
> that whenever a relation is lazy-loaded on a model, the loader will look 
> for all similar models in the session that haven't had that relation 
> populated yet and will issue a single SQL query to populate them all in 
> bulk. I'm really curious to hear any feedback you may have on this idea / 
> implementation, pitfalls that we're overlooking, or ideas for improvement. 
> The repo is at https://github.com/operator/sqlalchemy_bulk_lazy_loader. 
> Hope it's useful to others as well.
>
> Thanks so much!
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Bulk Lazy Loader for relationships

2017-05-03 Thread David Chanin
Hi Everyone,

We just open-sourced a custom lazy loader for SQLAlchemy that does bulk 
lazy loading of relations - essentially a lazy subqueryload. The idea is 
that whenever a relation is lazy-loaded on a model, the loader will look 
for all similar models in the session that haven't had that relation 
populated yet and will issue a single SQL query to populate them all in 
bulk. I'm really curious to hear any feedback you may have on this idea / 
implementation, pitfalls that we're overlooking, or ideas for improvement. 
The repo is at https://github.com/operator/sqlalchemy_bulk_lazy_loader. 
Hope it's useful to others as well.

Thanks so much!
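
For comparison, the non-lazy half of this idea is what SQLAlchemy 1.2's
built-in selectinload option does. A sketch, assuming User and Address models
with a User.addresses relationship:

    from sqlalchemy.orm import selectinload

    # one extra SELECT ... WHERE user_id IN (...) loads addresses for all ten users at once
    users = (
        session.query(User)
        .options(selectinload(User.addresses))
        .limit(10)
        .all()
    )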

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Support for Oracle 12c auto increment (IDENTITY) columns?

2016-04-13 Thread David Moore



- Original Message -


From: "Piotr Dobrogost"  
To: "sqlalchemy"  
Sent: Wednesday, April 13, 2016 1:50:19 PM 
Subject: Re: [sqlalchemy] Support for Oracle 12c auto increment (IDENTITY) 
columns? 



Mike, 


Thanks for your reply! 

On Wednesday, April 13, 2016 at 1:15:32 PM UTC+2, Mike Bayer wrote: 

We've not started supporting new oracle 12c features as of yet; in this case it
might be possible to get it working with some dialect flags, since we already
use "returning" to get at the newly generated primary key, although testing
would be needed and other assumptions in the dialect might get in the way.




Which flags do you have in mind? Looking at 
http://docs.sqlalchemy.org/en/latest/dialects/oracle.html I don't see anything 
which might be useful to make it work. 


I was surprised Oracle needs a different syntax with an explicit sequence in SA.
Having one syntax for an auto increment column (primary key) across all backends
seems like a very important feature to have. What is the reason there's no Oracle
dialect option to generate and use a suitable sequence for such a column? Is there
some recipe solving this problem?




Regards, 
Piotr Dobrogost 
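
For context, the usual recipe on pre-12c Oracle is an explicit Sequence on the
primary key column. A sketch, with Widget and Base as hypothetical declarative
names:

    from sqlalchemy import Column, Integer, Sequence

    class Widget(Base):
        __tablename__ = 'widget'
        # the Oracle dialect uses widget_id_seq.nextval to generate the id
        id = Column(Integer, Sequence('widget_id_seq'), primary_key=True)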
-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group. 
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com . 
To post to this group, send email to sqlalchemy@googlegroups.com . 
Visit this group at https://groups.google.com/group/sqlalchemy . 
For more options, visit https://groups.google.com/d/optout . 


From the license under which you can use this
(http://www.oracle.com/technetwork/licenses/standard-license-152015.html):

License Rights and Restrictions 
Oracle grants You a nonexclusive, nontransferable, limited license to 
internally use the Programs, subject to the restrictions stated in this 
Agreement, only for the purpose of developing, testing, prototyping, and 
demonstrating Your application and only as long as Your application has not 
been used for any data processing, business, commercial, or production 
purposes, and not for any other purpose. 

-- 

I suspect you would not be able to use this for developing and testing 
sqlalchemy without breaking those terms. Oracle can get really nasty about 
using Developer Days when they think you've broken this agreement. 

regards, 

Dave Moore 

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Domain models and "leaky abstractions"

2015-11-16 Thread David Allouche

> On 13 Nov 2015, at 09:16, Yegor Roganov  wrote:
> 
> Suppose we have a 1 <-> N relation between users and cities (meaning that 
> every user is related with one city). For our domain model "User" we want to 
> define a method "lives_in_london". Code looks like this:
> 
> class User(Base):
>     id = ...
>     city_id = ...
>     city = relationship("City")
>     def lives_in_london(self):
>         return self.city.name == 'London'
> 
> class City(Base):
>     id = ...
>     name = Column(String)
> 
> The problem with this code is that the "lives_in_london" method is a leaky 
> abstraction: its client must keep in mind that it may issue a DB query. It's 
> okay as a one-off thing, but will cause problems if used in a loop.
> So to be used efficiently, clients must know to preload the "city" 
> relationship.
> 
> I know it's a contrived example, but the general question is how to define 
> domain methods that need to access relations. I also know that the question 
> is intrinsic to all ORM, but maybe SQLAlchemy could offer some support.

I have not yet found a really satisfying solution to this class of problems, 
but all the solutions are along the lines of "do not present such abstractions".

When there is a need to encapsulate some special knowledge about the data model 
(in your example, this is "self.city.name == 'London'"), I have seen three 
approaches:

1. Use higher level abstractions that perform all the required queries to produce
   "view" objects needed to complete a task. Those abstractions will tend to be
   tailored to a specific single use in the code base, but they will provide a
   convenient injection point to mock out the database layer for unit testing, and
   they can be easily identified and audited when the data model changes (a sketch
   of this for the original User/City example follows this list).
2. Detach mapped entities after loading to force client code to perform all the
   relationship loading explicitly. Kind of hackish, but that might work well in
   some cases.
3. Do not use mapped entities at all for read-only tasks, and instead load
   specific columns and use class methods on the models to encapsulate knowledge.
   That makes for simple code since there is no need to introduce an explicit view
   abstraction layer, but that might lead to code that is harder to test because
   the query generation code will be coupled with the code that uses the result of
   the query.
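
A sketch of the first approach for the original User/City example, eager loading
the city so that lives_in_london() never triggers a per-row query:

    from sqlalchemy.orm import joinedload

    users = session.query(User).options(joinedload(User.city)).all()
    londoners = [u for u in users if u.lives_in_london()]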

The common theme here is: do not let mapped objects attached to the session 
escape to code that should not know about which operations could generate 
relationship loading queries.

There is clear contrast between code that only reads the database, typically 
works on collections of objects, and should prevent mapped objects from leaking 
out. And code that writes to the database, typically works on individual 
objects, and lets client code update mapped objects.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] SQLTap 0.3.10 released

2015-10-03 Thread David Allouche

> On 2 Oct 2015, at 05:49, George Reilly <george.v.rei...@gmail.com> wrote:
> 
> On Mon, Sep 28, 2015 at 1:04 AM, David Allouche <da...@allouche.net 
> <mailto:da...@allouche.net>> wrote:
>> On 28 Sep 2015, at 05:21, George V. Reilly <george.v.rei...@gmail.com 
>> <mailto:george.v.rei...@gmail.com>> wrote:
>> 
>> SQLTap is a very useful library that helps you in profiling SQLAlchemy 
>> queries. It helps you understand where and when SQLAlchemy issues queries, 
>> how often they are issued, how many rows are returned, and if you are 
>> issuing queries with duplicate parameters. The last two are new in tonight's 
>> 0.3.10 release, which incorporates improved reporting from me.
>> 
>> https://github.com/inconshreveable/sqltap 
>> <https://github.com/inconshreveable/sqltap>
> 
> What are the downsides of issuing queries with duplicate parameters? Aside 
> from "it makes it harder to read the logs".
> 
> I mean, obviously, not "dozens of duplicate parameters": any query with 
> dozens of parameters probably has other problems.
> 
> I could not find any rationale on the linked page.
> 
> I was investigating the performance of a single, pathologically slow API 
> request to one of our web services. SQLTap told me that there were nearly 
> 12,000 database queries over several minutes. Digging further, I found that 
> about half of these requests could be eliminated by adding lazy="joined" to 
> some relationships. There was one relationship where adding lazy="joined" 
> eliminated some queries but the result rowsets were so large that the overall 
> API request became noticeably slower. I updated SQLTap's report to show the 
> number of rows returned by each query, which helped identify such problems.
> 
> I also found that certain objects were being requested again and again; i.e., 
> I was issuing queries with identical parameters. SQLAlchemy issues each query 
> to the database, gets back an identical result (since they weren't being 
> modified), and returns the existing object from its identity map. See 
> http://docs.sqlalchemy.org/en/rel_1_0/orm/session_basics.html#is-the-session-a-cache
>  
> <http://docs.sqlalchemy.org/en/rel_1_0/orm/session_basics.html#is-the-session-a-cache>.
>  Making my app smarter about not asking again for objects that it had already 
> loaded eliminated thousands more queries. This is why I added the duplicate 
> query reporting, so that I could pinpoint opportunities for caching.

Since your message addresses a wider scope than my question, I will answer 
similarly widely.

You had a pathological API handler that hit one of the classic ORM anti-patterns:
object graph traversal concealing requests in loops. SQLTap helped you identify
the problem.

You applied the fix: eager loading through joins. Then you hit the related
anti-pattern: unnecessarily large result sets. And SQLTap helped you identify
that problem too.

That's nice, and it helps illustrate that SQLTap is useful for debugging 
database performance. But, so far, it does not answer my question.

Then you found another anti-pattern: redundant explicit queries. And this is 
what detection of "issuing queries with duplicate parameters" is about. It is 
not so much about "duplicate parameters", which I understood as "multiple 
parameters in a single query that have the same value", but rather about 
redundant queries that have identical SQL and parameters in a single handler.
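
As an aside, one common way to avoid re-issuing such lookups is Query.get(),
which consults the Session's identity map before emitting SQL. A sketch with a
hypothetical User model:

    user = session.query(User).get(user_id)
    same_user = session.query(User).get(user_id)  # no second SELECT while the object is present and unexpired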

Maybe you can make the benefits of SQLTap more obvious by explaining its goals 
in this way: how it helps you debug common anti patterns in database code.

Sorry if I sound pedantic: in a couple of weeks I will make a presentation
about SQLAlchemy Tips and Tricks. So this is as much about clarifying my
thinking as about clarifying the SQLTap presentation.

Thank you for reading this much.






-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] SQLTap 0.3.10 released

2015-09-28 Thread David Allouche

> On 28 Sep 2015, at 05:21, George V. Reilly  wrote:
> 
> SQLTap is a very useful library that helps you in profiling SQLAlchemy 
> queries. It helps you understand where and when SQLAlchemy issues queries, 
> how often they are issued, how many rows are returned, and if you are issuing 
> queries with duplicate parameters. The last two are new in tonight's 0.3.10 
> release, which incorporates improved reporting from me.
> 
> https://github.com/inconshreveable/sqltap 
> 

What are the downsides of issuing queries with duplicate parameters? Aside from 
"it makes it harder to read the logs".

I mean, obviously, not "dozens of duplicate parameters": any query with dozens 
of parameters probably has other problems.

I could not find any rationale on the linked page.

Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] how to dynamically work with an aliased table?

2015-09-25 Thread David Allouche

> On 24 Sep 2015, at 17:38, Jonathan Vanasco <jonat...@findmeon.com> wrote:
> 
> 
> 
> On Thursday, September 24, 2015 at 3:05:56 AM UTC-4, David Allouche wrote:
> That looks like the right approach. There is probably something else in your 
> actual code that is causing "it [to] not work". 
> 
> To get a better understanding of "it did not work", I would look at the 
> "str(query)" before and after the stuff with the aliased table. 
> 
> The _aliased object (returned by `sqlalchemy.orm.alias()`) does not have 
> addressable columns. Touching _aliased.string_id and _aliased.c.string_id 
> both raise errors.
> 
> The rest of the code works fine in production, I just can't seem to figure 
> out how to add this table onto the query under a different name which can be 
> queried against.  The closest thing I could do was to nest everything into 
> subselects -- but the sql is grossly inefficient.


Use sqlalchemy.orm.aliased() instead of .alias(). For example:

bar2 = sqlalchemy.orm.aliased(Bar, name='bar2')

Honestly, by reading the documentation, I am confused about how useful 
orm.alias() is.

But orm.aliased() is the thing I use all over the place to do what you want.
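
Applied to the earlier query, that might look like the following sketch, reusing
the Foo/Bar names from the question:

    from sqlalchemy import func
    from sqlalchemy.orm import aliased

    bar2 = aliased(Bar, name='bar2')
    sql_ors.extend([
        func.lower(bar2.string_id) == id_search.lower(),
        func.lower(bar2.numeric_id) == id_search.lower(),
    ])
    query = query.join(bar2, Foo.bar_id == bar2.id)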

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: How to get specific attribute from last row?

2015-09-24 Thread David Allouche

> On 23 Sep 2015, at 20:37, Sebastian M Cheung  
> wrote:
> 
> posts = User.query.all()
> row = posts.select().execute().fetchone()
> print row[-1]
> return row.activation_code
> 
> doesnt work

Why are you doing this? Instead of, for example:

posts = session.query(User).order_by(User.id.desc()).first()
print posts.activation_code

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] how to dynamically work with an aliased table?

2015-09-24 Thread David Allouche

> On 23 Sep 2015, at 20:38, Jonathan Vanasco  wrote:
> 
> I have a query where I derive an object based on some dynamic filters on a 
> relationship:
> 
> sql_ors = [
>     sqlalchemy.func.lower(Bar.string_id) == id_search.lower(),
>     sqlalchemy.func.lower(Bar.numeric_id) == id_search.lower(),
> ]
> query = dbSession.query(Foo)\
>     .join(Bar,
>           Foo.bar_id == Bar.id
>     )\
>     .filter(
>         Foo.bash.op('IS')(True),
>         sqlalchemy.sql.expression.or_(*sql_ors),
>     )
> 
> This generally works fine.
> 
> Because of how the app stores data, I need to expand this query in the 
> following way:
> 
> 1. I need to join another instance of Bar onto the query
> 2. I need to filter against that instance of Bar
> 
> After reading the docs, I was hoping something like this would work -- it did 
> not, but I'm sharing this to explain the actions I was trying to accomplish
> 
> _aliased = sqlalchemy.orm.alias(Bar, name='bar2')
> sql_ors.extend([
>     sqlalchemy.func.lower(_aliased.string_id) == id_search.lower(),
>     sqlalchemy.func.lower(_aliased.numeric_id) == id_search.lower(),
> ])
> query = query.join(_aliased, Foo.bar_id == _aliased.id)
> 
> 
> A better query would handle this entire section with some CASE clauses, but 
> I'm hoping to just patch in the right functionality for a bit without 
> rewriting too much.
> 
> Anyone have a clue which ORM api elements I should be using?

That looks like the right approach. There is probably something else in your 
actual code that is causing "it [to] not work".

To get a better understanding of "it did not work", I would look at the 
"str(query)" before and after the stuff with the aliased table.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Raise exception on implicit join

2015-07-28 Thread David Allouche

 On 28 Jul 2015, at 11:31, Jacob Magnusson m...@jacobian.se wrote:
 
 I'm trying to avoid implicit joins (i.e. more than one entry in the FROM 
 clause) in my code and I'm wondering if you guys have a good idea on how to 
 best achieve this. I want to raise an exception if a query changes to have 
 more than one entry in the FROM clause. An alternative would be to have it 
 happen only before the query is executed. I'm guessing you would have a good 
 idea on where to start Michael?

Just chiming in to say I think this would indeed be useful.

In my software, I have a few places where queries are built all over the place: 
column selection here, filters there, and I did at least once have a bug caused 
by such an implicit join.

Such bugs tend to be very dangerous: if the executed query has a DISTINCT
clause they can be invisible, and if the test dataset is small enough they can
go undetected. When they land in production with a real dataset they cause
combinatorial explosions.
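
For what it's worth, a rough sketch of such a check, hooked onto a hypothetical
Engine named engine (an explicit JOIN still counts as a single FROM entry, so
only implicit joins trip it):

    from sqlalchemy import event

    @event.listens_for(engine, "before_execute")
    def _no_implicit_joins(conn, clauseelement, multiparams, params):
        froms = getattr(clauseelement, "froms", None)
        if froms is not None and len(froms) > 1:
            raise RuntimeError("implicit join detected: %s" % clauseelement)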

Regards.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Append AND NULL condition on joins

2015-06-19 Thread david . ceresuela
After some testing, I have found where the trouble lies. In the orm/util 
file
there is this code around line 836:

if not prop and getattr(right_info, 'mapper', None) \
        and right_info.mapper.single:
    # if single inheritance target and we are using a manual
    # or implicit ON clause, augment it the same way we'd augment the
    # WHERE.
    single_crit = right_info.mapper._single_table_criterion
    if right_info.is_aliased_class:
        single_crit = right_info._adapter.traverse(single_crit)
    self.onclause = self.onclause & single_crit

If right_info is a class mapping an SQLAlchemy table everything is alright 
because
right_info.mapper.single is False and the code is not run.
If, on the other hand, right_info is, for example, a python class that 
extends
a class that maps an SQLAlchemy table the right_info.mapper.single value is 
True
but the right_info.mapper._single_table_criterion is None. Thus, the 
self.onclause
ends up being: ... AND NULL.

This code was added in the commit dd6389f171736abd28d777b6fde40403cab0c13e. 
That
could be the reason why my previous code worked in the 0.9 series.

You can find attached the files to reproduce this behaviour:
- models.py: SQLAlchemy tables
- client.py: A python class that extends the Client class in models.
- feed.py: A python class that extends the Feed class in models.
- main.py: Run this file ('python main.py') to reproduce the behaviour

Output from main.py:

SELECT feed.id AS feed_id, feed.clientid AS feed_clientid, client.id AS 
client_id 
FROM feed LEFT OUTER JOIN client ON client.id = feed.clientid AND NULL


In case anybody else finds themselves in this situation, some partial 
solutions:
a) If possible, import directly the SQLAlchemy classes that match the 
tables.
b) Add a check for single_crit being anything other than None in orm/util.
Doing this breaks some[1] tests though, so be very sure you know what you 
are doing.

if single_crit is not None:
    self.onclause = self.onclause & single_crit

c) Use __bases__ in the outerjoin to get the SQLAlchemy class:
q = DBSession.query(Feed, Client).outerjoin(
    Client.__bases__[0], Client.id == Feed.clientid)


Regards


[1]
test.orm.inheritance.test_single.RelationshipToSingleTest.test_no_aliasing_from_overlap
test.orm.inheritance.test_single.RelationshipToSingleTest.test_outer_join_literal_onclause
test.orm.inheritance.test_single.RelationshipToSingleTest.test_outer_join_literal_onclause_alias
test.orm.inheritance.test_single.RelationshipToSingleTest.test_outer_join_no_onclause
test.orm.inheritance.test_single.RelationshipToSingleTest.test_outer_join_no_onclause_alias


On Wednesday, June 17, 2015 at 6:12:33 PM UTC+2, david.c...@gmail.com wrote:

 Hi all,

 I was giving SQLAlchemy 1.0 series a try but I have found some problems 
 along
 the way. There are some queries that in the 0.9.9 version used to work, but
 they do not work as expected anymore. An example of one of those is:

 feeds = DBSession.query(Feed, Client, ClientPro).outerjoin(
 Client, Client.id == Feed.clientid).outerjoin(
 ClientPro, ClientPro.clientid == Client.id)

 and it used to return:

 SELECT feed.id AS feed_id, feed.clientid AS feed_clientid ... 
 FROM feed
 LEFT OUTER JOIN client ON client.id = feed.clientid
 LEFT OUTER JOIN clientpro ON clientpro.clientid = client.id


 But since I changed to 1.0 series it returns:

 SELECT feed.id AS feed_id, feed.clientid ...
 FROM feed
 LEFT OUTER JOIN client ON client.id = feed.clientid AND NULL
 LEFT OUTER JOIN clientpro ON clientpro.clientid = client.id AND NULL


 As you can see, it adds the 'AND NULL' condition to the joins so the 
 columns
 corresponding to the client and clientpro are NULL.

 I have tested it from version 1.0.0 to 1.0.5 and it returns the same SQL 
 query
 in all of them.

 The relevant part of the models.py file is:

 class Feed(Base, ModelBase):
 __tablename__ = 'feed'
 id = Column(Integer, primary_key=True)
 clientid = Column(Integer, ForeignKey('client.id'), nullable=False)
 ...
 
 class Client(Base, ModelBase):
 __tablename__ = 'client'
 id = Column(Integer, primary_key=True)
 ...
 
 class ClientPro(Base, ModelBase):
 __tablename__ = 'clientpro'
 id = Column(Integer, primary_key=True)
 clientid = Column(Integer, ForeignKey('client.id', ondelete='CASCADE'
 ), nullable=False)
 ...

 
 And finally, the versions I am using:
 - PostgreSQL 9.3
 - Pyramid 1.5.7 (zope.sqlalchemy 0.7.6)
 - psycopg2 2.6

 What it is that I am missing?

 Thanks!


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.
from models import Client

class Client(Client):


[sqlalchemy] Append AND NULL condition on joins

2015-06-17 Thread david . ceresuela
Hi all,

I was giving SQLAlchemy 1.0 series a try but I have found some problems 
along
the way. There are some queries that in the 0.9.9 version used to work, but
they do not work as expected anymore. An example of one of those is:

feeds = DBSession.query(Feed, Client, ClientPro).outerjoin(
Client, Client.id == Feed.clientid).outerjoin(
ClientPro, ClientPro.clientid == Client.id)

and it used to return:

SELECT feed.id AS feed_id, feed.clientid AS feed_clientid ... 
FROM feed
LEFT OUTER JOIN client ON client.id = feed.clientid
LEFT OUTER JOIN clientpro ON clientpro.clientid = client.id


But since I changed to 1.0 series it returns:

SELECT feed.id AS feed_id, feed.clientid ...
FROM feed
LEFT OUTER JOIN client ON client.id = feed.clientid AND NULL
LEFT OUTER JOIN clientpro ON clientpro.clientid = client.id AND NULL


As you can see, it adds the 'AND NULL' condition to the joins so the columns
corresponding to the client and clientpro are NULL.

I have tested it from version 1.0.0 to 1.0.5 and it returns the same SQL 
query
in all of them.

The relevant part of the models.py file is:

class Feed(Base, ModelBase):
    __tablename__ = 'feed'
    id = Column(Integer, primary_key=True)
    clientid = Column(Integer, ForeignKey('client.id'), nullable=False)
    ...

class Client(Base, ModelBase):
    __tablename__ = 'client'
    id = Column(Integer, primary_key=True)
    ...

class ClientPro(Base, ModelBase):
    __tablename__ = 'clientpro'
    id = Column(Integer, primary_key=True)
    clientid = Column(Integer, ForeignKey('client.id', ondelete='CASCADE'),
                      nullable=False)
    ...


And finally, the versions I am using:
- PostgreSQL 9.3
- Pyramid 1.5.7 (zope.sqlalchemy 0.7.6)
- psycopg2 2.6

What is it that I am missing?

Thanks!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] What is the terminology for ...?

2015-06-11 Thread David Allouche

 On 11 Jun 2015, at 19:09, Tony C cappy2...@gmail.com wrote:
 
 I'm new to databases, SQL, and SQLAlchemy. I've been going through the 
 tutorials and docs at SQLAlchemy.org trying to find a solution to
 what I believe is a very simple issue with database queries; however, I'm 
 quite uncertain of the terminology.
 
 I'm using the declarative form of SQL queries.
 
 When doing a query on a table with a foreign key, I want to query the data in 
 the referenced table.
 In uncertain terms, I want the query to be recursive- meaning, I want all 
 of the data from the referenced tables, not just the values of the foreign 
 keys.
 However, I don't know what the exact terminology is in order to search 
 through help.
 
 Here is an example: (this is exploratory code only. Error handling and 
 imports left out for brevity)
 
 http://pastebin.com/HpNB64t4
 
 1. What is the correct terminology for a recursive query, in SQLAlchemy 
 parlance?
 2. How do I tell SQL Alchemy to do that?

You probably want to read about the following topics in the SQLAlchemy 
documentation.
- relationship()
- joinedload()
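
For example, a minimal parent/child mapping (hypothetical names, in-memory
SQLite) that loads the referenced rows in the same query would look roughly
like this:

from sqlalchemy import Column, Integer, ForeignKey, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker, joinedload

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship('Child', backref='parent')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Eagerly load the referenced rows in one SELECT instead of issuing an
# extra query per parent when .children is first accessed.
parents = session.query(Parent).options(joinedload(Parent.children)).all()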

Regards.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] ...and MVC

2015-05-26 Thread David Allouche

 On 26 May 2015, at 09:16, Burak Arslan burak.ars...@arskom.com.tr wrote:
 
 same here but before passing the model instance from controller to view,
 all instances are detached from session (via expunge()). this 1)
 prevents a lot of unexpected db queries. 2) lets you recycle db
 connections as soon as possible.

Oh, that's clever.

One of the shortcomings of SQLAlchemy is that it makes it easy to produce a 
'query within a loop' anti-pattern through relationship traversal. So far I 
prevented this by avoiding loading mapped objects as much as possible, instead 
loading only specific columns — with the nice side effect of reducing data 
transfer with the database.

Now, thanks to you I realise I can prevent the anti-pattern with less code 
using Session.expunge_all().
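
As a rough sketch of that pattern (with a hypothetical Article model and an
in-memory SQLite engine, both stand-ins):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Article(Base):
    __tablename__ = 'article'
    id = Column(Integer, primary_key=True)
    title = Column(String(100))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

def load_articles():
    # Controller: load everything needed up front, then detach the
    # instances so the view cannot trigger further lazy loads.
    session = Session()
    try:
        articles = session.query(Article).all()
        session.expunge_all()       # instances are now detached
        return articles
    finally:
        session.close()             # connection goes back to the pool early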


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] when do I have to do a rollback so the app keep working fine

2015-04-28 Thread David Chavez
Thank you, I looked at it and it is quite a bit more complicated than I
thought it was; what you have there is really extensive. If it's not too
much to ask, can anyone explain a little bit how to use it?

On 4/27/15, Mike Bayer mike...@zzzcomputing.com wrote:


 On 4/27/15 7:40 AM, David Allouche wrote:
 On 18 Apr 2015, at 02:20, dcgh...@gmail.com wrote:

 Hi everyone, I have a considerably amount of code that uses SQLAlchemy
 and I want to implement a decorator that captures the SQLAlchemy
 exceptions, then make session.rollback() and recall the decorated
 function, so I don't have to write the try except statement whenever I
 use SQLAlchemy.
 For implementing such a decorator I need the exceptions I can certainly
 capture to make session.rollback() and the app keep working fine because
 there are exceptions that will cause an endless loop and should never be
 captured (e.g., generating the same primary key due to a bug and always
 raising IntegrityError)
 So, can anyone tell me what are those exceptions that are safe to make
 session.rollback()?
 I guess that your intent is to retry transactions that failed because of a
 serialisation error in the SERIALIZABLE isolation level.

 My understanding is that, to SQLAlchemy, this is a database-specific
 issue. You can expect the SQLAlchemy exception to be an OperationalError,
 but you would need to add additional checks to specifically identify the
 kind error returned by your database driver. An argument could be made
 that serialisation errors should be wrapped in a more specific exception
 class by SQLAlchemy, but I do not believe that is the case at the moment.

 I am no expert, so please someone correct me if I am wrong.

 that's pretty much the current situation - OperationalError refers to
 something went wrong with the connection and IntegrityError means
 something went wrong with the data the query is attempting to modify.

 In Openstack we have an elaborate system of catching those exceptions we
 care about across many backends; this is probably more than you need but
 this is sort of what is needed:
 https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/exc_filters.py


 --
 You received this message because you are subscribed to a topic in the
 Google Groups sqlalchemy group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/sqlalchemy/Uifgo8n7yHw/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] when do I have to do a rollback so the app keep working fine

2015-04-27 Thread David Allouche

 On 18 Apr 2015, at 02:20, dcgh...@gmail.com wrote:
 
 Hi everyone, I have a considerably amount of code that uses SQLAlchemy and I 
 want to implement a decorator that captures the SQLAlchemy exceptions, then 
 make session.rollback() and recall the decorated function, so I don't have to 
 write the try except statement whenever I use SQLAlchemy.
 For implementing such a decorator I need the exceptions I can certainly 
 capture to make session.rollback() and the app keep working fine because 
 there are exceptions that will cause an endless loop and should never be 
 captured (e.g., generating the same primary key due to a bug and always 
 raising IntegrityError)
 So, can anyone tell me what are those exceptions that are safe to make 
 session.rollback()?

I guess that your intent is to retry transactions that failed because of a 
serialisation error in the SERIALIZABLE isolation level.

My understanding is that, to SQLAlchemy, this is a database-specific issue. You 
can expect the SQLAlchemy exception to be an OperationalError, but you would 
need to add additional checks to specifically identify the kind of error returned 
by your database driver. An argument could be made that serialisation errors 
should be wrapped in a more specific exception class by SQLAlchemy, but I do 
not believe that is the case at the moment.
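
A minimal sketch of the kind of decorator being discussed, assuming we only
retry on OperationalError and cap the number of attempts (both of which are
my assumptions, not something established in this thread):

import functools
from sqlalchemy.exc import OperationalError

def retry_on_operational_error(session, max_attempts=3):
    """Roll back and re-run the wrapped function when the database reports
    a connection- or serialization-style failure."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except OperationalError:
                    session.rollback()          # leave the session usable
                    if attempt == max_attempts:
                        raise                   # give up, do not loop forever
        return wrapper
    return decorator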

I am no expert, so please someone correct me if I am wrong.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] translating a complex raw sql statement to SQLalchemy ORM query -- or, binding list/array params to from_statement()

2015-03-24 Thread David Ford
Greetings,

I have a project of transferring everything from an old API to a new one.
The new API uses the SQLAlchemy ORM exclusively and my old one used raw SQL
with the py-postgresql driver. I need help converting some of the more complex
statements into the ORM, or at least into a textual statement with some bound
parameters. This statement is an upsert type that will update if a row exists,
or insert if not. Additionally, I'm dealing with an array input (a Python
list) and I can't figure out how to bind subgroups using .params(). It is
easily done in the ORM with .in_(), but I lack the experience to convert
this set of queries into the ORM.

given the following data:

vid:   'CVE-2002-2443'
dept:  'SecEng'
subgroups: ['Archive', 'Desktop', 'DB', 'API']
status:'n/a'  

and the following existing SQL statement (postgresql):

WITH new_values (vid,dept,subgroups,status) AS (
VALUES (:vid, :dept, :subgroups, :status)),
persub AS (
SELECT :vid as vid,dept,subgroup
FROMsme_subgroups sg
WHERE   sg.dept = :dept
AND sg.subgroup = ANY (:subgroups)),
upsert AS (
UPDATE sme_vuln_status sv
SET status  = nv.status,
published   = now() at time zone 'UTC'
FROM new_values nv
WHERE   sv.vid  = nv.vid  
AND sv.dept = nv.dept  
AND sv.subgroup = ANY (nv.subgroups)
    RETURNING sv.*)
INSERT INTO sme_vuln_status (vid, dept, subgroup, status, published)
SELECT vid,dept,subgroup,:status,now() at time zone 'UTC'  
FROM persub
WHERE NOT EXISTS (SELECT *
FROM upsert up
WHERE   up.vid  = persub.vid
AND up.dept = persub.dept
AND up.subgroup = persub.subgroup
)  



How can I properly do either of the following? (A sketch of the first option
follows below.)

   - bind a list as a variable for a raw .from_statement().params(...,
   subgroups=?) or a similar set of methods
   - build an ORM query matching this raw statement?
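
As a sketch of the first option (not from the thread): with the psycopg2
driver, a Python list bound through text() is adapted to a PostgreSQL array,
so it can be passed straight into "= ANY (:subgroups)". The engine URL is a
placeholder and only a fragment of the full statement is shown:

from sqlalchemy import create_engine, text

engine = create_engine('postgresql+psycopg2://user:pass@localhost/dbname')

stmt = text(
    "SELECT dept, subgroup FROM sme_subgroups "
    "WHERE dept = :dept AND subgroup = ANY (:subgroups)"
)

with engine.connect() as conn:
    # psycopg2 turns the Python list into a PostgreSQL array bind value
    rows = conn.execute(
        stmt,
        {"dept": "SecEng", "subgroups": ["Archive", "Desktop", "DB", "API"]},
    ).fetchall()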
   

thank you :)
-d

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Connections and the session

2015-03-21 Thread David Chavez
Thank you so much. Knowing this, I assume that I have some debugging
ahead. Thank you again.
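
For what it's worth, a minimal sketch of the loop shape being discussed
(in-memory SQLite and a hypothetical Job model, both placeholders for the
real MySQL setup):

from sqlalchemy import Column, Integer, Boolean, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Job(Base):                         # hypothetical model
    __tablename__ = 'job'
    id = Column(Integer, primary_key=True)
    done = Column(Boolean, default=False)

engine = create_engine('sqlite://')      # placeholder for the MySQL URI
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

def process_once():
    session = Session()
    try:
        for job in session.query(Job).filter_by(done=False):
            job.done = True
        session.commit()                 # releases the connection to the pool
    except Exception:
        session.rollback()               # keep the connection clean on errors
        raise
    finally:
        session.close()                  # detached objects cannot reconnect later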

On 3/21/15, Michael Bayer mike...@zzzcomputing.com wrote:


 dcgh...@gmail.com wrote:

 Hello there,

 I have the following code structure (more or less) in several processes:

 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker

 engine = create_engine(some_mysql_dburi)
 session = sessionmaker(bind=engine)()

 while True:
query = session.query(...).filter(...)
# do something here from that query, update, insert new rows, etc.
session.commit()

 I am getting a too many connections error and it's not all the time, so
 I'm not sure what would it be. Although there are several processes
 running, I don't think at some time I'm connected to mysql server more
 than the amount of connections allowed, which by default it's about 150
 connections I think, and I'm not modifying the default value. So I must
 have some errors in this layout of my code. Here are some of the questions
 about the session and the connection(s) within I'd like to ask:
 1- How many opened connections are maintained by a single session object?
 I read somewhere it's only one, but, here it's my next

 if only one engine is associated with it, them just one connection.

 2- Does the session close the connection being used or requests for
 another?

 assuming the session isn’t in “autocommit” mode, the connection stays open
 until you call commit(), rollback(), or close().

 If it requests for a new one, does it close the previous (i.e., return it
 to the engine pool)?

 just one connection at a time yup

 3- Should I call session.close() right after the session.commit()
 statement?

 it’s not necessary, however all objects that are associated with that
 session are still able to make it start up a new transaction if you keep
 using those objects and hit upon unloaded attributes. close() would prevent
 that.

 If have to, do I have to put the session creation inside the while?

 not the way it is above; after that commit(), the Session there isn’t
 connected to anything. Unless the objects associated with it are being used
 in other threads, or something like that.

 I read that when the session gets garbage collected the connection(s)
 is(are) closed,

 that happens also but the commit() is enough (and close() makes it sure).

 so I could do this, but I don't know if it is a good use of the session.

 I read the docs many times and I didn't find anything solid that answers
 that questions to me, any help on the subject will be very appreciated,
 thanks in advance.

 the best section is this one:
 http://docs.sqlalchemy.org/en/rel_0_9/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it.

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.

 --
 You received this message because you are subscribed to a topic in the
 Google Groups sqlalchemy group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/sqlalchemy/kD7hl_jkCQI/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Duplicate entries in many-to-many relationship

2015-02-02 Thread David L.
Hi,

I have a problem that is essentially identical to 
this: 
https://groups.google.com/forum/#!searchin/sqlalchemy/association$20table$20duplicate/sqlalchemy/2myGWqEg8LY/lCBB5q4F0PIJ

However, I'm not sure how to proceed. I understand that I can simply check 
to see if a child exists before adding it in order to avoid an 
IntegrityError. However, I still want the link between parent and child to 
be created in the association table:

1 if child doesn't exist:
2 create new child
3 append child to parent's list of children (via association table)
4 else if child exists:
5 retrieve extant child
6 append child to parent's list of children (via association table)

Since this is handled automatically by SQLAlchemy, I have no idea how to go 
about doing step #6. The relationship is defined using a backref.
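
A get-or-create sketch of steps 1-6, using hypothetical Parent/Child models
and association table (the real names and columns will differ):

from sqlalchemy import Column, Integer, String, ForeignKey, Table, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

association = Table(
    'parent_child', Base.metadata,
    Column('parent_id', Integer, ForeignKey('parent.id'), primary_key=True),
    Column('child_id', Integer, ForeignKey('child.id'), primary_key=True),
)

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship('Child', secondary=association, backref='parents')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

def add_child(parent, name):
    child = session.query(Child).filter_by(name=name).first()
    if child is None:
        child = Child(name=name)          # step 2: create a new child
    if child not in parent.children:
        parent.children.append(child)     # steps 3/6: link via the association table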

MySQL is Percona 5.6.22-71.0.
SQLAlchemy version is 0.9.8.

Any help would be greatly appreciated.

Thanks,
David

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Polymorphic joined-table inheritance

2014-01-13 Thread david . ceresuela
@Kevin Horn: I removed the __init__ methods and the problem was not solved,
but thank you very much for your suggestion.

@Michael Bayer: I ran your test in my environment and it behaved as
expected (no WHERE clause), so maybe it is not SQLAlchemy's problem but
Pyramid's (or something else in between). Unfortunately, I can't spend more
time on this right now; if I have more time in the future I will try to
track down the problem, but that is not very likely.

Thank you very much for your time.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] Polymorphic joined-table inheritance

2014-01-10 Thread david . ceresuela
Hello all,

I'm trying to use SQLAlchemy inheritance in a (company's) project and I 
cannot seem to make it work.

First, the versions:
- Ubuntu: 12.10
- Python: 2.7.3
- SQLAlchemy: 0.9.1
- PostgreSQL: 9.1
- sqlite: 3.7.13
- Pyramid: 1.4.2
All of the problems happen in both PostgreSQL and sqlite.

I have a ClientVip class that inherits from Client. This is the trimmed 
code:

##
class Client(Base):
    __tablename__ = 'client'
    id = Column(Integer, primary_key=True)
    ctype = Column(String(16), nullable=False)
    webaddid = Column(Integer, ForeignKey('web.id'), nullable=False)
    parentid = Column(Integer, ForeignKey('client.id'), nullable=True)
    refinternal = Column(Integer, unique=True, nullable=False)
    ...
    notestx = Column(String(256), nullable=True)
    notesclient = Column(String(256), nullable=True)

    __mapper_args__ = {
        'polymorphic_identity': 'basic',
        'polymorphic_on': ctype
    }

    def __init__(self, webaddid=None, parentid=None, refinternal=None,
                 ...
                 notestx=None, notesclient=None):
        # self.ctype = 'basic'   ### This is actually commented out, but
        # might be important
        self.webaddid = webaddid
        self.parentid = parentid
        self.refinternal = refinternal
        ...
        self.notesclient = notesclient


class ClientVip(Client):
    __tablename__ = 'clientvip'
    id = Column(Integer, ForeignKey('client.id'), primary_key=True)
    legalname = Column(String(128), nullable=True)
    commercialname = Column(String(64), nullable=True)
    ...
    isclienttop = Column(Boolean, nullable=False)
    numlisttop = Column(Integer, nullable=True)

    # Fill the column 'ctype' with the value 'vip'
    __mapper_args__ = {
        'polymorphic_identity': 'vip',
    }

    def __init__(self, clientid=None, legalname=None, commercialname=None,
                 ...
                 isclienttop=False, numlisttop=None, **kwargs):

        # First initialize the basic client
        super(ClientVip, self).__init__(**kwargs)

        # And then the VIP client
        # self.ctype = 'vip'   ### This is actually commented out, but
        # might be important
        self.clientid = clientid
        self.legalname = legalname
        self.commercialname = commercialname
        ...
        self.isclienttop = isclienttop
        self.numlisttop = numlisttop
##

I have checked the code in
 - 
http://docs.sqlalchemy.org/en/rel_0_9/orm/examples.html?highlight=inheritance#module-examples.inheritance
 - 
http://docs.sqlalchemy.org/en/rel_0_9/_modules/examples/inheritance/joined.html
and I can't find anything different, but maybe are my eyes.


Problem: Querying client and clientvip
---

With this code I try to query all the clients [ clients = 
DBSession.query(Client) ]
and this is where the problems start, because the query it is issuing is:

##
SELECT client.id AS client_id, client.ctype AS client_ctype, 
client.refinternal AS client_refinternal, 
...
client.notestx AS client_notestx, client.notesclient AS client_notesclient 
FROM client 
WHERE client.ctype IN (NULL)
##

Why is there a where clause at all? Should not the query finish with the 
FROM client line?
And besides, why does it think that ctype must be NULL (instead of 'basic'
or 'vip', for instance)?


If I force the query with a filter [ clients = 
DBSession.query(Client).filter(Client.ctype == 'basic') ]
this is the query I get:

##
SELECT client.id AS client_id, client.ctype AS client_ctype, 
client.refinternal AS client_refinternal,
...
client.notestx AS client_notestx, client.notesclient AS client_notesclient 
FROM client 
WHERE client.ctype = %(ctype_1)s AND client.ctype IN (NULL)
##

which also looks for NULL values in the where clause.


And what happens if I query the ClientVip?
Well, there are no where clauses, so we are good:

##
SELECT clientvip.id AS clientvip_id, client.id AS client_id, client.ctype 
AS client_ctype, client.refinternal AS client_refinternal,
...
clientvip.isclienttop AS clientvip_isclienttop, clientvip.numlisttop AS 
clientvip_numlisttop 
FROM client JOIN clientvip ON client.id = clientvip.id
##


Appendix: Inserting client
-

You might have seen that in the __init__ methods I have commented out the
self.ctype = 'basic' line.
When this line is commented out (as it should be) and I try to insert a
Client, I get the following error:

##
IntegrityError: (IntegrityError) null value in column "ctype" violates 
not-null constraint
##

which leads me to think that the polymorphism I am trying to get is not 
working properly, because
I shouldn't need to force a value on the ctype column.
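
For comparison, here is a minimal self-contained joined-table inheritance
sketch (in-memory SQLite, trimmed columns, not the project's real models) in
which the discriminator is filled in from polymorphic_identity without ever
touching ctype in __init__:

from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Client(Base):
    __tablename__ = 'client'
    id = Column(Integer, primary_key=True)
    ctype = Column(String(16), nullable=False)
    __mapper_args__ = {'polymorphic_identity': 'basic', 'polymorphic_on': ctype}

class ClientVip(Client):
    __tablename__ = 'clientvip'
    id = Column(Integer, ForeignKey('client.id'), primary_key=True)
    legalname = Column(String(128), nullable=True)
    __mapper_args__ = {'polymorphic_identity': 'vip'}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([Client(), ClientVip(legalname='ACME')])
session.commit()                      # ctype is filled in as 'basic' / 'vip'
for c in session.query(Client):
    print(c.ctype)                    # and no "IN (NULL)" appears in the SELECT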


A plea for help
-

I have been looking at this code for quite some time and I can't figure out
what I am missing.
If any of you have any idea of what I could possibly be doing wrong, or any
ideas that I could
try, I will be very, very happy to hear them, because I have run out of
ideas.

[sqlalchemy] Re: PostgreSQL: interval columns, values measured in months or years

2013-12-25 Thread David Bolen
Wichert Akkerman wich...@wiggy.net writes:

 How does your code differ from 
 https://pypi.python.org/pypi/psycopg2-dateutils ?

Ah, I skipped that second URL earlier and only looked through the SO
discussion...

I guess the two major differences I see are that psycopg2-dateutils
appears to handle only the database-to-Python direction, not the
reverse, and that it also translates arrays of intervals.

More importantly, it handles negative intervals properly which I don't
currently do (wow, ~3.5 years without needing that).  So it wins for
accuracy.

-- David


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] Re: PostgreSQL: interval columns, values measured in months or years

2013-12-24 Thread David Bolen
Sibylle Koczian nulla.epist...@web.de writes:

 Exactly, that's it. The stackoverflow discussion was very instructive,
 thank you! I'll try out psycopg2-dateutils.

I'm a big fan of dateutil.  If you do use it, you may also choose to
bypass the use of timedelta entirely, since as you've seen it can fail
to accurately represent the database value (timedelta only has days,
minutes and seconds as components).

For example, I use the code below to map PostgreSQL interval columns
directly to dateutil's relativedelta subclass.

Any returned queries involving interval values will use the new
Interval type automatically, and will accurately round-trip.  You'll
have to use the Interval class explicitly when creating values to go
the other way.

The mapping occurs purely at the psycopg2 level.

-- David


#
# --
#  Interval Mapping
#
#  Override normal psycopg2 mapping so Interval types get mapped into a
#  relativedelta since it can more accurately represent them.  We use our own
#  relativedelta subclass as we need it to be a new style class for psycopg2
#  adaptation to work.  Other code must use our Interval class if trying to
#  store such data back into the database.
#
#  In this case there's nothing to do at the SQLAlchemy layer because for
#  PostgreSQL, it just uses the data as provided by psycopg2.
# --
#

from dateutil.relativedelta import relativedelta   # needed for the subclass below

class Interval(relativedelta, object):
    pass

import psycopg2
from psycopg2._psycopg import INTERVAL as psycopg2_INTERVAL
from psycopg2.extensions import new_type, register_type, register_adapter, AsIs
import re

interval_re = re.compile(' *'
                         '((?P<years>\d+) y\D*)?'
                         '((?P<months>\d+) m\D*)?'
                         '((?P<days>\d+) d\D*)?'
                         '((?P<hours>\d+):(?P<minutes>\d+):(?P<seconds>\d+))?'
                         '(\.(?P<subseconds>\d+))?$')


# Configure handling for supplying an Interval to store in the database

def adapt_interval(interval):
    adapter = AsIs('%d years %d months %d days %02d:%02d:%02d.%06d' %
                   (interval.years, interval.months, interval.days,
                    interval.hours, interval.minutes, interval.seconds,
                    interval.microseconds))
    return adapter

register_adapter(Interval, adapt_interval)

# Configure handling upon receiving an Interval from the database

def cast_interval(value, cur):
    if value is None:
        return None

    m = interval_re.match(value)
    if m:
        vals = m.groupdict(0)
        # Most everything is direct, but subseconds has a varying precision,
        # so force it to be microseconds if we had a value
        microseconds = vals['subseconds']
        if microseconds:
            microseconds = (microseconds + '000000')[:6]
        return Interval(years=int(vals['years']),
                        months=int(vals['months']),
                        days=int(vals['days']),
                        hours=int(vals['hours']),
                        minutes=int(vals['minutes']),
                        seconds=int(vals['seconds']),
                        microseconds=int(microseconds))

    else:
        raise psycopg2.InterfaceError("Bad interval representation: %r" % value)

INTERVAL = new_type(psycopg2_INTERVAL.values, 'INTERVAL', cast_interval)
register_type(INTERVAL)
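
Hypothetical usage of the mapping above, assuming a placeholder DSN and an
'events' table with an interval column 'duration' (neither of which is part
of this post):

conn = psycopg2.connect('dbname=test')              # placeholder DSN
cur = conn.cursor()
cur.execute("SELECT duration FROM events WHERE id = %s", (1,))
duration = cur.fetchone()[0]                        # comes back as an Interval
cur.execute("UPDATE events SET duration = %s WHERE id = %s",
            (Interval(years=1, months=6), 1))       # adapted on the way in
conn.commit()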


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] Re: Alembic 0.6.0 released

2013-07-20 Thread David Szotten
South currently uses state freezing (which always felt messy), but I believe 
the new rewrite for inclusion in Django core is moving to a way of 
describing schema changes in a declarative form that can be introspected (so 
the current state is calculated from all previous migrations instead of some 
frozen state).

This also allows large chunks of historic migrations to be squashed into 
new shortcut migrations, which sounds nice.

I'm not familiar with Alembic (nor South) internals, but this sounds like 
an interesting idea.

Is it something you have considered?

-david

On Friday, 19 July 2013 23:00:21 UTC+1, Michael Bayer wrote:

 Hey gang - 

 Alembic 0.6.0 has been sitting in the hopper for awhile so I figured I'd 
 put it out. The two changes here I'm most excited about are no longer 
 needing 2to3 for Python 3, and also I've made the display of history much 
 nicer, since I used that a lot and I needed something easier to read - it 
 also allows for ranges now too. 

 There's some other nifty changes that have more potential than they might 
 initially seem to, including that you can now add any number of custom 
 arguments to any command using the new -x option, and then parsing for 
 them in your env.py using get_x_argument(), and there's also a new feature 
 that allows you to limit autogenerate at the column level, in addition to 
 the table and schema level as before. 

 There's a bunch of issues that have piled up in Alembic, the vast majority 
 concern autogenerate, and there's another thread that refers to how 
 Alembic's versioning model works.  There are some big ideas in place to 
 rework both of these systems in a fairly dramatic way. 

 For autogenerate, the idea is that we would in most cases not rely upon 
 database reflection anymore, instead we will try to compare schemas based 
 on different versions of the code directly.  This would require that the 
 current state of the incoming MetaData() is written out to some serialized 
 format, which I'm thinking might be nice as JSON - though it can start as 
 pickle.    Creating a MetaData-JSON serializer/deserializer would be great 
 but also a very big job.   The advantage to comparing Python-generated 
 metadata objects is that we are guaranteed to catch all changes in the 
 schema perfectly, defaults, types, arguments on types, all of it, without 
 any reliance on the quirks of database reflection - since the purpose of 
 autogenerate is really just to detect what Tables/Columns have been added, 
 removed or changed in the code, versus what the code was before.   
 Decisions have to be made here as to what role reflection will continue to 
 play, what the upgrade path will be for existing deployments, does the 
 behavior kick in automatically or does it require some configuration, etc. 
   

 The other change to the versioning model is just as dramatic and involves 
 reworking the versioning to work based on an open-ended dependency graph, 
 meaning any particular revision is dependent on zero or more previous 
 revisions, rather than all revisions being lined up in a straight line.   
 The upgrade process would then find itself at a certain target by 
 navigating the topological sort of this dependency graph, pretty much the 
 same idea as how SQLAlchemy's unit of work functions.   With this model, 
 the issue of merging branches mostly goes away, as two unrelated migrations 
 from different branches can both be pulled in and just be siblings of each 
 other without issue. 

 Neither of these two changes are something I personally need in any urgent 
 way - I generally treat database migrations as a mostly manual process in 
 any case and I'm happy to edit the mistakes in my autogenerate migrations 
 by hand and to deal with the very occasional merge manually.    I do get a 
 lot of complaints about the many edge cases present in autogenerate at 
 least so at some point it would be nice to get to this issue. 

 Anyway, 0.6.0 is ready to go, happy downloading: 

 https://pypi.python.org/pypi/alembic 

 https://alembic.readthedocs.org/en/latest/changelog.html#change-0.6.0 




-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] after_bulk_update table/column info?

2013-07-07 Thread David Szotten
Hi,

Is it possible to find out which class/columns were included in the update 
from an `after_bulk_update` event handler?

From what I can tell from the source, this information lives on the 
`BulkUpdate(Evaluate in my case)` object which isn't passed to the event 
handler, and I can't figure out how to access this info from the available 
variables (session, query, query_context, result)

Thanks
-d

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] after_bulk_update table/column info?

2013-07-07 Thread David Szotten
Awesome, thanks!

(For the record, I was ultimately looking for the field types involved in
the update, which I found with your help above as:

for bind in result.context.compiled.binds.values():
    field_type = bind.type
    if isinstance(field_type, MyField):
        raise NotImplementedError("Bulk updates are not yet supported")


Thanks again!



On 7 July 2013 17:14, Michael Bayer mike...@zzzcomputing.com wrote:

 this should probably be available as some documented helper function for
 now, it's tricky to figure out up front what parameters these events will
 need.  Really these bulk events should get to know all the arguments that
 were passed, including the synchronize_session argument which isn't present
 at all right now.

 that particular info you can get like this:

 @event.listens_for(Session, "after_bulk_update")
 def bulk_ud(sess, query, ctx, result):
     print result.context.compiled.params





 On Jul 7, 2013, at 5:51 AM, David Szotten davidszot...@gmail.com wrote:

 Hi,

 Is it possible to find out which class/columns were included in the update
 from an `after_bulk_update` event handler?

 From what i can tell from the source, this information lives on the
 `BulkUpdate(Evaluate in my case)` object which isn't passed to the event
 handler, and I can't figure out how to access this info from the available
 variables (session, query, query_context, result)

 Thanks
 -d

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.

 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.




  --
 You received this message because you are subscribed to a topic in the
 Google Groups sqlalchemy group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/sqlalchemy/SwOckp6-Vh8/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.




-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] Re: MetaBase.create_all broken on parallel execution

2013-04-13 Thread David Lemayian
This happened to me as well. It happened after installing libsqlite3-dev and
was fixed by running `apt-get remove libsqlite3-dev`.

Best,
David.

On Thursday, 8 November 2012 10:19:06 UTC+3, Eugeny Klementev wrote:

 Hi all,

 I use sqlalchemy with postgresql backend.

 On script starting i use code to initialize all table:

 engine = create_engine(settings.database_engine)
 Base.metadata.create_all(engine)
 session = Session(bind = engine)

 And I run two instances of this script.

 One of instances broken with message:

 Traceback (most recent call last):
   File ./datadb.py, line 40, in module
 Base.metadata.create_all(engine)
   File /usr/lib/python2.7/dist-packages/sqlalchemy/schema.py, line 2515, 
 in create_all
 tables=tables)
   File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
 2234, in _run_visitor
 conn._run_visitor(visitorcallable, element, **kwargs)
   File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
 1904, in _run_visitor
 **kwargs).traverse_single(element)
   File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py, line 
 86, in traverse_single
 return meth(obj, **kw)
   File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/ddl.py, line 
 67, in visit_metadata
 self.traverse_single(table, create_ok=True)
   File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py, line 
 86, in traverse_single
 return meth(obj, **kw)
   File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/ddl.py, line 
 86, in visit_table
 self.connection.execute(schema.CreateTable(table))
   File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
 1405, in execute
 params)
   File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
 1496, in _execute_ddl
 compiled
   File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
 1646, in _execute_context
 context)
   File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
 1639, in _execute_context
 context)
   File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, 
 line 330, in do_execute
 cursor.execute(statement, parameters)
 sqlalchemy.exc.IntegrityError: (IntegrityError) duplicate key value 
 violates unique constraint pg_type_typname_nsp_index
 DETAIL:  Key (typname, typnamespace)=(datadb_datapoint_id_seq, 2200) 
 already exists.
  '\nCREATE TABLE datadb_datapoint (\n\tid SERIAL NOT NULL, \n\tunit_id 
 INTEGER NOT NULL, \n\ttimestamp INTEGER NOT NULL, \n\tvalue_type VARCHAR 
 NOT NULL, \n\tvalue VARCHAR NOT NULL, \n\tPRIMARY KEY (id)\n)\n\n' {}

 I understand this is a conflict between the two instances creating the same 
 tables at the same time.

 I hoped that SQLAlchemy had some locking mechanism for this, but my 
 investigations have failed.

 Does anybody know a solution to my problem?
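
One possible workaround, as a sketch rather than something from this thread:
serialize the DDL across processes with a PostgreSQL advisory lock so that
only one create_all() runs at a time. The model and URL below are
placeholders:

from sqlalchemy import Column, Integer, create_engine, text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class DataPoint(Base):                     # stand-in for the real model
    __tablename__ = 'datadb_datapoint'
    id = Column(Integer, primary_key=True)

engine = create_engine('postgresql+psycopg2://user:pass@localhost/dbname')

with engine.begin() as conn:
    # Only one process at a time can hold this advisory lock, so the
    # concurrent CREATE TABLE / sequence race goes away; the lock is
    # released automatically when the transaction ends.
    conn.execute(text("SELECT pg_advisory_xact_lock(421337)"))
    Base.metadata.create_all(bind=conn, checkfirst=True)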


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] Full table scan using Oracle String indexes

2012-10-25 Thread David Moore
Hello, 

In line with what is required for Python 3, would it not make sense to insist 
across the board that bind values for sqlalchemy.String should be bytestrings 
and that bind values for sqlalchemy.Unicode should be unicode strings, 
converting if necessary? I don't think I understand why you would never want 
that. 

regards, 
Dave Moore 

- Original Message -

 Hi,

 Having had the same problem as above, I would like to use the
 convert_unicode='force' flag at engine configuration level.
 It seems that String tests the value of convert_unicode at its own
 level as well as engine level, but it isn't the case for the 'force'
 value check :

 def bind_processor(self, dialect):
     if self.convert_unicode or dialect.convert_unicode:
         if dialect.supports_unicode_binds and \
                 self.convert_unicode != 'force':
             ...

 def result_processor(self, dialect, coltype):
     wants_unicode = self.convert_unicode or dialect.convert_unicode
     needs_convert = wants_unicode and \
         (dialect.returns_unicode_strings is not True or
          self.convert_unicode == 'force')

 Thus I have to create a type decorator for all my Strings even if I
 have set the right flag at engine configuration level...

 Yann

 Le jeudi 2 août 2012 23:22:22 UTC+2, Michael Bayer a écrit :
  so the convert_unicode='force' flag is not ideal here as that
  will
  spend most of it's time checking for decodes necessary on the
  result
  set side, which is a lot of wasted effort. You can customize how
  strings are handled on the bind side, including per-dialect
  behavior, using a custom type:
 

  class MyStringType(TypeDecorator):
 
  impl = String
 

  def process_bind_param(self, value, dialect):
 
  if value is not None and dialect.name == oracle:
 
  value = value.encode('utf-8')
 

  then replace usage of the String(length=XYZ) type with
  MyStringType(length=XYZ).
 

  docs:
  http://docs.sqlalchemy.org/en/rel_0_7/core/types.html#types-custom
 

  On Aug 2, 2012, at 5:41 AM, David Moore wrote:
 

   Hi,
 
  
 
   Further searching seems to indicate this is an Oracle issue, not
   a
   cx_Oracle issue.
 
   http://www.digipedia.pl/usenet/thread/15912/2814/ - same problem
   with cx_Oracle
 
   http://support.unify.com/supportforum/viewtopic.php?f=40t=3823 -
   happening with perl drivers.
 
   http://tao.qshine.com/note/note_ora.htm - happening with ADO.NET
   drivers.
 
  
 
   There's also evidence this problem occurs with MS SQL Server as
   well:
 
   http://www.microsoftfaqs.com/Articles/1148439/Index_not_used_when_UNICODE_%3D_true
   - SQL Server 2005
 
   http://blog.brianhartsock.com/2008/12/14/nvarchar-vs-varchar-in-sql-server-beware/
 
  
 
   Although, it seems in SQL Server at least, it still uses the
   index,
   just in a slower manner, which is probably why we haven't noticed
   this issue yet.
 
  
 
   It seems to me that the place to fix this is when the type of the
   column is known, which is when sqlalchemy binds the parameter.
   There's some discussion in one of those links that, at the sql
   statement preparation level, you don't know what the column
   datatype is, so you can't fix it there.
 
  
 
   Would it be possible for String datatypes to detect unicode
   values
   and encode them with the database charset?
 
  
 
   As a lower-level temporary solution, you can get cx_Oracle to
   encode all unicode parameters to strings, but that's obviously
   the
   wrong thing to do if you have any Unicode columns. This snippet:
 
  
 
   def InputTypeHandler(cursor, value, numElements):
 
   if isinstance(value, unicode):
 
   return cursor.var(str, arraysize = numElements,
 
   inconverter = lambda x: x.encode(cursor.connection.nencoding))
 
  
 
   And then when creating the connection:
 
  
 
   connection.inputtypehandler = InputTypeHandler
 
  
 
   thanks for the help,
 
  
 
   --
 
   David Moore
 
   Senior Software Engineer
 
   St. James Software
 
   Email: dav...@sjsoft.com
 
  
 
  
 
   - Original Message -
 
   And for anyone else experiencing this issue, there was a subtle
 
   difference in the execution plans that's now apparent. The
 
   statement which only selects colid runs a FAST FULL SCAN and
 
   successfully converts the unicode parameter using SYS_OP_C2C,
   and
 
   uses the index. When you select both colid and message, it runs
   a
 
   FULL SCAN, and even though it seems to detect the conversion can
   be
 
   done, it does not use the index at all. Perhaps this is actually
   an
 
   Oracle issue?
 
  
 
   --
 
   David Moore
 
   Senior Software Engineer
 
   St. James Software
 
   Email: dav...@sjsoft.com
 
  
 
  
 
  
 
   - Original Message -
 
   Hi Michael,
 
  
 
   Indeed, you are correct - adding the message column to the
 
   cx_oracle
 
   query shows the same behaviour as the sqlalchemy query. Sorry I
 
   missed that.
 
  
 
   I will take these results to the cx_oracle list on this basis.
   On
 
   my
 
   second question, though, is there a way to enforce

Re: [sqlalchemy] Full table scan using Oracle String indexes

2012-10-25 Thread David Moore
Hi Yann, 

Sorry for the confusion - my email was musing about what I thought sqlalchemy 
should do, not what I thought you should do. 

The way we have solved this is using the cx_Oracle inputtypehandler hook. So 
we've overriden where the connection is created by sqlalchemy, and then used 
the following snippet: 

def OracleInputTypeHandler(cursor, value, numElements):
    if isinstance(value, unicode):
        return cursor.var(str, arraysize=numElements,
                          inconverter=lambda x: x.encode(cursor.connection.nencoding))

connection.inputtypehandler = OracleInputTypeHandler

This converts all unicode bind parameters passed into cx_Oracle into the client 
encoding. 
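
One way to wire that up, sketched with SQLAlchemy's pool-level "connect"
event rather than whatever mechanism was actually used here (the DSN is a
placeholder and OracleInputTypeHandler is the function above):

from sqlalchemy import create_engine, event

engine = create_engine('oracle+cx_oracle://user:pass@dsn')   # placeholder URL

@event.listens_for(engine, "connect")
def set_input_type_handler(dbapi_connection, connection_record):
    # Attach the handler to every raw cx_Oracle connection in the pool.
    dbapi_connection.inputtypehandler = OracleInputTypeHandler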

HTH, 
Dave Moore 

- Original Message -

 Le jeudi 25 octobre 2012 16:21:49 UTC+2, David Moore a écrit :
  Hello,
 

 Hi,
  In line with what is required for Python 3, would it not make sense
  to insist across the board that bind values to sqlalchemy.String
  should be bytestrings and that bind values to sqlalchemy.Unicode
  should be unicode strings, converting if necessary?
 
 I am using the Unicode class and not String directly. But the code I
 was referring to is in the String class.

  I don't think I understand why you would not want that ever.
 

 I am using unicode strings, but I'd like sqlalchemy to convert them
 to the dialect encoding in order to avoid a problem with oracle
 indexes not being used when querying with unicode strings.

  regards,
 
  Dave Moore
 

   Hi,
  
 

   Having had the same problem as above, I would like to use the
   convert_unicode='force' flag at engine configuration level.
  
 
   It seems that String tests the value of convert_unicode at its
   own
   level as well as engine level, but it isn't the case for the
   'force'
   value check :
  
 

   def bind_processor(self, dialect):
  
 
   if self.convert_unicode or dialect.convert_unicode:
  
 
   if dialect.supports_unicode_binds and \
  
 
   self.convert_unicode != 'force' :
  
 
   ...
  
 

   def result_processor(self, dialect, coltype):
  
 
   wants_unicode = self.convert_unicode or dialect.convert_unicode
  
 
   needs_convert = wants_unicode and \
  
 
   (dialect.returns_unicode_strings is not True or
  
 
   self.convert_unicode == 'force' )
  
 

   Thus I have to create a type decorator for all my Strings even if
   I
   have set the right flag at engine configuration level...
  
 

   Yann
  
 

   Le jeudi 2 août 2012 23:22:22 UTC+2, Michael Bayer a écrit :
  
 
so the convert_unicode='force' flag is not ideal here as that
will
spend most of it's time checking for decodes necessary on the
result
set side, which is a lot of wasted effort. You can customize
how
strings are handled on the bind side, including per-dialect
behavior, using a custom type:
   
  
 

class MyStringType(TypeDecorator):
   
  
 
impl = String
   
  
 

def process_bind_param(self, value, dialect):
   
  
 
if value is not None and dialect.name == oracle:
   
  
 
value = value.encode('utf-8')
   
  
 

then replace usage of the String(length=XYZ) type with
MyStringType(length=XYZ).
   
  
 

docs:
http://docs.sqlalchemy.org/en/rel_0_7/core/types.html#types-custom
   
  
 

On Aug 2, 2012, at 5:41 AM, David Moore wrote:
   
  
 

 Hi,
   
  
 

   
  
 
 Further searching seems to indicate this is an Oracle issue,
 not
 a
 cx_Oracle issue.
   
  
 
 http://www.digipedia.pl/usenet/thread/15912/2814/ - same
 problem
 with cx_Oracle
   
  
 
 http://support.unify.com/supportforum/viewtopic.php?f=40t=3823
 -
 happening with perl drivers.
   
  
 
 http://tao.qshine.com/note/note_ora.htm - happening with
 ADO.NET
 drivers.
   
  
 

   
  
 
 There's also evidence this problem occurs with MS SQL Server
 as
 well:
   
  
 
 http://www.microsoftfaqs.com/Articles/1148439/Index_not_used_when_UNICODE_%3D_true
 - SQL Server 2005
   
  
 
 http://blog.brianhartsock.com/2008/12/14/nvarchar-vs-varchar-in-sql-server-beware/
   
  
 

   
  
 
 Although, it seems in SQL Server at least, it still uses the
 index,
 just in a slower manner, which is probably why we haven't
 noticed
 this issue yet.
   
  
 

   
  
 
 It seems to me that the place to fix this is when the type of
 the
 column is known, which is when sqlalchemy binds the
 parameter.
 There's some discussion in one of those links that, at the
 sql
 statement preparation level, you don't know what the column
 datatype is, so you can't fix it there.
   
  
 

   
  
 
 Would it be possible for String datatypes to detect unicode
 values
 and encode them with the database charset?
   
  
 

   
  
 
 As a lower-level temporary solution, you can get cx_Oracle to
 encode all unicode parameters to strings, but that's
 obviously
 the
 wrong thing to do if you have any

[sqlalchemy] Re: SQLAlchemy ORM Object caching with relationships and invalidation

2012-09-27 Thread David McKeone
Hi Mike,

I'll let the others add more detail about your questions, but for the broad 
strokes I thought I'd add that I ran into similar issues with my simple 
caching method and ultimately ended up using the new Dogpile.cache stuff 
that Mike recommended on his blog:
http://techspot.zzzeek.org/2012/04/19/using-beaker-for-caching-why-you-ll-want-to-switch-to-dogpile.cache/
(The example file is here: 
https://groups.google.com/d/msg/sqlalchemy/MrKA6AygZ14/o95dmUdLS70J )  It 
is far more integrated with the internals of SQLAlchemy relationship 
management, so it may behave better.


On Thursday, September 27, 2012 7:28:12 AM UTC+2, mikepk wrote:

 Unlike most of the SQLAlchemy caching examples, I'm not using query-level 
 caching but rather caching at the instance / object level. I'm using the 
 ORM and I have a SQLAlchemy object that I load and occasionally store in a 
 second level cache. I have a custom 'get' loading function that checks the 
 second level cache based on primary key and returns the cached copy if 
 present and populates the cache if not.

 I also have an event handler on 'after_commit' that checks the session for 
 dirty objects that are cached (with a simple class property of 
 __cached__=True) and does either an invalidate or write-back to the cache 
 when these objects are dirty and cached.

 This pattern is pretty simple and works great.

 My problem is that I'd like to be able to use this same (or similar) 
 pattern for more complex SQLAlchemy objects containing relationships. I'd 
 like the cache to contain not only the 'base' object but all (lazy) loaded 
 related objects. I have no problem storing and retrieving these objects 
 (and relationships) from the cache, my problem comes from the 
 invalidation/write-back part.

 Lets say I have object A that's in the cache and it has a relationship, 
 A.B that was stored along with it in the cache. If I retrieve A from the 
 cache I can get A.B and I get the cached copy of B. If B is modified, 
 however, then my simple cache invalidator event handler doesn't see that B 
 is cached (no __cached__ property on B) and B gets committed to the 
 database without the cached copy of A being invalidated. Now subsequent 
 cache hits of A will have a stale A.B relationship.

 So my question is, is there a clean / simple way to mark A for 
 invalidation when B is modified? I've come up with a few schemes but all of 
 them seem brittle, complicated, and my intuition is telling me that I'm 
 reinventing the wheel; that some facility in SQLAlchemy itself may be 
 useful in walking this relationship graph to find loaded, connected 
 relationships whose cached representations might need to be invalidated. 
 Alternatively, is there another pattern that would be better suited to this 
 kind of object-level caching?

 Thanks!
 -Mike


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/s9rf5eY9f2IJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Dogpile.cache w/ SQLAlchemy

2012-09-26 Thread David McKeone
This example has worked really well so far, but I thought I'd mention that 
I ran into few small things and made some changes.

1) This was minor. I wanted to be able to use the cache_regions on 
multiple databases from the same program so I made the CachingQuery class 
take a parameter with the regions dict it operates on -- simple enough, and 
similar to the Beaker example which takes a cache manager instance.

2) This was a little more interesting.  I replaced an existing simplistic 
caching implementation that stored database results with the new 
dogpile.cache implementation and I saw my request times slow down from 16ms 
to 66ms.  It kind of shocked me that it was so much slower, so I ran 
cProfile and gprof2dot against it to see what was happening.  I found out 
that the key generation algorithm in _key_from_query -- specifically 
visitors.traverse -- was causing the slow-down.  Once I added a small 
feature to use explicit cache keys, it was back to normal.  Admittedly the 
tables I'm caching in this case are quite large; three preference tables 
that have 100+ columns each.  However, I think it does serve as a bit of a 
warning, since I imagine the traverse only gets slower as the query gets 
more complicated (although I haven't tested that).  Automatic cache-key 
generation based on the query is nice, but there is certainly a price to be 
paid in some circumstances.

Anyway, thanks again for the example Mike.  I imagine that some (maybe all) 
of these things could be due to the fact that it is just an example, but 
since it is new I thought I'd pass along my experience.  It's simplicity 
certainly helped me to see where all the pieces fit and to start making 
these changes.


On Monday, September 24, 2012 4:15:29 PM UTC+2, Michael Bayer wrote:

 yes, the example in 0.8 should be changed to this, but I haven't done it 
 yet.   dogpile's usage is similar to Beaker as far as the general calling 
 pattern.   A tutorial format of the example using dogpile is attached.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/hO3TuPbT8AAJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Dogpile.cache w/ SQLAlchemy

2012-09-26 Thread David McKeone
Fair enough about the example stuff; I kind of figured, but thought I'd 
just share my experience.  It's such a fine line between authors getting a 
bead on real-world usage vs. hearing the same question over and over again.

Previously, I had it just using the binds that were in the Query, but we 
 had cases where bound parameters were embedded in the statement too 
 affecting things.   So as a quick fix I changed it to just traverse the 
 whole statement, but this can be optimized significantly more than it is. 
  It's the way it is so that it works more completely on the first 
 go-around.


Is there a thread I can pull at here?  I'd love to hear about performant 
key generation options that are more flexible than explicit cache keys 
(obviously that is highly prone to developer error).   Looking at the query 
class, I'm guessing you are still using md5(unicode(query.statement)) for 
the query portion of the key, but what are you using for binds?  (Sorry, 
not quite familiar with the internals of SQLAlchemy yet -- and I can't find 
that attribute or anything similar in the source)  Are there other 
alternatives for cache keys that are worth exploring?  (and feel free to 
just post links or what have you, I'm happy to do the reading)

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/JvefOSuw1FwJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Dogpile.cache w/ SQLAlchemy

2012-09-24 Thread David McKeone
As per this 
comment: 
http://techspot.zzzeek.org/2012/04/19/using-beaker-for-caching-why-you-ll-want-to-switch-to-dogpile.cache/#comment-503780670

Has any work been put into an example for using Dogpile.cache with 
SQLAlchemy?  I'm about to embark on implementing caching and I don't want 
to re-invent the wheel by creating my own adaptation if I don't need to.
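
For anyone landing here first, the basic dogpile.cache region setup looks
roughly like this (a generic sketch, not the example file from this thread;
the in-memory backend is just for illustration):

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.memory',          # swap for memcached/redis in production
    expiration_time=300,
)

@region.cache_on_arguments()
def load_preferences(user_id):
    # Place the expensive SQLAlchemy query here; the return value is
    # cached per user_id for five minutes.
    return {'user_id': user_id}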


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/zf0HBD7s7SwJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Dogpile.cache w/ SQLAlchemy

2012-09-24 Thread David McKeone
Great, thanks Mike!

On Monday, September 24, 2012 4:15:29 PM UTC+2, Michael Bayer wrote:

 yes, the example in 0.8 should be changed to this, but I haven't done it 
 yet.   dogpile's usage is similar to Beaker as far as the general calling 
 pattern.   A tutorial format of the example using dogpile is attached.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/ULgsSiDYpfIJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] SELECT * FROM function() with declarative

2012-09-21 Thread David McKeone
ok, I think I found the solution.  Thought I'd post what I did and what I 
tried; partly because it might help someone else and partly because someone 
may have suggestions for a better way.

This is what worked:

my_function = func.my_function(...args...)
func_subquery =  select(['id'], from_obj=[my_function])
results = 
session.query(Table1).join(Table2).filter(Table1.id.in_(func_subquery))

and results in a query that looks like this:

SELECT table1.id, table1.col1, table1.col2... FROM table1 JOIN table2 ON 
table1.t2_id = table2.id WHERE table1.id IN( SELECT id FROM 
my_function(...args...) )

This differs somewhat from what I was initially thinking:

SELECT table1.id, table1.col1, table1.col2... FROM (SELECT * FROM 
my_function(...args...)) as table1 JOIN table2 ON table1.t2_id = table2.id 

When I run EXPLAIN ANALYZE in PostgreSQL the IN() version seems to be 
slightly more efficient according to the planner (and real run times are 
more or less the same)
IN: Nested Loop  (cost=12.75..889.97 rows=35432 width=222) (actual 
time=42.200..42.209 rows=2 loops=1)
JOIN: Nested Loop  (cost=0.25..4386.37 rows=1000 width=226) (actual 
time=41.052..41.061 rows=2 loops=1)

-
Things I tried on the path to getting my answer:
-
I attempted to use a combination of a label and an alias with my function 
(modelled after this: 
http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html?highlight=aliased#sqlalchemy.orm.aliased):

my_function = func.my_function(...args...).label(Table1.__tablename__)   
my_function_alias = aliased(Table1, alias=my_function, adapt_on_names=True)

but that failed miserably.  Using the labeled function in a regular core 
select resulted in an AttributeError exception, so I think that might have 
been part of the problem:

>>> print select('*', from_obj=[my_function])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/sqlalchemy/sql/expression.py", line 1790, in __str__
    return unicode(self.compile()).encode('ascii', 'backslashreplace')
  File "/Library/Python/2.7/site-packages/sqlalchemy/sql/expression.py", line 1778, in compile
    return self._compiler(dialect, bind=bind, **kw)
  File "/Library/Python/2.7/site-packages/sqlalchemy/sql/expression.py", line 1784, in _compiler
    return dialect.statement_compiler(dialect, self, **kw)
  File "/Library/Python/2.7/site-packages/sqlalchemy/sql/compiler.py", line 277, in __init__
    engine.Compiled.__init__(self, dialect, statement, **kwargs)
  File "/Library/Python/2.7/site-packages/sqlalchemy/engine/base.py", line 705, in __init__
    self.string = self.process(self.statement)
  File "/Library/Python/2.7/site-packages/sqlalchemy/engine/base.py", line 724, in process
    return obj._compiler_dispatch(self, **kwargs)
  File "/Library/Python/2.7/site-packages/sqlalchemy/sql/visitors.py", line 72, in _compiler_dispatch
    return getter(visitor)(self, **kw)
  File "/Library/Python/2.7/site-packages/sqlalchemy/sql/compiler.py", line 873, in visit_select
    froms = select._get_display_froms(existingfroms)
  File "/Library/Python/2.7/site-packages/sqlalchemy/sql/expression.py", line 4807, in _get_display_froms
    toremove = set(itertools.chain(*[f._hide_froms for f in froms]))
AttributeError: '_Label' object has no attribute '_hide_froms'


Using the aliased function would include the original table as well as the 
aliased function, and without a join condition it would just do a 
cartesian product:

>>> results = session.query(my_function_alias).first()
SELECT table1.id, table1.col1, table1.col2... FROM table1, (SELECT * FROM 
my_function(...args...))

So that didn't work either.  After doing this I realized that if I have to 
include the table and the function sub-select I might as well attempt it as 
an IN(), and that is what brought me to my final answer.

Thanks again for your suggestion Mike, it definitely put me on the right 
path to the solution.


On Thursday, September 20, 2012 7:10:52 PM UTC+2, Michael Bayer wrote:

 you can make a function act like a table by creating a select() out of it, 
 check out the fifth example at 
 http://docs.sqlalchemy.org/en/rel_0_7/core/tutorial.html#functions

 On Sep 20, 2012, at 1:09 PM, David McKeone wrote:

 Hi M,

 Is creating something in the database the only way to do it?  How would I 
 coerce the view's return type into my object?  How do I substitute the view 
 in the FROM part of my clause instead?


 On Thursday, September 20, 2012 5:52:28 PM UTC+2, A.M. wrote:


 On Sep 20, 2012, at 11:49 AM, David McKeone wrote: 

 I've googled around and can't seem to find an answer to this, so hopefully 
 someone knows how to do it here. 
  
  I'm using PostgreSQL and I have a PL/PGSQL function that filters and 
 modifies a particular table based on a number of conditions and then 
 returns a set of rows as the result

[sqlalchemy] SELECT * FROM function() with declarative

2012-09-20 Thread David McKeone
I've googled around and can't seem to find an answer to this, so hopefully 
someone knows how to do it here.

I'm using PostgreSQL and I have a PL/PGSQL function that filters and 
modifies a particular table based on a number of conditions and then 
returns a set of rows as the result.  This pattern has allowed the system 
to use the functions as if they were tables so that joins can still be done 
on the resulting values.

So instead of:

SELECT * FROM table1 INNER JOIN table2 ON table1.t2_id = table2.id

I do:

SELECT * FROM my_function( ...args... ) as table1 INNER JOIN table2 
ON table1.t2_id = table2.id

That part works ok in plain SQL (and as well in the system I'm converting 
from)

So now with SQLAlchemy I have my declarative definitions for those tables:

class Table1(Base):
    __tablename__ = 'table1'

    id = Column()
    t2_id = Column(ForeignKey())

    table2 = relationship( ... )  # Join condition is specified explicitly

class Table2(Base):
    __tablename__ = 'table2'

    id = Column()

and I'm trying to figure out how I would execute a query that looks like 
this:

result = 
session.query(Table1).join(Table1.table2).options(contains_eager(Table1.table2))

but using a function to 'fake' Table1 instead.  So basically I'm attempting 
to get SQLAlchemy to treat the result of my function as if it was the 
normal Table1 object.  I've tried using select_from() to inject my call to 
func.my_function() but that doesn't seem to work and since what I'm doing 
seems like it might be tricky (or not portable across SQL) I thought I'd 
ask if it's even possible.

Thanks for any help!



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/bYCl5PH0yNIJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Portable date intervals?

2012-09-04 Thread David McKeone
Great, thanks Mike!  I only need to do PostgreSQL, but I will look into the 
@compiles decorator to make a more general solution.
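
For the archives, a rough, untested sketch of what that @compiles approach 
could look like (the days_ago construct and the exact SQL strings rendered 
per dialect are my own placeholders, not a recommendation):

from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnElement
from sqlalchemy.types import DateTime

class days_ago(ColumnElement):
    # SQL expression meaning "now minus N days"
    type = DateTime()

    def __init__(self, days):
        self.days = days

@compiles(days_ago, 'postgresql')
def _pg_days_ago(element, compiler, **kw):
    return "(CURRENT_TIMESTAMP - INTERVAL '%d DAYS')" % element.days

@compiles(days_ago, 'sqlite')
def _sqlite_days_ago(element, compiler, **kw):
    return "datetime('now', '-%d days')" % element.days

# usage would then be something along the lines of:
#     session.query(Order).filter(Order.created_at > days_ago(30))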


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/JwSkrcDaPDwJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] SQLAlchemy with Flask -- hybrid models?

2012-08-23 Thread David McKeone


On Thursday, August 23, 2012 3:01:50 AM UTC+1, Michael Bayer wrote:


 On Aug 22, 2012, at 5:33 PM, David McKeone wrote: 

  I suppose I should be more clear.  This is really a long term question, 
 I was just looking for some kind of answer now because I don't want to code 
 myself into a corner in the short term.  Currently I can make requests 
 outside of a flask request context by using the app.test_request_context() 
 context manager, and it seems to do the right thing.   
  
  In the long term I'm looking for 2 (maybe 3) things that I already get 
 from Flask-SQLAlchemy: 
  1) Session teardown for every request (looks like that is done with a 
 call to session.remove() in the request teardown) 

 you can use the Session provided by flask-sqlalchemy, which has the nice 
 quality that it aligns itself with the current request. 

 He can make that feature more open-ended though.   I should be able to say 
 flask_sqlalchemy.request_scope(some_sessionmaker) to set that up with any 
 sessionmaker of my choosing. 


  2) Debug query tracking for use with Flask-DebugToolbar  (Plus 
 compatibility with other plug-ins that may expect Flask-SQLAlchemy) 

 the logic i see in flask-sqlalchemy related to debug tracking has no 
 connection to the db.Model class at all.   its just some connection 
 events which are ultimately established via the SQLAlchemy class.  Your 
 existing non-flask SQLA models will participate in the Session/Engine used 
 by Flask without impacting this functionaltiy. 


  3) The Model.query behaviour (it's nice, but I could live without it, 
 since its really just syntactic) 

 scoped_session has a query_property available, so you can pull that from 
 Flask's scoped_session using SQLAlchemy public APIs like this: 

 sa = SQLAlchemy(db) 

 # API: 
 http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#sqlalchemy.orm.scoping.ScopedSession.query_property
  
 Base.query = sa.session.query_property 


 or to get exactly flask's, which appears to add three methods get_or_404, 
 first_or_404, paginate: 

 Base.query = flask_sqlalchemy._QueryProperty(sa) 

  
  
  Didn't say this explicitly; for now I will do what you say and forge 
 ahead with things.  I think I see the path, but I'll make sure to let you 
 (the list) know if I run into trouble. 

 good luck ! 



Slugged it out today and got this working, hooray!  Thanks again for your 
help Mike (and for the time you probably put in to parse the 
Flask-SQLAlchemy code).  If you are at PyCon this year I WILL find you and 
I WILL buy you beer, unless you don't drink, in which case I WILL buy you 
soda or coffee.

I haven't done the Base.query part, and I may never do it (more below), but 
everything else works great and all my tests pass after switching to the 
new method.

The more I use the new system the more I wish I would have started with it. 
 Perhaps I can get it documented as an option, because I find it makes it 
far more clear where the models belong in the grand scheme of things.  Now, 
not everyone has 93 tables, a boat-load of relationships and requirements 
for doing things outside of HTTP like I do, so I can understand why it's 
been done the way that it's been done, but having to pass the db instance 
into all of my model definitions (and the resulting project structure 
issues I had) just wasn't worth it.  I've also found that having to use the 
session directly makes it far more clear which session is being used, and 
how.  Not to mention the benefits from being able to decouple my models 
from Flask completely.

So, in the name of Google search completeness,  here is the solution that I 
ultimately ended up with, adapted for the simple User model from above, for 
those like me who want/need it.  (It's quite simple, and I'm amazed that it 
hadn't occurred to me to try it like this)

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'
db = SQLAlchemy(app)

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True)
    email = Column(String(120), unique=True)

    def __init__(self, name=None, email=None):
        self.name = name
        self.email = email

    def __repr__(self):
        return '<User %r>' % (self.name)

@app.before_first_request
def setup():
    # Recreate database each time for demo
    Base.metadata.drop_all(bind=db.engine)
    Base.metadata.create_all(bind=db.engine)
    db.session.add(User('Bob Jones', 'b...@gmail.com'))
    db.session.add(User('Joe Quimby', 'e...@joes.com'))
    db.session.commit()

@app.route('/')
def root():
    users = db.session.query(User).all()
    return u'<br>'.join([u'{0}: {1}'.format(user.name, user.email)
                         for user in users])

if __name__ == '__main__':
    app.run('127.0.0.1', 5000)
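
And since the whole point was reusing the models outside Flask, the same Base 
and User can be driven from a plain worker script with an ordinary 
sessionmaker.  A minimal sketch, assuming the declarative pieces above live 
in an importable models module (the module name is a placeholder):

# worker.py
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from models import Base, User   # placeholder module holding the code above

engine = create_engine('sqlite:////tmp/test.db')
Session = sessionmaker(bind=engine)

def list_users():
    session = Session()
    try:
        return [(u.name, u.email) for u in session.query(User).all()]
    finally:
        session.close()

if __name__ == '__main__':
    print list_users()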

[sqlalchemy] SQLAlchemy with Flask -- hybrid models?

2012-08-22 Thread David McKeone
I've been using SQLAlchemy with Flask via the Flask extension 
Flask-SQLAlchemy. Everything works great so far, but I foresee a potential 
problem once I start to use my database model outside of Flask.  In the 
future I'd like to be able to use my models with non-Flask SQLAlchemy (a 
worker daemon process or with a PySide interface).  "Well, just use standard 
SQLAlchemy," you may say, and forgo the use of the extension.  That 
was my first thought, but sadly some useful extensions (notably 
Flask-DebugToolbar) seem to like using the extension version and it is nice 
to be able to have Flask manage the database sessions in the way that it 
does.  I'd like to not throw the baby out with the bath water.

I realize that this is somewhat specific to Flask, but is there a way that 
I could do both?  Can I create models with standard SQLAlchemy declarative 
and then somehow inject them into Flask-SQLAlchemy's way of doing things?

If it helps with the solution, I don't need to use any of the Model 
specific methods that Flask-SQLAlchemy provides (get_or_404, paginate, 
etc..) and I also specify a __tablename__ for all of my models, so I don't 
rely on Flask-SQLAlchemy generating that for me.

I took a look at the source of Flask-SQLAlchemy 
(https://github.com/mitsuhiko/flask-sqlalchemy/blob/master/flask_sqlalchemy.py) 
 and from what I can tell it seems that it's using Flask's signalling 
capabilities by customizing SQLAlchemy's session and mapper, but that is 
where my understanding ends (I'm still new to this whole stack, Python, 
Flask, SQLAlchemy) and I could use some pointers for how to proceed.


To visualize what I'm talking about, here are the two types of models. 
 A basic Flask-SQLAlchemy model looks like 
(http://packages.python.org/Flask-SQLAlchemy/quickstart.html#a-minimal-application):

from flask import Flask 
from flask.ext.sqlalchemy import SQLAlchemy 

app = Flask(__name__) 
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'

db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)
    email = db.Column(db.String(120), unique=True)

    def __init__(self, username, email):
        self.username = username
        self.email = email

    def __repr__(self):
        return '<User %r>' % self.username


Note the db.Model, db.Integer and db dot everything.

The plain declarative SQLAlchemy equivalent would be 
(http://flask.pocoo.org/docs/patterns/sqlalchemy/):


from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:////tmp/test.db', convert_unicode=True)
db_session = scoped_session(sessionmaker(autocommit=False,
                                         autoflush=False, bind=engine))
Base = declarative_base()
Base.query = db_session.query_property()


class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True)
    email = Column(String(120), unique=True)

    def __init__(self, name=None, email=None):
        self.name = name
        self.email = email

    def __repr__(self):
        return '<User %r>' % (self.name)



Thanks for your help!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/fyqvIAUBbcsJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] SQLAlchemy with Flask -- hybrid models?

2012-08-22 Thread David McKeone


On Wednesday, August 22, 2012 2:23:28 PM UTC+1, Simon King wrote:

 On Wed, Aug 22, 2012 at 12:51 PM, David McKeone 
 davidm...@gmail.comjavascript: 
 wrote: 
  I've been using SQLAlchemy with Flask via the Flask extension 
  Flask-SQLAlchemy. Everything works great so far, but I foresee a 
 potential 
  problem once I start to use my database model outside of Flask.  In the 
  future I'd like to be able to use my models with non-Flask SQLAlchemy (a 
  worker daemon process or with a PySide interface).  Well just use 
 standard 
  SQLAlchemy, you may say,  and fore-go the use of the extension.  That 
 was 
  my first thought, but sadly some useful extensions (notably 
  Flask-DebugToolbar) seem to like using the extension version and it is 
 nice 
  to be able to have Flask manage the database sessions in the way that it 
  does.  I'd like to not throw the baby out with the bath water. 
  
  I realize that this is somewhat specific to Flask, but is there a way 
 that I 
  could do both?  Can I create models with standard SQLAlchemy declarative 
 and 
  then somehow inject them into Flask-SQLAlchemy's way of doing things? 
  
  If it helps with the solution, I don't need to use any of the Model 
 specific 
  methods that Flask-SQLAlchemy provides (get_or_404, paginate, etc..) and 
 I 
  also specify a __tablename__ for all of my models, so I don't rely on 
  Flask-SQLAlchemy generating that for me. 
  
  I took a look at the source of Flask-SQLAlchemy 
  (
 https://github.com/mitsuhiko/flask-sqlalchemy/blob/master/flask_sqlalchemy.py)
  

  and from what I can tell it seems that it's using Flask's signalling 
  capabilities by customizing SQLAlchemy's session and mapper, but that is 
  where my understanding ends (I'm still new to this whole stack, Python, 
  Flask, SQLAlchemy) and I could use some pointers for how to proceed. 
  
  
  To visualize what I'm talking about, here are the two types of models. 
  A 
  basic Flask-SQLAlchemy model looks like 
  (
 http://packages.python.org/Flask-SQLAlchemy/quickstart.html#a-minimal-application):
  

  
  from flask import Flask 
  from flask.ext.sqlalchemy import SQLAlchemy 
  
  app = Flask(__name__) 
  app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:tmp/test.db' 
  
  db = SQLAlchemy(app) 
  
  class User(db.Model): 
  id = db.Column(db.Integer, primary_key=True) 
  username = db.Column(db.String(80), unique=True) 
  email = db.Column(db.String(120), unique=True) 
  
  def __init__(self, username, email): 
   self.username = username 
  
 self.email = email 
  
  
  def __repr__(self): 
  return 'User %r' % self.username 
  
  
  Note the db.Model, db.Integer and db dot everything. 
  
  The plain declarative SQLAlchemy equivalent would be 
  (http://flask.pocoo.org/docs/patterns/sqlalchemy/): 
  
  
  from sqlalchemy import Column Integer, String, create_engine 
  from sqlalchemy.orm import scoped_session, sessionmaker 
  from sqlalchemy.ext.declarative import declarative_base 
  
  engine = create_engine('sqlite:tmp/test.db', convert_unicode=True) 
  db_session = scoped_session(sessionmaker(autocommit=False, 
 autoflush=False, 
  bind=engine)) 
  Base = declarative_base() 
  Base.query = db_session.query_property() 
  
  
  class User(Base): 
   __tablename__ = 'users' 
  
   id = Column(Integer, primary_key=True) 
   name = Column(String(50), unique=True) 
   email = Column(String(120), unique=True) 
  
   def __init__(self, name=None, email=None): 
   self.name = name 
   self.email = email 
  
   def __repr__(self): 
   return 'User %r' % (self.name) 
  

 From a very quick read of the Flask-SQLAlchemy docs, I would have 
 thought you could just use your flask-based classes in non-Flask-based 
 apps without any issue. The quickstart guide that you referenced above 
 illustrates a command-line session using them, so there's no reason 
 why you couldn't do the same in a worker process. 

 Have you already tried it and found that it doesn't work? 

 Simon 



Perhaps I've missed the important bit, but my understanding is that there 
are two ways to do it:

1) Use SQLAlchemy with manual session control (
http://flask.pocoo.org/docs/patterns/sqlalchemy/)
2) Use SQLAlchemy with the Flask-SQLAlchemy extension which does session 
control for you, but requires you to use its own declarative base class, 
db.Model  (
http://packages.python.org/Flask-SQLAlchemy/quickstart.html#a-minimal-application
)

The problem is that other extensions give you additional features if you 
use solution #2, but not if you use solution #1, because solution #1 
implies that you are going to do everything yourself manually.  I'm fine 
with the way solution #2 handles the sessions and everything else, I just 
want to be able to use my own declarative base so that the models are 
more-or-less independent of the app that they are being used in.

-- 
You received this message

Re: [sqlalchemy] SQLAlchemy with Flask -- hybrid models?

2012-08-22 Thread David McKeone


On Wednesday, August 22, 2012 2:33:01 PM UTC+1, David McKeone wrote:



 On Wednesday, August 22, 2012 2:23:28 PM UTC+1, Simon King wrote:

 On Wed, Aug 22, 2012 at 12:51 PM, David McKeone davidm...@gmail.com 
 wrote: 
  I've been using SQLAlchemy with Flask via the Flask extension 
  Flask-SQLAlchemy. Everything works great so far, but I foresee a 
 potential 
  problem once I start to use my database model outside of Flask.  In the 
  future I'd like to be able to use my models with non-Flask SQLAlchemy 
 (a 
  worker daemon process or with a PySide interface).  Well just use 
 standard 
  SQLAlchemy, you may say,  and fore-go the use of the extension. 
  That was 
  my first thought, but sadly some useful extensions (notably 
  Flask-DebugToolbar) seem to like using the extension version and it is 
 nice 
  to be able to have Flask manage the database sessions in the way that 
 it 
  does.  I'd like to not throw the baby out with the bath water. 
  
  I realize that this is somewhat specific to Flask, but is there a way 
 that I 
  could do both?  Can I create models with standard SQLAlchemy 
 declarative and 
  then somehow inject them into Flask-SQLAlchemy's way of doing things? 
  
  If it helps with the solution, I don't need to use any of the Model 
 specific 
  methods that Flask-SQLAlchemy provides (get_or_404, paginate, etc..) 
 and I 
  also specify a __tablename__ for all of my models, so I don't rely on 
  Flask-SQLAlchemy generating that for me. 
  
  I took a look at the source of Flask-SQLAlchemy 
  (
 https://github.com/mitsuhiko/flask-sqlalchemy/blob/master/flask_sqlalchemy.py)
  

  and from what I can tell it seems that it's using Flask's signalling 
  capabilities by customizing SQLAlchemy's session and mapper, but that 
 is 
  where my understanding ends (I'm still new to this whole stack, Python, 
  Flask, SQLAlchemy) and I could use some pointers for how to proceed. 
  
  
  To visualize what I'm talking about, here are the two types of models. 
  A 
  basic Flask-SQLAlchemy model looks like 
  (
 http://packages.python.org/Flask-SQLAlchemy/quickstart.html#a-minimal-application):
  

  
  from flask import Flask 
  from flask.ext.sqlalchemy import SQLAlchemy 
  
  app = Flask(__name__) 
  app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:tmp/test.db' 
  
  db = SQLAlchemy(app) 
  
  class User(db.Model): 
  id = db.Column(db.Integer, primary_key=True) 
  username = db.Column(db.String(80), unique=True) 
  email = db.Column(db.String(120), unique=True) 
  
  def __init__(self, username, email): 
   self.username = username 
  
 self.email = email 
  
  
  def __repr__(self): 
  return 'User %r' % self.username 
  
  
  Note the db.Model, db.Integer and db dot everything. 
  
  The plain declarative SQLAlchemy equivalent would be 
  (http://flask.pocoo.org/docs/patterns/sqlalchemy/): 
  
  
  from sqlalchemy import Column Integer, String, create_engine 
  from sqlalchemy.orm import scoped_session, sessionmaker 
  from sqlalchemy.ext.declarative import declarative_base 
  
  engine = create_engine('sqlite:tmp/test.db', convert_unicode=True) 
  db_session = scoped_session(sessionmaker(autocommit=False, 
 autoflush=False, 
  bind=engine)) 
  Base = declarative_base() 
  Base.query = db_session.query_property() 
  
  
  class User(Base): 
   __tablename__ = 'users' 
  
   id = Column(Integer, primary_key=True) 
   name = Column(String(50), unique=True) 
   email = Column(String(120), unique=True) 
  
   def __init__(self, name=None, email=None): 
   self.name = name 
   self.email = email 
  
   def __repr__(self): 
   return 'User %r' % (self.name) 
  

 From a very quick read of the Flask-SQLAlchemy docs, I would have 
 thought you could just use your flask-based classes in non-Flask-based 
 apps without any issue. The quickstart guide that you referenced above 
 illustrates a command-line session using them, so there's no reason 
 why you couldn't do the same in a worker process. 

 Have you already tried it and found that it doesn't work? 

 Simon 



 Perhaps I've missed the important bit, but my understanding is that there 
 are two ways to do it:

 1) Use SQLAlchemy with manual session control (
 http://flask.pocoo.org/docs/patterns/sqlalchemy/)
 2) Use SQLAlchemy with the Flask-SQLAlchemy extension which does session 
 control for you, but requires you to use it's own declarative base class, 
 db.Model  (
 http://packages.python.org/Flask-SQLAlchemy/quickstart.html#a-minimal-application
 )

 The problem is that other extensions give you additional features if you 
 use solution #2, but not if you use solution #1, because solution #1 
 implies that you are going to do everything yourself manually.  I'm fine 
 with the way solution #2 handles the sessions and everything else, I just 
 want to be able to use my own declarative base so that the models are 
 more-or-less

Re: [sqlalchemy] SQLAlchemy with Flask -- hybrid models?

2012-08-22 Thread David McKeone
On Wednesday, August 22, 2012 3:12:02 PM UTC+1, Simon King wrote:

 On Wed, Aug 22, 2012 at 2:44 PM, David McKeone 
 davidm...@gmail.comjavascript: 
 wrote: 
  
  
  On Wednesday, August 22, 2012 2:33:01 PM UTC+1, David McKeone wrote: 
  
  
  
  On Wednesday, August 22, 2012 2:23:28 PM UTC+1, Simon King wrote: 
  
  On Wed, Aug 22, 2012 at 12:51 PM, David McKeone davidm...@gmail.com 
  wrote: 
   I've been using SQLAlchemy with Flask via the Flask extension 
   Flask-SQLAlchemy. Everything works great so far, but I foresee a 
   potential 
   problem once I start to use my database model outside of Flask.  In 
 the 
   future I'd like to be able to use my models with non-Flask 
 SQLAlchemy 
   (a 
   worker daemon process or with a PySide interface).  Well just use 
   standard 
   SQLAlchemy, you may say,  and fore-go the use of the extension. 
   That was 
   my first thought, but sadly some useful extensions (notably 
   Flask-DebugToolbar) seem to like using the extension version and it 
 is 
   nice 
   to be able to have Flask manage the database sessions in the way 
 that 
   it 
   does.  I'd like to not throw the baby out with the bath water. 
   
   I realize that this is somewhat specific to Flask, but is there a 
 way 
   that I 
   could do both?  Can I create models with standard SQLAlchemy 
   declarative and 
   then somehow inject them into Flask-SQLAlchemy's way of doing 
 things? 
   
   If it helps with the solution, I don't need to use any of the Model 
   specific 
   methods that Flask-SQLAlchemy provides (get_or_404, paginate, etc..) 
   and I 
   also specify a __tablename__ for all of my models, so I don't rely 
 on 
   Flask-SQLAlchemy generating that for me. 
   
   I took a look at the source of Flask-SQLAlchemy 
   
   (
 https://github.com/mitsuhiko/flask-sqlalchemy/blob/master/flask_sqlalchemy.py)
  

   and from what I can tell it seems that it's using Flask's signalling 
   capabilities by customizing SQLAlchemy's session and mapper, but 
 that 
   is 
   where my understanding ends (I'm still new to this whole stack, 
 Python, 
   Flask, SQLAlchemy) and I could use some pointers for how to proceed. 
   
   
   To visualize what I'm talking about, here are the two types of 
 models. 
   A 
   basic Flask-SQLAlchemy model looks like 
   
   (
 http://packages.python.org/Flask-SQLAlchemy/quickstart.html#a-minimal-application):
  

   
   from flask import Flask 
   from flask.ext.sqlalchemy import SQLAlchemy 
   
   app = Flask(__name__) 
   app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:tmp/test.db' 
   
   db = SQLAlchemy(app) 
   
   class User(db.Model): 
   id = db.Column(db.Integer, primary_key=True) 
   username = db.Column(db.String(80), unique=True) 
   email = db.Column(db.String(120), unique=True) 
   
   def __init__(self, username, email): 
self.username = username 
   
  self.email = email 
   
   
   def __repr__(self): 
   return 'User %r' % self.username 
   
   
   Note the db.Model, db.Integer and db dot everything. 
   
   The plain declarative SQLAlchemy equivalent would be 
   (http://flask.pocoo.org/docs/patterns/sqlalchemy/): 
   
   
   from sqlalchemy import Column Integer, String, create_engine 
   from sqlalchemy.orm import scoped_session, sessionmaker 
   from sqlalchemy.ext.declarative import declarative_base 
   
   engine = create_engine('sqlite:tmp/test.db', 
 convert_unicode=True) 
   db_session = scoped_session(sessionmaker(autocommit=False, 
   autoflush=False, 
   bind=engine)) 
   Base = declarative_base() 
   Base.query = db_session.query_property() 
   
   
   class User(Base): 
__tablename__ = 'users' 
   
id = Column(Integer, primary_key=True) 
name = Column(String(50), unique=True) 
email = Column(String(120), unique=True) 
   
def __init__(self, name=None, email=None): 
self.name = name 
self.email = email 
   
def __repr__(self): 
return 'User %r' % (self.name) 
   
  
  From a very quick read of the Flask-SQLAlchemy docs, I would have 
  thought you could just use your flask-based classes in non-Flask-based 
  apps without any issue. The quickstart guide that you referenced above 
  illustrates a command-line session using them, so there's no reason 
  why you couldn't do the same in a worker process. 
  
  Have you already tried it and found that it doesn't work? 
  
  Simon 
  
  
  
  Perhaps I've missed the important bit, but my understanding is that 
 there 
  are two ways to do it: 
  
  1) Use SQLAlchemy with manual session control 
  (http://flask.pocoo.org/docs/patterns/sqlalchemy/) 
  2) Use SQLAlchemy with the Flask-SQLAlchemy extension which does 
 session 
  control for you, but requires you to use it's own declarative base 
 class, 
  db.Model 
  (
 http://packages.python.org/Flask-SQLAlchemy/quickstart.html#a-minimal-application)
  

  
  The problem


Re: [sqlalchemy] SQLAlchemy with Flask -- hybrid models?

2012-08-22 Thread David McKeone


On Wednesday, August 22, 2012 5:36:22 PM UTC+1, Michael Bayer wrote:


 On Aug 22, 2012, at 7:51 AM, David McKeone wrote:

 I've been using SQLAlchemy with Flask via the Flask extension 
 Flask-SQLAlchemy. Everything works great so far, but I foresee a potential 
 problem once I start to use my database model outside of Flask.  In the 
 future I'd like to be able to use my models with non-Flask SQLAlchemy (a 
 worker daemon process or with a PySide interface).  Well just use standard 
 SQLAlchemy, you may say,  and fore-go the use of the extension.  That 
 was my first thought, but sadly some useful extensions (notably 
 Flask-DebugToolbar) seem to like using the extension version and it is nice 
 to be able to have Flask manage the database sessions in the way that it 
 does.  I'd like to not throw the baby out with the bath water.

 I realize that this is somewhat specific to Flask, but is there a way that 
 I could do both?  Can I create models with standard SQLAlchemy declarative 
 and then somehow inject them into Flask-SQLAlchemy's way of doing things?


 If it helps with the solution, I don't need to use any of the Model 
 specific methods that Flask-SQLAlchemy provides (get_or_404, paginate, 
 etc..) and I also specify a __tablename__ for all of my models, so I don't 
 rely on Flask-SQLAlchemy generating that for me.

 I took a look at the source of Flask-SQLAlchemy (
 https://github.com/mitsuhiko/flask-sqlalchemy/blob/master/flask_sqlalchemy.py)
  
  and from what I can tell it seems that it's using Flask's signalling 
 capabilities by customizing SQLAlchemy's session and mapper, but that is 
 where my understanding ends (I'm still new to this whole stack, Python, 
 Flask, SQLAlchemy) and I could use some pointers for how to proceed.



 There's no reason I can see in the source that flask-sqlalchemy would get 
 in the way of entirely plain SQLAlchemy mapped objects.   At the end of the 
 day, a class that extends Flask's db.model is just a mapped class, just 
 like a non-flask class.  Both kinds of classes are freely usable with any 
 SQLAlchemy Session, including the Session that Flask-SQLA provides.It's 
 important to note the distinction between mapper configuration, which has 
 to do with class structure, and session configuration, which only deals 
 with instances of objects.  These two processes work together at a core 
 level that various extensions only ride on top of, unless those extensions 
 define additional dependencies above that level.   Flask-sqlalchemy appears 
 only to define one very trivial such dependency which is some coordination 
 to enable the before_models_committed and models_committed hooks, which 
 themselves are just for end-user convenience (
 http://packages.python.org/Flask-SQLAlchemy/signals.html).   

 The Flask-SQLA approach is really just assigning event listeners to 
 sessions and mappers.   It's doing so in a way that is a bit brittle, but 
 also this system precedes SQLAlchemy's 0.7 event model.   Armin's immediate 
 goal with flask-sqlalchemy is to migrate the extension to use the new event 
 model, which would actually remove the need for the styles of registration 
 I see here as the new system allows registration of event listeners on all 
 sessions/mappers non-intrusively.

 There's also a custom Query class in use here, though it doesn't seem to 
 be consistently integrated with the Session, but using custom Query classes 
 like this as well as adding the MyClass.query hook is a widely used 
 pattern.

 So if you were to use plain SQLAlchemy models with flask-SQLA out of the 
 box, these particular events wouldn't fire off as much, unless you also set 
 up the flask_sqlalchemy._SignalTrackingMapperExtension with your normal 
 mappers.   

 I think if you just tried using regular models with flask models, and 
 didn't rely on those two particular signals, you'd see everything pretty 
 much works without any issue.


Thanks for your great response Mike.  

Forgive my ignorance, but I don't understand enough of the underpinnings to 
get my first steps out of this (the downside of starting with something 
that gives you stuff for free, I suppose).  I'm definitely going to walk 
through what you've said and reference it against the documentation, but 
while your mind is fresh on the topic, I was wondering if you could just 
clarify how I might convert a standard model object into a flask-sqlalchemy 
model object.

Using the two examples above, would I just take my User(Base) class and 
then assign it properties from the Flask db object?  Something like: 

User.session = db.session()  
User.engine = db.engine()

I know those properties don't actually exist on User, but is that the kind 
of thing I should be looking to do?  Move certain pieces into all of the 
models?  or is there something higher level going on that will do this for 
me in some way? something else?

... but also this system precedes SQLAlchemy's 0.7 event model.   Armin's

Re: [sqlalchemy] SQLAlchemy with Flask -- hybrid models?

2012-08-22 Thread David McKeone


On Wednesday, August 22, 2012 9:03:55 PM UTC+1, Michael Bayer wrote:


 On Aug 22, 2012, at 3:51 PM, David McKeone wrote:


  I was wondering if you could just clarify how I might convert a standard 
 model object into a flask-sqlalchemy model object.


 why do you need to do this ?What flask-sqlalchemy-specific features 
 would you hope for the model objects to have ?   There's pretty much two I 
 can see - one is the MyModel.query attribute, and the other are the events. 
  Neither are necessary, though you might want the usage of one or the 
 other.   Otherwise no conversion is needed.


 Using the two examples above, would I just take my User(Base) class and 
 then assign it property out of the Flask db properties?  Something like: 

 User.session = db.session()  
 User.engine = db.engine()

 I know those properties don't actually exist on User, but is that the kind 
 of thing I should be looking to do?  Move certain pieces into all of the 
 models?  or is there something higher level going on that will do this for 
 me in some way? something else?


 I don't see why engine or session would be stuck onto the model class like 
 that,  I don't see that usage here: 
 http://packages.python.org/Flask-SQLAlchemy/quickstart.html#a-minimal-application.
  
  Just the query attribute.  User.query.all().  This is equivalent to 
 session.query(User).all().

 The User.session / User.engine pattern suggests that a model class acts 
 as a registry for how to get at a live database connection.  That pattern 
 is one that SQLAlchemy specifically discourages.   In the Flask docs, 
 you'll see that the handle to the state of a transaction is db.session. 
  The objects like instances of User are then contained within that.


  This seems like it would be a really good way to accomplish what I'm 
 looking for and to move things forward as well.  Once I read up on the 
 requisite knowledge I may end up making an attempt at making this over the 
 next little while.  I had a drink with Armin last week and I'm not sure if 
 his current stuff points him in this direction (of course you'd have to ask 
 him for the real answer on that), but I certainly have a vested interest, 
 so maybe I can do some of the grunt work.


 its really quite a small amount of effort.   I think for now the strategy 
 on your end should be to forge ahead with what's there, and if there's some 
 very specific thing that doesn't work as you expect, I can show you how to 
 get the behavior you're looking for.



I suppose I should be more clear.  This is really a long term question, I 
was just looking for some kind of answer now because I don't want to code 
myself into a corner in the short term.  Currently I can make requests 
outside of a flask request context by using the app.test_request_context() 
context manager, and it seems to do the right thing.   

In the long term I'm looking for 2 (maybe 3) things that I already get from 
Flask-SQLAlchemy:
1) Session teardown for every request (looks like that is done with a call 
to session.remove() in the request teardown)
2) Debug query tracking for use with Flask-DebugToolbar  (Plus 
compatibility with other plug-ins that may expect Flask-SQLAlchemy)

- This is really the difficult one.  In this case I think it just needs an 
attribute on the Flask app called sqlalchemy_queries which contains a tuple 
of queries.  This is where I was thinking it may be better to assist with 
updating the plugin to just play nice with SQLAlchemy's default behaviours. 
 Maybe I'm wrong?

3) The Model.query behaviour (it's nice, but I could live without it, since 
it's really just syntactic; a rough sketch of hand-wiring this and 1) follows 
below)
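
Purely as an illustration (the module names are my own placeholders, and this 
is hand-wiring rather than anything Flask-SQLAlchemy does for you), points 1) 
and 3) could be set up against the plain scoped_session / declarative Base 
from the earlier example roughly like this:

from myapp import app                          # placeholder: the Flask app
from myapp.database import Base, db_session    # placeholder: plain SQLA setup

# 3) Model.query as pure syntax sugar over db_session.query(Model)
Base.query = db_session.query_property()

# 1) per-request session teardown, the same call Flask-SQLAlchemy makes
@app.teardown_appcontext
def remove_session(exception=None):
    db_session.remove()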

I think it'll make my code simpler/better for two reasons:
1) My models will be SQLAlchemy, not a Flask specific dialect with db dot 
everywhere.  This will make the SQLAlchemy documentation more obvious to 
programmers that use the code.
2) The db.Model method seems to make it very easy to create circular import 
errors (as seen in this issue: 
https://github.com/mitsuhiko/flask-sqlalchemy/issues/97 and experienced by 
myself).  It would be really nice if I could isolate the models and model 
mapping behaviours from the rest of the application to avoid that kind of 
thing.

The User.session / User.engine pattern suggests that a model class acts 
 as a registry for how to get at a live database connection.  That pattern 
 is one that SQLAlchemy specifically discourages.   In the Flask docs, 
 you'll see that the handle to the state of a transaction is db.session. 
  The objects like instances of User are then contained within that.


This was really just my lack of understanding of how the database 
connection is connected to the models, rather than stating how I wanted it 
to be done.  Thinking about it now, and reading your comment, I realize 
that sessions are connected to the models when they are passed in, like in 
session.query(User).first(), and Model.query is really just a class 
instance that holds onto the session

Re: [sqlalchemy] SQLAlchemy with Flask -- hybrid models?

2012-08-22 Thread David McKeone


On Wednesday, August 22, 2012 10:21:59 PM UTC+1, David McKeone wrote:



 On Wednesday, August 22, 2012 9:03:55 PM UTC+1, Michael Bayer wrote:


 On Aug 22, 2012, at 3:51 PM, David McKeone wrote:


  I was wondering if you could just clarify how I might convert a standard 
 model object into a flask-sqlalchemy model object.


 why do you need to do this ?What flask-sqlalchemy-specific features 
 would you hope for the model objects to have ?   There's pretty much two I 
 can see - one is the MyModel.query attribute, and the other are the events. 
  Neither are necessary, though you might want the usage of one or the 
 other.   Otherwise no conversion is needed.


 Using the two examples above, would I just take my User(Base) class and 
 then assign it property out of the Flask db properties?  Something like: 

 User.session = db.session()  
 User.engine = db.engine()

 I know those properties don't actually exist on User, but is that the 
 kind of thing I should be looking to do?  Move certain pieces into all of 
 the models?  or is there something higher level going on that will do this 
 for me in some way? something else?


 I don't see why engine or session would be stuck onto the model class 
 like that,  I don't see that usage here: 
 http://packages.python.org/Flask-SQLAlchemy/quickstart.html#a-minimal-application.
  
  Just the query attribute.  User.query.all().  This is equivalent to 
 session.query(User).all().

 The User.session / User.engine pattern suggests that a model class acts 
 as a registry for how to get at a live database connection.  That pattern 
 is one that SQLAlchemy specifically discourages.   In the Flask docs, 
 you'll see that the handle to the state of a transaction is db.session. 
  The objects like instances of User are then contained within that.


  This seems like it would be a really good way to accomplish what I'm 
 looking for and to move things forward as well.  Once I read up on the 
 requisite knowledge I may end up making an attempt at making this over the 
 next little while.  I had a drink with Armin last week and I'm not sure if 
 his current stuff points him in this direction (of course you'd have to ask 
 him for the real answer on that), but I certainly have a vested interest, 
 so maybe I can do some of the grunt work.


 its really quite a small amount of effort.   I think for now the strategy 
 on your end should be to forge ahead with what's there, and if there's some 
 very specific thing that doesn't work as you expect, I can show you how to 
 get the behavior you're looking for.



 I suppose I should be more clear.  This is really a long term question, I 
 was just looking for some kind of answer now because I don't want to code 
 myself into a corner in the short term.  Currently I can make requests 
 outside of a flask request context by using the app.test_request_context() 
 context manager, and it seems to do the right thing.   

 In the long term I'm looking for 2 (maybe 3) things that I already get 
 from Flask-SQLAlchemy:
 1) Session teardown for every request (looks like that is done with a call 
 to session.remove() in the request teardown)
 2) Debug query tracking for use with Flask-DebugToolbar  (Plus 
 compatibility with other plug-ins that may expect Flask-SQLAlchemy)

 - This is really the difficult one.  In this case I think it just needs an 
 attribute on the Flask app called sqlalchemy_queries which contains a tuple 
 of queries.  This is where I was thinking it may be better to assist with 
 updating the plugin to just play nice with SQLAlchemy's default behaviours. 
  Maybe I'm wrong?

 3) The Model.query behaviour (it's nice, but I could live without it, 
 since its really just syntactic)

 I think it'll make my code simpler/better for two reasons:
 1) My models will be SQLAlchemy, not a Flask specific dialect with db 
 dot everywhere.  This will make the SQLAlchemy documentation more obvious 
 to programmers that use the code.
 2) The db.Model method seems to make it very easy to create circular 
 import errors (as seen in this issue: 
 https://github.com/mitsuhiko/flask-sqlalchemy/issues/97 and experienced 
 by myself).  It would be really nice if I could isolate the models and 
 model mapping behaviours from the rest of the application to avoid that 
 kind of thing.

 The User.session / User.engine pattern suggests that a model class acts 
 as a registry for how to get at a live database connection.  That pattern 
 is one that SQLAlchemy specifically discourages.   In the Flask docs, 
 you'll see that the handle to the state of a transaction is db.session. 
  The objects like instances of User are then contained within that.


 This was really just my lack of understanding of how the database 
 connection is connected to the models, rather than stating how I wanted it 
 to be done.  Thinking about it now, and reading your comment, I realize 
 that sessions are connected to the models when they are passed in, like

[sqlalchemy] Declarative and deferred

2012-08-11 Thread David McKeone
Short:
---
Is there a way to backfill multiple deferred columns in a declarative object 
result instance in a dynamic way when groups can't be predicted in the model?


Long:

First, let me just say thanks for SQLAlchemy.  This is my first post to this 
list and after working with it for quite a while I've found it to be an 
excellent tool for working with the database.  My previous work (non-Python) 
was done with a database abstraction layer that was more relational and less 
object-oriented and I've found SQLAlchemy to be amazing for letting me have my 
cake (objects) and eat it too (hand-crafted sql optimizations).

Alright, so a few caveats for background:
1) I'm fairly new to Python (~4 months), but not to programming (~10 years)
2) This is the first time I've used an ORM, so my question may be more about 
the object-relational mismatch handling, rather than SQLAlchemy directly. 
3) I'm using SQLAlchemy with Flask's plug-in flask-sqlalchemy.  That may not 
have much do with my question, but just in case there is some subtle difference 
between declarative's base model and Flask's db.Model
4) The current project is to use Flask and SQLAlchemy to create a web site with 
an existing database in a deployed client-server application (96 Tables, can be 
anywhere between ~200MB and 30GB)
5) Assumptions abound... this is a fairly complicated/specific case (I think) 
so there may be underlying assumptions about how I'm doing things that are 
incorrect.  If I'm wrong in those underlying assumptions, then feel free to 
challenge them.
6) SQLAlchemy 0.7.8

Cool.

So, I see that using declarative objects has quite a few advantages; you can 
easily add attributes (columns, relationships, etc...), validators, and methods 
-- all great stuff for keeping things logically grouped. Then when you get to 
performance optimizations there is a significant benefit with larger models to 
not fetch all the columns for every request (this is a web app after all, so 
lower response times are a goal).  Great, so deferred looks like the ticket to 
be able to handle this particular mis-match in a good enough way.  I can defer 
any non-essential columns and if I need one or two other columns down the line 
then they'll be lazy-loaded as required. 

Contrived example:

class User(db.Model, HelperMixin):
    __tablename__ = 'user'

    id = db.Column(db.Integer, primary_key=True)
    password = db.Column(db.String)
    type = db.Column(db.Integer)
    first_name = db.Column(db.String)
    last_name = db.Column(db.String)
    title = db.Column(db.String)
    birthday = db.Column(db.Date)
    height = db.Column(db.Numeric)
    width = db.Column(db.Numeric)
    # etc...

    def is_valid(self, check_password):
        # check password in a horribly insecure, but easy way
        return True if check_password == self.password else False

So with this model I want to validate a user's password on login, but not load 
all the other unnecessary stuff, because login probably doesn't need all the 
rest of those columns.  Because I also want to keep things simple on the model, 
I don't use deferred directly, but rather I created a couple helper methods in 
a mixin.   (Note that other parts of the application may need more columns or 
less columns or different columns, depending on context, so putting deferreds 
directly in the model would also be impractical)

The mixin looks like this:

from sqlalchemy.orm import defer
from sqlalchemy.orm.properties import ColumnProperty
from sqlalchemy.orm.util import class_mapper

class HelperMixin(object):

    @classmethod
    def itercolumns(cls):
        for prop in class_mapper(cls).iterate_properties:
            if isinstance(prop, ColumnProperty):
                yield prop.key

    @classmethod
    def get_deferred_except(cls, *attributes):
        attribute_set = set(attributes)

        ret = list()
        for name in cls.itercolumns():
            if name not in attribute_set:
                ret.append(defer(name))

        return ret

so with this helper I can context sensitively build up a result object with 
just the stuff I need (but without losing the benefits of the associated 
methods):

# Get list of defer() instances for all columns but those specified
deferred = User.get_deferred_except('id', 'password')
user = User.query.options(*deferred).first()

# SQL emitted -- SELECT id, password FROM user

if user.is_valid(the_password):
    # Valid stuff
else:
    # Invalid stuff

Ok, well that worked great, but now I need to get the patron's name for some 
runtime specific reason.  So I do this:

full_name = ' '.join([user.title, user.first_name, user.last_name])

I now emit:

SELECT title FROM user
SELECT first_name FROM user
SELECT last_name FROM user

When what I really want at this point, and can predictably know in this case, 
is:

SELECT title, first_name, last_name FROM user

So, the question is, what is the best way to back-fill an object in a way that 
you keep the number 

Re: [sqlalchemy] Declarative and deferred

2012-08-11 Thread David McKeone
 

 session.refresh(user, ['title', 'first_name', 'last_name']) 


This was the part that I was missing.  It's fairly readable and it does 
exactly what I'd need.
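
To spell that out for the archives, a minimal sketch of the refresh-based 
back-fill (assuming the User model and get_deferred_except() helper from the 
original post, and an ordinary session in hand):

# load only what the login path needs
deferred_opts = User.get_deferred_except('id', 'password')
user = session.query(User).options(*deferred_opts).first()
# SQL emitted: SELECT id, password FROM user

# back-fill the three name columns in a single round trip
session.refresh(user, ['title', 'first_name', 'last_name'])
# SQL emitted: roughly SELECT title, first_name, last_name FROM user WHERE id = :id

full_name = ' '.join([user.title, user.first_name, user.last_name])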
 


 also, if the columns you're actually using are along these lines, that is, 
 they aren't 10K text files, I'd strongly encourage you to do some actual 
 profiling to determine if all this complexity is necessary and not actually 
 hurting performance much more.To pull over 3K of text over the wire in 
 a single result is probably a lot less overhead than to pull 1000K of text 
 in two or three queries. 


At this point I'm really just exploring the boundaries of the tool so that 
I can select a flexible design.  I still haven't quite found the sweet spot 
between what can/should be lazy and what cannot/shouldn't be.  In the 
existing application (the non-ORM one) all of this is done with an 
abstracted form of direct SQL (kind of like SQLAlchemy Core).  I'd like to 
convert some of those sections to use declarative objects instead, so the 
point of this is to know that if I do go down that path I could still 
optimize the columns if I needed to (read: after I profiled it and 
determined that it was necessary) without having to drop all the way down 
to SQLAlchemy Core and change from passing objects around to passing keys 
in some circumstances.  Although it's very likely that you are correct and 
the complexity induced by using this kind of system may outweigh the 
over-the-wire savings -- I guess we'll see when I get there.


well this is what the group feature does, if any columns in the group are 
 touched, the whole group is loaded.  My advice would be to do some 
 profiling, come up with groups that are tailored close enough to the groups 
 of attributes that tend to be called together, and to not overthink it. 


I will certainly look into this some more, since there are groups of 
columns that naturally belong together.  Plus I imagine that 
session.refresh() would load the entire group if an attribute from a group 
was passed to it, so that could be an interesting way to chunk it.
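
As a rough sketch (the group name and column choices here are purely 
illustrative, not my real model), I imagine those groups would end up 
looking something like:

from sqlalchemy.orm import deferred, undefer_group

class User(db.Model, HelperMixin):
    __tablename__ = 'user'

    id = db.Column(db.Integer, primary_key=True)
    password = db.Column(db.String)

    # these tend to be used together, so defer them as one group
    title = deferred(db.Column(db.String), group='name')
    first_name = deferred(db.Column(db.String), group='name')
    last_name = deferred(db.Column(db.String), group='name')

# touching any one of the 'name' attributes loads the whole group in a
# single SELECT; a query can also undefer the group up front:
user = User.query.options(undefer_group('name')).first()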


Thanks for responding with such a great post.  It's certainly helped me 
think through the issues from a different angle.




[sqlalchemy] Re: how can i generate IS NOT True instead of != True ( orm ) ?

2012-08-10 Thread David Bolen
Jonathan Vanasco jonat...@findmeon.com writes:

 in postgresql i have a boolean field that allows Null values.

 i'd like to query for the items that are not 'true'

 filter( tablename.is_deleted != True )

 creates this sql:
 is_deleted != True

 however this is incorrect and doesn't match the resultset i want.  it

To be fair, it is correct in terms of doing what you asked, though if
you want it to include NULLs I agree it doesn't do what you want...

 needs to read :
 is_deleted IS NOT True

There are 'is' and 'isnot' operators in sqlalchemy.sql.operators, but I'm
not entirely sure how to trigger them from within an expression (and they
don't seem to be column operators, but I'm probably missing something simple).

However, you should be able to use the generic op method, as in:

filter(tablename.is_deleted.op('IS NOT')(True))

Alternatively, you could explicitly manage the handling of NULL
entries:

from sqlalchemy.sql.functions import coalesce

filter(coalesce(tablename.is_deleted, False) != True)

This way you avoid the treatment of NULLs in the comparison entirely.  This
method also extrapolates to other cases more readily since it works with
non-boolean fields as well.
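
Here's a quick self-contained sketch of both approaches (SQLAlchemy 
1.4-style imports; the toy Item table and in-memory SQLite engine are just 
for illustration):

from sqlalchemy import Boolean, Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy.sql.functions import coalesce

Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    is_deleted = Column(Boolean, nullable=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Item(is_deleted=True),
                 Item(is_deleted=False),
                 Item(is_deleted=None)])
session.commit()

# generic operator: renders "is_deleted IS NOT ..." and keeps the NULL row
rows_op = session.query(Item).filter(
    Item.is_deleted.op('IS NOT')(True)).all()

# coalesce: fold NULL into False first, then compare normally
rows_coalesce = session.query(Item).filter(
    coalesce(Item.is_deleted, False) != True).all()

print(len(rows_op), len(rows_coalesce))   # 2 2 -- both include the NULL row

(More recent SQLAlchemy versions also expose this directly as a column 
operator, e.g. Item.is_deleted.isnot(True), which renders IS NOT as well.)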

-- David




Re: [sqlalchemy] Postgres migration issue

2012-08-09 Thread David Moore
- Original Message -

 Hi

 I usually use MySQL to develop on, however I need to work with
 Postgres for the first time today so I fired it up. I have a routine
 which converts a non-SQL database into the database of choice,
 converting its schema into a new table in the target database using
 SQA, and then copies all the data in the source database into the
 new SQL table.

 That all worked fine into the Postgres+pg8000 database. My problem is
 when I then attempt to open up a table again using auto reflection I
 get an error I've never seen before, and I don't get how this can
 be, given the table was created via sqlalchemy? The data looks fine
 in the table, and all columns are created as I expected (converting
 to the correct Postgres column types, etc.).

 Error when I issue t = Table('my_table', meta, autoload=True) is;
 (sorry about the screen shot, I'm working in a bad RDP client and
 can't cut/paste into my Mac. :-(

 So it appears to be having some problem in the reflection, but I
 can't see why - I hope there is a setting in the connection or
 something I can do to fix this up? I've never used Postgres before,
 so I'm groping in the dark.. From Googling around, it appears that
 there is some issue with determining the schema or some such, but
 it's all assuming I know a lot more about Postgres than I do!

 Cheers
 Warwick

Hi Warwick, 

You are using pg8000 1.08 and PostgreSQL >= 9.0. Upgrade to pg8000 1.09; it 
fixes this issue (there are new PostgreSQL types introduced in version 9 which 
pg8000 didn't know about in 1.08 and which were added in 1.09). 
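
For reference, once pg8000 is upgraded the reflection call itself should be 
along these lines (the connection URL, credentials and table name below are 
placeholders):

from sqlalchemy import MetaData, Table, create_engine

# placeholder DSN -- substitute real credentials/host/database
engine = create_engine('postgresql+pg8000://user:secret@localhost/mydb')

meta = MetaData()
# reflect the table created earlier; newer SQLAlchemy prefers
# autoload_with=engine over autoload=True with a bound engine
t = Table('my_table', meta, autoload_with=engine)
print([c.name for c in t.columns])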

regards 
-- 

David Moore 
Senior Software Engineer 
St. James Software 
Email: dav...@sjsoft.com 


attachment: PastedGraphic-1.png
