[sqlalchemy] [alembic] equivalent to `makemigrations --check`

2024-03-11 Thread Chris Withers

Hi,

I got a bounce-back from attempting to email the old alembic google group, so 
trying here instead!

Does Alembic have an equivalent to django's "makemigrations --check"?

This is a thing you can use in CI on pull requests to ensure no changes
have been made to the model that are not reflected in migrations.

I guess this boils down to "is the current db state plus any outstanding
migrations equal to the model in this checkout"...

cheers,

Chris

--
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper


http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/69a5ab37-ada6-4224-8473-423f13481c57%40withers.org.


Re: [sqlalchemy] Re: testing patterns with sqlalchemy 2.0

2022-09-02 Thread Chris Withers

On 01/09/2022 20:00, Jonathan Vanasco wrote:


 > Create an empty schema from the models using create_all?

This is what I usually do with smaller projects.  


When taking this approach, how do you ensure the accumulated schema 
migrations end up with a database that matches the one that create_all 
gives you?


When using unittest, some tests will use a fresh DB per test-run, others 
per-class, and others per-test. 


Yeah, pytest fixture scoping gives a great way to set these up.

Sometimes the tests dictate that, other 
times I control that with env vars.  That gives us the flexibility to 
work on a small section and do efficient test runs during development.


Creating a database for every unit test feels like something that would 
be slow enough to be annoying. How are you creating databases such that 
it's fast enough for this not to be the case?


Chris



[sqlalchemy] testing patterns with sqlalchemy 2.0

2022-08-31 Thread Chris Withers

Hi All,

Are there any libraries (or anything in sqlalchemy itself!) that cover 
the pattern of running unit tests against a database such that each 
test gets its own sterile environment in which to run? Postgres, if it 
helps. I've done some stuff with running in a subtransaction and rolling 
back at the end of the test before, but wanted to see if anything had 
become common enough to end up in a popular library yet...


What's the recommended way of getting a blank db to test against? Run 
alembic migrations in a session-level fixture? Create an empty schema 
from the models using create_all? Something else?
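The subtransaction-and-rollback pattern mentioned above, reduced to a sketch (the run_in_rollback helper name is made up; in practice this usually lives in a pytest fixture, with a Postgres URL instead of SQLite):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")  # stand-in for a Postgres URL
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)"))

def run_in_rollback(test_fn):
    """Run test_fn with a Session joined to an outer transaction that is
    rolled back afterwards, leaving the database untouched."""
    conn = engine.connect()
    outer = conn.begin()
    session = sessionmaker(bind=conn)()
    try:
        test_fn(session)
    finally:
        session.close()
        outer.rollback()  # undo everything the test did
        conn.close()

def my_test(session):
    session.execute(text("INSERT INTO person (name) VALUES ('fido')"))
    session.flush()

run_in_rollback(my_test)

# after the rollback, the table is empty again
with engine.connect() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM person")).scalar()
```

Because nothing outside the fixture ever commits, even tests that commit inside the outer transaction (via savepoints) leave no trace.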


cheers,

Chris



[sqlalchemy] SQLAlchemy 1.3.23 turn off before_compile with **kwargs?

2021-03-01 Thread Chris Simpson
Hello

I'd like to understand how to turn off a before_compile listener (e.g. a soft-delete 
filter, so that deleted items are included).

For example, I've adapted the example from the docs
(https://docs.sqlalchemy.org/en/13/orm/events.html?highlight=before_compile#sqlalchemy.orm.events.QueryEvents.before_compile)
to use the field 'archived', which works as expected:

@event.listens_for(Query, "before_compile", retval=True, bake_ok=True)
def filter_archived(query):
    for desc in query.column_descriptions:
        if desc["type"] is Person:
            entity = desc["entity"]
            query = query.filter(entity.archived == 0)
    return query


I've tried things such as:

Person.query.filter_by(archived=True).all()

But I don't yet understand where I should put such kwargs to override the 
before_compile event listener.

Is the following the right path?

@event.listens_for(Query, "before_compile", retval=True, bake_ok=True)
def filter_archived(query, **kwargs):
    for desc in query.column_descriptions:
        if kwargs["include_archived"] is not True and desc["type"] is Person:
            entity = desc["entity"]
            query = query.filter(entity.archived == 0)
    return query
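On 1.3, one common recipe gates the listener on an execution option rather than a **kwargs parameter: check query._execution_options.get("include_archived") inside the listener, and opt in per query with Query.execution_options(include_archived=True). On SQLAlchemy 1.4+, the cache-safe replacement for the whole pattern is the do_orm_execute session event plus with_loader_criteria. A self-contained sketch of the newer form (the include_archived option name is illustrative):

```python
from sqlalchemy import Boolean, Column, Integer, create_engine, event
from sqlalchemy.orm import (Session, declarative_base, sessionmaker,
                            with_loader_criteria)

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    archived = Column(Boolean, default=False)

@event.listens_for(Session, "do_orm_execute")
def filter_archived(execute_state):
    # Apply the soft-delete filter unless the caller opted out via an
    # execution option; this runs per execution, so it is cache-safe.
    if execute_state.is_select and not execute_state.execution_options.get(
        "include_archived", False
    ):
        execute_state.statement = execute_state.statement.options(
            with_loader_criteria(Person, Person.archived == False)
        )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Person(archived=False), Person(archived=True)])
session.commit()

active = session.query(Person).all()                                    # filtered
everyone = session.query(Person).execution_options(include_archived=True).all()
```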


Kind regards
Chris



Re: [sqlalchemy] Re: SQLAlchemy join/has query with example code

2021-02-09 Thread Chris Simpson
Thanks Mike, the assurance it's the right idea was what I wanted to check.

All sorted much appreciated.

On Sun, 7 Feb 2021, 21:49 Mike Bayer,  wrote:

>
>
> On Sat, Feb 6, 2021, at 8:56 AM, Chris Simpson wrote:
>
> After posting, I have arrived at *a* solution (which might be awful)
> Please let me know if this is a bad approach or I'm following the api
> correctly:
>
> I have converted this SQL query:
>
> SELECT COUNT(*)
> FROM person
> JOIN subscription ON
> person.id = subscription.person_id
> JOIN plan ON
> subscription.sku_uuid = plan.uuid
> JOIN plan_requirements ON
> plan.id = plan_requirements.plan_id
> WHERE plan_requirements.subscription = 1
>
> Into the following SQLAlchemy query:
>
> database.session.query(Person)\
> .join(Subscription)\
> .join(Plan, Subscription.sku_uuid==Plan.uuid)\
> .join(PlanRequirements, Plan.id==PlanRequirements.plan_id)\
> .filter(PlanRequirements.subscription==1).all()
>
>
> seems to be the right idea except you aren't emitting the "COUNT(*)" part
> of it, not sure if that's what you wanted.
>
>
> Kind regards,
>
> Chris
> On Saturday, 6 February 2021 at 13:42:54 UTC Chris Simpson wrote:
>
> Hello,
>
> I'm trying to convert this working SQL query: (SQLAlchemy models are below)
>
> SELECT COUNT(*)
> FROM person
> JOIN subscription ON
> person.id = subscription.person_id
> JOIN plan ON
> subscription.sku_uuid = plan.uuid
> JOIN plan_requirements ON
> plan.id = plan_requirements.plan_id
> WHERE plan_requirements.subscription = 1
>
> Into a SQLAlchemy query. so far from reading the docs
> <https://docs.sqlalchemy.org/en/13/orm/tutorial.html>,  I have the
> following:
>
> database.session.query(Person).join(Subscription).filter(Subscription.plan.has()
> ).all()
>
> With the objective: Show me all people who have at least one plan with the
> plan_requirements.subscription set to 1 (meaning true).
>
> Do I need to somehow keep chaining my joins?
>
> My SQLAlchemy Models are: (full code is also linked at end)
>
> class Person(database.Model):
> __tablename__ = "person"
> id = database.Column(database.Integer(), primary_key=True)
> uuid = database.Column(database.String(), default=uuid_string)
> given_name = database.Column(database.String())
> family_name = database.Column(database.String())
> subscriptions = relationship("Subscription", back_populates="person")
>
> class Plan(database.Model):
> __tablename__ = "plan"
> id = database.Column(database.Integer(), primary_key=True)
> uuid = database.Column(database.String(), default=uuid_string)
> requirements = relationship(
> "PlanRequirements", uselist=False, back_populates="plan"
> )
>
>
> class PlanRequirements(database.Model):
> __tablename__ = "plan_requirements"
> id = database.Column(database.Integer(), primary_key=True)
> plan_id = database.Column(database.Integer(), ForeignKey("plan.id"))
> plan = relationship("Plan", back_populates="requirements")
> instant_payment = database.Column(database.Boolean(), default=False)
> subscription = database.Column(database.Boolean(), default=False)
>
> Full source code of models:
> https://github.com/Subscribie/subscribie/blob/master/subscribie/models.py#L40
> <https://github.com/Subscribie/subscribie/blob/master/subscribie/models.py#L40>
>
> Much appreciated if someone can point me in the right direction. I'm
> confident with the SQL query, just not how to convert that to the ORM.
>
>

[sqlalchemy] Re: SQLAlchemy join/has query with example code

2021-02-06 Thread Chris Simpson
After posting, I have arrived at *a* solution (which might be awful). Please 
let me know if this is a bad approach or whether I'm following the API correctly:

I have converted this SQL query:

SELECT COUNT(*)
FROM person
JOIN subscription ON
person.id = subscription.person_id
JOIN plan ON 
subscription.sku_uuid = plan.uuid
JOIN plan_requirements ON
plan.id = plan_requirements.plan_id
WHERE plan_requirements.subscription = 1

Into the following SQLAlchemy query: 

database.session.query(Person)\
.join(Subscription)\
.join(Plan, Subscription.sku_uuid==Plan.uuid)\
.join(PlanRequirements, Plan.id==PlanRequirements.plan_id)\
.filter(PlanRequirements.subscription==1).all()
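For the COUNT(*) itself, one option is to select func.count() (or simply call .count() on the query). A self-contained sketch with the models trimmed down to just the joined columns:

```python
from sqlalchemy import (Boolean, Column, ForeignKey, Integer, String,
                        create_engine, func)
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)

class Subscription(Base):
    __tablename__ = "subscription"
    id = Column(Integer, primary_key=True)
    person_id = Column(Integer, ForeignKey("person.id"))
    sku_uuid = Column(String)

class Plan(Base):
    __tablename__ = "plan"
    id = Column(Integer, primary_key=True)
    uuid = Column(String)

class PlanRequirements(Base):
    __tablename__ = "plan_requirements"
    id = Column(Integer, primary_key=True)
    plan_id = Column(Integer, ForeignKey("plan.id"))
    subscription = Column(Boolean, default=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([
    Person(id=1),
    Subscription(person_id=1, sku_uuid="sku-1"),
    Plan(id=1, uuid="sku-1"),
    PlanRequirements(plan_id=1, subscription=True),
])
session.commit()

# SELECT COUNT(*) with the same three joins, fetched as a scalar:
count = (
    session.query(func.count(Person.id))
    .select_from(Person)
    .join(Subscription)
    .join(Plan, Subscription.sku_uuid == Plan.uuid)
    .join(PlanRequirements, Plan.id == PlanRequirements.plan_id)
    .filter(PlanRequirements.subscription == True)
    .scalar()
)
```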

Kind regards,

Chris

On Saturday, 6 February 2021 at 13:42:54 UTC Chris Simpson wrote:

> Hello,
>
> I'm trying to convert this working SQL query: (SQLAlchemy models are below)
>
> SELECT COUNT(*)
> FROM person
> JOIN subscription ON
> person.id = subscription.person_id
> JOIN plan ON 
> subscription.sku_uuid = plan.uuid
> JOIN plan_requirements ON
> plan.id = plan_requirements.plan_id
> WHERE plan_requirements.subscription = 1
>
> Into a SQLAlchemy query. so far from reading the docs 
> <https://docs.sqlalchemy.org/en/13/orm/tutorial.html>,  I have the 
> following:
>
> database.session.query(Person).join(Subscription).filter(Subscription.plan.has()).all()
>
> With the objective: Show me all people who have at least one plan with the 
> plan_requirements.subscription set to 1 (meaning true).
>
> Do I need to somehow keep chaining my joins?
>
> My SQLAlchemy Models are: (full code is also linked at end)
>
> class Person(database.Model):
> __tablename__ = "person"
> id = database.Column(database.Integer(), primary_key=True)
> uuid = database.Column(database.String(), default=uuid_string)
> given_name = database.Column(database.String())
> family_name = database.Column(database.String())
> subscriptions = relationship("Subscription", back_populates="person")
>
> class Plan(database.Model):
> __tablename__ = "plan"
> id = database.Column(database.Integer(), primary_key=True)
> uuid = database.Column(database.String(), default=uuid_string)
> requirements = relationship(
> "PlanRequirements", uselist=False, back_populates="plan"
> )
>
>
> class PlanRequirements(database.Model):
> __tablename__ = "plan_requirements"
> id = database.Column(database.Integer(), primary_key=True)
> plan_id = database.Column(database.Integer(), ForeignKey("plan.id"))
> plan = relationship("Plan", back_populates="requirements")
> instant_payment = database.Column(database.Boolean(), default=False)
> subscription = database.Column(database.Boolean(), default=False)
>
> Full source code of models: 
> https://github.com/Subscribie/subscribie/blob/master/subscribie/models.py#L40 
>  
> <https://github.com/Subscribie/subscribie/blob/master/subscribie/models.py#L40>
>
> Much appreciated if someone can point me in the right direction. I'm 
> confident with the SQL query, just not how to convert that to the ORM.
>
>



[sqlalchemy] SQLAlchemy join/has query with example code

2021-02-06 Thread Chris Simpson
Hello,

I'm trying to convert this working SQL query: (SQLAlchemy models are below)

SELECT COUNT(*)
FROM person
JOIN subscription ON
person.id = subscription.person_id
JOIN plan ON 
subscription.sku_uuid = plan.uuid
JOIN plan_requirements ON
plan.id = plan_requirements.plan_id
WHERE plan_requirements.subscription = 1

Into a SQLAlchemy query. So far, from reading the docs, I have the 
following:

database.session.query(Person).join(Subscription).filter(Subscription.plan.has()).all()

With the objective: Show me all people who have at least one plan with the 
plan_requirements.subscription set to 1 (meaning true).

Do I need to somehow keep chaining my joins?

My SQLAlchemy Models are: (full code is also linked at end)

class Person(database.Model):
    __tablename__ = "person"
    id = database.Column(database.Integer(), primary_key=True)
    uuid = database.Column(database.String(), default=uuid_string)
    given_name = database.Column(database.String())
    family_name = database.Column(database.String())
    subscriptions = relationship("Subscription", back_populates="person")


class Plan(database.Model):
    __tablename__ = "plan"
    id = database.Column(database.Integer(), primary_key=True)
    uuid = database.Column(database.String(), default=uuid_string)
    requirements = relationship(
        "PlanRequirements", uselist=False, back_populates="plan"
    )


class PlanRequirements(database.Model):
    __tablename__ = "plan_requirements"
    id = database.Column(database.Integer(), primary_key=True)
    plan_id = database.Column(database.Integer(), ForeignKey("plan.id"))
    plan = relationship("Plan", back_populates="requirements")
    instant_payment = database.Column(database.Boolean(), default=False)
    subscription = database.Column(database.Boolean(), default=False)

Full source code of models: 
https://github.com/Subscribie/subscribie/blob/master/subscribie/models.py#L40

Much appreciated if someone can point me in the right direction. I'm 
confident with the SQL query, just not how to convert that to the ORM.



[sqlalchemy] FetchedValue columns are INSERTed by Session.merge()

2020-10-02 Thread Wilson, Chris
Dear Michael and co,

I think that Columns which are marked as server_default=FetchedValue(), which 
are normally omitted from INSERT statements, are not omitted after an object 
has been merged from another session. Therefore SQLAlchemy tries to assign the 
old values to them, which fails if the database does not allow it (e.g. 
internal columns, such as xmin in Postgres).

If this is by design, is there any way to override it on the individual objects 
before merging?

Here is an example that reproduces the error:

from sqlalchemy import Column, FetchedValue, Integer, Text, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import object_session, sessionmaker

Base = declarative_base()

class Dog(Base):
    __tablename__ = 'dog'
    __table_args__ = {'prefixes': ['TEMPORARY']}

    id = Column(Integer, primary_key=True)
    xmin = Column("xmin", Integer, server_default=FetchedValue(),
                  server_onupdate=FetchedValue(), system=True)
    name = Column(Text)

def main():
    engine = create_engine('postgresql://username@server:port/db')
    engine.echo = True
    Base.metadata.create_all(engine)

    session_1 = sessionmaker(bind=engine)(autocommit=True)
    with session_1.begin():
        fido_1 = Dog(name="fido")
        session_1.add(fido_1)
    print(fido_1.xmin)

    session_2 = sessionmaker(bind=engine)(autocommit=True)

    with session_2.begin():
        session_2.query(Dog).delete()

    with sessionmaker(bind=engine)(autocommit=True).begin():
        fido_2 = session_2.merge(fido_1)
        session_2.add(fido_2)

    print(fido_2.xmin)
    print(session_2.query(Dog).count())

if __name__ == '__main__':
    main()

And the result is:

sqlalchemy.exc.ProgrammingError: (raised as a result of Query-invoked 
autoflush; consider using a session.no_autoflush block if this flush is 
occurring prematurely)
(psycopg2.ProgrammingError) column "xmin" of relation "dog" does not exist
LINE 1: INSERT INTO dog (id, xmin, name) VALUES (1, '283302364', 'fi...
[SQL: INSERT INTO dog (id, xmin, name) VALUES (%(id)s, %(xmin)s, %(name)s)]
[parameters: {'id': 1, 'xmin': '283302364', 'name': 'fido'}]
(Background on this error at: http://sqlalche.me/e/f405)

Note that this is raised when fido_2 is inserted (after fido_1 is merged into a 
new session).
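One workaround sketch, assuming merge() only copies attributes that are loaded on the source instance (a plain server_default Text column on SQLite stands in for the Postgres xmin system column here): pop the stale value out of the detached instance's dict before merging, so the INSERT leaves the column to the server.

```python
from sqlalchemy import (Column, FetchedValue, Integer, Text, create_engine,
                        inspect, text)
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Dog(Base):
    __tablename__ = "dog"
    id = Column(Integer, primary_key=True)
    stamp = Column(Text, server_default=FetchedValue())  # stand-in for xmin
    name = Column(Text)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine, expire_on_commit=False)

s1 = Session()
fido = Dog(id=1, name="fido", stamp="stale-server-value")
s1.add(fido)
s1.commit()
s1.close()  # fido is now detached, with "stamp" loaded

# Drop the loaded server-side value from the detached instance so that
# merge() has nothing to copy for that column:
inspect(fido).dict.pop("stamp", None)

s2 = Session()
s2.query(Dog).delete()  # make merge() take the INSERT path
s2.commit()
merged = s2.merge(fido)
s2.commit()             # the INSERT omits "stamp"

stamp = s2.execute(text("SELECT stamp FROM dog WHERE id = 1")).scalar()
```

On SQLite there is no real server default, so the column comes back NULL; the point is only that merge no longer tries to INSERT the stale value.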

Thanks, Chris.



[sqlalchemy] Interval vs DATETIME in ORM Internals

2019-12-30 Thread Chris Modzelewski
Hi There,

So I have a bit of an (admittedly unusual) question:

I understand that the Interval type is designed to operate using
datetime.timedelta on those SQL engines that natively support the
Interval data type (eg PostgreSQL).

I also understand that when operating on SQL engines that do *not*
natively support the Interval data type, SQLAlchemy coerces
datetime.timedelta values to datetime.datetime values relative to the
epoch date.

Is my understanding so far correct?

If so, here is the crux of the situation: given an ORM model class
with an Interval attribute, when introspecting either that model class
(i.e. the class of the model itself) OR when introspecting an instance
of that model class (presumably navigating across
InstrumentedAttributes and related Comparators) how can one
differentiate between:
A) an attribute that is still a native Interval type,
B) a “converted” Interval (now represented as a DATETIME type), and
C) a native DATETIME type that does not have any relationship to intervals

I’m almost certain there’s a mechanism to do this differentiation by
navigating the internals, but I'm having trouble putting my finger on
how best to accomplish this. My overall objective is to use this
differentiation to do appropriate datetime / timedelta arithmetic and
conversion based on whether the timedelta is to be persisted to the DB
as a native Interval type, or as a datetime relative to the epoch
time.
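For the class-level half of this, a hedged sketch: the column's type remains Interval at the metadata level even on backends that emulate it with a DATETIME (the emulation lives in the dialect-level implementation, and Interval also carries a `native` flag), so a plain isinstance() check on the mapped columns already separates cases (A)/(B) from (C). The Job model below is illustrative:

```python
from sqlalchemy import Column, DateTime, Integer, Interval, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Job(Base):
    __tablename__ = "job"
    id = Column(Integer, primary_key=True)
    timeout = Column(Interval)   # timedelta in Python; emulated as an
                                 # epoch-relative datetime on some backends
    started = Column(DateTime)   # a real datetime, unrelated to intervals

kinds = {}
for col in inspect(Job).columns:
    # Check Interval first; it is a distinct type, not a DateTime
    # subclass, so the two branches never collide.
    if isinstance(col.type, Interval):
        kinds[col.key] = "interval"
    elif isinstance(col.type, DateTime):
        kinds[col.key] = "datetime"
```

The same isinstance() test works on an instance via inspect(obj).mapper.columns, since the type objects live on the mapping, not on the instance.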

Any help or guidance would be much appreciated!

All the best,
Chris

Sent from my iPhone



[sqlalchemy] refresh_flush instance event not called for PK attributes

2019-12-19 Thread Chris Wilson
Dear Mike and SQLAlchemy users,

I think I've discovered a confusing (and undocumented) limitation of the 
refresh_flush event. It's called when non-PK columns are populated after an 
INSERT or UPDATE (e.g. from a server-side default), but not for PK values.

The documentation
(https://docs.sqlalchemy.org/en/13/orm/events.html#sqlalchemy.orm.events.InstanceEvents.refresh_flush)
says:

"This event is the same as InstanceEvents.refresh() except it is invoked 
within the unit of work flush process, and the values here typically come 
from the process of handling an INSERT or UPDATE, such as via the RETURNING 
clause or from Python-side default values."

With Postgres and SQLite at least, the primary key (e.g. the ID column) of a 
newly created object is returned with a RETURNING clause. But it doesn't 
trigger a refresh_flush event, because it's skipped by this code in _postfetch:

if returning_cols:
    row = result.context.returned_defaults
    if row is not None:
        for col in returning_cols:
            # pk cols returned from insert are handled
            # distinctly, don't step on the values here
            if col.primary_key and result.context.isinsert:
                continue

I can see that the ID is assigned to the object's state in the caller 
(_emit_insert_statements):



primary_key = result.context.inserted_primary_key

if primary_key is not None:
    # set primary key attributes
    for pk, col in zip(
        primary_key, mapper._pks_by_table[table]
    ):
        prop = mapper_rec._columntoproperty[col]
        if state_dict.get(prop.key) is None:
            state_dict[prop.key] = pk


But no event is called when this happens (AFAICS). The after_flush and 
after_flush_postexec events are called soon after that.

It would be nice if at least the documentation made this clear, and even better 
if we could use refresh_flush for all flush-context events, including PK 
assignment. What do you think?

If an example is useful, here is a trivial one. The receive_refresh_flush 
handler is never called:


from sqlalchemy import Column, Integer, Text, create_engine, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Dog(Base):
    __tablename__ = 'dog'
    id = Column(Integer, primary_key=True)
    name = Column(Text)

engine = create_engine('sqlite://')
# engine.echo = True
Base.metadata.create_all(engine)

DBSession = sessionmaker(bind=engine)
session = DBSession(autocommit=True)

@event.listens_for(Dog, 'refresh_flush')
def receive_refresh_flush(target, flush_context, attrs):
    print(f"Dog was assigned an ID: {attrs}")

with session.begin() as trans:
    session.add(Dog(name="fido"))
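In the meantime, one workaround sketch (same Dog model; the assigned_ids list is just an illustrative collector): by the time after_flush fires, the freshly INSERTed objects still listed in session.new already have their primary keys populated, so a session-level listener can observe the assignment:

```python
from sqlalchemy import Column, Integer, Text, create_engine, event
from sqlalchemy.orm import Session, declarative_base, sessionmaker

Base = declarative_base()

class Dog(Base):
    __tablename__ = "dog"
    id = Column(Integer, primary_key=True)
    name = Column(Text)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

assigned_ids = []

@event.listens_for(Session, "after_flush")
def receive_after_flush(session, flush_context):
    # session.new still lists the freshly INSERTed objects here, and
    # their primary key attributes are already populated.
    for obj in session.new:
        if isinstance(obj, Dog):
            assigned_ids.append(obj.id)

session = sessionmaker(bind=engine)()
session.add(Dog(name="fido"))
session.commit()
```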

Thanks, Chris.






Re: [sqlalchemy] Oracle wallets + proxy authentication with arguments encoded as strings in URL

2019-11-14 Thread Chris Stephens
"oracle://[proxy_to_user]@my_wallet" worked!

Thanks again.

On Thu, Nov 14, 2019 at 2:14 PM Mike Bayer  wrote:

>
>
> On Thu, Nov 14, 2019, at 2:52 PM, Chris Stephens wrote:
>
> I'm attempting to make use of an Oracle wallet with proxy authentication
> passed as strings in URL.
>
> The documentation states:
>
> "*Changed in version 1.3: *the cx_oracle dialect now accepts all argument
> names within the URL string itself, to be passed to the cx_Oracle DBAPI. As
> was the case earlier but not correctly documented, the
> create_engine.connect_args
> <https://docs.sqlalchemy.org/en/13/core/engines.html#sqlalchemy.create_engine.params.connect_args>
>  parameter
> also accepts all cx_Oracle DBAPI connect arguments."
>
> The following works:
>
> db = sqlalchemy.create_engine('oracle+cx_oracle://my_wallet_credential',
> connect_args={"user":"[prox_to_user]"}
> )
>
> However, I can't seem to get the strings/URL to work. I tried the
> following but always receive "ORA-12154: TNS:could not resolve the
> connect identifier specified". Is this possible? If so, what is the proper
> syntax?
>
> db =
> sqlalchemy.create_engine('oracle+cx_oracle://my_wallet_credential?user=proxy_to_user')
> db =
> sqlalchemy.create_engine('oracle+cx_oracle://my_wallet_credential?user=[proxy_to_user]')
>
>
> "user" is the first argument after the //.oracle://proxy_to_user@
> /my_wallet
>
>
>
>
> Thanks in advance for any help!
>
>
>
>
>



[sqlalchemy] Oracle wallets + proxy authentication with arguments encoded as strings in URL

2019-11-14 Thread Chris Stephens
I'm attempting to make use of an Oracle wallet with proxy authentication
passed as strings in URL.

The documentation states:

"Changed in version 1.3: the cx_oracle dialect now accepts all argument
names within the URL string itself, to be passed to the cx_Oracle DBAPI. As
was the case earlier but not correctly documented, the
create_engine.connect_args

parameter
also accepts all cx_Oracle DBAPI connect arguments."

The following works:

db = sqlalchemy.create_engine('oracle+cx_oracle://my_wallet_credential',
connect_args={"user":"[prox_to_user]"}
)

However, I can't seem to get the strings/URL to work. I tried the following
but always receive "ORA-12154: TNS:could not resolve the connect identifier
specified". Is this possible? If so, what is the proper syntax?

db =
sqlalchemy.create_engine('oracle+cx_oracle://my_wallet_credential?user=proxy_to_user')
db =
sqlalchemy.create_engine('oracle+cx_oracle://my_wallet_credential?user=[proxy_to_user]')

Thanks in advance for any help!



Re: [sqlalchemy] Masking SAWarning: Flushing object ... with incompatible polymorphic identity

2019-06-21 Thread Chris Withers

On 20/06/2019 16:00, Mike Bayer wrote:



On Thu, Jun 20, 2019, at 3:14 AM, Chris Withers wrote:

Hi All,

I'm getting this warning:

SAWarning: Flushing object  with incompatible
polymorphic identity ; the object may not refresh
and/or load correctly (this warning may be suppressed after 10 
occurrences)

    (state_str(state), dict_[polymorphic_key]),

How can I indicate in my code that this is intentional and no warning
should be emitted?


you would need to change its class, or replace it with a new object of 
the correct class.  in both cases you probably need to pull the object 
out of the Session and put it in again.   if you are making a new 
object, you would use make_transient_to_detached() to give the object 
a persistent primary key identity then use session.add() to put it 
back in again:


https://docs.sqlalchemy.org/en/13/orm/session_api.html?highlight=make_transient#sqlalchemy.orm.session.make_transient_to_detached

if you are doing "obj.__class__ = newclass", you probably need to do 
the same thing plus more because the identity key needs to have the 
new class in it, so you could make_transient() to erase the old 
identity key and then make_transient_to_detached() to give it back a 
new identity key, then session.add() it.


in both of *those* cases, the history of the object is reset to 
"clean", so anything you actually want to persist on the object needs 
to be applied after you've recreated it as a new class with a new 
identity key.


still another way, that would not reset the history, you can also 
manipulate state.key directly to have the correct identity key and 
then replace the object, e.g.


session.expunge(obj)
obj.__class__ = newclass
obj.polymorphic_identity = 'new_identity'
inspect(obj).key = sqlalchemy.orm.identity_key(instance=obj)
session.add(obj)

I'd probably want to make a new object and not change __class__, 
because changing __class__ in the general case indicates you're going 
to have an object where its __dict__ can be in an invalid state vs. 
what the class expects.


Yikes, that escalated quickly.

It probably makes sense just to lose the polymorphic-ness in this case, 
I'm not getting enough from it to make the above sensible...


Chris



Re: [sqlalchemy] Masking SAWarning: Flushing object ... with incompatible polymorphic identity

2019-06-21 Thread Chris Withers

On 20/06/2019 16:00, Mike Bayer wrote:



this is not reproducing for me:

from sqlalchemy import util
import warnings
from sqlalchemy import exc

warnings.filterwarnings("ignore", category=exc.SAWarning)

util.warn_limited(
    "Flushing object %s with "
    "incompatible polymorphic identity %r; the "
    "object may not refresh and/or load correctly",
    (object(), 'foo'),
)



with the filter I get no output


Yay! :-) I hate warning filtering; it never seems to work right
for me. Sorry, this is probably a rabbit hole that's not worth worrying
about.


Chris



Re: [sqlalchemy] Re: Masking SAWarning: Flushing object ... with incompatible polymorphic identity

2019-06-21 Thread Chris Withers

On 20/06/2019 18:50, Jonathan Vanasco wrote:



On Thursday, June 20, 2019 at 3:14:06 AM UTC-4, Chris Withers wrote:


How can I indicate in my code that this is intentional and no warning
should be emitted?



Personal option:

I would not mask these.  I would let them persist and probably add a 
unittest to ensure they are invoked in certain situations.


Masking them runs the risk of masking other SqlAlchemy warnings that 
you'll want to know about.


Oh, I totally agree, it's more that I couldn't get them to mask *at 
all*. Once I got something working I was going to go back to a regex for 
just this case, but see other email...
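[Editorial aside: the regex approach mentioned above can be sketched with nothing but the standard library. `SAWarning` below is a local stand-in class, not the real `sqlalchemy.exc.SAWarning`:]

```python
import warnings

# Local stand-in for sqlalchemy.exc.SAWarning so the sketch is
# self-contained; substitute the real class in an application.
class SAWarning(RuntimeWarning):
    pass

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Ignore only the polymorphic-identity message; the regex is
    # matched against the start of the warning text.
    warnings.filterwarnings(
        "ignore",
        message=r"Flushing object .* with incompatible polymorphic",
        category=SAWarning,
    )
    warnings.warn(
        "Flushing object <Event> with incompatible polymorphic identity",
        SAWarning,
    )
    warnings.warn("some unrelated SAWarning", SAWarning)

# Only the unrelated warning escapes the filter.
assert len(caught) == 1
assert "unrelated" in str(caught[0].message)
```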


Chris



Re: merging old versions

2019-06-21 Thread Chris Withers

>> On Thu, Jun 20, 2019 at 2:37 AM Chris Withers <ch...@withers.org> wrote:
>>
>> Hi All,
>>
>> I have some versions that make use of a third-party package I no
>> longer use; how do I collapse down alembic revisions that have
>> already been executed everywhere?
>>
>> I found
>> https://stackoverflow.com/questions/34491914/alembic-how-to-merge-all-revision-files-to-one-file
>> but that doesn't feel right...

On 20/06/2019 19:00, Mike Bayer wrote:


I think the basic idea is to create a database and codebase in the 
state of the target revision. Then autogenerate a migration from 
nothing to that revision - just like you would do when starting to use 
alembic from an existing schema. From there you can change the slug on 
it so that it works as the down_revision of later migrations and clear 
out the old unused migrations that you're replacing.


- Michael



I think that's what the stackoverflow answer says too. 


Yep.

At the moment 
that might be the most expedient approach.  


Okay, TBH the case here is more that I have a couple of migrations that
use a package I want to remove as a dependency (sqlalchemy-searchable, 
which is really good, fwiw, just turns out "like" querying is actually 
going to work better for my needs), but I think I can just remove those 
bits once all the migrations are run through...


However I can see that 
Alembic might benefit from having a special option to autogenerate a 
model into Python code assuming no database to start with. Feel free
to propose, though Alembic is suffering from a lack of contributors right now.


Gotcha, I'll come with a PR rather than just asking for something, but I'm
suffering from the same problem you are ;-)


thanks as always,

Chris

--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy-alembic/b8fe3ab2-042a-a789-07a4-f874ef8c34eb%40withers.org.
For more options, visit https://groups.google.com/d/optout.


merging old versions

2019-06-20 Thread Chris Withers

Hi All,

I have some versions that make use of a third-party package I no
longer use; how do I collapse down alembic revisions that have already
been executed everywhere?


I found 
https://stackoverflow.com/questions/34491914/alembic-how-to-merge-all-revision-files-to-one-file 
but that doesn't feel right...


Chris



[sqlalchemy] Masking SAWarning: Flushing object ... with incompatible polymorphic identity

2019-06-20 Thread Chris Withers

Hi All,

I'm getting this warning:

SAWarning: Flushing object  with incompatible 
polymorphic identity ; the object may not refresh 
and/or load correctly (this warning may be suppressed after 10 occurrences)

    (state_str(state), dict_[polymorphic_key]),

I know why: I'm changing the polymorphic type of a row, but this is 
intentional (it's handling a PUT to the entity the row represents) and I 
don't believe I'll hit refresh or load problems since once the 
transaction is committed, I'm done with the session.


So, two questions:

How can I indicate in my code that this is intentional and no warning 
should be emitted?


Why does the following warning filter not work?

warnings.filterwarnings("ignore", category=SADeprecationWarning)
warnings.filterwarnings("ignore", category=SAWarning)

The SADeprecationWarning filter *does* work (and is somewhat legit as 
they're coming from a third party library that's awaiting a fix), but 
the SAWarning doesn't catch the above.
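[Editorial aside: one stdlib behaviour worth ruling out when a category filter seems to be ignored is that anything running later in the process (a test runner, another library) can reset the global filter list, silently undoing an earlier `filterwarnings()` call. A sketch with a local stand-in warning class:]

```python
import warnings

class SAWarning(RuntimeWarning):
    # Local stand-in for sqlalchemy.exc.SAWarning.
    pass

with warnings.catch_warnings(record=True) as caught:
    warnings.filterwarnings("ignore", category=SAWarning)
    warnings.warn("flush problem", SAWarning)
    assert caught == []           # the category filter does work...

    warnings.resetwarnings()      # ...until something resets the filters
    warnings.simplefilter("always")
    warnings.warn("flush problem", SAWarning)

# The warning that fired after the reset is the one that got through.
assert len(caught) == 1
```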


cheers,

Chris



Re: [sqlalchemy] testing that a session is committed correctly

2019-06-19 Thread Chris Withers

On 10/06/2019 15:40, Mike Bayer wrote:




Okay, so sounds like in an ideal world, the framework should provide a
way to sub out that transaction middleware when unit testing and then
for functional testing, I just need to fall back to dropping everything
in the db, or having a fresh db created from scratch?



why can't you use the rollback fixture for functional testing?  whether 
or not you can assert that commit() was called, you can still have an 
enclosing transaction that gets rolled back.


Sorry, I missed this. This sounds right, but I seem to remember hitting 
problems with this, and then I got pulled onto another project.


Chris



Re: [sqlalchemy] testing that a session is committed correctly

2019-06-10 Thread Chris Withers

On 05/06/2019 20:47, Mike Bayer wrote:




The panacea I'm after is to be able to run the DDL in a transaction, run
each test in a subtransaction off that which is rolled back at the end
of each test, but also be able to check that the code under test is
doing session.commit() where it should. With the pattern we're
discussing, I certainly have the first two, but as you can see from what
I've just pasted above, the third one isn't there, but is it possible?



so your app code calls .flush(), .commit(), and .close() explicitly 
within each persistence method?  


Not really, it's more framework code, but most frameworks nowadays 
provide testing methods along the lines of:


client.get('/something?whatever=1')

...and they exercise the full request handling code, that tends to 
include some form of session lifecycle middleware or what have you.


and you need to make sure the .commit() 
is in the middle?   


Yep.

I'd probably use mock.patch for that level of 
granularity.   Usually I'd not have specific business methods calling 
commit() and close() at all, at the very least not .close().


Okay, so sounds like in an ideal world, the framework should provide a 
way to sub out that transaction middleware when unit testing and then 
for functional testing, I just need to fall back to dropping everything 
in the db, or having a fresh db created from scratch?
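[Editorial aside: the mock.patch suggestion above can be sketched with just the standard library; `FakeSession` and `handle_request` are hypothetical stand-ins for the app's real session and request handler:]

```python
from unittest import mock

class FakeSession:
    # Hypothetical stand-in for a SQLAlchemy Session.
    def commit(self):
        pass
    def close(self):
        pass

def handle_request(session):
    # Hypothetical code under test: it must remember to commit
    # before closing.
    session.commit()
    session.close()

session = FakeSession()
with mock.patch.object(session, "commit", wraps=session.commit) as commit:
    handle_request(session)

# The test fails loudly if the commit() call is ever removed.
commit.assert_called_once()
```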


cheers,

Chris



Re: [sqlalchemy] testing that a session is committed correctly

2019-06-05 Thread Chris Withers

On 05/06/2019 17:15, Mike Bayer wrote:



How come close() doesn't rollback the SessionTransaction if it throws it
away?


that's currently what .close() does, it discards the connection.   this 
is safe because the connection pool ensures transactions are rolled 
back.   This might have to change with some of the 2.0 things I'm 
thinking about but it's not clear yet.


Anyway, you don't get a rollback here because the session is bound to an 
external connection so the connection pool is not involved.   


Ah, right, so under normal conditions, if I just close a session, any 
transaction and subtransactions in flight will be rolled back when the 
connection is returned to the pool?
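[Editorial aside: the pool behaviour described above can be mimicked with stdlib sqlite3, where uncommitted work likewise vanishes when the connection is discarded:]

```python
import os
import sqlite3
import tempfile

# A file-backed database so a second connection can inspect what
# survived the first one being thrown away.
path = os.path.join(tempfile.mkdtemp(), "events.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE event (text TEXT)")
conn.commit()
conn.execute("INSERT INTO event VALUES ('uncommitted work')")
conn.close()  # transaction still in flight: its work is lost

conn = sqlite3.connect(path)
count = conn.execute("SELECT count(*) FROM event").fetchone()[0]
assert count == 0  # the uncommitted row is gone
conn.close()
```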



if you 
want to roll back the work that the application did, that would be your 
sub_transaction.rollback():


    sub_transaction = conn.begin_nested()
    try:

        session = Session()

        # code under test:
        event = Event(text='some stuff got done')
        session.add(event)
        session.flush()
        session.close()

    finally:
        sub_transaction.rollback()
        assert session.query(Event).count() == 0


Sure, but the test then fails when the code is correct:

try:
    Base.metadata.create_all(bind=conn, checkfirst=False)
    Session.configure(bind=conn)

    sub_transaction = conn.begin_nested()
    try:

        session = Session()

        # code under test:
        event = Event(text='some stuff got done')
        session.add(event)
        session.flush()
        session.commit()
        session.close()

        # test:
        sub_transaction.rollback()

        assert session.query(Event).count() == 1

    finally:
        sub_transaction.rollback()

finally:
    transaction.rollback()

The panacea I'm after is to be able to run the DDL in a transaction, run 
each test in a subtransaction off that which is rolled back at the end 
of each test, but also be able to check that the code under test is 
doing session.commit() where it should. With the pattern we're
discussing, I certainly have the first two, but as you can see from what 
I've just pasted above, the third one isn't there, but is it possible?


the test harness is giving you two choices.  you can look at the state 
of the DB after your program has done some things and *before* your 
harness has reversed its work, or you can look at the state of the DB 
*after* your harness has reversed its work. 


Unless I'm missing something, neither of these let the test confirm that 
the code under test has called commit() when it should.


cheers,

Chris



Re: [sqlalchemy] testing that a session is committed correctly

2019-06-05 Thread Chris Withers

On 05/06/2019 16:41, Mike Bayer wrote:



Which gives me:

$ python sessions_are_weird.py
Traceback (most recent call last):
  File "sessions_are_weird.py", line 40, in 
    assert session.query(Event).count() == 0
AssertionError

Whereas after the rollback, I'd expect that count to be zero...


this is not working because of this:

     session.close()

     # test:
     session.rollback()


the session is closed first so rollback() will do nothing.


Yeah, that was my point: That session.close() appears to be the problem. 
It's a normal and required part of the code under test, but it throws 
away the SessionTransaction without rolling it back, so by the time the 
test does session.rollback(), it's doing it on a new SessionTransaction 
and so has no effect, and the assertion fails because the event created
is still around...


Put differently, the web app closes the session at the end of its 
request handling, which seems legit, right?


How come close() doesn't rollback the SessionTransaction if it throws it 
away?


Chris



Re: [sqlalchemy] testing that a session is committed correctly

2019-06-05 Thread Chris Withers

(sorry, meant to send this to the list)

On 05/06/2019 15:52, Mike Bayer wrote:



That session.close() appears to be the problem. It's a normal and 
required part of the code under test, but it throws away the 
SessionTransaction without rolling it back, so by the time the test 
does session.rollback(), it's doing it on a new SessionTransaction and 
so has no effect, and the assertion fails because the event created
is still around...


Is this a bug or am I just confusing myself and everyone else?


I can run your script if you remove the "diary" thing from it


Hmm, I wonder what's different?

I'm on Python 3.7.1, SQLAlchemy 1.3.4, Postgres 11.3, here's a totally 
self contained script:


import os

from sqlalchemy import Column, Integer, Text
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

Session = sessionmaker()

class Event(Base):
    __tablename__ = 'entry'
    id = Column(Integer(), primary_key=True)
    text = Column(Text)


engine = create_engine(os.environ['TEST_DB_URL'])
Session.configure(bind=engine)

conn = engine.connect()
transaction = conn.begin()
try:
    Base.metadata.create_all(bind=conn, checkfirst=False)
    Session.configure(bind=conn)

    sub_transaction = conn.begin_nested()
    try:

        session = Session()

        # code under test:
        event = Event(text='some stuff got done')
        session.add(event)
        session.flush()
        session.close()

        # test:
        session.rollback()
        assert session.query(Event).count() == 0
    finally:
        sub_transaction.rollback()

finally:
    transaction.rollback()


Which gives me:

$ python sessions_are_weird.py
Traceback (most recent call last):
  File "sessions_are_weird.py", line 40, in 
    assert session.query(Event).count() == 0
AssertionError

Whereas after the rollback, I'd expect that count to be zero...

Chris



Re: [sqlalchemy] testing that a session is committed correctly

2019-06-05 Thread Chris Withers

On 04/06/2019 23:21, Mike Bayer wrote:


On Tue, Jun 4, 2019, at 4:33 PM, Chris Withers wrote:


So, how do I roll back the further subtransaction created by the web
framework instantiating Session from a sessionmaker bound to the
connection in which begin_nested() has been called, which under non-test
running would actually be a top level transaction assuming I understand
the pattern correctly, in such a way that if the code-under-test has
committed on its session, the session being used to check expectations in
the unit test will see the results, but if that commit has been
forgotten, it will not?


I'm not following all your code but if there are two sessions in play 
I'd probably try to avoid that, there should be only one Session you 
care about.  the test fixtures should be external to everything and 
make sure there's just the one session.   if there are two in play, 
I'm not sure how they both get bound to your test transaction.


Even this doesn't appear to be enough, here's the simplest reproducer 
script I can get to:


import os

from diary.model import Session, Base, Event, Types
from sqlalchemy import create_engine

engine = create_engine(os.environ['TEST_DB_URL'])
Session.configure(bind=engine)

conn = engine.connect()
transaction = conn.begin()
try:
    Base.metadata.create_all(bind=conn, checkfirst=False)
    Session.configure(bind=conn)

    sub_transaction = conn.begin_nested()
    try:

        session = Session()

        # code under test:
        event = Event(date='2019-06-02', type=Types.done,
                      text='some stuff got done')
        session.add(event)
        session.flush()
        session.close()

        # test:
        session.rollback()
        assert session.query(Event).count() == 0
    finally:
        sub_transaction.rollback()

finally:
    transaction.rollback()


That session.close() appears to be the problem. It's a normal and 
required part of the code under test, but it throws away the 
SessionTransaction without rolling it back, so by the time the test does 
session.rollback(), it's doing it on a new SessionTransaction and so has 
no effect and the assertion fails because there event created is still 
around...


Is this a bug or am I just confusing myself and everyone else?

Chris



Re: [sqlalchemy] testing that a session is committed correctly

2019-06-04 Thread Chris Withers

On 04/06/2019 23:21, Mike Bayer wrote:



I'm not following all your code but if there are two sessions in play 
I'd probably try to avoid that, there should be only one Session you 
care about. 


This comes back to something I asked you about on Twitter a while ago: 
the code under test gets its session by calling a sessionmaker; how can 
I have that return an existing session, which appears to be what you're 
suggesting, rather than a new session, which appears to be all they
can do.


the test fixtures should be external to everything 


I don't understand what you mean by this.

and make 
sure there's just the one session.   if there are two in play, I'm not 
sure how they both get bound to your test transaction.


I believe they start a nested transaction?

Chris



Re: [sqlalchemy] when does session.transaction come into being?

2019-06-04 Thread Chris Withers

On 04/06/2019 14:47, Mike Bayer wrote:



On Tue, Jun 4, 2019, at 3:05 AM, Chris Withers wrote:

Hi All,

What creates session.transaction? I can't spot get __getattr__ magic,
but the only place in the code I see it being created is in .begin(...),
which has a docstring saying that it should no longer be used, so I feel
like I must be missing something?



self.begin() is called inside the __init__ method of the Session when 
autocommit is at the default of False.


Okay, I must have missed that.

Just double checking: I thought you said that Session() would not block?
If it's calling self.begin(), how come that isn't a blocking operation?

cheers,

Chris



Re: [sqlalchemy] testing that a session is committed correctly

2019-06-04 Thread Chris Withers

On 04/06/2019 14:49, Mike Bayer wrote:



On Tue, Jun 4, 2019, at 2:15 AM, Chris Withers wrote:
Now, what I'm trying to test is that I haven't forgotten to include 
the "with session.transaction". The problem is that, without the 
transaction.rollback(), the test passes regardless of whether the 
"with session.transaction" is there, and with it there, it always fails.


What's the best way for writing tests that have the database setup DDL 
run in a transaction that's rolled back at the end of the session, 
each test in a subtransaction that gets rolled back at the end of each 
test, and also test that code under test is committing or rolling back 
as required?



So when you make the Session it has a .transaction that you start with, 
when you come out of your tests, that should be the same .transaction 
when you get the Session back.  If it's different, then the test did not 
leave the Session in the same state.  does that work ?


I think what I'm getting to here is that there are two Session instances
here, one that's set up by the web app under test, and one by pytest as 
a fixture.


However, if I understand correctly, they're both running inside the 
sub-transaction returned by engine.connect().begin_nested(), which is in 
turn inside the transaction returned by engine.connect().begin().


So, how do I roll back the further subtransaction created when the web 
framework instantiates Session from a sessionmaker bound to the 
connection in which begin_nested() has been called (under non-test 
running this would actually be a top-level transaction, assuming I 
understand the pattern correctly)? I want to roll it back in such a way 
that if the code under test has committed on its session, the session 
being used to check expectations in the unit test will see the results, 
but if that commit has been forgotten, it will not.
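One version-agnostic way to assert that the code under test really committed is a sketch using the documented "after_commit" session event, rather than comparing transaction identity (the engine URL and the statement here are placeholders):

```python
from sqlalchemy import create_engine, event, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")
Session = sessionmaker(bind=engine)

session = Session()
commits = []
# instance-level listener: records every commit made on this session
event.listen(session, "after_commit", lambda s: commits.append(s))

# ... the code under test would run against `session` here ...
session.execute(text("select 1"))
session.commit()

# the test can then fail loudly if the commit was forgotten
assert commits, "code under test never committed the session"
```

This sidesteps the nested-transaction bookkeeping entirely: the fixture only needs to check whether the commit hook fired.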


cheers,

Chris



[sqlalchemy] when does session.transaction come into being?

2019-06-04 Thread Chris Withers

Hi All,

What creates session.transaction? I can't spot any __getattr__ magic, 
but the only place in the code I see it being created is in .begin(...), 
which has a docstring saying that it should no longer be used, so I feel 
like I must be missing something?


cheers,

Chris



[sqlalchemy] testing that a session is committed correctly

2019-06-04 Thread Chris Withers

Hi All,

I'm working with the pattern described at 
https://docs.sqlalchemy.org/en/13/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites 
along with pytest and FastAPI, an async web app framework with good 
support for running blocking code.


So, I have my fixtures:

@pytest.fixture(scope='session')
def db(client):
engine = Session.kw['bind']
conn = engine.connect()
transaction = conn.begin()
try:
Base.metadata.create_all(bind=conn,checkfirst=False)
yield conn
finally:
transaction.rollback()
Session.configure(bind=engine)

@pytest.fixture()
def transaction(db):
transaction = db.begin_nested()
try:
Session.configure(bind=db)
yield transaction
finally:
transaction.rollback()

@pytest.fixture()
def session(transaction):
return Session()

And a test:

def test_create_full_data(transaction, session, client):
response = client.post('/events/',json={
'date':'2019-06-02',
'type':'DONE',
'text':'some stuff got done' })
transaction.rollback()
actual = session.query(Event).one()
compare(actual.date,expected=date(2019,6,2))
compare(type(actual),expected=Done)
compare(response.json(),expected={
'id': actual.id,
'date':'2019-06-02',
'type':'DONE',
'text':'some stuff got done' })
compare(response.status_code,expected=201)

And some code:

@router.post("/", response_model=EventRead, status_code=201)
def create_object(
    *,
    session: Session = Depends(db_session),
    event: EventCreate,
):
    """ Create new Event. """
    with session.transaction:
        event = Event(**event.dict())
        session.add(event)
        session.flush()
        session.expunge(event)
    return event


Now, what I'm trying to test is that I haven't forgotten to include the 
"with session.transaction". The problem is that, without the 
transaction.rollback(), the test passes regardless of whether the "with 
session.transaction" is there, and with it there, it always fails.


What's the best way for writing tests that have the database setup DDL 
run in a transaction that's rolled back at the end of the session, each 
test in a subtransaction that gets rolled back at the end of each test, 
and also test that code under test is committing or rolling back as 
required?


cheers,

Chris



[sqlalchemy] Will calling a sessionmaker or closing a session block?

2019-06-02 Thread Chris Withers

Hi All,

Given this async function (asgi middleware, as it happens):

@app.middleware('http')
async def make_db_session(request: Request, call_next):
    request.state.db = Session()
    response = await call_next(request)
    request.state.db.close()
    return response

Would either the call to Session, which is a sessionmaker, or 
db.close(), be expected to block?
I can't remember whether instantiating a session will open a network 
connection, or whether close will similarly do network io, so thought 
I'd ask...
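For what it's worth, this can be checked empirically. A minimal sketch using an in-memory SQLite engine and a pool-level "connect" listener suggests that neither calling the sessionmaker nor Session() itself performs any I/O; a connection is only acquired when the first statement executes:

```python
from sqlalchemy import create_engine, event, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")
connects = []

@event.listens_for(engine, "connect")
def _count(dbapi_connection, connection_record):
    # fires whenever the pool creates a new DBAPI connection
    connects.append(1)

Session = sessionmaker(bind=engine)
session = Session()                  # no connection opened yet
assert connects == []
session.execute(text("select 1"))    # first statement checks out a connection
assert len(connects) == 1
session.close()                      # returns the connection to the pool
```

close() just releases the checked-out connection back to the pool, so with a pooled engine neither end of the middleware should block on network setup or teardown.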


Chris



Re: [sqlalchemy] Re: TypeDecorators don't know which database session to use

2019-05-30 Thread Chris Wilson
Hi Mike,

On Wed, 29 May 2019 at 15:30, Mike Bayer wrote:

> Note that we suggested approaches that work in conjunction with the 
> serialized approach you gave, the @property approach and the mapper.load() 
> event approach. 
>

Unfortunately I think they would both require massive code changes. 
Everywhere that we use Column(CompressedBinary), we'd have to rename the 
column and add property getters and setters. Unless there's a way to hook 
into the instrumentation machinery to do that automatically?
 

> However, if you are storing the fully serialized object in the column, 
> like the whole Cat object, you don't need to emit a SQL query to restore 
> it, for caching objects in serialized form you'd want to merge() it back 
> into the Session with load=False so that no SQL is emitted.


We are not serializing the whole object, only the PK, but merging it back 
into the session with all its attributes expired, so that any attempt to 
access them triggers a load. That works well for us, unless the current 
session changes in the mean time (not the problem that I originally asked 
about, but a related one, that luckily isn't biting us right now).
 

>You still need your Session though and of course, using a threadlocal 
> variable is the best way to make that happen right now without changing the 
> type API.
>

The problem isn't that we're passing objects between Threads, it's that we 
can use multiple sessions in the same thread. I noticed it while trying to 
create a reproducible test case in the debugger, which was switching to our 
test/scratch database, setting a local variable, and then exiting the 
context (back to our main database) with that local variable still in scope 
(with expired attributes). When the debugger rendered the repr() of the 
local variable, it causes its relationships to be loaded from the live 
database, which didn't compare equal to objects in the test database, so I 
had to restart the debugger every time this happened.

It's not critical but it was annoying because it made debugging much harder 
and slower than I thought it could/should be. I admit that this is a niche 
use case, so I consider this a feature request instead of a bug.

> If there were pre-load hooks as well as post-load, then we could set a 
> global variable to object_session(parent_object) for the duration of the 
> load.
>
>
> You shouldn't need "object_session(parent_object)",  there's only one 
> Session in play at a time within a thread so just assign the Session to a 
> thread local variable.  
>

The problem is with objects that have escaped the scope of one active 
Session, and are being refreshed when another is in scope.
 

> The best hook to use here is simply the transaction-level hooks to set the 
> current Session onto a global thread-local variable.  The ones that are 
> SessionTransaction level should work well:
>
>
> https://docs.sqlalchemy.org/en/13/orm/events.html?highlight=after_begin#sqlalchemy.orm.events.SessionEvents.after_transaction_create
>
>
> https://docs.sqlalchemy.org/en/13/orm/events.html?highlight=after_begin#sqlalchemy.orm.events.SessionEvents.after_transaction_end
>

We already have a global variable for the current Session, but I need to 
get back to the one that was used to load the object (containing the 
TypeDecorated attribute) instead of the current one.
 

> The context available is the ExecutionContext, however this isn't passed 
> to the TypeEngine bind/result processor methods right now.  That might not 
> be a bad idea in the future but for the moment would require a major 
> breaking API change that cannot be made quickly or trivially.


Could it be added as an optional argument that is only passed if the 
recipient method is expecting it?
 

> An example of passing information between a Session and the 
> execution-level context is at 
> https://github.com/sqlalchemy/sqlalchemy/wiki/SessionModifiedSQL but this 
> doesn't give you a way to get inside the TypeDecorator methods without 
> using a global threadlocal.


Unfortunately even that doesn't help, I think, because the TypeDecorator is 
called after the statement has been executed, so there's no concept of a 
"current" load statement, only the last one, and I don't know for sure if 
that was really the same context/object/session that loaded the data being 
processed.

I'm looking at whether something like the mutable extension would have 
access to the parent object, to coerce data structures containing 
SQLAlchemy objects into serializable form on the way into the database, and 
coerce them back to SQLAlchemy objects after a load event.

Thanks again, Chris.


Re: [sqlalchemy] Re: TypeDecorators don't know which database session to use

2019-05-29 Thread Chris Wilson
Hi all,

Thanks for the replies! Sorry, perhaps I wasn't clear, this is just a 
minimal example. We are actually storing serialized objects in a column, 
which can be e.g. dicts or lists of (dehydrated) SQLAlchemy objects, numpy 
arrays, etc. It's much faster to store (both read and write) a complex 
structure serialized into a single column than to split it across many 
tables. Generally this works fine for us, but I just discovered this 
unusual case, and couldn't see how to solve it.

Changing how we serialize objects is not realistically going to happen, so 
we are going to continue using TypeDecorators unless there is a better 
option that enables us to continue doing what we are currently doing.

If there were pre-load hooks as well as post-load, then we could set a 
global variable to object_session(parent_object) for the duration of the 
load. Alternatively, if the state or context was passed to the 
TypeDecorator then we could use it to get the session. But right now it 
appears that there is no hook that does what we want to do. Please would 
you accept my request to implement something like this?

Thanks, Chris.

On Tuesday, 28 May 2019 23:42:59 UTC+1, Jonathan Vanasco wrote:
>
>
>
> On Tuesday, May 28, 2019 at 4:35:32 PM UTC-4, Mike Bayer wrote:
>>
>>
>> additionally, running SQL inside of a TypeDecorator is not the intended 
>> usage, as well as using ORM features inside of a TypeDecorator is also not 
>> the intended usage.
>>
>
>
> thanks, mike. I was 99.99% sure that was the case, but didn't want to risk 
> being wrong.
>



[sqlalchemy] TypeDecorators don't know which database session to use

2019-05-28 Thread Chris Wilson
Dear Michael,

I have discovered a limitation of TypeDecorators (custom column types): any one 
that uses the database (e.g. to load objects serialised in a custom way) has no 
way to know which database session to use. During initial load one can use a 
global session object, but expired attributes may need to be refreshed at any 
time, when the current session is not necessarily the one that loaded the 
attribute, or its parent object.

For example, consider a TypeDecorator that stores a large number of Cats by ID, 
compressed to save space in the database:

import struct
import zlib

from sqlalchemy import Column, Integer, LargeBinary, Text, TypeDecorator, 
create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import object_session, sessionmaker

Base = declarative_base()
current_session = None


class Cat(Base):
    __tablename__ = 'cat'
    id = Column(Integer, primary_key=True)
    name = Column(Text)


class CompressedCats(TypeDecorator):
    ''' A house can have so many cats that we need to compress them. '''

    impl = LargeBinary
    _struct_protocol_id = struct.Struct(

See - http://www.gam.com/en/Legal/Email+disclosures+EU for further information 
on confidentiality, the risks of non-secure electronic communication, and 
certain disclosures which we are required to make in accordance with applicable 
legislation and regulations. 
If you cannot access this link, please notify us by reply message and we will 
send the contents to you.

GAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and 
use information about you in the course of your interactions with us. 
Full details about the data types we collect and what we use this for and your 
related rights is set out in our online privacy policy at 
https://www.gam.com/en/legal/privacy-policy. 
Please familiarise yourself with this policy and check it from time to time for 
updates as it supplements this notice.
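For reference, here is a minimal, self-contained sketch of the TypeDecorator hooks under discussion (not the original CompressedCats code). Note that process_bind_param()/process_result_value() receive only the value and the Dialect; there is no handle on the owning Session, which is exactly the limitation being reported:

```python
import zlib

from sqlalchemy import LargeBinary
from sqlalchemy.types import TypeDecorator

class CompressedText(TypeDecorator):
    """Compress a string into a binary column."""

    impl = LargeBinary

    def process_bind_param(self, value, dialect):
        # only `value` and `dialect` are available here -- no Session
        return None if value is None else zlib.compress(value.encode("utf-8"))

    def process_result_value(self, value, dialect):
        return None if value is None else zlib.decompress(value).decode("utf-8")

# round-trip without a database, exercising the two hooks directly
td = CompressedText()
stored = td.process_bind_param("a very long cat story", None)
assert td.process_result_value(stored, None) == "a very long cat story"
```

Anything beyond pure value conversion (such as resolving stored primary keys back into ORM objects) has to reach outside these hooks for a session, which is where the thread's problem arises.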



Re: [sqlalchemy] Proper way to handle new 128 character identifier limit in Oracle >= 12.2

2019-05-08 Thread Chris Stephens
I think you would actually want to query v$parameter, since "compatible" can 
be set to a lower version than the server version, which disables 128-character 
identifier lengths.

SQL> select value from v$parameter where name = 'compatible';

VALUE
--
12.2.0
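If it helps, the resulting policy could be sketched as a small helper (hypothetical function name; assumes the value comes from the v$parameter query above):

```python
def oracle_max_identifier_length(compatible: str) -> int:
    """Map Oracle's 'compatible' setting to the identifier length limit.

    `compatible` would be fetched with:
        SELECT value FROM v$parameter WHERE name = 'compatible'
    """
    major_minor = tuple(int(part) for part in compatible.split(".")[:2])
    # 128-byte identifiers require compatible >= 12.2; anything lower
    # keeps the traditional 30-byte limit
    return 128 if major_minor >= (12, 2) else 30

print(oracle_max_identifier_length("12.2.0"))    # 128
print(oracle_max_identifier_length("11.2.0.4"))  # 30
```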

On Thursday, November 8, 2018 at 10:30:19 AM UTC-6, Mike Bayer wrote:
>
> On Thu, Nov 8, 2018 at 11:23 AM 'Van Klaveren, Brian N.' via 
> sqlalchemy > wrote: 
> > 
> > Hi, 
> > 
> > Oracle 12.2 now allows 128 character length identifiers: 
> > 
> > 
> https://docs.oracle.com/en/database/oracle/oracle-database/12.2/newft/new-features.html#GUID-64283AD6-0939-47B0-856E-5E9255D7246B
>  
> > 
> > It'd be great if sqlalchemy knew about this, but what's the proper way 
> of handling this? Just use the existing dialect and monkey patch 
> max_identifier_length or create a new dialect? 
>
> for now monkeypatch, and for the fix, we would need to look at the 
> server version info and modify the variable accordingly, here: 
>
>
> https://github.com/zzzeek/sqlalchemy/blob/master/lib/sqlalchemy/dialects/oracle/base.py#L1087
>  
>
> Pull requests welcome on this or at least we can create an issue in 
> bitbucket to track this. 
>
>
> > 
> > Thanks, 
> > Brian 
> > 
> > 
>



[sqlalchemy] Oracle Bulk inserts returning array of IDs possible?

2019-05-06 Thread Chris Stephens
Is it possible to implement the following behavior in SQLAlchemy against an 
Oracle database? (Bulk inserts returning PK IDs generated by 
trigger/sequence on PK column in database)

#database ddl


DROP TABLE INS_RET_TEST;

CREATE TABLE INS_RET_TEST
   (ID NUMBER NOT NULL ENABLE,
COL1 NUMBER NOT NULL ENABLE,
CONSTRAINT INS_RET_TEST_PK PRIMARY KEY (ID)
USING INDEX);

DROP SEQUENCE INS_RET_TEST_SEQ;

CREATE SEQUENCE  INS_RET_TEST_SEQ  MINVALUE 1 MAXVALUE 9
INCREMENT BY 1 START WITH 1;

CREATE OR REPLACE TRIGGER INS_RET_TEST_TRG
BEFORE INSERT ON INS_RET_TEST
FOR EACH ROW
BEGIN
    <<COLUMN_SEQUENCES>>
    BEGIN
        IF INSERTING AND :NEW.ID IS NULL THEN
            SELECT INS_RET_TEST_SEQ.NEXTVAL INTO :NEW.ID FROM SYS.DUAL;
        END IF;
    END COLUMN_SEQUENCES;
END;
/

ALTER TRIGGER INS_RET_TEST_TRG ENABLE; 

 

#Python
##
 

#!/usr/bin/env python

import cx_Oracle
import time

db = cx_Oracle.connect(dsn="connect_info")
cur = db.cursor()

data = [[1],[2],[3],[4],[5]]

IdArr = cur.var(int, arraysize = len(data))
cur.setinputsizes(None, IdArr)
cur.executemany("insert into ins_ret_test(col1) values (:1) returning id 
into :2", data)

for ix, datum in enumerate(data):
print("Data: " , datum[0], "ID: ", IdArr.getvalue(ix))


If so, can someone point me to the documentation or provide an example? 
I've been unable to find anything. 



[sqlalchemy] Attribute mapped collections

2018-12-19 Thread Chris Wilson
Dear Michael and SQLAlchemy users,

I have what may be a minor bug report, and a feature request.

The possible bug is that creating an instance of a mapped association 
object<https://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html#association-object>
 adds it to an attribute-mapped collection, but using the wrong key. In the 
following example, although the key is a date which is initialised, the key 
used is None instead.

The feature request is for some way to use an association_proxy on such a 
collection to return the instances of the collection (as though it was a 
relationship to the target table), instead of a dictionary from keys to target 
instances. I think it might be possible by supplying a proxy_factory and a 
getset_factory, but I can't find any examples, and it's quite difficult to set 
up in any case.

Here is the sample code illustrating both:

from datetime import date

from sqlalchemy import Column, Date, ForeignKey, Integer, Text, create_engine
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy.orm.collections import attribute_mapped_collection

Base = declarative_base()

class Pet(Base):
    __tablename__ = 'pet'
    id = Column(Integer, primary_key=True)
    name = Column(Text)

class HousePet(Base):
    __tablename__ = 'house_pet'
    id_pet = Column(Integer, ForeignKey(Pet.id), primary_key=True)
    id_house = Column(Integer, ForeignKey("house.id"), primary_key=True)
    date_moved_in = Column(Date)
    pet = relationship(Pet)
    # house backref created by House.date_to_new_pet, not here:
    # house = relationship("House")

class House(Base):
    __tablename__ = 'house'
    id = Column(Integer, primary_key=True)
    address = Column(Text)

    date_to_new_pet = relationship(
        HousePet, cascade='all, delete-orphan',
        collection_class=attribute_mapped_collection("date_moved_in"),
        backref=__tablename__)

    pets = association_proxy('date_to_new_pet', 'pet')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

DBSession = sessionmaker(bind=engine)
session = DBSession(autocommit=True)

downing = House(address="10 Downing Street")
larry = Pet(name="Larry the Cat")
larry_at_home = HousePet(pet=larry, house=downing, date_moved_in=date(2007, 1, 1))

# This should pass, but does not because the collection key was not set:
try:
    assert downing.date_to_new_pet == {larry_at_home.date_moved_in: larry}, str(downing.date_to_new_pet)
except AssertionError as exc:
    print(exc)

# How can we make House.pets be a list-like collection instead of dict-like?
try:
    assert downing.pets == [larry], str(downing.pets)
except AssertionError as exc:
    print(exc)

# It would be nice to be able to update this collection as well:
downing.pets = []
assert downing.date_to_new_pet == {}, str(downing.date_to_new_pet)
downing.pets = [larry]
assert downing.date_to_new_pet == {larry_at_home.date_moved_in: larry}, str(downing.date_to_new_pet)

I would like the latter to work because the documentation for association 
objects<https://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html#association-object>
 warns about the dangers of unsynchronised collections, and suggests using an 
association_proxy instead of a relationship with a secondary table, which works 
fine for normal relationship collections, but is very unnatural for 
attribute-mapped ones.
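Regarding the None-key symptom specifically: one workaround (a sketch, not a fix for the underlying behaviour) is to pass the key attribute before the backref-triggering attribute, on the assumption that the declarative constructor applies keyword arguments in order and the collection key is read at the moment the backref appends:

```python
from datetime import date

from sqlalchemy import Column, Date, ForeignKey, Integer, Text
from sqlalchemy.orm import relationship
from sqlalchemy.orm.collections import attribute_mapped_collection

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Pet(Base):
    __tablename__ = 'pet'
    id = Column(Integer, primary_key=True)
    name = Column(Text)

class HousePet(Base):
    __tablename__ = 'house_pet'
    id_pet = Column(Integer, ForeignKey(Pet.id), primary_key=True)
    id_house = Column(Integer, ForeignKey('house.id'), primary_key=True)
    date_moved_in = Column(Date)
    pet = relationship(Pet)

class House(Base):
    __tablename__ = 'house'
    id = Column(Integer, primary_key=True)
    date_to_new_pet = relationship(
        HousePet,
        collection_class=attribute_mapped_collection('date_moved_in'),
        backref='house')

downing = House()
larry = Pet(name='Larry the Cat')
moved = date(2007, 1, 1)
# date_moved_in is listed before house, so the key attribute is already
# populated when the backref appends to the keyed collection
larry_at_home = HousePet(date_moved_in=moved, pet=larry, house=downing)
assert downing.date_to_new_pet == {moved: larry_at_home}
```

Note the collection values here are the HousePet association objects, not the Pets; mapping dates straight to Pet instances is what the association_proxy part of the request is about.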

Thanks in advance for your consideration.

Cheers, Chris.





Re: [sqlalchemy] Making Metadata bases more independent

2018-12-06 Thread Chris Wilson
Dear Mike,

Thanks for the very quick implementation! I didn't realise that people 
would make relationships between different Bases, as I assumed that people 
would use these to represent different databases (as we do), between which 
there can be no (enforced) relationships. Then it wouldn't matter if one 
was configured and another was not, because they could never reference each 
other. Do people really use multiple Bases that way? How about Metadatas?

The event hook implementation would probably work fine for us, as we can 
temporarily disable reconfiguration of mappers while loading source files. 
Are you looking for someone else to implement the tests? It should be easy 
because we already have a reproducible test case.

Thanks, Chris.

On Wednesday, 5 December 2018 20:24:09 UTC, Mike Bayer wrote:
>
> On Wed, Dec 5, 2018 at 11:04 AM Chris Wilson 
> > wrote: 
> > 
> > Dear Mr Bayer and SQLAlchemy users, 
> > 
> > 
> > 
> > We have an issue which probably counts as a feature request rather than 
> a bug, because it’s sufficiently esoteric. We need to be able to use some 
> Mappers (in one Metadata) while other mappers (in a different Metadata) are 
> not and cannot be completely configured. This is because we fetch code 
> (including Mapper source files) from the database using SQLAlchemy (with a 
> different Metadata/declarative_base), and it’s possible that we are in the 
> middle of loading Mappers when we need to fetch another source file from 
> the database to complete the configuration. 
> > 
> > 
> > 
> > Here is an example that reproduces the problem that we see: 
> > 
> > 
> > 
> > from sqlalchemy import Column, ForeignKey, Integer, Table, Text, 
> create_engine, event 
> > 
> > from sqlalchemy.ext.declarative import declarative_base, declared_attr 
> > 
> > from sqlalchemy.orm import Mapper, configure_mappers, relationship, 
> sessionmaker 
> > 
> > from sqlalchemy.orm.mapper import _mapper_registry 
> > 
> > 
> > 
> > _mapper_registry.clear() 
> > 
> > 
> > 
> > FirstBase = declarative_base() 
> > 
> > AnotherBase = declarative_base() 
> > 
> > 
> > 
> > class Widget(FirstBase): 
> > 
> > __tablename__ = 'widget' 
> > 
> > id = Column(Integer, primary_key=True) 
> > 
> > name = Column(Text) 
> > 
> > 
> > 
> > class Animal(AnotherBase): 
> > 
> > __tablename__ = 'mammal' 
> > 
> > 
> > 
> > id = Column(Integer, primary_key=True) 
> > 
> > name = Column(Text) 
> > 
> > species = Column(Text) 
> > 
> > 
> > 
> > @declared_attr 
> > 
> > def __mapper_args__(cls): 
> > 
> > return { 
> > 
> > 'polymorphic_on':   cls.species, 
> > 
> >'polymorphic_identity': cls.__name__, 
> > 
> > } 
> > 
> > 
> > 
> > # Register the first classes and create their Mappers: 
> > 
> > configure_mappers() 
> > 
> > 
> > 
> > # Simulate dynamic loading of an additional mapped class, which refers 
> to one that has not been loaded yet: 
> > 
> > class Mammal(Animal): 
> > 
> > # This mapping has an error, and therefore cannot be configured: 
> > 
> > employer = relationship("Employer") 
> > 
> > 
> > 
> > # These new classes should not be configured at this point: 
> > 
> > unconfigured = [m for m in _mapper_registry if not m.configured] 
> > 
> > assert len(unconfigured) == 1, str(unconfigured) 
> > 
> > 
> > 
> > # Now try to query Widget, which is registered in FirstBase, which is 
> internally consistent: 
> > 
> > 
> > 
> > engine = create_engine('sqlite://') 
> > 
> > FirstBase.metadata.create_all(engine) 
> > 
> > 
> > 
> > DBSession = sessionmaker(bind=engine) 
> > 
> > session = DBSession(autocommit=True) 
> > 
> > 
> > 
> > # We don't expect this query to fail because there is an error with a 
> mapper in AnotherBase, but it does: 
> > 
> > widgets = session.query(Widget).all() 
> > 
> > 
> > 
> > assert len(widgets) == 0 
> > 
> > print("done!") 
> > 
> > 
> > 
> > Which fails with the following exception, when it tries to query Widget 
> at the end: 
> > 
> > 
> > 
> > Traceback (most recent call last): 
> > 
> >   File 
> python36\lib\site-packages\sqlalchemy-1.2.7-py3.6-win

[sqlalchemy] Making Metadata bases more independent

2018-12-05 Thread Chris Wilson
  File 
python36\lib\site-packages\sqlalchemy-1.2.7-py3.6-win-amd64.egg\sqlalchemy\orm\relationships.py,
  line 1712, in _process_dependent_arguments
self.target = self.mapper.mapped_table
  File 
python36\lib\site-packages\sqlalchemy-1.2.7-py3.6-win-amd64.egg\sqlalchemy\util\langhelpers.py,
  line 767, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
  File 
python36\lib\site-packages\sqlalchemy-1.2.7-py3.6-win-amd64.egg\sqlalchemy\orm\relationships.py,
  line 1628, in mapper
argument = self.argument()
  File 
python36\lib\site-packages\sqlalchemy-1.2.7-py3.6-win-amd64.egg\sqlalchemy\ext\declarative\clsregistry.py,
  line 293, in __call__
(self.prop.parent, self.arg, n.args[0], self.cls)
sqlalchemy.exc.InvalidRequestError: When initializing mapper 
Mapper|Mammal|mammal, expression 'Employer' failed to locate a name ("name 
'Employer' is not defined"). If this is a class name, consider adding this 
relationship() to the  class after both dependent 
classes have been defined.

I think the problem is that Mapper._new_mappers is global, so 
configure_mappers() is called even though there are no new mappers for this 
declarative_base (FirstBase), where I don't think it should be necessary.

I imagine that the fix would be something like moving _new_mappers onto the 
Metadata itself, but I don't know how difficult it would be. Is this something 
that you would consider?

Alternatively, a simpler workaround for our case, where we do not need 
polymorphism on the Widget class, would be to short-circuit 
_with_polymorphic_mappers() in such cases, so that it doesn't need to call 
configure_mappers at all.

Thanks, Chris.



This email is confidential. If you are not the intended recipient, please 
advise us immediately and delete this message. 
The registered name of Cantab- part of GAM Systematic is Cantab Capital 
Partners LLP. 
See - http://www.gam.com/en/Legal/Email+disclosures+EU for further information 
on confidentiality, the risks of non-secure electronic communication, and 
certain disclosures which we are required to make in accordance with applicable 
legislation and regulations. 
If you cannot access this link, please notify us by reply message and we will 
send the contents to you.

GAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and 
use information about you in the course of your interactions with us. 
Full details about the data types we collect and what we use this for and your 
related rights is set out in our online privacy policy at 
https://www.gam.com/en/legal/privacy-policy. 
Please familiarise yourself with this policy and check it from time to time for 
updates as it supplements this notice.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] context manager for session lifecycle

2018-11-25 Thread Chris Withers

On 26/11/2018 06:15, Mike Bayer wrote:

On Sun, Nov 25, 2018 at 12:55 PM Chris Withers  wrote:


  > So at the moment you can still say this:
  >
  >  with session.transaction:
  >
  > which will do the commit and rollback but not the close().

 Potentially silly question: when is it useful to commit the session but
 not close it?


the Session is maintaining the scopes of many different objects, which
are, a Connection, a transaction, and all the objects you're dealing
with.  It's pretty common folks want to commit their transaction and
then afterward keep working with all the objects they have attached to
the session to load more data or commit more changes.


Gotcha, so:

session = Session()
...
session.commit() / session.rollback() / savepoints, etc
...
session.close()

cheers,

Chris



Re: [sqlalchemy] context manager for session lifecycle

2018-11-25 Thread Chris Withers

On 18/11/2018 23:38, Mike Bayer wrote:
On Sun, Nov 18, 2018, 6:22 PM Chris Withers wrote:


 >> Does SQLAlchemy provide a context manager that handles the session
 >> lifecycle described here?
 >>

https://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#managing-transactions
 >>
 >
 > I guess that documentation should be updated, that yes that workflow
 > is available, it's documented in terms of using "begin()" with
 > autocommit mode:
 >
 >

https://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#session-autocommit

That chunk is a little confusing, I wish there was a (default) mode
that
said "start a transaction when I ask for one, don't have one open
otherwise" - "idle in transaction" does make Postgres DBAs sad...

This is how all dbapi drivers work.  There's no begin method.   The 
Session won't leave you idle in transaction because it doesn't start 
working on a connection (and hence start a transaction) until you start 
working with it.


Ah, okay.

So what happens if you run SQL on your Session if you haven't called 
"begin()"?  Raise an error?


That would be fine :-)

 I would say, when you make your 
Session(), that's the start of your work in a deferToThread


Yep, using the context manager I snipped out above is working fine. 
(although I mistakenly tried to have the context manager wrapped 
*around* the deferToThread, and yeah, that turns out to end pretty badly 
;-) - not had Python segfault on me in a while!)



  > So at the moment you can still say this:
 >
 >      with session.transaction:
 >
 > which will do the commit and rollback but not the close().

Potentially silly question: when is it useful to commit the session but
not close it?


Still curious about this, guess I could RTSL or find the bit of the 
manual, but I'd prefer to not misinterpret either!


cheers,

Chris



Re: [sqlalchemy] context manager for session lifecycle

2018-11-18 Thread Chris Withers

On 16/11/2018 01:09, Mike Bayer wrote:

Does SQLAlchemy provide a context manager that handles the session
lifecycle described here?
https://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#managing-transactions

I mean, it should be as simple as the following, right?

@contextmanager
def transaction(session):
  session.begin()
  try:
  yield
  session.commit()
  except:
  session.rollback()
  raise
  finally:
  session.close()

...I'd just prefer not to copy and paste that across libraries and
projects ;-)


I guess that documentation should be updated, that yes that workflow
is available, it's documented in terms of using "begin()" with
autocommit mode:

https://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#session-autocommit


That chunk is a little confusing, I wish there was a (default) mode that 
said "start a transaction when I ask for one, don't have one open 
otherwise" - "idle in transaction" does make Postgres DBAs sad...



I see your example also uses begin(), but that's not what we have at
https://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#managing-transactions,


Right, I guess my knowledge of begin() is hazier than I thought...


that's a Session without the begin() as it's not in autocommit.
Which I don't want people to use :)


I don't want autocommit, but I do want an explicit start and end of 
transaction, and indeed, session, particularly in my current project as 
all the SQLA stuff has to be done in a twisted deferToThread ;-)



So at the moment you can still say this:

 with session.transaction:

which will do the commit and rollback but not the close().


Potentially silly question: when is it useful to commit the session but 
not close it?



there's kind of too many options with the Session right now to make
this one size fits all unless you have suggestions.


Indeed, would you believe after almost 10 years, I think sessionmaker() 
finally made sense to my brain?


cheers,

Chris



[sqlalchemy] automated table documentation

2018-11-15 Thread Chris Frey
It occurs to me that if it were possible to add documentation as an
argument to Column(), it would be possible to use the sqlalchemy
meta data to generate pretty end-user programmer documentation for
tables.

I see a 'doc' argument already exists in Column.

Are there tools that make use of this?  Has this already been done?

Thanks,
- Chris
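Since `Column(doc=...)` is just stored on the column object, a minimal generator is easy to sketch by walking the `MetaData`. The table and column names below are invented purely for illustration; nothing here is an official SQLAlchemy tool:

```python
from sqlalchemy import Column, Integer, MetaData, Table, Text

# A toy schema whose columns carry doc strings (names are made up).
metadata = MetaData()
dog = Table(
    'dog', metadata,
    Column('id', Integer, primary_key=True, doc="Surrogate primary key"),
    Column('name', Text, doc="The dog's display name"),
)

def document_tables(metadata):
    """Render each table's columns and their doc strings as plain text."""
    lines = []
    for table in metadata.sorted_tables:
        lines.append(table.name)
        lines.append('=' * len(table.name))
        for column in table.columns:
            lines.append('%s (%s): %s' % (
                column.name, column.type, column.doc or 'undocumented'))
        lines.append('')
    return '\n'.join(lines)

print(document_tables(metadata))
```

The same walk could emit reStructuredText or Markdown instead of plain text, which would plug straight into Sphinx-style docs.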



[sqlalchemy] context manager for session lifecycle

2018-11-15 Thread Chris Withers

Hi All,

I'm sure I've asked this before, but a google through the archives 
couldn't find it.


Does SQLAlchemy provide a context manager that handles the session 
lifecycle described here?

https://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#managing-transactions

I mean, it should be as simple as the following, right?

@contextmanager
def transaction(session):
    session.begin()
    try:
    yield
    session.commit()
    except:
    session.rollback()
    raise
    finally:
    session.close()

...I'd just prefer not to copy and paste that across libraries and 
projects ;-)


I remember Mike having good reasons against this, but I can't recall them, 
so what would be bad about making sessions work like this:


with session:
   ...

...and have that do the above?
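For what it's worth, the desired behaviour is easy to prototype against a stub session. The `FakeSession` below is purely illustrative (it only records which lifecycle methods get called; it is not a SQLAlchemy API), but it shows that the context manager commits on success and rolls back on error, closing either way:

```python
from contextlib import contextmanager

class FakeSession:
    """Stand-in that records lifecycle calls; not a real SQLAlchemy Session."""
    def __init__(self):
        self.calls = []
    def begin(self):
        self.calls.append('begin')
    def commit(self):
        self.calls.append('commit')
    def rollback(self):
        self.calls.append('rollback')
    def close(self):
        self.calls.append('close')

@contextmanager
def transaction(session):
    session.begin()
    try:
        yield
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()

# Happy path: begin -> commit -> close
ok = FakeSession()
with transaction(ok):
    pass
assert ok.calls == ['begin', 'commit', 'close']

# Error path: begin -> rollback -> close, and the exception propagates
bad = FakeSession()
try:
    with transaction(bad):
        raise RuntimeError('boom')
except RuntimeError:
    pass
assert bad.calls == ['begin', 'rollback', 'close']
```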

cheers,

Chris



Re: [sqlalchemy] Using before_update and before_insert events listeners with PostgreSQL upserts

2018-10-26 Thread Chris Johnson
Unfortunately this is very write-heavy.

What would be cheaper for the database server, using this merge method or 
manually doing an exists query and handling it myself?

For reference, what I'm trying to do is save edit history to a field in the 
row. It works roughly like this:

Columns: ID (PK), author, messageBody, timestamp, editedTimestamp, 
editHistory

editHistory is a JSONB field containing a list of dicts containing partial 
rows that are archives of previous versions. So it would look something 
like this in the JSONB data:

[
{
"author": author,
"messagebody": messageBody,
"editedTimestamp": editedTimestamp,
}
]


1. Attempt insert
2. If ID PK exists, append a dict of the editable fields to the JSON list 
in editHistory
3. Update Row
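In plain Python, step 2 — archiving the current editable fields before overwriting them — might look like the sketch below. The field names follow the columns above; `apply_edit` is a hypothetical helper, not part of any library:

```python
EDITABLE_FIELDS = ('author', 'messageBody', 'editedTimestamp')

def apply_edit(row, new_values):
    """Append the current editable fields to editHistory, then update the row.

    `row` is a dict standing in for the existing database row; `new_values`
    holds the incoming edit.  Mirrors the JSONB list-of-dicts layout above.
    """
    archive = {field: row[field] for field in EDITABLE_FIELDS}
    row.setdefault('editHistory', []).append(archive)
    row.update(new_values)
    return row

row = {'ID': 1, 'author': 'alice', 'messageBody': 'hello',
       'timestamp': 100, 'editedTimestamp': None, 'editHistory': []}
apply_edit(row, {'messageBody': 'hello, world', 'editedTimestamp': 101})
```

Whether this runs in Python (via an exists query or `Session.merge`) or inside the database (via a JSONB concatenation in the `ON CONFLICT DO UPDATE` clause) is the cost trade-off being asked about.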

On Monday, October 22, 2018 at 1:30:04 AM UTC-7, Simon King wrote:
>
> I think the problem is that you are trying to use ORM-level events 
> (https://docs.sqlalchemy.org/en/latest/orm/events.html), but you 
> aren't using the ORM to insert your data. Those events would fire if 
> you created a Session, added your Message objects to that Session, and 
> then flushed it. You are bypassing the ORM and using the insert() 
> construct directly, so the events don't fire. 
>
> Unfortunately I don't think there's any way right now to use "ON 
> CONFLICT DO UPDATE" from the ORM. There is Session.merge 
> (
> https://docs.sqlalchemy.org/en/latest/orm/session_state_management.html#merging),
>  
>
> but that's an in-python check to see if the record already exists, 
> rather than allowing the DB to do it. It might work for you, depending 
> on how fast you need to process these messages. 
>
> Simon 
>
> On Fri, Oct 19, 2018 at 6:32 PM Chris Johnson  > wrote: 
> > 
> > Python Version: 3.6.6 
> > SQLAlchemy version: 1.2.12 
> > PostgreSQL version: 10.5 
> > 
> > This is my first time using PGSQL over MySQL or SQLite and my first 
> project using SQLAlchemy over SQLite or the Django ORM. I'm also not very 
> good at programming in general so please forgive the simple questions. 
> > 
> > I have a script that I'm using to record incoming messages from an API 
> and/or a websocket, including edits to messages. The script is using 
> upserts to save data to the database. I tried adding an event listeners for 
> updates and inserts but the event listeners are not firing off their 
> function when a message is inserted or updated. Here is the test code I'm 
> working with: 
> > 
> > class User(newBase): 
> > __tablename__ = 'user' 
> > __table_args__ = {'autoload':True} 
> > class Message(newBase): 
> > __tablename__ = 'message' 
> > __table_args__ = {'autoload':True} 
> > class Lobby(newBase): 
> > __tablename__ = 'lobby' 
> > __table_args__ = {'autoload':True} 
> > class Community(newBase): 
> > __tablename__ = 'community' 
> > __table_args__ = {'autoload':True} 
> > 
> > 
> > for table in tables: 
> > for row in _yield_limit(olddb.query(table), table.frame): 
> > logging.debug(row.frame) 
> > messagedata = row.raw 
> > message = wsrsi.sortframe(messagedata) 
> > if message[0] == 'message': 
> > message = message[1] 
> > if message.iserased == True: 
> > eraser = message.erasedby 
> > eraserid = eraser.id 
> > userInsert(eraser) 
> > else: 
> >  eraser = None 
> >  eraserid = None 
> > userInsert(message.author) 
> > 
> > 
> > messageins = insert(Message).values( 
> > id = message.id, 
> > iserased = message.iserased, 
> > erasedby = eraserid, 
> > author = message.author.id, 
> > body = message.body, 
> > private = message.isprivate, 
> > created = convTime(message.timecreated), 
> > edited = convTime(message.timeedited), 
> > lobby = message.lobbyid, 
> > usermentions = message.mentions, 
> > ) 
> > 
> > message_dict = { 
> > c.name: c 
> > for c in messageins.excluded 
> > } 
> > 
> > 
> > update_message = messageins.on_conflict_do_update( 
> > index_elements = ["id"], 
> > set_=message_dict 
> > ) 
> > 
> > 
> > try: 
> 

[sqlalchemy] Using before_update and before_insert events listeners with PostgreSQL upserts

2018-10-19 Thread Chris Johnson
Python Version: 3.6.6
SQLAlchemy version: 1.2.12
PostgreSQL version: 10.5

This is my first time using PGSQL over MySQL or SQLite and my first project 
using SQLAlchemy over SQLite or the Django ORM. I'm also not very good at 
programming in general so please forgive the simple questions.

I have a script that I'm using to record incoming messages from an API 
and/or a websocket, including edits to messages. The script is using 
upserts to save data to the database. I tried adding event listeners for 
updates and inserts but the event listeners are not firing off their 
function when a message is inserted or updated. Here is the test code I'm 
working with:

class User(newBase):
    __tablename__ = 'user'
    __table_args__ = {'autoload': True}

class Message(newBase):
    __tablename__ = 'message'
    __table_args__ = {'autoload': True}

class Lobby(newBase):
    __tablename__ = 'lobby'
    __table_args__ = {'autoload': True}

class Community(newBase):
    __tablename__ = 'community'
    __table_args__ = {'autoload': True}


for table in tables:
    for row in _yield_limit(olddb.query(table), table.frame):
        logging.debug(row.frame)
        messagedata = row.raw
        message = wsrsi.sortframe(messagedata)
        if message[0] == 'message':
            message = message[1]
            if message.iserased == True:
                eraser = message.erasedby
                eraserid = eraser.id
                userInsert(eraser)
            else:
                eraser = None
                eraserid = None
            userInsert(message.author)

            messageins = insert(Message).values(
                id = message.id,
                iserased = message.iserased,
                erasedby = eraserid,
                author = message.author.id,
                body = message.body,
                private = message.isprivate,
                created = convTime(message.timecreated),
                edited = convTime(message.timeedited),
                lobby = message.lobbyid,
                usermentions = message.mentions,
            )

            message_dict = {
                c.name: c
                for c in messageins.excluded
            }

            update_message = messageins.on_conflict_do_update(
                index_elements = ["id"],
                set_ = message_dict
            )

            try:
                newdb.execute(update_message)
            except IntegrityError as e:
                print('Foreign key violation error, logging.')
                with open('fkerrors.txt', 'a+') as f:
                    f.write(str(e) + '\n')
                    f.write(str(message.raw))
                    f.write('\n\n')
            except ValueError as e:
                if str(e) == 'A string literal cannot contain NUL (0x00) characters.':
                    print('Null character found in message')
                    with open('nullerrors.txt', 'a+') as f:
                        f.write(str(e) + '\n')
                        f.write(str(message.raw))
                        f.write('\n\n')

For reference, `table` is an array of tables from an old MySQL database 
I've inherited that I'm migrating to PGSQL.

This is my event listener code:

@listens_for(Message, 'before_update')
def saveHistory(mapper, connect, target):
    historicaldata = dict(target)
    print("SaveHistory triggered\n")


event.listen(Message, 'before_insert', saveHistory)
event.listen(Message, 'before_update', saveHistory)

I've tried arranging this in several different ways including using the 
decorator on the table classes but no matter how I structure this the event 
listener does not fire on insert. I've done a lot of searching but I 
haven't been able to find any information on using event listeners 
specifically with PGSQL upserts so I don't know if the problem is with my 
code or if it's just not supported when using upserts. Am I doing something 
wrong?



[sqlalchemy] Objects wrongly persistent after rollback

2018-08-24 Thread Chris Wilson
Dear Michael,

I have discovered that under very particular circumstances, SQLAlchemy forgets 
that an object was newly created in a transaction, and if that transaction 
rolls back, the object's state is not reset from persistent to transient.

It seems to rely on the following:


* Open an outer transaction.

* Open an inner transaction, add the object to the session, and close the 
transaction to commit. The object is now persistent.

* Clear all references to the object, allowing it to be GC'ed and 
removed from the session._new weakref map.

* Load the object from the database again.

* Raise an exception to roll back the outer transaction.

* SQLAlchemy does not realise that the object is new, and fails to reset 
its state.

* The object is still "persistent", but no longer in the database.

Here is an example that reproduces the problem:

from sqlalchemy import Column, Integer, Text, create_engine, inspect
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Dog(Base):
    __tablename__ = 'dog'

    id = Column(Integer, primary_key=True)
    name = Column(Text)

def create():
    global session
    with session.begin(nested=True):
        fido = Dog(name="fido")
        session.add(fido)
        assert not inspect(fido).persistent, "should not yet be persisted to database"
    assert inspect(fido).persistent, "should now be persisted to database, but transaction not yet committed"

def fetch():
    global session
    global fido
    fido = session.query(Dog).filter_by(name="fido").one()
    assert inspect(fido).persistent, "should have been retrieved from the database, therefore persistent"

def main():
    engine = create_engine('sqlite://')
    engine.echo = True
    Base.metadata.create_all(engine)

    DBSession = sessionmaker(bind=engine)

    global session
    session = DBSession(autocommit=True)

    try:
        with session.begin(subtransactions=True) as trans:
            create()
            print(dict(trans._new))
            fetch()
            print(dict(trans._new))
            raise Exception("force transaction to roll back")
    except Exception:
        pass

    global fido
    assert not inspect(fido).persistent, "after rollback, should no longer be in the database"

if __name__ == '__main__':
    main()

Which fails the last assertion:

AssertionError: after rollback, should no longer be in the database

I know that this is a minor issue, but I found it quite confusing while 
debugging.

I suspect that we might need to keep a strongly referenced list of "newly 
added" PKs in the outer session, instead of using _new.
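The weak-reference failure mode can be demonstrated without SQLAlchemy at all: once the last strong reference to an object goes away, its entry silently vanishes from a weak map, which mirrors what happens to the entry tracking the "new" object here (the names below are illustrative only):

```python
import gc
import weakref

class Record:
    """Stand-in for a newly-added ORM object."""

new = weakref.WeakValueDictionary()  # analogous to the session's weak _new map

obj = Record()
new['pk-1'] = obj
assert 'pk-1' in new  # tracked while a strong reference exists

del obj        # drop the only strong reference...
gc.collect()   # ...and after collection the weak entry is gone
assert 'pk-1' not in new
```

A strongly referenced list of newly-added primary keys, as suggested, would survive this because it does not depend on the object itself staying alive.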

Thanks, Chris.






Re: [sqlalchemy] Serialization / De-serialization for SQLAlchemy Declarative ORM

2018-07-11 Thread chris . modzelewski
Hi Mike,

Thanks - I really appreciate it! 

And thanks again for all of the effort you've put into SQLAlchemy! It is a 
"must have" in most of the Python projects I work on these days, and I 
regularly sing its praises to my clients.

All the best,
Chris

On Wednesday, July 11, 2018 at 6:26:55 PM UTC-4, Mike Bayer wrote:
>
> Hi Chris - 
>
> congratulations!   this looks extremely well documented and it's clear 
> you put an enormous amount of effort into it!I will definitely 
> point users your way when they ask for this kind of thing.   Looks 
> great. 
>
> - mike 
>
>
>
> On Wed, Jul 11, 2018 at 4:00 PM,   > wrote: 
> > Hi Folks, 
> > 
> > First off, I'd just like to thank everyone here for all of the awesome 
> work 
> > you've been doing on SQLAlchemy. I've been using the library for years, 
> and 
> > it is a truly fantastic piece of code that I rely on literally every 
> day. 
> > Thank you for all the work that you've done and are doing - it is deeply 
> > appreciated. 
> > 
> > Some time ago, I was thinking about how I might be able to give back to 
> the 
> > SQLAlchemy community, and I realized that I often find myself extending 
> my 
> > SQLAlchemy base models with serialization and de-serialization 
> functionality 
> > (usually to/from JSON and CSV, sometimes YAML or XML). And since it's 
> the 
> > type of thing I've done numerous times in various API and data 
> > science-related projects, I thought it probably makes sense to abstract 
> the 
> > functionality out and package the logic for re-use. 
> > 
> > Separation of concerns being important, I don't think this is 
> functionality 
> > that would be good to build into SQLAlchemy's codebase directly, so 
> instead 
> > I wrote it as a new library I'm calling SQLAthanor which I've just 
> released 
> > to PyPi. 
> > 
> > It works as a drop-in replacement for parts of the SQLAlchemy ORM 
> > (particularly Declarative) that extends models with 
> > serialization/de-serialization support, granting the ability to: 
> > 
> > serialize/de-serialize to/from CSV, JSON, YAML, and Python dict 
> > serialize/de-serialize columns, relationships, hybrid properties, 
> > association proxies, and Python @property attributes 
> > enable/disable serialization for particular attribute/format 
> combinations 
> > execute pre/post-processing functions on inbound/outbound data 
> > 
> > Since this is a new project, v.0.1.0 is only released on PyPi in beta at 
> the 
> > moment. I've got - I think - pretty extensive tests written, but as we 
> all 
> > know, database connectivity is all about edge cases. As it stands, my 
> test 
> > matrix covers Python 2.7, 3.4, 3.5, and 3.6 with versions of SQLAlchemy 
> > going all the way back to 0.9. 
> > 
> > If you're interested, I'd be incredibly grateful for any thoughts, 
> feedback, 
> > suggestions, wish lists, or questions you might have. And (if you do try 
> to 
> > use SQLAthanor in a project) any and all issues you stumble across would 
> > also be incredibly helpful. 
> > 
> > Here are the relevant links if you'd like to take a look: 
> > 
> > DOCUMENTATION: https://sqlathanor.readthedocs.io/en/latest (I hope 
> > comprehensive!) 
> > 
> > PYPI: https://pypi.org/project/sqlathanor/ 
> > 
> > GITHUB: https://github.com/insightindustry/sqlathanor 
> > 
> > Thanks in advance, and I look forward to any thoughts or perspectives 
> you 
> > might have! 
> > 
> > All the best, 
> > Chris 
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> > Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> > description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> Groups 
> > "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an 
> > email to sqlalchemy+...@googlegroups.com . 
> > To post to this group, send email to sqlal...@googlegroups.com 
> . 
> > Visit this group at https://groups.google.com/group/sqlalchemy. 
> > For more options, visit https://groups.google.com/d/optout. 
>



[sqlalchemy] Serialization / De-serialization for SQLAlchemy Declarative ORM

2018-07-11 Thread chris . modzelewski
Hi Folks,

First off, I'd just like to thank everyone here for all of the awesome work 
you've been doing on SQLAlchemy. I've been using the library for years, and 
it is a truly fantastic piece of code that I rely on literally every day. 
Thank you for all the work that you've done and are doing - it is deeply 
appreciated.

Some time ago, I was thinking about how I might be able to give back to the 
SQLAlchemy community, and I realized that I often find myself extending my 
SQLAlchemy base models with serialization and de-serialization 
functionality (usually to/from JSON and CSV, sometimes YAML or XML). And 
since it's the type of thing I've done numerous times in various API and 
data science-related projects, I thought it probably makes sense to 
abstract the functionality out and package the logic for re-use.

Separation of concerns being important, I don't think this is functionality 
that would be good to build into SQLAlchemy's codebase directly, so instead 
I wrote it as a new library I'm calling *SQLAthanor* which I've just 
released to PyPi.

It works as a drop-in replacement for parts of the SQLAlchemy ORM 
(particularly Declarative) that extends models with 
serialization/de-serialization support, granting the ability to:

   - serialize/de-serialize to/from CSV, JSON, YAML, and Python dict
   - serialize/de-serialize columns, relationships, hybrid properties, 
   association proxies, and Python @property attributes
   - enable/disable serialization for particular attribute/format 
   combinations
   - execute pre/post-processing functions on inbound/outbound data

Since this is a new project, *v.0.1.0* is only released on PyPi in beta at 
the moment. I've got - I think - pretty extensive tests written, but as we 
all know, database connectivity is all about edge cases. As it stands, my 
test matrix covers Python 2.7, 3.4, 3.5, and 3.6 with versions of 
SQLAlchemy going all the way back to 0.9.

If you're interested, I'd be incredibly grateful for any thoughts, 
feedback, suggestions, wish lists, or questions you might have. And (if you 
do try to use SQLAthanor in a project) any and all issues you stumble 
across would also be incredibly helpful.

Here are the relevant links if you'd like to take a look:

*DOCUMENTATION:* https://sqlathanor.readthedocs.io/en/latest (I hope 
comprehensive!)

*PYPI: *https://pypi.org/project/sqlathanor/

*GITHUB: *https://github.com/insightindustry/sqlathanor

Thanks in advance, and I look forward to any thoughts or perspectives you 
might have!

All the best,
Chris



Re: [sqlalchemy] adding entirely custom constraints

2018-02-01 Thread Chris Withers

awesome, thanks!

On 01/02/2018 15:50, Mike Bayer wrote:

On Thu, Feb 1, 2018 at 3:27 AM, Chris Withers <ch...@withers.org> wrote:

Hi,

So, I need to add an exclude constraint to a postgres table which has a
boolean column, but:

ProgrammingError: (psycopg2.ProgrammingError) data type boolean has no
default operator class for access method "gist"
HINT:  You must specify an operator class for the index or define a default
operator class for the data type.

Sounds like the way to do this is:

https://www.postgresql.org/message-id/OfficeNetEmail.2b9.8648e2b7c49ce986.144ee6b67a2%40prod2

create table xx(b bool, tr tsrange, exclude using gist((b::int) with =, tr
with &&));

That, unfortunately, makes SQLAlchemy's ExcludeConstraint implementation sad:

sqlalchemy/util/_collections.py", line 194, in __getitem__
 return self._data[key]
KeyError: '(best::int)'

Before I go and submit a patch to ExcludeConstraint to support expressions
rather than just columns (that looks like a lot of work :-/), how can I just
add a constraint that I supply as text and put in __table_args__ somehow?

just DDL() as always
http://docs.sqlalchemy.org/en/latest/core/ddl.html?highlight=ddl#custom-ddl
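For anyone landing here later, a minimal sketch of that custom-DDL route, using the table and constraint text from the thread (this assumes a recent SQLAlchemy where DDL and event are importable from the top-level package; the constraint only executes against a real PostgreSQL database, but the wiring itself is plain metadata code):

```python
from sqlalchemy import Boolean, Column, DDL, MetaData, Table, event
from sqlalchemy.dialects.postgresql import TSRANGE

metadata = MetaData()
xx = Table(
    "xx",
    metadata,
    Column("b", Boolean),
    Column("tr", TSRANGE),
)

# Emit the textual constraint right after CREATE TABLE, since
# ExcludeConstraint (at the time of this thread) could not express (b::int).
exclude_ddl = DDL(
    "ALTER TABLE xx ADD EXCLUDE USING gist ((b::int) WITH =, tr WITH &&)"
)
event.listen(xx, "after_create", exclude_ddl)
```

Calling metadata.create_all(engine) against PostgreSQL then creates the table and runs the ALTER TABLE in one go.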





cheers,

Chris





[sqlalchemy] adding entirely custom constraints

2018-02-01 Thread Chris Withers

Hi,

So, I need to add an exclude constraint to a postgres table which has a 
boolean column, but:


ProgrammingError: (psycopg2.ProgrammingError) data type boolean has no 
default operator class for access method "gist"
HINT:  You must specify an operator class for the index or define a 
default operator class for the data type.


Sounds like the way to do this is:

https://www.postgresql.org/message-id/OfficeNetEmail.2b9.8648e2b7c49ce986.144ee6b67a2%40prod2

create table xx(b bool, tr tsrange, exclude using gist((b::int) with =, 
tr with &&));


That, unfortunately, makes SQLAlchemy's ExcludeConstraint implementation sad:

sqlalchemy/util/_collections.py", line 194, in __getitem__
    return self._data[key]
KeyError: '(best::int)'

Before I go and submit a patch to ExcludeConstraint to support 
expressions rather than just columns (that looks like a lot of work 
:-/), how can I just add a constraint that I supply as text and put in 
__table_args__ somehow?


cheers,

Chris



[sqlalchemy] finding all tables with foreign keys to a model

2018-01-29 Thread Chris Withers

Hi All,

How can I introspect from a declaratively mapped model all the other 
models/tables that have foreign keys to it?

This seems like something the ORM layer must know about...

cheers,

Chris
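One way to answer this with plain metadata introspection — a sketch, not an official API, assuming all the models share one MetaData (the Parent/Child models here are made up for illustration):

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))

def referring_tables(model):
    """Names of all tables holding a foreign key to *model*'s table."""
    target = model.__table__
    return {
        fk.parent.table.name            # fk.parent is the referencing Column
        for table in model.metadata.tables.values()
        for fk in table.foreign_keys
        if fk.column.table is target    # fk.column is the referenced Column
    }
```

Mapping table names back to model classes is then a lookup through the registry, but the FK graph itself lives entirely in the Table metadata.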



Re: unit testing migration code

2017-12-14 Thread Chris Withers
Has anyone done anything like an equivalent of the following but for 
Alembic?


https://github.com/plumdog/django_migration_testcase

cheers,

Chris

On 01/12/2017 15:06, Mike Bayer wrote:

it's kind of a PITA but in Openstack we have fixtures which actually
run all the alembic (or sqlalchemy-migrate) migrations on a new
database.  Some of the more ambitious projects even write unit tests
in between each migration that use inspect() to check that the
database state is what's expected.

so to do things like that, you need a fixture which can:

1. create a new database (you probably need to produce a randomized
name for concurrency)
2. invoke alembic to each revision individually (you can do this
through alembic API:
http://alembic.zzzcomputing.com/en/latest/api/commands.html
3. have a dispatch which can call upon test cases linked to that rev,
like "def test_aabbccddee_does_thing_one()"
4. drops the database


and...that's how you do it !



On Thu, Nov 30, 2017 at 1:54 PM, Chris Withers <ch...@simplistix.co.uk> wrote:

Hi All,

How would I add test coverage for this sort of code?

https://coveralls.io/builds/14408741/source?filename=mortar_mixins%2Fmigrations.py

cheers,

Chris



--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [PossibleSpam][5.0] Re: [sqlalchemy] Concise, Pythonic query syntax

2017-11-06 Thread Chris Withers

Fantastic, thanks!

On 06/11/2017 22:46, Jones, Bryan wrote:

All,

I've just updated the package to use the MIT license.

Bryan

On Mon, Nov 6, 2017 at 2:22 PM, Jones, Bryan <bjo...@ece.msstate.edu> wrote:


Chris,

I'm open to BSD or MIT as well. Looking, I see that SQLAlchemy is
MIT-license, so I can re-license it to that.

Bryan

On Mon, Nov 6, 2017 at 12:25 PM, Chris Withers <ch...@withers.org> wrote:

Great looking library, shame about the license.

You particularly attached to GPL3 or would you be amenable to
    BSD or MIT?

Chris

On 03/11/2017 21:52, Bryan Jones wrote:

All,

I've just released the pythonic_sqlalchemy_query package on
PyPI, which provides concise, Pythonic query syntax for
SQLAlchemy. For example, these two queries produce identical
results:

pythonic_query = session.User['jack'].addresses['j...@google.com']
traditional_query = (
    # Ask for the Address...
    session.query(Address).
    # by querying a User named 'jack'...
    select_from(User).filter(User.name == 'jack').
    # then joining this to the Address 'j...@google.com'.
    join(Address).filter(Address.email_address == 'j...@google.com')
)

For more information, see:
http://pythonic-sqlalchemy-query.readthedocs.io/en/latest/README.html

Thanks to Mike for his feedback and encouragement to post
this on PyPI. I've addressed the weaknesses he mentioned and
added more features and tests. Comments and feedback are
welcome. Enjoy!

Bryan

On Friday, June 16, 2017 at 4:54:36 PM UTC-5, Mike Bayer wrote:



On 06/15/2017 04:11 PM, Bryan Jones wrote:
> All,
>
> While working on my SQLAlchemy-based application, I
noticed an
> opportunity to provide a more concise, Pythonic query
syntax. For
> example, User['jack'].addresses produces a Query for
the Address of a
> User named jack. I had two questions
>
>  1. Has someone already done this? If so, would you
provide a link?
>  2. If not, would this be reasonable for inclusion in
SQLAlchemy, either
>     as an ORM example, or as a part of the core code
base? If so, I can
>     submit a pull request.
>


Hi Bryan -

thanks for working on this.

I can see a lot of variety of ways that systems like this
might work.
For example, I notice we are building up an expression,
but instead of
sending it to a function like
session.run_query(my_query), you've
flipped it around to say my_query.to_query(session).   We
do have a
similar approach with the "baked" query API, where you
build up
BakedQuery without a session then call
baked_query.for_session(session).

It seems like there's maybe a theme to this recipe which
is that it
makes a certain subset of query structures more succinct,
but at the
expense of serving only a limited set of types of
queries.  It seems
like an expression can either select from just the lead
entity, or from
a single column, then if I wanted more entities I need to
drop into
query.add_entity().  It's not clear how I'd select only a
SQL
expression, e.g. "SELECT lower(fullname) FROM jack", etc.
  I do like
how the functionality of __getitem__ is essentially
pluggable.   That's
a nice concept to add to a "query convenience" system.

There are other patterns like this, the most common are
entity-bound
query generators like "User.query" which these days is
mostly popular
with Flask.  There's a lot of query helpers and facades
around within
individual projects.   However in SQLAlchemy itself,
we've moved away
from providing or endorsing helpers like these built in
due to the fact
that they create one API for running the subset of
queries that happen
to fall under the convenience syntax, and then you have
to use a
different 

Re: [sqlalchemy] Concise, Pythonic query syntax

2017-11-06 Thread Chris Withers

Great looking library, shame about the license.

You particularly attached to GPL3 or would you be amenable to BSD or MIT?

Chris

On 03/11/2017 21:52, Bryan Jones wrote:

All,

I've just released the pythonic_sqlalchemy_query package on PyPI, 
which provides concise, Pythonic query syntax for SQLAlchemy. For 
example, these two queries produce identical results:


pythonic_query = session.User['jack'].addresses['j...@google.com']
traditional_query = (
    # Ask for the Address...
    session.query(Address).
    # by querying a User named 'jack'...
    select_from(User).filter(User.name == 'jack').
    # then joining this to the Address 'j...@google.com'.
    join(Address).filter(Address.email_address == 'j...@google.com')
)
For more information, see:
http://pythonic-sqlalchemy-query.readthedocs.io/en/latest/README.html
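To see the "traditional" form actually run, here is a self-contained toy version (SQLite in-memory, SQLAlchemy 1.4+; the schema and sample data are made up, and the pythonic form is omitted since it needs the package itself):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    addresses = relationship("Address", back_populates="user")

class Address(Base):
    __tablename__ = "addresses"
    id = Column(Integer, primary_key=True)
    email_address = Column(String(100))
    user_id = Column(Integer, ForeignKey("users.id"))
    user = relationship("User", back_populates="addresses")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    jack = User(name="jack")
    jack.addresses.append(Address(email_address="jack@example.com"))
    session.add(jack)
    session.commit()

    # the "traditional" query shape from the announcement
    traditional_query = (
        session.query(Address)
        .select_from(User).filter(User.name == "jack")
        .join(Address).filter(Address.email_address == "jack@example.com")
    )
    emails = [a.email_address for a in traditional_query]
```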

Thanks to Mike for his feedback and encouragement to post this on 
PyPI. I've addressed the weaknesses he mentioned and added more 
features and tests. Comments and feedback are welcome. Enjoy!


Bryan

On Friday, June 16, 2017 at 4:54:36 PM UTC-5, Mike Bayer wrote:



On 06/15/2017 04:11 PM, Bryan Jones wrote:
> All,
>
> While working on my SQLAlchemy-based application, I noticed an
> opportunity to provide a more concise, Pythonic query syntax. For
> example, User['jack'].addresses produces a Query for the Address
of a
> User named jack. I had two questions
>
>  1. Has someone already done this? If so, would you provide a link?
>  2. If not, would this be reasonable for inclusion in
SQLAlchemy, either
>     as an ORM example, or as a part of the core code base? If
so, I can
>     submit a pull request.
>


Hi Bryan -

thanks for working on this.

I can see a lot of variety of ways that systems like this might work.
For example, I notice we are building up an expression, but
instead of
sending it to a function like session.run_query(my_query), you've
flipped it around to say my_query.to_query(session).   We do have a
similar approach with the "baked" query API, where you build up
BakedQuery without a session then call
baked_query.for_session(session).

It seems like there's maybe a theme to this recipe which is that it
makes a certain subset of query structures more succinct, but at the
expense of serving only a limited set of types of queries.  It seems
like an expression can either select from just the lead entity, or
from
a single column, then if I wanted more entities I need to drop into
query.add_entity().  It's not clear how I'd select only a SQL
expression, e.g. "SELECT lower(fullname) FROM jack", etc. I do like
how the functionality of __getitem__ is essentially pluggable.  
That's
a nice concept to add to a "query convenience" system.

There are other patterns like this, the most common are entity-bound
query generators like "User.query" which these days is mostly popular
with Flask.  There's a lot of query helpers and facades around within
individual projects.   However in SQLAlchemy itself, we've moved away
from providing or endorsing helpers like these built in due to the
fact
that they create one API for running the subset of queries that
happen
to fall under the convenience syntax, and then you have to use a
different API for queries that fall outside of the convenience
syntax.
When a single product presents multiple, overlapping APIs, it
generally
causes confusion in learning the product.    It's easier for
people to
understand a particular convenience API as an entirely separate
add-on.

SQLAlchemy certainly suffers from this in any case, such as that
we have
both "classical mapping" and "declarative", "Core" and "ORM" querying
styles, things like that; though historically, we've put lots of
effort
into making it so that if you are using ORM Query, you really
don't need
to use Core at all for just about any structure of query, and
similarly
Declarative has totally replaced mapper() in virtually all cases.
  Long
ago we bundled a convenience library called SQLSoup, which I
eventually
broke out into a separate project, and then I later added the
"automap"
extension as a more fundamentals-based system to get the same effect
without using an alternate query API.

I've always encouraged people to write other kinds of query
languages on
top of SQLAlchemy's language.   There's another style that I've
yet to
see someone implement for SQLAlchemy, even though it's quite doable,
which is to parse Python AST into SQLAlchemy queries, with an
emphasis
on generator expressions acting like SELECT constructs.  There are
two
SQL librar

[sqlalchemy] Intermittent psycopg2.IntegrityError errors?

2017-09-15 Thread chris
Hi Everyone,

Sorry to bother you all, I haven't found anything on Stack Overflow or in 
the documentation where users are only having ON CASCADE DELETE errors only 
*intermittently*. If I retry the deletion it works.  I have so far been 
unable to figure out the conditions that leads to this problem, I've tried 
simultaneous access and high load but nothing has been consistently 
reproducible.

I am deleting an *organization* which should delete all *accounts* in the 
organization which should delete all *metadata* records for that account. 
 In most cases it works.  Very rarely, my service catches an exception.

(psycopg2.IntegrityError) update or delete on table "accounts" violates 
foreign key constraint "metadata_account_id_fkey" on table "metadata"
DETAIL:  Key (id)=(28106) is still referenced from table "metadata".
 [SQL: 'DELETE FROM a.accounts WHERE a.accounts.id = %(id)s'] [parameters: 
{'id': 28106}]



PACKAGES

python: 3.6.1
SDK: raven-python 6.1.0
psql (PostgreSQL): 9.4.6



Here is the definition of *organizations*, *accounts* and *metadata *tables:

DROP TABLE IF EXISTS a.organizations CASCADE;

CREATE TABLE a.organizations
(
   id bigserial  NOT NULL,
   name   varchar(50)NOT NULL
);

ALTER TABLE a.organizations OWNER TO readwrite;

ALTER TABLE a.organizations
   ADD CONSTRAINT organizations_pkey
   PRIMARY KEY (id);

GRANT DELETE, SELECT, UPDATE, TRIGGER, INSERT, REFERENCES, TRUNCATE ON 
a.organizations TO readwrite;

COMMIT;

DROP TABLE IF EXISTS a.accounts CASCADE;

CREATE TABLE a.accounts
(
   id              serial        NOT NULL,
   name            varchar(50)   NOT NULL,
   organization_id bigserial     NOT NULL
);

ALTER TABLE a.accounts OWNER TO readwrite;

ALTER TABLE a.accounts
   ADD CONSTRAINT accounts_pkey
   PRIMARY KEY (id);

ALTER TABLE accounts
  ADD CONSTRAINT accounts_organization_id_fkey FOREIGN KEY (organization_id)
  REFERENCES a.organizations (id)
  ON UPDATE NO ACTION
  ON DELETE NO ACTION;

GRANT DELETE, SELECT, UPDATE, TRIGGER, INSERT, REFERENCES, TRUNCATE ON 
a.accounts TO readwrite;

COMMIT;

DROP TABLE IF EXISTS a.metadata CASCADE;

CREATE TABLE a.metadata
(
   id            serial    NOT NULL,
   account_id    integer   NOT NULL,
   gen_metadata  text      NOT NULL
);

ALTER TABLE a.metadata OWNER TO readwrite;

ALTER TABLE a.metadata
   ADD CONSTRAINT metadata_pkey
   PRIMARY KEY (id);

ALTER TABLE metadata
  ADD CONSTRAINT metadata_account_id_fkey FOREIGN KEY (account_id)
  REFERENCES a.accounts (id)
  ON UPDATE NO ACTION
  ON DELETE NO ACTION;

GRANT DELETE, SELECT, UPDATE, TRIGGER, INSERT, REFERENCES, TRUNCATE ON a.
metadata TO readwrite;

COMMIT;



I defined the ON DELETE CASCADE relationships at the ORM level so that the 
cascade delete occurs only when the parent is deleted:

class Accounts(BASE):
    """ORM model for the a.accounts table"""
    __tablename__ = 'accounts'

    # One to Many
    gen_metadata = relationship('Metadata',
                                back_populates='account',
                                cascade='all, delete-orphan')

    # Many to One, do not cascade delete
    organization = relationship('Organizations', back_populates='accounts')


class Organizations(BASE):
    """ORM model for the a.organizations table"""
    __tablename__ = 'organizations'

    # One to Many
    accounts = relationship('Accounts',
                            back_populates='organization',
                            cascade='all, delete-orphan')


class Metadata(BASE):
    """ORM model for the a.metadata table"""
    __tablename__ = 'metadata'

    # Many to One
    account = relationship('Accounts', back_populates='gen_metadata')



And here is my Python to delete the *organization*: 

def delete_organization(org_id):
    """
    Helper method to delete an organization.

    :param org_id: the organization to delete
    """
    org = DB.session.query(Organizations).get(org_id)

    with DB.session.no_autoflush:
        if org is None:
            raise my_error()

        for i in range(max_retries):
            try:
                DB.session.delete(org)
                DB.session.flush()
            except exc.IntegrityError:
                DB.session.rollback()
                if i + 1 != max_retries:
                    continue  # retry
                else:
                    raise  # if we fail after max_retries, then re-raise the exception
            else:
                break  # stop retrying on success

        # Do some other stuff

        DB.session.commit()

        return my_response()

    raise my_error()



I would appreciate any and all suggestions.  Thank you all in advance for 
your help!

Regards,

Re: [sqlalchemy] Update timestamp with many levels of join inheritance

2017-07-31 Thread Chris Satterthwaite
Hi Mike,

Thanks for the quick reply.  Solution found.

I tried both the event.listen and event.listens_for routes before, but I
was trying to use explicit relationships instead of the propagate flag.

Thanks for the hint on propagate=True.  That led me to see my problem
(that I also hit unknowingly before), which I'll list here in case someone
else hits it.  Namely, I thought the class instance (target) was the FIRST
argument sent by the event, but no... the first argument was Mapper.  I
didn't see any exception until I placed print statements inside the
updateTimestamp function. Turns out, the class target is the THIRD argument.


Adding this after my class definition did the trick:

@event.listens_for(Node, 'before_update', propagate=True)
def updateTimestamp(mapper, connect, target):
target.time_updated = func.now()
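A self-contained version of this pattern, trimmed to two inheritance levels (SQLite in-memory, SQLAlchemy 1.4+; datetime.utcnow stands in for func.now() so the example doesn't depend on a server-side clock):

```python
import datetime

from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String,
                        create_engine, event)
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Node(Base):
    __tablename__ = "node"
    id = Column(Integer, primary_key=True)
    object_type = Column(String(16))
    name = Column(String(50))
    time_updated = Column(DateTime)
    __mapper_args__ = {"polymorphic_identity": "node",
                       "polymorphic_on": object_type}

class Server(Node):
    __tablename__ = "server"
    id = Column(Integer, ForeignKey(Node.id), primary_key=True)
    related_application = Column(String(50))
    __mapper_args__ = {"polymorphic_identity": "server"}

# propagate=True makes the listener fire for every subclass of Node too
@event.listens_for(Node, "before_update", propagate=True)
def update_timestamp(mapper, connection, target):
    target.time_updated = datetime.datetime.utcnow()

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    srv = Server(name="web1", related_application="blog")
    session.add(srv)
    session.commit()

    # changing only a Server-level column still bumps the Node-level timestamp
    srv.related_application = "shop"
    session.commit()
    updated_at = srv.time_updated
```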


Thanks for the help, and for an amazing set of libraries!

-Chris



On Mon, Jul 31, 2017 at 4:05 PM, Mike Bayer <mike...@zzzcomputing.com>
wrote:

> On Mon, Jul 31, 2017 at 4:37 PM, Chris Satterthwaite <cnsat...@gmail.com>
> wrote:
> > Conceptually, what I’m trying to do is:
> >
> >   1)  Enable queries for a higher level type (e.g. Node) to return all
> > direct instances and sub-class instances (e.g. Node, [Servers, [UnixType,
> > [Linux, AIX]]])
> >
> >   2)  Enable a set of attributes to be inherited by all sub-types
> >
> >   3)  Enable an attribute update at any level to update attributes seen
> by
> > all levels (e.g. time_updated in my example)
> >
> >
> >
> > 1 and 2 above “just work” by setting up SqlAlchemy with join inheritance.
> > But I’ve failed to implement #3.
> >
> >
> >
> > Versions:  Python v3.6.1, SQLAlchemy v1.1.11, and Postgres v9.6.
> >
> >
> > Class definitions follow...
>
> in these class definitions, there's two fields mentioned
> "time_updated" and then later "distribution", which are not mapped in
> the same way.  "time_updated" is only on the base Node class; to make
> that update no matter what, do a before_update() event for the Node
> class with propagate=True and then set object.time_updated=func.now()
> in the event.
>
> For the "distribution" column, you've got the same-named column in two
> different tables. This is typically not advised as the information
> being stored is redundant; however if these two columns are indeed the
> same information you can map both columns to one attribute using the
> technique shown at
> http://docs.sqlalchemy.org/en/latest/orm/mapping_columns.
> html#using-column-property-for-column-level-options,
> and they would automatically update together.
>
>
> >
> > class Node(Base):
> >
> > __tablename__ = 'node'
> >
> > hostname = Column(String(256), primary_key=True)
> >
> > domain = Column(String(256), primary_key=True)
> >
> > object_id = Column(CHAR(32), unique=True, default=lambda
> :uuid.uuid4().hex)
> >
> > object_type = Column(String(16))
> >
> > time_created = Column(DateTime(timezone=True),
> server_default=func.now())
> >
> > time_updated = Column(DateTime(timezone=True), default=func.now(),
> > onupdate=func.now())
> >
> > description  = Column(String(256))
> >
> > __mapper_args__ = {'with_polymorphic': '*',
> 'polymorphic_identity':'node',
> > 'polymorphic_on':object_type}
> >
> >
> > class Server(Node):
> >
> > __tablename__ = 'server'
> >
> > object_id = Column(None, ForeignKey(Node.object_id), primary_key=True)
> >
> > related_application = Column(String(512), nullable=True)
> >
> > __mapper_args__ = {'with_polymorphic': '*', 'polymorphic_identity':'
> server',
> > 'inherit_condition': object_id == Node.object_id}
> >
> >
> > class UnixType(Server):
> >
> > __tablename__ = 'unix_type'
> >
> > object_id = Column(None, ForeignKey(Server.object_id), primary_key=True)
> >
> > __mapper_args__ = {'polymorphic_identity':'node_windows',
> > 'with_polymorphic': '*', 'inherit_condition': object_id ==
> Server.object_id}
> >
> >
> > class Linux(UnixType):
> >
> > __tablename__ = 'linux'
> >
> > object_id = Column(None, ForeignKey(UnixType.object_id),
> primary_key=True)
> >
> > distribution = Column(String(256), nullable=True)
> >
> > __mapper_args__ = {'with_polymorphic': '*',
> 'polymorphic_identity':'linux',
> > 'inherit_condition': object_id == UnixType.object_id}
> >
> >
> > class AIX(UnixType):
> >
> > __tablename__ = 'aix'
> >
> > object_id = Column(None, Foreign

[sqlalchemy] Update timestamp with many levels of join inheritance

2017-07-31 Thread Chris Satterthwaite


Conceptually, what I’m trying to do is:

  1)  Enable queries for a higher level type (e.g. Node) to return all 
direct instances and sub-class instances (e.g. Node, [Servers, [UnixType, 
[Linux, AIX]]])

  2)  Enable a set of attributes to be inherited by all sub-types

  3)  Enable an attribute update at any level to update attributes seen by 
all levels (e.g. time_updated in my example)

 

1 and 2 above “just work” by setting up SqlAlchemy with join inheritance.  
But I’ve failed to implement #3.  

 

Versions:  Python v3.6.1, SQLAlchemy v1.1.11, and Postgres v9.6.


Class definitions follow...

class Node(Base):
    __tablename__ = 'node'
    hostname = Column(String(256), primary_key=True)
    domain = Column(String(256), primary_key=True)
    object_id = Column(CHAR(32), unique=True, default=lambda: uuid.uuid4().hex)
    object_type = Column(String(16))
    time_created = Column(DateTime(timezone=True), server_default=func.now())
    time_updated = Column(DateTime(timezone=True), default=func.now(),
                          onupdate=func.now())
    description = Column(String(256))
    __mapper_args__ = {'with_polymorphic': '*', 'polymorphic_identity': 'node',
                       'polymorphic_on': object_type}


class Server(Node):
    __tablename__ = 'server'
    object_id = Column(None, ForeignKey(Node.object_id), primary_key=True)
    related_application = Column(String(512), nullable=True)
    __mapper_args__ = {'with_polymorphic': '*', 'polymorphic_identity': 'server',
                       'inherit_condition': object_id == Node.object_id}


class UnixType(Server):
    __tablename__ = 'unix_type'
    object_id = Column(None, ForeignKey(Server.object_id), primary_key=True)
    __mapper_args__ = {'polymorphic_identity': 'node_windows',
                       'with_polymorphic': '*',
                       'inherit_condition': object_id == Server.object_id}


class Linux(UnixType):
    __tablename__ = 'linux'
    object_id = Column(None, ForeignKey(UnixType.object_id), primary_key=True)
    distribution = Column(String(256), nullable=True)
    __mapper_args__ = {'with_polymorphic': '*', 'polymorphic_identity': 'linux',
                       'inherit_condition': object_id == UnixType.object_id}


class AIX(UnixType):
    __tablename__ = 'aix'
    object_id = Column(None, ForeignKey(UnixType.object_id), primary_key=True)
    distribution = Column(String(256), nullable=True)
    __mapper_args__ = {'with_polymorphic': '*', 'polymorphic_identity': 'aix',
                       'inherit_condition': object_id == UnixType.object_id}


 

I’d like my app to query the base class (Node) for either the two primary 
keys (hostname and domain) or the unique key (object_id), and then be able 
to directly update any attribute on all inherited class levels.  Right now 
I’m having to query for Node, then query the Linux class for the same ID 
(to get access to all the attributes for my update), then update.  And that 
works ok unless I want automated actions on an inherited attribute – like 
the last update time (time_update).  It only updates when I specifically 
update the class with that attribute (the Node class in my example).

 

Complete script from my last iteration attached, along with sample output.

 

In my four iterations of test cases (required to show updates at all 
levels), I show the following:

  1)  Create a base type instance at the start, that can be sub-typed to a 
more qualified subtype later (e.g. create Node instance, later sub-type 
into AIX instance). I’m doing this by copy/delete/recreate-as-sub-type 
which works ok now; I am assuming there is a better way.

  2)  Create a more qualified type at the start, and send updates for just 
the base type (e.g. create AIX instance at the start, later update the 
“related_application” on the Server instance, or the “time_updated” on the 
Node instance)

 

Samples show that the time_update attribute only changes when the base 
class changes.

 

I probably could update a Linux or AIX object, then re-query for its base 
Node with the same ID, and then force a manual update on the timestamp.  
Hoping there is another way.

 

Sorry for the verbosity; I tried to be comprehensive.  Appreciate your help.

 

 

===

Additional detail (probably unnecessary) – hence listing it last:

I’ve spent about a week spinning my wheels, so I’m reaching out.  I’ve 
tried different variations from web posts, including: 

• relationships from parent-to-child and vice versa (to event off of in 
order to propagate time_updated; hit different problems and gave up with 
circular dependency on deletes)

• with_polymorphic in the query and attributes (to load all attributes 
types so I could set in just one level)

• duplicated attribute with foreign keys in all classes (and specifying 
inherit_condition - trying to get foreign keys to point to the same 
time_updated attribute in the base class)

• a mixin object to use declared_attr.cascading on forign keys

 

Perhaps one of those is the right way to do this, but I can’t figure it 
out.  Some posts show solutions without join inheritance, and I can’t seem 
to retrofit those solutions due to the 

Re: [sqlalchemy] mapper.order_by deprecated?

2017-05-09 Thread Chris Withers
Gotcha. Is there any way to specify a default ordering for queries 
against a model?


(similar to that offered by the Django ORM?)

cheers,

Chris
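There's no built-in replacement, but a small helper gives the Django-ish behaviour; note that __default_order_by__ below is a made-up convention for this sketch, not a SQLAlchemy API (assumes SQLAlchemy 1.4+, demonstrated against in-memory SQLite):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

    # made-up convention, consulted only by the helper below
    __default_order_by__ = ("name",)

def default_ordered(session, model):
    """Like session.query(model), but applying the model's declared ordering."""
    query = session.query(model)
    order = getattr(model, "__default_order_by__", None)
    if order:
        query = query.order_by(*(getattr(model, name) for name in order))
    return query

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(name="zoe"), User(name="abe")])
    session.commit()
    names = [u.name for u in default_ordered(session, User)]
```

Unlike the old mapper.order_by, the ordering here is applied explicitly at query-construction time, which sidesteps the ambiguous multi-entity cases Mike lists below it.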


On 08/05/2017 23:51, mike bayer wrote:
because, it only works for a really simplistic case, and for all the 
other cases, I have no idea what it currently does, nor what it should 
do.


Assume A and B have order_by. I have no idea what the output is, or 
should be, for:


s.query(A, B)
s.query(B, A)
s.query(A).join(B)
s.query(A).select_from(B).join(A)
s.query(A, B).select_from(B).join(A)
s.query(A.id)
s.query(A, B.id)
s.query(A.id, B)
s.query(A.id, B.id)
s.query(B.id, A.id)

etc





On 05/08/2017 06:21 AM, Chris Withers wrote:

Hi All,

I see mapper.order_by is deprecated in the latest release. Why is that?

cheers,

Chris





--
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper


http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] mapper.order_by deprecated?

2017-05-08 Thread Chris Withers

Hi All,

I see mapper.order_by is deprecated in the latest release. Why is that?

cheers,

Chris



[sqlalchemy] Forcing filters to use same type as field

2017-02-22 Thread Chris Frey
Hi,

We're using MySQL, and we have tables that use a GUID as the ID.
Unfortunately, if the GUID starts with a number, and if you select
using an integer, mysql will helpfully convert for you:

mysql> select id from table where id = 2;
+-----------------------------------------+
| id                                      |
+-----------------------------------------+
| 2ceb-d632-f330-4e7c-8490-90b7-5a02-e633 |
+-----------------------------------------+
1 row in set, 7 warnings (0.00 sec)

This comes from a query like this in SQLAlchemy:

record = session.query(Table).filter(Table.id == 2).first()

Is there any way to force the resulting SQL to use the type of Table.id
instead of the integer type of 2?

Or is there a better way to handle this?
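One workaround (my sketch, not from the thread; `Record` is a hypothetical stand-in for the GUID-keyed table, demonstrated on SQLite): pass the value as a string, or cast it, so the comparison stays string-typed and the database never coerces the GUID column to a number:

```python
from sqlalchemy import Column, String, cast, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Record(Base):  # hypothetical stand-in for the GUID-keyed table
    __tablename__ = "record"
    id = Column(String(39), primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Record(id="2ceb-d632-f330-4e7c-8490-90b7-5a02-e633"))
    session.commit()
    # A string-typed bound parameter keeps the comparison string vs string,
    # so a numeric literal cannot match a GUID.
    assert session.query(Record).filter(Record.id == str(2)).first() is None
    # cast() makes the intent explicit in the emitted SQL:
    assert session.query(Record).filter(Record.id == cast(2, String)).first() is None
```

Validating inputs before they reach the query (rejecting non-string IDs at the application boundary) is the more robust long-term fix.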

Thanks,
- Chris



Re: adding an auto increment column to an existing table

2017-01-18 Thread Chris Withers

On 17/01/2017 15:07, mike bayer wrote:



Because there's no data in a brand new table, the server default isn't
needed to create the not-null column.


Not needed, but it is created; isn't that the point of autoincrement=True?


for postgresql, autoincrement=True means that if the column is marked
primary_key=True it will use the datatype SERIAL for that column, which
in PG does mean the sequence is generated and added as the server side
default.


Would you expect a SERIAL on Postgres to end up being an integer with a 
default of a sequence when viewed in psql?


What happens with autoincrement on a non-primary-key column? My 
experience suggests it does nothing...



http://stackoverflow.com/a/19262262/216229 implies the same works
for Alembic, but it didn't work at all for me. So I was wondering if
that answer was wrong, or if I was doing something wrong.


Still wondering if this answer is wrong or if I'm doing something
wrong...


that SO answer is showing op.create_table().   your example was just for
op.add_column().  super different.


I have to admit that, at the column level, that's surprising to me. 
Where can I see the differences between a column created as part of 
create_table() versus as part of an add_column()?


cheers,

Chris

--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Selecting from polymorphic tables without selecting polymorphic columns

2017-01-17 Thread Chris Withers

On 17/01/2017 18:38, mike bayer wrote:



On 01/17/2017 01:05 PM, Chris Withers wrote:

Potentially related issue, given:

class TheTable(Base):
__tablename__ = 'stuff'
__mapper_args__ = dict(
polymorphic_on='type',
polymorphic_identity='base',
)
type = Column(String)
col = Column(String)

class Model1(TheTable):
__mapper_args = dict(
polymorphic_identity='model1',
)

class Model2(TheTable):
__mapper_args = dict(
polymorphic_identity='model2',
)

session.add(Model1(col='a'))
session.add(Model2(col='b'))
session.flush()

I'd now expect:

self.session.query(Model1)

...to return the Model1 instance, but in my tests it returns nothing.


you misspelled "__mapper_args__"


D'oh, thanks.

Chris



Re: [sqlalchemy] Selecting from polymorphic tables without selecting polymorphic columns

2017-01-17 Thread Chris Withers

Potentially related issue, given:

class TheTable(Base):
__tablename__ = 'stuff'
__mapper_args__ = dict(
polymorphic_on='type',
polymorphic_identity='base',
)
type = Column(String)
col = Column(String)

class Model1(TheTable):
__mapper_args = dict(
polymorphic_identity='model1',
)

class Model2(TheTable):
__mapper_args = dict(
polymorphic_identity='model2',
)

session.add(Model1(col='a'))
session.add(Model2(col='b'))
session.flush()

I'd now expect:

self.session.query(Model1)

...to return the Model1 instance, but in my tests it returns nothing.

self.session.query(TheTable) returns both rows, but both as TheTable 
instances, which I find a little surprising.


Chris

On 16/01/2017 17:54, mike bayer wrote:

issue

https://bitbucket.org/zzzeek/sqlalchemy/issues/3891/single-inh-criteria-should-be-added-for


is added.   Targeted at 1.2 as it will break applications unknowingly
relying upon the bug right now.

For now say func.count(Manager.employee_id), e.g. put the entity in the
columns clause.
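A runnable sketch of that workaround (my example, with hypothetical Employee/Manager single-table models; the automatic criteria for column-level queries landed with issue 3891 in 1.2): referencing the subclass entity's column in the columns clause brings the discriminator criteria along:

```python
from sqlalchemy import Column, Integer, String, create_engine, func
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Employee(Base):  # hypothetical single-table hierarchy
    __tablename__ = "employee"
    id = Column(Integer, primary_key=True)
    type = Column(String)
    __mapper_args__ = {"polymorphic_on": type, "polymorphic_identity": "employee"}

class Manager(Employee):
    __mapper_args__ = {"polymorphic_identity": "manager"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add_all([Manager(), Employee()])
    session.commit()
    # Putting the subclass column in the columns clause applies the
    # single-inheritance discriminator criteria (WHERE type IN ('manager')).
    assert session.query(func.count(Manager.id)).scalar() == 1
    # Counting via the entity itself is always filtered:
    assert session.query(Manager).count() == 1
```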



On 01/16/2017 12:23 PM, Michael Williamson wrote:

Hello!

I have a use case where I want to select from a polymorphic table, but
without selecting any columns from that table. As a simple example,
consider selecting the count of all rows. When I write something like:

sess.query(func.count(1)).select_from(Manager).all()

It seems to be equivalent to:

sess.query(func.count(1)).select_from(Employee).all()

(where Manager inherits from Employee).

Is this intended, or is this a bug? If the former, what's the suggested
approach to writing such queries? To filter on the discriminator
explicitly?

For reference, I was able to reproduce the issue with a test case in
test/orm/inheritance/test_single.py:

def test_select_from_inherited_tables(self):
Manager, Engineer, Employee = (self.classes.Manager,
self.classes.Engineer, self.classes.Employee)

sess = create_session()
m1 = Manager(name='Tom', manager_data='data1')
e1 = Engineer(name='Kurt', engineer_info='knows how to hack')
sess.add_all([m1, e1])
sess.flush()

eq_(
sess.query(func.count(1)).select_from(Manager).all(),
[(1, )]
)

Thanks

Michael







Re: [sqlalchemy] single table inheritance and instrument_class events

2017-01-17 Thread Chris Withers

On 17/01/2017 15:08, mike bayer wrote:


On 01/17/2017 06:15 AM, Chris Withers wrote:

Great, thanks. I assume has_inherited_table returns False where the
table is defined on the class itself?



it looks like has_inherited_table is just looking for non-None __table__
attribute up the inheritance chain.


Ah well, tests pass, must be fine *head back in sand* ;-)

Chris



Re: [sqlalchemy] single table inheritance and instrument_class events

2017-01-17 Thread Chris Withers
Great, thanks. I assume has_inherited_table returns False where the 
table is defined on the class itself?


(ie: normal models, and the "base" model in the case of single table 
inheritance)
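To illustrate (my sketch, with hypothetical model names, importing from `sqlalchemy.orm` as in modern SQLAlchemy; older versions expose it via `sqlalchemy.ext.declarative`): `has_inherited_table` walks the MRO looking for a `__table__`, so it is False for the base model that defines the table and True for single-table subclasses:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base, has_inherited_table

Base = declarative_base()

class Content(Base):  # defines the table itself
    __tablename__ = "content"
    id = Column(Integer, primary_key=True)
    type = Column(String)
    __mapper_args__ = {"polymorphic_on": type, "polymorphic_identity": "content"}

class Article(Content):  # single-table inheritance subclass
    __mapper_args__ = {"polymorphic_identity": "article"}

# False where the table is defined on the class itself...
assert not has_inherited_table(Content)
# ...True for a subclass that inherits that table.
assert has_inherited_table(Article)
```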


Chris

On 16/01/2017 17:49, mike bayer wrote:

at the declarative level we have this:

http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/api.html#sqlalchemy.ext.declarative.has_inherited_table


this might be better since you're really looking for a "table" up above

On 01/16/2017 12:05 PM, Chris Withers wrote:

Hi All,

If I'm using instrument_class events to add some constraints to a table,
what's the 'right' way to spot when a subclass is being
instrumented? (where I'm guessing I shouldn't add the constraints).

My current attempt is here:

https://github.com/Mortar/mortar_mixins/commit/b82bea1dbc1393e24116943804b192df056a70e7



...but that feels a bit hacky.

Thoughts?

Chris







[sqlalchemy] single table inheritance and instrument_class events

2017-01-16 Thread Chris Withers

Hi All,

If I'm using instrument_class events to add some constraints to a table, 
what's the 'right' way to spot when a subclass is being 
instrumented? (where I'm guessing I shouldn't add the constraints).


My current attempt is here:

https://github.com/Mortar/mortar_mixins/commit/b82bea1dbc1393e24116943804b192df056a70e7

...but that feels a bit hacky.

Thoughts?

Chris



Re: adding an auto increment column to an existing table

2017-01-10 Thread Chris Withers

Okay, so this worked:

op.execute(CreateSequence(Sequence("observation_id_seq")))
op.add_column('observation', sa.Column(
'id', sa.Integer(), nullable=False,
server_default=sa.text("nextval('observation_id_seq'::regclass)")
))
op.drop_constraint('observation_pkey', table_name='observation')
op.create_primary_key('observation_pkey', 'observation', ['id'])

...but how come my original attempt didn't?

cheers,

Chris

On 10/01/2017 08:03, Chris Withers wrote:

So, I screwed up and realised I really want an auto-incrementing integer
as the primary key for a bunch of tables.

I've changed my models, got all the tests passing and now I need to get
the migrations done, I have:


op.add_column('observation',
  sa.Column('id', sa.Integer(), nullable=False,
  autoincrement=True))
op.drop_constraint('observation_pkey', table_name='observation')
op.create_primary_key('observation_pkey', 'observation', ['id'])


According to this answer, this should work:

http://stackoverflow.com/a/19262262/216229

...but when running the migration, I get:

sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) column "id"
contains null values
 [SQL: 'ALTER TABLE observation ADD COLUMN id INTEGER NOT NULL']

...so what am I doing wrong?

cheers,

Chris





adding an auto increment column to an existing table

2017-01-10 Thread Chris Withers
So, I screwed up and realised I really want an auto-incrementing integer 
as the primary key for a bunch of tables.


I've changed my models, got all the tests passing and now I need to get 
the migrations done, I have:



op.add_column('observation',
  sa.Column('id', sa.Integer(), nullable=False,
  autoincrement=True))
op.drop_constraint('observation_pkey', table_name='observation')
op.create_primary_key('observation_pkey', 'observation', ['id'])


According to this answer, this should work:

http://stackoverflow.com/a/19262262/216229

...but when running the migration, I get:

sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) column "id" 
contains null values

 [SQL: 'ALTER TABLE observation ADD COLUMN id INTEGER NOT NULL']

...so what am I doing wrong?

cheers,

Chris



Re: [sqlalchemy] Load sqlalchemy orm model from dict

2017-01-07 Thread Chris Withers

On 07/01/2017 00:20, Daniel Kraus wrote:

Hi!

mike bayer <mike...@zzzcomputing.com> writes:

you're looking for session.merge() but if you're looking to save on a
SELECT you might also want to send in load=False - and if you are
starting with a fresh (non-pickled) object you probably need to call
make_transient_to_detached first so that it acts like it was loaded from
the database first.

Nice. Works like I wanted :)
In fact, if your dict is created from a User object in the first place, 
I'd just keep that User object, detach it from the session and put it in 
your cache...
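Mike's recipe above can be sketched end to end (my example, hypothetical `User` model): `make_transient_to_detached` gives the rebuilt object an identity key, so `merge(load=False)` attaches it without emitting a SELECT:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import (Session, declarative_base,
                            make_transient_to_detached)

Base = declarative_base()

class User(Base):  # hypothetical model matching the thread
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(id=1, name="daniel"))
    session.commit()

# Later: rebuild the object from a cached dict instead of querying.
cached = {"id": 1, "name": "daniel"}
user = User(**cached)
make_transient_to_detached(user)  # now it looks like it was loaded earlier

with Session(engine) as session:
    merged = session.merge(user, load=False)  # no SELECT is emitted
    assert merged.name == "daniel"
```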


Chris



Re: patterns for automated tests of migrations

2016-12-11 Thread Chris Withers

Hi Tom,

This is certainly interesting, but even more heavyweight than I was 
looking for ;-)
I love the idea of really making sure the schema that results from 
running all the migrations is the same as the one from doing a 
create_all with your metadata.
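One way to check that equivalence (my sketch; `schema_signature` is a hypothetical helper, and the "migrated" engine is simulated here with hand-written DDL rather than a real alembic run): snapshot both schemas with the inspector and compare:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, inspect, text)

def schema_signature(engine):
    """Normalized snapshot of (table, column, type) for comparison."""
    insp = inspect(engine)
    return {
        (table, col["name"], str(col["type"]))
        for table in insp.get_table_names()
        for col in insp.get_columns(table)
    }

metadata = MetaData()
Table("account", metadata,
      Column("id", Integer, primary_key=True),
      Column("name", String(50)))

# Engine 1: schema produced directly by metadata.create_all().
fresh = create_engine("sqlite://")
metadata.create_all(fresh)

# Engine 2: schema as the accumulated migrations would build it
# (simulated here with an equivalent hand-written DDL statement).
migrated = create_engine("sqlite://")
with migrated.begin() as conn:
    conn.execute(text(
        "CREATE TABLE account (id INTEGER NOT NULL, "
        "name VARCHAR(50), PRIMARY KEY (id))"))

assert schema_signature(fresh) == schema_signature(migrated)
```

In a real suite the "migrated" engine would come from running `alembic upgrade head`; alembic's `compare_metadata` autogenerate API can do a richer version of the same diff.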

Does alembic have anything like django's squashmigrations command yet?

Anyway, what I'm interested in is testing migrations in the presence of 
data, particularly migrations that have to do "fun stuff" with existing 
rows to create new rows. Those feel more like unit tests to me - "run a 
bunch of individual scenarios", rather than your excellent but 
integration-y "run them all and make sure they build the expected schema".


Thoughts?

Chris

On 23/11/2016 09:20, Tom Lazar wrote:

hi chris,

here’s how we do it in all our projects:

https://github.com/pyfidelity/rest-seed/blob/master/backend/backrest/tests/test_migrations.py

basically, our migrations start with an empty database, so we run them, dump 
the resulting SQL, then create a new database using the `metadata.create_all` 
feature and then normalize the results and compare. if there is a mismatch the 
test fails.

if you want to write more elaborate tests involving actual data, this should be 
a good starting point, though

HTH,

tom



On 23 Nov 2016, at 10:15, Chris Withers <ch...@simplistix.co.uk> wrote:

Hi All,

How do you go about writing automated tests for a migration?

I don't often do this, but when migrations involve data, I prefer to; right now, 
though, I don't know how ;-)
There's nothing in the docs, right? (or am I missing something obvious)

cheers,

Chris



Re: [sqlalchemy] regression with 1.1.0

2016-10-08 Thread Chris Withers

People use pg8000? ;-)

I jest, but under what circumstances does pg8000 make a better choice 
than psycopg2?


Chris

On 07/10/2016 14:40, mike bayer wrote:

FYI pg8000 raises ProgrammingError on these


On 10/07/2016 08:56 AM, Mike Bayer wrote:



On 10/07/2016 02:36 AM, Chris Withers wrote:

On 06/10/2016 15:04, Mike Bayer wrote:


So Mortar seems to be a heavily SQLAlchemy-integrated product.


It has a very small user base ;-)


I'd like
to point out that whenever we do these year-long release cycles, I put
out a lot of betas very early on that hopefully contain all the 
biggish
changes.  This one has been available on pypi since June 16.  
Nobody has

said anything about it.


Right, unfortunately my testing strategy is generally to use the latest
official releases (ie: what "pip install x" would pick) so as not to 
get

a lot of noise from the nightlies.

With SQLAlchemy, I'd happily make an exception, but how do I change my
travis.yml to have an axis for SQLAlchemy version that is "latest
stable" and "latest beta"?


there's a pip flag at that level.  I use jenkins myself but if we ask
the usual suspects on twitter we can probably find an answer :)






So, this code path used to previously raise an IntegrityError and now
raises a CompileError. What was the driver for that? I can certainly
see
there being code in the wild that was expecting an IntegrityError and
will now fail...


The change itself, including the text of that exception verbatim so 
that

google searches can find it, is described in detail here:
http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#the-autoincrement-directive-is-no-longer-implicitly-enabled-for-a-composite-primary-key-column 




.


Right, that change seems sensible here, although not sure


I wouldn't have it raise that CompileError if I wasn't changing the API
on a very small set of users.   The suffering spread by the old way
autoincrement worked had to come to an end.





There's definitely code that looks for IntegrityError in the wild,
however such code (such as what I've written for Openstack) is 
doing so
in order to detect conditions that cannot be caught beforehand, 
such as
a row that conflicts with a unique constraint or a row that is 
rejected

as a concurrent update.

In this case, you have a Table that has been configured to not expect
the database to generate any kind of identifier for a primary key 
column

(which is the change here, previously, it was),


Sorry, don't follow the bit in brackets; the test model has no integer
columns.

present in the INSERT statement.  So it is known ahead of time that 
this
will either fail unconditionally, or the Table is 
mis-configured. The

change is specifically to alert users that this is now a
mis-configuration.   E.g. so that I wouldn't get a "regression!" email
:)  .


I think I follow that, but my test is verifying that:

session.add(Model(period='...'))
session.save()

(nb: Model is erroneously instantiated with no string primary key)

...gives an IntegrityError, which I think is what most people would
expect. There's no mis-configuration there, it's just a bad object
which, if inserted, would result in a row with no primary key.


Right this is the "it will fail unconditionally" choice.   I agree
sending to the DB and getting the error back is less mysterious.  But I
don't see a way to distinguish between someone who was relying upon
implicit autoincrement for composite PK and who isn't.   I can just flip
my mind and say, OK, your scenario is way more likely than the other
one, and we flip.  Then I get people saying "why are there
integrityerrors all the sudden w/ no explanation?" .   Sure, I'll make
it a warning. Release will be today.  Please test w/ the betas next
time.






There is existing precedent for SQLAlchemy catching a NULL primary key
value before the INSERT is actually emitted, the ORM has done this 
for a

long time.


This is for the identity map, right? That's interesting, what exception
does it raise in that circumstance? (More out of curiosity than
anything: I'm wondering why that exception wasn't raised in this test
case and I ended up with an IntegrityError)

cheers,

Chris







Re: [sqlalchemy] regression with 1.1.0

2016-10-07 Thread Chris Withers

On 06/10/2016 15:04, Mike Bayer wrote:


So Mortar seems to be a heavily SQLAlchemy-integrated product.


It has a very small user base ;-)


I'd like
to point out that whenever we do these year-long release cycles, I put
out a lot of betas very early on that hopefully contain all the biggish
changes.  This one has been available on pypi since June 16.  Nobody has
said anything about it.


Right, unfortunately my testing strategy is generally to use the latest 
official releases (ie: what "pip install x" would pick) so as not to get 
a lot of noise from the nightlies.


With SQLAlchemy, I'd happily make an exception, but how do I change my 
travis.yml to have an axis for SQLAlchemy version that is "latest 
stable" and "latest beta"?



So, this code path used to previously raise an IntegrityError and now
raises a CompileError. What was the driver for that? I can certainly see
there being code in the wild that was expecting an IntegrityError and
will now fail...


The change itself, including the text of that exception verbatim so that
google searches can find it, is described in detail here:
http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#the-autoincrement-directive-is-no-longer-implicitly-enabled-for-a-composite-primary-key-column
.


Right, that change seems sensible here, although not sure



There's definitely code that looks for IntegrityError in the wild,
however such code (such as what I've written for Openstack) is doing so
in order to detect conditions that cannot be caught beforehand, such as
a row that conflicts with a unique constraint or a row that is rejected
as a concurrent update.

In this case, you have a Table that has been configured to not expect
the database to generate any kind of identifier for a primary key column
(which is the change here, previously, it was),


Sorry, don't follow the bit in brackets; the test model has no integer 
columns.



present in the INSERT statement.  So it is known ahead of time that this
will either fail unconditionally, or the Table is mis-configured. The
change is specifically to alert users that this is now a
mis-configuration.   E.g. so that I wouldn't get a "regression!" email
:)  .


I think I follow that, but my test is verifying that:

session.add(Model(period='...'))
session.save()

(nb: Model is erroneously instantiated with no string primary key)

...gives an IntegrityError, which I think is what most people would 
expect. There's no mis-configuration there, it's just a bad object 
which, if inserted, would result in a row with no primary key.



There is existing precedent for SQLAlchemy catching a NULL primary key
value before the INSERT is actually emitted, the ORM has done this for a
long time.


This is for the identity map, right? That's interesting, what exception 
does it raise in that circumstance? (More out of curiosity than 
anything: I'm wondering why that exception wasn't raised in this test 
case and I ended up with an IntegrityError)


cheers,

Chris



[sqlalchemy] regression with 1.1.0

2016-10-06 Thread Chris Withers

Hi All,

Since 1.1.0 was released yesterday, one of my library's nightly builds 
has started failing:


https://travis-ci.org/Mortar/mortar_mixins/jobs/165422873

Error was:

AssertionError: CompileError(u"Column 'model.period' is marked as a 
member of the primary key for table 'model', but has no Python-side or 
server-side default generator indicated, nor does it indicate 
'autoincrement=True' or 'nullable=True', and no explicit value is 
passed.  Primary key columns typically may not store NULL. Note that as 
of SQLAlchemy 1.1, 'autoincrement=True' must be indicated explicitly for 
composite (e.g. multicolumn) primary keys if 
AUTO_INCREMENT/SERIAL/IDENTITY behavior is expected for one of the 
columns in the primary key. CREATE TABLE statements are impacted by this 
change as well on most backends.",) raised, <class 'sqlalchemy.exc.IntegrityError'> expected


The test is here:

https://github.com/Mortar/mortar_mixins/blob/master/tests/test_temporal.py#L268

So, this code path used to previously raise an IntegrityError and now 
raises a CompileError. What was the driver for that? I can certainly see 
there being code in the wild that was expecting an IntegrityError and 
will now fail...


(I guess I also have a niggle that this feels like SQLAlchemy is doing 
more work than it was before, will this have performance implications?)


Chris



Re: [sqlalchemy] changing the order of columns in primary keys, including when generated by mixins

2016-09-22 Thread Chris Withers

On 22/09/2016 14:55, Mike Bayer wrote:


On 09/22/2016 07:30 AM, Chris Withers wrote:

How do you control the order in which columns are added to a
multi-column primary key when using the declarative primary_key=True
syntax?
How about when one of those columns comes from a mixin?


without mixins, the columns are processed in the order that they were
created - they all have a global "created index" number that is
consulted outside of the order in which the objects appear on a
declarative class.


Right, so you'd just re-order the columns...


With mixins, that all gets screwed up.  In that case you'd use a
PrimaryKeyConstraint() object on the final mapped class and that will
express the ordering of the columns.


...which I guess I'd want to introspect and then form that final 
PrimaryKeyConstraint. How would I do that in this mixin:


https://github.com/Mortar/mortar_mixins/blob/master/mortar_mixins/temporal.py

...while also removing the primary key generated by the primary_key=True 
on the various columns?
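A sketch of the target shape (my example; `String` stands in for `tsrange` so it runs anywhere): drop `primary_key=True` from the columns and let an explicit `PrimaryKeyConstraint` fix the column order:

```python
from sqlalchemy import Column, PrimaryKeyConstraint, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class MyModel(Base):
    __tablename__ = "my_model"
    period = Column(String, nullable=False)  # stand-in for tsrange
    key = Column(String, nullable=False)
    __table_args__ = (
        # The constraint, not column definition order, controls PK order:
        PrimaryKeyConstraint("key", "period"),
    )

# The primary key lists "key" first despite "period" being defined first.
assert [c.name for c in MyModel.__table__.primary_key.columns] == ["key", "period"]
```

A mixin could build this `PrimaryKeyConstraint` in a `declared_attr`-decorated `__table_args__`, collecting the column names it wants in the desired order.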


cheers,

Chris



[sqlalchemy] changing the order of columns in primary keys, including when generated by mixins

2016-09-22 Thread Chris Withers

Hi All,

How do you control the order in which columns are added to a 
multi-column primary key when using the declarative primary_key=True syntax?

How about when one of those columns comes from a mixin?

Context is in the mail below, any help gratefully received!

Chris


 Forwarded Message 
Subject: 	Re: [GENERAL] performance problems with bulk inserts/updates 
on tsrange with gist-based exclude constrains

Date:   Mon, 19 Sep 2016 09:41:33 -0700
From:   Jeff Janes <jeff.ja...@gmail.com>
To: Chris Withers <ch...@simplistix.co.uk>
CC: pgsql-general <pgsql-gene...@postgresql.org>



On Fri, Sep 16, 2016 at 2:01 AM, Chris Withers <ch...@simplistix.co.uk 
<mailto:ch...@simplistix.co.uk>> wrote:


   Hi All,

   I have quite a few tables that follow a pattern like this:

 Table "public.my_model"
  Column |   Type| Modifiers
   +---+---
  period | tsrange   | not null
  key| character varying | not null
  value  | integer   |
   Indexes:
 "my_model_pkey" PRIMARY KEY, btree (period, key)
 "my_model_period_key_excl" EXCLUDE USING gist (period WITH &&,
   key WITH =)
   Check constraints:
 "my_model_period_check" CHECK (period <> 'empty'::tsrange)


Try swapping the order of the columns in the exclude constraint.  You 
want the more selective criterion to appear first in the 
index/constraint.  Presumably "key with =" is the most selective, 
especially if many of your periods are unbounded.
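In SQLAlchemy terms, Jeff's suggested reordering can be expressed with the PostgreSQL dialect's `ExcludeConstraint` (a sketch using the thread's `my_model` table; compiling the DDL without a database just to show the generated constraint):

```python
from sqlalchemy import Column, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE
from sqlalchemy.schema import CreateTable

metadata = MetaData()

# "key WITH =" listed first: the more selective criterion leads the gist index
my_model = Table(
    'my_model', metadata,
    Column('period', TSRANGE(), nullable=False),
    Column('key', String, nullable=False),
    ExcludeConstraint(('key', '='), ('period', '&&'), using='gist'),
)

ddl = str(CreateTable(my_model).compile(dialect=postgresql.dialect()))
print(ddl)
```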


Cheers,

Jeff

--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Row versioning not working with multi-table polymorphism

2016-09-20 Thread Chris Wilson
Dear Mike,

Thank you very much for the fast reply and workaround!

I think it might have been us that originally requested the server-side 
versioning. We would like to continue using it because we have some large 
tables that don't really need an additional ID column since the database 
already provides one for us. However it had occurred to me and we might use 
that approach if we have problems with the event listener approach.

Thanks, Chris.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Row versioning not working with multi-table polymorphism

2016-09-20 Thread Chris Wilson
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\orm\unitofwork.py, line 532
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\orm\persistence.py, 
line 170, in save_obj : mapper, table, update
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\orm\persistence.py, 
line 672, in _emit_update_statements : execute(statement, params)
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\engine\base.py, line 
914, in execute : return meth(self, multiparams, params)
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\sql\elements.py, line 
323, in _execute_on_connection : return 
connection._execute_clauseelement(self, multiparams, params)
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\engine\base.py, line 
1010, in _execute_clauseelement : compiled_sql, distilled_params
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\engine\base.py, line 
1146, in _execute_context : context
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\engine\base.py, line 
1341, in _handle_dbapi_exception : exc_info
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\util\compat.py, line 
199, in raise_from_cause : reraise(type(exception), exception, tb=exc_tb)
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\engine\base.py, line 
1139, in _execute_context : context
File sqlalchemy-1.0.9-py2.7-win-amd64.egg\sqlalchemy\engine\default.py, 
line 450, in do_execute : cursor.execute(statement, parameters)
ProgrammingError: (psycopg2.ProgrammingError) syntax error at or near 
"WHERE"

LINE 1: UPDATE animal SET WHERE animal.id = 2 AND animal.xmin = '63...

^

[SQL: 'UPDATE animal SET WHERE animal.id = %(animal_id)s AND animal.xmin = 
%(animal_xmin)s RETURNING animal.xmin'] [parameters: {'animal_id': 2, 
'animal_xmin': '635104190'}]


I think the problem is in sqlalchemy\orm\persistence.py, in 
the _collect_update_commands() function. It tries to generate a dict of 
columns that need updating, but none of the columns in the base table have 
changed, so the dict is empty. Normally it would skip updating this table 
here:

elif not (params or value_params):
continue

But because versioning is enabled, it takes this branch instead:

if update_version_id is not None and \
mapper.version_id_col in mapper._cols_by_table[table]:

and then tries to execute a null UPDATE just in order to get the new 
version number, which fails. 

It would work if we did a dummy update, for example assigning the PK to 
itself, as this would change the version number (xmin) returned by the 
server. Keeping the version number on the base table allows it to be shared 
by all polymorphic subclasses.
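For reference, the server-side version counter mapping under discussion looks roughly like this (a sketch following the pattern in SQLAlchemy's versioning documentation, with names taken from the thread's `animal` table; assumes a current SQLAlchemy on PostgreSQL):

```python
from sqlalchemy import Column, FetchedValue, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Animal(Base):
    __tablename__ = 'animal'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    # xmin is PostgreSQL's system column; system=True keeps it out of
    # the CREATE TABLE statement while still making it mappable
    xmin = Column('xmin', String, system=True, server_default=FetchedValue())
    __mapper_args__ = {
        'version_id_col': xmin,
        'version_id_generator': False,  # the server generates the value
    }
```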

Thanks in advance for your consideration,
Chris Wilson.



Re: [sqlalchemy] Turning SAWarnings into exceptions

2016-09-15 Thread Chris Withers
Right, but then a bunch of other warnings (ImportWarning?!, 
DeprecationWarning, etc.) become errors and stop execution before it even 
reaches the code which might be causing the SAWarning.


It's really quite disappointing that Python's warning mechanisms don't 
report a full traceback...
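A more selective filter avoids that problem by escalating only SQLAlchemy's own warnings (a sketch; assumes a SQLAlchemy version where `SAWarning` lives in `sqlalchemy.exc`):

```python
import warnings

from sqlalchemy.exc import SAWarning

# escalate only SQLAlchemy's warnings to errors,
# leaving ImportWarning, DeprecationWarning, etc. untouched
warnings.filterwarnings("error", category=SAWarning)
```

The equivalent environment-variable form would be roughly `PYTHONWARNINGS=error::sqlalchemy.exc.SAWarning`, since the filter syntax accepts a dotted path to the warning class.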



On 15/09/2016 13:43, Mike Bayer wrote:


import warnings

warnings.simplefilter("error")



On 09/15/2016 08:07 AM, Chris Withers wrote:

Hi All,

How can I turn SAWarnings into exceptions?

I'm struggling with what to put into the PYTHONWARNINGS environment
variable :-S

cheers,

Chris







[sqlalchemy] Turning SAWarnings into exceptions

2016-09-15 Thread Chris Withers

Hi All,

How can I turn SAWarnings into exceptions?

I'm struggling with what to put into the PYTHONWARNINGS environment 
variable :-S


cheers,

Chris



Re: [sqlalchemy] reflecting stored procedure names, enums, and the like

2016-08-16 Thread Chris Withers

Gotcha, would there be any interest in adding StoredProcedure objects?

For the immediate need of "drop everything", your suggestion sounds like 
the best way forward, but I wonder if solving the more generic 
"reflecting stored procedures" problem might help with other things...


Have any tools emerged for comparing two databases' schemas? I guess the 
stuff used to auto-generate migrations must largely solve this problem?


Sorry, I'm a bit rambly this morning...


On 15/08/2016 14:38, Mike Bayer wrote:



We should have reflection for PG ENUM in there already.   SP names and 
such, there's no "stored procedure" object, so if you wanted it to 
look like the thing you see here you'd make a StoredProcedure() class 
that's a subclass of DDLElement, etc.  I don't know that there's a 
hook to add new methods to the Inspector() as of yet though.  If 
you're just looking for a "drop everything" method I'd probably forego 
all that boilerplate and just have "drop_pg_sps", "drop_mysql_sps", 
etc. functions and key them off of engine.name in a dictionary or 
something.
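Mike's dictionary-dispatch suggestion might look something like this (a sketch; the catalog queries are illustrative, not tested against every server version):

```python
from sqlalchemy import text

def drop_postgres_sps(conn):
    # illustrative: enumerate and drop functions in the public schema
    for (signature,) in conn.execute(text(
        "SELECT oid::regprocedure FROM pg_proc "
        "WHERE pronamespace = 'public'::regnamespace"
    )):
        conn.execute(text('DROP FUNCTION %s CASCADE' % signature))

def drop_mysql_sps(conn):
    for (name,) in conn.execute(text(
        "SELECT routine_name FROM information_schema.routines "
        "WHERE routine_schema = DATABASE()"
    )):
        conn.execute(text('DROP PROCEDURE `%s`' % name))

# keyed off engine.name, as suggested above
DROPPERS = {'postgresql': drop_postgres_sps, 'mysql': drop_mysql_sps}

def drop_stored_procedures(engine):
    dropper = DROPPERS.get(engine.name)
    if dropper is not None:
        with engine.begin() as conn:
            dropper(conn)
```

Dialects without an entry in the dictionary are simply a no-op, which keeps the "drop everything" routine database-agnostic at the call site.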




On 08/15/2016 06:16 AM, Chris Withers wrote:

Hi All,

What's the best way (preferably database agnostic) to reflect stored
procedure names, enums names and other non-table-specific items?

I'm trying to improve this function, which is based on Mike's original
recipe:

https://github.com/Mortar/mortar_rdb/blob/eb99d549be02643d4d670db2ee52b93b0c386fb4/mortar_rdb/__init__.py#L134 




I'd like to expand it to basically delete as much as possible in a
database to give a clean starting ground for tests...

Chris







[sqlalchemy] reflecting stored procedure names, enums, and the like

2016-08-15 Thread Chris Withers

Hi All,

What's the best way (preferably database agnostic) to reflect stored 
procedure names, enums names and other non-table-specific items?


I'm trying to improve this function, which is based on Mike's original 
recipe:


https://github.com/Mortar/mortar_rdb/blob/eb99d549be02643d4d670db2ee52b93b0c386fb4/mortar_rdb/__init__.py#L134

I'd like to expand it to basically delete as much as possible in a 
database to give a clean starting ground for tests...


Chris



[sqlalchemy] chide 2.0.1 released!

2016-06-16 Thread Chris Withers

Hi All,

I'm pleased to announce a new release of chide, a tiny library for 
building sample objects.


This release includes the following:

- Fix nasty bug when using with sqlalchemy where related objects could
  get added to the session even though they were never requested, as a
  result of a backref on a third model.

Full docs are here:

http://chide.readthedocs.org/

Downloads are here:

https://pypi.python.org/pypi/chide

Compatible with Python 2.7, 3.4+:

https://travis-ci.org/cjw296/chide

Any problems, please give me a shout on the simplis...@googlegroups.com 
list!


cheers,

Chris



[sqlalchemy] Re: plain text versus html mail

2016-04-22 Thread Chris Withers

On 22/04/2016 03:04, Ben Finney wrote:

Chris Withers <ch...@simplistix.co.uk> writes:


[no text body]


When posting to mailing lists, please be sure to have a plain text body
with the full information. Not everyone wants to enable HTML in email
just to read a software announcement :-)


I used to have this view, but I'm afraid the world has moved on. I think 
plain text messages are more likely to be treated as spam than HTML now, 
and I appreciate the ability to format mails, even if Thunderbird does 
then butcher that formatting during delivery.


I dunno, what's the consensus?

Chris



[sqlalchemy] chide 2.0.0 released! - sample objects for tests

2016-04-21 Thread Chris Withers

Hi All,

I've just released a new version of this tiny
library for making sample objects for testing.

Here's a SQLAlchemy example:

class Parent(Base):
__tablename__ = 'parent'
id = Column(Integer, primary_key=True)
child_id = Column(Integer, ForeignKey('child.id'))
child = relationship('Child')

class Child(Base):
__tablename__ = 'child'
id = Column(Integer, primary_key=True)
value = Column(Integer)
Now we can build a source of sample data:

from chide import Collection
from chide.sqlalchemy import Set

data = Collection({
Parent: {'id': 1, 'child': Child},
Child: {'id': 3, 'value': 42}
})

And finally, we can use it:

>>> session = Session()
>>> samples = Set(data)
>>> session.add(samples.get(Parent))
>>> session.commit()
>>> session.query(Parent).one()

>>> _.child

The big change was realising that Collections tend to be module-level, so having them own the set of unique instances is bad.
So, that's now split out into a Set object, which should be instantiated one per test!

Documentation is here: http://chide.readthedocs.org/en/latest/
Development happens here: https://github.com/cjw296/chide/

cheers,



Chris







Re: [sqlalchemy] where is InstanceState.key set?

2016-04-21 Thread Chris Withers
I ended up going with 
https://github.com/cjw296/chide/blob/master/chide/sqlalchemy.py#L26 
having read those.

Sounds about right?

On 21/04/2016 00:23, Mike Bayer wrote:


on the load side in loading.py and on the persistence side in 
session.py _register_newly_persistent.




On 04/20/2016 01:28 PM, Chris Withers wrote:

Hey All,

Where is InstanceState.key set?

I'm looking for the code that builds the key used in the identity_map of
the session. Turns out to be not so easy to find...






Chris







[sqlalchemy] where is InstanceState.key set?

2016-04-20 Thread Chris Withers

Hey All,

Where is InstanceState.key set?

I'm looking for the code that builds the key used in the identity_map of 
the session. Turns out to be not so easy to find...


Chris



[sqlalchemy] tiny new tool for making sample objects for tests

2016-04-14 Thread Chris Withers

Hi All,

I've just released this tiny
library for making sample objects for testing.

Here's a SQLAlchemy example:

class Parent(Base):
__tablename__ = 'parent'
id = Column(Integer, primary_key=True)
child_id = Column(Integer, ForeignKey('child.id'))
child = relationship('Child')

class Child(Base):
__tablename__ = 'child'
id = Column(Integer, primary_key=True)
value = Column(Integer)
Now we can build a source of sample data:

from chide.sqlalchemy import Collection

samples = Collection({
Parent: {'id': 1, 'child': Child},
Child: {'id': 3, 'value': 42}
})
And finally, we can use it:

>>> session = Session()
>>> session.add(samples.make(Parent))
>>> session.commit()
>>> session.query(Parent).one()

>>> _.child

Hope this helps, if anyone finds bugs or has suggestions, please mail me!

cheers,

Chris






Re: [sqlalchemy] Multiple many to one relationships to same table

2016-03-19 Thread 'Chris Norman' via sqlalchemy

Hi,
You're completely correct - it's a bit stupid if all I'm using is SQL.

Actually the rows in the table are just for storing things reliably, and 
all the magic happens with python classes which are as you describe.


Thank you for the help, I'll give it a go.

On 15/03/2016 20:40, Christopher Lee wrote:
A relationship usually looks at the foreign keys on the tables you 
specify and constructs the queries appropriately. The error you are 
getting happens because there are multiple foreign keys between the 
tables (in this case, the same table referencing itself... shudder...).


You need to tell each relationship which foreign key to use.

e.g.,:
contents = relationship('DBObject', foreign_keys='DBObject.location_id', ...)

-

Technical problems aside, your database schema has some pretty serious 
flaws.  Having a single "objects" table that can relate to itself in 
all the ways in which things can relate to other things is a 
logistical nightmare, both in terms of performance and clarity.


For example, your schema looks suspiciously like people, places and 
things are all DBObjects.  A place has an id, and x, y, z 
coordinates.  A thing in that location would have a foreign key to the 
first record.  Well, what does it mean if a thing has a location_id to 
one set of coordinates, but has another set of coordinates in its x, 
y, and z values? Similarly, it looks like things have an owner_id that 
references another object, but what would it mean for a place to have 
an owner, or a person?  How can you tell what type of thing something 
is?  How would you query for all the people, or all the things?


A more sane schema might be something like:

class Person(base):
person_id = Column(Integer, primary_key=True)

class Place(base):
place_id = Column(Integer, primary_key=True)
x = Column(Float)
y = Column(Float)
z = Column(Float)

class Thing(base):
thing_id = Column(Integer, primary_key=True)
location_id = Column(Integer, ForeignKey(Place.place_id))
owner_id = Column(Integer, ForeignKey(Person.person_id))

etc.
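If the single self-referential table is kept anyway, the two relationships can be disambiguated roughly like this (a runnable sketch; names follow the thread, `remote_side=[id]` makes each relationship a many-to-one, and the `backref` names are illustrative):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class DBObject(Base):
    __tablename__ = 'db_objects'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    location_id = Column(Integer, ForeignKey('db_objects.id'))
    owner_id = Column(Integer, ForeignKey('db_objects.id'))
    # foreign_keys picks which FK column each relationship uses;
    # remote_side=[id] points each one at the "parent" row
    location = relationship('DBObject', remote_side=[id],
                            foreign_keys=[location_id], backref='contents')
    owner = relationship('DBObject', remote_side=[id],
                         foreign_keys=[owner_id], backref='owned_objects')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
room = DBObject(name='room')
sword = DBObject(name='sword', location=room, owner=room)
session.add_all([room, sword])
session.commit()
print([o.name for o in room.contents])  # prints ['sword']
```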



On Tue, Mar 15, 2016 at 2:12 AM, 'Chris Norman' via sqlalchemy 
<sqlalchemy@googlegroups.com <mailto:sqlalchemy@googlegroups.com>> wrote:


Hi,

On 14/03/2016 15:19, Mike Bayer wrote:



On 03/14/2016 11:15 AM, 'Chris Norman' via sqlalchemy wrote:

Hi all,
I've tried googling for this, and I get nothing. I have a
table to store
data about objects. Each object should have a location
property which
links back to the same table. Conversely, each object
should have a
contents property which shows all objects which have their
location set
to this object.


documentation for multiple relationships to the same table is at:


http://docs.sqlalchemy.org/en/rel_1_0/orm/join_conditions.html#handling-multiple-join-paths


hope this helps


Thanks for the docs, however I've read this page loads over the
past few days - since my google searches keep turning it up - and
it seemed like the right place to be.

I can't find anything in there that helps me. This isn't to say
it's not there, but my knowledge of SQL is fairly limited, and my
understanding of things like the relationship function aren't very
in depth.

Which bits should I specifically read to help?

Sorry if my questions seem a little stupid - as I say, databases
are something I use rather than understand. Recently I've been
using Django where everything is handled for me, so I'm still
struggling to come to grips with relationships and how they're
made up in the real (Django-free) world.

Thanks again.






Re: [sqlalchemy] Multiple many to one relationships to same table

2016-03-15 Thread 'Chris Norman' via sqlalchemy

Hi,

On 14/03/2016 15:19, Mike Bayer wrote:



On 03/14/2016 11:15 AM, 'Chris Norman' via sqlalchemy wrote:

Hi all,
I've tried googling for this, and I get nothing. I have a table to store
data about objects. Each object should have a location property which
links back to the same table. Conversely, each object should have a
contents property which shows all objects which have their location set
to this object.


documentation for multiple relationships to the same table is at:

http://docs.sqlalchemy.org/en/rel_1_0/orm/join_conditions.html#handling-multiple-join-paths 



hope this helps


Thanks for the docs, however I've read this page loads over the past few 
days - since my google searches keep turning it up - and it seemed like 
the right place to be.


I can't find anything in there that helps me. This isn't to say it's not 
there, but my knowledge of SQL is fairly limited, and my understanding 
of things like the relationship function aren't very in depth.


Which bits should I specifically read to help?

Sorry if my questions seem a little stupid - as I say, databases are 
something I use rather than understand. Recently I've been using Django 
where everything is handled for me, so I'm still struggling to come to 
grips with relationships and how they're made up in the real 
(Django-free) world.


Thanks again.




[sqlalchemy] Multiple many to one relationships to same table

2016-03-14 Thread 'Chris Norman' via sqlalchemy
Hi all,
I've tried googling for this, and I get nothing. I have a table to store 
data about objects. Each object should have a location property which links 
back to the same table. Conversely, each object should have a contents 
property which shows all objects which have their location set to this 
object.

This is the code I have so far:

class DBObject(Base):
 __tablename__ = 'db_objects'
 id = Column(Integer, primary_key = True)
 name = Column(String)
 description = Column(String)
 location_id = Column(Integer, ForeignKey('db_objects.id'))
 owner_id = Column(Integer, ForeignKey('db_objects.id'))
 contents = relationship('DBObject', remote_side = location_id, backref = 
backref('location', remote_side = [location_id]))
 owned_objects = relationship('DBObject', remote_side = owner_id, backref = 
backref('owner', remote_side = [owner_id]))
 x = Column(Float)
 y = Column(Float)
 z = Column(Float)
 max_hp = Column(Float)
 damage = Column(Float)
 properties = Column(LargeBinary)

When I issue Base.metadata.create_all I get this:

Traceback (most recent call last):
  File "C:\python35\lib\site-packages\sqlalchemy\orm\relationships.py", 
line 2055, in _determine_joins
consider_as_foreign_keys=consider_as_foreign_keys
  File "", line 2, in join_condition
  File "C:\python35\lib\site-packages\sqlalchemy\sql\selectable.py", line 
828, in _join_condition
a, b, constraints, consider_as_foreign_keys)
  File "C:\python35\lib\site-packages\sqlalchemy\sql\selectable.py", line 
918, in _joincond_trim_constraints
"join explicitly." % (a.description, b.description))
sqlalchemy.exc.AmbiguousForeignKeysError: Can't determine join between 
'db_objects' and 'db_objects'; tables have more than one foreign key 
constraint relationship between them. Please specify the 'onclause' of this 
join explicitly.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 30, in 
start()
  File "C:\Users\Chrisn Norman\Dropbox\SRC\mindspace_server\server.py", 
line 15, in start
db.initialise()
  File "C:\Users\Chrisn Norman\Dropbox\SRC\mindspace_server\db.py", line 
57, in initialise
for row in session.query(DBObject):
  File "C:\python35\lib\site-packages\sqlalchemy\orm\session.py", line 
1272, in query
return self._query_cls(entities, self, **kwargs)
  File "C:\python35\lib\site-packages\sqlalchemy\orm\query.py", line 110, 
in __init__
self._set_entities(entities)
  File "C:\python35\lib\site-packages\sqlalchemy\orm\query.py", line 120, 
in _set_entities
self._set_entity_selectables(self._entities)
  File "C:\python35\lib\site-packages\sqlalchemy\orm\query.py", line 150, 
in _set_entity_selectables
ent.setup_entity(*d[entity])
  File "C:\python35\lib\site-packages\sqlalchemy\orm\query.py", line 3421, 
in setup_entity
self._with_polymorphic = ext_info.with_polymorphic_mappers
  File "C:\python35\lib\site-packages\sqlalchemy\util\langhelpers.py", line 
747, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
  File "C:\python35\lib\site-packages\sqlalchemy\orm\mapper.py", line 1893, 
in _with_polymorphic_mappers
configure_mappers()
  File "C:\python35\lib\site-packages\sqlalchemy\orm\mapper.py", line 2768, 
in configure_mappers
mapper._post_configure_properties()
  File "C:\python35\lib\site-packages\sqlalchemy\orm\mapper.py", line 1710, 
in _post_configure_properties
prop.init()
  File "C:\python35\lib\site-packages\sqlalchemy\orm\interfaces.py", line 
183, in init
self.do_init()
  File "C:\python35\lib\site-packages\sqlalchemy\orm\relationships.py", 
line 1629, in do_init
self._setup_join_conditions()
  File "C:\python35\lib\site-packages\sqlalchemy\orm\relationships.py", 
line 1704, in _setup_join_conditions
can_be_synced_fn=self._columns_are_mapped
  File "C:\python35\lib\site-packages\sqlalchemy\orm\relationships.py", 
line 1972, in __init__
self._determine_joins()
  File "C:\python35\lib\site-packages\sqlalchemy\orm\relationships.py", 
line 2099, in _determine_joins
% self.prop)
sqlalchemy.exc.AmbiguousForeignKeysError: Could not determine join 
condition between parent/child tables on relationship DBObject.contents - 
there are multiple foreign key paths linking the tables.  Specify the 
'foreign_keys' argument, providing a list of those columns which should be 
counted as containing a foreign key reference to the parent table.

I'm using latest sqlalchemy from pip and Python 3.5.

Any ideas on how to fix this?

Cheers,



[sqlalchemy] Stand-alone Sequences are not created by create_all

2016-01-27 Thread Chris Macklin
If I declare a Sequence at the class level in a model class declaration 
using declarative (SQLAlchemy 0.9.10 on postgres), the sequence is not 
created by create_all unless the sequence is explicitly associated with a 
Column.  My use case is a column which stores a URI; if a row is inserted 
which already has a URI, it is passed in.  If an insert doesn't come with a 
URI, we need to mint a new one, and thus call a default function which 
retrieves the next value of a sequence that is exclusively used for minting 
these URIs, and appends it to a namespace before adding that value to the 
column.  Thus, the sequence isn't actually associated with a column, but 
should still be table-level and created/destroyed when the table is added 
or dropped.

All of the workarounds I've found so far are somewhat distasteful:
- creating a dummy table, associating the sequence with a column in that 
table, and never inserting into it (not great as the sequence is detached 
from the table it is logically associated with, plus the chaff of a totally 
meaningless table sitting in my db)
- manually calling create on the sequence with the engine; this doesn't 
really work, as all of this code is sitting in a larger framework in which 
sqlalchemy is executing and it would require significant alterations to the 
framework to pass these sequences all the way out to the top level where 
create_all is called.

Any suggestions?  Am I missing something obvious here?
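One documented approach is to associate the standalone `Sequence` with the `MetaData` itself, so that `create_all()`/`drop_all()` manage it alongside the tables without any dummy column. A minimal sketch (compiling the DDL for PostgreSQL without a connection, just to show what would be emitted):

```python
from sqlalchemy import MetaData, Sequence
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateSequence

metadata = MetaData()

# passing metadata= ties the sequence to the MetaData, so create_all()
# and drop_all() will create and drop it even with no owning column
uri_seq = Sequence('uri_seq', metadata=metadata)

ddl = str(CreateSequence(uri_seq).compile(dialect=postgresql.dialect()))
print(ddl)
```

The default function minting URIs can then call `conn.execute(uri_seq)` (or use the sequence's next-value expression) to fetch the next value.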



Re: [sqlalchemy] Stand-alone Sequences are not created by create_all

2016-01-27 Thread Chris Macklin
Yes indeed, that does the trick.  Thank you!

I would kindly suggest referencing this capability from higher up on that
page.  It seems relevant for the section discussing that sequences
associated with a table will be created/destroyed with the table, but this
is the key piece of information that makes it clear *how* to achieve this
association.
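For anyone landing on this thread later, here is a minimal runnable sketch of the pattern Simon points to: attaching a standalone Sequence to the MetaData (Base.metadata under declarative) so that create_all()/drop_all() manage it. The sequence name is illustrative.

```python
from sqlalchemy import MetaData, Sequence
from sqlalchemy.schema import CreateSequence

metadata = MetaData()  # under declarative, this would be Base.metadata

# metadata= attaches the sequence to the MetaData, so create_all() /
# drop_all() emit CREATE / DROP SEQUENCE for it even though no Column
# references it.
uri_seq = Sequence("uri_seq", metadata=metadata)

# The DDL that create_all() will produce on a sequence-supporting backend:
print(str(CreateSequence(uri_seq)))
```

A default function that mints URIs can then pull from `uri_seq` without any column ever being formally tied to the sequence.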

On Wed, Jan 27, 2016 at 11:49 AM, Simon King <si...@simonking.org.uk> wrote:

>
> > On 27 Jan 2016, at 18:59, Chris Macklin <chris.mack...@gmail.com> wrote:
> >
> > If I declare a Sequence at the class level in a model class declaration
> using declarative (SQLAlchemy 0.9.10 on postgres), the sequence is not
> created by create_all unless the sequence is explicitly associated with a
> Column.  My use case is a column which stores a URI; if a row is inserted
> which already has a URI, it is passed in.  If an insert doesn't come with a
> URI, we need to mint a new one, and thus call a default function which
> retrieves the next value of a sequence that is exclusively used for minting
> these URIs, and appends it to a namespace before adding that value to the
> column.  Thus, the sequence isn't actually associated with a column, but
> should still be table-level and created/destroyed when the table is added
> or dropped.
> >
> > All of the workarounds I've found so far are somewhat distasteful:
> > - creating a dummy table, associating the sequence with a column in that
> table, and never inserting into it (not great as the sequence is detached
> from the table it is logically associated with, plus the chaff of a totally
> meaningless table sitting in my db)
> > - manually calling create on the sequence with the engine; this doesn't
> really work, as all of this code is sitting in a larger framework in which
> sqlalchemy is executing and it would require significant alterations to the
> framework to pass these sequences all the way out to the top level where
> create_all is called.
> >
> > Any suggestions?  Am I missing something obvious here?
>
> According to:
>
>
> http://docs.sqlalchemy.org/en/rel_1_0/core/defaults.html#sqlalchemy.schema.Sequence.params.metadata
>
> “””
> Additionally, the appropriate CREATE SEQUENCE/ DROP SEQUENCE DDL commands
> will be emitted corresponding to this Sequence when MetaData.create_all()
> andMetaData.drop_all() are invoked.
> “””
>
> For this to work, the Sequence needs to be attached to the Metadata, which
> you can access as Base.metadata.
>
> Does that make any difference?
>
> Simon
>



Re: [sqlalchemy] Re: Connection pools

2016-01-06 Thread Chris Wood
The project is actually using mod_wsgi (presumably with the prefork MPM) - 
this seems to explain the rapid increase in the number of connections.  It's 
been suggested that implementing SQLRelay could be a solution.  Is this 
sensible? Are there any (better?) alternatives that I could look at?

Cheers,
Chris


On Tuesday, 22 December 2015 12:02:40 UTC, Chris Wood wrote:
>
>
>
> On Tuesday, 22 December 2015 01:53:59 UTC, Michael Bayer wrote:
>>
>>
>>
>> On 12/21/2015 07:44 PM, Chris Wood wrote: 
>> > Ah, ok - thanks for the explanation - this is different to how I'd been 
>> > led to believe it worked! However, I know that even when I'm the only 
>> > person testing the application, I'm still getting a large number of 
>> > connections. Is there a likely explanation why? 
>>
>>
>> there are three categories of why an application would have lots more 
>> connections than what one has set for a given Engine. 
>>
>> The most common is that the application is making use of child 
>> processes, meaning it uses either Python multiprocessing, os.fork(), or 
>> is running in a multi-process container such as mod_wsgi under Apache 
>> using the prefork MPM.   When Python forks a child process, an existing 
>> Engine in the parent process is essentially copied to a new one in the 
>> child that now refers to an independent pool of connections. 
>>
>> The second, also pretty common reason is that it is a common beginner 
>> mistake to confuse the create_engine() call for one that is used to 
>> procure a database connection.  In this situation, the code will have 
>> routines that clearly wish to connect to the database once, then leave, 
>> but you'll see the create_engine() call being used each time a new 
>> connection is desired, and often you'll see the block ending with an 
>> engine.dispose() call (but not always).  As the Engine object is the 
>> home for a connection pool, you are essentially creating a whole new 
>> connection pool for each actual database request. 
>>
>> The third, and far less likely scenario, is that there's only one Engine 
>> in play, but either the connection.detach() or the engine.dispose() API 
>> is being abused, such that connections are de-associated with the Engine 
>> but are not being closed.   This is unlikely because those detached 
>> connections are implicitly closed once they are garbage collected, though 
>> in the case of cx_Oracle this might not work very quickly or reliably. 
>>
>> For the first two scenarios, pool logging won't indicate much of 
>> anything; inspection and understanding of the code and its process model 
>> would be needed. For the third, again code inspection looking for any 
>> unusual patterns in use with engines or connections, especially calls to 
>> engine.dispose() which should never be used in an ordinary application 
>> as well as calls to connection.detach(). 
>>
>>
> This information is really helpful, thanks. At the moment, I think that 
> the second explanation is probably most likely, but I'll go and see if I 
> can work out what's going on properly, and if the code is using that 
> technique then it gives me somewhere to start debugging... 
>  
>
>>
>>
>> > 
>> > On Monday, 21 December 2015 18:51:25 UTC, Jonathan Vanasco wrote: 
>> > 
>> > The sizes for the connection pool are for each instance of your 
>> > application.  If you have a 10-connection pool and you are running 10 
>> > instances of your application on the server, you'll easily have 100 
>> > connections.  If you're running 1 instance that forks, each fork 
>> > will have its own pool (if correctly set up).  Search the docs and 
>> > FAQ for "fork" for more info. 
>> > 
>> > I don't have time to respond to the logging stuff now. Hopefully 
>> > someone else will. 
>> > 
>>
>



Re: [sqlalchemy] in-place modification of queries?

2015-12-30 Thread Chris Withers

Mike,

Would you be interested in a pull request to add a Query and/or
Session option for this?

My use case right now is putting the query in a registry so other
callables down the line can refine the search criteria, but I may
well not being alone in thinking that:


query = self.session.query(Club)
query.filter(Club.name.ilike(club))
query.order_by('name')
possible = query.all()

...is easier to read than:

possible = self.session.query(Club)\
.filter(Club.name.ilike(club))\
.order_by('name')\
.all()
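For what it's worth, the first style can be approximated today with a small wrapper - this is a sketch, not a SQLAlchemy API; it simply re-captures the new object that each generative call returns:

```python
class InPlaceQuery:
    """Wrap a generative object so its methods appear to mutate in place.

    A sketch only: whenever a method returns a new instance of the wrapped
    type, the wrapper swaps in that instance, so calls like
    ``q.filter(...)`` need no reassignment.
    """

    def __init__(self, query):
        self._query = query

    def __getattr__(self, name):
        attr = getattr(self._query, name)
        if not callable(attr):
            return attr

        def call(*args, **kwargs):
            result = attr(*args, **kwargs)
            if isinstance(result, type(self._query)):
                self._query = result  # capture the generated copy
                return self
            return result  # terminal methods (.all(), .count()) pass through
        return call
```

Wrapping `session.query(Club)` in `InPlaceQuery` would then let the four lines above run unchanged, with terminal calls such as `.all()` still returning their results directly.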

cheers,

Chris

On 28/12/2015 15:22, Mike Bayer wrote:


  not through the current API, no.  you'd need to write some modifier to
the @generative decorator and basically tinker with things to make it do
that.



On 12/28/2015 06:18 AM, Chris Withers wrote:

  
Hi All,

Is there anything I can do to make a Query instance non-generative?

query = session.query(Foo)
query.filter(Foo.x==1)

...and have the query actually be modified rather than returning a new
query with the clause added?

cheers,

Chris


[sqlalchemy] in-place modification of queries?

2015-12-28 Thread Chris Withers

Hi All,

Is there anything I can do to make a Query instance non-generative?

query = session.query(Foo)
query.filter(Foo.x==1)

...and have the query actually be modified rather than returning a new 
query with the clause added?


cheers,

Chris



Re: [sqlalchemy] can I use tables generated by ORM via sql expression language for select queries

2015-12-27 Thread Chris Withers

On 27/12/2015 04:44, Krishnakant wrote:

Hi,
The subject says it all.
I have classes inheriting the base and thus my tables are created 
using ORM.
But I wish to use sql expression language for queries, particularly 
bulk selects for faster performance.

So is this possible and how?


Sure - you can get the SQL expression language-compatible Table from 
your Base-inheriting class as follows:


YourClass.__table__
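To make that concrete, a short runnable sketch (the model and query here are illustrative, not from the original post):

```python
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Club(Base):  # illustrative model
    __tablename__ = "club"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# __table__ is the Core Table object behind the ORM class, usable
# directly in SQL expression language constructs:
club_table = Club.__table__
stmt = select(club_table.c.name).where(club_table.c.name.like("A%"))
print(stmt)
```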

cheers,

Chris



Re: [sqlalchemy] Re: Connection pools

2015-12-22 Thread Chris Wood


On Tuesday, 22 December 2015 01:53:59 UTC, Michael Bayer wrote:
>
>
>
> On 12/21/2015 07:44 PM, Chris Wood wrote: 
> > Ah, ok - thanks for the explanation - this is different to how I'd been 
> > led to believe it worked! However, I know that even when I'm the only 
> > person testing the application, I'm still getting a large number of 
> > connections. Is there a likely explanation why? 
>
>
> there are three categories of why an application would have lots more 
> connections than what one has set for a given Engine. 
>
> The most common is that the application is making use of child 
> processes, meaning it uses either Python multiprocessing, os.fork(), or 
> is running in a multi-process container such as mod_wsgi under Apache 
> using the prefork MPM.   When Python forks a child process, an existing 
> Engine in the parent process is essentially copied to a new one in the 
> child that now refers to an independent pool of connections. 
>
> The second, also pretty common reason is that it is a common beginner 
> mistake to confuse the create_engine() call for one that is used to 
> procure a database connection.  In this situation, the code will have 
> routines that clearly wish to connect to the database once, then leave, 
> but you'll see the create_engine() call being used each time a new 
> connection is desired, and often you'll see the block ending with an 
> engine.dispose() call (but not always).  As the Engine object is the 
> home for a connection pool, you are essentially creating a whole new 
> connection pool for each actual database request. 
>
> The third, and far less likely scenario, is that there's only one Engine 
> in play, but either the connection.detach() or the engine.dispose() API 
> is being abused, such that connections are de-associated with the Engine 
> but are not being closed.   This is unlikely because those detached 
> connections are implicitly closed once they are garbage collected, though 
> in the case of cx_Oracle this might not work very quickly or reliably. 
>
> For the first two scenarios, pool logging won't indicate much of 
> anything; inspection and understanding of the code and its process model 
> would be needed. For the third, again code inspection looking for any 
> unusual patterns in use with engines or connections, especially calls to 
> engine.dispose() which should never be used in an ordinary application 
> as well as calls to connection.detach(). 
>
>
This information is really helpful, thanks. At the moment, I think that the 
second explanation is probably most likely, but I'll go and see if I can 
work out what's going on properly, and if the code is using that technique 
then it gives me somewhere to start debugging... 
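For anyone comparing against their own code, the second scenario and its usual fix look roughly like this (a sketch; the URL and function names are illustrative):

```python
from sqlalchemy import create_engine, text

# Anti-pattern: a fresh Engine - and therefore a fresh pool - per call.
def fetch_bad(url, sql):
    engine = create_engine(url)      # new connection pool every request
    with engine.connect() as conn:
        return conn.execute(text(sql)).fetchall()

# Fix: create the Engine once per process and reuse it everywhere.
ENGINE = create_engine("sqlite://")  # placeholder URL

def fetch(sql):
    with ENGINE.connect() as conn:   # borrows from / returns to one pool
        return conn.execute(text(sql)).fetchall()
```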
 

>
>
> > 
> > On Monday, 21 December 2015 18:51:25 UTC, Jonathan Vanasco wrote: 
> > 
> > The sizes for the connection pool are for each instance of your 
> > application.  If you have a 10-connection pool and you are running 10 
> > instances of your application on the server, you'll easily have 100 
> > connections.  If you're running 1 instance that forks, each fork 
> > will have its own pool (if correctly set up).  Search the docs and 
> > FAQ for "fork" for more info. 
> > 
> > I don't have time to respond to the logging stuff now. Hopefully 
> > someone else will. 
> > 
>



[sqlalchemy] Re: Connection pools

2015-12-21 Thread Chris Wood
Ah, ok - thanks for the explanation - this is different to how I'd been led 
to believe it worked! However, I know that even when I'm the only person 
testing the application, I'm still getting a large number of connections. 
Is there a likely explanation why?

On Monday, 21 December 2015 18:51:25 UTC, Jonathan Vanasco wrote:
>
> The sizes for the connection pool are for each instance of your 
> application.  If you have a 10-connection pool and you are running 10 
> instances of your application on the server, you'll easily have 100 
> connections.  If you're running 1 instance that forks, each fork will have 
> its own pool (if correctly set up).  Search the docs and FAQ for "fork" 
> for more info.
>
> I don't have time to respond to the logging stuff now. Hopefully someone 
> else will.
>



[sqlalchemy] Connection pools

2015-12-21 Thread Chris Wood
Hello all,

I've inherited a fairly large / complex internal web-based data portal 
(with a distinct lack of documentation), which is causing some fairly major 
headaches.

The application was originally written with a SQLite backend, but I was 
asked to convert this to Oracle, which I've done.  However, we seem to get 
large numbers of connections being used, even though a pool is being used, 
and I've now been asked to try and reduce the number of connections to a 
sensible level (however, not being a DBA, I'm even struggling to get my 
head round what a sensible number would be...)

This is what I understand from my limited knowledge:
 - the web portal has several search filters - using each one fires a 
jQuery ajax request to get data (and so runs a query each time a filter is 
used)
 - originally, with no changes to the default create_engine options, we 
were regularly seeing >100 connections (as recorded in the v$session table) 
- why does this happen, even though the defaults for pool_size and 
max_overflow are 5 and 10 respectively?
 - I wondered if this was caused by connections not being closed properly, 
so I tried adding close() statements after statements had been executed, 
but this caused the portal to complain with 500 errors 
(`exc.ResourceClosedError("This Connection is closed")`)

 - I have tried to limit the values of pool_size, max_overflow, 
pool_recycle, and pool_timeout (e.g. setting them to 1, 5, 60, and 60 
respectively).  The number of concurrent connections does seem to have 
decreased, but it's still much larger than I'd expect given the values of 
the arguments - and our Oracle DBA is still complaining about it...
 - I even tried setting the value of max_overflow to 0, but got an error of 
`TimeoutError: QueuePool limit of size 1 overflow 0 reached, connection 
timed out, timeout 30`
 - The number of connections seems to vary fairly randomly (e.g. I've been 
looking at the number of connections at minute intervals this afternoon 
while doing some testing, and it's jumped up and down fairly erratically; 
the sudden drops in the number of connections don't seem to correspond 
with timeouts):
 

count  timestamp
   18  2015 dec 21 15 49 00
   18  2015 dec 21 15 48 00
    3  2015 dec 21 15 47 00
    5  2015 dec 21 15 46 00
    5  2015 dec 21 15 45 00
    5  2015 dec 21 15 44 00
    5  2015 dec 21 15 43 00
    1  2015 dec 21 15 42 00
    4  2015 dec 21 15 41 00
    4  2015 dec 21 15 40 00
    4  2015 dec 21 15 39 00
 - I've been outputting data via the sqlalchemy.pool logger, but I'm 
struggling to understand what it's showing. I can see that connections are 
created, checked out, and returned to the pool, but I can't match these up 
with connections in the code.

I'm now at a total loss of what to try next. I clearly have failed to 
understand some major aspects of connections / pooling - any help or 
pointers to help me sort out the number of connections would be majorly 
appreciated!
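For reference, here is how the knobs mentioned above are passed to create_engine() - a sketch against a throwaway SQLite URL with QueuePool forced, since the real application uses Oracle:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# These limits apply per Engine per process: every worker or fork that
# creates (or inherits a copy of) an Engine gets its own pool this size.
engine = create_engine(
    "sqlite://",          # placeholder URL; QueuePool forced for the demo
    poolclass=QueuePool,
    pool_size=5,          # connections held open in the pool
    max_overflow=10,      # extra connections allowed under burst load
    pool_recycle=3600,    # drop and replace connections older than 1h
    pool_timeout=30,      # seconds to wait for a free connection
)
print(engine.pool.size())
```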

Thanks,
Chris



[sqlalchemy] postgres "similar to" in sqlalchemy expressions

2015-12-11 Thread Chris Withers

Hi All,

Just wanted to double check, is this still the best way to do this:

https://groups.google.com/forum/#!topic/sqlalchemy/6kZaWeqTpHA


foo.op("SIMILAR TO")(bar)
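For reference, a self-contained sketch of that expression and what it compiles to (column names are illustrative):

```python
from sqlalchemy import String, column

foo = column("foo", String)
bar = column("bar", String)

# .op() builds an arbitrary binary operator expression:
expr = foo.op("SIMILAR TO")(bar)
print(expr)
```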

cheers,

Chris
  





