[sqlalchemy] sqlalchemy 2.0 and ABCMeta
Hi there,

We are migrating our code from SQLAlchemy 1.4 to 2.0 (2.0.23, to be specific). We have had the following, which allowed some classes inheriting from our Base to use an ABCMeta metaclass:

---
class DeclarativeABCMeta(DeclarativeMeta, ABCMeta):
    pass

metadata = MetaData(naming_convention=naming_convention)
Base = declarative_base(metadata=metadata, metaclass=DeclarativeABCMeta)
---

The code above works fine on 2.0, but if we want to heed the 2.0 docs, which note that declarative_base() is superseded by inheriting from DeclarativeBase, we ought to have something like:

---
class DeclarativeABCMeta(DeclarativeMeta, ABCMeta):
    pass

metadata = MetaData(naming_convention=naming_convention)

class Base(DeclarativeBase, metaclass=DeclarativeABCMeta):
    """A Base for using with declarative."""
    __abstract__ = True
    metadata = metadata
---

This, however, breaks as soon as it hits the first class inheriting from Base:

---
class SchemaVersion(Base):
    __tablename__ = 'reahl_schema_version'
    id = Column(Integer, primary_key=True)
    version = Column(String(50))
    egg_name = Column(String(80))
---

With:

[site-packages]/sqlalchemy/orm/decl_api.py:195: in __init__
    _as_declarative(reg, cls, dict_)
[site-packages]/sqlalchemy/orm/decl_base.py:247: in _as_declarative
    return _MapperConfig.setup_mapping(registry, cls, dict_, None, {})
[site-packages]/sqlalchemy/orm/decl_base.py:328: in setup_mapping
    return _ClassScanMapperConfig(
[site-packages]/sqlalchemy/orm/decl_base.py:520: in __init__
    super().__init__(registry, cls_, mapper_kw)
[site-packages]/sqlalchemy/orm/decl_base.py:344: in __init__
    instrumentation.register_class(
[site-packages]/sqlalchemy/orm/instrumentation.py:684: in register_class
    manager._update_state(
[site-packages]/sqlalchemy/orm/instrumentation.py:209: in _update_state
    registry._add_manager(self)
[site-packages]/sqlalchemy/orm/decl_api.py:1380: in _add_manager
    raise exc.ArgumentError(
E   sqlalchemy.exc.ArgumentError: Class '<class 'reahl.sqlalchemysupport.sqlalchemysupport.SchemaVersion'>' already has a primary mapper defined.

Any ideas on what we are doing wrong here?

Thanks
Iwan

--
SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/
To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description.
---
You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/5d616bf0-ffa7-4061-adaf-cf1c7577e0fc%40reahl.org.
Re: [sqlalchemy] Event listener for when query results get matched with pre-existing objects on the session
do_orm_execute() (and freezing the results) totally allows for what I'm trying to do! Also, I forgot to mention earlier, but this is only going to run in tests. Thanks again for all the help,

Tony

On Friday, October 20, 2023 at 11:10:23 PM UTC+8 Mike Bayer wrote:
> there's not really an event for an object that just gets returned by a query from the identity map. you would need to do something more drastic, like a do_orm_execute() hook that runs queries internally and then looks at all the returned objects. i wouldn't put that in production.

[earlier quoted messages trimmed]
Re: [sqlalchemy] Event listener for when query results get matched with pre-existing objects on the session
Oh I see, thanks for clarifying. I'm trying to detect cases where we depend on the autoflush behavior. For example, in the sample above, when the query runs with no_autoflush, we won't get back any results (when previously we would get back one row). It's a fairly large codebase, so I was trying to automate finding these cases in order to add explicit flush calls when needed, or just pass in the pending object instead of running a query.

Tony

On Fri, Oct 20, 2023 at 10:42 PM Mike Bayer wrote:
> the object was not affected in that query (that is, not mutated). if you change the query to this:
>
>     table_one_instances = session.query(TableOne).populate_existing().all()
>
> then the refresh event is triggered.
>
> I'm not really following what kind of bug you are trying to detect.

[earlier quoted messages trimmed]
Re: [sqlalchemy] Event listener for when query results get matched with pre-existing objects on the session
Weird, I did try that but I can't seem to trigger it. Here's a self-contained test: https://gist.github.com/tonycosentini/4dee3478695d032ca67707b5e26739b6 I have the event listener set to just crash, but it never triggers.

Tony

On Fri, Oct 20, 2023 at 9:20 PM Mike Bayer wrote:
> this is the refresh event:
> https://docs.sqlalchemy.org/en/20/orm/events.html#sqlalchemy.orm.InstanceEvents.refresh
>
> you probably want to also use refresh_flush, which will be invoked for column defaults that are populated on the object:
> https://docs.sqlalchemy.org/en/20/orm/events.html#sqlalchemy.orm.InstanceEvents.refresh_flush

[quoted original message trimmed]
[sqlalchemy] Event listener for when query results get matched with pre-existing objects on the session
Hi,

Is there any way to listen for an event for when a query result gets merged into a pre-existing object in the session?

I'm working on disabling autoflush for some of our codebase (mostly to cut down on queries on a high-QPS path), but before doing that I want to write a utility to detect when we read data that was flushed via autoflush, to cut down on any potential bugs.

What I'd like to do is this (while autoflush is still enabled):

table_one = TableOne(name='test')
session.add(table_one)  # Track this object as pending a flush via the before_* listeners; this is working as expected.

table_two_instances = session.query(TableTwo).all()  # All good; doesn't do anything with the table_one instance created earlier

table_one_instances = session.query(TableOne).all()  # I would like to log a warning here, as the results of this query depend on a flush happening.

What I'm hoping to do is detect that one of the rows coming back is the same object that was flagged earlier, but I can't seem to find the correct event to use. Is this possible? I can't seem to find the appropriate event that would trigger when the results from the .all() query get merged back in with the existing objects in the session.

Thanks,
Tony
[sqlalchemy] Issue DELETE statement with LIMIT
Hi everyone,

I'd like to issue a LIMIT for a DELETE statement. By default, this is not possible as far as I can see; the limit() function is available for SELECT in general, however. Searching through the documentation, I found a reference to with_dialect_options(): https://docs.sqlalchemy.org/en/20/search.html?q=with_dialect_options&check_keywords=yes&area=default# which points to https://docs.sqlalchemy.org/en/20/core/dml.html#sqlalchemy.sql.expression.UpdateBase.with_dialect_options claiming the method is available on UpdateBase (although the method does not appear to be callable quite as documented). This was already the case in 1.4 (https://github.com/sqlalchemy/sqlalchemy/blob/rel_1_4/lib/sqlalchemy/sql/dml.py#L345) and also in 2.0 (https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/sql/dml.py#L438).

However, trying to call it as per the documentation results in an exception being raised:

```
sqlalchemy.exc.ArgumentError: Argument 'mysql_limit' is not accepted by dialect 'mysql' on behalf of
```

This is caused by https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/dialects/mysql/base.py#L2454 not listing sql.Delete explicitly. UpdateBase apparently cannot be referenced there either, given the imports (guessing, as it's not imported explicitly in https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/sql/__init__.py, or not referenced another way?). However, adding Delete just like Update makes the following line run without an error - but without adding the LIMIT either. My best guess right now would be that this is due to the lack of LIMIT clause handling in the DELETE compiler.

```
stmt = stmt.with_dialect_options(mysql_limit=limit, mariadb_limit=limit)
```

where `limit` simply is an integer. Any hints or help are appreciated. I can also raise a ticket on GitHub :)

Best regards
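[One possible workaround, sketched under the assumption that the installed dialect does not already render a LIMIT for DELETE: register the option with Delete.argument_for() so that with_dialect_options() stops raising, then append the clause in a @compiles override. This is not a built-in feature; treat it as a sketch:]

```python
from sqlalchemy import Column, Integer, MetaData, Table, delete
from sqlalchemy.dialects import mysql
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql import Delete

# Register the option so with_dialect_options(mysql_limit=...) is accepted
# for DELETE (out of the box the MySQL dialect only registers it for UPDATE).
Delete.argument_for("mysql", "limit", None)


@compiles(Delete, "mysql")
def _delete_with_limit(element, compiler, **kw):
    # Render the standard DELETE, then append LIMIT if the option was set.
    text = compiler.visit_delete(element, **kw)
    limit = element.dialect_options["mysql"]["limit"]
    if limit is not None:
        text += f" LIMIT {int(limit)}"
    return text


metadata = MetaData()
t = Table("t", metadata, Column("id", Integer, primary_key=True))

stmt = delete(t).with_dialect_options(mysql_limit=10)
print(stmt.compile(dialect=mysql.dialect()))
```

[The same idea would need a second @compiles(Delete, "mariadb") registration for the mariadb dialect name; on SQLAlchemy versions that have since grown native DELETE ... LIMIT support for MySQL, this hook would be redundant.]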
[sqlalchemy] Temporarily disable/intercept ORM events on mutation
When customers call our JSON:API API, they can use an "include" parameter to specify related objects to be appended to the response. However, requests to include are not always satisfiable (e.g. if job.supervisor=null, include=supervisor is ignored). In order to prevent Marshmallow from trying to load nonexistent related objects to append to our API responses, we need to tell it when to ignore a relationship attribute, such as by setting job.supervisor=marshmallow.missing (if it sees job.supervisor=None, it will attempt (and fail) to load the null supervisor object, so we cannot just leave it as-is). Unfortunately, this causes problems as SQLAlchemy attempts to handle the new value:

Error
Traceback (most recent call last):
  File "/Users/lucadou/IdeaProjects/person-api/api/unit_tests/test_person_service.py", line 601, in test_get_people_include_job
    response = self.person_service.get_people(QueryParameters({"include": "jobs"}))
  File "/Users/lucadou/IdeaProjects/person-api/api/src/person_service.py", line 61, in get_people
    response = person_schema.dump(people, many=True)
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow/schema.py", line 557, in dump
    result = self._serialize(processed_obj, many=many)
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow/schema.py", line 519, in _serialize
    return [
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow/schema.py", line 520, in <listcomp>
    self._serialize(d, many=False)
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow/schema.py", line 525, in _serialize
    value = field_obj.serialize(attr_name, obj, accessor=self.get_attribute)
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow_jsonapi/fields.py", line 248, in serialize
    return super().serialize(attr, obj, accessor)
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow/fields.py", line 344, in serialize
    return self._serialize(value, attr, obj, **kwargs)
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow_jsonapi/fields.py", line 274, in _serialize
    self._serialize_included(item)
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow_jsonapi/fields.py", line 280, in _serialize_included
    result = self.schema.dump(value)
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow/schema.py", line 551, in dump
    processed_obj = self._invoke_dump_processors(
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow/schema.py", line 1068, in _invoke_dump_processors
    data = self._invoke_processors(
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/marshmallow/schema.py", line 1225, in _invoke_processors
    data = processor(data, many=many, **kwargs)
  File "/Users/lucadou/IdeaProjects/person-api/api/src/model/schema/job_schema.py", line 62, in set_null_supervisor
    job.supervisor = missing
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/sqlalchemy/orm/attributes.py", line 536, in __set__
    self.impl.set(
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/sqlalchemy/orm/attributes.py", line 1466, in set
    value = self.fire_replace_event(state, dict_, value, old, initiator)
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/sqlalchemy/orm/attributes.py", line 1505, in fire_replace_event
    value = fn(
  File "/Users/lucadou/IdeaProjects/person-api/api/venv/lib/python3.11/site-packages/sqlalchemy/orm/attributes.py", line 2167, in emit_backref_from_scalar_set_event
    instance_state(child),
AttributeError: '_Missing' object has no attribute '_sa_instance_state'

Is there any way to temporarily disable ORM event listeners when we mutate objects and have no intention of saving the changes/do not intend for the ORM to act on them?
Re: [sqlalchemy] sqlalchemy.orm.exc.FlushError on subclass
Thanks. We went with the helper class route, and it seems to be working much better than attempting to use inheritance in a manner that seems unsupported.

On Wednesday, September 6, 2023 at 1:24:04 PM UTC-5 Mike Bayer wrote:
> if you can't correct this model to apply the persistence details to the concrete class you wish to persist and query, then you'd do the suggested "enable_typechecks=False". There is no attribute in SQLAlchemy named "meta" and no stack trace is given here, so I don't know to what that refers.
>
> Overall I'm not sure how this API_Person class is useful, because you can't query for them. I would think that if you can't change the original model then you'd have this API_Person as a series of helper functions that accept a Person as their argument.

[quoted original message trimmed]
[sqlalchemy] sqlalchemy.orm.exc.FlushError on subclass
The project I am working on is split up into several modules. Previously, each module had its own ORM classes. However, due to several bugs arising from forgetting to update each module's ORM classes in lock step when adding new functionality, we have decided it would be best to extract the ORM classes which interact with our DB into their own module and make that available to the other modules via pip.

One of these modules, which monitors for changes in an upstream DB and applies them to ours, has some methods which are not present in the other modules and which cannot be easily extracted out due to its dependencies on upstream DB functionality. As such, in this module, we must subclass the ORM models which interact with our DB:

models.apimodels.db.person.py:

#...
@dataclass(init=False, eq=True, unsafe_hash=True)
class Person(Base):
    __tablename__ = "person"

    id: Mapped[int] = mapped_column(primary_key=True)
    first_name: Mapped[str]
    last_name: Mapped[str]
    email_address: Mapped[str]
    office_address: Mapped[str]
    office_phone_number: Mapped[str]

    # ...

etl.models.api_db.api_person.py:

#...
from apimodels.db.person import Person as PersonBase
# ...
class API_Person(PersonBase):
    __tablename__ = "person"
    __table_args__ = {"keep_existing": True}

    def get_pvi(self):
        # ...

    def get_historical_pvis(self) -> list[str]:
        # ...

    def __eq__(self):
        # ...

    def __hash__(self):
        # ...

    @staticmethod
    def from_upstream_hub_person(
        uh_person: Optional[UH_Person],
    ) -> Optional["API_Person"]:
        # ...

Of note is that this subclass does not add any new attributes or modify existing ones; it merely adds some helper methods related to identifying primary key changes in the upstream DB. (This is also why we override the eq and hash methods provided by dataclasses - incoming changesets have to be matched against existing records, even when primary keys change upstream.)

This is effectively single-table inheritance, but it is not a good fit for polymorphic_identity since it is not a distinct class, merely adding module-specific helper methods, and if I am reading the documentation correctly, using polymorphic_identity would mean any records touched by the API_Person subclass (which is all of them) would no longer be usable by other modules, which do not extend any of the models.

From what I have read, it seems like the keep_existing param should be of use here, but neither it nor extend_existing set to True helps with the errors I am seeing when attempting to interact with this subclass in any way:

sqlalchemy.orm.exc.FlushError: Attempting to flush an item of type <class 'model.api_db.api_person.API_Person'> as a member of collection "Identifier.person". Expected an object of type <class 'apimodels.db.person.Person'> or a polymorphic subclass of this type. If <class 'model.api_db.api_person.API_Person'> is a subclass of <class 'apimodels.db.person.Person'>, configure mapper "Mapper[Person(person)]" to load this subtype polymorphically, or set enable_typechecks=False to allow any subtype to be accepted for flush.

I did try setting enable_typechecks to False, but this results in a different error when attempting to use getattr on the subclass:

AttributeError: 'Person' object has no attribute 'meta'

Is there a better way of doing this?
[sqlalchemy] Dogpile session caching
Hello,

First of all, excellent project! It's such a powerful tool when you really master the internals, and after several years I think I'm nearing that point.

My question today is related to the dogpile caching examples <https://docs.sqlalchemy.org/en/20/_modules/examples/dogpile_caching/local_session_caching.html>. I migrated to 2.x awhile back, but the examples in the 2.x documentation show the Session.query interface being used. I understand this has been deprecated starting with 2.x, so naturally I've migrated existing code away from Session.query. This seems to break the example code for dogpile caching. I was hoping to get some advice on how to implement the caching using the new unified sa.select and Session.execute/Session.scalars interface, or even whether dogpile cache supports this interface yet?

Thanks in advance,
Joe

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/29559c02-782b-4d71-be75-82494c5f253dn%40googlegroups.com.
Re: [sqlalchemy] Should I use a surrogate primary key on an Association Object pattern?
Hi Pierre,

This isn't an official answer; I'm just a long-time user of SQLAlchemy. Either way should work fine. The association object is driven by the columns on the association table being FKs; whether or not they're part of a PK isn't relevant. I've used both ways. In my experience, an artificial PK is easier to maintain in the long run. Each way has its minor advantages and disadvantages, but generally a single artificial PK would be my preference.

Mike

On Mon, 3 Jul 2023, 16:43 Pierre Massé, wrote:
> Dear all,
>
> I am currently reworking a bit of my model and stumbled into this question, which I think mainly has opinionated answers - but I would like to have some insight regarding SQLAlchemy usage or preferences.
>
> I have a situation that is exactly the same as the one described under Association Object <https://docs.sqlalchemy.org/en/20/orm/basic_relationships.html#association-object> in the SQLAlchemy ORM docs.
>
> I want to model:
> - Caregiver - a person taking care of one or more Helpees
> - Helpee - a person who is being taken care of, by one or more Caregivers
> - their Relationship, which links a Caregiver to a Helpee, but with additional data like their family ties (spouse, parent, friend, ...)
>
> This is typically the Association Object use case: a many-to-many relationship holding additional data.
>
> So far, I have been using a "natural" primary key on the Relationship table, by using the Caregiver Id and the Helpee Id to form a composite primary key.
>
> From a handful of blog posts (this Stack Exchange answer <https://dba.stackexchange.com/a/6110> being quite in depth), it looks like adding an "artificial" or surrogate primary key on the Relationship table should be the way to go. Of course, I would keep a unique constraint on (Caregiver Id x Helpee Id) on this table along with the new primary key.
> My questions are:
> - is the addition of a surrogate primary key a good idea - without taking into account the fact that I am using SQLAlchemy?
> - would the "magic" of the association object still operate even though the mapped ORM relationships would not be part of the primary key anymore?
>
> The docs example would become:
>
> from typing import List, Optional
>
> from sqlalchemy import ForeignKey
> from sqlalchemy import Integer
> from sqlalchemy import UniqueConstraint
> from sqlalchemy.orm import Mapped
> from sqlalchemy.orm import mapped_column
> from sqlalchemy.orm import DeclarativeBase
> from sqlalchemy.orm import relationship
>
>
> class Base(DeclarativeBase):
>     pass
>
>
> class Association(Base):
>     __tablename__ = "association_table"
>     id: Mapped[int] = mapped_column(primary_key=True)
>     left_id: Mapped[int] = mapped_column(ForeignKey("left_table.id"))
>     right_id: Mapped[int] = mapped_column(ForeignKey("right_table.id"))
>     extra_data: Mapped[Optional[str]]
>     child: Mapped["Child"] = relationship(back_populates="parents")
>     parent: Mapped["Parent"] = relationship(back_populates="children")
>     __table_args__ = (UniqueConstraint('left_id', 'right_id', name='_relationship_uc'),)
>
>
> class Parent(Base):
>     __tablename__ = "left_table"
>     id: Mapped[int] = mapped_column(primary_key=True)
>     children: Mapped[List["Association"]] = relationship(back_populates="parent")
>
>
> class Child(Base):
>     __tablename__ = "right_table"
>     id: Mapped[int] = mapped_column(primary_key=True)
>     parents: Mapped[List["Association"]] = relationship(back_populates="child")
>
> To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/CAH4TWVvPVKtjtyVHJO9WtX2ZCjmdX3aGWDBGCQrvU_7c-CswEg%40mail.gmail.com
Re: [sqlalchemy] DRYing up model relationships with custom properties
Just to follow up on what I ended up doing - I went with approach 1 and created a function that would add it. I also switched from __ to _ to avoid any surprises. Thanks again for the suggestions!

On Wednesday, May 3, 2023 at 9:27:57 PM UTC+8 Mike Bayer wrote:
> I'd be a little concerned about the double underscore, as it modifies the way Python getattr() works for those names, and I don't know that we have tests for that specific part of it. But besides that, you can always add new relationships or other mapped attributes to classes in one of two ways:
>
> 1. when using declarative Base, you can assign it:
>    ModelClass._field_for_rel = relationship()
> 2. you can set it in the mapper:
>    ModelClass.__mapper__.add_property("_field_for_rel", relationship())
>
> https://docs.sqlalchemy.org/en/20/orm/mapping_api.html#sqlalchemy.orm.Mapper.add_property
>
> so you could make use of that code within some kind of function that is applied to the class with a certain name.
>
> if you are looking to do it fully inline within the class, you might need to use event hooks that receive the class and mapper so that you can apply things after the fact; a good event for this would be "instrument_class": https://docs.sqlalchemy.org/en/20/orm/events.html#sqlalchemy.orm.MapperEvents.instrument_class
>
> On Wed, May 3, 2023, at 4:18 AM, 'Tony Cosentini' via sqlalchemy wrote:
> Hi,
>
> I have a pattern in some models where there is a private (as in prefixed with __) relationship and then a property to handle some things that need to happen when the relationship is fetched or written to.
>
> Currently it's implemented like this:
>
> class ModelClass(Base):
>     __field_for_relationship = relationship('OtherModel')
>
>     @property
>     def field_for_relationship(self):
>         # Some custom logic
>         return self.__field_for_relationship
>
>     @field_for_relationship.setter
>     def field_for_relationship(self, value: OtherModel):
>         # Some custom logic.
>         self.__field_for_relationship = value
>
> This works, but I'd like to DRY this up into some kind of nice one-liner if possible because I'm using this pattern on a handful of relationships. However, it seems like the relationship field itself needs to be on the model because of some magic that happens under the hood.
>
> Any ideas on if it's possible to combine this into some kind of reusable utility to DRY it up?
>
> I'm not sure how important it is, but I'm still on 1.4.x, haven't made the jump to 2.x yet.
>
> Thanks!
> Tony
>
> To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/b285bdb1-c7b9-4734-9a5b-fe937241b44an%40googlegroups.com

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/ed007263-08cc-4597-a4ff-3f314ac3851fn%40googlegroups.com.
[sqlalchemy] DRYing up model relationships with custom properties
Hi,

I have a pattern in some models where there is a private (as in prefixed with __) relationship and then a property to handle some things that need to happen when the relationship is fetched or written to.

Currently it's implemented like this:

class ModelClass(Base):
    __field_for_relationship = relationship('OtherModel')

    @property
    def field_for_relationship(self):
        # Some custom logic
        return self.__field_for_relationship

    @field_for_relationship.setter
    def field_for_relationship(self, value: OtherModel):
        # Some custom logic.
        self.__field_for_relationship = value

This works, but I'd like to DRY this up into some kind of nice one-liner if possible because I'm using this pattern on a handful of relationships. However, it seems like the relationship field itself needs to be on the model because of some magic that happens under the hood.

Any ideas on if it's possible to combine this into some kind of reusable utility to DRY it up?

I'm not sure how important it is, but I'm still on 1.4.x, haven't made the jump to 2.x yet.

Thanks!
Tony

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/b285bdb1-c7b9-4734-9a5b-fe937241b44an%40googlegroups.com.
Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")
Granted, it’s difficult to read my (admittedly rather blank) mind. Maybe I should restate the question: What are my options? I just want to see the rows from the query below. Why is it telling me I need to aggregate, and if I do truly need to, what might be an aggregate function that won’t eliminate much of what the query is producing?

From: 'Dan Stromberg [External]' via sqlalchemy
Date: Tuesday, March 21, 2023 at 9:05 AM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

Alright, using join_from may have led to clearing a hurdle. I’m now using:

query = (
    select(NV.id, func.min(bs_3.build_id))
    .select_from(bs)
    .join(v_2, onclause=(bs.version_id == v_2.id))
    .join_from(bs_2, Br, onclause=(Br.id == bs_2.branch_id))
)

..and am getting a new error:

sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1140, "In aggregated query without GROUP BY, expression #1 of SELECT list contains nonaggregated column 'tact_dev.tb_nv.id'; this is incompatible with sql_mode=only_full_group_by")

Do I need to aggregate? Or perhaps change sql_mode? Thanks!

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Monday, March 20, 2023 at 5:33 PM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

OK, not really, you want tables in the FROM clause. use either the select_from() or join_from() method to do that: https://docs.sqlalchemy.org/en/20/tutorial/data_select.html#explicit-from-clauses-and-joins

On Mon, Mar 20, 2023, at 5:16 PM, 'Dan Stromberg [External]' via sqlalchemy wrote:
I’m getting some pushback internally, from my team lead – he and I both think it’s probably too much detail to share. It’s 43 lines of SQL with multiple subqueries. Would just the simplest parts of the from clause work?

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Monday, March 20, 2023 at 1:11 PM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

what SQL are you going for ? start with that.

On Mon, Mar 20, 2023, at 10:33 AM, 'Dan Stromberg [External]' via sqlalchemy wrote:
That makes sense, but…. I’m afraid I don’t know how to add tb_br to the select.

I tried:

query = ( select(NV.id, func.min(bs_3.build_id)) .select_from(bs, Br) .join(v_2, onclause=(bs.version_id == v_2.id)) .join(bs_2, onclause=(Br.id == bs_2.branch_id)) )

…which gave: 1054, "Unknown column 'tb_br.id' in 'on clause'"

…and I tried:

query = ( select(NV.id, func.min(bs_3.build_id), Br) .select_from(bs) .join(v_2, onclause=(bs.version_id == v_2.id)) .join(bs_2, onclause=(Br.id == bs_2.branch_id)) )

…which also gave: (1054, "Unknown column 'tb_br.id' in 'on clause'")

I’m guessing I’m missing something simple, but I have no idea what. Any (further) suggestions?

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Saturday, March 18, 2023 at 8:01 AM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

the query emitted is: SELECT tb_nv.id, min(bs_3.build_id) AS min_1 FROM tb_nv, tb_brst AS bs_3, tb_brst AS bs INNER JOIN tb_vers AS v_2 ON bs.version_id = v_2.id INNER JOIN tb_brst AS bs_2 ON tb_br.id = bs_2.branch_id
Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")
Hoping to save an iteration: the SQL currently looks like:

[SQL: SELECT tb_nv.id, min(bs_3.build_id) AS min_1 FROM tb_nv, tb_brst AS bs_3, tb_brst AS bs INNER JOIN tb_vers AS v_2 ON bs.version_id = v_2.id, tb_brst AS bs_2 INNER JOIN tb_br ON tb_br.id = bs_2.branch_id]

From: 'Dan Stromberg [External]' via sqlalchemy
Date: Tuesday, March 21, 2023 at 9:05 AM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

Alright, using join_from may have led to clearing a hurdle. I’m now using:

query = ( select(NV.id, func.min(bs_3.build_id)) .select_from(bs) .join(v_2, onclause=(bs.version_id == v_2.id)) .join_from(bs_2, Br, onclause=(Br.id == bs_2.branch_id)) )

..and am getting a new error:

sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1140, "In aggregated query without GROUP BY, expression #1 of SELECT list contains nonaggregated column 'tact_dev.tb_nv.id'; this is incompatible with sql_mode=only_full_group_by")

Do I need to aggregate? Or perhaps change sql_mode? Thanks!

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Monday, March 20, 2023 at 5:33 PM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

OK, not really, you want tables in the FROM clause. use either the select_from() or join_from() method to do that: https://docs.sqlalchemy.org/en/20/tutorial/data_select.html#explicit-from-clauses-and-joins

On Mon, Mar 20, 2023, at 5:16 PM, 'Dan Stromberg [External]' via sqlalchemy wrote:
I’m getting some pushback internally, from my team lead – he and I both think it’s probably too much detail to share. It’s 43 lines of SQL with multiple subqueries. Would just the simplest parts of the from clause work?

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Monday, March 20, 2023 at 1:11 PM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

what SQL are you going for ? start with that.

On Mon, Mar 20, 2023, at 10:33 AM, 'Dan Stromberg [External]' via sqlalchemy wrote:
That makes sense, but…. I’m afraid I don’t know how to add tb_br to the select.

I tried:

query = ( select(NV.id, func.min(bs_3.build_id)) .select_from(bs, Br) .join(v_2, onclause=(bs.version_id == v_2.id)) .join(bs_2, onclause=(Br.id == bs_2.branch_id)) )

…which gave: 1054, "Unknown column 'tb_br.id' in 'on clause'"

…and I tried:

query = ( select(NV.id, func.min(bs_3.build_id), Br) .select_from(bs) .join(v_2, onclause=(bs.version_id == v_2.id)) .join(bs_2, onclause=(Br.id == bs_2.branch_id)) )

…which also gave: (1054, "Unknown column 'tb_br.id' in 'on clause'")

I’m guessing I’m missing something simple, but I have no idea what. Any (further) suggestions?

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Saturday, March 18, 2023 at 8:01 AM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

the query emitted is: SELECT tb_nv.id, min(bs_3.build_id) AS min_1 FROM tb_nv, tb_brst AS bs_3, tb_brst AS bs INNER JOIN tb_vers AS v_2 ON bs.version_id = v_2.id INNER JOIN tb_brst AS bs_2 ON tb_br.id = bs_2.branch_id
Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")
Alright, using join_from may have led to clearing a hurdle. I’m now using:

query = (
    select(NV.id, func.min(bs_3.build_id))
    .select_from(bs)
    .join(v_2, onclause=(bs.version_id == v_2.id))
    .join_from(bs_2, Br, onclause=(Br.id == bs_2.branch_id))
)

..and am getting a new error:

sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1140, "In aggregated query without GROUP BY, expression #1 of SELECT list contains nonaggregated column 'tact_dev.tb_nv.id'; this is incompatible with sql_mode=only_full_group_by")

Do I need to aggregate? Or perhaps change sql_mode? Thanks!

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Monday, March 20, 2023 at 5:33 PM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

OK, not really, you want tables in the FROM clause. use either the select_from() or join_from() method to do that: https://docs.sqlalchemy.org/en/20/tutorial/data_select.html#explicit-from-clauses-and-joins

On Mon, Mar 20, 2023, at 5:16 PM, 'Dan Stromberg [External]' via sqlalchemy wrote:
I’m getting some pushback internally, from my team lead – he and I both think it’s probably too much detail to share. It’s 43 lines of SQL with multiple subqueries. Would just the simplest parts of the from clause work?

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Monday, March 20, 2023 at 1:11 PM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

what SQL are you going for ? start with that.

On Mon, Mar 20, 2023, at 10:33 AM, 'Dan Stromberg [External]' via sqlalchemy wrote:
That makes sense, but…. I’m afraid I don’t know how to add tb_br to the select.

I tried:

query = ( select(NV.id, func.min(bs_3.build_id)) .select_from(bs, Br) .join(v_2, onclause=(bs.version_id == v_2.id)) .join(bs_2, onclause=(Br.id == bs_2.branch_id)) )

…which gave: 1054, "Unknown column 'tb_br.id' in 'on clause'"

…and I tried:

query = ( select(NV.id, func.min(bs_3.build_id), Br) .select_from(bs) .join(v_2, onclause=(bs.version_id == v_2.id)) .join(bs_2, onclause=(Br.id == bs_2.branch_id)) )

…which also gave: (1054, "Unknown column 'tb_br.id' in 'on clause'")

I’m guessing I’m missing something simple, but I have no idea what. Any (further) suggestions?

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Saturday, March 18, 2023 at 8:01 AM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

the query emitted is:

SELECT tb_nv.id, min(bs_3.build_id) AS min_1 FROM tb_nv, tb_brst AS bs_3, tb_brst AS bs INNER JOIN tb_vers AS v_2 ON bs.version_id = v_2.id INNER JOIN tb_brst AS bs_2 ON tb_br.id = bs_2.branch_id

the error means that your ON clause refers to a table "tb_br" which is not otherwise in the FROM clause: "ON tb_br.id = bs_2.branch_id"

the ON clause can only refer to columns from tables that are being SELECTed from, such as:

SELECT tb_nv.id, min(bs_3.build_id) AS min_1 FROM tb_nv, tb_brst AS bs_3, tb_brst AS bs INNER JOIN tb_vers AS v_2 ON bs.version_id = v_2.id INNER JOIN tb_br ON tb_br.id = bs_2.branch_id INNER JOIN tb_brst AS bs_2 ON tb_br.id = bs_2.branch_id

so you'd need to alter your query to include some indication how tb_br is part of what's being joined.
Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")
Here’s the select, and most of the from clause:

select nv.id, min(bs.build_id) as min_build_id from tb_v as v, tb_nv as nv, tb_bs as bs, tb_br as br,

From: 'Dan Stromberg [External]' via sqlalchemy
Date: Monday, March 20, 2023 at 2:16 PM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

I’m getting some pushback internally, from my team lead – he and I both think it’s probably too much detail to share. It’s 43 lines of SQL with multiple subqueries. Would just the simplest parts of the from clause work?

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Monday, March 20, 2023 at 1:11 PM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

what SQL are you going for ? start with that.

On Mon, Mar 20, 2023, at 10:33 AM, 'Dan Stromberg [External]' via sqlalchemy wrote:
That makes sense, but…. I’m afraid I don’t know how to add tb_br to the select.

I tried:

query = ( select(NV.id, func.min(bs_3.build_id)) .select_from(bs, Br) .join(v_2, onclause=(bs.version_id == v_2.id)) .join(bs_2, onclause=(Br.id == bs_2.branch_id)) )

…which gave: 1054, "Unknown column 'tb_br.id' in 'on clause'"

…and I tried:

query = ( select(NV.id, func.min(bs_3.build_id), Br) .select_from(bs) .join(v_2, onclause=(bs.version_id == v_2.id)) .join(bs_2, onclause=(Br.id == bs_2.branch_id)) )

…which also gave: (1054, "Unknown column 'tb_br.id' in 'on clause'")

I’m guessing I’m missing something simple, but I have no idea what. Any (further) suggestions?

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Saturday, March 18, 2023 at 8:01 AM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

the query emitted is:

SELECT tb_nv.id, min(bs_3.build_id) AS min_1 FROM tb_nv, tb_brst AS bs_3, tb_brst AS bs INNER JOIN tb_vers AS v_2 ON bs.version_id = v_2.id INNER JOIN tb_brst AS bs_2 ON tb_br.id = bs_2.branch_id

the error means that your ON clause refers to a table "tb_br" which is not otherwise in the FROM clause: "ON tb_br.id = bs_2.branch_id"

the ON clause can only refer to columns from tables that are being SELECTed from, such as:

SELECT tb_nv.id, min(bs_3.build_id) AS min_1 FROM tb_nv, tb_brst AS bs_3, tb_brst AS bs INNER JOIN tb_vers AS v_2 ON bs.version_id = v_2.id INNER JOIN tb_br ON tb_br.id = bs_2.branch_id INNER JOIN tb_brst AS bs_2 ON tb_br.id = bs_2.branch_id

so you'd need to alter your query to include some indication how tb_br is part of what's being joined.

On Fri, Mar 17, 2023, at 7:52 PM, 'Dan Stromberg' via sqlalchemy wrote:
Hi people. I'm having trouble with a test query. As the subject line says, I'm getting:

sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

But it seems like tb_br exists, and has an id column - tb_br being an empty table, but still, existent:

mysql> show create table tb_br;
CREATE TABLE `tb_br` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(45) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.04 sec)
Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")
I’m getting some pushback internally, from my team lead – he and I both think it’s probably too much detail to share. It’s 43 lines of SQL with multiple subqueries. Would just the simplest parts of the from clause work?

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Monday, March 20, 2023 at 1:11 PM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

what SQL are you going for ? start with that.

On Mon, Mar 20, 2023, at 10:33 AM, 'Dan Stromberg [External]' via sqlalchemy wrote:
That makes sense, but…. I’m afraid I don’t know how to add tb_br to the select.

I tried:

query = ( select(NV.id, func.min(bs_3.build_id)) .select_from(bs, Br) .join(v_2, onclause=(bs.version_id == v_2.id)) .join(bs_2, onclause=(Br.id == bs_2.branch_id)) )

…which gave: 1054, "Unknown column 'tb_br.id' in 'on clause'"

…and I tried:

query = ( select(NV.id, func.min(bs_3.build_id), Br) .select_from(bs) .join(v_2, onclause=(bs.version_id == v_2.id)) .join(bs_2, onclause=(Br.id == bs_2.branch_id)) )

…which also gave: (1054, "Unknown column 'tb_br.id' in 'on clause'")

I’m guessing I’m missing something simple, but I have no idea what. Any (further) suggestions?

From: sqlalchemy@googlegroups.com on behalf of Mike Bayer
Date: Saturday, March 18, 2023 at 8:01 AM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

the query emitted is:

SELECT tb_nv.id, min(bs_3.build_id) AS min_1 FROM tb_nv, tb_brst AS bs_3, tb_brst AS bs INNER JOIN tb_vers AS v_2 ON bs.version_id = v_2.id INNER JOIN tb_brst AS bs_2 ON tb_br.id = bs_2.branch_id

the error means that your ON clause refers to a table "tb_br" which is not otherwise in the FROM clause: "ON tb_br.id = bs_2.branch_id"

the ON clause can only refer to columns from tables that are being SELECTed from, such as:

SELECT tb_nv.id, min(bs_3.build_id) AS min_1 FROM tb_nv, tb_brst AS bs_3, tb_brst AS bs INNER JOIN tb_vers AS v_2 ON bs.version_id = v_2.id INNER JOIN tb_br ON tb_br.id = bs_2.branch_id INNER JOIN tb_brst AS bs_2 ON tb_br.id = bs_2.branch_id

so you'd need to alter your query to include some indication how tb_br is part of what's being joined.

On Fri, Mar 17, 2023, at 7:52 PM, 'Dan Stromberg' via sqlalchemy wrote:
Hi people. I'm having trouble with a test query. As the subject line says, I'm getting:

sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

But it seems like tb_br exists, and has an id column - tb_br being an empty table, but still, existent:

mysql> show create table tb_br;
CREATE TABLE `tb_br` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(45) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.04 sec)

mysql> select * from tb_br;
Empty set (0.03 sec)

The query, along with sample models, looks like:

#!/usr/bin/env python3
"""
A little test program.

Environment variables:
DBU Your database user
DBP Your database password
DBH Your database host
IDB Your initial database
"""
import os
import pprint
from sql
Re: [sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")
Sorry, I don’t know why Google Groups decided to aggregate a few lines into 2 large lines. Here’s that list of versions again. Hopefully GG will be appeased this time. I'm using:

    $ python3 -m pip list -v | grep -i sqlalchemy
    Flask-SQLAlchemy  2.5.1   /data/home/dstromberg/.local/lib/python3.10/site-packages  pip
    SQLAlchemy        1.4.36  /data/home/dstromberg/.local/lib/python3.10/site-packages  pip

    $ python3 -m pip list -v | grep -i mysql
    mysqlclient  2.1.1  /data/home/dstromberg/.local/lib/python3.10/site-packages  pip
    PyMySQL      0.8.0  /data/home/dstromberg/.local/lib/python3.10/site-packages  pip

    bash-4.2# mysql --version
    mysql  Ver 14.14 Distrib 5.7.41, for Linux (x86_64) using EditLine wrapper

--
SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description.
---
You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/DM5PR12MB2503CB97085F7BF2AE76D952C5829%40DM5PR12MB2503.namprd12.prod.outlook.com.
[sqlalchemy] Test query seems to spuriously give sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")
Hi people. I'm having trouble with a test query. As the subject line says, I'm getting:

    sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'tb_br.id' in 'on clause'")

But it seems like tb_br exists, and has an id column - tb_br being an empty table, but still existent:

    mysql> show create table tb_br;
    +-------+--------------+
    | Table | Create Table |
    +-------+--------------+
    | tb_br | CREATE TABLE `tb_br` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `name` varchar(45) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
    +-------+--------------+
    1 row in set (0.04 sec)

    mysql> select * from tb_br;
    Empty set (0.03 sec)

The query, along with sample models, looks like:

    #!/usr/bin/env python3
    """
    A little test program.

    Environment variables:
    DBU  Your database user
    DBP  Your database password
    DBH  Your database host
    IDB  Your initial database
    """
    import os
    import pprint

    from sqlalchemy import create_engine, select
    from sqlalchemy.orm import aliased, sessionmaker, declarative_base
    from sqlalchemy.sql.expression import func
    from flask_sqlalchemy import SQLAlchemy

    db = SQLAlchemy()
    Base = declarative_base()

    class NV(Base):
        __tablename__ = "tb_nv"
        __bind_key__ = "testdb"
        __table_args__ = (
            {
                "mysql_engine": "InnoDB",
                "mysql_charset": "utf8",
                "mysql_collate": "utf8_general_ci",
            },
        )
        id = db.Column("id", db.Integer, primary_key=True, autoincrement=True)
        builds = db.relationship("Bld", primaryjoin="(NV.id == Bld.variant_id)")

    class Vers(Base):
        __tablename__ = "tb_vers"
        __bind_key__ = "testdb"
        __table_args__ = (
            {
                "mysql_engine": "InnoDB",
                "mysql_charset": "utf8",
                "mysql_collate": "utf8_general_ci",
            },
        )
        id = db.Column("id", db.Integer, primary_key=True, autoincrement=True)

    class St(Base):
        __tablename__ = "tb_brst"
        __bind_key__ = "testdb"
        __table_args__ = ({"mysql_engine": "InnoDB", "mysql_charset": "utf8"},)
        id = db.Column("id", db.Integer, primary_key=True, autoincrement=True)
        version_id = db.Column(
            "version_id",
            db.Integer,
            db.ForeignKey(
                "tb_vers.id",
                name="fk_tb_brst_version_id",
                onupdate="CASCADE",
                ondelete="RESTRICT",
            ),
            nullable=False,
        )
        branch_id = db.Column(
            "branch_id",
            db.Integer,
            db.ForeignKey(
                "tb_br.id",
                name="fk_tb_brst_branch_id",
                onupdate="CASCADE",
                ondelete="RESTRICT",
            ),
            nullable=False,
        )
        build_id = db.Column(
            "build_id",
            db.Integer,
            db.ForeignKey(
                "tb_bld.id",
                name="fk_tb_brst_build_id",
                onupdate="CASCADE",
                ondelete="RESTRICT",
            ),
            nullable=False,
        )
        version = db.relationship(
            "Vers", innerjoin=True, primaryjoin="(St.version_id == Vers.id)"
        )
        branch = db.relationship(
            "Br", innerjoin=True, primaryjoin="(St.branch_id == Br.id)"
        )
        build = db.relationship(
            "Bld", innerjoin=True, primaryjoin="(St.build_id == Bld.id)"
        )

    class Br(Base):
        __tablename__ = "tb_br"
        __bind_key__ = "testdb"
        __table_args__ = (
            {
                "mysql_engine": "InnoDB",
                "mysql_charset": "utf8",
                "mysql_collate": "utf8_general_ci",
            },
        )
        id = db.Column("id", db.Integer, primary_key=True, autoincrement=True)
        name = db.Column("name", db.String(45), nullable=False)

    class Bld(Base):
        __tablename__ = "tb_bld"
        __bind_key__ = "testdb"
        __table_args__ = (
            {
Re: [sqlalchemy] Re: load_only when loading relationships from an instance
populate_existing doesn't change the behavior, but expunge_all does. The code works correctly now though - it's just our test setup/teardown that was causing trouble (although it does seem like weird behavior). Thanks again for so much help with such a great library, Tony On Thursday, January 5, 2023 at 10:12:56 PM UTC+8 Mike Bayer wrote: > not as much set-ordering as gc, most likely. > > add populate_existing to the query, that also should force the lazy loader > to take effect > > On Thu, Jan 5, 2023, at 9:00 AM, Mike Bayer wrote: > > the "b" object in question is not really "lazy loaded" in the usual sense > here because it's already present in the session, that looks like an > unexpire query. the delete() might be just changing something where the > issue comes down to a set-ordering issue, perhaps. try adding > session.expunge_all() before the query and see if that makes things look > more expected. > > On Thu, Jan 5, 2023, at 1:27 AM, 'Tony Cosentini' via sqlalchemy wrote: > > Ok, I was able to at least create a script that easily reproduces what I'm > seeing - > https://gist.github.com/tonycosentini/22f42455c5068898efa473760e4f65ed > > We have some code that runs before our tests to ensure all the tables are > empty. When that runs, load_only doesn't seem to work. It sounds bizarre, > but that gist link contains a really short sample that reproduces the same > behavior. I'm running 1.4.44. > > Tony > > On Thursday, January 5, 2023 at 11:09:40 AM UTC+8 Tony Cosentini wrote: > > Funny enough, this is what I tried. I just wrote up a small sample script > using defaultload + load_only and sure enough it works. There must be > something in the code base I'm working with that prevents the load_only bit > from being applied. I'm pretty sure defaultload is woroking fine. I'll > report back if I find it. > > Thanks for clarifying! 
> > On Thursday, January 5, 2023 at 8:14:21 AM UTC+8 Michael Bayer wrote: > > yeah you can use defaultload.load_only > > > defaultload(ModelA.model_b).load_only(ModelB.only_field) > > > > On Wednesday, January 4, 2023 at 3:15:02 AM UTC-5 to...@revenuecat.com > wrote: > > Hi, > > This might be a strange question, but I tried to find this in the > documentation to no avail. > > Is it possible to use something like load_only to override which columns > are loaded in when loading a relationship (as in, a relationship that is > not loaded at first with the original query)? > > Something like: > class ModelB: > ... > > class ModelA: > model_b = relationship("ModelB") > > model_a = > session.query(ModelA).options(load_only(Model_b.only_field_i_want_in_the_future)).filter(ModelA.id==1).first() > > It's a bit strange, but I want to ensure if someone loads the model_b > property in the future, only specific columns are loaded in at first. > > I can do this if I just query for model_b via the foreign key instead of > using the relationship property, but I'd like to avoid that if possible. > > Sorry if this question is a bit weird/confusing, it's kind of a strange > use case. > > Thanks, > Tony > > > -- > SQLAlchemy - > The Python SQL Toolkit and Object Relational Mapper > > http://www.sqlalchemy.org/ > > To post example code, please provide an MCVE: Minimal, Complete, and > Verifiable Example. See http://stackoverflow.com/help/mcve for a full > description. > --- > You received this message because you are subscribed to the Google Groups > "sqlalchemy" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to sqlalchemy+...@googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/sqlalchemy/e2dc23f8-97cf-494e-8661-bc0267fdf075n%40googlegroups.com > > <https://groups.google.com/d/msgid/sqlalchemy/e2dc23f8-97cf-494e-8661-bc0267fdf075n%40googlegroups.com?utm_medium=email&utm_source=footer> > . 
> > > > -- > SQLAlchemy - > The Python SQL Toolkit and Object Relational Mapper > > http://www.sqlalchemy.org/ > > To post example code, please provide an MCVE: Minimal, Complete, and > Verifiable Example. See http://stackoverflow.com/help/mcve for a full > description. > --- > You received this message because you are subscribed to the Google Groups > "sqlalchemy" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to sqlalchemy+...@googlegroups.com. > > To view this discussion on the web visit > https://groups.google.com/d/msgid/sqlalchemy/2c67a
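Mike's point about the identity map — an object already present in the Session is handed back as-is by a later query, so its attributes can look stale until populate_existing (or an expunge_all()) forces a refresh — can be demonstrated with a minimal SQLite sketch. The Thing model and the data are invented purely for illustration:

```python
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Thing(id=1, name="old"))
    session.commit()

    obj = session.get(Thing, 1)  # loads name == "old" into the identity map

    # change the row behind the ORM's back, on the same connection
    session.connection().exec_driver_sql("UPDATE thing SET name = 'new'")

    # a plain re-query returns the already-present object with its old state
    stale_name = session.execute(select(Thing)).scalar_one().name

    # populate_existing forces attributes to refresh from the query's rows
    fresh_name = session.execute(
        select(Thing).execution_options(populate_existing=True)
    ).scalar_one().name
```

The first query's row does contain 'new', but the ORM does not overwrite the unexpired attributes of the object it already holds; the populate_existing option changes exactly that.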
[sqlalchemy] Re: load_only when loading relationships from an instance
Ok, I was able to at least create a script that easily reproduces what I'm seeing - https://gist.github.com/tonycosentini/22f42455c5068898efa473760e4f65ed We have some code that runs before our tests to ensure all the tables are empty. When that runs, load_only doesn't seem to work. It sounds bizarre, but that gist link contains a really short sample that reproduces the same behavior. I'm running 1.4.44. Tony On Thursday, January 5, 2023 at 11:09:40 AM UTC+8 Tony Cosentini wrote: > Funny enough, this is what I tried. I just wrote up a small sample script > using defaultload + load_only and sure enough it works. There must be > something in the code base I'm working with that prevents the load_only bit > from being applied. I'm pretty sure defaultload is woroking fine. I'll > report back if I find it. > > Thanks for clarifying! > > On Thursday, January 5, 2023 at 8:14:21 AM UTC+8 Michael Bayer wrote: > >> yeah you can use defaultload.load_only >> >> >> defaultload(ModelA.model_b).load_only(ModelB.only_field) >> >> >> >> On Wednesday, January 4, 2023 at 3:15:02 AM UTC-5 to...@revenuecat.com >> wrote: >> >>> Hi, >>> >>> This might be a strange question, but I tried to find this in the >>> documentation to no avail. >>> >>> Is it possible to use something like load_only to override which columns >>> are loaded in when loading a relationship (as in, a relationship that is >>> not loaded at first with the original query)? >>> >>> Something like: >>> class ModelB: >>> ... >>> >>> class ModelA: >>> model_b = relationship("ModelB") >>> >>> model_a = >>> session.query(ModelA).options(load_only(Model_b.only_field_i_want_in_the_future)).filter(ModelA.id==1).first() >>> >>> It's a bit strange, but I want to ensure if someone loads the model_b >>> property in the future, only specific columns are loaded in at first. >>> >>> I can do this if I just query for model_b via the foreign key instead of >>> using the relationship property, but I'd like to avoid that if possible. 
>>> >>> Sorry if this question is a bit weird/confusing, it's kind of a strange >>> use case. >>> >>> Thanks, >>> Tony >>> >> -- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. --- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/e2dc23f8-97cf-494e-8661-bc0267fdf075n%40googlegroups.com.
[sqlalchemy] Re: load_only when loading relationships from an instance
Funny enough, this is what I tried. I just wrote up a small sample script using defaultload + load_only and sure enough it works. There must be something in the code base I'm working with that prevents the load_only bit from being applied. I'm pretty sure defaultload is working fine. I'll report back if I find it.

Thanks for clarifying!

On Thursday, January 5, 2023 at 8:14:21 AM UTC+8 Michael Bayer wrote:
> yeah you can use defaultload.load_only
>
> defaultload(ModelA.model_b).load_only(ModelB.only_field)
>
> On Wednesday, January 4, 2023 at 3:15:02 AM UTC-5 to...@revenuecat.com wrote:
>> Hi,
>>
>> This might be a strange question, but I tried to find this in the documentation to no avail.
>>
>> Is it possible to use something like load_only to override which columns are loaded in when loading a relationship (as in, a relationship that is not loaded at first with the original query)?
>>
>> Something like:
>>
>>     class ModelB:
>>         ...
>>
>>     class ModelA:
>>         model_b = relationship("ModelB")
>>
>>     model_a = session.query(ModelA).options(load_only(Model_b.only_field_i_want_in_the_future)).filter(ModelA.id==1).first()
>>
>> It's a bit strange, but I want to ensure if someone loads the model_b property in the future, only specific columns are loaded in at first.
>>
>> I can do this if I just query for model_b via the foreign key instead of using the relationship property, but I'd like to avoid that if possible.
>>
>> Sorry if this question is a bit weird/confusing, it's kind of a strange use case.
>>
>> Thanks,
>> Tony

--
SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description.
---
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/f7159223-31f1-4f0e-9cb1-600e7fd517b5n%40googlegroups.com.
[sqlalchemy] load_only when loading relationships from an instance
Hi,

This might be a strange question, but I tried to find this in the documentation to no avail.

Is it possible to use something like load_only to override which columns are loaded in when loading a relationship (as in, a relationship that is not loaded at first with the original query)?

Something like:

    class ModelB:
        ...

    class ModelA:
        model_b = relationship("ModelB")

    model_a = session.query(ModelA).options(load_only(Model_b.only_field_i_want_in_the_future)).filter(ModelA.id==1).first()

It's a bit strange, but I want to ensure if someone loads the model_b property in the future, only specific columns are loaded in at first.

I can do this if I just query for model_b via the foreign key instead of using the relationship property, but I'd like to avoid that if possible.

Sorry if this question is a bit weird/confusing, it's kind of a strange use case.

Thanks,
Tony

--
SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description.
---
You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/71fd35e3-6840-4c19-84d8-6cf6f0cf3239n%40googlegroups.com.
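Mike's answer upthread — defaultload(ModelA.model_b).load_only(ModelB.only_field) — leaves the relationship lazy but restricts which columns the eventual lazy load fetches. A runnable sketch of that pattern (table names, columns, and data invented for illustration):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base, defaultload, relationship

Base = declarative_base()

class ModelB(Base):
    __tablename__ = "model_b"
    id = Column(Integer, primary_key=True)
    only_field = Column(String)
    other_field = Column(String)

class ModelA(Base):
    __tablename__ = "model_a"
    id = Column(Integer, primary_key=True)
    b_id = Column(Integer, ForeignKey("model_b.id"))
    model_b = relationship("ModelB")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(ModelB(id=1, only_field="keep", other_field="skip"))
    session.add(ModelA(id=1, b_id=1))
    session.commit()

with Session(engine) as session:
    a = session.execute(
        select(ModelA).options(
            defaultload(ModelA.model_b).load_only(ModelB.only_field)
        )
    ).scalar_one()
    b = a.model_b  # lazy load fires here, fetching only the listed columns
    # columns not yet loaded are absent from the instance's __dict__
    loaded = {k for k in b.__dict__ if not k.startswith("_")}
```

After the lazy load, only_field (plus the primary key) is populated; other_field stays deferred until it is touched.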
Re: [sqlalchemy] Large increase in memory use when upgrading from 1.3 to 1.4
Somehow I never followed up on this one, sorry :(

We upgraded to 1.4.44 and indeed did see a big decrease in memory usage. However, with how we are using gunicorn there's still a noticeable jump in memory usage with the cache enabled, though it's not nearly as large as before. We're still running our web servers with the cache off, but mostly because our performance has remained similar to our 1.3.x performance. I'm hoping to enable it selectively for our super high throughput queries at some point though, but alas, there's always a huge backlog of things to do :).

Thanks again for helping look into this,
Tony

On Monday, November 14, 2022 at 3:43:53 PM UTC+8 Tony Cosentini wrote:
> Oh wow, this sounds like it would definitely impact us. I'm out this week travelling, but I will definitely upgrade + re-enable the cache and report back.
>
> Thanks!
>
> On Sat, Nov 12, 2022 at 3:17 PM Mike Bayer wrote:
>
>> 1.4.44 is released with this change. if you can try it out with your application and let me know if you see improvements in memory use for your memory-intensive case, that would be much appreciated! thanks
>>
>> https://www.sqlalchemy.org/blog/2022/11/12/sqlalchemy-1.4.44-released/
>>
>> On Fri, Nov 11, 2022, at 11:42 AM, Mike Bayer wrote:
>>
>> We've identified a significant source of memory over-use in the 1.4 caching system; on the particular environment where it was discovered, an extremely long and complex query nonetheless created a cache key that used orders of magnitude more memory than the statement itself. A fix for this issue will be released in version 1.4.44, however if you have the ability to test ahead of time to see if it resolves your issues, let me know. I am attempting to improve upon the patch to reduce memory use further. issue is tracked at https://github.com/sqlalchemy/sqlalchemy/issues/8790.
>>
>> On Thu, Nov 3, 2022, at 8:33 AM, Mike Bayer wrote:
>>
>> On Thu, Nov 3, 2022, at 3:11 AM, 'Tony Cosentini' via sqlalchemy wrote:
>>
>> Hey, sorry for the crazy delay on this!
>>
>> We ended up turning off the cache in everything via query_cache_size and memory usage returned to previous levels. We also didn't see any noticeable change in CPU usage in our web servers.
>>
>> We did see a pretty noticeable perf regression in a worker job that is also very query heavy. For that we turned the cache back on and CPU usage ended up being lower than previous levels (kind of as expected given the caching gains).
>>
>> I think for our web servers, because of the number of processes + engines, we ended up with a very noticeable jump in memory usage. Additionally, I think when we did have caching turned on, we never really noticed any CPU usage improvements. My guess around this is because the cache might have been thrashing a lot - it's a fairly large code base so it might not have been very effective.
>>
>> OK it's too bad because we'd like to know what might be going on, the cache should not really "thrash" unless you have elements that are not being cached properly. It defaults to 500 which will grow as large as 750. It's difficult for your application to have 750 individually different SQL statements, all of which are in constant flow, unless you have some areas where there are perhaps very custom query building mechanisms where query structure is highly variable based on user input (like a search page).
you'd get better performance if you could restore >> the cache on and just locate those specific queries which have too much >> variability in structure, and just disable the cache for those queries >> specifically using the compiled_cache execution option ( >> https://docs.sqlalchemy.org/en/14/core/connections.html#disabling-or-using-an-alternate-dictionary-to-cache-some-or-all-statements >> ) >> >> >> >> >> >> Hope this helps for anyone else that runs into these kinds of issues. >> Thanks again Mike for the helpful response! >> >> Tony >> On Friday, October 21, 2022 at 8:08:32 PM UTC+8 Tony Cosentini wrote: >> >> Hi Mike, >> >> Thanks for such a fast reply! We tried setting query_cache_size on a >> canary environment today, will be rolling it out widely on servers on >> Monday and can report back on if it has a noticeable impact. >> >> After thinking about this more, I think o
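The two knobs discussed in this thread — query_cache_size on the engine, and the compiled_cache execution option for a targeted per-statement opt-out — look roughly like this in code. The SQLite URL is only a placeholder for illustration:

```python
from sqlalchemy import create_engine, text

# Engine-wide: query_cache_size=0 disables the 1.4 statement cache entirely.
engine = create_engine("sqlite://", query_cache_size=0)

with engine.connect() as conn:
    # Per-statement: compiled_cache=None skips caching for just this
    # execution, the targeted opt-out Mike's docs link describes.
    result = conn.execution_options(compiled_cache=None).execute(text("SELECT 1"))
    value = result.scalar()
```

As Mike suggests, the per-statement form is preferable when only a few highly variable queries are polluting the cache, since everything else keeps the caching speedup.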
[sqlalchemy] How to generate view columns in mapped class with async postgresql
Hi,

I am using alembic to apply a migration for a postgresql view using an *async* engine. I can see this successfully applied in the database. I have the following declarative mapped class to the view, defined as:

    class MailingListView(Base):
        """View for mailing labels.

        After metadata reflection from db -> model, expecting columns to
        be available on this class.
        """

        __tablename__ = "mailing_list_view"

        # Specify the column override from the underlying view that is the primary key
        id = Column(UUID(as_uuid=True), primary_key=True)

        # Expecting these columns below to be mapped in this class after
        # metadata reflection. Currently have to uncomment these
        # to manually synchronise with view!
        #
        # addressee = Column(String)
        # street = Column(String)
        # town = Column(String)
        # county = Column(String)
        # postcode = Column(String)
        # cursor = Column(String)

I am reflecting the views using the following:

    def use_inspector(conn):
        inspector = inspect(conn)
        return inspector.get_view_names()

    views = await connection.run_sync(use_inspector)

    # I can see the table columns in __table__.c.keys()
    # after the reflection below has run
    await connection.run_sync(
        target_metadata.reflect,
        only=views,
        views=True,
        extend_existing=True,
    )

After performing the above reflection I can see that my mapped model has the underlying table columns updated with those defined in the underlying view:

    obj = MailingListView()
    obj.__table__.c.keys()

However, the properties of my mapped class are not updated after reflection, raising an exception:

    obj = MailingListView()
    obj.town = "town"  # this raises an exception with unavailable property

How is it possible for a postgresql db (asyncpg) + async sqlalchemy to synchronise the columns of a declarative model with its underlying table after metadata reflection? Currently, I have to manually specify the columns in the declarative model.
-- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. --- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/0e17cc45-f531-4f86-8c1c-db9ae903ad03n%40googlegroups.com.
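One pattern that avoids the manual column list: reflect the view into the metadata first, then declare the mapped class against the reflected __table__, supplying the primary key through mapper arguments (a view carries no primary-key metadata of its own). Sketched here against plain SQLite rather than asyncpg, with an invented people table, purely to show the shape:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.exec_driver_sql("CREATE TABLE people (id INTEGER PRIMARY KEY, town TEXT)")
    conn.exec_driver_sql(
        "CREATE VIEW mailing_list_view AS SELECT id, town FROM people"
    )

Base = declarative_base()

# Reflect the view BEFORE declaring the class, so the declarative
# mapper sees every view column and generates an attribute for each.
Base.metadata.reflect(bind=engine, views=True, only=["mailing_list_view"])

class MailingListView(Base):
    __table__ = Base.metadata.tables["mailing_list_view"]
    __mapper_args__ = {
        # views have no PK metadata; tell the mapper which column to use
        "primary_key": [Base.metadata.tables["mailing_list_view"].c.id]
    }

obj = MailingListView()
obj.town = "Springfield"  # mapped attribute now exists
town = obj.town
```

In the async setup from the question, the reflect step would run inside connection.run_sync(), with the class declared afterwards; the key ordering constraint is the same either way.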
Re: [sqlalchemy] Large increase in memory use when upgrading from 1.3 to 1.4
Oh wow, this sounds like it would definitely impact us. I'm out this week travelling, but I will definitely upgrade + re-enable the cache and report back. Thanks! On Sat, Nov 12, 2022 at 3:17 PM Mike Bayer wrote: > 1.4.44 is released with this change. if you can try it out with your > application and let me know if you see improvements in memory use for your > memory-intensive case, that would be much appreciated! thanks > > https://www.sqlalchemy.org/blog/2022/11/12/sqlalchemy-1.4.44-released/ > > > > On Fri, Nov 11, 2022, at 11:42 AM, Mike Bayer wrote: > > We've identified a significant source of memory over-use in the 1.4 > caching system, on the particular environment where it was discovered, an > extremely long and complex query nonetheless created a cache key that used > orders of magnitude more memory than the statement itself. A fix for this > issue will be released in version 1.4.44, however if you have the ability > to test ahead of time to see if it resolves your issues, let me know. I am > attempting to improve upon the patch to reduce memory use further. issue > is tracked at https://github.com/sqlalchemy/sqlalchemy/issues/8790. > > On Thu, Nov 3, 2022, at 8:33 AM, Mike Bayer wrote: > > > > On Thu, Nov 3, 2022, at 3:11 AM, 'Tony Cosentini' via sqlalchemy wrote: > > Hey, sorry for the crazy delay on this! > > We ended up turning off the cache in everything via query_cache_size and > memory usage returned to previous levels. We also didn't see any noticeable > change in CPU usage in our web servers. > > We did see a pretty noticable perf regression in a worker job that is also > very query heavy. For that we turned the cache back on and CPU usage ended > up being lower than previous levels (kind of as expected given the caching > gains). > > I think for our web servers, because of the number of processes + engines, > we ended up with a very noticeable jump in memory usage. 
Additionally, I > think when we did have aching turned on, we never really noticed any CPU > usage improvements. My guess around this is because the cache might have > been thrashing a lot - it's a fairly large code base > so it might not have been very effective. > > > > OK it's too bad because we'd like to know what might be going on, the > cache should not really "thrash" unless you have elements that are not > being cached properly.it defaults to 500 which will grow as large as > 750. It's difficult for your application to have 750 individually > different SQL statements, all of which are in constant flow, unless you > have some areas where there are perhaps very custom query building > mechanisms where query structure is highly variable based on user input > (like a search page). you'd get better performance if you could restore > the cache on and just locate those specific queries which have too much > variability in structure, and just disable the cache for those queries > specifically using the compiled_cache execution option ( > https://docs.sqlalchemy.org/en/14/core/connections.html#disabling-or-using-an-alternate-dictionary-to-cache-some-or-all-statements > ) > > > > > > Hope this helps for anyone else that runs into these kinds of issues. > Thanks again Mike for the helpful response! > > Tony > On Friday, October 21, 2022 at 8:08:32 PM UTC+8 Tony Cosentini wrote: > > Hi Mike, > > Thanks for such a fast reply! We tried setting query_cache_size on a > canary environment today, will be rolling it out widely on servers on > Monday and can report back on if it has a noticeable impact. > > After thinking about this more, I think our situation might exacerbate > things a bit, in particular because: > >- We have many engines (about 4 of them are used heavily, but there >are like 9 total). Some are for different databases, others have different >configurations for a database (for example, one has a more aggressive >statement timeout). 
>- We're running behind a Gunicorn server which has 17 worker >processes. Each of these workers processes will have their own caches. >- It's a fairly sizable app (at least for the engines that have a lot >of throughput) so we might be constantly adding keys to the cache and >evicting stale ones (this one is more of a theory though). > > I'll report back if we see any changes. > > Thanks again for the fast reply (and for building such a useful + well > documented library), > Tony > On Friday, October 21, 2022 at 12:20:07 PM UTC+8 Mike Bayer wrote: > > > > On Fri, Oct 21, 2022, at 12:00 AM, 'Tony Cosentini' via sqlalchemy wrote: > > Hi, > > We recently upgr
[sqlalchemy] Breaking Integration with Locust Tests? Having a hard time debugging
I have a model that uses ChoiceType like an enum for my FastAPI app. As soon as I install *locust* from pip, I get errors like this:

    from sqlalchemy_utils import ChoiceType
      File "/Users/theo/.local/share/virtualenvs/optis2-data-service-iqC41Vso/lib/python3.9/site-packages/sqlalchemy_utils/__init__.py", line 59, in
        from .types import (  # noqa
      File "/Users/theo/.local/share/virtualenvs/optis2-data-service-iqC41Vso/lib/python3.9/site-packages/sqlalchemy_utils/types/__init__.py", line 42, in
        from .uuid import UUIDType  # noqa
      File "/Users/theo/.local/share/virtualenvs/optis2-data-service-iqC41Vso/lib/python3.9/site-packages/sqlalchemy_utils/types/uuid.py", line 8, in
        sqlalchemy_version = tuple([int(v) for v in __version__.split(".")])
      File "/Users/theo/.local/share/virtualenvs/optis2-data-service-iqC41Vso/lib/python3.9/site-packages/sqlalchemy_utils/types/uuid.py", line 8, in
        sqlalchemy_version = tuple([int(v) for v in __version__.split(".")])
    ValueError: invalid literal for int() with base 10: '0b2'

I've tried a number of trial and error changes - but I have no idea where this error stems from. Is there anything I can do to support this?

Thanks in advance

--
SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description.
---
You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/203e2aaf-0daa-4b97-bed5-beaf59c3b469n%40googlegroups.com.
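The traceback shows sqlalchemy_utils calling int() on every dotted component of the SQLAlchemy version string, which crashes when the installed SQLAlchemy is a pre-release such as "1.4.0b2" (plausibly pulled in as a dependency when locust was installed). A small stand-alone sketch of the failure, plus a tolerant parse as a hypothetical workaround; parse_version is an invented helper, not part of either library:

```python
import re

def parse_version(version: str) -> tuple:
    """Keep only the leading digits of each dotted component, so a
    pre-release tag such as '0b2' does not crash int()."""
    parts = []
    for piece in version.split("."):
        match = re.match(r"\d+", piece)
        if match is None:
            break
        parts.append(int(match.group()))
    return tuple(parts)

# The naive parse from the traceback fails on a pre-release version string:
naive_failed = False
try:
    tuple(int(v) for v in "1.4.0b2".split("."))
except ValueError:
    naive_failed = True
```

In practice the fix is usually to pin SQLAlchemy to a stable release (or upgrade sqlalchemy_utils to a version with a tolerant check) rather than patch the library by hand.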
Re: [sqlalchemy] Large increase in memory use when upgrading from 1.3 to 1.4
Hey, sorry for the crazy delay on this! We ended up turning off the cache in everything via query_cache_size and memory usage returned to previous levels. We also didn't see any noticeable change in CPU usage in our web servers. We did see a pretty noticeable performance regression in a worker job that is also very query heavy. For that we turned the cache back on and CPU usage ended up being lower than previous levels (kind of as expected given the caching gains). I think for our web servers, because of the number of processes + engines, we ended up with a very noticeable jump in memory usage. Additionally, I think when we did have caching turned on, we never really noticed any CPU usage improvements. My guess around this is because the cache might have been thrashing a lot - it's a fairly large code base so it might not have been very effective. Hope this helps for anyone else that runs into these kinds of issues. Thanks again Mike for the helpful response! Tony On Friday, October 21, 2022 at 8:08:32 PM UTC+8 Tony Cosentini wrote: > Hi Mike, > > Thanks for such a fast reply! We tried setting query_cache_size on a > canary environment today, will be rolling it out widely on servers on > Monday and can report back on whether it has a noticeable impact. > > After thinking about this more, I think our situation might exacerbate > things a bit, in particular because: > >- We have many engines (about 4 of them are used heavily, but there >are like 9 total). Some are for different databases, others have different >configurations for a database (for example, one has a more aggressive >statement timeout). >- We're running behind a Gunicorn server which has 17 worker >processes. Each of these worker processes will have their own caches. >- It's a fairly sizable app (at least for the engines that have a lot >of throughput) so we might be constantly adding keys to the cache and >evicting stale ones (this one is more of a theory though). > > I'll report back if we see any changes. 
> > Thanks again for the fast reply (and for building such a useful + well > documented library), > Tony > On Friday, October 21, 2022 at 12:20:07 PM UTC+8 Mike Bayer wrote: > >> >> >> On Fri, Oct 21, 2022, at 12:00 AM, 'Tony Cosentini' via sqlalchemy wrote: >> >> Hi, >> >> We recently upgraded our application (a Flask web app) from SQLAlchemy >> 1.3.19 to 1.4.41. >> >> Overall things are stable, but we have noticed a very large increase in >> memory use: >> [image: Screen Shot 2022-10-21 at 11.26.18 AM.png] >> >> Is this from the new query caching feature? I'm planning on getting some >> heap dumps to see if there is something obvious, but thought I'd ask here >> as well. >> >> >> you would be able to tell if you set query_cache_size=0 which then >> resolves the memory issue. >> >> The cache itself uses memory, which can cause memory increases. However >> we have a slight concern for the case of extremely large and highly nested >> SQL constructs that might be generating unreasonably large cache keys. We >> had one user with this problem some months ago and they were not able to >> give us details in order to reproduce the problem. query_cache_size=0 >> would prevent this problem also, but if you have very nested queries, >> particularly with CTEs, we'd be curious if you can isolate particular >> queries that might have that issue. >> >> >> >> The application is using the Postgres dialect. Nothing else was changed >> besides the SQLAlchemy version. It's running in a Docker container with 8 >> GB of RAM allocated to it. >> >> Anyway, I'll continue digging in more, but just asking in case there is >> something obvious, >> Tony
Re: [sqlalchemy] Large increase in memory use when upgrading from 1.3 to 1.4
Hi Mike, Thanks for such a fast reply! We tried setting query_cache_size on a canary environment today, will be rolling it out widely on servers on Monday and can report back on if it has a noticeable impact. After thinking about this more, I think our situation might exacerbate things a bit, in particular because: - We have many engines (about 4 of them are used heavily, but there are like 9 total). Some are for different databases, others have different configurations for a database (for example, one has a more aggressive statement timeout). - We're running behind a Gunicorn server which has 17 worker processes. Each of these workers processes will have their own caches. - It's a fairly sizable app (at least for the engines that have a lot of throughput) so we might be constantly adding keys to the cache and evicting stale ones (this one is more of a theory though). I'll report back if we see any changes. Thanks again for the fast reply (and for building such a useful + well documented library), Tony On Friday, October 21, 2022 at 12:20:07 PM UTC+8 Mike Bayer wrote: > > > On Fri, Oct 21, 2022, at 12:00 AM, 'Tony Cosentini' via sqlalchemy wrote: > > Hi, > > We recently upgraded our application (a Flask web app) from SQLAlchemy > 1.3.19 to 1.4.41. > > Overall things are stable, but we have noticed a very large increase in > memory use: > [image: Screen Shot 2022-10-21 at 11.26.18 AM.png] > > Is this from the new query caching feature? I'm planning on getting some > heap dumps to see if there is something obvious, but thought I'd ask here > as well. > > > you would be able to tell if you set query_cache_size=0 which then > resolves the memory issue. > > The cache itself uses memory, which can cause memory increases. However > we have a slight concern for the case of extremely large and highly nested > SQL constructs that might be generating unreasonably large cache keys. 
We > had one user with this problem some months ago and they were not able to > give us details in order to reproduce the problem. query_cache_size=0 > would prevent this problem also, but if you have very nested queries, > particularly with CTEs, we'd be curious if you can isolate particular > queries that might have that issue. > > > > The application is using the Postgres dialect. Nothing else was changed > besides the SQLAlchemy version. It's running in a Docker container with 8 > GB of RAM allocated to it. > > Anyway, I'll continue digging in more, but just asking in case there is > something obvious, > Tony
[sqlalchemy] Large increase in memory use when upgrading from 1.3 to 1.4
Hi, We recently upgraded our application (a Flask web app) from SQLAlchemy 1.3.19 to 1.4.41. Overall things are stable, but we have noticed a very large increase in memory use: [image: Screen Shot 2022-10-21 at 11.26.18 AM.png] Is this from the new query caching feature? I'm planning on getting some heap dumps to see if there is something obvious, but thought I'd ask here as well. The application is using the Postgres dialect. Nothing else was changed besides the SQLAlchemy version. It's running in a Docker container with 8 GB of RAM allocated to it. Anyway, I'll continue digging in more, but just asking in case there is something obvious, Tony
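[Editor's note] The cache knob discussed in this thread is a `create_engine()` parameter. A minimal sketch (the `sqlite://` URL is just a placeholder for the application's real Postgres DSN):

```python
# Hedged sketch: per-engine control of SQLAlchemy 1.4's SQL compilation cache.
from sqlalchemy import create_engine

# Disable caching entirely -- useful to confirm whether the cache
# is what is driving the memory growth.
engine_no_cache = create_engine("sqlite://", query_cache_size=0)

# Or keep caching but cap it below the default of 500 statements,
# trading some cache hit rate for a smaller memory ceiling.
engine_small_cache = create_engine("sqlite://", query_cache_size=100)
```

With many engines and many worker processes (as described above), the per-process memory cost is roughly additive across engines, which is why capping each cache can matter.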
[sqlalchemy] Re: Filling up a field in a database from a text file, flask
You should ask this in a Flask discussion group or stackoverflow. This is a sqlalchemy group and most users here have no experience with Flask. On Friday, August 19, 2022 at 4:13:50 PM UTC-4 nand...@gmail.com wrote: > I am trying to fill up a field in a table database with contents of a text > file, but get None as the response when I run the code. Any assistance will > be appreciated. Here is my code: > > # view function - routes.py > ... > @app.route('/add_vlans', methods = ['GET', 'POST']) > @login_required > def add_vlans(): > form = AddVlanForm(current_user.routername) > if form.validate_on_submit(): > with open("show_vlans", "r") as vlans: > vlan_output = vlans.read() > rt = Router(raw_vlans=vlan_output) #raw_vlans - field variable > name > db.session.add(rt) > db.session.commit() > return render_template('_show_vlans.html', title='Router Vlans') > > #forms.py > class AddVlanForm(FlaskForm): > raw_vlans = TextAreaField('Router vlan output:', > validators=[Length(min=0, max=140)]) > submit = SubmitField('Get Vlans on Router') > > #templates - router.html > {% extends "base.html" %} > > {% block content %} > > Router: {{ router.routername }} > > {% if router.about_router %} About router: {{ router.about_router > }} {% endif %} > > Vlans on {{ router.routername }} > {% for vlan in vlans %} > > {% include '_vlan.html' %} > > {% endfor %} > {% if router == current_user %} > Edit Router > {% endif %} > Vlan Configurations > Show Router Vlans > > {% include '_show_vlans.html' %} > > {% endblock %} > > #sub-template - _show_vlans.html > > > Vlans on router {{ current_user.routername }}: > {{ current_user.raw_vlans }} > > > > > I get the response: > Vlans on router router20:None > -- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. 
[sqlalchemy] Re: SQL Alchemy TypeDecorator
This is usually done in the ORM with functions, and possibly hybrids. See https://docs.sqlalchemy.org/en/14/orm/mapped_attributes.html On Tuesday, August 9, 2022 at 1:55:45 PM UTC-4 Justvuur wrote: > Hi there, > > Is there a way to pass/access model data for a row within the > "process_result_value" method of a TypeDecorator? > > For example, I want to decrypt the value but only if another value in the > same model row is true/false. > > Regards, > Justin
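[Editor's note] A sketch of the mapped-attribute approach the reply points at: a `hybrid_property` can see every column of the row, unlike `process_result_value`, which only ever receives the one column's value. The model and the string-reversal "decryption" are purely illustrative:

```python
# Hedged sketch: per-row conditional "decryption" via a hybrid property.
from sqlalchemy import Boolean, Column, Integer, String, create_engine
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Secret(Base):
    __tablename__ = "secrets"
    id = Column(Integer, primary_key=True)
    encrypted = Column(Boolean, default=False)
    payload = Column(String(100))

    @hybrid_property
    def readable_payload(self):
        # Full row is available here: only "decrypt" (reversal stands in
        # for real decryption) when the flag column says so.
        if self.encrypted:
            return self.payload[::-1]
        return self.payload

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(Secret(id=1, encrypted=True, payload="terces"))
    s.commit()
    value = s.get(Secret, 1).readable_payload

print(value)  # secret
```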
[sqlalchemy] Re: SQLAlchemy exists() used with first() ?
I think you misunderstand `exists()` in SQLAlchemy and SQL. `exists()` is a convenience function to create a SQL `EXISTS` clause, which is an operator used for filtering subqueries. Your `form_exists` is just a subquery. It is supposed to be used within a query which would then limit the query, not executed by itself. See https://docs.sqlalchemy.org/en/14/core/selectable.html?#sqlalchemy.sql.expression.Exists https://docs.sqlalchemy.org/en/14/core/selectable.html?#sqlalchemy.sql.expression.exists https://docs.sqlalchemy.org/en/14/orm/query.html#sqlalchemy.orm.Query.exists https://www.w3schools.com/sql/sql_exists.asp On Tuesday, August 9, 2022 at 2:05:56 PM UTC-4 Justvuur wrote: > Hi there, > > When creating another column property in a model that makes use of the > exists(), I noticed that the exists does a "select *". > > *For example, the form exists below:* > class Contact(ResourceMixin, db.Model): > __tablename__ = 'contacts' > > form_contacts = db.relationship(FormContact, backref='contact', > passive_deletes=True) > > form_exists = column_property( > exists().where(and_(FormContact.form_contact_id == id, > FormContact.partnership_id == partnership_id > )).label('form_contact_exist'), deferred=True > ) > *prints out to be something like:* > exists(select * from form_contacts where form_contacts.form_contact_id == id and form_contacts.partnership_id == partnership_id) > > Does the exists "stop" the query once one row is returned or does it > execute the entire select all query? > If the latter, is there a way to limit the select all to one row? > > Regards, > Justin >
[sqlalchemy] Re: Shared ORM objects between threads
Thanks Jonathan, that should help me move forward. On Tuesday, July 5, 2022 at 12:51:52 PM UTC-4 Jonathan Vanasco wrote: > > > I'm guessing we shouldn't be passing ORM objects to threads, but rather > just passing IDs and then querying the full object in the thread function > > Correct. > > Database Connections and Sessions are not threadsafe, they are > thread-local. See > https://docs.sqlalchemy.org/en/14/orm/session_basics.html#is-the-session-thread-safe > > Consequently, all objects are thread-local. > > If you are simply dealing with read-only concepts, you can `.expunge` an > object from one session/thread and `.merge` it into another > session/thread. This is often playing with fire though, as you must be > prepared to handle situations where the data may have changed as that type > of work is not transaction-safe. See: > https://docs.sqlalchemy.org/en/14/orm/session_state_management.html > > > > On Thursday, June 30, 2022 at 4:02:23 PM UTC-4 ben.c...@level12.io wrote: > >> Hi, >> >> I'm troubleshooting some code that uses thread_pool_executor to run a >> function, passing an ORM entity as an argument. Within the executed >> function, we are sometimes receiving a "ValueError: generator already >> executing" when accessing a related entity via a relationship property. >> >> I'm guessing we shouldn't be passing ORM objects to threads, but rather >> just passing IDs and then querying the full object in the thread function. >> Does that hunch sound correct? >>
[sqlalchemy] Re: Shared ORM objects between threads
> I'm guessing we shouldn't be passing ORM objects to threads, but rather just passing IDs and then querying the full object in the thread function Correct. Database Connections and Sessions are not threadsafe, they are thread-local. See https://docs.sqlalchemy.org/en/14/orm/session_basics.html#is-the-session-thread-safe Consequently, all objects are thread-local. If you are simply dealing with read-only concepts, you can `.expunge` an object from one session/thread and `.merge` it into another session/thread. This is often playing with fire though, as you must be prepared to handle situations where the data may have changed as that type of work is not transaction-safe. See: https://docs.sqlalchemy.org/en/14/orm/session_state_management.html On Thursday, June 30, 2022 at 4:02:23 PM UTC-4 ben.c...@level12.io wrote: > Hi, > > I'm troubleshooting some code that uses thread_pool_executor to run a > function, passing an ORM entity as an argument. Within the executed > function, we are sometimes receiving a "ValueError: generator already > executing" when accessing a related entity via a relationship property. > > I'm guessing we shouldn't be passing ORM objects to threads, but rather > just passing IDs and then querying the full object in the thread function. > Does that hunch sound correct? >
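[Editor's note] The expunge/merge hand-off described above can be sketched as follows; a second Session stands in for "another thread" (in real code each thread would create and own its own Session), and the model is illustrative:

```python
# Hedged sketch: detach an object from one Session, re-attach a copy
# in another via merge().  Read-only use only, per the caveats above.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# Session 1 (e.g. the main thread) loads the object, then detaches it.
s1 = Session(engine)
s1.add(User(id=1, name="alice"))
s1.commit()
user = s1.get(User, 1)
s1.expunge(user)  # now detached: safe to hand off, but treat as read-only
s1.close()

# Session 2 (the "worker thread") attaches its own copy via merge(),
# which re-selects current state from the database.
s2 = Session(engine)
local_user = s2.merge(user)
name_seen = local_user.name
s2.close()
print(name_seen)  # alice
```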
[sqlalchemy] Shared ORM objects between threads
Hi, I'm troubleshooting some code that uses thread_pool_executor to run a function, passing an ORM entity as an argument. Within the executed function, we are sometimes receiving a "ValueError: generator already executing" when accessing a related entity via a relationship property. I'm guessing we shouldn't be passing ORM objects to threads, but rather just passing IDs and then querying the full object in the thread function. Does that hunch sound correct?
Re: [sqlalchemy] simple query takes too long
When you select in the database UI tool, you are just displaying raw data. When you select within your code snippets above, Python is creating pandas DataFrame objects for the results. These two concepts are not comparable at all. Converting the SQL data to Python data structures in pandas (and SQLAlchemy's ORM) is a lot of overhead - and that grows with the result size. You can use memory and code profiling tools to explore this and see where the issues are. The best approach, though, is what Philip suggested above: take pandas out of the loop entirely so you can see how Python/SQLAlchemy handles the raw data. On Wednesday, June 8, 2022 at 9:28:38 AM UTC-4 Trainer Go wrote: > Hello Phil, > > I tested both, and without printing the result. > > table_df = pd.read_sql_query('SELECT ...', engine) > #print(table_df) > #query = "SELECT ..." > #for row in conn.execute(query).fetchall(): > #pass > > > Both have nearly the same runtime, so this is not my problem. And yes, > they are the same queries, because I copy-pasted the SELECT from my DB UI, where > I first tested the results and the runtime, and I expected the same runtime > in my program, but no ;) > > Greeting Manuel > > Philip Semanchuk schrieb am Mittwoch, 8. Juni 2022 um 15:04:08 UTC+2: > >> >> >> > On Jun 8, 2022, at 8:29 AM, Trainer Go wrote: >> > >> > When I'm using pandas with pd.read_sql_query() >> > with chunksize to minimize the memory usage there is no difference >> between both runtimes.. >> >> Do you know that, or is that speculation? >> >> > >> > table_df = pd.read_sql_query('''select ...''', engine, chunksize = 3) >> > >> > for df in table_df: >> > print(df) >> > >> > the runtime is nearly the same, like 5 minutes >> >> Printing to the screen also takes time, and your terminal probably >> buffers the results, which requires memory allocation. I’m not saying this >> is your problem (it probably isn’t), but your test still involves pandas >> and your terminal, both of which cloud the issue. You would benefit from >> simplifying your tests. 
>> >> Did you try this suggestion from my previous email? >> >> >> > for row in conn.execute(my_query).fetchall(): >> > pass >> >> Also, are you 100% sure you’re executing the same query from SQLAlchemy >> that you’re pasting into your DB UI? >> >> Cheers >> Philip >> >> >> >> > >> > >> > >> > #print(table_df) result: #generator object SQLDatabase._query_iterator >> at 0x0DC69C30> >> > I dont know if the query will be triggered by using print(table_df) the >> result is generator object SQLDatabase._query_iterator at 0x0DC69C30> >> > >> > but the runtime is 6 seconds like in the DBUI im using. >> > >> > I have no clue what to do. >> > >> > Greetings Manuel >> > >> > Trainer Go schrieb am Mittwoch, 8. Juni 2022 um 09:27:04 UTC+2: >> > thank you Philip, >> > >> > I will test it today. >> > >> > >> > Greetings Manuel >> > >> > Philip Semanchuk schrieb am Dienstag, 7. Juni 2022 um 17:13:28 UTC+2: >> > >> > >> > > On Jun 7, 2022, at 5:46 AM, Trainer Go wrote: >> > > >> > > Hello guys, >> > > >> > > Im executing 2 queries in my python program with sqlalchemy using the >> pyodbc driver. >> > > The database is a Adaptive SQL Anywhere Version 7 32 Bit. >> > > >> > > When im executing the queries in a DB UI it takes 5-6 seconds for >> both together and when im using the same queries in my python programm it >> takes 5-6 minutes instead of 6 seconds. What im doing wrong? Im new at >> this. >> > >> > To start, debug one query at a time, not two. >> > >> > Second, when you test a query in your DB UI, you’re probably already >> connected to the database. Your Python program has to make the connection — >> that’s an extra step, and it might be slow. If you step through the Python >> program in the debugger, you can execute one statement at a time (the >> connection and the query) to understand how long each step takes. That will >> help to isolate the problem. >> > >> > Third, keep in mind that receiving results takes time too. 
If your DB >> UI is written in C or some other language that allocates memory very >> efficiently, it might be a lot faster than buildin
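[Editor's note] Philip's "fetchall with no pandas" suggestion from this thread can be made concrete like this; an SQLite in-memory database and a toy table stand in for the real Sybase connection and query, but the structure of the measurement is the point:

```python
# Hedged sketch: time the raw SQLAlchemy fetch with pandas out of the loop.
import time
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # placeholder for the real DSN
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t VALUES (1), (2), (3)"))

    # Time ONLY execute + fetchall: no DataFrame construction, no printing.
    start = time.perf_counter()
    rows = conn.execute(text("SELECT x FROM t")).fetchall()
    elapsed = time.perf_counter() - start

print(len(rows), f"fetched in {elapsed:.6f}s")
```

If this raw fetch is already slow against the real database, the problem is in the driver/network layer; if it is fast, the overhead is in building Python/pandas objects from the result.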
[sqlalchemy] Re: SQLAlchemy connection to Sybase Adaptive Server Anywhere Version 7 via TCP/IP
thanks, gord! On Thursday, April 14, 2022 at 12:30:44 PM UTC-4 Gord Thompson wrote: > > Der Datenquellenname wurde nicht gefunden, und es wurde kein > Standardtreiber angegeben > > "The data source name was not found and no default driver was specified" > > Use > > import pyodbc > > print(pyodbc.drivers()) > > to view the names of the ODBC drivers that are available to your > application. > > On Thursday, April 14, 2022 at 3:35:52 AM UTC-6 Trainer Go wrote: > >> I tried to connect to my database but I'm getting an InterfaceError and I >> don't know how to solve it. >> >> connection_string = ( >> "DRIVER=Adaptive Server Anywhere 7.0;" >> "SERVER=IP;" >> "PORT=Port;" >> "UID=ID;PWD=PASSWORD;" >> "DATABASE=NameOfDatabase;" >> "charset=utf8;" >> ) >> connection_url = URL.create( >> "sybase+pyodbc", >> query={"odbc_connect": connection_string} >> ) >> engine = create_engine(connection_url) >> >> conn = engine.connect() >> >> InterfaceError: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Der >> Datenquellenname wurde nicht gefunden, und es wurde kein Standardtreiber >> angegeben (0) (SQLDriverConnect)') >> InterfaceError: (pyodbc.InterfaceError) ('IM002', '[IM002] >> [Microsoft][ODBC Driver Manager] Der Datenquellenname wurde nicht gefunden, >> und es wurde kein Standardtreiber >> angegeben (0) (SQLDriverConnect)') >> (Background on this error at: http://sqlalche.me/e/14/rvf5) >> >> I have installed the driver on my computer and it's called Adaptive >> Server Anywhere 7.0, so I don't know where the problem is... >> >> Jonathan Vanasco schrieb am Donnerstag, 14. April 2022 um 00:07:06 UTC+2: >> >>> The Sybase dialect was deprecated from first-party support by SQLAlchemy >>> and is currently unsupported. 
>>> >>> Gord Thompson, a frequent contributor to the core SQLAlchemy >>> project, has generously taken over responsibility for the original >>> dialect as a third-party dialect: >>> >>> https://github.com/gordthompson/sqlalchemy-sybase >>> >>> In addition to offering some excellent code, his repo offers a wiki and >>> some documentation - both of which should help. >>> >>> >>> On Tuesday, April 12, 2022 at 11:10:40 AM UTC-4 Trainer Go wrote: >>> >>>> I'm a bit lost and need some help. >>>> >>>> I'm trying to set up a database connection with SQLAlchemy to a Sybase >>>> Adaptive Server Anywhere Version 7 and I don't know how. >>>> >>>> I would be really happy if somebody could help me. >>>> >>>> Thanks in advance. >>>> >>>> >>>> Greetings Mae >>>>
[sqlalchemy] Re: SQLAlchemy connection to Sybase Adaptive Server Anywhere Version 7 via TCP/IP
The Sybase dialect was deprecated from first-party support by SQLAlchemy and is currently unsupported. Gord Thompson, a frequent contributor to the core SQLAlchemy project, has generously taken over responsibility for the original dialect as a third-party dialect: https://github.com/gordthompson/sqlalchemy-sybase In addition to offering some excellent code, his repo offers a wiki and some documentation - both of which should help. On Tuesday, April 12, 2022 at 11:10:40 AM UTC-4 Trainer Go wrote: > I'm a bit lost and need some help. > > I'm trying to set up a database connection with SQLAlchemy to a Sybase > Adaptive Server Anywhere Version 7 and I don't know how. > > I would be really happy if somebody could help me. > > Thanks in advance. > > > Greetings Mae >
[sqlalchemy] Re: create database name lowercase?
I'm not aware of any recent changes in the libraries that would cause that behavior. It may be how you are using the libraries or raw SQL. PostgreSQL will convert database names to lowercase UNLESS the database name is in quotes. These will all create `abc`:

CREATE DATABASE abc;
CREATE DATABASE Abc;
CREATE DATABASE ABc;
CREATE DATABASE ABC;
CREATE DATABASE aBc;
CREATE DATABASE aBC;
CREATE DATABASE abC;

These will create different databases:

CREATE DATABASE "abc";
CREATE DATABASE "Abc";
CREATE DATABASE "ABc";
CREATE DATABASE "ABC";
.. etc..

On Thursday, March 31, 2022 at 2:39:32 PM UTC-4 ois...@gmail.com wrote: > Hi everyone, I have a question. > > I use PostgreSQL. > Before, creating a database with an uppercase-and-lowercase name worked with no problem. > Later SQLAlchemy was updated to version 1.4. > I don't know in which version it started, > but now when creating a database with an uppercase-and-lowercase name, the database name always ends up lowercase. > As a result, using DROP DATABASE will fail. > > I am currently using: > Arch Linux > postgresql V13.6-1 > sqlalchemy V1.4.33 > psycopg2 V2.93 > dictalchemy3 V1.0.0 > > E.g.: > engine = sqlalchemy.create_engine( > "postgresql://xxx:yyy@localhost/postgres" > ) > conn = engine.connect() > conn.execute( "commit" ) > stt = "CREATE DATABASE ABCDEF" > conn.execute(stt) > conn.close() > > === > The database name will become abcdef > > I'm not sure if this is caused by sqlalchemy or psycopg2? > > Thank you everyone. >
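[Editor's note] The case rules above can be observed from Python through SQLAlchemy's PostgreSQL identifier preparer: it only adds double quotes around names that need them (for example, names containing uppercase letters), and quoting is exactly what stops PostgreSQL from folding the name to lowercase. A small sketch (you would normally never call the preparer yourself; quoting happens automatically during SQL compilation):

```python
# Hedged sketch: how SQLAlchemy decides whether to quote an identifier.
from sqlalchemy.dialects.postgresql.base import PGDialect

preparer = PGDialect().identifier_preparer
print(preparer.quote("abcdef"))  # abcdef   -> unquoted, PG folds case anyway
print(preparer.quote("ABCDEF"))  # "ABCDEF" -> quoted, case preserved
```

So passing a raw string like `"CREATE DATABASE ABCDEF"` to `conn.execute()` bypasses SQLAlchemy's quoting entirely, and PostgreSQL itself folds the unquoted name to `abcdef`; writing `CREATE DATABASE "ABCDEF"` preserves the case.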
Re: [sqlalchemy] Re: many-to-many orm warnings
I'm sorry you're getting bit by this messaging - but also glad that I'm not the only one. This got me a while ago too. SQLAlchemy just uses a bare field name when emitting the warning and accepting the `overlaps` arguments. In more complex models with 3+ tables that have standardized relationship names, it's hard to tell what caused the issue, and fixing one relationship can unknowingly affect others. There is a related ticket/PR. I'm not sure if you can pull it against the current main branch, but you can do a manual patch of the warnings code locally to make the output better: https://github.com/sqlalchemy/sqlalchemy/issues/7309 - Make the overlaps arguments use fully-qualified names There's also a related ticket to improve the errors when not calling `configure_mappers` as above: https://github.com/sqlalchemy/sqlalchemy/issues/7305 On Thursday, March 10, 2022 at 12:27:33 PM UTC-5 Michael Merickel wrote: > Thank you Mike. Really appreciate you unpacking my rambling. This works > for me. I found a few spots in our codebase where we were relying on > append() working because it really was a simple link table, but I rewrote > them to just create the link manually and add it to the session, which also > causes them to appear in the lists. > > On Thu, Mar 10, 2022 at 9:17 AM Mike Bayer > wrote: > >> hey there. >> >> The warnings go away entirely by making Parent.children viewonly=True, >> which for this type of mapping is recommended: >> >> class Parent(Base): >> __tablename__ = "left" >> id = Column(Integer, primary_key=True) >> children = relationship( >> "Child", secondary=Association.__table__, backref="parents", >> viewonly=True >> >> ) >> >> >> you wouldn't want to append new records to Parent.children because that >> would create invalid Association rows (missing extra_data). 
>> >> The warning box at the end of >> https://docs.sqlalchemy.org/en/14/orm/basic_relationships.html#association-object >> >> discusses this situation and the desirability of making the relationship >> which includes "secondary" as viewonly=True. >> >> hope this helps >> >> >> On Wed, Mar 9, 2022, at 8:09 PM, Michael Merickel wrote: >> >> Sorry for the rambling, it's been difficult for me to figure out what >> question to ask because I'm so confused. Below is the minimum viable >> example that produces no warnings with respect to the overlaps flags and I >> cannot explain hardly any of them. For example, why does Child.parents >> require "child_links,parent,child"? 3 values that seem to be somewhat >> unrelated and are at the very least definitely on different models? >> >> class Association(Base): >> __tablename__ = 'association' >> left_id = Column(ForeignKey('left.id'), primary_key=True) >> right_id = Column(ForeignKey('right.id'), primary_key=True) >> extra_data = Column(String(50)) >> >> parent = relationship('Parent', back_populates='child_links') >> child = relationship('Child', back_populates='parent_links') >> >> class Parent(Base): >> __tablename__ = 'left' >> id = Column(Integer, primary_key=True) >> >> children = relationship( >> 'Child', >> secondary=Association.__table__, >> back_populates='parents', >> overlaps='child,parent', >> ) >> child_links = relationship( >> 'Association', >> back_populates='parent', >> overlaps='children', >> ) >> >> class Child(Base): >> __tablename__ = 'right' >> id = Column(Integer, primary_key=True) >> >> parents = relationship( >> 'Parent', >> secondary=Association.__table__, >> back_populates='children', >> overlaps='child_links,parent,child', >> ) >> parent_links = relationship( >> 'Association', >> back_populates='child', >> overlaps='children,parents', >> ) >> >> >> On Wed, Mar 9, 2022 at 4:50 PM Michael Merickel >> wrote: >> >> I think ultimately I want the overlaps config but reading through >> 
https://docs.sqlalchemy.org/en/14/errors.html#relationship-x-will-copy-column-q-to-column-p-which-conflicts-with-relat
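Putting Mike's suggestion into the association-object mapping from the thread, a minimal self-contained sketch (1.4/2.0-style imports, `back_populates` instead of `backref`) looks like this; with both `secondary` relationships marked `viewonly=True`, no `overlaps` arguments are needed:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Association(Base):
    __tablename__ = "association"
    left_id = sa.Column(sa.ForeignKey("left.id"), primary_key=True)
    right_id = sa.Column(sa.ForeignKey("right.id"), primary_key=True)
    extra_data = sa.Column(sa.String(50))
    parent = relationship("Parent", back_populates="child_links")
    child = relationship("Child", back_populates="parent_links")

class Parent(Base):
    __tablename__ = "left"
    id = sa.Column(sa.Integer, primary_key=True)
    # Read-only convenience path across the association table; writes go
    # through child_links, so extra_data can never be silently skipped.
    children = relationship(
        "Child", secondary="association", viewonly=True, back_populates="parents"
    )
    child_links = relationship("Association", back_populates="parent")

class Child(Base):
    __tablename__ = "right"
    id = sa.Column(sa.Integer, primary_key=True)
    parents = relationship(
        "Parent", secondary="association", viewonly=True, back_populates="children"
    )
    parent_links = relationship("Association", back_populates="child")

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    p, c = Parent(id=1), Child(id=1)
    # Create the link explicitly so extra_data is supplied.
    session.add(Association(parent=p, child=c, extra_data="x"))
    session.commit()
    # The viewonly relationship still reads the link back.
    assert [child.id for child in p.children] == [1]
```

This mirrors the approach the OP landed on: append `Association` objects directly, and treat `children`/`parents` as read-only views.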
[sqlalchemy] Setting up indexes in __table_args__ that depend on columns created in __declare_first__
I'm trying to create a mixin that will set up FK columns that are dynamically named based on the name of the parent, as opposed to a static name like `parent_id`. If I were doing the latter, I could easily use `declared_attr`, but since I want the former, I thought I could use `__declare_first__()`. It works, except that I also need to set up an index on the FK column. When I try to do that with `__table_args__()`, I get an exception because `__table_args__()` gets called before `__declare_first__()`.

class FlawMixin:
    @sa.orm.declared_attr
    def __tablename__(cls):
        return f'{cls.__flaw_ident__}_flaws'

    @sa.orm.declared_attr
    def __table_args__(cls):
        return (
            sa.Index(
                f'ix_{cls.__flaw_ident__}_flaws_{cls.__flaw_ident__}',
                f'{cls.__flaw_ident__}_id',
            ),
        )

    @classmethod
    def __declare_first__(cls):
        setattr(cls, f'{cls.__flaw_ident__}_id', sa.Column(
            sa.Integer,
            sa.ForeignKey(cls.__flaw_parent__.id, ondelete='cascade'),
            nullable=False,
        ))
        setattr(cls, cls.__flaw_ident__, sa.orm.relationship(cls.__flaw_parent__, lazy='raise_on_sql'))

I realize I have an event ordering issue with the way this is set up. I'm just not sure what the correct way is to solve it. Thanks in advance for any help you can provide.
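One way around the ordering problem (a sketch, not the only option) is to keep the Python attribute statically named and let `declared_attr` supply the dynamic *database* column name via `Column`'s first positional argument. The column then exists by the time the table is built, and `index=True` removes the need for `__table_args__` entirely. The names `Parent` and `WidgetFlaw` here are made up for illustration:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, declared_attr, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "widgets"
    id = sa.Column(sa.Integer, primary_key=True)

class FlawMixin:
    # Subclasses are assumed to set __flaw_ident__ and __flaw_parent__.
    @declared_attr
    def __tablename__(cls):
        return f"{cls.__flaw_ident__}_flaws"

    @declared_attr
    def parent_id(cls):
        # Static attribute name, dynamic column name: the first positional
        # argument to Column sets the name used in the database.
        return sa.Column(
            f"{cls.__flaw_ident__}_id",
            sa.Integer,
            sa.ForeignKey(cls.__flaw_parent__.id, ondelete="cascade"),
            nullable=False,
            index=True,  # index is created alongside the column, no ordering issue
        )

    @declared_attr
    def parent(cls):
        return relationship(cls.__flaw_parent__, lazy="raise_on_sql")

class WidgetFlaw(FlawMixin, Base):
    __flaw_ident__ = "widget"
    __flaw_parent__ = Parent
    id = sa.Column(sa.Integer, primary_key=True)

# The dynamically named column and its index are present at mapping time.
assert "widget_id" in WidgetFlaw.__table__.c
assert any("widget_id" in ix.columns for ix in WidgetFlaw.__table__.indexes)
```

The trade-off is that Python code refers to `obj.parent_id` / `obj.parent` rather than a per-class attribute name; only the schema-side names stay dynamic.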
Re: [sqlalchemy] How to SELECT computed data using SQLAlchemy?
That worked brilliantly, thanks so much for your help! Very much appreciated :) On Tuesday, 14 December 2021 at 18:16:02 UTC Mike Bayer wrote: > > > On Tue, Dec 14, 2021, at 12:26 PM, 'Daniel Grindrod' via sqlalchemy wrote: > > Hi Michael, > > Thanks for such a quick reply. > I enjoyed reading it! I actually inherited this API (I swear I'm not just > making excuses!) from a colleague who left a few months earlier, so it's > very much been a case of 'Figuring it out as I go along'. > > Apologies for the incomplete code - despite it not being particularly > exciting code, I wanted to double check that I'm allowed to post it > publicly. > So the original (complete) code for this function is as follows: > > def similar_structure_matches(smiles, similarity_threshold): > > struc_sim_query = db.select([structures_tbl, text(":q_smiles as > query_smiles, jc_tanimoto(canonical_smiles, :q_smiles) as > similarity").bindparams(q_smiles=smiles)]). \ > where(text("jc_tanimoto(canonical_smiles, :q_smiles) >= :q_sim"). > bindparams(q_smiles=smiles, q_sim=similarity_threshold)) > > struc_sim_res = struc_sim_query.execute().fetchall() > > if len(struc_sim_res) == 0: > return '', 204 > > returnMatchLimaSchema(many=True).dump(struc_sim_res) > The above code is used to generate tanimoto (similarity) scores for the > queried structure against each structure in the database( SMILES describe > chemical structures). > As I understand it, the jc_tanimoto function comes from the Chemaxon > Cartridge which we have installed on our Oracle server (Cartridge API | > ChemAxon Docs > <https://docs.chemaxon.com/display/docs/cartridge-api.md#src-1803469-cartridgeapi-jc-tanimoto> > ). > > I'm not entirely sure how to call this function, without it being wrapped > by text(). 
> As I understand it, the code you sent across would be applying the comparison (now jc_tanimoto) function in the Python; not within Oracle itself (of course, that was impossible for you to predict with the incomplete code I sent across).
>
> Could you please advise on how to correctly structure this query?
>
> There's a construct in SQLAlchemy called func that renders a SQL-function-like syntax for any arbitrary word, like this:
>
> >>> from sqlalchemy import func
> >>> from sqlalchemy import select
> >>> print(select([func.jc_tanimoto('some data').label("my_label")]))
> SELECT jc_tanimoto(:jc_tanimoto_1) AS my_label
>
> so as long as there are no unusual SQL syntaxes in play you can use func.<name> to generate SQL for any SQL function with parameters.
>
> Thanks again,
> Dan
>
> On Tuesday, 14 December 2021 at 13:31:12 UTC Mike Bayer wrote:
>
> On Tue, Dec 14, 2021, at 5:40 AM, 'Daniel Grindrod' via sqlalchemy wrote:
>
> Hi all,
>
> I'm working on a REST API which is built using Flask-SQLAlchemy and Connexion. I'm fairly new to SQLAlchemy, but it's been brilliant so far :) This API uses SQLAlchemy 1.3.16, and connects to an Oracle Database (12c 12.1.0.1.0 64bit).
>
> I'm having an issue generating the correct SQL from a SQLAlchemy query. I would really appreciate any help. The troublesome function is shown below.
>
> def similar_matches(input_descriptor, threshold, lim=None, offset):
>     query = db.select([tbl, text(":q_descriptors as query_descriptors, comparison(descriptors, :q_descriptors) as similarity")bindparams(q_descriptor=input_descriptor).\
>         where(text("comparison(descriptors, :q_descriptors) >= q_threshold").bindparams(q_descriptor=input_descriptor, q_threshold=threshold)
>
> heya -
>
> it's early here but I almost want to be able to tell a story about that pattern above, which has select(text("cola, colb, colc, ...")) in it.
> It's kind of an "anti-unicorn" for me, as I've done many refactorings to the result-processing part of SQLAlchemy's engine and each time I do so, there's some internal handwringing over, "what if someone is SELECTING from a text() that has multiple columns comma-separated in them?", which specifically is a problem because it means we can't positionally link the columns we get back from the cursor to the Python expressions that are in the select() object, and each time it's like, "nah, nobody would do that", or, "nah, nobody *should* do that", but yet, as there's not a straightforward way to detect/warn for that, there's a whole set of code / commentary at https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/engine/cursor.py#L325 which wonders if we'd ever see this.
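For reference, the `func` approach Mike describes can be combined with `literal()` so the whole SELECT has real column structure, which is what lets the Oracle dialect wrap the query for limit/offset. A sketch with stand-in names (`jc_tanimoto` is the cartridge function from the thread; the table and values are hypothetical, and the call style is 1.4/2.0 `select()` rather than the 1.3 list form used in the thread):

```python
from sqlalchemy import column, func, literal, select, table

# Lightweight Core stand-in for the real mapped table.
structures = table("structures", column("id"), column("canonical_smiles"))

smiles, threshold = "CCO", 0.8  # example inputs

# func.<name> renders any SQL function call; SQLAlchemy does not need to
# know jc_tanimoto, it just emits it with bound parameters.
similarity = func.jc_tanimoto(structures.c.canonical_smiles, smiles)

stmt = (
    select(
        structures,
        literal(smiles).label("query_smiles"),
        similarity.label("similarity"),
    )
    .where(similarity >= threshold)
    .limit(10)
)
print(stmt)
```

Because every result column is now a real expression with a label, adding `.limit()` / `.offset()` produces a wrapped query that still selects `query_smiles` and `similarity` in the outer SELECT.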
Re: [sqlalchemy] How to SELECT computed data using SQLAlchemy?
Hi Michael, Thanks for such a quick reply. I enjoyed reading it! I actually inherited this API (I swear I'm not just making excuses!) from a colleague who left a few months earlier, so it's very much been a case of 'Figuring it out as I go along'. Apologies for the incomplete code - despite it not being particularly exciting code, I wanted to double check that I'm allowed to post it publicly. So the original (complete) code for this function is as follows: def similar_structure_matches(smiles, similarity_threshold): struc_sim_query = db.select([structures_tbl, text(":q_smiles as query_smiles, jc_tanimoto(canonical_smiles, :q_smiles) as similarity").bindparams(q_smiles=smiles)]). \ where(text("jc_tanimoto(canonical_smiles, :q_smiles) >= :q_sim"). bindparams(q_smiles=smiles, q_sim=similarity_threshold)) struc_sim_res = struc_sim_query.execute().fetchall() if len(struc_sim_res) == 0: return '', 204 returnMatchLimaSchema(many=True).dump(struc_sim_res) The above code is used to generate tanimoto (similarity) scores for the queried structure against each structure in the database( SMILES describe chemical structures). As I understand it, the jc_tanimoto function comes from the Chemaxon Cartridge which we have installed on our Oracle server (Cartridge API | ChemAxon Docs <https://docs.chemaxon.com/display/docs/cartridge-api.md#src-1803469-cartridgeapi-jc-tanimoto> ). I'm not entirely sure how to call this function, without it being wrapped by text(). As I understand it, the code you sent across would be applying the comparison (now jc_tanimoto) function in the Python; not within Oracle itself (of course, that was impossible for you to predict with the incomplete code I sent across). Could you please advise on how to correctly structure this query? 
Thanks again, Dan On Tuesday, 14 December 2021 at 13:31:12 UTC Mike Bayer wrote: > > > On Tue, Dec 14, 2021, at 5:40 AM, 'Daniel Grindrod' via sqlalchemy wrote: > > Hi all, > > I'm working on a REST API which is built using Flask-SQLAlchemy and > Connexion. I'm fairly new to SQLAlchemy, but it's been brilliant so far > :) This API uses SQLAlchemy 1.3.16, and connects to an Oracle Database (12c > 12.1.0.1.0 64bit). > <https://stackoverflow.com/posts/70341129/timeline> > > I'm having an issue generating the correct SQL from a SQLAlchemy query. I > would really appreciate any help. The troublesome function is shown below. > def similar_matches(input_descriptor, threshold, lim=None, offset): > > query = db.select([tbl, text(":q_descriptors as query_descriptors, > comparison(descriptors, :q_descriptors) as > similarity")bindparams(q_descriptor=input_descriptor).\ > where( text("comparison(descriptors, :q_descriptors) >= > q_threshold").bindparams(q_descriptor=input_descriptor, q_threshold = > threshold) > > > heya - > > it's early here but I almost want to be able to tell a story about that > pattern above, which has select(text("cola, colb, colc, ...)) in it. > It's kind of an "anti-unicorn" for me, as I've done many refactorings to > the result-processing part of SQLAlchemy's engine and each time I do so, > there's some internal handwringing over, "what if someone is SELECTING from > a text() that has multiple columns comma-separated in them?", which > specifically is a problem because it means we can't positionally link the > columns we get back from the cursor to the Python expressions that are in > the select() object, and each time it's like, "nah, nobody would do that", > or, "nah, nobody *should* do that", but yet, as there's not a > straightforward way to detect/warn for that, there's a whole set of code / > commentary at > https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/engine/cursor.py#L325 > > which wonders if we'd ever see this. 
> and here it is! :) the dark unicorn. So, it's also the source of the issue here, because the Oracle dialect has to restructure the query to simulate limit/offset. So, back into the barn w/ the unicorn, and what we do here is make sure the select() has enough structure so that SQLAlchemy knows what's going on, and here that would look like (note I'm making some syntactical assumptions about the code above, which seems to be incomplete):
>
> from sqlalchemy import literal, func
>
> query = db.select(
>     [
>         tbl,
>         literal(input_descriptor).label("query_descriptors"),
>         func.comparison(tbl.c.descriptors, q_descriptors).label("similarity")
[sqlalchemy] How to SELECT computed data using SQLAlchemy?
Hi all, I'm working on a REST API which is built using Flask-SQLAlchemy and Connexion. I'm fairly new to SQLAlchemy, but it's been brilliant so far :) This API uses SQLAlchemy 1.3.16, and connects to an Oracle Database (12c 12.1.0.1.0 64bit). <https://stackoverflow.com/posts/70341129/timeline> I'm having an issue generating the correct SQL from a SQLAlchemy query. I would really appreciate any help. The troublesome function is shown below. def similar_matches(input_descriptor, threshold, lim=None, offset): query = db.select([tbl, text(":q_descriptors as query_descriptors, comparison(descriptors, :q_descriptors) as similarity")bindparams(q_descriptor=input_descriptor).\ where( text("comparison(descriptors, :q_descriptors) >= q_threshold").bindparams(q_descriptor=input_descriptor, q_threshold = threshold) res = query.execute().fetchall() if len(res)=0 return '', 204 return MatchLimaScheme(many = True).dump(res) This SQLAlchemy code takes two inputs (descriptor and threshold), and searches through each descriptor in an Oracle database, calculating a similarity measure between the queried descriptor and each stored descriptor. All rows where similarity score >= threshold are returned in a JSON. The above code works fine, but returns all results - whereas I want to also be able to include a .offset() and a .limit() (for lazy loading). The code above generates SQL along these lines: SELECT ID, last_modified, descriptors, :q_descriptors as query_descriptors, comparison(descriptors, :q_descriptors) as similarity' FROM tbl WHERE compare(descriptors, :q_descriptors) >= :q_threshold which works well. However, when I add .limit() or .offset() on the end of my query i.e. 
query = db.select([tbl, text(":q_descriptors as query_descriptors, comparison(descriptors, :q_descriptors) as similarity" ).bindparams(q_descriptor=input_descriptor).where( text("comparison(descriptors, :q_descriptors) >= :q_threshold") .bindparams(q_descriptor=input_descriptor,q_threshold = threshold).limit(limit) The SQL generated changes to be along these lines: SELECT ID, last_modified, descriptors FROM (SELECT tbl.ID as ID, tbl.last_modified as last_modified, tbl.descriptors as descriptors, :q_descriptors as query_descriptors, comparison(descriptors, :q_descriptors) as similarity) FROM tbl WHERE compare(descriptors, :q_descriptors) >= :q_threshold WHERE ROWNUM <= :q_limit As a raw SQL query this is fine, but I'm no longer including the query_descriptors and similarity metrics in my SELECT clause. Thus I get a columnNotFoundError. How do I adjust the .select() function above so that my SQL looks more like: SELECT ID, last_modified, descriptors, query_descriptors, similarity FROM (SELECT tbl.ID as ID, tbl.last_modified as last_modified, tbl.descriptors as descriptors,:q_descriptors as query_descriptors comparison(descriptors, :q_descriptors) as similarity' FROM tbl WHERE compare(descriptors, :q_descriptors) >= :q_threshold WHERE ROWNUM <= :q_limit OFFSET :q_offset Basically, I'm looking to explicitly tell SQLAlchemy that I want to SELECT tbl AND query_descriptors AND similarity. I've also been informed that it's bad practice to not include a .order_by() in these queries, but I don't think that is what's causing the issue here. It is on my to do list though. Please let me know if I need to provide more information. Thanks for any help, Dan -- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. 
[sqlalchemy] Re: Calculate rank of single row using subquery
> Is this the most efficient way to do this, or am I over-complicating it? That roughly looks like code that I've implemented in the past. If it works and you don't have issues, I wouldn't worry about efficiency. Stuff like this will often vary based on the underlying table data - the structure, size, etc. Adding indexes on columns can often improve performance a lot. If you're really concerned on optimizing this, the typical approach is to focus on generating the target SQL query that works within the performance constraints you want, and then porting it to sqlalchemy by writing python code that will generate that same output. On Thursday, September 9, 2021 at 7:57:57 AM UTC-4 ursc...@gmail.com wrote: > I'm trying to calculate the rank of a particular (unique) row id by using > a subquery: > > I first calculate the total ranking for a table, Game (using 1.4.23): > > sq = ( > session.query( > Game.id, > Game.score, > func.rank().over(order_by=Game.score.desc()).label("rank"), > ) > .filter(Game.complete == True) > .subquery() > ) > > Then filter by the row ID I want (gameid): > > gamerank = ( > session.query( > sq.c.id, sq.c.score, sq.c.rank > ) > .filter(sq.c.id == gameid) > .limit(1) > .one() > ) > > Game.score is a Float column. Is this the most efficient way to do this, > or am I over-complicating it? > -- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. --- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/e6f92f26-4afb-44d5-a194-f04ace66be2cn%40googlegroups.com.
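The "write the target SQL first, then port it" advice can be checked without a database: building the same shape in Core and printing it shows exactly what will be sent. The table layout below is an assumed stand-in for the `Game` model in the post:

```python
import sqlalchemy as sa

# Core stand-in for the mapped Game model (assumed columns).
metadata = sa.MetaData()
game = sa.Table(
    "game", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("score", sa.Float),
    sa.Column("complete", sa.Boolean),
)

# Inner query: rank all complete games by score.
sq = (
    sa.select(
        game.c.id,
        game.c.score,
        sa.func.rank().over(order_by=game.c.score.desc()).label("rank"),
    )
    .where(game.c.complete)  # boolean column used directly as criterion
    .subquery()
)

# Outer query: pick out the one row of interest (42 is a placeholder id).
stmt = sa.select(sq.c.id, sq.c.score, sq.c.rank).where(sq.c.id == 42)
print(stmt)
```

Comparing this output against a hand-tuned query (and checking the database's query plan for it) is usually more productive than micro-optimizing the Python side; the window function forces a subquery either way, since `WHERE` cannot reference `rank()` directly.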
[sqlalchemy] Re: AsyncEngine always returns UTC time for datetime column
Can you share the database drivers / dialects you use? The discrepancy could be there. On Tuesday, September 14, 2021 at 7:03:27 AM UTC-4 ivan.ran...@themeanalytics.com wrote: > Hi all, > > I am trying to figure it out why AsyncEngine always returns UTC time for > datetime column, any help is appreciated? > > I am working with sqlalchemy core and async engine. Column definition: > *Column('test', DateTime(timezone=True), nullable=False)* > > Also tried with this, but no luck: > > *_connect_args = {'server_settings': {'timezone': > ''America/New_York''}}async_db: AsyncEngine = > create_async_engine(async_url_from_config(), connect_args=_connect_args)* > > When I tried with regular create_engine, everything worked as expected > with the same database data. > Data in the database (configured for New York) contains timezone info: > > *test | 2021-08-26 16:02:46.057288-04* > > BR, > > Ivan > > -- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. --- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/e7f24870-80d0-49f7-99a0-de4d141f33b9n%40googlegroups.com.
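If the async driver is asyncpg (the common choice for `create_async_engine` with PostgreSQL), one thing to check: asyncpg hands back `timestamptz` values as timezone-aware datetimes normalized to UTC, while psycopg2 localizes them to the session timezone. Both denote the same instant and can be rendered in any zone in Python. A sketch using the value from the post (the `-04` offset is assumed to be New York daylight time on that date):

```python
from datetime import datetime, timedelta, timezone

# Value as psycopg2 might render it (New York, UTC-4 on this date).
ny = timezone(timedelta(hours=-4))
local_val = datetime(2021, 8, 26, 16, 2, 46, 57288, tzinfo=ny)

# The same instant as a UTC-normalized aware datetime (asyncpg style).
utc_val = local_val.astimezone(timezone.utc)

assert utc_val == local_val                # equal instants...
assert utc_val.hour == 20                  # ...rendered in UTC
assert utc_val.astimezone(ny).hour == 16   # convert back for display
```

So the discrepancy is usually presentation, not data loss: converting with `astimezone()` at the display layer restores the expected local rendering.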
Re: [sqlalchemy] Change in before_flush (maybe from 1.3 to 1.4?)
What version of 1.4 are you using? It is before 1.4.7? If so, please update to the latest (1.4.23 is current) There was a regression in some early 1.4s that affected flush/commit/transaction in some situations. That was solved in 1.4.7. On Thursday, September 9, 2021 at 8:52:59 AM UTC-4 Mike Bayer wrote: > > > On Sun, Sep 5, 2021, at 6:41 PM, and...@acooke.org wrote: > > > I'm having a hard time understanding some of my own code, which no longer > works. The only way I can see it having worked in the past is if > auto-flushing did not call before_flush, but commit did? Is that possible? > > > autoflushing has always called before_flush. > > > Somehow I was managing to create instances (in Python) and populate them > with auto-generated key values from the database, but then filter out some > objects (those with certain attributes null) and never commit them to the > database (in before_flush). > > I realise this is a somewhat confusing question, sorry, and I can probably > fix my code anyway. I am just looking for some peace of mind in > understanding how on earth it ever worked. > > Thanks, > Andrew > > > -- > SQLAlchemy - > The Python SQL Toolkit and Object Relational Mapper > > http://www.sqlalchemy.org/ > > To post example code, please provide an MCVE: Minimal, Complete, and > Verifiable Example. See http://stackoverflow.com/help/mcve for a full > description. > --- > You received this message because you are subscribed to the Google Groups > "sqlalchemy" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to sqlalchemy+...@googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/sqlalchemy/6a11bc86-54d3-4993-8746-ec865b3003a9n%40googlegroups.com > > <https://groups.google.com/d/msgid/sqlalchemy/6a11bc86-54d3-4993-8746-ec865b3003a9n%40googlegroups.com?utm_medium=email&utm_source=footer> > . 
[sqlalchemy] Re: sqlite setting foreign_keys=off temporarily
The first two things I would look into: 1. Check the sqlite install/version that SqlAlchemy uses. It is often NOT the same as the basic operating system install invoked in your terminal. Sometimes that version does not have the functionality you need. 2. Check the transactional isolation level in sqlalchemy and that you are committing if needed. IIRC, the sqlite client defaults to non-transactional but the python library defaults to transactional. I could be wrong on this. Someone else may be able to look through your code and give more direct answers. On Saturday, August 7, 2021 at 11:19:48 PM UTC-4 RexE wrote: > On startup of my program, my in-memory sqlite DB needs to turn off foreign > key enforcement temporarily (in order to insert data from a different > sqlite DB). However, it seems my command to set foreign_keys back on has no > effect. See the attached MRE. > > I expect this output: > after turning back on [(1,)] > > But I get this: > after turning back on [(0,)] > > Interestingly, if I comment out the insert statement (or put it before the > toggle) the code works fine. > > Any ideas? I tried replicating this in the sqlite CLI but it works as I > expect: > > SQLite version 3.35.4 2021-04-02 15:20:15 > Enter ".help" for usage hints. > sqlite> pragma foreign_keys; > 0 > sqlite> pragma foreign_keys=on; > sqlite> pragma foreign_keys; > 1 > sqlite> create table groups (id primary key); > sqlite> pragma foreign_keys=off; > sqlite> pragma foreign_keys; > 0 > sqlite> insert into groups default values; > sqlite> pragma foreign_keys=on; > sqlite> pragma foreign_keys; > 1 > > I'm using SQLAlchemy==1.3.22. > > Thanks! > -- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. --- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. 
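One more thing worth knowing here: SQLite documents that `PRAGMA foreign_keys` is a no-op while a transaction is open, which is the usual reason the pragma "doesn't stick" under SQLAlchemy's transaction handling. The pattern recommended in SQLAlchemy's SQLite dialect docs is to set the pragma on each raw DBAPI connection as it is created, before any transaction begins:

```python
from sqlalchemy import create_engine, event

engine = create_engine("sqlite://")

@event.listens_for(engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
    # Runs once per new DBAPI connection, outside any transaction,
    # so the pragma reliably takes effect for that connection's lifetime.
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()

with engine.connect() as conn:
    print(conn.exec_driver_sql("PRAGMA foreign_keys").scalar())
```

For the original poster's temporary toggle, the same constraint applies in reverse: issue `PRAGMA foreign_keys=OFF` on a connection with no transaction in progress (and commit or roll back first if one is open), or the statement silently does nothing.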
Re: [sqlalchemy] Join multiple tables with association tables
*Is the relationship between Fact and Info meant to be many-to-many? And likewise the relationship between Text and Info?* You are right about that. Your code did exactly what I wanted. Thank you so much! I figured it could have to do somethin with a subquery but I'm just starting with sql so it is quiet difficult for me to understand. Your explanation really helped. Thanks again, Timo Simon King schrieb am Dienstag, 10. August 2021 um 11:13:32 UTC+2: > It's difficult to tell from your code what your intention is. Is the > relationship between Fact and Info meant to be many-to-many? And likewise > the relationship between Text and Info? > > Forgetting SQLAlchemy for a moment, what is the SQL that you want to > produce? > > > Does the script below do what you want? > > > import sqlalchemy as sa > import sqlalchemy.orm as saorm > from sqlalchemy.ext.declarative import declarative_base > > > Base = declarative_base() > > > facts_info = sa.Table( > "facts_info", > Base.metadata, > sa.Column( > "fact_id", sa.Integer, sa.ForeignKey("fact.id"), primary_key=True > ), > sa.Column( > "info_id", sa.Integer, sa.ForeignKey("info.id"), primary_key=True > ), > ) > > > info_text = sa.Table( > "info_text", > Base.metadata, > sa.Column( > "info_id", sa.Integer, sa.ForeignKey("info.id"), primary_key=True > ), > sa.Column( > "text_id", sa.Integer, sa.ForeignKey("text.id"), primary_key=True > ), > ) > > > class Fact(Base): > __tablename__ = "fact" > > id = sa.Column(sa.Integer, primary_key=True) > fact = sa.Column(sa.String(500), nullable=False, unique=True) > created_at = sa.Column(sa.DateTime) > updated_at = sa.Column(sa.DateTime) > > info = saorm.relationship( > "Info", secondary=facts_info, back_populates="facts" > ) > > > class Info(Base): > __tablename__ = "info" > > id = sa.Column(sa.Integer, primary_key=True) > filename = sa.Column(sa.String(50)) > format = sa.Column(sa.String(10)) > > facts = saorm.relationship( > "Fact", secondary=facts_info, back_populates="info" > ) > text 
= saorm.relationship( > "Text", secondary=info_text, back_populates="info" > ) > > > class Text(Base): > __tablename__ = "text" > > id = sa.Column(sa.Integer, primary_key=True) > text = sa.Column(sa.String(1000)) > > # Relationships > info = saorm.relationship( > "Info", secondary=info_text, back_populates="text" > ) > > > if __name__ == "__main__": > engine = sa.create_engine("sqlite://", echo=True) > Base.metadata.create_all(engine) > > Session = saorm.sessionmaker(bind=engine) > > session = Session() > > # two facts > facts = [Fact(fact="factone"), Fact(fact="facttwo")] > # three infos, first two are associated with both facts, third is > # only linked to second fact > infos = [ > Info(filename="infoone", facts=facts), > Info(filename="infotwo", facts=facts), > Info(filename="infothree", facts=facts[1:]), > ] > # three texts, first two linked to first info instance, third > # linked to third info instance > texts = [ > Text(text="textone", info=[infos[0]]), > Text(text="texttwo", info=[infos[0]]), > Text(text="textthree", info=[infos[2]]), > ] > session.add_all(facts + infos + texts) > session.flush() > > # Joining to both facts_info and info_text in the same query > # doesn't really make sense, because it would end up producing a > # cartesian product between those tables. Instead we'll use a > # subquery against facts_info to select the info ids we are > # interested in. > info_ids = ( > session.query(facts_info.c.info_id) > .filter(facts_info.c.fact_id == 1) > ) > query = ( > session.query(Info, Text) > .filter(Info.id.in_(info_ids)) > .join(Info.text) > ) > > # Note that this only outputs Info objects that have at least one > # text object associated with them. If you want to include Info > # objects without a related Text object, change the > # ".join(Info.text)" to ".outerjoin(Info.text)" >
[sqlalchemy] Join multiple tables with association tables
I am trying to figure out the correct join query setup within SQLAlchemy, but I can't seem to get my head around it. I have the following table setup (simplified, I left out the non-essential fields):

```python
"facts_info", Base.metadata,
sqlColumn("fact_id", Integer, ForeignKey("fact.id"), primary_key=True),
sqlColumn("info_id", Integer, ForeignKey("info.id"), primary_key=True),

class Facts(Base):
    __tablename__ = "facts"

    id = sqlColumn(Integer, primary_key=True)
    fact = sqlColumn(String(500), nullable=False, unique=True)
    created_at = sqlColumn(DateTime)
    updated_at = sqlColumn(DateTime)

    # Relationships
    info = relationship("Info", secondary=sachverhalt_info, back_populates="fact")

class Info(Base):
    __tablename__ = "info"

    id = sqlColumn(Integer, primary_key=True)
    filename = sqlColumn(String(50))
    format = sqlColumn(String(10))

    # Relationships
    fact = relationship("Facts", secondary=facts_info, back_populates="info")
    text = relationship("Text", secondary=facts_info, back_populates="info")

"info_text", Base.metadata,
sqlColumn("info_id", Integer, ForeignKey("info.id"), primary_key=True),
sqlColumn("text_id", Integer, ForeignKey("text.id"), primary_key=True)

class Text(Base):
    __tablename__ = "text"

    id = sqlColumn(Integer, primary_key=True)
    text = sqlColumn(String(1000))

    # Relationships
    info = relationship("Info", secondary=info_text, back_populates="text")
```

The facts are associated to info, info is associated to text. Text and facts aren't directly associated. I would like to join them all together, but can't figure out how to do so. In this example I would like to get all instances of "Info" that are associated to Fact.id = 1, and all "Text" instances that are associated to that "Info" instance. I came up with

```python
select(Info, Text).join(facts_info).join(Facts).join(info_text).join(Text).where(Facts.id == 1)
```

But it obviously gives me an error.
-- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. --- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/ac2d085d-beab-4793-b602-e4d2421e9b80n%40googlegroups.com.
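The mapping above can be joined across both association tables once the ON clauses are spelled out, so SQLAlchemy never has to guess the path from Facts through Info to Text. Below is a self-contained sketch in 1.4/2.0 style; the model is trimmed to the relevant columns and the sample data is invented for illustration:

```python
# Sketch: explicit ON clauses through both association tables.
# Models mirror the post (trimmed); the data rows are made up.
from sqlalchemy import (
    Column, ForeignKey, Integer, String, Table, create_engine, select,
)
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

facts_info = Table(
    "facts_info", Base.metadata,
    Column("fact_id", Integer, ForeignKey("facts.id"), primary_key=True),
    Column("info_id", Integer, ForeignKey("info.id"), primary_key=True),
)

info_text = Table(
    "info_text", Base.metadata,
    Column("info_id", Integer, ForeignKey("info.id"), primary_key=True),
    Column("text_id", Integer, ForeignKey("text.id"), primary_key=True),
)

class Facts(Base):
    __tablename__ = "facts"
    id = Column(Integer, primary_key=True)
    fact = Column(String(500), nullable=False, unique=True)
    info = relationship("Info", secondary=facts_info, back_populates="fact")

class Info(Base):
    __tablename__ = "info"
    id = Column(Integer, primary_key=True)
    filename = Column(String(50))
    fact = relationship("Facts", secondary=facts_info, back_populates="info")
    text = relationship("Text", secondary=info_text, back_populates="info")

class Text(Base):
    __tablename__ = "text"
    id = Column(Integer, primary_key=True)
    text = Column(String(1000))
    info = relationship("Info", secondary=info_text, back_populates="text")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    t = Text(text="some text")
    i = Info(filename="a.pdf", text=[t])
    session.add(Facts(fact="fact one", info=[i]))
    session.commit()

    # Info joined to Facts via facts_info, and to Text via info_text;
    # each join names its ON clause so the path is unambiguous.
    stmt = (
        select(Info, Text)
        .join(facts_info, facts_info.c.info_id == Info.id)
        .join(Facts, Facts.id == facts_info.c.fact_id)
        .join(info_text, info_text.c.info_id == Info.id)
        .join(Text, Text.id == info_text.c.text_id)
        .where(Facts.id == 1)
    )
    rows = session.execute(stmt).all()
```

Without the explicit ON clauses the chained .join() calls are ambiguous (facts_info and info_text both reference info.id), which is a likely source of the error in the original query.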
[sqlalchemy] Re: Oracle connection problem
You should ensure the connection string does not have any reserved/escape characters in it. People have had similar issues in the past. If that is the case, there are some recent threads in this group and on the github issues that show how to overcome the issue by building a connection string from components. Other than that, this could be an Oracle configuration issue? (see http://dba-oracle.com/t_ora_12514_tns_listener_does_not_currently_know_service_requested.htm) Is production configured correctly? On Wednesday, August 4, 2021 at 7:52:42 PM UTC-4 jca...@gmail.com wrote: > I am using sqlalchemy 1.4.22 and cx oracle 8.2.1 to connect to production > and development environments that each host a similar copy of the same > schema. > > The connection string that I use is the same for each excluding the > password: > > oracle+cx_oracle://user:pass@MyTNS > > Dev works without a problem, but prod throws an Oracle error: > > ORA-12514: TNS:listener does not currently know of service requested in > connect descriptor > > We can connect using sqlplus with the same credentials and tns id, any > idea why it may not work for one environment? > > Thanks, > jlc
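Building the URL from components, as suggested above, can be sketched with `sqlalchemy.engine.URL` (available in 1.4+); the credentials and TNS name below are placeholders:

```python
# Sketch: build the URL from components so reserved characters in the
# password need no manual escaping. Credentials/TNS name are placeholders.
from sqlalchemy.engine import URL

url = URL.create(
    drivername="oracle+cx_oracle",
    username="user",
    password="p@ss/word",   # special characters are safe here
    host="MyTNS",
)
```

The resulting URL object can be passed straight to create_engine() in place of a string.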
[sqlalchemy] Re: Testing and deprecation of nested transactions
I typically do local developer testing with sqlite3, and then switch the database to postgresql for build/deploy/CI testing in the cloud. For complex tests, I typically use a fresh database "image", e.g. a sqlite file or pg_dump output that is tracked in git. This is not the solution you're looking for, but I've found it very useful. I spent a long time working on a testing setup like you are trying to accomplish, but abandoned it when we built out an integrated test suite and data had to persist across multiple database connections. On Friday, July 30, 2021 at 4:19:35 AM UTC-4 dcab...@gmail.com wrote: > Hello everyone, > > I am working on a new project using SqlAlchemy Core 1.4 with Postgresql > and wanted to implement the following pattern for my tests: > > - Before each test I would start a transaction (in a > @pytest.fixture(autorun=True)) > - Each test may create its own transactions > - At the end of each test, I would rollback the transaction > > The purpose is to keep the database "clean" between tests and not have to > manually delete all inserted data. > > However, it seems that SqlAlchemy 1.4 is deprecating nested transactions > and that they will be removed in 2.0. > > Is there an alternative approach or best practice that I can use for > isolating tests in transactions? > > I had an alternative idea, like: > > - Before each test create the first savepoint (let's call current > savepoint N) > - Catch any commit in the code and instead create a savepoint N+1 > - Catch any rollback and rollback to N-1 > > Obviously, that seems like a lot of work and I'm not even sure if I can > intercept begins, commits and rollbacks that easily. > > Alternatively, I could run upgrade and downgrade migrations on every test, > but that would slow the test suite down a lot. > > Any advice and thoughts would be appreciated. > > Thanks! 
> Dejan
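The pattern the original poster describes (an outer transaction per test, rolled back at teardown) can be sketched as follows; this is a minimal demo against in-memory SQLite with an invented Thing model. On 1.4 the docs' "Joining a Session into an External Transaction" recipe adds SAVEPOINT handling so the test body may also call commit(); SQLAlchemy 2.0 later packaged that as `join_transaction_mode="create_savepoint"`.

```python
# Sketch: each "test" runs inside an outer transaction on a Connection
# that is rolled back at teardown, so nothing the test wrote persists.
# Thing/engine are invented fixtures for the demo.
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# per-test setup: one Connection, one outer transaction
connection = engine.connect()
outer = connection.begin()
session = Session(bind=connection)

# the test body writes through the session
session.add(Thing(name="temp"))
session.flush()

# per-test teardown: close the session, roll back the outer transaction
session.close()
outer.rollback()
connection.close()

# a fresh session sees none of the test's data
with Session(engine) as verify:
    leftover = verify.execute(select(Thing)).all()
```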
Re: [sqlalchemy] prevent (raise exceptions) on bytestring values for non-byte types
The second option looks perfect. Will try it! Thank you so much, Simon! On Friday, July 30, 2021 at 1:32:42 PM UTC-4 Simon King wrote: > I can think of a couple of options: > > 1. Create a TypeDecorator for String and Text columns that raises an > error if it sees a bytestring. This will only flag the error when the > session is flushed. > 2. Listen for mapper_configured events, iterate over the mapper > properties and add an "AttributeEvents.set" listener for each one. > This should flag the error when a bytestring is assigned to a mapped > attribute. > > Hope that helps, > > Simon > > On Fri, Jul 30, 2021 at 5:10 PM 'Jonathan Vanasco' via sqlalchemy > wrote: > > > > Mike, thanks for replying but go back to vacation. > > > > Anyone else: I am thinking more about an event that can be used to > catch, perhaps log, all bytes that go in. I only use a few column classes > that expect bytestrings, but many that do not. I've gotten every known bug > so far, but I'd like to make sure I'm not just lucky. > > > > On Thursday, July 29, 2021 at 6:05:03 PM UTC-4 Mike Bayer wrote: > >> > >> The Unicode datatype will emit a warning if you pass it a bytestring. > you can use that instead of String, or use a datatype with your own > assertions based on > https://docs.sqlalchemy.org/en/14/core/custom_types.html#coercing-encoded-strings-to-unicode > >> > >> > >> > >> On Thu, Jul 29, 2021, at 5:17 PM, 'Jonathan Vanasco' via sqlalchemy > wrote: > >> > >> I am finally at the tail end of migrating my largest (and hopefully > last) Python2 application to Python3. > >> > >> An issue that has popped up a lot during this transition, is when a py3 > bytestring gets submitted into SqlAlchemy. 
> >> > >> When that happens, it looks like SqlAlchemy just passes the value into > psycopg2, which wraps it in an object, and I get a psycopg exception that > bubbles up to SqlAlchemy: > >> > >> > sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) > operator does not exist: character varying = bytea > >> > LINE 3: WHERE foo = '\x626337323133... > >> > HINT: No operator matches the given name and argument type(s). You > might need to add explicit type casts. > >> > > >> > WHERE foo = %(foo)s > >> > LIMIT %(param_1)s] > >> > [parameters: {'foo': 0x10fe99060>, 'param_1': 1}] > >> > (Background on this error at: http://sqlalche.me/e/13/f405) > >> > >> Is there an easy way to catch this in SQLAlchemy *before* sending this > to the driver and executing it on the server? I'd like to ensure I'm > catching everything I should, and nothing is working just by-chance.
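Simon's first option (a TypeDecorator that rejects bytestrings at bind time) might look like the following minimal sketch; the StrictString name and the SQLite demo table are invented for illustration:

```python
# Sketch of option 1: a TypeDecorator around String that raises if a
# bytes value is bound, so stray py3 bytestrings fail loudly before
# reaching the driver. StrictString and the demo table are invented names.
from sqlalchemy import (
    Column, Integer, MetaData, String, Table, TypeDecorator,
    create_engine, insert,
)
from sqlalchemy.exc import StatementError

class StrictString(TypeDecorator):
    """String column type that refuses bytes at bind time."""
    impl = String
    cache_ok = True

    def process_bind_param(self, value, dialect):
        if isinstance(value, bytes):
            raise TypeError(f"bytes value {value!r} bound to a string column")
        return value

metadata = MetaData()
t = Table(
    "t", metadata,
    Column("id", Integer, primary_key=True),
    Column("x", StrictString(50)),
)
engine = create_engine("sqlite://")
metadata.create_all(engine)

raised = False
with engine.connect() as conn:
    conn.execute(insert(t).values(x="fine"))  # plain str passes
    try:
        conn.execute(insert(t).values(x=b"oops"))
    except (StatementError, TypeError):
        # SQLAlchemy wraps errors raised during bind processing
        raised = True
```

As Simon notes, this flags the error only at flush/execute time; the mapper_configured + AttributeEvents.set approach catches it at assignment instead.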
Re: [sqlalchemy] prevent (raise exceptions) on bytestring values for non-byte types
Mike, thanks for replying but go back to vacation. Anyone else: I am thinking more about an event that can be used to catch, perhaps log, all bytes that go in. I only use a few column classes that expect bytestrings, but many that do not. I've gotten every known bug so far, but I'd like to make sure I'm not just lucky. On Thursday, July 29, 2021 at 6:05:03 PM UTC-4 Mike Bayer wrote: > The Unicode datatype will emit a warning if you pass it a bytestring. you > can use that instead of String, or use a datatype with your own assertions > based on > https://docs.sqlalchemy.org/en/14/core/custom_types.html#coercing-encoded-strings-to-unicode > > > > On Thu, Jul 29, 2021, at 5:17 PM, 'Jonathan Vanasco' via sqlalchemy wrote: > > I am finally at the tail end of migrating my largest (and hopefully last) > Python2 application to Python3. > > An issue that has popped up a lot during this transition, is when a py3 > bytestring gets submitted into SqlAlchemy. > > When that happens, it looks like SqlAlchemy just passes the value into > psycopg2, which wraps it in an object, and I get a psycopg exception that > bubbles up to SqlAlchemy: > > >sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) > operator does not exist: character varying = bytea > >LINE 3: WHERE foo = '\x626337323133... > >HINT: No operator matches the given name and argument type(s). You > might need to add explicit type casts. > > > >WHERE foo = %(foo)s > >LIMIT %(param_1)s] > >[parameters: {'foo': 0x10fe99060>, 'param_1': 1}] > >(Background on this error at: http://sqlalche.me/e/13/f405) > > Is there an easy way to catch this in SQLAlchemy *before* sending this to > the driver and executing it on the server? I'd like to ensure I'm catching > everything I should, and nothing is working just by-chance. 
[sqlalchemy] prevent (raise exceptions) on bytestring values for non-byte types
I am finally at the tail end of migrating my largest (and hopefully last) Python2 application to Python3. An issue that has popped up a lot during this transition is when a py3 bytestring gets submitted into SqlAlchemy. When that happens, it looks like SqlAlchemy just passes the value into psycopg2, which wraps it in an object, and I get a psycopg exception that bubbles up to SqlAlchemy: >sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) operator does not exist: character varying = bytea >LINE 3: WHERE foo = '\x626337323133... >HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. > >WHERE foo = %(foo)s >LIMIT %(param_1)s] >[parameters: {'foo': , 'param_1': 1}] >(Background on this error at: http://sqlalche.me/e/13/f405) Is there an easy way to catch this in SQLAlchemy *before* sending this to the driver and executing it on the server? I'd like to ensure I'm catching everything I should, and nothing is working just by-chance.
[sqlalchemy] Re: sqlacodegen bug ?
Also, why are only public.* tables generated in my case, while non-public tables are not? On Thursday, June 15, 2017 at 4:30:45 AM UTC+1 Kevin Ernst wrote: > Hi Jean-Luc, > > I wish I'd seen your post earlier, I could've helped. :) I had this exact > same problem today, found your post by searching for "sqlacodegen table > class." > > Something in this semi-unrelated post > <https://groups.google.com/d/msg/sqlalchemy/uQ7MijlHW1Y/3xaA7vJ6BwAJ> by > Mike Bayer prompted me to check to see what was different about the tables > for which proper classes were generated and the others. I found that they > were missing a primary key; adding one and re-running sqlacodegen did > indeed solve my problem. For tables lacking primary keys, sqlacodegen > simply creates a Table variable, rather than a class as you would expect. > > Hopefully this can still help someone else in this situation, who comes > here searching for a resolution. > > Cheers, > Kevin > > On Tuesday, May 24, 2016 at 12:35:43 PM UTC-4, Jean-Luc Menut wrote: >> >> >> But some of them are converted as variables, such as : >>
[sqlalchemy] Re: checking in
> If not I wonder why messages aren't arriving in my INBOX. Check your settings for this group. If you do not see the option on the menu, try visiting https://groups.google.com/g/sqlalchemy/membership Google sometimes makes a product change that de-selects the email delivery option. Sometimes users de-select email delivery and forget about that too. On Monday, June 14, 2021 at 5:25:47 PM UTC-4 rshe...@appl-ecosys.com wrote: > I've not worked with SQLAlchemy for several years but now want to use it in > a couple of applications. I've not seen messages on this maillist for a > very > long time so I tried subscribing and learned that I'm still subscribed. > > Am I the only one on this list now? > > If not I wonder why messages aren't arriving in my INBOX. > > Rich >
Re: [sqlalchemy] Session and optimistic disconnect handling
On Tue, Jun 8, 2021 at 11:58 AM Mike Bayer wrote: > > Unknown network failures, I suppose. I have an application that is > throwing an exception right now due to: > > psycopg2.OperationalError: terminating connection due to administrator > command > SSL connection has been closed unexpectedly > > > right so if that happens on a connection that's been sitting in the pool > when you first go to use it, pre_ping will solve that. > Okay. That sounds pretty good. I've turned on logging (at the DEBUG level) per... https://docs.sqlalchemy.org/en/14/core/engines.html#dbengine-logging and I am trying to force an invalid connection by: while True: # read stdin o = SQLObject(something_from_stdin) session.add(o) session.commit() # Go to DB server and restart postgresql However I don't see any indication that the pre_ping invalidated a connection... # Here is the first commit from SA... INFO:sqlalchemy.engine.base.Engine:select version() INFO:sqlalchemy.engine.base.Engine:{} DEBUG:sqlalchemy.engine.base.Engine:Col ('version',) DEBUG:sqlalchemy.engine.base.Engine:Row ('PostgreSQL 12.4 (Debian 12.4-3) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.0-13) 10.2.0, 64-bit',) INFO:sqlalchemy.engine.base.Engine:select current_schema() INFO:sqlalchemy.engine.base.Engine:{} DEBUG:sqlalchemy.engine.base.Engine:Col ('current_schema',) DEBUG:sqlalchemy.engine.base.Engine:Row ('public',) INFO:sqlalchemy.engine.base.Engine:SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1 INFO:sqlalchemy.engine.base.Engine:{} INFO:sqlalchemy.engine.base.Engine:SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1 INFO:sqlalchemy.engine.base.Engine:{} INFO:sqlalchemy.engine.base.Engine:show standard_conforming_strings INFO:sqlalchemy.engine.base.Engine:{} DEBUG:sqlalchemy.engine.base.Engine:Col ('standard_conforming_strings',) DEBUG:sqlalchemy.engine.base.Engine:Row ('on',) INFO:sqlalchemy.engine.base.Engine:BEGIN (implicit) INFO:sqlalchemy.engine.base.Engine:INSERT INTO call_records 
(inserted_at, acct_code, vdn) VALUES (%(inserted_at)s, %(acct_code)s, %(vdn)s) RETURNING call_records.id INFO:sqlalchemy.engine.base.Engine:{'inserted_at': 'now', 'acct_code': 'yuy', 'vdn': 'tyt'} DEBUG:sqlalchemy.engine.base.Engine:Col ('id',) DEBUG:sqlalchemy.engine.base.Engine:Row (18,) INFO:sqlalchemy.engine.base.Engine:COMMIT # DB restarted and the second pass through the while loop... INFO:sqlalchemy.engine.base.Engine:BEGIN (implicit) INFO:sqlalchemy.engine.base.Engine:INSERT INTO call_records (inserted_at, acct_code, vdn) VALUES (%(inserted_at)s, %(acct_code)s, %(vdn)s) RETURNING call_records.id INFO:sqlalchemy.engine.base.Engine:{'inserted_at': 'now', 'acct_code': '909', 'vdn': '909'} DEBUG:sqlalchemy.engine.base.Engine:Col ('id',) DEBUG:sqlalchemy.engine.base.Engine:Row (19,) INFO:sqlalchemy.engine.base.Engine:COMMIT Should I be seeing something in the logs about an invalidated connection? Or am I not forcing an invalid connection correctly? Thanks for the feedback! -m
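One thing worth noting about the logs above: the DEBUG output shown comes from the `sqlalchemy.engine` logger, while pool-level events such as invalidations should appear under the separate `sqlalchemy.pool` logger. A minimal sketch of enabling both pre-ping and pool logging (SQLite stands in for PostgreSQL here):

```python
# Sketch: pool_pre_ping plus pool-level logging. Connection invalidation
# messages are emitted by the "sqlalchemy.pool" logger (or echo_pool),
# not "sqlalchemy.engine". SQLite URL is a stand-in for the real DB.
import logging
from sqlalchemy import create_engine

logging.basicConfig()
logging.getLogger("sqlalchemy.pool").setLevel(logging.DEBUG)

engine = create_engine("sqlite://", pool_pre_ping=True)

with engine.connect() as conn:
    value = conn.exec_driver_sql("select 1").scalar()
```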
Re: [sqlalchemy] Session and optimistic disconnect handling
On Tue, Jun 8, 2021 at 10:28 AM Mike Bayer wrote: > > > > however, I would advise using pool_pre_ping instead which is much easier > to use and has basically no downsides. This feature didn't exist when > the docs for "optimistic disconnect" were written. > > > Sure. I was only looking at doing the optimistic disconnect because it > seemed a little more resilient to failures (if a DB error happens mid > transaction) and because I felt I could control the number of retries and > put in an exponential backoff. > > Do you suggest I use the custom pessimistic ping code: > > > https://docs.sqlalchemy.org/en/14/core/pooling.html#custom-legacy-pessimistic-ping > > to add in exponential backoff or add additional retries? > > > what's the real-world use case where exponential backoff is useful? > Unknown network failures, I suppose. I have an application that is throwing an exception right now due to: psycopg2.OperationalError: terminating connection due to administrator command SSL connection has been closed unexpectedly I don't know exactly what is causing the failure. Our VM guy seems to think it might be a network issue. Either way, I feel the code should retry a few times just to see if the cause of the error goes away. > do you expect databases to be shut down for a number of minutes without > disabling the application ? > No. I don't know what the timing parameters are of my particular failure. Pre ping doesn't seem like it allows for any sort of length of time before failing. I don't know if this offers any more clarity to what I'm experiencing and what I'm trying to code around. Thanks again for the help and dialogue! -m
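The retry-with-exponential-backoff idea described above might be wrapped around a unit of work like this; `run_with_retry` is an invented helper name, and `DBAPIError.connection_invalidated` is the flag SQLAlchemy sets when the pool detects a dead connection:

```python
# Sketch: retry a unit of work with exponential backoff when the
# connection was invalidated mid-operation. run_with_retry is invented.
import time

from sqlalchemy import create_engine
from sqlalchemy.exc import DBAPIError
from sqlalchemy.orm import sessionmaker

def run_with_retry(session_factory, work, attempts=5, base_delay=0.5):
    """Run work(session) and commit, retrying on invalidated connections."""
    for attempt in range(attempts):
        session = session_factory()
        try:
            result = work(session)
            session.commit()
            return result
        except DBAPIError as err:
            session.rollback()
            # only retry when the connection actually died, and give up
            # after the final attempt
            if not err.connection_invalidated or attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
        finally:
            session.close()

# demo against in-memory SQLite: the work succeeds on the first attempt
engine = create_engine("sqlite://", pool_pre_ping=True)
factory = sessionmaker(bind=engine)
value = run_with_retry(factory, lambda session: 42)
```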
Re: [sqlalchemy] Session and optimistic disconnect handling
Hi Mike, Thanks for the reply! On Mon, Jun 7, 2021 at 6:08 PM Mike Bayer wrote: > ORM Sessions ride on top of connections, but since the Session under it's > default pattern of being bound to an Engine does the "connect" internally, > it's probably inconvenient to adapt the optimistic disconnect approach to > it. You would probably want to bind the Session to the Connection > explicitly. > I searched the SA docs, but could not find how to bind the Session to the Connection. > however, I would advise using pool_pre_ping instead which is much easier > to use and has basically no downsides. This feature didn't exist when > the docs for "optimistic disconnect" were written. > Sure. I was only looking at doing the optimistic disconnect because it seemed a little more resilient to failures (if a DB error happens mid transaction) and because I felt I could control the number of retries and put in an exponential backoff. Do you suggest I use the custom pessimistic ping code: https://docs.sqlalchemy.org/en/14/core/pooling.html#custom-legacy-pessimistic-ping to add in exponential backoff or add additional retries? Thank you again for the help! -m
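Binding the Session to a Connection, as Mike suggests, is done by passing the Connection as the `bind=` argument; a minimal sketch:

```python
# Sketch: bind the Session to a Connection you hold yourself, which is
# the shape the optimistic-disconnect recipe assumes (the caller owns
# the connect/retry step).
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("sqlite://", pool_pre_ping=True)

with engine.connect() as connection:
    with Session(bind=connection) as session:
        # the session now rides on exactly this connection
        bound_ok = session.get_bind() is connection
```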
[sqlalchemy] Session and optimistic disconnect handling
Greetings SQLAlchemy folks, I am following the guide at [0] for recovering from a database error in my SQLAlchemy code. I normally use sessions for my SA work and am wondering if sessions will work with the aforementioned SA example. My initial attempt to combine the example at [0] with sessions did not seem to work as expected. What do folks think? Should it work? Here is a code snippet of how I am creating sessions: connection_string = self.get_connection_string() engine = create_engine(connection_string) Session = sessionmaker(bind=engine) If folks believe it should work, then I'll formulate a minimal working example and post my error. Thank you! -m [0] https://docs.sqlalchemy.org/en/14/core/pooling.html#disconnect-handling-optimistic
Re: [sqlalchemy] correct usage of next_value for a sequence
On Tue, Apr 20, 2021 at 1:14 PM Mike Bayer wrote: > > > On Tue, Apr 20, 2021, at 1:52 PM, 'Matt Zagrabelny' via sqlalchemy wrote: > > Greetings SQLAlchemy, > > I'm attempting to use the next_value function to get the (next) value from > a sequence: > > cycle_counter = next_value(Sequence('cycle_seq')) > print(cycle_counter) > > However, the print statement yields: > > > > Does anyone know the correct way to get the value of a sequence? > > > > you should execute that with a connection: > > with engine.connect() as conn: > conn.scalar(seq.next_value()) > > > Thanks for the tip, Mike! Best, -m
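For reference, the reason print() showed an object rather than a number is that next_value() constructs a SQL expression; it only yields a value when executed on a connection, as Mike's reply shows. A small sketch of the SQL that expression compiles to on PostgreSQL:

```python
# Sketch: next_value() builds a SQL expression object (hence the print()
# output above); it returns a number only when executed on a connection.
from sqlalchemy import Sequence
from sqlalchemy.dialects import postgresql

seq = Sequence("cycle_seq")
expr = seq.next_value()          # an expression, not an integer
sql = str(expr.compile(dialect=postgresql.dialect()))
```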
[sqlalchemy] correct usage of next_value for a sequence
Greetings SQLAlchemy, I'm attempting to use the next_value function to get the (next) value from a sequence: cycle_counter = next_value(Sequence('cycle_seq')) print(cycle_counter) However, the print statement yields: Does anyone know the correct way to get the value of a sequence? Thanks for any help! -m
Re[4]: [sqlalchemy] Invertinace mapped type_id to fix value for each child class
Hi Simon, Again you really helped me out. I don't know what point I missed, but now it works. As usual it's not as simple, or let's say there are a lot more code pieces to change before I can really test it in my code. But I got it. Just one more thing: I often have to check if a given type is the class type. Therefore I usually use the static method. Thus what would you do: if test_type == ChildClass1().typ_id: or if test_type==ChildClass.TypID(): and to ensure only TypId exists for that type: __mapper_args__ = { "polymorphic_identity": ChildClass.TypID(), } And as I said: Thanks a lot! SirAnn -- Original message -- From: "Simon King" To: sqlalchemy@googlegroups.com Sent: 12.04.2021 20:26:48 Subject: Re: Re[2]: [sqlalchemy] Invertinace mapped type_id to fix value for each child class Here's a standalone working example: import sqlalchemy as sa import sqlalchemy.orm as saorm from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Objekt(Base): __tablename__ = "objekt" id = sa.Column(sa.Integer, primary_key=True) typ_id = sa.Column(sa.Integer, sa.ForeignKey("objekt_typ.id")) typ = saorm.relationship("ObjektTyp") name = sa.Column(sa.String(100)) __mapper_args__ = { "polymorphic_on": typ_id, } class ObjektTyp(Base): __tablename__ = "objekt_typ" id = sa.Column(sa.Integer, primary_key=True) name = sa.Column(sa.String(100)) class ChildObjekt1(Objekt): __tablename__ = "child_objekt1" id = sa.Column(sa.Integer, sa.ForeignKey(Objekt.id), primary_key=True) text = sa.Column(sa.String(255)) __mapper_args__ = { "polymorphic_identity": 1, } class ChildObjekt2(Objekt): __tablename__ = "child_objekt2" id = sa.Column(sa.Integer, sa.ForeignKey(Objekt.id), primary_key=True) text = sa.Column(sa.String(255)) __mapper_args__ = { "polymorphic_identity": 2, } if __name__ == "__main__": engine = sa.create_engine("sqlite://") Base.metadata.create_all(bind=engine) Session = saorm.sessionmaker(bind=engine) session = Session() child1type = ObjektTyp(id=1,
name="child1") child2type = ObjektTyp(id=2, name="child1") child1 = ChildObjekt1(text="child 1 text") child2 = ChildObjekt2(text="child 2 text") session.add_all([child1type, child2type, child1, child2]) session.flush() for obj in session.query(Objekt): print(obj) Simon On Mon, Apr 12, 2021 at 6:40 PM 'Sören Textor' via sqlalchemy wrote: class Objekt(db.Model): __tablename__ = 'objekt' def __init__(self,**kwargs): super().__init__(**kwargs) id = db.Column(db.Integer, primary_key=True) typ_id = db.Column(db.Integer, db.ForeignKey('objekt_typ.id')) typ= db.relationship("ObjektTyp") name = db.Column(db.String(100)) __mapper_args__ = { 'polymorphic_on': typ_id } class ChildObjekt1(Objekt): __versioned__ = {} __tablename__ = 'child_objekt1' @staticmethod def TypId(): return 7 # User fields def __init__(self,**kwargs): super().__init__(**kwargs) #super().__init__(typ_id=ChildObjekt1.TypId(), **kwargs) ### id db.Column(db.Integer, db.ForeignKey('objekt.id'), primary_key=True) text = db.Column(db.String(255 ), default='') __mapper_args__ = { 'polymorphic_identity': 7, } leads to: venv\lib\site-packages\sqlalchemy\orm\mapper.py", line 1542, in _configure_polymorphic_setter self.polymorphic_on = self._props[self.polymorphic_on] KeyError: 'typ_id' raise exception sqlalchemy.exc.ArgumentError: Can't determine polymorphic_on value 'typ_id' - no attribute is mapped to this name. maybe i do something totally worg.. I am also using sql continuum -- Originalnachricht -- Von: "Simon King" An: sqlalchemy@googlegroups.com Gesendet: 12.04.2021 19:06:11 Betreff: Re: [sqlalchemy] Invertinace mapped type_id to fix value for each child class >I don't understand this comment: > >> I though on polymorphic_on, but I think that does not work because of the fact that type_id ha a foreign key ... 
>As far as I can tell, you ought to have this in the base class:
>
>    __mapper_args__ = {
>        'polymorphic_on': typ_id
>    }
>
>And this in the subclass:
>
>    __mapper_args__ = {
>        'polymorphic_identity': 7,
>    }
>
>...and you should get rid of the typ_id function and the
>"Objekt.t
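[Editor's sketch, not from the thread] On SirAnn's follow-up question about where to keep the discriminator value: a plain class attribute can be the single source of truth instead of a `TypID()` static method. The `TYP_ID` name is invented for this sketch, and the `objekt_typ` foreign key is omitted for brevity:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Objekt(Base):
    __tablename__ = "objekt"
    id = sa.Column(sa.Integer, primary_key=True)
    typ_id = sa.Column(sa.Integer)
    __mapper_args__ = {"polymorphic_on": typ_id}

class ChildObjekt1(Objekt):
    __tablename__ = "child_objekt1"
    TYP_ID = 7  # declared once, reused below and in type checks
    id = sa.Column(sa.Integer, sa.ForeignKey(Objekt.id), primary_key=True)
    __mapper_args__ = {"polymorphic_identity": TYP_ID}

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(ChildObjekt1())  # the mapper fills in typ_id = 7
session.commit()

obj = session.query(Objekt).one()
print(obj.typ_id == ChildObjekt1.TYP_ID)
```

With this layout, `test_type == ChildObjekt1.TYP_ID` needs no instance and no method call, and the mapper and the check can never disagree.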
Re[2]: [sqlalchemy] Inheritance mapped type_id to fix value for each child class
    class Objekt(db.Model):
        __tablename__ = 'objekt'

        def __init__(self, **kwargs):
            super().__init__(**kwargs)

        id = db.Column(db.Integer, primary_key=True)
        typ_id = db.Column(db.Integer, db.ForeignKey('objekt_typ.id'))
        typ = db.relationship("ObjektTyp")
        name = db.Column(db.String(100))
        __mapper_args__ = {
            'polymorphic_on': typ_id
        }

    class ChildObjekt1(Objekt):
        __versioned__ = {}
        __tablename__ = 'child_objekt1'

        @staticmethod
        def TypId():
            return 7

        # User fields
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            # super().__init__(typ_id=ChildObjekt1.TypId(), **kwargs)

        id = db.Column(db.Integer, db.ForeignKey('objekt.id'), primary_key=True)
        text = db.Column(db.String(255), default='')
        __mapper_args__ = {
            'polymorphic_identity': 7,
        }

leads to:

    venv\lib\site-packages\sqlalchemy\orm\mapper.py", line 1542, in _configure_polymorphic_setter
        self.polymorphic_on = self._props[self.polymorphic_on]
    KeyError: 'typ_id'

    raise exception
    sqlalchemy.exc.ArgumentError: Can't determine polymorphic_on value 'typ_id' - no attribute is mapped to this name.

Maybe I do something totally wrong... I am also using SQLAlchemy-Continuum.

-- Originalnachricht --
Von: "Simon King"
An: sqlalchemy@googlegroups.com
Gesendet: 12.04.2021 19:06:11
Betreff: Re: [sqlalchemy] Inheritance mapped type_id to fix value for each child class

I don't understand this comment:

    I thought of polymorphic_on, but I think that does not work because of the fact that type_id has a foreign key ...

As far as I can tell, you ought to have this in the base class:

    __mapper_args__ = {
        'polymorphic_on': typ_id
    }

And this in the subclass:

    __mapper_args__ = {
        'polymorphic_identity': 7,
    }

...and you should get rid of the typ_id function and the "Objekt.typ_id = ChildClass.typ_id" line. Does that work for you?

Simon

On Mon, Apr 12, 2021 at 5:18 PM 'Sören Textor' via sqlalchemy wrote:

I've run into a problem and don't know how to solve it. The theory is very simple: I have one base class table with name, id and type columns. Each child class shall have a unique type_id (all child_class1 objects shall get type_id 7, all child_class2 objects type_id 8, ...). How can I map the base class typ_id to a hard-coded value for each class type? My current approach does not change the type_id column of Objekt, and after saving the objekt the Objekt.type_id entry is always empty for all entries :-(

    class Objekt(db.Model):
        __tablename__ = 'objekt'

        def __init__(self, **kwargs):
            super().__init__(**kwargs)

        id = db.Column(db.Integer, primary_key=True)
        typ_id = db.Column(db.Integer, db.ForeignKey('objekt_typ.id'))
        typ = db.relationship("ObjektTyp")
        name = db.Column(db.String(100))

    class ChildClass1(Objekt):
        __tablename__ = 'child_class1'

        @staticmethod
        def typ_id():
            return 7

        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            Objekt.typ_id = ChildClass1.typ_id()  # fix type

        id = db.Column(db.Integer, db.ForeignKey('objekt.id'), primary_key=True)
        text = db.Column(db.String(255), default='')
        __mapper_args__ = {
            'polymorphic_identity': 'child_class1',
        }

Any ideas where to look? I thought of polymorphic_on, but I think that does not work because type_id has a foreign key ...

SirAnn

--
SQLAlchemy -
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description.
---
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/ema56ad245-cad9-4096-8c55-9d75e8d52ea2%40textors-01.
[sqlalchemy] Inheritance mapped type_id to fix value for each child class
I've run into a problem and don't know how to solve it. The theory is very simple: I have one base class table with name, id and type columns. Each child class shall have a unique type_id (all child_class1 objects shall get type_id 7, all child_class2 objects type_id 8, ...). How can I map the base class typ_id to a hard-coded value for each class type? My current approach does not change the type_id column of Objekt, and after saving the objekt the Objekt.type_id entry is always empty for all entries :-(

    class Objekt(db.Model):
        __tablename__ = 'objekt'

        def __init__(self, **kwargs):
            super().__init__(**kwargs)

        id = db.Column(db.Integer, primary_key=True)
        typ_id = db.Column(db.Integer, db.ForeignKey('objekt_typ.id'))
        typ = db.relationship("ObjektTyp")
        name = db.Column(db.String(100))

    class ChildClass1(Objekt):
        __tablename__ = 'child_class1'

        @staticmethod
        def typ_id():
            return 7

        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            Objekt.typ_id = ChildClass1.typ_id()  # fix type

        id = db.Column(db.Integer, db.ForeignKey('objekt.id'), primary_key=True)
        text = db.Column(db.String(255), default='')
        __mapper_args__ = {
            'polymorphic_identity': 'child_class1',
        }

Any ideas where to look? I thought of polymorphic_on, but I think that does not work because type_id has a foreign key ...

Right now I get an error:

    c = ChildClass1()
    db.session.add(c)
    db.session.commit()
    c.typ
    -> UnmappedColumnError('No column objekt.typ_id is configured on mapper mapped class ChildClass1->child_class1...')

SirAnn
[sqlalchemy] Inheritance mapped type_id to fix value for each child class
I've run into a problem and don't know how to solve it. The theory is very simple: I have one base class table with name, id and type columns. Each child class shall have a unique type_id (all child_class1 objects shall get type_id 7, all child_class2 objects type_id 8, ...). How can I map the base class typ_id to a hard-coded value for each class type? My current approach does not change the type_id column of Objekt, and after saving the objekt the Objekt.type_id entry is always empty for all entries :-(

    class Objekt(db.Model):
        __tablename__ = 'objekt'

        def __init__(self, **kwargs):
            super().__init__(**kwargs)

        id = db.Column(db.Integer, primary_key=True)
        typ_id = db.Column(db.Integer, db.ForeignKey('objekt_typ.id'))
        typ = db.relationship("ObjektTyp")
        name = db.Column(db.String(100))

    class ChildClass1(Objekt):
        __tablename__ = 'child_class1'

        @staticmethod
        def typ_id():
            return 7

        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            Objekt.typ_id = ChildClass1.typ_id()  # fix type

        id = db.Column(db.Integer, db.ForeignKey('objekt.id'), primary_key=True)
        text = db.Column(db.String(255), default='')
        __mapper_args__ = {
            'polymorphic_identity': 'child_class1',
        }

Any ideas where to look? I thought of polymorphic_on, but I think that does not work because type_id has a foreign key ...

SirAnn
[sqlalchemy] Re: (cx_Oracle.DatabaseError) ORA-00972
Try passing a small number to `label_length` in your `create_engine`. Something like `label_length=5` might work. I typically use 4-6 on Production/Staging servers, and no argument on Development.

* https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine.params.label_length

I don't have Oracle, so I am not sure if this fixes your exact problem or just related ones. `label_length` will limit the length of the aliases that SQLAlchemy generates, so you would see something like this:

    - SELECT very_long_table_name_i_mean_it_is_long.id AS very_long_table_name_i_mean_it_is_long_id, very_long_table_name_i_mean_it_is_long.foo_bar_biz_bang_bash_boom_bomb_bing_bong_foo AS very_long_table_name_i_mean_it_is_long_foo_bar_biz_bang_bash_boom_bomb_bing_bong_foo FROM very_long_table_name_i_mean_it_is_long LIMIT ? OFFSET ?
    + SELECT very_long_table_name_i_mean_it_is_long.id AS _1, very_long_table_name_i_mean_it_is_long.foo_bar_biz_bang_bash_boom_bomb_bing_bong_foo AS _2 FROM very_long_table_name_i_mean_it_is_long LIMIT ? OFFSET ?

If the exception is caused by the generated alias (notice the underscore separator) `very_long_table_name_i_mean_it_is_long_foo_bar_biz_bang_bash_boom_bomb_bing_bong_foo` being too long, that will solve your problem. But if the exception is caused by the table/column addressing itself (notice the dot separator) "very_long_table_name_i_mean_it_is_long.foo_bar_biz_bang_bash_boom_bomb_bing_bong_foo", then I don't know the remedy.

On Monday, March 22, 2021 at 8:28:10 AM UTC-4 durand...@gmail.com wrote:
> Hello,
>
> SqlAchemy automatically specify the table name in front of columns and
> thus my query parameters are too long and I get the
> "(cx_Oracle.DatabaseError) ORA-00972" error on Oracle. For example if my
> table name is "TABLE_NAME_TOO_LONG" and my columns are "id" and "name" a
> request will look like this:
>
> SELECT "TABLE_NAME_TOO_LONG".id, "TABLE_NAME_TOO_LONG".name FROM
> "TABLE_NAME_TOO_LONG" where ...
> I could use alias for select request in order to bypass this issue if I
> understand well (
> https://www.tutorialspoint.com/sqlalchemy/sqlalchemy_core_using_aliases.htm
> ).
>
> However for an insert I cannot find any solution.
>
> Is there a way to set an alias to a table name for an insert ? or remove
> the table name ?
>
> Best regards ;)
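[Editor's sketch] The `label_length` suggestion above can be demonstrated without Oracle; the long table and column names below are invented, and the truncation behavior itself is dialect-independent (run here against SQLite):

```python
import re
import sqlalchemy as sa

metadata = sa.MetaData()
t = sa.Table(
    "very_long_table_name_i_mean_it_is_long",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("foo_bar_biz_bang_bash_boom_bomb_bing_bong_foo", sa.String(50)),
)

# cap generated column labels at 10 characters
engine = sa.create_engine("sqlite://", label_length=10)

stmt = sa.select(t).set_label_style(sa.LABEL_STYLE_TABLENAME_PLUS_COL)
compiled = str(stmt.compile(engine))
print(compiled)

# every generated "AS <label>" now fits in 10 characters
labels = re.findall(r"AS (\w+)", compiled)
assert labels and all(len(label) <= 10 for label in labels)
```

Note this only shortens the generated *labels*; the table-qualified column references (`table.column`) in the FROM/WHERE parts are unaffected, matching the caveat above.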
Re: [sqlalchemy] Injecting User info into _history table to track who performed the change
Going beyond what Simon did: I typically make a table like `user_transaction`, which has all of the relevant information for the transaction:

* User ID
* Timestamp
* Remote IP

Using the sqlalchemy hooks, I'll then do something like:

* update the object table with the user_transaction id, or
* use an association table that tracks a user_transaction_id to an object id and version

FYI, Simon -- as of a few weeks ago, that pattern is now part of the pyramid sqlalchemy starter template!

On Monday, March 15, 2021 at 6:46:02 AM UTC-4 Simon King wrote:
> I use pyramid as a web framework, and when I create the DB session for
> each request, I add a reference to the current request object to the
> DB session. The session object has an "info" attribute which is
> intended for application-specific things like this:
>
> https://docs.sqlalchemy.org/en/13/orm/session_api.html#sqlalchemy.orm.session.Session.info
>
> Then, in the before_flush event handler, I retrieve the request object
> from session.info, and then I can add whatever request-specific info I
> want to the DB.
>
> Simon
>
> On Sun, Mar 14, 2021 at 4:05 PM JPLaverdure wrote:
> >
> > Hi Elmer,
> >
> > Thanks for your reply !
> > My issue is not with obtaining the info I want to inject (the logged in
> > user's email), I already have that all ready to go :)
> >
> > My whole database is versioned using the history_meta.py example from SQLAlchemy
> > https://docs.sqlalchemy.org/en/13/_modules/examples/versioned_history/history_meta.html
> >
> > I was hoping for a simple way to inject the user info into the _history
> > row creation steps.
> >
> > The SQLAlchemy example makes use of this event listener:
> >
> >     def versioned_session(session):
> >
> >         @event.listens_for(session, "before_flush")
> >         def before_flush(session, flush_context, instances):
> >             for obj in versioned_objects(session.dirty):
> >                 create_version(obj, session)
> >             for obj in versioned_objects(session.deleted):
> >                 create_version(obj, session, deleted=True)
> >
> > So I'm tempted to follow the same strategy and just override this
> > listener to supplement it with the user info, but I'm wondering how to pass
> > non-SQLAlchemy info into its execution context...
> >
> > So basically, I have the info I want to inject, I'm just not sure how to
> > pass it to SQLAlchemy
> >
> > Thanks,
> >
> > JP
> >
> > On Friday, March 12, 2021 at 6:55:19 p.m. UTC-5 elmer@gmail.com wrote:
> >>
> >> Hi JP,
> >>
> >> Depending on how you've implemented your history tracking, that routine
> >> is quite far removed from your web framework and getting a neat, clean way
> >> of dealing with that might not be within reach.
> >>
> >> However, most web frameworks have some concept of a threadlocal request
> >> (or function to retrieve it), which you could invoke and if such a request
> >> exists, you could use that to load whatever user identity you have
> >> available on there (again, the details differ, but this tends to be a
> >> shared feature). From there you can store the user either as a foreign key,
> >> or a unique identifier like email. Which one you pick would depend on how
> >> you want the history to be affected when you delete a user record for
> >> example.
> >>
> >> On Fri, Mar 12, 2021 at 11:58 PM JPLaverdure wrote:
> >>>
> >>> Hello everyone,
> >>>
> >>> We already have the ability to timestamp the creation of the history
> >>> row, but it would also be interesting to be able to track the user
> >>> responsible for the content update.
> >>> I would like to get suggestions on the best way to achieve this.
> >>>
> >>> I realize this is somewhat outside the scope of sqlalchemy as the
> >>> notion of a "logged in user" is more closely related to the context of the
> >>> app/webapp using SQLAlchemy as its ORM but maybe other people would benefit
> >>> from having a way to inject arbitrary data in the history table.
> >>>
> >>> Ideally, I would like the insert in the _history table to be atomic,
> >>> so I feel like hooking an update statement to an event might not be the way
> >>> to go.
> >>> I'm tempted to modify the signature of before_flush but I'm not sure
> >>> where it gets called.
> >>>
> >>> Any help is welcome !
> >>> Thanks
> >>>
> >>> JP
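[Editor's sketch] Simon's approach condensed into a runnable example. The `Note` model and the `user_email` key are invented for the demo; the real point is the combination of `Session.info` (set when the request's session is created) and the `before_flush` hook (which reads it back):

```python
import sqlalchemy as sa
from sqlalchemy import event, orm

Base = orm.declarative_base()

class Note(Base):
    __tablename__ = "note"
    id = sa.Column(sa.Integer, primary_key=True)
    text = sa.Column(sa.String(200))
    updated_by = sa.Column(sa.String(120))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = orm.sessionmaker(bind=engine)

@event.listens_for(Session, "before_flush")
def stamp_user(session, flush_context, instances):
    # pull the "current user" out of the per-session info dict
    user = session.info.get("user_email")
    for obj in list(session.new) + list(session.dirty):
        if isinstance(obj, Note):
            obj.updated_by = user

# the web framework would pass info={...} when it creates the request's session
session = Session(info={"user_email": "alice@example.com"})
session.add(Note(text="hello"))
session.commit()
print(session.query(Note).one().updated_by)  # prints alice@example.com
```

Because the stamping happens inside the same flush, the user info lands in the same transaction as the data change, which addresses the atomicity concern above.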
Re: [sqlalchemy] Supporting Function Indexes on a Minimum Sqlite Version
Thank you so much, Mike! I roughly had that same @compiles in my tests, but I didn't trust myself... and the .dbapi looked like what I wanted, but I really wasn't sure! On Monday, March 8, 2021 at 4:36:03 PM UTC-5 Mike Bayer wrote: > > > On Mon, Mar 8, 2021, at 12:06 PM, 'Jonathan Vanasco' via sqlalchemy wrote: > > I have a project that, in a few rare situations, may run on a version of > sqlite that does not support function indexes, and "need" to run a unique > index on `lower(name)`. For simplicity, I'll just use a normal index on > correct systems, > > I'm trying to figure out the best way to implement this. > > 1. in terms of sqlite3, what is the best way to access the version > Sqlalchemy is using? the import is in a classmethod, and could either be > pysqlite2 or sqlite3? i seriously doubt anyone would deploy with > pysqlite2, but I feel like I should do things the right way. > > > you'd get this from the dbapi: > > >>> from sqlalchemy import create_engine > >>> e = create_engine("sqlite://") > >>> e.dialect.dbapi.sqlite_version > '3.34.1' > > > > > > 2. What is the best way to implement this contextual switch? I thought > about a `expression.FunctionElement` with custom `@compiles`. 
> yeah that is probably the best approach
>
>     from sqlalchemy import String
>     from sqlalchemy.sql import expression
>     from sqlalchemy.ext.compiler import compiles
>
>     class maybe_lower(expression.FunctionElement):
>         type = String()
>
>     @compiles(maybe_lower, 'sqlite')
>     def sl_maybe_lower(element, compiler, **kw):
>         args = list(element.clauses)
>         if compiler.dialect.dbapi_version < ...:
>             return "LOWER(%s)" % (compiler.process(args[0], **kw))
>         else:
>             return compiler.process(args[0], **kw)
>
>     @compiles(maybe_lower)
>     def default_maybe_lower(element, compiler, **kw):
>         args = list(element.clauses)
>         return compiler.process(args[0], **kw)
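[Editor's sketch] Mike's outline fleshed out so it runs end to end. The version gate is an assumption: the `...` in his sketch is filled with `(3, 9, 0)` here on the understanding that SQLite 3.9.0 introduced expression indexes — adjust for your actual minimum:

```python
import sqlalchemy as sa
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql import expression

class maybe_lower(expression.FunctionElement):
    type = sa.String()
    name = "maybe_lower"
    inherit_cache = True

@compiles(maybe_lower, "sqlite")
def sqlite_maybe_lower(element, compiler, **kw):
    arg = list(element.clauses)[0]
    # dialect.dbapi is the sqlite3 (or pysqlite2) module itself, so the
    # underlying library version is available as sqlite_version_info
    if compiler.dialect.dbapi.sqlite_version_info >= (3, 9, 0):
        return "LOWER(%s)" % compiler.process(arg, **kw)
    return compiler.process(arg, **kw)

@compiles(maybe_lower)
def default_maybe_lower(element, compiler, **kw):
    # other backends: pass the expression through unwrapped
    return compiler.process(list(element.clauses)[0], **kw)

users = sa.table("users", sa.column("name"))
engine = sa.create_engine("sqlite://")
compiled = str(sa.select(maybe_lower(users.c.name)).compile(engine))
print(compiled)
```

On any modern sqlite3 build the compiled SQL wraps the column in `LOWER(...)`; on an older one it would compile to the bare column.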
[sqlalchemy] Supporting Function Indexes on a Minimum Sqlite Version
I have a project that, in a few rare situations, may run on a version of sqlite that does not support function indexes, and "needs" to run a unique index on `lower(name)`. For simplicity, I'll just use a normal index on correct systems.

I'm trying to figure out the best way to implement this.

1. In terms of sqlite3, what is the best way to access the version SQLAlchemy is using? The import is in a classmethod, and could be either pysqlite2 or sqlite3. I seriously doubt anyone would deploy with pysqlite2, but I feel like I should do things the right way.

2. What is the best way to implement this contextual switch? I thought about an `expression.FunctionElement` with a custom `@compiles`.
Re: [sqlalchemy] relationship query_class in SQLAlchemy 1.4.0b3
"is it better to think of rebuilding medium+ projects for 2.0 while maintaining existing codebases for 1.3? In other words, how much will 2.0 be backward compatible with 1.3?"

I am saying the following as a general user, not as a past contributor to this project:

As per the Release Status system (https://www.sqlalchemy.org/download.html#relstatus), when 1.4 becomes the official "Current Release", 1.3 will drop to "Maintenance" status. I believe we can expect that, when 2.0 becomes the "Current Release", 1.4 will drop to "Maintenance" and 1.3 will drop to "EOL".

IMHO, while I might prioritize some migration work based on the size of a project, if any given project is expected to be undergoing active development or be deployed in 2022 and beyond, they should start planning for the "2.0" style migration in their sprints. I can't stress this enough: my metric would be active use and active development, not the size of the codebase.

Personally, I would prioritize adapting projects to deploy on 1.4 as the ASAP first step -- there are a few small backwards incompatibilities between 1.4 and 1.3. I still run everything on 1.3, but we test and develop against 1.4 -- using comments/docstrings to note what changes will be required in 1.4 -- or "switch" blocks so CI can run against both versions.

I strongly recommend doing all new work in the 2.0 style, and start scheduling the 2.0 migration into sprints. Building anything against 1.3 right now is really doing nothing but assuming technical debt, and it's going to be much easier (and much less work!) planning for this change now. I would not want to be in a situation where one or more projects require an EOL version, and there are critical features/bugfixes in the newer branch. You're likely to get a good chunk of time out of 1.4, but I would not target 1.3 at this point.
On Monday, March 1, 2021 at 9:45:55 AM UTC-5 aa@gmail.com wrote:

>> yes so, SQLAlchemy 2.0's approach is frankly at odds with the spirit of
>> Flask-SQLAlchemy. The Query and "dynamic" loaders are staying around
>> largely so that Flask can come on board, however the patterns in F-S are
>> pretty much the ones I want to get away from.
>>
>> 2.0's spirit is one where the act of creating a SELECT statement is a
>> standalone thing that is separate from being attached to any specific class
>> (really all of SQLAlchemy was like this, but F-S has everyone doing the
>> Model.query thing that I've always found to be more misleading than
>> helpful), but SELECT statements are now also disconnected from any kind of
>> "engine" or "Session" when constructed.
>>
>> as for with_parent(), with_parent is what the dynamic loader actually uses
>> to create the query. so this is a matter of code organization.
>> F-S would have you say:
>>
>>     user = User.query.filter_by(name='name').first()
>>     address = user.addresses.filter_by(email='email').first()
>>
>> noting above, there's no "Session" anywhere. where is it? Here's a
>> Hacker News comment lamenting the real world implications of this:
>> https://news.ycombinator.com/item?id=26183936
>>
>> SQLAlchemy 2.0 would have you say instead:
>>
>>     with Session(engine) as session:
>>         user = session.execute(
>>             select(User).filter_by(name='name')
>>         ).scalars().first()
>>
>>         address = session.execute(
>>             select(Address).where(
>>                 with_parent(user, Address.user)
>>             ).filter_by(email='email')
>>         ).scalars().first()
>>
>> Noting above, a web framework integration may still wish to provide the
>> "session" to data-oriented methods and manage its scope, but IMO it should
>> be an explicit object passed around. The database connection / transaction
>> shouldn't be made to appear to be inside the ORM model object, since that's
>> not what's actually going on.
>
> The newer design indeed provides a clearer view of the session.
>> If you look at any commentary anywhere about SQLAlchemy, the top
>> complaints are:
>>
>> 1. too magical, too implicit
>>
>> 2. what's wrong with just writing SQL?
>>
>> SQLAlchemy 2.0 seeks to streamline the act of ORMing such that the user
>> *is* writing SQL, they're running it into an execute() method, and they are
>> managing the scope of connectivity and transactions in an obvious way.
>> People don't necessarily want bloat and verbosity but they do wan
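[Editor's sketch] The two styles discussed above, side by side, as a runnable example (SQLite and a toy `User` model invented for the demo). Both queries go through one explicit `Session`; the 2.0 style builds the statement standalone and hands it to `Session.execute()`:

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class User(Base):
    __tablename__ = "user"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(50))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with orm.Session(engine) as session:
    session.add(User(name="name"))
    session.commit()

with orm.Session(engine) as session:
    # legacy 1.x style: query built from the session
    legacy = session.query(User).filter_by(name="name").first()

    # 2.0 style: statement built standalone, then executed on the session
    stmt = sa.select(User).filter_by(name="name")
    modern = session.execute(stmt).scalars().first()

    assert legacy is modern  # same identity-mapped object either way
```

Note where the Session lives in each case: in the 2.0 form it is always an explicit object in view, which is exactly the design point being argued above.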
Re: [sqlalchemy] Batching INSERT statements
I'm not familiar with this exactly, but have a bit of experience in this area. I just took a look at this module (nice work!). It's VERY well documented in the docstrings (even nicer work!)

I think the core bit of this technique is in `_get_next_sequence_values` - https://github.com/benchling/sqlalchemy_batch_inserts/blob/master/sqlalchemy_batch_inserts/__init__.py#L51-L83

Vineet is obtaining the ids by running the SQL generate_series function over the nextval function. When I've done large batches and migrations like this, I've used a somewhat dirty cheat/trick: assuming a window of 1000 inserts, I would just increment the serial by 1000 and use "new number - 1000" as the range for IDs. That is somewhat closer to the "max id" concept. Vineet's approach is better.

In terms of dealing with multiple foreign key constraints, pre-assigning IDs may or may not work depending on how your database constraints exist. As a habit, I always create (or re-create) Postgres foreign key checks as deferrable. When dealing with batches, I (i) defer all the involved constraints [which can be on other tables!], (ii) process the batch, (iii) set constraints to immediate. If the migration is LARGE, I'll drop all the indexes on the tables too, and possibly drop the constraints as well and run multiple workers. This gets around the overheads from every insert populating rows+indexes, and the FKEY integrity checks on every row.
I was intrigued by the following phrase: > > > * P.S. execute_values in psycopg2 v2.8 supports getting returned values > back, so it’s possible that SQLAlchemy may support batch inserting these > models (with an auto-incrementing primary key) in the future. > > @Mike @Vineet, do you know if this is the case, ie if bulk insert now > works out of the box (without pre-assigning incremented IDs)? > > 3. Does this imply any change in case of bulk insert of multiple models > with foreign keys referring to each other? This answer > <https://stackoverflow.com/a/36387887/11750716> seems to suggest > pre-assigning IDs for it to work. > On Friday, February 21, 2020 at 3:49:54 PM UTC+1 Mike Bayer wrote: > >> Hi Vineet - >> >> glad that worked! I'll have to find some time to recall what we worked >> out here and how it came out for you, I wonder where on the site this kind >> of thing could be mentioned.we have 3rd party dialects listed out in >> the docs but not yet a place for extensions. >> >> On Wed, Feb 19, 2020, at 9:28 PM, Vineet Gopal wrote: >> >> Hi Mike, >> >> Thanks for all of your help getting this working again. We've used this >> solution in production for two years now, and it's helped our performance >> significantly. >> >> We just open-sourced the solution that we built so others can use it, and >> are also writing a blog post to cover some of the details. I'm attaching a >> copy of the blog post here. Obviously not expected, but if you are >> interested in taking a look, we are happy to incorporate any comments that >> you may have before publishing. 
>> Here's a link to the repo as well:
>> https://github.com/benchling/sqlalchemy_batch_inserts
>>
>> Best,
>> Vineet
>>
>> On Mon, Oct 9, 2017 at 10:27 PM wrote:
>>
>> if you're using Postgresql, there's a vastly easier technique to use
>> which is just to pre-fetch from the sequence:
>>
>>     identities = [
>>         val for val, in session.execute(
>>             "select nextval('mytable_seq') from "
>>             "generate_series(1,%s)" % len(my_objects)
>>         )
>>     ]
>>     for ident, obj in zip(identities, my_objects):
>>         obj.pk = ident
>>
>> Wow, that's a great idea! I got it working for most of our models. I have
>> some questions about how inserts for joined-table inheritance tables are
>> batched together, but I'll ask them in a separate post since they're
>> somewhat unrelated to this.
>>
>> So the complexity of adding multi-values insert
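[Editor's sketch] The pre-fetch-and-assign pattern from Mike's 2017 message, runnable end to end. Since SQLite has no sequences, the `nextval`/`generate_series` fetch is shown only as a comment and a plain counter stands in for the reserved ID block; `Item` is an invented model:

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(50))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

objs = [Item(name="item%d" % i) for i in range(5)]

with orm.Session(engine) as session:
    # On PostgreSQL you would reserve real values from the sequence:
    #   identities = [v for v, in session.execute(sa.text(
    #       "select nextval('item_id_seq') "
    #       "from generate_series(1, :n)"), {"n": len(objs)})]
    # SQLite demo: fake a reserved block of ids instead.
    identities = list(range(1, len(objs) + 1))
    for ident, obj in zip(identities, objs):
        obj.id = ident  # pre-assigned PK: no per-row RETURNING round trip needed
    session.add_all(objs)
    session.commit()
```

As the thread describes, having the primary keys in hand before the flush is what lets the ORM batch the INSERT statements instead of executing them one at a time to fetch each new id.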
[sqlalchemy] Re: Relationship with 2 intermediary tables
This is, IMHO, one of the most complex parts of SQLAlchemy. In this public project, I have a handful of secondary/secondaryjoin examples that may help you: https://github.com/aptise/peter_sslers/blob/main/peter_sslers/model/objects.py#L3778-L4714

There is a section in the docs that should help a bit: https://docs.sqlalchemy.org/en/14/orm/join_conditions.html#composite-secondary-joins

I think you want something like:

    Person.photos = relationship(
        Photo,
        primaryjoin="""Person.id == PersonInstance.person_id""",
        secondary="""join(PersonInstance, PhotoInstance, PersonInstance.id == PhotoInstance.person_instance_id).join(Photo, PhotoInstance.photo_id == Photo.id)""",
    )

I don't think the secondaryjoin is needed in this case. I could be wrong.

The way I like to structure these complex joins is something like this...

    A.Zs = relationship(
        Z,  # the destination
        primaryjoin="""A.id == B.id""",  # only the first association table
        secondary="""join(B.id == C.id).join(C.id == D.id)...(X.id == Y.id)""",  # bring the rest of the tables in
        secondaryjoin="""and_(Y.id == Z.id, Z.id.in_(subselect))""",  # custom filtering/join conditions
    )

Does that make sense? Mike has another way of explaining it in the docs, but this is how I best remember and implement it.
> Here is my schema:
>
> class Person(Base):
>     __tablename__ = "person"
>     id = Column(Integer, primary_key=True)
>
> class PersonInstance(Base):
>     __tablename__ = "person_instance"
>     id = Column(Integer, primary_key=True)
>     person_id = Column(Integer, ForeignKey("person.id"))
>
> class Photo(Base):
>     __tablename__ = "photo"
>     id = Column(Integer, primary_key=True)
>
> class PhotoInstance(Base):
>     __tablename__ = "photo_instance"
>     id = Column(Integer, primary_key=True)
>     photo_id = Column(Integer, ForeignKey("photo.id"))
>     person_instance_id = Column(Integer, ForeignKey("person_instance.id"))
>
> I want to create a one-to-many relationship *Person.photos* which goes
> from Person -> Photo. A Person is one-to-many with PersonInstance, and a
> Photo is one-to-many with PhotoInstance objects. The connection from a
> Person to a Photo exists between PersonInstance and PhotoInstance, via the
> PhotoInstance.person_instance_id foreign key.
>
> First I tried using only primaryjoin:
>
> photos = relationship(
>     "Photo",
>     primaryjoin=(
>         "and_(Person.id==PersonInstance.person_id, "
>         "PersonInstance.id==PhotoInstance.person_instance_id, "
>         "PhotoInstance.photo_id==Photo.id)"
>     )
> )
>
> I got an error saying it couldn't find the necessary foreign keys to
> compute the join.
>
> So now I'm messing with secondary + secondaryjoin, but it's really trial &
> error as I don't know how these arguments are supposed to work in my case.

-- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. --- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/ff490d21-4e96-492a-a8ca-f953d1dd3e2fn%40googlegroups.com.
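Putting the suggestion above together, here is a runnable sketch (an assumption-laden illustration, not the thread author's exact code: SQLAlchemy 1.4+, SQLite, and `viewonly=True` added because such a composite secondary is not writable). It follows the composite-secondary pattern from the linked docs, with the join of the two *Instance tables acting as the association:

```python
# Sketch: Person -> Photo through PersonInstance JOIN PhotoInstance.
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)

class PersonInstance(Base):
    __tablename__ = "person_instance"
    id = Column(Integer, primary_key=True)
    person_id = Column(Integer, ForeignKey("person.id"))

class Photo(Base):
    __tablename__ = "photo"
    id = Column(Integer, primary_key=True)

class PhotoInstance(Base):
    __tablename__ = "photo_instance"
    id = Column(Integer, primary_key=True)
    photo_id = Column(Integer, ForeignKey("photo.id"))
    person_instance_id = Column(Integer, ForeignKey("person_instance.id"))

# the "secondary" is the join of the two instance tables; primaryjoin links
# Person to it, secondaryjoin links it to Photo
Person.photos = relationship(
    Photo,
    primaryjoin="Person.id == PersonInstance.person_id",
    secondary="join(PersonInstance, PhotoInstance, "
              "PersonInstance.id == PhotoInstance.person_instance_id)",
    secondaryjoin="PhotoInstance.photo_id == Photo.id",
    viewonly=True,
)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add_all([
        Person(id=1),
        PersonInstance(id=100, person_id=1),
        Photo(id=10),
        PhotoInstance(id=200, photo_id=10, person_instance_id=100),
    ])
    s.commit()
    photo_ids = [p.id for p in s.get(Person, 1).photos]
```

With the single linked PhotoInstance above, `photo_ids` comes back containing photo 10.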
Re: [sqlalchemy] SQLAlchemy transaction ID
FWIW, within the realm of pyramid_tm, the more common use-cases for two-phase transaction support are sending mail and dealing with task queues - not two separate databases.

On Wednesday, January 27, 2021 at 2:40:21 PM UTC-5 Mike Bayer wrote:
> On Wed, Jan 27, 2021, at 2:23 PM, Thierry Florac wrote:
> Hi,
> I'm actually using two database connections: one is my "main" connection,
> opened on a ZODB (with RelStorage), and **sometimes** I have to open
> another connection on another database (and even more, sometimes); the two
> transactions have to be synchronized: if one of them is aborted for any
> reason, both transactions have to be aborted.
>
> OK, then two-phase it is.
>
> I have always thought that the two-phase transaction was created to handle
> this kind of use case, but if there is another better solution, I would be
> very happy to know about it!
>
> if you need the ORM to call prepare() then you need the XID and there you
> are.
>
> This is all stuff that I think outside of the Zope community (but still in
> Python) you don't really see much of. If someone's Flask app is writing to
> Postgresql and MongoDB they're just going to spew data out to mongo and not
> really worry about it, but that's because mongo doesn't have any 2pc
> support. It's just not that commonly used because we get basically
> nobody asking about it.
>
> @jonathan, I made a patch to Pyramid DebugToolbar that I pushed to Github
> and made a pull request. But I don't know how to provide a test case as
> two-phase commit is not supported by SQLite...
> I'll try anyway to provide a description of a "method" I use to reproduce
> this!
>
> So interesting fact, it looks like you are using Oracle for 2pc, that's
> what that tuple is, and we currently aren't including Oracle 2pc in our
> test support as cx_Oracle no longer includes the "twophase" flag which I
> think we needed for some of our more elaborate tests.
At the moment, > create_xid() emits a deprecation warning. I've been in contact with Oracle > devs and it looks like we should be supporting 2pc as I can get help from > them now for things that aren't working. I've opened > https://github.com/sqlalchemy/sqlalchemy/issues/5884 to look into this. > you should have been seeing a deprecation warning in your logs all this > time though. > > > > > > Best regards, > Thierry > -- > https://www.ulthar.net -- http://pyams.readthedocs.io > > > Le mer. 27 janv. 2021 à 19:19, Mike Bayer a > écrit : > > > > > On Wed, Jan 27, 2021, at 8:32 AM, Thierry Florac wrote: > > > Hi, > I'm actually using SQLAlchemy with Pyramid and zope.sqlalchemy packages. > My main database connection is a ZODB connection and, when required, I > create an SQLAlchemy session which is joined to main transaction using this > kind of code: > > * from *sqlalchemy.orm *import *scoped_session, sessionmaker > > * from *zope.sqlalchemy *import *register > * from *zope.sqlalchemy.datamanager *import* join_transaction > > _engine = get_engine(*engine*, *use_pool*) > if *use_zope_extension*: > factory = scoped_session(sessionmaker(*bind*=_engine, *twophase*= > *True*)) > else: > factory = sessionmaker(*bind*=_engine, *twophase*=*True*) > session = factory() > if *use_zope_extension*: > register(session, *initial_state*=*STATUS_ACTIVE*) > if *join*: > join_transaction(session, *initial_state*=*STATUS_ACTIVE*) > > Everything is working correctly! > > So my only question is that I also use Pyramid_debugtoolbar package, which > is tracking many SQLAlchemy events, including two-phase commits > transactions, and which in this context receives transaction IDs as a three > values tuple instead of a simple string (like, for example: (4660, > '12345678901234567890123456789012', '0009'), > which is raising an exception)! > Is it normal behaviour, and what does this value mean? 
> I would ask if you really really want to use the "twophase=True" flag, and
> I would suggest turning it off if you aren't in fact coordinating against
> multiple RDBMS backends (and even if you are, maybe). I'm not really sure
> what that tuple is, I'd have to look, but it seems likely to be related to
> the XID stuff, which is really not something anyone uses these days.
>
> Best regards,
> Thierry
>
> --
> https://www.ulthar.net -- http://pyams.readthedocs.io
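For readers unfamiliar with the flag under discussion, here is a construction-only sketch of a two-phase Session (my illustration, not code from the thread; SQLite engines stand in for real backends purely so the snippet runs — actual two-phase commit needs a backend with prepared-transaction support, which SQLite lacks, as noted above):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# placeholder engines; a real deployment would point these at two
# 2pc-capable databases (e.g. PostgreSQL's PREPARE TRANSACTION)
engine_a = create_engine("sqlite://")
engine_b = create_engine("sqlite://")

# with twophase=True, Session.commit() flushes to every bound backend,
# calls prepare() on each transaction, and only then emits the final
# commits, so the participating databases succeed or abort together
factory = sessionmaker(twophase=True)
session = factory(binds={})  # map each Base / Table to engine_a or engine_b here
```

The `binds` dictionary (empty in this sketch) is where each declarative base or table would be routed to its engine.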
Re: [sqlalchemy] FAQ or Feature Ideas for ORM Object and Session
Ok. I'll generate a docs PR for sqlalchemy and pyramid. This comes up so much.

On Wednesday, January 27, 2021 at 2:25:29 PM UTC-5 Mike Bayer wrote:
> On Wed, Jan 27, 2021, at 1:12 PM, 'Jonathan Vanasco' via sqlalchemy wrote:
>
> I've been working with a handful of SQLAlchemy and Pyramid based projects
> recently, and two situations have repeatedly come up:
>
> 1. Given a SQLAlchemy object, access the SQLAlchemy Session
> 2. Given a SQLAlchemy object or Session, access the Pyramid Request object
>
> The general solutions I've used to handle these are:
>
> 1. An object can use the runtime inspection API to grab its active
> session:
>
> from sqlalchemy import inspect
>
> @property
> def _active_session(self):
>     dbSession = inspect(self).session
>     return dbSession
>
> There's a much older function sqlalchemy.orm.object_session() that also
> does this. I prefer giving people the inspect() interface because I'd
> rather expose the first-class API and not confuse things. but
> object_session() isn't going away.
>
> 2. Attach the Pyramid request to the session_factory when a session is
> created:
>
> def get_tm_session(request):
>     dbSession = session_factory()
>     zope.sqlalchemy.register(dbSession, transaction_manager=transaction_manager, keep_session=True)
>     if request is not None:
>         def _cleanup(request):
>             dbSession.close()
>         request.add_finished_callback(_cleanup)
>         # cache our request onto the dbsession
>         dbSession.pyramid_request = request
>     return dbSession
>
> I've needed to implement these patterns in a lot of projects. This makes
> me wonder if there is/could be a better way.
>
> That request would be better placed in session.info, which is the official
> dictionary for third-party things to go in.
>
> 1. Would it be beneficial if ORM objects could surface the current
> Session, if any, as a documented property? I do this in my base classes,
> but with the overhead of the inspect system, and I repeat this in every
> project.
>
> as a property?
no, we can't do that. we try to add zero "names" to the
> class of any kind. there's "_sa_instance_state" and
> "_sa_instrumentation_manager" and that's as far as we go; adding absolutely
> zero to the namespace of the mapped class is a fundamental rule of the
> ORM.
>
> 2. Would it be better if the sessionmaker had any of the following?
>
> a. An official namespace where developers could attach information.
> I'm using `pyramid_request` because I doubt SQLAlchemy will ever step on
> that - but it would be nice if there were a dedicated
> attribute/object/namespace on the Session
>
> session.info:
>
> https://docs.sqlalchemy.org/en/13/orm/session_api.html?highlight=session%20info#sqlalchemy.orm.session.Session.info
>
> b. `sqlalchemy.orm.session.Session()` could accept a
> dict/payload/object/whatever on init, which would be attached to a single
> session in the aforementioned dedicated namespace.
>
> Session.info: :)
>
> https://docs.sqlalchemy.org/en/13/orm/session_api.html?highlight=session%20info#sqlalchemy.orm.session.Session.params.info
>
> The usage would be something like:
>
> sess = Session(customized={"request": request})
>
> which might then be accessed as:
>
> sess.customized.request
>
> poof! it's done
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/sqlalchemy/b5031f10-c2c8-4065-b968-3a55f2bf6daen%40googlegroups.com
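The `Session.info` pattern Mike points to above can be sketched in a few lines (the `"request"` key is an application convention of this sketch, not an SQLAlchemy name):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")

# Session.info is the dictionary SQLAlchemy reserves for application /
# third-party data; anything passed via info= on construction lands there
fake_request = object()  # stand-in for a Pyramid request object
session = Session(engine, info={"request": fake_request})

assert session.info["request"] is fake_request
```

The same `info` argument is also accepted by `sessionmaker()`, so a per-request dictionary can be supplied at session-creation time.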
[sqlalchemy] FAQ or Feature Ideas for ORM Object and Session
I've been working with a handful of SQLAlchemy and Pyramid based projects recently, and two situations have repeatedly come up:

1. Given a SQLAlchemy object, access the SQLAlchemy Session
2. Given a SQLAlchemy object or Session, access the Pyramid Request object

The general solutions I've used to handle these are:

1. An object can use the runtime inspection API to grab its active session:

from sqlalchemy import inspect

@property
def _active_session(self):
    dbSession = inspect(self).session
    return dbSession

2. Attach the Pyramid request to the session_factory when a session is created:

def get_tm_session(request):
    dbSession = session_factory()
    zope.sqlalchemy.register(dbSession, transaction_manager=transaction_manager, keep_session=True)
    if request is not None:
        def _cleanup(request):
            dbSession.close()
        request.add_finished_callback(_cleanup)
        # cache our request onto the dbsession
        dbSession.pyramid_request = request
    return dbSession

I've needed to implement these patterns in a lot of projects. This makes me wonder if there is/could be a better way.

1. Would it be beneficial if ORM objects could surface the current Session, if any, as a documented property? I do this in my base classes, but with the overhead of the inspect system, and I repeat this in every project.

2. Would it be better if the sessionmaker had any of the following?

a. An official namespace where developers could attach information. I'm using `pyramid_request` because I doubt SQLAlchemy will ever step on that - but it would be nice if there were a dedicated attribute/object/namespace on the Session

b. `sqlalchemy.orm.session.Session()` could accept a dict/payload/object/whatever on init, which would be attached to a single session in the aforementioned dedicated namespace.
The usage would be something like:

sess = Session(customized={"request": request})

which might then be accessed as:

sess.customized.request
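The runtime-inspection pattern in point 1 can be shown end-to-end; a minimal sketch with a hypothetical `Widget` model (assuming SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, Integer, create_engine, inspect
from sqlalchemy.orm import Session, declarative_base, object_session

Base = declarative_base()

class Widget(Base):  # hypothetical model, for illustration only
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
w = Widget()
session.add(w)

# the inspection API and the older helper both return the owning Session
assert inspect(w).session is session
assert object_session(w) is session
```

Both calls answer "which Session owns this object?" without the mapped class needing any extra attribute of its own.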
[sqlalchemy] Re: SQLAlchemy transaction ID
Thierry,

Would you mind putting together a test case on this? I haven't experienced that before, and I authored that feature in the debugtoolbar. If I can recreate it, I'll put together a fix and work with the pyramid team to get a new release out asap.

On Wednesday, January 27, 2021 at 8:32:34 AM UTC-5 tfl...@gmail.com wrote:
> Hi,
> I'm actually using SQLAlchemy with Pyramid and zope.sqlalchemy packages.
> My main database connection is a ZODB connection and, when required, I
> create an SQLAlchemy session which is joined to the main transaction using
> this kind of code:
>
> from sqlalchemy.orm import scoped_session, sessionmaker
>
> from zope.sqlalchemy import register
> from zope.sqlalchemy.datamanager import join_transaction
>
> _engine = get_engine(engine, use_pool)
> if use_zope_extension:
>     factory = scoped_session(sessionmaker(bind=_engine, twophase=True))
> else:
>     factory = sessionmaker(bind=_engine, twophase=True)
> session = factory()
> if use_zope_extension:
>     register(session, initial_state=STATUS_ACTIVE)
> if join:
>     join_transaction(session, initial_state=STATUS_ACTIVE)
>
> Everything is working correctly!
>
> So my only question is that I also use the Pyramid_debugtoolbar package,
> which is tracking many SQLAlchemy events, including two-phase commit
> transactions, and which in this context receives transaction IDs as a
> three-value tuple instead of a simple string (like, for example: (4660,
> '12345678901234567890123456789012', '0009'), which is raising an
> exception)!
> Is it normal behaviour, and what does this value mean?
>
> Best regards,
> Thierry
>
> --
> https://www.ulthar.net -- http://pyams.readthedocs.io
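For what it's worth, a three-element xid like the one shown matches the DBAPI two-phase-commit extension's `(format_id, global_transaction_id, branch_qualifier)` triple. A hypothetical normalizer (not part of SQLAlchemy or pyramid_debugtoolbar; just a sketch of how a toolbar-style consumer could tolerate both shapes):

```python
def format_xid(xid):
    """Render a transaction id that may arrive either as a plain string
    or as a DBAPI TPC xid tuple (format_id, gtrid, bqual)."""
    if isinstance(xid, tuple) and len(xid) == 3:
        format_id, gtrid, bqual = xid
        return "%s:%s:%s" % (format_id, gtrid, bqual)
    return str(xid)
```

For example, `format_xid((4660, '12345678901234567890123456789012', '0009'))` flattens the tuple into one display string, while a plain-string id passes through unchanged.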
Re: [sqlalchemy] Implementing Role-Based Access Control (RBAC) in SQLAlchemy with oso
Thanks for sharing this. Will take a good look at it; I've been looking for something like this.

There's a broken link on the text "guide to roles patterns <https://docs.osohq.com/getting-started/rbac.html#resource-specific-roles>".

On Thu, 7 Jan 2021, 18:15 Stephie Glaser, wrote:
> Hi all, we've been working towards building Role-Based Access Control
> (RBAC) features into our libraries at oso. We had released a preview of
> those features in our sqlalchemy-oso package, and since then have polished
> those features up, written some docs, and are excited to share a sample app
> showcasing our new out-of-the-box roles features!
>
> Link to Introducing Built-in Roles with oso.
> <https://www.osohq.com/post/introducing-builtin-roles> It covers how to
> structure Role-Based Access Control (RBAC) and how we ship roles
> out-of-the-box for SQLAlchemy. Plus feature designs, broader thinking on
> roles, and the sample app we use to validate and showcase the
> sqlalchemy-oso library.
Re: [sqlalchemy] One to One relation problem [re-open?]
Ah, I see. Thus this was a newbie question. Thanks again!

Mike Bayer wrote on Fri, Dec 18, 2020 at 19:44:

> hey there -
>
> you can assign the "id" but that doesn't give SQLAlchemy any clue that you
> are working with the "daughter" relationship, so it doesn't know to
> deassociate m2.daughter. You'll note that relationally there's no issue, as
> from a foreign key perspective Mama->daughter is many-to-one. Some FAQ on
> this here:
> https://docs.sqlalchemy.org/en/13/faq/sessions.html#i-set-the-foo-id-attribute-on-my-instance-to-7-but-the-foo-attribute-is-still-none-shouldn-t-it-have-loaded-foo-with-id-7
>
> Usually if you were modelling "Mama->Daughter", you'd have a
> Daughter.mama_id foreign key, since Parent->Child is typically one-to-many,
> not many-to-one.
>
> On Fri, Dec 18, 2020, at 12:50 PM, 'Sören Textor' via sqlalchemy wrote:
>
> This example fails. Instead of assigning an object, I assign just the
> daughter's id ... But I think that's "correct"?
>
> from sqlalchemy import Column
> from sqlalchemy import create_engine
> from sqlalchemy import ForeignKey
> from sqlalchemy import Integer
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy.orm import relationship
> from sqlalchemy.orm import Session
>
> Base = declarative_base()
>
> class Super(Base):
>     __tablename__ = "super"
>     id = Column(Integer, primary_key=True)
>
> class Mama(Super):
>     __tablename__ = "mama"
>     id = Column(Integer, ForeignKey("super.id"), primary_key=True)
>     daughter_id = Column(Integer, ForeignKey("daughter.id"))
>     daughter = relationship("Daughter", foreign_keys=[daughter_id], back_populates="mama")
>
> class Daughter(Super):
>     __tablename__ = "daughter"
>     id = Column(Integer, ForeignKey("super.id"), primary_key=True)
>     mama = relationship("Mama", foreign_keys=[Mama.daughter_id], uselist=False, back_populates="daughter")
>
> e = create_engine("sqlite://", echo=True)
> Base.metadata.create_all(e)
>
> session = Session(e)
>
> m1 =
Mama()
> m2 = Mama()
> d1 = Daughter()
> d2 = Daughter()
>
> session.add(m1)
> session.add(m2)
> session.add(d1)
> session.add(d2)
> session.commit()
>
> m1.daughter = d1
> m2.daughter = d2
> session.commit()
>
> m1.daughter_id = d2.id  # instead of m1.daughter = d2
> session.commit()
>
> assert m1.daughter is d2
> assert m2.daughter is None  # FAILS
> assert m2.daughter_id is None  # FAILS
>
> SirAnn
>
> -- Original message --
> From: "Sören Textor"
> To: "mike...@zzzcomputing.com"
> Sent: 18.12.2020 16:52:35
> Subject: Re: [sqlalchemy] One to One relation problem
>
> Hi Mike.
> Thanks for answering. I'll check it out on Monday.
> We use MSSQL2016 and Flask. That's the only difference I see at first
> look.
>
> I'll send a detailed answer to the group afterwards. Without the super
> class it also works fine. That's why I thought it is an issue with foreign
> keys.
>
> We are running the newest 1.3.x of SQLAlchemy.
>
> Mike Bayer wrote on Fri, Dec 18, 2020 at 16:31:
>
> hey there -
>
> these mappings are pretty good; as is always the case, I cannot predict why
> an issue is occurring, or usually even understand the issue, without
> running the code. your code is pretty runnable with a few imports added, so
> that's great. however, adding an assertion for the condition you describe,
> "m2.daughter is not None", is not reproducible on my end. Try out the
> script below and see if you have different results.
>
> from sqlalchemy import Column
> from sqlalchemy import create_engine
> from sqlalchemy import ForeignKey
> from sqlalchemy import Integer
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy.orm import relationship
> from sqlalchemy.orm import Session
>
> Base = declarative_base()
>
> class Super(Base):
>     __tablename__ = "super"
>     id = Column(Integer, primary_key=True)
>
> class Mama(Super):
>     __tablename__ = "mama"
>     id = Column(Integer, ForeignKey("super.id"), primary_key=True)
>     daughter_id = Column(Integer, ForeignKey("daughter.id"))
>     daughter = relationship(
>         "Daughter", foreign_keys=[daughter_id], back_populates="mama"
>     )
>
> class Daughter(Super):
>     __tablename__ = "daughter"
>     id
Re: [sqlalchemy] One to One relation problem [re-open?]
This example fails. Instead of assigning an object, I assign just the daughter's id ... But I think that's "correct"?

from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session

Base = declarative_base()

class Super(Base):
    __tablename__ = "super"
    id = Column(Integer, primary_key=True)

class Mama(Super):
    __tablename__ = "mama"
    id = Column(Integer, ForeignKey("super.id"), primary_key=True)
    daughter_id = Column(Integer, ForeignKey("daughter.id"))
    daughter = relationship("Daughter", foreign_keys=[daughter_id], back_populates="mama")

class Daughter(Super):
    __tablename__ = "daughter"
    id = Column(Integer, ForeignKey("super.id"), primary_key=True)
    mama = relationship("Mama", foreign_keys=[Mama.daughter_id], uselist=False, back_populates="daughter")

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

session = Session(e)

m1 = Mama()
m2 = Mama()
d1 = Daughter()
d2 = Daughter()

session.add(m1)
session.add(m2)
session.add(d1)
session.add(d2)
session.commit()

m1.daughter = d1
m2.daughter = d2
session.commit()

m1.daughter_id = d2.id  # instead of m1.daughter = d2
session.commit()

assert m1.daughter is d2
assert m2.daughter is None  # FAILS
assert m2.daughter_id is None  # FAILS

SirAnn

-- Original message --
From: "Sören Textor"
To: "mike...@zzzcomputing.com"
Sent: 18.12.2020 16:52:35
Subject: Re: [sqlalchemy] One to One relation problem

Hi Mike.
Thanks for answering. I'll check it out on Monday.
We use MSSQL2016 and Flask. That's the only difference I see at first look.

I'll send a detailed answer to the group afterwards. Without the super class it also works fine. That's why I thought it is an issue with foreign keys.

We are running the newest 1.3.x of SQLAlchemy.

Mike Bayer wrote on Fri, Dec 18, 2020 at 16:31:

hey there -

these mappings are pretty good; as is always the case, I cannot predict why an issue is occurring, or usually even understand the issue, without running the code. your code is pretty runnable with a few imports added, so that's great. however, adding an assertion for the condition you describe, "m2.daughter is not None", is not reproducible on my end. Try out the script below and see if you have different results.

from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session

Base = declarative_base()

class Super(Base):
    __tablename__ = "super"
    id = Column(Integer, primary_key=True)

class Mama(Super):
    __tablename__ = "mama"
    id = Column(Integer, ForeignKey("super.id"), primary_key=True)
    daughter_id = Column(Integer, ForeignKey("daughter.id"))
    daughter = relationship(
        "Daughter", foreign_keys=[daughter_id], back_populates="mama"
    )

class Daughter(Super):
    __tablename__ = "daughter"
    id = Column(Integer, ForeignKey("super.id"), primary_key=True)
    mama = relationship(
        "Mama", foreign_keys=[Mama.daughter_id], uselist=False,
        back_populates="daughter",
    )

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

session = Session(e)

m1 = Mama()
m2 = Mama()
d1 = Daughter()
d2 = Daughter()

session.add(m1)
session.add(m2)
session.add(d1)
session.add(d2)
session.commit()

m1.daughter = d1
m2.daughter = d2
session.commit()

m1.daughter = d2
session.commit()

assert m2.daughter is None

On Fri, Dec 18, 2020, at 2:01 AM, 'Sören Textor' via sqlalchemy wrote:

Hello, I have a huge problem with a specific "one to one" relation.
Working (it's the tutorial code):

class Son(db.Model):
    __tablename__ = 'son'
    id = db.Column(db.Integer, primary_key=True)
    papa_id = db.Column(db.Integer, db.ForeignKey('papa.id'))
    papa = db.relationship("Papa", foreign_keys=[papa_id], back_populates="son")

class Papa(db.Model):
    __tablename__ = 'papa'
    id = db.Column(db.Integer, primary_key=True)
    son = db.relationship("Son", uselist=False, back_populates="papa")

main:

p1 = Papa()
p2 = Papa()
s1 = Son()
s2 = Son()
db.session.add(p1)
db.session.add(p2)
db.session.add(s1)
db.session.add(s2)
db.session.commit()
p1.son = s1
p2.son = s2
db.session
[sqlalchemy] Re: One to One relation problem [solved]
Hi Mike,

Thanks for looking at my code. Next time I'll post a test case like yours. Sorry for that one.

And I cannot believe it, but it works now. I also updated SQLAlchemy, flask-RESTful, flask-migrate and so on to their newest versions, and now it seems to work. The problem before was

    assert m2.daughter_id is None

This failed. But I tried and tried and tried in our production code instead of assembling a small test case like yours. Bad idea. And I really have NO idea why it works now. But I think I'll stop here for today and have fun at the weekend instead of thinking, thinking, thinking about that!!

Many thanks!

Mike Bayer wrote on Fri, Dec 18, 2020 at 16:31:

hey there -

these mappings are pretty good; as is always the case, I cannot predict why an issue is occurring, or usually even understand the issue, without running the code. your code is pretty runnable with a few imports added, so that's great. however, adding an assertion for the condition you describe, "m2.daughter is not None", is not reproducible on my end. Try out the script below and see if you have different results.
from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session

Base = declarative_base()

class Super(Base):
    __tablename__ = "super"
    id = Column(Integer, primary_key=True)

class Mama(Super):
    __tablename__ = "mama"
    id = Column(Integer, ForeignKey("super.id"), primary_key=True)
    daughter_id = Column(Integer, ForeignKey("daughter.id"))
    daughter = relationship(
        "Daughter", foreign_keys=[daughter_id], back_populates="mama"
    )

class Daughter(Super):
    __tablename__ = "daughter"
    id = Column(Integer, ForeignKey("super.id"), primary_key=True)
    mama = relationship(
        "Mama", foreign_keys=[Mama.daughter_id], uselist=False,
        back_populates="daughter",
    )

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

session = Session(e)

m1 = Mama()
m2 = Mama()
d1 = Daughter()
d2 = Daughter()

session.add(m1)
session.add(m2)
session.add(d1)
session.add(d2)
session.commit()

m1.daughter = d1
m2.daughter = d2
session.commit()

m1.daughter = d2
session.commit()

assert m2.daughter is None

On Fri, Dec 18, 2020, at 2:01 AM, 'Sören Textor' via sqlalchemy wrote:

Hello, I have a huge problem with a specific "one to one" relation.

Working (it's the tutorial code):

class Son(db.Model):
    __tablename__ = 'son'
    id = db.Column(db.Integer, primary_key=True)
    papa_id = db.Column(db.Integer, db.ForeignKey('papa.id'))
    papa = db.relationship("Papa", foreign_keys=[papa_id], back_populates="son")

class Papa(db.Model):
    __tablename__ = 'papa'
    id = db.Column(db.Integer, primary_key=True)
    son = db.relationship("Son", uselist=False, back_populates="papa")

main:

p1 = Papa()
p2 = Papa()
s1 = Son()
s2 = Son()
db.session.add(p1)
db.session.add(p2)
db.session.add(s1)
db.session.add(s2)
db.session.commit()
p1.son = s1
p2.son = s2
db.session.commit()
p1.son = s2
db.session.commit()

Works like a charm.
Afterwards every relation is correct.

My code (I have to use a super class, that's the only difference):

class Super(db.Model):
    __tablename__ = 'super'
    id = db.Column(db.Integer, primary_key=True)

class Mama(Super):
    __tablename__ = 'mama'
    id = db.Column(db.Integer, db.ForeignKey('super.id'), primary_key=True)
    daughter_id = db.Column(db.Integer, db.ForeignKey('daughter.id'))
    daughter = db.relationship("Daughter", foreign_keys=[daughter_id], back_populates="mama")

class Daughter(Super):
    __tablename__ = 'daughter'
    id = db.Column(db.Integer, db.ForeignKey('super.id'), primary_key=True)
    mama = db.relationship("Mama", foreign_keys=[Mama.daughter_id], uselist=False, back_populates="daughter")

main:

m1 = Mama()
m2 = Mama()
d1 = Daughter()
d2 = Daughter()
db.session.add(m1)
db.session.add(m2)
db.session.add(d1)
db.session.add(d2)
db.session.commit()
m1.daughter = d1
m2.daughter = d2
db.session.commit()
m1.daughter = d2
db.session.commit()

Everything is correct EXCEPT m2.daughter! It still points at d2 instead of None, and the table still contains the daughter_id of d2. Thus, what foreign key did I miss?

All the best and stay h
[sqlalchemy] One to One relation problem
Hello,

I have a huge problem with a specific "one to one" relation.

Working (it's the tutorial code):

class Son(db.Model):
    __tablename__ = 'son'
    id = db.Column(db.Integer, primary_key=True)
    papa_id = db.Column(db.Integer, db.ForeignKey('papa.id'))
    papa = db.relationship("Papa", foreign_keys=[papa_id], back_populates="son")

class Papa(db.Model):
    __tablename__ = 'papa'
    id = db.Column(db.Integer, primary_key=True)
    son = db.relationship("Son", uselist=False, back_populates="papa")

main:

p1 = Papa()
p2 = Papa()
s1 = Son()
s2 = Son()
db.session.add(p1)
db.session.add(p2)
db.session.add(s1)
db.session.add(s2)
db.session.commit()
p1.son = s1
p2.son = s2
db.session.commit()
p1.son = s2
db.session.commit()

Works like a charm; afterwards every relation is correct.

My code (I have to use a super class, that's the only difference):

class Super(db.Model):
    __tablename__ = 'super'
    id = db.Column(db.Integer, primary_key=True)

class Mama(Super):
    __tablename__ = 'mama'
    id = db.Column(db.Integer, db.ForeignKey('super.id'), primary_key=True)
    daughter_id = db.Column(db.Integer, db.ForeignKey('daughter.id'))
    daughter = db.relationship("Daughter", foreign_keys=[daughter_id], back_populates="mama")

class Daughter(Super):
    __tablename__ = 'daughter'
    id = db.Column(db.Integer, db.ForeignKey('super.id'), primary_key=True)
    mama = db.relationship("Mama", foreign_keys=[Mama.daughter_id], uselist=False, back_populates="daughter")

main:

m1 = Mama()
m2 = Mama()
d1 = Daughter()
d2 = Daughter()
db.session.add(m1)
db.session.add(m2)
db.session.add(d1)
db.session.add(d2)
db.session.commit()
m1.daughter = d1
m2.daughter = d2
db.session.commit()
m1.daughter = d2
db.session.commit()

Everything is correct EXCEPT *m2.daughter*! It still points at d2 instead of None, and the table still contains the daughter_id of d2. Thus, what foreign key did I miss?

All the best and stay healthy!
SirAnn -- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. --- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/592197fc-0f15-4c99-a2a7-a9443767bcedn%40googlegroups.com.
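For reference, the "working" tutorial pattern from the post above can be reproduced in plain SQLAlchemy without Flask-SQLAlchemy. This is a sketch under that assumption (the Papa/Son names follow the post); it demonstrates the behaviour the poster expects: reassigning the scalar relationship clears the previous owner via back_populates.

```python
from sqlalchemy import create_engine, Column, Integer, ForeignKey
from sqlalchemy.orm import relationship, sessionmaker
try:
    from sqlalchemy.orm import declarative_base   # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # SQLAlchemy 1.3

Base = declarative_base()

class Papa(Base):
    __tablename__ = 'papa'
    id = Column(Integer, primary_key=True)
    # uselist=False turns the collection side into a scalar: a one-to-one
    son = relationship("Son", uselist=False, back_populates="papa")

class Son(Base):
    __tablename__ = 'son'
    id = Column(Integer, primary_key=True)
    papa_id = Column(Integer, ForeignKey('papa.id'))
    papa = relationship("Papa", back_populates="son")

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

p1, p2, s1, s2 = Papa(), Papa(), Son(), Son()
session.add_all([p1, p2, s1, s2])
session.commit()

p1.son = s1
p2.son = s2
session.commit()

# Moving s2 over to p1 also detaches it from p2: back_populates
# maintains both sides of the scalar relationship.
p1.son = s2
session.commit()
```

After the final commit, p2.son is None and s1.papa is None, which is the outcome the poster was expecting from the Mama/Daughter version.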
Re: [sqlalchemy] Differences between TableClause insert and Model.__table__ inserts?
This was not clear enough in Mike's post: `Foo.__table__` is the same type of object as `_foo = table(...)`. The SQLAlchemy ORM is built on top of SQLAlchemy Core, so the ORM's `.__table__` attribute is a Core table object. Since they're the same, the two will have the same performance when passed to `conn.execute()`.

On Wednesday, November 25, 2020 at 4:18:46 PM UTC-5 Kata Char wrote:
> I see, does that mean there is no difference in performance if one or the other is used? In other words:
>
> from sqlalchemy.sql import table
>
> _foo = table(...)
> conn.execute(_foo.insert(), [{...}, ...])
>
> Would this have the same performance as `conn.execute(Foo.__table__.insert(), [{...}, ...])`?
>
> On Wednesday, November 25, 2020 at 8:27:53 AM UTC-8 Mike Bayer wrote:
>> On Wed, Nov 25, 2020, at 10:30 AM, Kata Char wrote:
>> Hi, sorry if this post is a duplicate, my first one didn't seem to make it.
>>
>> I was reading the documentation:
>> - https://docs.sqlalchemy.org/en/13/core/tutorial.html#execute-multiple
>> - https://docs.sqlalchemy.org/en/13/_modules/examples/performance/bulk_inserts.html
>>
>> Is there any difference between conn.execute(TableClause.insert(), [...]) vs conn.execute(Model.__table__.insert(), [...])?
>>
>> The first one is documented to use executemany(), but what about the second one?
>>
>> Any conn.execute() that passes a list of dictionaries as the second argument, where there is more than one entry in the list, will use the executemany() style with the DBAPI connection.
>>
>> With the ORM, the Model.__table__ attribute is a Table object. That tutorial seems to be referencing TableClause, which is the base class for Table, but all the examples there are using Table objects.

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/89002280-96e7-45e9-a11a-f104d8e2aa3fn%40googlegroups.com.
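The executemany() behaviour described above can be seen with a small Core-only sketch (table and column names are made up for illustration); a list of dicts as the second argument to execute() takes the DBAPI executemany() path, regardless of whether the Insert came from a Table, a TableClause, or a model's __table__:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

metadata = MetaData()
foo = Table(
    'foo', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)

# A list of dictionaries as the second argument triggers the DBAPI's
# executemany() style, exactly as described in the thread above.
with engine.begin() as conn:
    conn.execute(foo.insert(), [{'name': 'a'}, {'name': 'b'}, {'name': 'c'}])

with engine.connect() as conn:
    rows = conn.execute(foo.select()).fetchall()
```

The same call with `Foo.__table__.insert()` would compile to an identical INSERT statement.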
[sqlalchemy] Re: Dis/Reassociate objects with a db session.
Read the docs on State Management and pay attention to `merge`: https://docs.sqlalchemy.org/en/14/orm/session_state_management.html

Also, to simplify this stuff, a popular related pattern is to use a RevisionID or RevisionTimestamp on the objects. In the first session, you note the version information. In the second session, you fetch a new object and ensure it is the same - if so, your data is safe to update. If not, the objects became out of sync and may require more logic.

On Wednesday, November 25, 2020 at 12:57:23 PM UTC-5 jens.t...@gmail.com wrote:
> Hello,
>
> My question is regarding long-running tasks and db sessions. Currently I have the very rare situation where a task takes longer than a db session is valid and thus fails when it wants to write back results. Extending the TTL of a db session is probably not a good idea.
>
> I think the proper approach would be to open a db session, fetch data, close the db session, do work, open a new db session, write data, close the db session. So, I must make sure that I fetch all data ahead of time while the first session is active.
>
> Is there a way to re-associate objects that belonged to the first session with a newly opened one? What's the recommended approach here, does SQLA have any magic in store to help me with very long-lived ORM objects across db sessions? Or should I manage that data independently of their respective ORM objects?
>
> Thanks!
> Jens
To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/bf1d5c69-d500-4cac-bb29-026f1343a37bn%40googlegroups.com.
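The fetch / close / work / merge flow described in this thread can be sketched as follows (the `Task` model and its columns are hypothetical, made up for the example):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker
try:
    from sqlalchemy.orm import declarative_base   # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # SQLAlchemy 1.3

Base = declarative_base()

class Task(Base):                 # hypothetical model for illustration
    __tablename__ = 'task'
    id = Column(Integer, primary_key=True)
    result = Column(String(100))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# First session: fetch the data, then close before the long-running work.
s1 = Session()
s1.add(Task(id=1))
s1.commit()
task = s1.query(Task).filter_by(id=1).one()
s1.close()                        # task is now a detached instance

# ... long-running work happens here, using the already-loaded state ...
task.result = 'finished'          # mutate the detached object

# Second session: merge() reconciles the detached state with a fresh row.
s2 = Session()
merged = s2.merge(task)
s2.commit()
```

Note that merge() returns a new instance attached to the second session; the original detached object stays detached.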
[sqlalchemy] Re: Zope.sqlalchemy: AttributeError: '_thread._local' object has no attribute 'value'
Your new code is exactly what I have been running on several production systems, so it looks good to me!

Long story short: `zope.sqlalchemy` had been using the `sqlalchemy` "extensions", which were deprecated in 2012 and are set to be removed (if they haven't been already). See https://github.com/zopefoundation/zope.sqlalchemy/issues/31

The change that caused your issues was due to `zope.sqlalchemy` migrating from the deprecated system to its successor. There wasn't a clean way of swapping this out, so the developers opted for a tiny breaking change. For most people, that means changing two lines of code; in some complex projects, four lines might need to be changed!

Unfortunately, someone posted an answer on StackOverflow that is incorrect and misdirected you - it's not a simple change in class names. I'm sorry that tripped you up.

On Sunday, October 25, 2020 at 9:23:36 AM UTC-4 dever...@gmail.com wrote:
> Thanks for the pointer to the source. My confusion came from the Zope docs (and other sources, e.g. this answer: https://stackoverflow.com/a/58567212/123033 ) that seemed to suggest *EITHER*
>
> from zope.sqlalchemy import ZopeTransactionExtension, register
>
> *OR* changing all instances of ZopeTransactionExtension to ZopeTransactionEvents and using:
>
> from zope.sqlalchemy import register
>
> then
>
> DBSession = scoped_session(sessionmaker(**options))
>
> but the below - i.e. no parameters to sessionmaker() - got past the errors in the end (so far so good, anyway):
>
> from zope.sqlalchemy import register
> # . . .
> DBSession = scoped_session(sessionmaker())
> register(DBSession)
>
> Feel free to point out anything glaringly obvious. I've not been in this territory before, and it's a codebase in which I'm still finding my way (written by a codev) and yes, I might not spot what's taken for granted by anyone more familiar with SQLAlchemy etc. - I've often been in the reverse situation!
> > On Saturday, October 24, 2020 at 6:55:42 PM UTC+1 Jonathan Vanasco wrote: > >> The extract code you posted is incorrect. >> >> You were given a step towards the right answer - you MUST invoke >> `register`. >> >> I say a step, because there may be other factors going on. >> >> However as you can see from the source code ( >> https://github.com/zopefoundation/zope.sqlalchemy/blob/master/src/zope/sqlalchemy/datamanager.py#L293-L329), >> >> the call to `register` is required because it invokes the >> ZopeTransactionExtenstion AND sets up the transaction events. >> >> On Saturday, October 24, 2020 at 10:47:27 AM UTC-4 dever...@gmail.com >> wrote: >> >>> I'm updating a working Pyramid app that uses sqlalchemy and have some >>> success by replacing ZopeTransactionExtension with ZopeTransactionEvents. >>> >>> On running initialise with my local .ini file, All goes well, the >>> database tables (MariaDB) are all written, but these errors occur: >>> >>> Traceback (most recent call last): >>> "[...]sqlalchemy/util/_collections.py", line 1055, in __call__ return >>> self.registry.value AttributeError: '_thread._local' object has no >>> attribute 'value' During handling of the above exception, another exception >>> occurred: >>> >>> [cruft omitted] >>> "[...]sqlalchemy/orm/deprecated_interfaces.py", line 367, in >>> _adapt_listener ls_meth = getattr(listener, meth) AttributeError: >>> 'ZopeTransactionEvents' object has no attribute 'after_commit' >>> >>> For more code details, I've posted extracts from the models and main >>> app code on StackOverflow <https://stackoverflow.com/q/64486574/123033>, >>> but with no joy so far. >>> >> -- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. --- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. 
To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/10c30fed-2898-4072-949d-a7011b454081n%40googlegroups.com.
[sqlalchemy] Recreating Access in LibreOffice Base
I used to find MS Access to be a very useful tool in the right hands. I've been doing some Python script coding in LibreOffice Base and started looking at SQLAlchemy. It looks as if it wouldn't be too much of a job to add the components necessary to turn the LO Base front end into something not far from the Access of old (I have not used MS for 20 years or so). I was wondering if anyone has ventured there or is interested in providing some input?

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/d979a062-2b51-4646-babf-95d08abb6138n%40googlegroups.com.
[sqlalchemy] Re: Update multiple rows in SQLite Database
I believe your error is tied to this section of code:

> for item in ingredDict:
>     ingredient_item = Ingredients(ingredientKey=item['ingredientKey'],
>                                   ingredientDescription=item['ingredientDescription'],
>                                   ingredientRecipeKey=recipeKey,
>                                   ingredientQuantity=item['ingredientQuantity'])
>     Ingredients_item_object_list.append(ingredient_item)

It looks like you are iterating through this dict, creating new ingredients, and adding them to the recipe/database. This is fine for CREATE, but it raises an integrity error on UPDATE because the ingredients already exist and you are creating a new entry on every iteration.

A lazy way to address this would be something like: remove all the existing ingredients, flush, then run this loop.

A common way to handle this is the part of Mike's suggestion which you missed: calculate the differences between the sets of old and new items to determine which ingredients need to be added or removed (or updated, as that seems possible now). Within the Unit of Work, as Mike said, you need to delete and add (and also update, it would seem).

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/36add23c-b1c5-4c6a-a494-d9d71addc1a8o%40googlegroups.com.
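The "calculate the differences" step suggested above can be sketched with plain set arithmetic over the ingredient keys. This is pure Python (the `ingredientKey` field name follows the post; the helper function name is made up):

```python
def diff_ingredients(existing, incoming):
    """Split the incoming ingredient dicts into add/update/delete buckets
    by comparing their keys against what is already persisted."""
    old_keys = {i['ingredientKey'] for i in existing}
    new_keys = {i['ingredientKey'] for i in incoming}
    return {
        # present in incoming but not in the database: create these
        'add':    [i for i in incoming if i['ingredientKey'] not in old_keys],
        # present in the database but dropped from incoming: delete these
        'delete': [i for i in existing if i['ingredientKey'] not in new_keys],
        # present in both: candidates for an UPDATE of their fields
        'update': [i for i in incoming if i['ingredientKey'] in old_keys],
    }

existing = [{'ingredientKey': 1}, {'ingredientKey': 2}]
incoming = [{'ingredientKey': 2}, {'ingredientKey': 3}]
buckets = diff_ingredients(existing, incoming)
```

With the buckets in hand, the Unit of Work work becomes: session.delete() the 'delete' bucket, session.add() new objects for 'add', and copy fields onto the existing objects for 'update'.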
Re: [sqlalchemy] How to disable the integer col + pk = serial behaviour?
I've just done something like this the other day, but it was with an existing sequence. We're using Alembic for schema updates, so I'm not sure whether SQLAlchemy's built-in create_all would behave the same way, but you should still be able to use a similar approach.

shared_sequence = Sequence('shared_id__seq')

class ModelA(Base):
    shared_id = Column(Integer, primary_key=True,
                       default=shared_sequence.next_value(),
                       server_default=shared_sequence.next_value())

class ModelB(Base):
    shared_id = Column(Integer, primary_key=True,
                       default=shared_sequence.next_value(),
                       server_default=shared_sequence.next_value())

This seems to be working fine. Before that I'd tried providing Sequence as an arg to Column like you have, ran into some problems, and this seemed like an easier option.

On Wed, 5 Aug 2020 at 13:59, Zsolt Ero wrote:
> Hi,
>
> I've split a table into two tables, for performance reasons. I'd like to insert into both tables using the same sequence. I'm inserting using executemany_mode='values'.
>
> My idea is to call nextval() on the sequence before insert and fill in the values client side, before inserting:
>
> select nextval('myseq') FROM generate_series(1,...)
>
> Everything looks good, except for the default behaviour of SQLAlchemy to turn an integer + pk column into a SERIAL.
>
> As an alternative I'm also looking at using Sequence('myseq') from https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#sequences-serial-identity, but this is broken for issuing "CREATE SEQUENCE myseq" before the table creation, which is missing the IF NOT EXISTS part.
>
> How can I either:
> - turn off the automatic behaviour of making a pk + int = serial?
> - add an IF NOT EXISTS to the Sequence()?
>
> Or any alternative ideas?

-- Michael Mulqueen
Method B Ltd
m...@method-b.uk / +44 (0)330 223 0864
http://www.method-b.uk/

Method B Limited is registered in England and Wales with the company number 9189496. Registered office: 28-29 Richmond Place, Brighton, East Sussex, England, BN2 9NA

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/CAHxMHYXTJ8zLobCdW%2BAtTMaUHGFKkub-9L8cEJMJhYBfPUO%3DQA%40mail.gmail.com.
Re: [sqlalchemy] How can I query two fields in my database to confirm it exists and then delete
With the ORM, you'd filter, something like this:

.filter(LessonEntity.lesson_id == lesson_id, LessonEntity.device_type_id == device_type_id)

If you were just expecting to find one, you might want to chain .first() onto the end of that. You can delete as normal: session.delete(obj)

If you want to delete without checking whether the rows exist first, and avoid shuttling data back and forth between the database and the application, you can use the expression language to issue a DELETE command limited by a WHERE clause: https://docs.sqlalchemy.org/en/13/core/tutorial.html

Kind regards,
Mike

On Tue, 4 Aug 2020 at 10:32, Adam Shock wrote:
> My SQL database has the following:
>
> To query the lesson_id I would do:
>
> lesson = LessonEntity.query.get(lesson_id)
>
> How can I query the database to check if lesson_id exists on the same entry as device_type and then remove it?
>
> So for example, if lesson_id matches 107 and the same entry includes device_type_id = 7, I want to remove this whole entry.

-- Michael Mulqueen
Method B Ltd
m...@method-b.uk / +44 (0)330 223 0864
http://www.method-b.uk/

Method B Limited is registered in England and Wales with the company number 9189496. Registered office: 28-29 Richmond Place, Brighton, East Sussex, England, BN2 9NA

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/CAHxMHYVRk9bSD31Ut6ppQttPr-A1OQ5E8moW6JbdO_uSN-SV7Q%40mail.gmail.com.
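The "DELETE limited by a WHERE clause" approach from the answer above can be sketched in Core. The table layout here is an assumption based on the question (the real LessonEntity model surely has more columns), and the 107/7 values follow the post:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, and_

metadata = MetaData()
lessons = Table(                     # hypothetical layout based on the question
    'lesson_entity', metadata,
    Column('id', Integer, primary_key=True),
    Column('lesson_id', Integer),
    Column('device_type_id', Integer),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(lessons.insert(), [
        {'lesson_id': 107, 'device_type_id': 7},
        {'lesson_id': 107, 'device_type_id': 8},
    ])
    # DELETE ... WHERE lesson_id = 107 AND device_type_id = 7,
    # without loading any rows into the application first.
    conn.execute(lessons.delete().where(
        and_(lessons.c.lesson_id == 107, lessons.c.device_type_id == 7)))

with engine.connect() as conn:
    remaining = conn.execute(lessons.select()).fetchall()
```

Only the (107, 8) row survives; the matching entry is removed entirely on the server side.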
[sqlalchemy] Re: sqlalchemy messes up names with "_1" suffix
> i have this little flask-admin game running; now out of nowhere sqlalchemy has begun to add strange "_1" suffixes to the column names. i know sqlalchemy does this to keep names unique, but in my case the queries are failing

SQLAlchemy does do this, for those reasons, and to the columns... but note the exact error:

sqlalchemy.exc.InternalError: (pymysql.err.InternalError) (1054, "Unknown column 'attribs_1.ts' in 'field list'")

It's not finding the `.ts` column on the `attribs` table, which was mapped to `attribs_1` in the query. I think the best thing to do is what Mike said - create a complete executable example you can share: the model + the query. My first guess is that you have a typo in the column/table name in the model or query. There could also be an inheritance issue caused by a typo.

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/c53dd18c-bc8a-42bd-819c-0b111e1a71a2o%40googlegroups.com.
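As background for the "_1" suffix discussed above: one common way SQLAlchemy produces names like `attribs_1` is aliasing, e.g. via aliased() or certain eager-load and inheritance queries. A minimal demonstration (the `Attrib` model is a made-up stand-in for the poster's mapping):

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import aliased, sessionmaker
try:
    from sqlalchemy.orm import declarative_base   # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # SQLAlchemy 1.3

Base = declarative_base()

class Attrib(Base):                  # hypothetical stand-in for the real model
    __tablename__ = 'attribs'
    id = Column(Integer, primary_key=True)

session = sessionmaker(bind=create_engine('sqlite://'))()

# aliased() renders the table as "attribs AS attribs_1" in the SQL,
# which is where names of this shape come from.
a1 = aliased(Attrib)
sql = str(session.query(a1))
```

If `sql` references a column that doesn't actually exist on the mapped table, the database reports it against the alias name (`attribs_1.ts`), which matches the error in this thread.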
Re: [sqlalchemy] Encrypt/Decrypt specific column(s)
On Thursday, July 9, 2020 at 2:12:36 PM UTC-4, Justvuur wrote:
> I've done some more digging... It seems when I did the search for "secrets", the text is encrypted and compared to the value in the columns.

That is how client-side encryption works. If you want to search for "secrets", you need to use server-side encryption (which depends on the database). In those systems, the server will decrypt the column in every row when searching - which can be a performance issue.

> The thing is, this type of comparison won't work: the algorithm generates a different string each encryption for the same string.

What are you using for your encryption key? The key should be persistent, and should always generate the same output for a given input. In the example from Michael Bayer, a random uuid is used as a placeholder.

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/2506c7ef-7a66-4662-a40b-db6e70b93347o%40googlegroups.com.
Re: [sqlalchemy] convert subset to dictionary
Based on what you shared above:

* The "Subject" table is: StudentId, SubjectCode, SubjectName
* There are 181 subjects

It looks like you don't have a "Subject" table, but a "StudentAndSubject" table. I think you'd see a bigger performance improvement by normalizing that data into two tables:

Subject: SubjectId (primary key), SubjectCode, SubjectName
Student2Subject: StudentId, SubjectId (primary key is both)

Assuming this can be done with your data, the database performance should improve because:

1. The raw file storage will decrease
2. The in-memory dataset size will decrease

You could then either:

1. use the Subject table as part of a joined query to keep things simple, or
2. just select off a join of Student+Student2Subject, and query all the Subjects separately.

Even if there are 2000 subjects total, it should only take a few ms to get all of that into a Python data structure that is used to generate your CSV.

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/37704b34-346d-4bf5-b0fa-c892b13d4e1eo%40googlegroups.com.
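The normalized layout proposed above can be sketched in Core (column names follow the post, snake_cased; the Student table is reduced to just an id for the sketch). Note the association table's composite primary key over the two foreign keys:

```python
from sqlalchemy import (create_engine, Column, ForeignKey, Integer,
                        MetaData, String, Table)

metadata = MetaData()

# One row per distinct subject, instead of repeating code+name per student.
subject = Table(
    'subject', metadata,
    Column('subject_id', Integer, primary_key=True),
    Column('subject_code', String(20)),
    Column('subject_name', String(100)),
)

# Association table: the primary key is (student_id, subject_id) together.
student2subject = Table(
    'student2subject', metadata,
    Column('student_id', Integer, ForeignKey('student.id'), primary_key=True),
    Column('subject_id', Integer, ForeignKey('subject.subject_id'), primary_key=True),
)

student = Table(
    'student', metadata,
    Column('id', Integer, primary_key=True),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)   # resolves the cross-table foreign keys
```

The CSV export then becomes a join of student + student2subject, with the (small) subject table either joined in or loaded separately into a dict keyed by subject_id.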
[sqlalchemy] Re: SQLAlchemy taking too much time to process the result
On Monday, July 6, 2020 at 2:14:33 PM UTC-4, Saylee M. wrote:
> So, when I passed the query to MySQL directly, it took very little time (around 0.016 seconds) but when I passed the same query through the SQLAlchemy connector, it took around 600 seconds.

"Query MySQL directly" - do you mean using the MySQL command-line client? Assuming yes: the 0.016s only reflects the time MySQL spent processing the query and generating the result set; the SQLAlchemy time includes that, plus transferring all the data, plus generating Python data structures (which could be SQLAlchemy ORM models or generic Python data structures). There are also external factors that can account for time changes, like server load, index loading, and cache utilization.

> I am not sure what the issue can be. It'll be great if I can get any pointers to reduce the time, preferably under 10 seconds!

Showing a short, self-contained, correct example (SSCCE) of your code would let others troubleshoot it more effectively. The most likely situation, though, is that you are loading all the rows. There should be no difference in the query time itself.

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/1b1d83d9-eb01-4541-962a-3d4a6551afb9o%40googlegroups.com.
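If the cost really is in materializing every row at once, one mitigation is fetching the result in chunks. A sketch with Core's fetchmany() (the table and the chunk size of 200 are arbitrary choices for the example):

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer

metadata = MetaData()
big = Table('big', metadata, Column('id', Integer, primary_key=True))

engine = create_engine('sqlite://')
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(big.insert(), [{'id': i} for i in range(1, 1001)])

# Pull rows in fixed-size chunks instead of fetchall(), keeping peak
# memory bounded while still processing every row.
total = 0
with engine.connect() as conn:
    result = conn.execute(big.select())
    while True:
        chunk = result.fetchmany(200)
        if not chunk:
            break
        total += len(chunk)
```

With the ORM, yield_per() serves a similar purpose; either way, the database-side query time is unchanged - only the Python-side materialization cost is spread out.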
[sqlalchemy] Re: Locking method used in SQLAlchemy (postgres)
On Monday, June 29, 2020 at 8:00:40 PM UTC-4, gbr wrote:
> I'm using SQLAlchemy's Core to interface a postgres database (via psycopg2) alongside Flask-SQLAlchemy in a Flask app. Everything was working fine until I recently discovered what seems to be a deadlock state which is caused by two queries locking each other (at least that's my working hypothesis).

Beyond what Mike said... I don't use Flask, but I use Pyramid and Twisted. The only times I have experienced locking issues with SQLAlchemy:

* unit tests: the setup uses a first db connection, but it is erroneously implemented and not closed. When the test runners begin, the db is locked, so everything fails.
* application design issues: if you deploy a forking server and don't reset the pool on fork (`engine.dispose()`), all sorts of integrity and locking issues pop up (multiple processes try to use a single connection which never closes properly). If you don't have proper connection checkout/return, that can happen too.
* very high concurrency: a pileup of connections want to lock for update/insert; inevitably, some time out and deadlock.

To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/3981ffb7-611f-4a88-9058-a0e09ab60005o%40googlegroups.com.
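The "reset the pool on fork" advice above can be sketched as follows; the dispose() call is what a child process would run immediately after forking so it never reuses DBAPI connections inherited from the parent (the fork itself is omitted here so the sketch stays portable):

```python
from sqlalchemy import create_engine, text

engine = create_engine('sqlite://')

# In a forking server, the child process would call this right after
# the fork: dispose() discards the pooled connections inherited from
# the parent, and a fresh pool is built lazily on the next checkout.
engine.dispose()

# The engine remains fully usable afterwards - new connections are
# simply opened on demand.
with engine.connect() as conn:
    value = conn.execute(text('SELECT 1')).scalar()
```

Without the dispose(), parent and child can end up writing interleaved protocol traffic over the same socket, which shows up as exactly the kind of integrity and locking errors described above.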