[sqlalchemy] Re: Patterns for building with SQLAlchemy / asyncio

2018-08-10 Thread bkc
Mike, Thanks for the pointer to your enlightening post 
http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/ 
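For later readers of this thread: one pattern that post walks through is keeping the SQLAlchemy work synchronous and handing it off to a thread pool from the event loop. A minimal sketch of that shape (the engine URL and query here are illustrative, not from the post):

import asyncio
from concurrent.futures import ThreadPoolExecutor

from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///example.db")   # illustrative URL
executor = ThreadPoolExecutor(max_workers=4)

def count_users():
    # Ordinary blocking SQLAlchemy work; it runs in a worker thread.
    with engine.connect() as conn:
        return conn.execute(text("SELECT count(*) FROM users")).scalar()

async def handle_request():
    loop = asyncio.get_event_loop()
    # The coroutine stays responsive while the blocking call runs elsewhere.
    total = await loop.run_in_executor(executor, count_users)
    print("user count:", total)

asyncio.run(handle_request())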

 




Re: [sqlalchemy] declarative Association table w/ foreign_keys in relationship() still raises "multiple foreign key paths linking the tables"

2016-12-09 Thread bkc
That's a good suggestion, thanks.



Re: [sqlalchemy] declarative Association table w/ foreign_keys in relationship() still raises "multiple foreign key paths linking the tables"

2016-12-09 Thread bkc
Thanks Mike for the quick reply.

I have changed the code so it uses backref on one side. The models now get
created without error. I haven't actually tested the relationship yet, but
I'm fairly sure it's fine now.

For anyone else with this problem, here's the 'fixed' code. I didn't change
the Artifact_Reference table; it's the same as posted above. I'm not keen
on the master/slave naming.. maybe primary/secondary would be better.. it's
not exactly Parent/Child.. heh.


class Artifact_Relation(Base):
    __tablename__ = 'artifact_relation'
    master_artifact_id = Column(
        Integer,
        ForeignKey('artifact.id', name='artifact_relation_master_id_fk',
                   ondelete="cascade", onupdate="cascade"),
        primary_key=True,
        nullable=False
    )

    slave_artifact_id = Column(
        Integer,
        ForeignKey('artifact.id', name='artifact_relation_slave_id_fk',
                   ondelete="cascade", onupdate="cascade"),
        primary_key=True,
        nullable=False
    )

    relationship_type = Column(String(24), nullable=False)

    slave = relationship("Artifact", backref=backref("masters"),
                         foreign_keys=[slave_artifact_id])
    master = relationship("Artifact", backref=backref("slaves"),
                          foreign_keys=[master_artifact_id])


class Artifact(Base):
    "Artifact"
    __tablename__ = 'artifact'
    id = Column('id', Integer, primary_key=True)
    artifact_type = Column('artifact_type', String(16), nullable=False)
    __mapper_args__ = {
        'polymorphic_on': artifact_type
    }

    artifact_references = relationship(
        'Artifact_Reference',
        back_populates='artifact',
        cascade="all, delete-orphan",
        passive_deletes=True
    )
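A rough usage sketch of how these backrefs end up being navigated (untested here; the artifact_type values are just placeholders):

# Assumes the models above; assigning the relationship attributes
# populates the backrefs in memory even before a flush.
parent = Artifact(artifact_type="document")      # placeholder type value
child = Artifact(artifact_type="attachment")     # placeholder type value

link = Artifact_Relation(master=parent, slave=child,
                         relationship_type="contains")

# Each Artifact sees its association rows through the backrefs.
assert link in parent.slaves    # rows where this artifact is the master
assert link in child.masters    # rows where this artifact is the slave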





[sqlalchemy] declarative Association table w/ foreign_keys in relationship() still raises "multiple foreign key paths linking the tables"

2016-12-08 Thread bkc
Hi,

Running SQLAlchemy==1.1.4

I'm getting this

> AmbiguousForeignKeysError: Could not determine join condition between
> parent/child tables on relationship Artifact.slaves - there are multiple
> foreign key paths linking the tables.  Specify the 'foreign_keys' argument,
> providing a list of those columns which should be counted as containing a
> foreign key reference to the parent table.

and I'm wondering if I'll have to use a primaryjoin= argument instead of
relying on foreign_keys= to work around this.
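For reference, an explicit primaryjoin on the association side would look roughly like the sketch below (declarative string form; untested against this schema, and the Artifact-side relationships would presumably need the same treatment):

# Sketch only: spell out the join condition instead of listing foreign_keys.
master = relationship(
    "Artifact",
    primaryjoin="Artifact_Relation.master_artifact_id == Artifact.id",
    back_populates="slaves",
)
slave = relationship(
    "Artifact",
    primaryjoin="Artifact_Relation.slave_artifact_id == Artifact.id",
    back_populates="masters",
)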

The code is shown below. I've included Artifact_Reference for completeness 
but it's probably not needed.

Is the problem on the Artifact_Relation model, or on the Artifact model?
I'm thinking the latter, but I don't see any logical way to specify
foreign_keys on Artifact since it's on the "one" side of a many-to-one.

I'm following this link for Artifact_Relation (i.e. the association table),
where Artifact serves as both the Parent and Child:
http://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html#association-object

I'm following this link for multiple-path resolution:
http://docs.sqlalchemy.org/en/latest/orm/join_conditions.html#relationship-foreign-keys

Even though Artifact is on both sides of the association, I can't see
needing to use backref() with remote_side to resolve this, as in
http://docs.sqlalchemy.org/en/latest/orm/self_referential.html

Thanks for any suggestions..


Base = declarative_base()

class Artifact_Relation(Base):
    __tablename__ = 'artifact_relation'
    master_artifact_id = Column(
        Integer,
        ForeignKey('artifact.id', name='artifact_relation_master_id_fk',
                   ondelete="cascade", onupdate="cascade"),
        primary_key=True,
        nullable=False
    )

    slave_artifact_id = Column(
        Integer,
        ForeignKey('artifact.id', name='artifact_relation_slave_id_fk',
                   ondelete="cascade", onupdate="cascade"),
        primary_key=True,
        nullable=False
    )

    relationship_type = Column(String(24), nullable=False)

    slave = relationship("Artifact", back_populates="masters",
                         foreign_keys=[slave_artifact_id])
    master = relationship("Artifact", back_populates="slaves",
                          foreign_keys=[master_artifact_id])


class Artifact(Base):
    "Artifact"
    __tablename__ = 'artifact'
    id = Column('id', Integer, primary_key=True)
    artifact_type = Column('artifact_type', String(16), nullable=False)
    __mapper_args__ = {
        'polymorphic_on': artifact_type
    }

    artifact_references = relationship(
        'Artifact_Reference',
        back_populates='artifact',
        cascade="all, delete-orphan",
        passive_deletes=True
    )

    slaves = relationship("Artifact_Relation", back_populates="master")
    masters = relationship("Artifact_Relation", back_populates="slave")


class Artifact_Reference(Base):
    """Artifact_Reference"""
    __tablename__ = 'artifact_reference'
    id = Column('id', Integer, primary_key=True)
    type = Column('type', String(24), nullable=False)
    artifact_id = Column('artifact_id', Integer,
                         ForeignKey(Artifact.id,
                                    name="artifact_reference_artifact_id_fk",
                                    ondelete="cascade", onupdate="cascade"),
                         nullable=False)
    reference = Column('reference', String(64), nullable=False)

    artifact = relationship(
        Artifact,
        back_populates='artifact_references'
    )
    Index('reference_idx', reference, unique=False)
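Since the error names Artifact.slaves, the Artifact-side relationships look like the ones that still need disambiguating. One hedged sketch of that, using the declarative string form of foreign_keys (untested against this exact schema):

# Sketch: give the Artifact-side relationships their own foreign_keys.
slaves = relationship(
    "Artifact_Relation",
    back_populates="master",
    foreign_keys="Artifact_Relation.master_artifact_id",
)
masters = relationship(
    "Artifact_Relation",
    back_populates="slave",
    foreign_keys="Artifact_Relation.slave_artifact_id",
)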


 




[sqlalchemy] Re: Selecting distinct entries based on max timestamp

2016-11-21 Thread bkc
Thanks for the book reference Jonathan, it looks very useful!



[sqlalchemy] firebird engine patch for type_conv

2007-04-11 Thread bkc

patch against rev 2497

I believe type_conv only allows numeric values, so any type_conv opt passed
has to be converted to an int. I bet the same rule applies to
concurrency_level, but this patch doesn't address that.

(Sorry, using Google Groups I can't see how to attach this as a file.)

Index: /usr/local/src/sqlalchemy/lib/sqlalchemy/databases/firebird.py
===================================================================
--- /usr/local/src/sqlalchemy/lib/sqlalchemy/databases/firebird.py  (revision 2497)
+++ /usr/local/src/sqlalchemy/lib/sqlalchemy/databases/firebird.py  (working copy)
@@ -120,7 +120,7 @@
         del opts['port']
         opts.update(url.query)
 
-        type_conv = opts.pop('type_conv', self.type_conv)
+        type_conv = int(opts.pop('type_conv', self.type_conv))
         concurrency_level = opts.pop('concurrency_level', self.concurrency_level)
         global _initialized_kb
         if not _initialized_kb and self.dbapi is not None:
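Since the option arrives through url.query as a string, the coercion matters whenever type_conv is passed in the connection URL, roughly like this (the URL details are illustrative):

from sqlalchemy import create_engine

# type_conv comes out of the URL query string as "200"; without the int()
# in the dialect it would be handed to kinterbasdb as a str.
engine = create_engine(
    "firebird://sysdba:masterkey@localhost/tmp/test.fdb?type_conv=200"
)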





[sqlalchemy] custom join conditions not-reevaluated after changes to joined table records

2006-11-16 Thread bkc

I thought this was working yesterday before svn update, but I can't be
sure.

I have two tables, related like this:

mapper(Shipment, TShipment,
       order_by=TShipment.c.id,
       properties={
           'quote_xml': deferred(TShipment.c.quote_xml),
           'quote_pickle': deferred(TShipment.c.quote_pickle),
           'raw_manifest': deferred(TShipment.c.raw_manifest),
           'decoded_manifest': deferred(TShipment.c.decoded_manifest),
           'packages': relation(Package, backref='shipment',
                                cascade='all, delete-orphan'),
           'voided_packages': relation(Package, primaryjoin=and_(
               TShipment.c.id == TPackage.c.shipment_id,
               TPackage.c.void_confirmed != None)),
           'nonvoid_packages': relation(Package, primaryjoin=and_(
               TShipment.c.id == TPackage.c.shipment_id,
               TPackage.c.void_confirmed == None)),
       })

My code loads a shipment, then does a len(shipment.nonvoid_packages) and
gets the proper number.

Then it decides it needs to void a package, so it sets the package's
void_confirmed date and calls session.flush().

Later, the code does another len(shipment.nonvoid_packages) and the
already-voided package is still counted as non-void.

Are custom joins supposed to be re-evaluated when there's a change to
one of the tables in the join?

The first non-void lookup:

select  stuff FROM package
WHERE ? = package.shipment_id AND package.void_confirmed IS NULL ORDER
BY package.id
2006-11-16 17:18:14,707 INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94 [138]

void_confirmed::

2006-11-16 17:18:15,071 INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94 UPDATE package SET
void_confirmed=? WHERE package.id = ?
INFO:sqlalchemy.engine.threadlocal.TLEngine.0x..94:UPDATE package SET
void_confirmed=? WHERE package.id = ?
2006-11-16 17:18:15,072 INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94 [datetime.datetime(2006,
11, 16, 17, 18, 15, 67070), 214]
INFO:sqlalchemy.engine.threadlocal.TLEngine.0x..94:[datetime.datetime(2006,
11, 16, 17, 18, 15, 67070), 214]

Later this code runs::

if len(existing_shipment.nonvoid_packages) != 0:
    for package in existing_shipment.nonvoid_packages:
        print "non-void package tr %r, void date %r" % \
            (package.trackingnumber, package.void_confirmed)

and the output:

non-void package tr '013514470107311', void date
datetime.datetime(2006, 11, 16, 17, 18, 15, 67070)
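For anyone digging this up from the archive: a loaded relationship collection is cached on the instance, so a criteria-based relation like nonvoid_packages won't re-run its SELECT just because a row it filters on changed. In current SQLAlchemy the usual move is to expire the attribute (or the whole object) so the next access reloads it; a rough sketch:

# After flushing the void_confirmed change, mark the filtered collections
# stale so the next access re-runs the SELECT with the custom join.
session.flush()
session.expire(shipment, ["nonvoid_packages", "voided_packages"])

still_open = len(shipment.nonvoid_packages)   # re-queried here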





[sqlalchemy] session.flush calls commit, and Binary issue

2006-11-15 Thread bkc

Using sqlalchemy rev 2102

Is flush supposed to commit by itself?

Also, it looks like Binary fields get flushed every time, even when not
changed.

Again: I'm using WSGI middleware to automatically roll back if an exception
occurs in my app, but I have a use case where I absolutely must commit a
transaction even if an exception occurs later.

So I have a set of master/child tables, Shipment -> Package.

I have a session and a transaction (on which begin has already been
called).

Here's what happens.

now = datetime.datetime.now()

shipment = session.query(Shipment).get_by()

shipment.hash = ''
for package in shipment.packages:
    package.void_requested = now
    print "FLUSH 1"
    session.flush()
    print "COMMIT 1"
    trans.commit()
    # some other non db stuff
    package.void_confirmed = now
    print "FLUSH 2"
    session.flush()
    print "COMMIT 2"
    trans.commit()


(This causes the UOW to write out the entire shipment object, including
large and unchanged pickle fields, and then write out the package.)

FLUSH 1
2006-11-14 21:04:38,532 INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94
UPDATE shipment SET quote_pickle=?, quote_hash=?,  decoded_manifest=?
snip

2006-11-14 21:04:38,837 INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94
UPDATE package SET void_requested=? WHERE package.id = ?
INFO:sqlalchemy.engine.threadlocal.TLEngine.0x..94:UPDATE package SET
void_requested=? WHERE package.id = ? 2006-11-14 21:04:38,838 INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94 [datetime.datetime(2006,
11,
14, 21, 4, 38, 518976), 200]
INFO:sqlalchemy.engine.threadlocal.TLEngine.0x..94:[datetime.datetime(2006
, 11, 14, 21, 4, 38, 518976), 200] COMMIT 1 2006-11-14 21:04:38,850
INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94 COMMIT
INFO:sqlalchemy.engine.threadlocal.TLEngine.0x..94:COMMIT

And then comes FLUSH2, like so:

FLUSH 2
2006-11-14 21:04:39,544 INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94
BEGIN INFO:sqlalchemy.engine.threadlocal.TLEngine.0x..94:BEGIN
2006-11-14
21:04:39,551 INFO sqlalchemy.engine.threadlocal.TLEngine.0x..94 UPDATE
shipment SET quote_pickle=?, decoded_manifest=? WHERE shipment.id = ?
INFO:sqlalchemy.engine.threadlocal.TLEngine.0x..94:UPDATE shipment SET
quote_pickle=?, decoded_manifest=? WHERE shipment.id = ?

(Here it is, again writing out these unchanged shipment fields; shipment
hasn't been touched between the two flushes.)

2006-11-14 21:04:39,750 INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94
UPDATE package SET void_confirmed=? WHERE package.id = ?
INFO:sqlalchemy.engine.threadlocal.TLEngine.0x..94:UPDATE package SET
void_confirmed=? WHERE package.id = ? 2006-11-14 21:04:39,750 INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94 [datetime.datetime(2006,
11,
14, 21, 4, 39, 541557), 200]
INFO:sqlalchemy.engine.threadlocal.TLEngine.0x..94:[datetime.datetime(2006
, 11, 14, 21, 4, 39, 541557), 200] 2006-11-14 21:04:39,755 INFO
sqlalchemy.engine.threadlocal.TLEngine.0x..94 COMMIT
INFO:sqlalchemy.engine.threadlocal.TLEngine.0x..94:COMMIT

(see above, flush did a commit by itself)

COMMIT 2

File '/usr/local/src/sqlalchemy/lib/sqlalchemy/engine/base.py', line 389,
in commit
    raise exceptions.InvalidRequestError("This transaction is inactive")
InvalidRequestError: This transaction is inactive


engine strategy is threadlocal.

Is this the way it's supposed to work?

--
Brad Clements,[EMAIL PROTECTED](315)268-1000
http://www.murkworks.com
AOL-IM or SKYPE: BKClements
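For readers of the archive: with a plain (non-autocommitting) Session in current SQLAlchemy, flush() only emits the pending SQL inside the open transaction, and nothing is made permanent until the caller commits. A minimal sketch of that shape (engine URL illustrative; assumes the Package model from this thread):

import datetime

from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("sqlite:///example.db")   # illustrative URL

with Session(engine) as session:
    pkg = session.get(Package, 200)              # assumes a mapped Package class
    pkg.void_requested = datetime.datetime.now()

    session.flush()    # emits the UPDATE inside the still-open transaction
    # ... more work in the same transaction ...
    session.commit()   # the changes become permanent only here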





[sqlalchemy] Re: session.flush calls commit, and Binary issue

2006-11-15 Thread bkc

I am using PickleType and Binary on the same table.. I'll see if I can
come up with a small test. These fields are being written back every
time any other field changes, even when they haven't changed.

Regarding transactions: I thought session.flush was just UOW work and
didn't actually do commits.

I need an outside transaction wrapper that begins when a web request is
received and commits if the web request was processed without error. In
that use case, session.flush should not actually commit its changes.

My second use case (described in the original message) is that
occasionally, within this transaction wrapper, I will need to make an
explicit commit.

So write me up as really confused now.. I thought session was just UOW
and didn't handle low-level transactions at all.. I am using explicit
connections and sessions..
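For what it's worth, the request-scoped wrapper described here maps onto a small piece of WSGI middleware in today's terms: commit when the wrapped app returns normally, roll back on any exception. A rough sketch (the sessionmaker, engine URL, and environ key are illustrative):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite:///example.db")   # illustrative URL
SessionFactory = sessionmaker(bind=engine)

class TransactionMiddleware:
    """Commit per request on success, roll back on any exception."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        session = SessionFactory()
        environ["myapp.session"] = session       # hypothetical key the app reads
        try:
            result = self.app(environ, start_response)
            session.commit()
            return result
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()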

