[sqlalchemy] Re: in-place modification of queries?

2015-12-28 Thread Lloyd Kvam
just assign to the same variable name.
query = query.filter(Foo.x == 1)

The generative queries are a feature.

Making queries mutable would allow functions to modify them behind your 
back, which would complicate debugging.

You can have functions return the modified query, or have functions return 
the expressions to be applied.

What is the benefit you seek from mutable queries?
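To illustrate the generative pattern being described (a minimal sketch with a hypothetical Foo model, not code from this thread; uses the SQLAlchemy 1.4+ declarative imports):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Foo(Base):  # hypothetical model for illustration
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    x = Column(Integer)

def only_x_equals(query, value):
    # .filter() returns a NEW Query; the input query is unchanged
    return query.filter(Foo.x == value)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Foo(x=1), Foo(x=2)])
session.commit()

query = session.query(Foo)
query = only_x_equals(query, 1)   # rebind the name to the returned query
```

The helper returns the narrowed query rather than mutating its argument, which is exactly the "functions return the modified query" style suggested above.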

On Monday, December 28, 2015 at 6:19:01 AM UTC-5, Chris Withers wrote:
>
> Hi All, 
>
> Is there anything I can do to make a Query instance non-generative? 
>
> query = session.query(Foo) 
> query.filter(Foo.x==1) 
>
> ...and have the query actually be modified rather than returning a new 
> query with the clause added? 
>
> cheers, 
>
> Chris 
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Bulk update when using .execute() with multiple parameters?

2014-09-05 Thread Lloyd Kvam
You really want to test this yourself on YOUR setup.  I know my code is 
sending a single execute for this scenario, and I'm not hitting any speed 
bumps.  Set echo to True and see what happens.  (a_table.metadata.bind.echo 
= True).   Then set it back to False.  You can change echo on the fly.
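For example (a sketch against an in-memory SQLite database with a hypothetical a_table): a single execute() with a list of parameter dictionaries is sent to the DBAPI as one executemany, and echo can be toggled around it to watch the emitted SQL:

```python
from sqlalchemy import Table, Column, Integer, MetaData, create_engine

engine = create_engine("sqlite://")
metadata = MetaData()
a_table = Table("a_table", metadata,
                Column("id", Integer, primary_key=True),
                Column("val", Integer))
metadata.create_all(engine)

engine.echo = True   # log emitted SQL; you can flip this on the fly
with engine.begin() as conn:
    # one execute with a list of parameter dicts -> DBAPI executemany
    conn.execute(a_table.insert(), [{"val": i} for i in range(3)])
engine.echo = False  # set it back when done
```

With echo on, the log should show one INSERT statement followed by the parameter sets, rather than one log line per row.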



On Friday, September 5, 2014 6:34:09 PM UTC-4, Joshua Ma wrote:

 Hi all,

 Can someone confirm for me that, when Connection.execute is run with an 
 UPDATE statement and a list of 100k values, you get a batch update query 
 that incurs only one DB round-trip instead of 100k roundtrips? And this 
 should be true for alembic's execute (
 http://alembic.readthedocs.org/en/latest/ops.html#alembic.operations.Operations.execute)
  
 as well?

 I'm asking because I thought that's what it did, but we're hitting some 
 speed bumps during a migration (doing a few hundred thousand updates) and 
 I'm starting to suspect that it's somehow waiting for each update query to 
 come back before sending the next.

 I'm probably wrong though, so I was hoping someone could confirm the bulk 
 behavior for me before I dig deeper. Running alembic 0.6.5 and SQLAlchemy 
 0.9.4.

 Thanks!
 Josh




Re: [sqlalchemy] How Can I Build a ForeignKey to a Table Which Has Multiple Primary Keys?

2013-03-07 Thread Lloyd Kvam
While primary_key is specified twice, once for each column, there is only 
ONE primary key which is a composite.

You need to use ForeignKeyConstraint at the Table level to specify a 
composite foreign key.

You need to provide two lists: the local table columns and the 
corresponding foreign table columns.
Now you know what to search for if you need more information.

from the schema definition language docs:

It’s important to note that the ForeignKeyConstraint
(http://docs.sqlalchemy.org/en/rel_0_7/core/schema.html#sqlalchemy.schema.ForeignKeyConstraint)
is the only way to define a composite foreign key. While we could also have
placed individual ForeignKey
(http://docs.sqlalchemy.org/en/rel_0_7/core/schema.html#sqlalchemy.schema.ForeignKey)
objects on both the invoice_item.invoice_id and invoice_item.ref_num columns,
SQLAlchemy would not be aware that these two values should be paired
together - it would be two individual foreign key constraints instead of a
single composite foreign key referencing two columns.
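Following the documentation quoted above, a composite foreign key might be declared like this (invoice/invoice_item tables as in the docs example; SQLite stands in here just for illustration):

```python
from sqlalchemy import (Table, Column, Integer, String, MetaData,
                        ForeignKeyConstraint, create_engine)

metadata = MetaData()

invoice = Table("invoice", metadata,
    Column("invoice_id", Integer, primary_key=True),
    Column("ref_num", Integer, primary_key=True),   # ONE composite primary key
)

invoice_item = Table("invoice_item", metadata,
    Column("item_id", Integer, primary_key=True),
    Column("item_name", String(60)),
    Column("invoice_id", Integer),
    Column("ref_num", Integer),
    # both local columns paired with both remote columns, in order
    ForeignKeyConstraint(
        ["invoice_id", "ref_num"],
        ["invoice.invoice_id", "invoice.ref_num"],
    ),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)
```

The two column lists must be in corresponding order; that pairing is what makes it a single composite constraint rather than two independent ones.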


On Wednesday, March 6, 2013 9:00:56 PM UTC-5, Randall Degges wrote:

 Hi Mike,

 Sorry about that, I actually had a typo there. I've got a correct code 
 sample available here 
 http://stackoverflow.com/questions/15260842/how-can-i-build-a-foreignkey-to-a-table-which-has-multiple-primary-keys
  (just 
 posted it).

 Thank you,

 -Randall


 On Wed, Mar 6, 2013 at 5:54 PM, Michael Bayer 
 mik...@zzzcomputing.com wrote:

 a database table can only have one primary key (hence primary), but 
 that key can contain more than one column (a composite primary key).

 the model you have here is a little unclear, did you mean for the primary 
 key of Exchange to be exchange, and the primary key of PhoneNumber to be 
 the composite of exchange and phone number? That would be my guess 
 as to what you're looking for.



 On Mar 6, 2013, at 6:05 PM, Randall Degges rde...@gmail.com 
 wrote:

 Hi all,

 I'm having a lot of trouble figuring out how to properly build my 
 ForeignKey column for a table I'm defining. I've outlined my models here: 
 http://pastie.org/6407419# (and put a comment next to the problematic 
 line in my PhoneNumber model).

 Here's what's happening:

 My Exchange table has two primary keys. This is required for my use case.

 The PhoneNumber table I'm trying to define needs a ForeignKey to the 
 Exchange table, but since the Exchange table has two primary keys, I can't 
 figure out how to make the relationship work.

 Any guidance would be appreciated. Thank you.





 -- 
 Randall Degges
 http://rdegges.com/
  





[sqlalchemy] Re: sanitizing sql with sqlalchemy

2013-01-01 Thread Lloyd Kvam
I should probably remove my fingers from the keyboard since I have so 
little experience with mssql, but here goes:

Isn't the . only special in field and table names?
If that's where the user input was used, I think it's the programmer's 
responsibility to validate/sanitize the input.  A plugin simply doesn't 
have the knowledge required to sanitize those parts of an SQL statement.

Please correct me if I'm wrong.
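A sketch of that programmer-side responsibility (a hypothetical whitelist check, not a SQLAlchemy feature): identifier-like user input can be validated before it ever reaches a table or column name, where bound parameters cannot protect you:

```python
import re

# whitelist: accept only plain identifier-like input (no ".", quotes, etc.)
IDENTIFIER_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def safe_identifier(name):
    """Reject user input that cannot be used as a bare column/table name."""
    if not IDENTIFIER_RE.match(name):
        raise ValueError("invalid identifier: %r" % name)
    return name
```

Values, by contrast, should always travel as bound parameters, where the driver escapes them for you.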

On Monday, December 31, 2012 5:24:21 PM UTC-5, alonn wrote:

 I'm using the SQLAlchemy ORM (with TurboGears) to write data from a web 
 application to an MSSQL 2005 DB (used by another application, not 
 maintained by me).
 After dealing with a serious case of data corruption (basically because of 
 user data including the "." character), is there a way to use SQLAlchemy 
 also as a validator/sanitizer for user data?
 I know there is basic SQL escaping (preventing SQL injection) baked into 
 SQLAlchemy, but obviously I need something stronger.
 If SQLAlchemy can't handle it by itself, is there another library (or 
 SQLAlchemy plugin) that can give me this functionality?
 thanks for the help 





[sqlalchemy] Making ResultProxy test as False when there are no rows

2011-10-14 Thread Lloyd Kvam
Would it make sense to add a __nonzero__ method to ResultProxy that
was tied to the rowcount?

def __nonzero__(self):
    return bool(self.rowcount)

This would allow code like

if results:
    process(results)

I was surprised when this did not work.




[sqlalchemy] Re: Making ResultProxy test as False when there are no rows

2011-10-14 Thread Lloyd Kvam


On Oct 14, 11:02 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Oct 14, 2011, at 10:38 AM, Lloyd Kvam wrote:

  Would it make sense to add a __nonzero__ method to ResultProxy that
  was tied to the rowcount?

  def __nonzero__(self):
      return bool(self.rowcount)

  This would allow code like

  if results:
      process(results)

  I was surprised when this did not work.

 this is because .rowcount is not related to the count of rows in a result 
 set.  Some DBAPIs may do this, but this is not the intent of .rowcount and 
 is not supported on most.  .rowcount is only intended to describe the number 
 of rows matched by an UPDATE or DELETE statement.

 The only way to know if there are any rows in a result set is to fetch them. 
 The cursor itself, based on backend, doesn't even know there's a row until 
 it sends a message to the database to get the first row.

 Therefore you have to use fetchone() to check for a row.
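That fetchone() check might look like this (a sketch against an in-memory SQLite database, using the modern text() construct for the raw SQL):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    result = conn.execute(text("SELECT x FROM t"))
    first = result.fetchone()   # None when the result set is empty
    if first is not None:
        pass  # process `first`, then iterate the rest of `result`
```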

I apologize for not reading the documentation.
http://www.sqlalchemy.org/docs/core/connections.html#sqlalchemy.engine.base.ResultProxy.rowcount

Thanks for being so kind in setting me straight.




[sqlalchemy] Re: Using multiple databases for reliability (NOT sharding)

2010-12-17 Thread Lloyd Kvam


On Dec 17, 1:21 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 Doesn't seem like anyone has any thoughts here.   Its certainly not something 
 I've tried, but the general area of study here is multi-master replication: 
  http://en.wikipedia.org/wiki/Multi-master_replication.   The various ways of 
 dealing with whos dirty and whatnot fall under that realm of study.   For 
 this kind of thing I'd try to use existing techniques and tools as much as 
 possible since its not a trivial affair.

 On Dec 15, 2010, at 7:31 AM, Marcin Krol wrote:


  Hello everyone,

  I'm in process of writing a distributed application and I'd like to use
  SQLAlchemy as backend.

  However, it's not performance that is my concern, but reliability.

  Suppose I have n server nodes.

  Writing:

  I would like to be able to write on any node from 1 to n, and have write
  results being sent to DB on every node.

  Reading:

  I would like to randomly select one of the n clean (up-to-date) nodes
  and use it for reading objects (normal SA session handling).

  Rationale: the write stream in my case is not going to be very large
  (storing test results), while read stream is going to be heavy, so I can
  afford this sort of architecture.

  On the face of it, implementing such scenario manually should be simple:
  just wrap around SA session and have the object sent to each backend in
  turn (yes, I know, it's a performance hit but that's not a big problem
  in this project).

  However, suppose one of the nodes gets offline at some moment: it would
  have to be marked as 'dirty' and synchronized somehow with other nodes
  when returned to 'online' status. This gets complex and risky.

  Alternatively, I could go with the low tech version: always assign
  particular client to a particular server node, and back the DB up /
  replicate it elsewhere. But this cuts into availability and makes me
  maintain n backups / replicas.

  - --

  Regards,
  mk

  - --
  Premature optimization is the root of all fun.





[sqlalchemy] Re: Using multiple databases for reliability (NOT sharding)

2010-12-17 Thread Lloyd Kvam
(Sorry, I mis-clicked before.)

I've used MySQL replication for sharing data from multiple databases.
You can organize the servers in a loop and configure them to pass
along the changes replicated by other servers.  Of course if any
server fails, the loop is broken until that server is running again.
This might be good enough for you if all of the servers are local and
failures are rare.

I was able to satisfy my requirements by having an extra server that I
called the collector.  The individual servers send their logfiles to
the collector irregularly, but roughly daily.  The collector passes
the log files through rewriting log headers as necessary so that it
can masquerade as the last server in the loop.  server#1 replicates
from the collector.  server#2 to server#n replicate from #1, but cheat
sending their logs to the collector.  The replication logic for all
the normal servers works as though they were in a looped replication
stream.

This is not for everyone.  Transactions go through out of order.
Inserts create their keys independently of each other.  (I use the
server_ID as part of the primary keys.)  We have controls so that some
data must be processed on a particular server.  However, updates to
uncontrolled data can happen out of sequence since the transfer of log
files to the collector is unregulated.  The replication stream can not
flow if the collector or server#1 are down, so you still have critical
failure points.  Our objective was distributing data and closing the
loop even when some servers were off the network.






[sqlalchemy] Re: Error - class 'sqlalchemy.exc.OperationalError': (OperationalError) (2006, 'MySQL server has gone away')

2010-10-14 Thread Lloyd Kvam
On Oct 14, 5:48 pm, Timmy Chan timmy.cha...@gmail.com wrote:
 sorry if this is addressed, but i'm running

 apache2
 SQLAlchemy 0.5.8
 Pylons 1.0
 Python 2.5.2

 and on a simple page (just retrieve data from DB), I get:

 Error - class 'sqlalchemy.exc.OperationalError': (OperationalError) (2006,
 'MySQL server has gone away')

 every few other requests, not after a long time as other posts I've searched
 for.  I still added

 sqlalchemy.pool_recycle = 1800

 but that did not fix the issue.  After a fresh apache restart, every 5th or
 6th requests gets a 500 from the above error.

The most likely explanation is that you are holding connections and
not returning them to the pool.  After enough idle time has elapsed
the connection was closed (possibly by a firewall) and no longer
working, but you are still trying to use it.  Close your connections
and return them to the pool as soon as the current commands are
completed.

If you enable echo on your database engine (engine.echo = True), you will
get a huge amount of debugging output, including information about
connections.  You should be able to determine which connections are not
getting closed.
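A sketch of the return-to-pool discipline being described (SQLite stands in for MySQL here; note that pool_recycle only helps if connections actually go back to the pool):

```python
from sqlalchemy import create_engine, text

# recycle connections before a server/firewall timeout can kill them
engine = create_engine("sqlite://", pool_recycle=1800)

# the context manager returns the connection to the pool on exit,
# so it never sits idle long enough to be dropped out from under you
with engine.connect() as conn:
    rows = conn.execute(text("SELECT 1")).fetchall()
```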

 thanks




[sqlalchemy] reflecting (autoload=True) MySQL tinyint(1) fields

2010-03-27 Thread Lloyd Kvam
I've just discovered that some tinyint (8-bit) fields have had their
values limited to 0 and 1 regardless of actual value supplied.  Digging
through the documentation, I've learned that when MySQL tables are
reflected, tinyint(1) fields are processed as booleans.

I did not find emails from others howling in pain, so I suppose most
people are either happy with this behavior or unaffected.  I understand
why a bool column definition would be mapped to tinyint(1).  However,
doing the reverse, mapping tinyint(1) to bool, discards bits that MySQL
would not discard.

For me this was a misfeature.  I would think that supplying bools to an
integer field would work OK.  In Python, 2 + True == 3.  So people using
booleans should not have too much difficulty, would they?  Is there any
chance you'd consider autoloading tinyint(1) as an integer field?
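One workaround is to supply an explicit Column when reflecting; the explicit column takes precedence over the type reflection would choose. A sketch (SQLite stands in for MySQL, since the point is the override mechanism; autoload_with is the modern spelling of the autoload=True used above):

```python
from sqlalchemy import Table, Column, Integer, MetaData, create_engine, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE flags (id INTEGER PRIMARY KEY, f TINYINT)"))

metadata = MetaData()
# the explicit Column overrides whatever type reflection would pick,
# so "f" stays a plain integer instead of being coerced to a boolean
flags = Table("flags", metadata,
              Column("f", Integer),
              autoload_with=engine)
```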




[sqlalchemy] Re: insert() on compound key-how to auto number second key

2007-09-08 Thread Lloyd Kvam

On Sat, 2007-09-08 at 10:40 +0200, Marco Mariani wrote:
 Your post begs for a question, which is: why?  :-)
 
 You are fighting the system. Having an autonumber column as part of a 
 compound key does not make a lot of sense.
 I don't think you have a really, really, really strange use case that 
 justifies it.

I've been doing this kind of compound primary key when needing to
combine satellite databases into a central database.  So the first
column is the database id (actually MySQL server_id), and the second
column is the autonumber column.  When combined with MySQL replication,
you have a fairly straight-forward way to have parallel independent
remote databases (think retail stores) and a central database.
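That layout can be sketched as follows (a hypothetical orders table; the server id plus a per-server counter form ONE composite primary key, so rows from different stores never collide when merged centrally):

```python
from sqlalchemy import Table, Column, Integer, MetaData, create_engine

metadata = MetaData()

# hypothetical satellite-store table for illustration
orders = Table("orders", metadata,
    Column("server_id", Integer, primary_key=True),  # which database wrote it
    Column("id", Integer, primary_key=True),         # per-server counter
    Column("amount", Integer),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)
```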

-- 
Lloyd Kvam
Venix Corp

