On 12 Feb 2010, at 19:36, Michael Bayer wrote:
Ed Singleton wrote:
class InsertFromSelect(ClauseElement):
    def __init__(self, table, select):
        self.table = table
        self.select = select

@compiles(InsertFromSelect)
def visit_insert_from_select(element, compiler, **kw):
    return
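For reference, a complete version of that recipe (it follows the pattern in SQLAlchemy's compiler-extension documentation; the table names below are made up for illustration) looks roughly like this:

```python
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ClauseElement, Executable

class InsertFromSelect(Executable, ClauseElement):
    inherit_cache = False  # custom element: opt out of statement caching

    def __init__(self, table, select):
        self.table = table
        self.select = select

@compiles(InsertFromSelect)
def visit_insert_from_select(element, compiler, **kw):
    # delegate rendering of the table and the SELECT back to the compiler
    return "INSERT INTO %s (%s)" % (
        compiler.process(element.table, asfrom=True, **kw),
        compiler.process(element.select, **kw),
    )

metadata = MetaData()
t1 = Table("t1", metadata, Column("a", Integer), Column("b", Integer))
t2 = Table("t2", metadata, Column("a", Integer), Column("b", Integer))
stmt = InsertFromSelect(t1, select(t2.c.a, t2.c.b))
print(stmt)  # renders roughly: INSERT INTO t1 (SELECT t2.a, t2.b FROM t2)
```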
Michael Bayer wrote:
> If you prefer, you can reflect your database once and store the resulting
> MetaData (or individual Table objects) into a pickled datafile. Your
> application can then read the datafile upon startup to configure its
> previously loaded table metadata.
>
> The serializer ext
If you prefer, you can reflect your database once and store the resulting
MetaData (or individual Table objects) into a pickled datafile. Your
application can then read the datafile upon startup to configure its
previously loaded table metadata.
The serializer extension makes this possible, requ
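A note on that: plain Table and MetaData objects pickle directly (the serializer extension is what you need for query expressions that reference them). A minimal sketch of the reflect-once-then-pickle idea, using an in-memory SQLite database:

```python
import pickle
from sqlalchemy import Column, Integer, MetaData, Table, create_engine

engine = create_engine("sqlite://")
source = MetaData()
Table("users", source, Column("id", Integer, primary_key=True))
source.create_all(engine)

# Reflect the schema once and persist it with plain pickle
reflected = MetaData()
reflected.reflect(bind=engine)
blob = pickle.dumps(reflected)

# Later (e.g. at application startup), restore without hitting the database
restored = pickle.loads(blob)
print(sorted(restored.tables))  # ['users']
```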
Ed Singleton wrote:
> class InsertFromSelect(ClauseElement):
>     def __init__(self, table, select):
>         self.table = table
>         self.select = select
>
> @compiles(InsertFromSelect)
> def visit_insert_from_select(element, compiler, **kw):
>     return "INSERT INTO %s (%s) %s" % (
>
On 12 Feb 2010, at 17:43, Michael Bayer wrote:
Ed Singleton wrote:
In the case of:
sa.insert(mytable).values(myothertable.select().filter_by(foo=sa.bindparam("bar")))
This doesn't currently work because... [snip]
if you're using the @compiler extension to generate this, the same
compiler
On 12 Feb 2010, at 17:43, Michael Bayer wrote:
Ed Singleton wrote:
To partially clarify and answer my own question here (I was very tired
by the time I pasted this last night)
In the case of:
sa.insert(mytable).values(myothertable.select().filter_by(foo=sa.bindparam("bar")))
This doesn't
Ed Singleton wrote:
> To partially clarify and answer my own question here (I was very tired
> by the time I pasted this last night)
>
> In the case of:
>
> sa.insert(mytable).values(myothertable.select().filter_by(foo=sa.bindparam("bar")))
>
> This doesn't currently work because the bindpa
Kent wrote:
> If I have a one to many RelationProperty, which uses a list, how can I
> check if this has been set without actually referencing it?
>
> It seems once I reference it, it sets it to an empty list if it hasn't
> been set already.
look for it in obj.__dict__.
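Michael's suggestion (check the instance's `__dict__`) can be sketched with a throwaway mapped class; the Parent/Child names and schema here are invented for illustration, against an in-memory SQLite database:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    children = relationship("Child")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as s:
    s.add(Parent(id=1))
    s.commit()

with Session(engine) as s:
    p = s.get(Parent, 1)
    loaded_before = "children" in p.__dict__  # False: not loaded yet
    _ = p.children                            # touching it triggers the lazy load
    loaded_after = "children" in p.__dict__   # True: now set (to an empty list)
print(loaded_before, loaded_after)  # False True
```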
To partially clarify and answer my own question here (I was very tired
by the time I pasted this last night)
In the case of:
sa.insert(mytable).values(myothertable.select().filter_by(foo=sa.bindparam("bar")))
This doesn't currently work because the bindparam required for the
select st
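For what it's worth, later SQLAlchemy releases added a built-in Insert.from_select() that covers this case, bind parameters included. A sketch with made-up tables:

```python
from sqlalchemy import Column, Integer, MetaData, Table, bindparam, select

metadata = MetaData()
src = Table("src", metadata, Column("foo", Integer), Column("val", Integer))
dst = Table("dst", metadata, Column("val", Integer))

# INSERT INTO dst (val) SELECT ... with a named bind parameter in the SELECT
stmt = dst.insert().from_select(
    ["val"],
    select(src.c.val).where(src.c.foo == bindparam("bar")),
)
print(stmt)  # renders roughly: INSERT INTO dst (val) SELECT src.val FROM src WHERE src.foo = :bar
```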
If I have a one to many RelationProperty, which uses a list, how can I
check if this has been set without actually referencing it?
It seems once I reference it, it sets it to an empty list if it hasn't
been set already.
Thanks!
"Michael Bayer", via sqlalchemy@googlegroups.com, 02/12/2010 10:48 AM
Subject: Re: [sqlalchemy] Oracle in clause limit
grach wrote:
> Hello all,
>
> I'm relatively new to SQLAlchemy - is there an
On Fri, 12 Feb 2010 11:01:23 -0500, Michael Bayer wrote:
> you would connect:
>
> conn = engine.connect()
>
> check the PID:
>
> pid = conn.execute("SELECT pg_backend_pid()").scalar()
>
>
> then continue as needed:
>
> conn.execute(text(...))
Thanks, Michael. That's very clear and helpful.
Faheem Mitha wrote:
> PostgreSQL forks a new backend process when a connection is
> established, however. It sounds like that's what you want. Do
> "SELECT pg_backend_pid()" to get the PID of the backend process
> serving your connection.
>
> That and other stat functions are documented her
Bob Farrell wrote:
> I tried this against Oracle and it works without a hitch, so it looks
> like it's a problem with sqlite - we've got some ideas on how to fix
> it so we'll carry on looking. So this thread can be ignored now, as
> it's not a sqlalchemy issue (unless sqlalchemy planned to special
grach wrote:
> Hello all,
>
> I'm relatively new to SQLAlchemy - is there any elegant workaround
> that SQLAlchemy provides for queries with an IN clause that contains
> more than 1000 items?
>
> I have, say, "date, item, value" table that I'd like to query for
> arbitrary set of dates and items (date
On Thu, 11 Feb 2010 13:06:03 -0500, Michael Bayer wrote:
> Faheem Mitha wrote:
>>
>> Hi,
>>
>> sqlalchemy forks a process when it calls the db (in my case PostgreSQL,
>> but I don't think it matters) using, for example
>>
>> from sqlalchemy.sql import text
>> s = text(...)
>
> um, what ? ther
On Fri, 12 Feb 2010 13:33:01 +0100, Alex Brasetvik wrote:
>
> On Feb 11, 2010, at 18:58 , Faheem Mitha wrote:
>
>> sqlalchemy forks a process when it calls the db
>
> No, it does not.
> PostgreSQL forks a new backend process when a connection is
> established, however. It sounds like that's what
Adam Hayward wrote:
> Hello there.
>
> (first post to group)
>
> I've been having a problem with an incorrect rowcount for ResultProxies
> using Sqlite databases. Regardless of how many rows in the resultset, it
> gives me a rowcount of "-1". Best demonstrated with an example:
>
> Is this a bug? Am
Hello there.
(first post to group)
I've been having a problem with an incorrect rowcount for ResultProxies
using Sqlite databases. Regardless of how many rows in the resultset, it
gives me a rowcount of "-1". Best demonstrated with an example:
from sqlalchemy import create_engine, __version__
fr
Hello,
I have discovered that this is a limitation of pysqlite. From a comment in
one of the test cases:
"pysqlite does not know the rowcount of SELECT statements, because we
don't fetch all rows after executing the select statement. The rowcount
has thus to be -1."
http://code.google.com/p/pysqli
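The limitation is easy to reproduce with the stdlib sqlite3 module (the successor to standalone pysqlite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (a, b, c)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)", [(1, 2, 3), (4, 5, 6)])

cur = conn.execute("SELECT * FROM test")
print(cur.rowcount)  # -1: the driver never counts SELECT results
rows = cur.fetchall()
print(len(rows))     # 2: count the fetched rows instead
```

In other words, fetch the rows and take `len()` rather than relying on `rowcount` for SELECTs on SQLite.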
Hello all,
I'm relatively new to SQLAlchemy - is there any elegant workaround
that SQLAlchemy provides for queries with an IN clause that contains
more than 1000 items?
I have, say, "date, item, value" table that I'd like to query for
arbitrary set of dates and items (date and item list is provided
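One common workaround for Oracle's 1000-item limit is to split the value list into chunks and OR the resulting IN clauses together. A sketch (`chunked_in` is a made-up helper, not a SQLAlchemy API):

```python
from sqlalchemy import Column, Integer, MetaData, Table, or_

def chunked_in(column, values, size=1000):
    """Split one big IN list into OR'd IN clauses of at most `size` items."""
    return or_(*(
        column.in_(values[i:i + size]) for i in range(0, len(values), size)
    ))

metadata = MetaData()
t = Table("data", metadata, Column("item", Integer))
clause = chunked_in(t.c.item, list(range(2500)))
# compiles to: data.item IN (...) OR data.item IN (...) OR data.item IN (...)
```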
I tried this against Oracle and it works without a hitch, so it looks
like it's a problem with sqlite - we've got some ideas on how to fix
it so we'll carry on looking. So this thread can be ignored now, as
it's not a sqlalchemy issue (unless sqlalchemy planned to special case
"commit" as text for
Sorry, I forgot to mention that if I run my "select * from test;"
*after* I get the error for test3, it shows that insert did in fact
get committed to the database:
$ sqlite3 test.sqlite "select * from test"
7|8|9
So it seems that the "commit" is getting sent
On Feb 11, 2010, at 18:58 , Faheem Mitha wrote:
> sqlalchemy forks a process when it calls the db
No, it does not.
> The reason for this is that I want to plot a memory graph of the postgresql
> process, so it is handy to have the pid for this.
PostgreSQL forks a new backend process when a c
Hi,
I desperately hope that someone can help with this!! I am connecting
to a remote machine (the connection works fine) but am having problems
understanding how to use/instantiate reflected tables.
My class would read as follows:
Base = declarative_base()
class ConnectLog(Base):
__tab
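One way to use a reflected table with declarative is to hand the reflected Table object to the class as `__table__` instead of declaring columns by hand. A sketch (the `connect_log` schema here is invented for illustration, against in-memory SQLite rather than a remote server):

```python
from sqlalchemy import Table, create_engine
from sqlalchemy.orm import declarative_base

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.exec_driver_sql(
        "CREATE TABLE connect_log (id INTEGER PRIMARY KEY, host TEXT)"
    )

Base = declarative_base()

class ConnectLog(Base):
    # reflect the existing table instead of listing its columns
    __table__ = Table("connect_log", Base.metadata, autoload_with=engine)

print(ConnectLog.__table__.columns.keys())  # ['id', 'host']
```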
Hi there,
I'm having a bit of trouble with session.execute() and session.commit()
Here's a simple example that demonstrates what I'm trying to do:
import os
import sys
import sqlalchemy.orm as orm
import sqlalchemy as sa
Session = orm.sessionmaker()
session = None
def setup():
global sess
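A minimal working version of that session.execute()/session.commit() pattern, assuming an in-memory SQLite database and raw SQL wrapped in text() as current SQLAlchemy requires:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")
Session = sessionmaker(bind=engine)

session = Session()
session.execute(text("CREATE TABLE test (a, b, c)"))
session.execute(text("INSERT INTO test VALUES (7, 8, 9)"))
session.commit()

rows = session.execute(text("SELECT * FROM test")).fetchall()
print(rows)  # [(7, 8, 9)]
```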