Re: [sqlalchemy] pg8000 driver and JSON test

2018-06-07 Thread Tony Locke
Thanks for your swift reply Mike. I should have said that I'm changing
the way that pg8000 works so that there's a pg8000.PGJson wrapper for
JSON values. The reason for doing so is to allow pg8000 to send the
correct type code to Postgres. Previously with pg8000, JSON was
represented as a string, and it was sent with the 'unknown' type to
allow Postgres to guess the type, which can cause problems in edge
cases. Anyway, in the SQLAlchemy dialect for pg8000 I now have the
following bind processor that returns a pg8000.PGJson object:

class _PGJSON(JSON):
    def bind_processor(self, dialect):
        pg_json = dialect.dbapi.PGJson

        def process(value):
            if value is self.NULL:
                value = None
            elif isinstance(value, Null) or (
                    value is None and self.none_as_null):
                return None
            return pg_json(value)

        return process

The problem is that now type_coerce produces a pg8000.PGJson object
rather than a serialized JSON string, causing the
test_crit_against_string_coerce_type test to fail. I wonder if there's
a different approach I should be taking?
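For reference, here's a minimal sketch of what such a wrapper could look like. The class name PGJson comes from the message above, but the attribute and the encode() method are invented for illustration; the real pg8000 class may well differ:

```python
import json

class PGJson:
    """Illustrative stand-in for pg8000.PGJson; the real class may differ."""

    def __init__(self, value):
        self.value = value

    def encode(self):
        # the driver would serialize the wrapped value when binding,
        # sending it with the json type OID rather than 'unknown'
        return json.dumps(self.value)

print(PGJson({"a": 1}).encode())
```

The point of the wrapper is only to carry the type information alongside the value, so the driver no longer has to send JSON parameters as the 'unknown' type.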

Thanks,

Tony.

On 6 June 2018 at 14:56, Mike Bayer  wrote:
> On Wed, Jun 6, 2018 at 9:04 AM, Tony Locke  wrote:
>> Hi, I'm trying to get the latest pg8000 driver to pass the SQLAlchemy
>> dialect tests. I'm stuck on the following test in test_types.py:
>>
>> def test_crit_against_string_coerce_type(self):
>>     name = self.tables.data_table.c.name
>>     col = self.tables.data_table.c['data']
>>
>>     self._test_index_criteria(
>>         and_(name == 'r6',
>>              cast(col["b"], String) == type_coerce("some value", JSON)),
>>         "r6",
>>         test_literal=False
>>     )
>>
>> This executes the following SQL:
>>
>> SELECT data_table.name
>> FROM data_table
>> WHERE data_table.name = %s AND CAST((data_table.data -> %s) AS VARCHAR) = %s
>> ('r6', 'b', 'some value')
>>
>> the problem is that the:
>>
>> CAST((data_table.data -> %s) AS VARCHAR)
>>
>> gives '"some value"', which of course doesn't equal 'some value', and so the
>> test fails. I'm not sure what I need to fix in the driver to make it work,
>> so any help is greatly appreciated.
>
> type_coerce("some value", JSON) means the value will be run through
> the JSON datatype's bind processor first thus converting it to '"some
> value"'.
>
> using pg8000 1.11.0 the test seems to pass:
>
> $ py.test test/dialect/test_suite.py  -k
> test_crit_against_string_coerce_type -s --log-debug=sqlalchemy.engine
> --dburi postgresql+pg8000://scott:tiger@localhost/test
>
> here's the relevant output you can see '"some value"':
>
> INFO:sqlalchemy.engine.base.Engine:
> CREATE TABLE data_table (
> id SERIAL NOT NULL,
> name VARCHAR(30) NOT NULL,
> data JSON,
> nulldata JSON,
> PRIMARY KEY (id)
> )
>
>
> INFO:sqlalchemy.engine.base.Engine:()
> INFO:sqlalchemy.engine.base.Engine:COMMIT
> INFO:sqlalchemy.engine.base.Engine:INSERT INTO data_table (name, data)
> VALUES (%s, %s)
> INFO:sqlalchemy.engine.base.Engine:(('r1', '{"key1": "value1", "key2":
> "value2"}'), ('r2', '{"Key \'One\'": "value1", "key two": "value2",
> "key three": "value \' three \'"}'), ('r3', '{"key1": [1, 2, 3],
> "key2": ["one", "two", "three"], "key3": [{"four": "five"}, {"six":
> "seven"}]}'), ('r4', '["one", "two", "three"]'), ('r5', '{"nested":
> {"elem1": [{"a": "b", "c": "d"}, {"e": "f", "g": "h"}], "elem2":
> {"elem3": {"elem4": "elem5"'), ('r6', '{"a": 5, "b": "some value",
> "c": {"foo": "bar"}}'))
> INFO:sqlalchemy.engine.base.Engine:COMMIT
> INFO:sqlalchemy.engine.base.Engine:SELECT data_table.name
> FROM data_table
> WHERE data_table.name = %s AND CAST((data_table.data -> %s) AS VARCHAR) = %s
> INFO:sqlalchemy.engine.base.Engine:('r6', 'b', '"some value"')
> DEBUG:sqlalchemy.engine.base.Engine:Col (b'name',)
> DEBUG:sqlalchemy.engine.base.Engine:Row ('r6',)
> PASSED
>
>
>
>>
>> Thanks,
>>
>> Tony.
>>

[sqlalchemy] pg8000 driver and JSON test

2018-06-06 Thread Tony Locke
Hi, I'm trying to get the latest pg8000 driver to pass the SQLAlchemy 
dialect tests. I'm stuck on the following test in test_types.py:

def test_crit_against_string_coerce_type(self):
    name = self.tables.data_table.c.name
    col = self.tables.data_table.c['data']

    self._test_index_criteria(
        and_(name == 'r6',
             cast(col["b"], String) == type_coerce("some value", JSON)),
        "r6",
        test_literal=False
    )

This executes the following SQL:

SELECT data_table.name 
FROM data_table 
WHERE data_table.name = %s AND CAST((data_table.data -> %s) AS VARCHAR) = %s
('r6', 'b', 'some value')

the problem is that the:

CAST((data_table.data -> %s) AS VARCHAR)

gives '"some value"', which of course doesn't equal 'some value', and so 
the test fails. I'm not sure what I need to fix in the driver to make it 
work, so any help is greatly appreciated.
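The quotes come from JSON serialization: Postgres's `->` operator returns the element as JSON, so casting it to VARCHAR keeps the double quotes, whereas `->>` returns the bare text. A stdlib sketch of the difference, using the 'r6' row's data:

```python
import json

row = {"a": 5, "b": "some value", "c": {"foo": "bar"}}

# data -> 'b' returns the element as JSON, so CAST(... AS VARCHAR)
# yields the JSON text, quotes included:
as_json_text = json.dumps(row["b"])

# data ->> 'b' returns the element as plain text, no quotes:
as_text = row["b"]

print(as_json_text)
print(as_text)
```

So for the test to pass, the comparison value must be serialized the same way the `->` side is, which is what type_coerce(..., JSON) arranges.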

Thanks,

Tony.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Updating pg8000 dialect following new release of pg8000

2017-09-05 Thread Tony Locke


On Friday, 1 September 2017 17:40:23 UTC+1, Mike Bayer wrote:
>
> On Fri, Sep 1, 2017 at 12:09 PM, Tony Locke <tlo...@tlocke.org.uk 
> > wrote: 
> > No I don't think it's a bug, because pg8000 is designed to always create a
> > prepared statement if one doesn't already exist, and so that entails
> > keeping some data for each prepared statement, steadily increasing memory
> > use. There's a pg8000.max_prepared_statements parameter (set to 1000 by
> > default) which, if exceeded, triggers the closing of all prepared
> > statements. So memory use never goes off to infinity.
>
> oh. So if you set that to like 100, the test here would pass for
> you, right?  The memory use tests try for quite a while to see if
> object use tilts down again, since limited-size caches are common.
>
> Is this value available on the connect() function or through an 
> environment variable? 
>

There's a pg8000.max_prepared_statements module attribute and if that's 100 
it passes test_many_updates. Actually, I just stuck max_prepared_statements 
as a module attribute without giving it much thought, and I should probably 
put it as a keyword argument to the connect() function. If I were to do 
that, are you thinking that then I could specify that parameter in the 
connection string when the test suite is run?
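If max_prepared_statements became a connect() keyword, carrying it in the test suite's database URI would be straightforward. A hedged stdlib sketch of pulling such a parameter out of a URI query string (the keyword is the one proposed above, not an existing pg8000 connect() argument):

```python
from urllib.parse import parse_qs, urlsplit

# hypothetical URI carrying the proposed keyword argument
uri = "postgresql+pg8000://scott:tiger@localhost/test?max_prepared_statements=100"

query = parse_qs(urlsplit(uri).query)
connect_kwargs = {
    "max_prepared_statements": int(query["max_prepared_statements"][0])
}
print(connect_kwargs)
```

This is roughly what SQLAlchemy dialects do when translating URL query parameters into DBAPI connect() arguments.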

 

>
>
>
> > 
> > On 1 Sep 2017 4:54 p.m., "Mike Bayer" <mik...@zzzcomputing.com 
> > wrote: 
> >> 
> >> On Fri, Sep 1, 2017 at 10:57 AM, Tony Locke <tlo...@tlocke.org.uk> wrote:
> >> > So, next problem. The test
> >> > test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::test_many_updates
> >> > fails because pg8000 creates a new prepared statement for each unique
> >> > SQL statement, which entails the creation of an object within the
> >> > driver, and so the memory usage keeps increasing. I've marked this
> >> > test with @testing.fails_on('postgresql+pg8000', 'prepared statements
> >> > use memory'), and now all the test/aaa_profiling/test_memusage.py
> >> > tests pass.
> >> 
> >> do you consider that to be a bug in pg8000? 
> >> 
> >> 
> >> 
> >> 
> >> 
> >> > 
> >> > Now, on with the other tests! 
> >> > 
> >> > On Thursday, 31 August 2017 15:15:16 UTC+1, Mike Bayer wrote: 
> >> >> 
> >> >> On Thu, Aug 31, 2017 at 7:02 AM, Tony Locke <tlo...@tlocke.org.uk> wrote:
> >> >> > There's a new release (1.11.0) of the pg8000 driver for PostgreSQL.
> >> >> > It's a pure-Python driver, and it already has a dialect for
> >> >> > SQLAlchemy. This latest release is not backward compatible with the
> >> >> > previous release, and I'm trying to modify the dialect accordingly.
> >> >> > The main change is that connections and cursors are no longer
> >> >> > threadsafe. In DB-API terms it has a threadsafety value of 1
> >> >> > (threads may share the module, but not connections).
> >> >> >
> >> >> > So the first problem I ran into was in the on_connect() method in
> >> >> > the dialect. It referred to the 'unicode' keyword, which caused a
> >> >> > problem under Python 3. So I deleted the following:
> >> >> >
> >> >> >     def on_connect(conn):
> >> >> >         conn.py_types[quoted_name] = conn.py_types[unicode]
> >> >> >     fns.append(on_connect)
> >> >> >
> >> >> > 
> >> >>
> >> >> that was a recent fix for a regression:
> >> >>
> >> >> https://github.com/zzzeek/sqlalchemy/commit/03560c4b83308719067ec635662c35f9a437fb7f
> >> >>
> >> >> it is fixed in
> >> >>
> >> >> https://github.com/zzzeek/sqlalchemy/commit/d0470e296ea589620c94d8f2dd37e94b8f03842a
> >> >>
> >> >> > just to get the tests going. The next problem is:

Re: [sqlalchemy] Updating pg8000 dialect following new release of pg8000

2017-09-01 Thread Tony Locke
No I don't think it's a bug, because pg8000 is designed to always create a
prepared statement if one doesn't already exist, and so that entails
keeping some data for each prepared statement, steadily increasing memory
use. There's a pg8000.max_prepared_statements parameter (set to 1000 by
default) which, if exceeded, triggers the closing of all prepared
statements. So memory use never goes off to infinity.
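The strategy described can be sketched in a few lines. This is a toy stand-in (plain dict, invented names), not pg8000's actual code, but it shows why memory use is bounded rather than monotonically increasing:

```python
# Cache one entry per unique SQL string, and clear the whole cache once
# it exceeds max_prepared_statements, so memory use stays bounded.
max_prepared_statements = 3  # pg8000's default is 1000

statements = {}

def prepare(sql):
    if sql not in statements:
        if len(statements) >= max_prepared_statements:
            statements.clear()  # stands in for closing all prepared statements
        statements[sql] = object()  # stands in for a server-side statement
    return statements[sql]

# ten unique statements never push the cache past the limit
for i in range(10):
    prepare("SELECT %d" % i)

print(len(statements))
```

The trade-off versus an LRU cache is that a clear-all drops hot statements too, but it keeps the bookkeeping trivial.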

On 1 Sep 2017 4:54 p.m., "Mike Bayer" <mike...@zzzcomputing.com> wrote:

> On Fri, Sep 1, 2017 at 10:57 AM, Tony Locke <tlo...@tlocke.org.uk> wrote:
> > So, next problem. The test
> > test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::test_many_updates
> > fails because pg8000 creates a new prepared statement for each unique
> > SQL statement, which entails the creation of an object within the
> > driver, and so the memory usage keeps increasing. I've marked this test
> > with @testing.fails_on('postgresql+pg8000', 'prepared statements use
> > memory'), and now all the test/aaa_profiling/test_memusage.py tests pass.
>
> do you consider that to be a bug in pg8000?
>
>
>
>
>
> >
> > Now, on with the other tests!
> >
> > On Thursday, 31 August 2017 15:15:16 UTC+1, Mike Bayer wrote:
> >>
> >> On Thu, Aug 31, 2017 at 7:02 AM, Tony Locke <tlo...@tlocke.org.uk> wrote:
> >> > There's a new release (1.11.0) of the pg8000 driver for PostgreSQL.
> >> > It's a pure-Python driver, and it already has a dialect for
> >> > SQLAlchemy. This latest release is not backward compatible with the
> >> > previous release, and I'm trying to modify the dialect accordingly.
> >> > The main change is that connections and cursors are no longer
> >> > threadsafe. In DB-API terms it has a threadsafety value of 1
> >> > (threads may share the module, but not connections).
> >> >
> >> > So the first problem I ran into was in the on_connect() method in the
> >> > dialect. It referred to the 'unicode' keyword, which caused a problem
> >> > under Python 3. So I deleted the following:
> >> >
> >> >     def on_connect(conn):
> >> >         conn.py_types[quoted_name] = conn.py_types[unicode]
> >> >     fns.append(on_connect)
> >> >
> >>
> >> that was a recent fix for a regression:
> >>
> >> https://github.com/zzzeek/sqlalchemy/commit/03560c4b83308719067ec635662c35f9a437fb7f
> >>
> >> it is fixed in
> >>
> >> https://github.com/zzzeek/sqlalchemy/commit/d0470e296ea589620c94d8f2dd37e94b8f03842a
> >>
> >>
> >> > just to get the tests going. The next problem is:
> >> >
> >> > test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::()::test_alias_pathing
> >> >
> >> > which I think fails due to a threading problem. It seems that somehow
> >> > the DB-API connection is shared between threads, which isn't supported
> >> > any more with pg8000. So my question is, is it the case that:
> >> >
> >> > a. DB-API connections must be threadsafe to work with SQLAlchemy.
> >>
> >> not at all, SQLAlchemy holds that its own Session and Connection which
> >> ultimately refer to the DBAPI connection are themselves not
> >> threadsafe.
> >>
> >> > b. There's something wrong with the test.
> >>
> >> the memusage tests are often omitted from normal testing as they are
> >> extremely process/memory intensive, but they don't spawn any threads.
> >>   In the past few weeks the memusage suite has been altered such that
> >> each memusage test is run in a separate *process* however, so there is
> >> some concurrency going on.   When the new process is created, the test
> >> makes a new connection pool so that it should not refer to any
> >> database connections that were transferred to the child fork, however
> >> it also doesn't try to close them or anything else - they should be
> >> totally ignored.   However if pg8000 is tracking some kind of global
> >> state, like a collection of prepared statements, this state needs to
> >> travel across the process boundary as well without impacting the
> >> parent process even if the child process ends.
> >>
> >> The failure can be i

Re: [sqlalchemy] Updating pg8000 dialect following new release of pg8000

2017-09-01 Thread Tony Locke
Thanks Mike, fetching the latest commits from SQLAlchemy solved the 
'unicode' problem.

Looking at the multiprocess problem, the pg8000 driver holds global state 
for remembering the prepared statements it's created. This means that it 
fails if multiple processes access it, even if the access is in series and 
not concurrent. I've done a (temporary?) fix by keying the state on the 
current process id. So now 
test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::()::test_alias_pathing 
passes. Yay!
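The fix described can be sketched with the stdlib alone. This is an illustrative stand-in (invented names), not pg8000's actual code:

```python
import os
from collections import defaultdict

# Key the prepared-statement cache on the current process id, so a
# forked child starts with its own empty cache rather than reusing the
# parent's server-side statement names.
caches = defaultdict(dict)

def prepared_statements():
    return caches[os.getpid()]

prepared_statements()["SELECT 1"] = "stmt_1"  # stand-in for a statement handle
print(sorted(prepared_statements()))
```

After a fork, os.getpid() changes, so the child's first lookup creates a fresh empty dict instead of touching the parent's entries.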

So, next problem. The test 
test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::test_many_updates 
fails because pg8000 creates a new prepared statement for each unique SQL 
statement, which entails the creation of an object within the driver, and so 
the memory usage keeps increasing. I've marked this test with 
@testing.fails_on('postgresql+pg8000', 'prepared statements use memory'), 
and now all the test/aaa_profiling/test_memusage.py tests pass.

Now, on with the other tests!

On Thursday, 31 August 2017 15:15:16 UTC+1, Mike Bayer wrote:
>
> On Thu, Aug 31, 2017 at 7:02 AM, Tony Locke <tlo...@tlocke.org.uk> wrote:
> > There's a new release (1.11.0) of the pg8000 driver for PostgreSQL. It's
> > a pure-Python driver, and it already has a dialect for SQLAlchemy. This
> > latest release is not backward compatible with the previous release, and
> > I'm trying to modify the dialect accordingly. The main change is that
> > connections and cursors are no longer threadsafe. In DB-API terms it has
> > a threadsafety value of 1 (threads may share the module, but not
> > connections).
> >
> > So the first problem I ran into was in the on_connect() method in the
> > dialect. It referred to the 'unicode' keyword, which caused a problem
> > under Python 3. So I deleted the following:
> >
> >     def on_connect(conn):
> >         conn.py_types[quoted_name] = conn.py_types[unicode]
> >     fns.append(on_connect)
> >
>
> that was a recent fix for a regression:
>
> https://github.com/zzzeek/sqlalchemy/commit/03560c4b83308719067ec635662c35f9a437fb7f
>
> it is fixed in
>
> https://github.com/zzzeek/sqlalchemy/commit/d0470e296ea589620c94d8f2dd37e94b8f03842a
>
>
> > just to get the tests going. The next problem is:
> >
> > test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::()::test_alias_pathing
> >
> > which I think fails due to a threading problem. It seems that somehow the
> > DB-API connection is shared between threads, which isn't supported any
> > more with pg8000. So my question is, is it the case that:
> > 
> > a. DB-API connections must be threadsafe to work with SQLAlchemy. 
>
> not at all, SQLAlchemy holds that its own Session and Connection which 
> ultimately refer to the DBAPI connection are themselves not 
> threadsafe. 
>
> > b. There's something wrong with the test.
>
> the memusage tests are often omitted from normal testing as they are 
> extremely process/memory intensive, but they don't spawn any threads. 
>   In the past few weeks the memusage suite has been altered such that 
> each memusage test is run in a separate *process* however, so there is 
> some concurrency going on.   When the new process is created, the test 
> makes a new connection pool so that it should not refer to any 
> database connections that were transferred to the child fork, however 
> it also doesn't try to close them or anything else - they should be 
> totally ignored.   However if pg8000 is tracking some kind of global 
> state, like a collection of prepared statements, this state needs to 
> travel across the process boundary as well without impacting the 
> parent process even if the child process ends. 
>
> The failure can be isolated by doing a pdb like this: 
>
> diff --git a/test/aaa_profiling/test_memusage.py 
> b/test/aaa_profiling/test_memusage.py 
> index 3181cfe61..ff600b85d 100644 
> --- a/test/aaa_profiling/test_memusage.py 
> +++ b/test/aaa_profiling/test_memusage.py 
> @@ -636,6 +636,8 @@ class MemUsageWBackendTest(EnsureZeroed): 
>  try: 
>  go() 
>  finally: 
> +import pdb 
> +pdb.set_trace() 
>  metadata.drop_all() 
>  clear_mappers() 
>
> This brings me right to a clean state where "next" will produce the 
> error.   Looking at Postgresql processes within the block, there are 
> no open transactions to the DB.  If you pdb right here, you can poke 
> around to see what state might be pres

[sqlalchemy] Updating pg8000 dialect following new release of pg8000

2017-08-31 Thread Tony Locke
There's a new release (1.11.0) of the pg8000 driver for PostgreSQL. It's a 
pure-Python driver, and it already has a dialect for SQLAlchemy. This 
latest release is not backward compatible with the previous release, and 
I'm trying to modify the dialect accordingly. The main change is that 
connections and cursors are no longer threadsafe. In DB-API terms it has a 
threadsafety value of 1 (threads may share the module, but not connections).

So the first problem I ran into was in the on_connect() method in the 
dialect. It referred to the 'unicode' keyword, which caused a problem 
under Python 3. So I deleted the following:

    def on_connect(conn):
        conn.py_types[quoted_name] = conn.py_types[unicode]
    fns.append(on_connect)

just to get the tests going. The next problem is:

test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::()::test_alias_pathing

which I think fails due to a threading problem. It seems that somehow the 
DB-API connection is shared between threads, which isn't supported any more 
with pg8000. So my question is, is it the case that:

a. DB-API connections must be threadsafe to work with SQLAlchemy.
b. There's something wrong with the test.
c. Something else.

Thanks for your help!



Re: [sqlalchemy] Test test/engine/test_pool.py::QueuePoolTest::()::test_queuepool_close fails

2015-04-15 Thread Tony Locke
Great, thanks Mike, all tests pass for pg8000 without a hitch now.

On Wednesday, 8 April 2015 05:12:51 UTC+1, Michael Bayer wrote:

  

 On 4/7/15 4:33 PM, Mike Bayer wrote:
  
 it's a test that very seldom fails in some cases no matter what, as it's 
 checking reference counts on objects that are expected to be garbage 
 collected without any cycles.  A failure of this test is not strictly a 
 bug, it just refers to connections or cursors that are not collected 
 immediately.   if it is persistent and only occurs with pg8000 then there 
 might be some cursor cleanup issue going on.


 44a9820b4e02f65b3884fa2c016efc has a fix which adds explicit cleanup to a 
 few connection objects that are checked out and not closed, which led to 
 the pool._refs collection not being empty when that particular test starts. 
 This is backported to 0.9 as well.


   


  

 On 4/7/15 3:22 PM, Tony Locke wrote:
  
 Hi, I found the following SQLAlchemy test fails:

 FAIL test/engine/test_pool.py::QueuePoolTest::()::test_queuepool_close

 with the stack trace:

 __________ QueuePoolTest.test_queuepool_close __________
 Traceback (most recent call last):
   File "/home/tlocke/sqlalchemy/test/engine/test_pool.py", line 805, in test_queuepool_close
     self._do_testqueuepool(useclose=True)
   File "/home/tlocke/sqlalchemy/test/engine/test_pool.py", line 854, in _do_testqueuepool
     assert not pool._refs
 AssertionError: assert not set([<sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f4a8>, <sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f710>, <sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f5c0>])
  +  where set([<sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f4a8>, <sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f710>, <sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f5c0>]) = pool._refs

 The versions I'm using are:

 platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0

 I ran it against the tip of the master branch 
 72329433e78e57c8589e4797df523bb598190b64

 and the command I ran was a straightforward:

 py.test test/engine/test_pool.py::QueuePoolTest

 I'm sure I'm doing something really obviously wrong, but I'm not sure 
 what, so I'd be grateful if anyone can help. Btw, it was the only test that 
 failed when I ran the entire suite.

 Thanks,

 Tony.




  



[sqlalchemy] Test test/engine/test_pool.py::QueuePoolTest::()::test_queuepool_close fails

2015-04-07 Thread Tony Locke
Hi, I found the following SQLAlchemy test fails:

FAIL test/engine/test_pool.py::QueuePoolTest::()::test_queuepool_close

with the stack trace:

__________ QueuePoolTest.test_queuepool_close __________
Traceback (most recent call last):
  File "/home/tlocke/sqlalchemy/test/engine/test_pool.py", line 805, in test_queuepool_close
    self._do_testqueuepool(useclose=True)
  File "/home/tlocke/sqlalchemy/test/engine/test_pool.py", line 854, in _do_testqueuepool
    assert not pool._refs
AssertionError: assert not set([<sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f4a8>, <sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f710>, <sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f5c0>])
 +  where set([<sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f4a8>, <sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f710>, <sqlalchemy.pool._ConnectionRecord object at 0x7fc03cf1f5c0>]) = pool._refs

The versions I'm using are:

platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0

I ran it against the tip of the master branch 
72329433e78e57c8589e4797df523bb598190b64

and the command I ran was a straightforward:

py.test test/engine/test_pool.py::QueuePoolTest

I'm sure I'm doing something really obviously wrong, but I'm not sure what, 
so I'd be grateful if anyone can help. Btw, it was the only test that 
failed when I ran the entire suite.
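For context, the failing assertion checks that every _ConnectionRecord has been garbage collected by the time the test finishes. A minimal stdlib sketch of the same weak-reference pattern (ConnectionRecord here is a stand-in class; the sketch assumes pool._refs behaves like a WeakSet, which matches the symptom of lingering records):

```python
import gc
import weakref

class ConnectionRecord:
    """Stand-in for sqlalchemy.pool._ConnectionRecord."""

refs = weakref.WeakSet()

rec = ConnectionRecord()
refs.add(rec)
assert len(refs) == 1

# Dropping the last strong reference should empty the WeakSet; an entry
# that lingers here would correspond to an uncollected connection record.
del rec
gc.collect()
print(len(refs))
```

A non-empty collection at the assert therefore means something (a cursor, a cycle, an unclosed checkout) is still holding a strong reference to a record.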

Thanks,

Tony.



[sqlalchemy] Tests hang at test_ora8_flags

2015-03-19 Thread Tony Locke
Hi, I've encountered a glitch when running py.test over the entire test 
suite with the postgresql+pg8000 dialect:

py.test --dburi postgresql+pg8000://postgres:xxx@localhost:5436/test

the test hangs at:

test/dialect/test_oracle.py::CompatFlagsTest::test_ora8_flags PASSED

then if I do a Ctrl-C it carries on and reports all tests as having passed. 
The versions are:

platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.6.4
pg8000 1.10.2 (works fine with the previous version 1.10.1)
the sqlalchemy version is the latest commit on the master branch (commit 
cd076470baf2fce0eebf5853e3145d96a9d48378)

One really odd thing is that if I run py.test with the -s option (don't 
capture output) then it runs fine. Also, if I run just 
test/dialect/test_oracle.py then it runs okay; it's only a problem when 
running all tests. Any ideas gratefully received!

Thanks,

Tony.



Re: [sqlalchemy] Create database with sqlalchemy 0.8.0 and postgres

2014-05-14 Thread Tony Locke
I've had a look at this and submitted a patch that should make autocommit 
work for the pg8000 SQLAlchemy dialect in the same way as it does for the 
psycopg2 dialect: https://github.com/zzzeek/sqlalchemy/pull/88.

On Friday, 9 May 2014 17:50:43 UTC+1, Tony Locke wrote:

 Hi, the pg8000 driver has an autocommit mode, activated using the boolean 
 'autocommit' attribute of the DBAPI connection. For example:

 import pg8000

 conn = pg8000.connect(user="postgres", password="C.P.Snow")
 conn.autocommit = True
 cur = conn.cursor()
 cur.execute("CREATE DATABASE qux")
 conn.autocommit = False
 cur.close()
 conn.close()

 I'm not sure if the SQLAlchemy driver for pg8000 supports using:

 conn.execution_options(isolation_level="AUTOCOMMIT")

 I'll investigate...

 Cheers,

 Tony.



 On Wednesday, 7 May 2014 01:08:00 UTC+1, Michael Bayer wrote:


 On May 6, 2014, at 6:09 PM, Michael Costello michael7...@gmail.com 
 wrote:

 Hello.

 Setup:
   python 2.7.6
   postgres 9.3.4
   sqlalchemy 0.9.4 (also, 0.8.4)
   pg8000 1.9.8

 I am attempting to create a database using sqlalchemy with the above 
 tools and the following code:

  from sqlalchemy import create_engine

  dburl = "postgresql+pg8000://user:pas...@db.foo.com:5432/postgres"

  engine = create_engine(dburl)

  conn = engine.connect()
  conn.execute("COMMIT")
  conn.execute("CREATE DATABASE qux")
  conn.close()

 but I receive the following error:

 sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 
 'CREATE DATABASE cannot run inside a transaction block') 'CREATE DATABASE 
 qux' ()

 However, running the same code against the same database but using 
 sqlalchemy version 0.8.0 works.

 Is there something I can do to get 0.9.4 to work for me?



 I can’t imagine why that would work differently on 0.8.0 because the 
 transactional behavior is the same on the SQLAlchemy side.  

 Running this test with the latest pg8000 1.9.8:

  from sqlalchemy import create_engine

  e = create_engine("postgresql+pg8000://scott:tiger@localhost/test",
                    echo=True)
  conn = e.connect()
  conn.execute("COMMIT")
  conn.execute("create database foo")

 output on 0.9.4:

 sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 
 'CREATE DATABASE cannot run inside a transaction block') 'create database 
 foo' ()

 output on 0.8.0:

 sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 
 'CREATE DATABASE cannot run inside a transaction block') 'create database 
 foo' ()

 output on 0.8.4:

 sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 
 'CREATE DATABASE cannot run inside a transaction block') 'create database 
 foo' ()


 etc.

 so i think perhaps your pg8000 version has changed.

  To achieve this you should use psycopg2 and use psycopg2's "autocommit" 
  mode. See 
  http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#postgresql-isolation-level 
  and 
  http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#psycopg2-transaction-isolation-level; 
  with psycopg2 you can use 
  conn.execution_options(isolation_level="AUTOCOMMIT").execute("CREATE DATABASE qux").






Re: [sqlalchemy] Create database with sqlalchemy 0.8.0 and postgres

2014-05-09 Thread Tony Locke
Hi, the pg8000 driver has an autocommit mode, activated using the boolean 
'autocommit' attribute of the DBAPI connection. For example:

import pg8000

conn = pg8000.connect(user="postgres", password="C.P.Snow")
conn.autocommit = True
cur = conn.cursor()
cur.execute("CREATE DATABASE qux")
conn.autocommit = False
cur.close()
conn.close()

I'm not sure if the SQLAlchemy driver for pg8000 supports using:

conn.execution_options(isolation_level="AUTOCOMMIT")

I'll investigate...

Cheers,

Tony.



On Wednesday, 7 May 2014 01:08:00 UTC+1, Michael Bayer wrote:


 On May 6, 2014, at 6:09 PM, Michael Costello michael7...@gmail.com 
 wrote:

 Hello.

 Setup:
   python 2.7.6
   postgres 9.3.4
   sqlalchemy 0.9.4 (also, 0.8.4)
   pg8000 1.9.8

 I am attempting to create a database using sqlalchemy with the above tools 
 and the following code:

 from sqlalchemy import create_engine

 dburl = postgresql+pg8000://user:pas...@db.foo.com:5432/postgres

 engine = create_engine(dburl)

 conn = engine.connect()
 conn.execute("COMMIT")
 conn.execute("CREATE DATABASE qux")
 conn.close()

 but I receive the following error:

 sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 
 'CREATE DATABASE cannot run inside a transaction block') 'CREATE DATABASE 
 qux' ()

 However, running the same code against the same database but using 
 sqlalchemy version 0.8.0 works.

 Is there something I can do to get 0.9.4 to work for me?



 I can’t imagine why that would work differently on 0.8.0 because the 
 transactional behavior is the same on the SQLAlchemy side.  

 Running this test with the latest pg8000 1.9.8:

 from sqlalchemy import create_engine

 e = create_engine("postgresql+pg8000://scott:tiger@localhost/test", 
 echo=True)
 conn = e.connect()
 conn.execute("COMMIT")
 conn.execute("create database foo")

 output on 0.9.4:

 sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 
 'CREATE DATABASE cannot run inside a transaction block') 'create database 
 foo' ()

 output on 0.8.0:

 sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 
 'CREATE DATABASE cannot run inside a transaction block') 'create database 
 foo' ()

 output on 0.8.4:

 sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 
 'CREATE DATABASE cannot run inside a transaction block') 'create database 
 foo' ()


 etc.

 so I think perhaps your pg8000 version has changed.

 To achieve this you should use psycopg2 and use psycopg2's "autocommit" 
 mode.  See 
 http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#postgresql-isolation-level and 
 http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#psycopg2-transaction-isolation-level; 
 with psycopg2 you can use 
 conn.execution_options(isolation_level="AUTOCOMMIT").execute("CREATE DATABASE qux").







Re: [sqlalchemy] Prepared Statements in Postgresql

2014-02-12 Thread Tony Locke
Hi, just to confirm, the executemany() method in pg8000 does use prepared 
statements.

Cheers,

Tony.

On Friday, 15 January 2010 17:16:09 UTC, Michael Bayer wrote:

 mozillalives wrote:
  Hello Everyone,
 
  I am new to both sqlalchemy and elixir, but I have been using them for
  the past couple of weeks and I really like them. But I have a question
  about prepared statements for Postgresql.
 
  For one specific application, I am doing a bunch of inserts
  (200,000+). From what I can tell, it looks like these are not prepared
  statements. I rewrote the code to issue prepared statements and this
  cuts the insertion time in half, but the code is crude. My questions
  are:

 how did you use prepared statements in Python if you don't know that
 psycopg2 uses prepared statements?  Was this in another language, or did
 you implement a raw socket connection to your database?

 
  Is there a way to tell sqlalchemy or the engine (which would be
  psycopg2, correct?) to use prepared statements?

 to efficiently execute the same statement many times, use the
 executemany style of execution - the tutorial describes this at

 http://www.sqlalchemy.org/docs/05/sqlexpression.html#executing-multiple-statements .
 I don't think that psycopg2 actually uses prepared statements for
 this purpose but I am not sure.  The DBAPI executemany() method is used.
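The executemany style referred to above can be sketched with the stdlib sqlite3 module; this demonstrates only the generic DBAPI call, and says nothing about whether a given driver prepares the statement server-side:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, name TEXT)")

# One SQL string, many parameter sets: the DBAPI-level executemany() call.
rows = [(i, "user%d" % i) for i in range(1000)]
cur.executemany("INSERT INTO users (id, name) VALUES (?, ?)", rows)
conn.commit()

count = cur.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```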

 
  I've noticed some opinions online indicating that psycopg2 does not
  have prepared statement support (e.g. -
  
 http://www.cerias.purdue.edu/site/blog/post/beware_sql_injections_due_to_missing_prepared_statement_support/
 )

 the comment at the bottom of that post ultimately references a psycopg2
 message from 2007 so you'd need to ask the psycopg2 folks for updated
 information.  However psycopg2 can do an executemany with great
 efficiency as it is using methodologies for which you'd have to ask them,
 so if they don't use PG's actual prepared mechanism, it's probably
 unnecessary.  psycopg2 is an extremely mature and high performing product.


  - can I plug another engine into sqlalchemy that does?

 there's the pg8000 engine which may or may not do this.  But it's written
 in pure Python, is not as fast as psycopg2, and is very new and not widely
 used since its author doesn't seem to promote it very much (but it is a
 very well written library).


 
  If I can't do any of the above and just need to prepare the statements
  manually, is there at least a method in sqlalchemy to properly quote
  my data before sending it to postgres?

 Despite some of the fud-like links mentioned on that blog, SQLAlchemy, as
 it says on the website since the day we launched 5 years ago, always uses
 bind parameters, in all cases, for all literal values, everywhere.  We do
 not and have never quoted anything within SQLA as that is left up to the
 services provided by the DBAPI.   The DBAPI does not have a prepared
 statement API.  It has executemany(), for which the underlying implementation may
 or may not use prepared statements + server-level bind processing as an
 implementation detail.  Psycopg2 handles the quoting in this case. 
 cx_oracle, OTOH, uses Oracle's native data binding facilities provided by
 OCI.  DBAPI abstracts this detail away.
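The bind-parameter point can be made concrete with any DBAPI driver; in this stdlib sqlite3 sketch the application never quotes or escapes the value, because it is sent as a parameter rather than spliced into the SQL text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE notes (body TEXT)")

# The value contains quote characters, yet needs no manual escaping:
# it is passed as a bind parameter, never interpolated into the SQL string.
tricky = "it's a 'quoted' value"
cur.execute("INSERT INTO notes (body) VALUES (?)", (tricky,))
stored = cur.execute("SELECT body FROM notes").fetchone()[0]
```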

 
  Thanks,
  Phil


