Hi,
Strange behaviour with sqa in multi-process environment... already posted on
StackOverflow (http://stackoverflow.com/questions/21109794/delayed-change-using-sqlalchemy)
for a web app but still missing some understanding so posting here.
I've created an application where my sqa calls are encapsulated: my API's
methods always do the same kind of stuff:
1- request
I was having this kind of problem while using a multi-threaded app, but
with a Postgres backend.
In Postgres, with high concurrency, I was expecting this kind of
behaviour, so I had to implement some simple semaphores to make it work
properly -- SQLAlchemy is not to blame if something changes
On Mon, Jan 20, 2014 at 1:51 PM, Richard Gerd Kuesters
rich...@humantech.com.br wrote:
Thanks for your comments.
I'm using sqlite in default mode (which is synchronous) and here's my engine
configuration.
engine = create_engine('sqlite:///my_db.sqlite',
connect_args={'detect_types':
sqlite3.PARSE_DECLTYPES|
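For reference, the truncated engine configuration above can be rounded out into a runnable sketch; the original message was cut off after PARSE_DECLTYPES, so the trailing PARSE_COLNAMES flag is an assumption, not from the thread:

```python
# Runnable sketch of the engine configuration quoted above. The
# original snippet was cut off after PARSE_DECLTYPES, so the
# PARSE_COLNAMES flag here is an assumption, not from the thread.
import sqlite3

from sqlalchemy import create_engine

engine = create_engine(
    'sqlite:///my_db.sqlite',
    connect_args={
        'detect_types': sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES,
    },
)
```

With the pysqlite dialect, connect_args is passed straight through to sqlite3.connect(), which is where detect_types actually takes effect.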
Heavily awkward question: you said that the error occurs in memory and
on disk. Is your disk an SSD? TRIM enabled, noatime in fstab, deadline as
scheduler -- any of these?
On 01/20/2014 03:05 PM, pr64 wrote:
On Jan 20, 2014, at 11:12 AM, pr64 pierrerot...@gmail.com wrote:
qty = Column(Float(asdecimal=False), nullable=False, server_default='(0)')
qty always returns a Decimal object; how can I get a plain Python float
instead?
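If the DBAPI keeps handing back Decimal even with asdecimal=False, one workaround is a TypeDecorator that coerces result values on the way out -- a minimal sketch, not from the thread:

```python
# Minimal sketch (not from the thread): a TypeDecorator that coerces
# Decimal results to plain Python float on the way out.
from decimal import Decimal

from sqlalchemy.types import Numeric, TypeDecorator


class CoerceFloat(TypeDecorator):
    """Numeric column type whose result values are plain floats."""
    impl = Numeric
    cache_ok = True

    def process_result_value(self, value, dialect):
        if isinstance(value, Decimal):
            return float(value)
        return value
```

The column would then be declared as Column(CoerceFloat, ...) instead of Column(Float(asdecimal=False), ...).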
--
You received this message because you are subscribed to the Google Groups
sqlalchemy group.
To unsubscribe from this group and stop
1. What database backend is this?
2. What does the CREATE TABLE for the table in question look like, as it is
present in the database?
3. Was the behavior the same with 0.8.4 ?
On Jan 20, 2014, at 12:31 PM, 曹忠 joo.t...@gmail.com wrote:
OK, sorry, my earlier explanation wasn't quite right.
I launch two separate processes from the command line. Each imports my
API and therefore creates its own connection to the SQLite database.
Committing in process 1 should be visible from process 2. The problem I have
is that the change is
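The setup described can be sketched with the stdlib sqlite3 driver alone, taking SQLAlchemy out of the picture; two independent connections to the same file stand in for the two processes, and the file path is illustrative:

```python
# Two independent connections to the same SQLite file, standing in
# for the two processes described above. A commit on one connection
# is visible to the other as long as the reader is not still inside
# an older transaction. The file path is illustrative.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "my_db.sqlite")

conn1 = sqlite3.connect(path)  # "process 1"
conn2 = sqlite3.connect(path)  # "process 2"

conn1.execute("CREATE TABLE t (x INTEGER)")
conn1.execute("INSERT INTO t VALUES (1)")
conn1.commit()  # committed in process 1...

rows = conn2.execute("SELECT x FROM t").fetchall()  # ...visible in process 2
print(rows)  # [(1,)]
```

If the reader's Session began its transaction before the writer's commit, it keeps seeing the older snapshot until its own commit/rollback ends that transaction -- which matches the "delayed change" symptom.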
I'm just wondering about performance over-the-wire, from the bandwidth alone...
I see great, DBA-troubleshooting-friendly stuff like:
SELECT table_one.column_one AS table_one_column_one FROM table_one ;
but that could be...
SELECT t1.column_1 AS t1_column_1 FROM table_one t1;
or even
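A hedged sketch of the shorter aliased form above, using an explicit alias(); table and column names are taken from the post, and the 1.4/2.0-style select() is assumed:

```python
# Sketch of the shorter aliased form shown above, using an explicit
# alias(). Table and column names are taken from the post; the
# 1.4/2.0-style select() call is assumed.
from sqlalchemy import Column, Integer, MetaData, Table, select

metadata = MetaData()
table_one = Table("table_one", metadata, Column("column_one", Integer))

t1 = table_one.alias("t1")
stmt = select(t1.c.column_one)
print(stmt)
```

For the generated column labels themselves, create_engine() also accepts a label_length parameter that truncates labels beyond a given size, which may be the simpler knob here.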
On Jan 20, 2014, at 3:59 PM, pr64 pierrerot...@gmail.com wrote:
On Jan 20, 2014, at 4:32 PM, Jonathan Vanasco jonat...@findmeon.com wrote:
So wow, that's really neat. Amazingly neat.
And yeah, I'm just looking for ways to get some more performance out of a
few boxes, so we don't have to add another box. Trying to cut the fat here
and there -- and noticed some very verbose SQL. Wondering if losing it will
get 1-2% more out of
On Jan 20, 2014, at 5:23 PM, Jonathan Vanasco jonat...@findmeon.com wrote:
2. it doesn't affect the label/aliasing on the table, just the column . ie,
SELECT mytable.id AS _1 FROM mytable
not
SELECT _t1.id AS _1 FROM mytable _t1
that it can’t do. an Alias is a very
Sorry, I should have been more clear. I'm trying to get some more juice out
of the database server; it is streaming SQL nonstop. The webservers
are doing fine, and are simple to cluster out.
I know. My point was, get rid of a few SQL statements altogether instead of
trying to shrink the char counts in the ones you have...
If you really want, you can build a cursor_execute() event that does rewriting
at that level.
As far as coerce_from_config, there's a ticket somewhere to
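The cursor-level rewriting suggested above can be sketched with the before_cursor_execute event and retval=True; the rewrite performed here (collapsing runs of whitespace) is purely illustrative:

```python
# Sketch of SQL rewriting at the cursor level, as suggested above,
# via the before_cursor_execute event. The rewrite performed here
# (collapsing runs of whitespace) is purely illustrative.
import re

from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")


@event.listens_for(engine, "before_cursor_execute", retval=True)
def rewrite_sql(conn, cursor, statement, parameters, context, executemany):
    statement = re.sub(r"\s+", " ", statement)
    return statement, parameters


with engine.connect() as conn:
    value = conn.execute(text("SELECT   1   AS   x")).scalar()
print(value)  # 1
```

With retval=True the event must return the (statement, parameters) pair, which is what the DBAPI cursor then receives.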
Thanks, seemed to work for what I needed.
On Thursday, January 16, 2014 12:44:31 PM UTC-7, Michael Bayer wrote:
On Jan 16, 2014, at 1:48 PM, Rich richar...@gmail.com wrote:
I've been using delete cascading on a particular relationship for some
time and has worked well. My
i couldn't find anything on trac earlier for `coerce_from_config`
i'd be happy to whip up a patch for the .8 branch that
updates coerce_from_config to support all the create_engine kwargs. i have
no idea how to do that for the .9 branch though.
my sql is pretty optimized as is, with
Well, that's why I asked about his FS config earlier today. I got
some strange behaviors from sqlite using SSD drives with some specific
set of options for the filesystem; but I'm also not using the latest
version of sqlite... 3.7.9 here, as of Ubuntu 12.04 LTS. I really
didn't narrow it
Hi,
How to remove all mapper events?
This does not work:
events.MapperEvents._clear()