Thanks for the info. That clarifies things :)
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/interesting-finding-on-order-by-behaviour-tp4623884p4632301.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
--
Sent via pgsql-general mailing list
On Jul 22, 12:20 pm, scott_r...@elevated-dev.com (Scott Ribe) wrote:
On Jul 22, 2011, at 11:11 AM, Samuel Hwang wrote:
results
=======
SQL Server 2008 R2 (with case-insensitive data, the ordering follows
ASCII order):
f1
---
AbC
abc
ABc
cde
CdE
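For comparison, PostgreSQL's ORDER BY follows the database collation, and a byte-wise ("ASCII") or a case-folded ordering can be requested per query. A minimal sketch, assuming PostgreSQL 9.1+ for the COLLATE clause and a table t with a text column f1:

```sql
-- byte-wise comparison, i.e. ASCII order for ASCII data
select f1 from t order by f1 collate "C";

-- case-insensitive ordering, with f1 as a tie-breaker
select f1 from t order by lower(f1), f1;
```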
Well, if it's case
Simon Riggs wrote:
The real question is why anyone would actually perform that kind of
UPDATE. It doesn't really make much sense to increment a PK value.
PostgreSQL is good at supporting things people want and need, so
differences do exist in places that are fairly low priority.
I
Thanks. That's the best reason I can think of too.
Less expensive feels like taking shortcuts to speed things up, and not a
good reason to base a decision on.
Although maintaining backward compatibility makes sense (to some extent), if
old PG doesn't behave properly, it should be fixed, shouldn't
Shianmiin wrote:
I thought atomicity should be at the statement level; could anyone tell me
why PostgreSQL behaves differently?
Re-stating for clarification:
Since SQL is a set language, there is no concept of row order. I thought the
checking should be done per set operation (i.e. per
setup:
drop table if exists t1;
create table t1 (f1 int);
create unique index uix_t1 on t1(f1);
insert into t1(f1) values (1), (2), (3);
select * from t1;
f1
---
1
2
3
test statement:
update t1 set f1 = f1 + 1;
In PostgreSQL I got:
ERROR: duplicate key value violates unique
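The non-deferrable unique check fires as each row is updated, so the intermediate duplicate trips it. A common workaround, assuming PostgreSQL 9.0+, is to declare the constraint DEFERRABLE so the check runs once at the end of the statement:

```sql
drop table if exists t1;
create table t1 (f1 int);
-- a DEFERRABLE unique constraint is checked at the end of the
-- statement, i.e. against the final set rather than row by row
alter table t1 add constraint uq_t1_f1 unique (f1)
  deferrable initially immediate;
insert into t1(f1) values (1), (2), (3);
update t1 set f1 = f1 + 1;  -- succeeds: the final set {2, 3, 4} is unique
```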
Merlin Moncure-2 wrote:
One proposed solution is to cache plpgsql plans around the search path.
I like the proposed solution: since search_path plays a part in generating the
plpgsql plan, it makes sense for it to be part of the cache key.
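The interaction being discussed can be sketched as follows (schema, table, and function names are hypothetical; modern PostgreSQL replans when search_path changes, while the thread concerns versions that reused the stale plan):

```sql
create schema tenant1; create table tenant1.t (v int);
create schema tenant2; create table tenant2.t (v int);

create function rowcount() returns bigint language plpgsql as $$
begin
  -- "t" is unqualified, so it is resolved through search_path
  return (select count(*) from t);
end $$;

set search_path to tenant1;
select rowcount();  -- plan binds "t" to tenant1.t

set search_path to tenant2;
select rowcount();  -- with plans cached per search_path, this
                    -- correctly re-resolves "t" to tenant2.t
```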
Merlin Moncure-2 wrote:
*) use sql functions for portions
We recently ran into unexpected behavior of a PostgreSQL function
written in PL/pgSQL.
I wonder if anyone can help explain the ideas behind the design.
Test scenario:
1. create two identical schemas, let's call them tenant1 and tenant2
-- set up tenant1
create schema tenant1;
set
Merlin Moncure-2 wrote:
... I've coded a
lot of multi schema designs and they tend to either go the one
session/schema route or the connection pooling route. Either way,
cache memory usage tends to work itself out pretty well (it's never
been a problem for me before at least). I can't
Tom Lane-2 wrote:
I don't think it's a leak, exactly: it's just that the relcache entry
for each one of these views occupies about 100K. A backend that touches
N of the views is going to need about N*100K in relcache space. I can't
get terribly excited about that. Trying to reduce the
Merlin Moncure-2 wrote:
I am not seeing your results. I was able to run your test on a stock
config (cut down to 50 schemas though) on a vm with 512mb of memory.
What is your shared buffers set to?
shared_buffers was set to 32MB, as in the default postgresql.conf.
To save you some
Merlin Moncure-2 wrote:
I think you may have uncovered a leak (I stand corrected).
The number of schemas in your test is irrelevant -- the leak is
happening in proportion to the number of views (set via \setrandom
tidx 1 10). At 1 I don't think it exists at all -- at 100 memory use
Hi Merlin,
I revised the test code with the attached files and used pgbench to send the test
queries.
http://postgresql.1045698.n5.nabble.com/file/n4290723/dotest dotest
http://postgresql.1045698.n5.nabble.com/file/n4290723/initialize.sql
initialize.sql
Good point. Thanks.
The tests we did in-house were all from the client side and definitely not in a
single transaction. I just found that this simplified test case can reproduce the
same memory usage issue and didn't pay too much attention to it.
If we repeatedly do smaller batches, we can still see
If we go with the single-db-multiple-schema model, either our data access layer
will need to qualify all database objects with the proper schema
name, or, with PostgreSQL, just change the search_path when the
connection is passed from the pool to the app code. Another model under evaluation is
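The search_path approach can be a single command issued by the pool when it hands a connection to the app code (the tenant name here is illustrative):

```sql
-- run by the connection pool on checkout, before the app sees the connection
set search_path to tenant42, public;
-- after this, unqualified names such as in "select * from orders"
-- resolve to tenant42.orders, falling back to public
```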
Thanks. Probably, but that's not the point here.
The issue here is how the PostgreSQL backend process uses memory, and I wonder if
there is any way to configure it.
Shianmiin wrote:
Hi Merlin,
I revised the test code with attached files and use pgbench to send the
test queries.
http://postgresql.1045698.n5.nabble.com/file/n4290723/dotest dotest
http://postgresql.1045698.n5.nabble.com/file/n4290723/initialize.sql
initialize.sql
On Fri, Apr 8, 2011 at 7:43 AM, Shianmiin wrote:
Hi Merlin,
I revised the test code with attached files and use pgbench to send the
test
queries.
http://postgresql.1045698.n5.nabble.com/file/n4290723/dotest
No, I didn't configure 1.5GB of shared memory. For this test I recreated a
database cluster and left everything in the configuration at the defaults.
As in the original post,
when the connection was first established, the memory usage of the backend
process shown in top was
VIRT = 182MB, RES = 6240K.
Further clarification:
if I run two concurrent threads
pgbench memoryusagetest -c 2 -j 2 -T180 -f test.sql
both backend processes use 1.5GB, resulting in 3GB in total.
Samuel
Merlin Moncure-2 wrote:
On Fri, Apr 8, 2011 at 2:00 PM, Shianmiin <shianm...@gmail.com>
wrote:
Further clarification,
if I run two concurrent threads
pgbench memoryusagetest -c 2 -j 2 -T180 -f test.sql
both backend processes use 1.5GB, resulting in 3GB in total.
yes. could you
Hi there,
We are evaluating PostgreSQL to implement a multitenant database.
Currently we are running some tests on the single-database-multiple-schema model
(basically, all tenants have the same set of database objects under their own
schema within the same database).
The application will