Greg Stark wrote:
You may
not expect to need to run queries which combine multiple users'
data now, but you will eventually.
We store cross-user data in a separate schema, which solves all *our*
problems.
This doesn't work so great when each user is going to be specifying
their own cus
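The separate-schema arrangement described above can be sketched roughly as follows; the schema and table names here are invented for illustration, not taken from anyone's actual setup:

```sql
-- Hypothetical layout: one schema per user for private data,
-- plus one shared schema for anything queried across users.
CREATE SCHEMA user_alice;
CREATE TABLE user_alice.work_items (
    id      serial PRIMARY KEY,
    payload text
);

CREATE SCHEMA shared;
CREATE TABLE shared.audit_log (
    id        serial PRIMARY KEY,
    username  text NOT NULL,                      -- which user owns the row
    logged_at timestamptz NOT NULL DEFAULT now()
);

-- Cross-user reporting only ever touches the shared schema:
SELECT username, count(*) FROM shared.audit_log GROUP BY username;
```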
Fabio La Farcioli wrote:
Hi to all,
I am developing a web app for thousands of users (1,000-2,000).
Each user has 2 work tables... so I finally have 2,000 (users) x 2
tables = 4,000 tables!
As someone with a ~50K-table database, I can tell you it's definitely
possible to survive with such a
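The usual alternative to 2,000 x 2 per-user tables is one pair of tables keyed by a user id; a minimal sketch, with table and column names invented for illustration:

```sql
-- Instead of a work table per user, keep two shared tables
-- keyed by user_id and index on that column.
CREATE TABLE app_user (
    user_id serial PRIMARY KEY,
    name    text NOT NULL
);

CREATE TABLE work_a (
    user_id int  NOT NULL REFERENCES app_user,
    payload text
);
CREATE INDEX work_a_user_idx ON work_a (user_id);

-- Per-user access stays cheap via the index:
SELECT payload FROM work_a WHERE user_id = 42;
```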
Hi Thomas,
How is 'messungen_v_dat_2009_04_13_gtyp_minute_tag_idx' defined? What is
the row count for the table?
Mike
Hi,
subject is the following type of query needed in a function to select
data:
SELECT ' 13.04.2009 12:00:00 ' AS zeit,
0.00..24.00 rows=1400 width=16) (never executed)
-> Sort (cost=104.83..108.61 rows=1510 width=12) (never executed)
Sort Key: b.void
-> Seq Scan on bigint_value b (cost=0.00..25.10
rows=1510 width=12) (never executed)
Total runtime: 0.479 ms
(46 rows)
ning wrote:
The log is really long,
Which usually signals a problem with the query.
but I compared the result of "explain analyze"
for first and later executions, except for 3 "time=XXX" numbers, they
are identical.
They are supposed to be identical unless something is really badly broken
Rui Carvalho wrote:
hum thanks a lot for the quick answer,
if it is not an abuse of your patience,
what is the best alternative to the LEFT OUTER JOINS?
I meant I wasn't sure whether you really meant *outer* joins. Too many
of them looked kinda suspicious :-)
If you *do* need them, then there is no
> Merge Join (cost=111885.70..319492.88 rows=13016048 width=620)
The outermost merge join has to go through 13 million rows. If you
remove "distinct on (bien.uid)", you'll see that.
> LEFT outer JOIN ville ON ville.uid = bien.ref_ville
> LEFT outer JOIN freguesia_ville ON freguesia_ville.ref_
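The DISTINCT ON behaviour referred to above can be sketched like this; the table and join condition come from the quoted query, but the selected columns are invented for illustration:

```sql
-- DISTINCT ON (bien.uid) keeps only the first row per uid, in ORDER BY
-- order, but the merge join underneath must still produce every joined
-- row (all 13 million of them) before the duplicates are collapsed.
SELECT DISTINCT ON (bien.uid)
       bien.uid, ville.uid AS ville_uid
FROM bien
LEFT OUTER JOIN ville ON ville.uid = bien.ref_ville
ORDER BY bien.uid;
```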
Hi Rui,
I have this query (I think it is a simple one)
Could you EXPLAIN ANALYZE the query and show the results please?
Thanks,
Mike
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Scott Carey wrote:
the OS can either quickly allocate to the process or
the page cache from the free buffers, or more slowly take from the page
cache, or even more slowly page out a process page.
Aha, now it all makes sense.
I like to use the '5 second rule'. dirty_background_ratio should
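The knobs under discussion live in /etc/sysctl.conf; the values below are purely illustrative, not a recommendation for any particular machine:

```
# /etc/sysctl.conf (illustrative values; tune for your hardware)
# Start background writeback once dirty pages exceed ~5% of RAM:
vm.dirty_background_ratio = 5
# Block writers outright once dirty pages exceed ~10% of RAM:
vm.dirty_ratio = 10
```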
Scott Carey wrote:
222 / 8 cores = ridiculous 27 processes per core, while the OP has 239
That's not ridiculous at all. Modern OSes handle thousands of idle
processes just fine.
I meant that 27 was a ridiculously small number.
Or you can control the behavior with the following kernel pa
Scott Marlowe wrote:
Close, but it'll use that memory for cache. Large buffers are not
typical in linux, large kernel caches are.
OK, we're talking about different things. You're right.
If that tutorial says that, then that tutorial is wrong. I'm guessing
what that tutorial is talking abou
Scott Marlowe wrote:
Also think about it, the OP has 8G of swap and 30Gig cached. How /
why would you be caching 30Gigs worth of data when there's only 8G to
cache anyway?
You're right, I have misread it again :-)
Cheers,
Mike
Hi Scott,
Well, we can't be sure OP's only got one core.
In fact, we can: Sean posted what top -b -n 1 says. There was only one
CPU line.
the number of cores, it's the IO subsystem that is too slow for the load.
More cores wouldn't fix that.
While I agree on the IO, more cores would defin
Scott Marlowe wrote:
The postgres shared cache is at 4G, is that too big?
Not for a machine with 32Gig of ram.
He could even add some more.
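A commonly cited starting point is shared_buffers at roughly 25% of RAM; as a postgresql.conf fragment (the value below is illustrative for a 32G machine, not a tuned recommendation):

```
# postgresql.conf (illustrative; ~25% of 32GB RAM is a common starting point)
shared_buffers = 8GB
```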
Mike
Sean,
Yes, besides another MySQL server running on
the same server,
Which is a really bad idea :-)
The postgres shared cache is at 4G, is that too big?
OK, I have misread the total memory amount which was 32G, and I thought
it was 3G. Thanks to Scott Marlowe, who pointed that out. In this
Hi Sean,
Well, the overall impression is your machine is badly overloaded. Look:
top - 10:18:58 up 224 days, 15:10, 2 users, load average: 6.27, 7.33, 6
A load average of about 6.5 means that, on average, six and a half
processes are competing for the same CPU (and this system apparently has only one).
Hi Sean,
Sean Ma wrote:
One month ago, this type of slow query happened about a few times per
day. But recently, the slow query happens more frequently, at a rate of
once every 10 minutes or less. There seems to be no relation to th
What is your hardware (memory, CPU type and such)?
This seems like
I'd be glad to hear any opinions/suggestions.
Many thanks to everyone who responded!
Mike
Hi there,
Please help me to make a decision on how to manage users.
For some reason it is easier in the project I'm working on to split data
by schemas and assign them to Postgres users (I mean those created with
CREATE USER) rather than support 'owner' fields referring to a global
users tab