I've got a fun problem.
My employer just purchased some new db servers that are very large. The
specs on them are:
4 Intel X7550 CPUs (32 physical cores, HT turned off)
1 TB RAM
1.3 TB Fusion-io (2 1.3 TB Fusion-io Duo cards in a RAID 10)
3 TB SAS array (48 15K 146 GB spindles)
The issue we are
All,
I'm trying to estimate the size of my hot data set, and wanted to get some
validation that I'm doing this correctly.
Basically, I'm using the sum(heap_blks_read + idx_blks_read) from
pg_statio_all_tables, and diffing the numbers over a period of time (1 hour
at least). Is this a fair estima
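For what it's worth, the arithmetic behind that estimate would look like this (a sketch only; the snapshot values are invented, and 8 kB is PostgreSQL's default block size):

```python
BLOCK_SIZE_KB = 8  # PostgreSQL's default block size

def hot_set_estimate_kb(start_blocks, end_blocks):
    """Estimate the hot data set from two snapshots of
    sum(heap_blks_read + idx_blks_read) out of pg_statio_all_tables,
    taken at least an hour apart."""
    return (end_blocks - start_blocks) * BLOCK_SIZE_KB

# Invented numbers: 1,000,000 blocks touched in the window
print(hot_set_estimate_kb(5_000_000, 6_000_000))  # 8000000 KB (~7.6 GB)
```

Note this counts block fetch requests, not distinct blocks, so repeated reads of the same block inflate the estimate.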
I just found out that my company is planning on migrating my databases from
our current ISCSI storage solution to NetApps connected via NFS. I knew
about the NetApp migration, but always assumed (and shame on me) that I
would have direct attachments to the servers.
Well, I am very uncomfortable w
Is anyone running their production PostgreSQL server on the RHEL Cluster
software? If so, how is it working for you? My Linux admin is looking to
increase the robustness of our environment and wants to eliminate as many
single points of failure as possible.
So, I am looking f
I have a database where I dropped all indexes on a table last night
and built a new set of indexes. The goal is to try and let the
database have fewer indexes and use them more. I removed a bunch of
indexes that were surviving from our 7.3 days where functionality will
now be covered by 8.1's use
I have a question.
Consider this scenario.
Table customer (
customer_id int8,
customer_l_name varchar(35),
customer_f_name varchar(30),
customer_addr_1 varchar(100),
customer_addr_2 varchar(100),
customer_city varchar(50),
customer_state char(2),
customer_zip varchar(9)
);
On this table, a cust
On 7/3/07, smiley2211 <[EMAIL PROTECTED]> wrote:
CREATE OR REPLACE VIEW temp_consent2 AS
SELECT DISTINCT temp_consent.id, temp_consent.daterecorded
FROM temp_consent
WHERE temp_consent.answer::text = 'Yes'::text
ORDER BY temp_consent.daterecorded DESC, temp_consent.id;
Get rid of the
On 6/8/07, Bill Moran <[EMAIL PROTECTED]> wrote:
In response to "Chris Hoover" <[EMAIL PROTECTED]>:
> On 6/8/07, Bill Moran <[EMAIL PROTECTED]> wrote:
> >
> > In response to "Chris Hoover" <[EMAIL PROTECTED]>:
> >
> >
On 6/8/07, Bill Moran <[EMAIL PROTECTED]> wrote:
In response to "Chris Hoover" <[EMAIL PROTECTED]>:
> I need some help. I have started taking snapshots of performance of my
> databases with concerns to io. I created a view on each cluster defined
as:
>
I need some help. I have started taking snapshots of performance of my
databases with concerns to io. I created a view on each cluster defined as:
SELECT pg_database.datname AS database_name,
pg_stat_get_db_blocks_fetched(pg_database.oid) AS blocks_fetched,
pg_stat_get_db_blocks_hit(pg_database.
Question,
Does (pg_stat_get_db_blocks_fetched(oid) - pg_stat_get_db_blocks_hit(oid)) * 8 =
the number of KB read from disk for the listed database since the last server
startup?
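Parenthesized the way I intend it, the arithmetic is (a sketch with invented counter values; 8 kB is the default block size, so blocks times 8 gives KB):

```python
def kb_read_from_disk(blocks_fetched, blocks_hit, block_size_kb=8):
    """Blocks fetched minus blocks hit in shared_buffers = blocks the
    backend had to ask the OS for; times the 8 kB block size gives KB
    since the last statistics reset / server startup."""
    return (blocks_fetched - blocks_hit) * block_size_kb

print(kb_read_from_disk(1_000_000, 900_000))  # 800000
```

One caveat: this counts reads issued to the OS; some of those will still be served from the kernel page cache rather than physical disk.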
Thanks,
Chris
Hi everyone,
I am testing my shared_buffers pool and am running into a problem with slow
inserts and commits. I was reading in several places that in the 8.X
PostgreSQL engines one should set shared_buffers closer to 25% of the
system's memory. On my development system, I have done that. We have
Is there a reason you are not upgrading to PostgreSQL 8.1? It will run natively
on Windows and will give you much better performance. 7.1 is way out of date
and has a lot of bad issues in it. Upgrading will most likely fix this issue.
Chris
On 8/22/06, Ravindran G - TLS, Chennai. <[EMAIL PROTECTE
Just curious, is this a production server? Also, how large is the total cluster
on disk?
On 8/3/06, Ian Westmacott <[EMAIL PROTECTED]> wrote:
is that all? psql -l | grep 'rows)'
(2016 rows)
On Thu, 2006-08-03 at 21:15 +0200, Steinar H. Gunderson wrote:> On Thu, Aug 03, 2006 at 01:33:35PM -0500, Jim
I've got 226 customer databases in one cluster. It works like a champ on 8.1.3. I have 3 additional PostgreSQL servers with our largest customers on them; they have between 10 and 30 databases each. The smallest of my servers has 261 GB worth of databases in the cluster, and the largest has 400 GB.
BTW,
Just turn on autovacuuming on your 8.1 database. You can tune the vacuum and autovacuum parameters to minimize the impact to your system. This is the optimal route to take since PG will maintain the tables for you as needed.
HTH,
Chris
On 7/15/06, Gabriele Turchi <[EMAIL PROTECTED]> wrote:
Il gior
On 7/7/06, William Scott Jordan <[EMAIL PROTECTED]> wrote:
Hi Jeff,
Ah, okay. I see what information you were looking for. Doing a VACUUM on the
full DB, we get the following results:
INFO: free space map: 885 relations, 8315 pages stored; 177632 total pages
needed
DETAIL
On 7/3/06, David Gagnon <[EMAIL PROTECTED]> wrote:
Can I hope to have several hundred DBs on 1 DB server? Like 250 DBs = 250
clients = 360,000 tables!!! So is there a limit on the number of DBs in the DB
server? (This spec is not on the website.) What about the performance? Can I
expect to have the sa
Question,
How exactly do Postgres and Linux use memory?
I have several databases that have multi-GB indexes on very large
tables. On our current servers, the indexes can fit into memory
but not the data (the servers have 8 - 12 GB). However, my boss is
wanting to get new servers for me but doe
In PostgreSQL 8.1, is the pg_autovacuum daemon affected by the
vacuum_cost_* variables? I need to make sure that if we turn
autovacuuming on when we upgrade to 8.1, we don't cause any i/o
issues.
Thanks,
Chris
Hopefully a quick question.
In 7.3.4, how does the planner execute a query with union alls in it?
Does it execute the unions serially, or does it launch a "thread" for
each union (or maybe something else entirely).
Thanks,
Chris
Here is an explain from the view I'm thinking about, how does pos
On 8/24/05, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> > Ok, there is always a lot of talk about tuning PostgreSQL on linux and
> > how PostgreSQL uses the linux kernel cache to cache the tables and
> > indexes.
> [...]
> >
> > 1. Implement a partition type layout using views and rules - This
> >
Ok, there is always a lot of talk about tuning PostgreSQL on linux and
how PostgreSQL uses the linux kernel cache to cache the tables and
indexes.
My question is, is there anyway to see what files linux is caching at
this moment?
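One way to peek (a hedged, Linux-only sketch using the mincore(2) syscall via ctypes; on a real server you would point it at files under $PGDATA/base, and dedicated tools such as fincore from util-linux do the same job):

```python
import ctypes, mmap, os

def cached_pages(path):
    """Count how many pages of a file are resident in the Linux page
    cache, via mincore(2). Returns (cached_pages, total_pages)."""
    size = os.path.getsize(path)
    pagesize = os.sysconf("SC_PAGESIZE")
    npages = (size + pagesize - 1) // pagesize
    if npages == 0:
        return 0, 0
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    with open(path, "rb") as f:
        # MAP_PRIVATE with a writable protection so ctypes can take the
        # buffer's address; nothing is actually written.
        mm = mmap.mmap(f.fileno(), size, flags=mmap.MAP_PRIVATE,
                       prot=mmap.PROT_READ | mmap.PROT_WRITE)
    vec = (ctypes.c_ubyte * npages)()  # one residency byte per page
    addr = ctypes.addressof(ctypes.c_char.from_buffer(mm))
    if libc.mincore(ctypes.c_void_p(addr), ctypes.c_size_t(size), vec) != 0:
        raise OSError(ctypes.get_errno(), "mincore failed")
    cached = sum(b & 1 for b in vec)  # bit 0 set = page is resident
    mm.close()
    return cached, npages
```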
My reasoning behind this question is:
I have several database syst
Sorry, forgot to state that we are still on PG 7.3.4.
On 8/17/05, Chris Hoover <[EMAIL PROTECTED]> wrote:
> I have some questions about tuning my effective_cache_size
>
> I have a RHEL 2.1 box running with dual Xeon (2.6 GHz I believe and
> they have HT on). The box has
I have some questions about tuning my effective_cache_size
I have a RHEL 2.1 box running with dual Xeon (2.6 GHz I believe and
they have HT on). The box has 8GB memory. In my postgresql.conf, I
have set the effective_cache_size = 530505 (~4GB).
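As a sanity check on that number (effective_cache_size is measured in 8 kB disk pages in this release):

```python
pages = 530505       # effective_cache_size from postgresql.conf
page_size = 8192     # default 8 kB PostgreSQL page
print(pages * page_size / 2**30)  # ~4.05 GiB
```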
However, I am noticing on this machine, top is tel
I'm already running at 1.5. It looks like if I drop the
random_page_cost to 1.39, it starts using the indexes. Are there any
unseen issues with dropping the random_page_cost this low?
Thanks,
Chris
On 7/28/05, Dan Harris <[EMAIL PROTECTED]> wrote:
>
> On Jul 28, 2005,
Does anyone have any suggestions on this? I did not get any response
from the admin list.
Thanks,
Chris
-- Forwarded message --
From: Chris Hoover <[EMAIL PROTECTED]>
Date: Jul 27, 2005 12:29 PM
Subject: Re: Help with view performance problem
To: pgsql-admin@postgresql.o
Domenico Sgarbossa wrote:
I'm running PostgreSQL 7.2 on a Linux Red Hat 8.0 box with 2GB of RAM.
When I boot-up the system , this is the TOP situation:
11:59am up 4 min, 1 user, load average: 0.37, 0.26, 0.11
77 processes: 74 sleeping, 3 running, 0 zombie, 0 stopped
CPU states: 0.3% user, 0.7% s
Ok, I was able to run a vacuumdb -f -v on my largest db over the weekend.
However, I am having trouble reading the results of the table portion. Here
are a couple of tables; what should I be looking at? The first table is the key
table of the db, and the second is the largest table in the db.
On Friday 23 April 2004 14:57, Ron St-Pierre wrote:
Does this apply to 7.3.4 also?
> Actually, since he's running 7.4, there's an even better way. Do a
> "VACUUM VERBOSE" (full-database vacuum --- doesn't matter whether you
> ANALYZE or not). At the end of the very voluminous output, you'll se
On Friday 23 April 2004 13:21, scott.marlowe wrote:
> On Fri, 23 Apr 2004, Chris Hoover wrote:
> > DB's on Powervaults 220S using raid 5 (over 6 disks)
>
> What controller is this, the adaptec? We've found it to be slower than
> the LSI megaraid based controller,
On Friday 23 April 2004 13:44, Kevin Barnard wrote:
> Chris Hoover wrote:
> > I know the numbers look ok, but we are definitely suffering. Also, if I
> > try to run any sort of vacuum or other db activity during normal business
> > hours, load goes through the roof. I have seen
I know the numbers look ok, but we are definitely suffering. Also, if I try to
run any sort of vacuum or other db activity during normal business hours,
load goes through the roof. I have seen loads of over 10 when trying to
vacuum the larger cluster and would have to kill the vacuums due to
c
I need some help. I have 5 db servers running our database servers, and they
all are having various degrees of performance problems. The problems we are
experiencing are:
1. General slowness
2. High loads
All of our DBs are running on Dell PowerEdge 2650s with 2 P4 Xeons (2.8 -
3.06 GHz)
I think I have figured my problem out.
I was taking heap_blks_hit / heap_blks_read for my hit pct.
It should be heap_blks_hit/(heap_blks_read+heap_blks_hit), correct?
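A sketch of the corrected calculation (the counter values here are invented):

```python
def hit_pct(heap_blks_read, heap_blks_hit):
    """Buffer-cache hit percentage: hits over total block requests
    (hits + reads), not hits over misses."""
    total = heap_blks_read + heap_blks_hit
    return 100.0 * heap_blks_hit / total if total else 0.0

print(hit_pct(200, 800))  # 80.0
```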
Thanks
On Wednesday 21 April 2004 11:34, Chris Hoover wrote:
> I just want to make sure that I am interpreting this d
I just want to make sure that I am interpreting this data correctly.
From pg_statio_user_tables, I have pulled relname, heap_blks_read,
heap_blks_hit. I get several rows like this:
relname    heap_blks_read    heap_blks_hit
clmhdr     8607161
I need some help. I have a query that refuses to use the provided index and
is always sequentially scanning causing me large performance headaches. Here
is the basic situation:
Table A:
inv_num int
type char
.
.
.
pkey (inv_num, type)
indx(inv_num)
Table B (has the same primary key