Hi,
We are running v814 with 8 GB of RAM and 3.5 GB allocated to shared memory.
It's never been a problem for us before. We ran into a situation
yesterday where we received the message above. At that time our total
available memory took a dive to 4 GB and cached memory stayed around 4 GB.
Hi,
We had to kill postmaster, and it's been ugly. The last backup finished
successfully not last night, but the night before. This is a 161 GB database.
How long should I expect this to take before it starts? Developers are
coming into my office every 5 minutes asking for updates. It's
0 pages are entirely empty.
CPU 1.98s/0.33u sec elapsed 2419.27 sec.
INFO: analyzing "_test"
INFO: "_test": scanned 3000 of 492247 pages, containing 74065 live rows and
0 dead rows; 3000 rows in sample, 12152758 estimated total rows
Total query runtime: 2433517 ms.
Hi Tom,
Sorry - to all my groupies out there for the time delay :) -
It's a rather time consuming endeavor.
Ok the ctid numbers did all seem to match for the group of 25 rows that
disappear.
For example, ctids (146649,1) to (146649,25) represented a group or block
of 25 rows that go missing.
Hi Ray,
I've tried "it" on both WAFL and Reiser filesystems and gotten either 0, 1,
or 2 blocks of 25 rows missing. I haven't tried it on ext3 or JFS. No, I
haven't seen any problems in the syslogs, and I haven't seen any drops on the
network in regards to WAFL. Reiser is the local filesystem.
This is pretty confusing. You mean that groups of 25 adjacent rows were
missing in the output?
Yes, isn't that interesting? It's always in a group of 25 rows. This is
always random. Say 800,000 to 800,025 out of 12 million represents one
random group of 25 rows.
What's the (24+1) supp
This behavior is on big tables.
6.5 GB - 12+ million records
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Yes - we are doing select count(*) from… The stats are worthless until you
vacuum, and we aren't running vacuum. Plus those change too often to be
reliable; they just make a good guess.
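For what it's worth, the gap between the two can be sketched like this (the table name bigtable is hypothetical):

```sql
-- reltuples is the planner's estimate, refreshed only by VACUUM/ANALYZE;
-- count(*) is exact but has to scan the whole table.
SELECT reltuples::bigint AS estimated_rows
  FROM pg_class
 WHERE relname = 'bigtable';

SELECT count(*) AS exact_rows FROM bigtable;
```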
No, there are different blocks of records missing every time.
If you repeat the entire experiment, are the same records missing each
time? I'm wondering about flaky hardware as much as anything.
4k blocks and our NFS mounts are 4k blocks.
In summary, we are very concerned and have no idea of what direction to go
with this.
~DjK
We are having some serious problems with the PostgreSQL COPY FROM command
and we have no clue what to do with this at this point. I think we need to
ask for 'guru' help in diagnosing this problem. Here is exactly what we are
doing:
We have a table with 12028587 records in it.
We do a COPY TO f
Hi,
We have SLES set up to use UTC-4 right now, EDT. When we do a restore, we
have to set the time 1 hour earlier than we really want the restore time to
be. This is because we have a theory that something isn't defined quite
right. We think postgres is using UTC-5 for some reason. We don'
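If the suspicion is a zone mismatch, one hedged workaround is to pin the restore target to an explicit UTC offset in recovery.conf (the timestamp below is hypothetical):

```
# recovery.conf -- an explicit offset leaves no room for the server to
# assume a different zone (UTC-5 vs. UTC-4)
recovery_target_time = '2006-08-01 01:20:00-04'
```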
Hi,
Every day I'm arguing with 3 or 4 people on this point.
My point is that I have to do a tarball, a tar command, as part of online
backup with postgresql v814. I keep saying that the reason we do a tarball
(online backup) and not individual dumps is that we want the PITR
capability. I've be
Hi,
Does anyone know of a Nagios check or SNMP MIB alert to notify you
when, say, 100 locks build up? We start to notice people getting blocked
with large numbers of locks.
~DjK
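A minimal sketch of a Nagios-style threshold check; in real use the count would come from `psql -d DBN -At -c "SELECT count(*) FROM pg_locks;"` (DBN and the threshold of 100 are hypothetical):

```shell
# Hedged sketch: emit a Nagios-style status for a given lock count.
check_locks() {
    locks=$1
    threshold=100
    if [ "$locks" -ge "$threshold" ]; then
        echo "CRITICAL: $locks locks"
        return 2       # Nagios CRITICAL exit status
    fi
    echo "OK: $locks locks"
    return 0           # Nagios OK exit status
}

check_locks 42   # prints "OK: 42 locks"
```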
Has anyone found a Nagios check_* plugin or SNMP MIB to use in Nagios to
alert you when someone has a row-level or table-level lock?
Hi,
When the postgresql.pdf for version 8.1 says that
Each buffer is 8192 bytes, unless a different value of BLCKSZ was chosen
when building the server.
When it says building the server, is it talking about the postgres server
or the OS server that the postgres instance is running on?
Ass
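For what it's worth: "building the server" refers to compiling the PostgreSQL binaries, not provisioning the OS host. In the 8.1 sources BLCKSZ is a compile-time constant in src/include/pg_config_manual.h; changing it means recompiling and re-initdb'ing. A hedged sketch (the define line below is illustrative, with the 8.1 default value):

```shell
# The constant looks like this in src/include/pg_config_manual.h;
# extracting the trailing number yields the block size in bytes.
blcksz_line='#define BLCKSZ 8192'
echo "$blcksz_line" | grep -o '[0-9]*$'   # prints 8192
```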
Hi,
When I run an online-backup script (v810) on SLES, the process immediately
goes into the 'D' STATE.
The backup finishes in a couple of hours. The backup is good and I can
restore. Why does it do that though? If I run the backup script at 12:20
AM instead of 1:20 AM it finishes in 20 m
Another thing that concerns me is that the zipped file size keeps changing.
I'm backing up 10 GB total. One night the backup file was 9 GB, one night it
was 874 MB and just now, 5 GB. I'm expecting a file around 1.5 GB (15% of
the 10 GB that's there).
tar: /sqldata/Linux.pgsql/base/19473856/19524666: file changed as we
read it
psql -d DBN -c "SELECT pg_start_backup('something.backup');"
tar -zcvf /something.tar.gz /PGDATA/dir/*
psql -d DBN -c "SELECT pg_stop_backup();"
Thanks,
~DjK
From: "Marco Bizzarri"
/bin/tar: /sqldata/Linux.pgsql/base/19473856/19524666: file changed as we
read it
Hi,
I'm seeing that a file changed during the /bin/tar process. Is that
anything to be concerned about?
This is the first time this happened in a month. Does that mean I'm still
backed up in regards to that da
if that was by
design or I have a problem.
~DjK
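GNU tar exits with status 1 when a file changed while it was being read, which is expected during an online base backup: the cluster keeps writing between pg_start_backup() and pg_stop_backup(). Only status 2 or higher signals a real error. A hedged sketch of tolerating that exit code in a backup script:

```shell
# Treat GNU tar exit status 0 or 1 as success for an online base backup;
# anything higher is a genuine failure.
tar_rc_ok() {
    [ "$1" -le 1 ]
}

tar_rc_ok 1 && echo "tolerated"   # prints "tolerated"
tar_rc_ok 2 || echo "fatal"       # prints "fatal"
```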
From: Andy Shellam <[EMAIL PROTECTED]>
Reply-To: Andy Shellam <[EMAIL PROTECTED]>
To: "Mr. Dan" <[EMAIL PROTECTED]>
CC: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] online backup - v810 vs. v814
Date: Tue, 01 Aug 2
Hi,
I do an online backup with v814 and v810. I've noticed a file showing up
in the PGDATA
directory/pg_xlog/0125100C2.0021CDDE.backup, owned by
postgres and 284 bytes in size, that shows up in v810. That file isn't
showing up in v814. Is that by design?
~DjK
Hi,
Using this query plan, an extra uid shows up in this example. We are in the
process of upgrading from v810 to v814. Does anyone see anything wrong with
this query plan that might be causing a problem?
Index Scan using pk_recent_projects on recent_projects (cost=0.00..5.81
rows=1 width
Hi & help,
I just started my first instance of v814. It's giving these errors in the
logs with archiving. I saw one other post on this in the newsgroup, but my
problem is a little different. The WAL files all have a .ready suffix in
the archive_status folder in pg_xlog. None of the
Thanks!
~DjK
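.ready files accumulate in pg_xlog/archive_status when archive_command is unset or keeps failing; the server retries the same segment until the command returns zero. A minimal sketch, assuming a hypothetical local /archive directory:

```
# postgresql.conf (destination path hypothetical)
archive_command = 'cp "%p" /archive/"%f"'
```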
From: Tom Lane <[EMAIL PROTECTED]>
To: "Mr. Dan" <[EMAIL PROTECTED]>
CC: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] On-line backup
Date: Mon, 17 Jul 2006 14:43:30 -0400
"Mr. Dan" <[EMAIL PROTECTED]> writes:
> ... What h
########## old ##########
"Mr. Dan" <[EMAIL PROTECTED]> writes:
> Is this 2003 advice still relevant with postgresql 8.1.0? Our b-tree
> indexes corrupt pretty often on our production server running 8.1.0 and we
> are grasping for a solution
Hi Tom,
Is this 2003 advice still relevant with postgresql 8.1.0? Our b-tree
indexes corrupt pretty often on our production server running 8.1.0 and we
are grasping for a solution. We perform online backups like this: In
addition, we archive the transactions and replay them for PITR.
/us
Hi,
I have noticed that my full vacuum and re-index script is taking a day and a
half instead of half a day.
Recently, the size of my database cluster has doubled to 130 GB. Would
anyone recommend increasing one or more of these to help get my full vacuums
to run a little faster?
Thanks,
~DjK
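A minimal sketch of the settings usually raised for a maintenance window, with hypothetical values (in 8.1, maintenance_work_mem is expressed in KB):

```
# postgresql.conf
maintenance_work_mem = 262144   # 256 MB; speeds the index rebuilds done
                                # by VACUUM FULL and REINDEX
vacuum_cost_delay = 0           # no I/O throttling during the window
```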
Hi,
pgAdmin is amazing. You can log on with a user account that's been
authenticated through Linux PAM and then run a command like pg_dump or
createdb that you wouldn't have access to run on the Linux command line, as
long as postgres approves that access. Is there a way to bring that power
Hi,
Is there a way to hide or encrypt the postgres password when the postgres
user is required to be the user in a shell script? This way we could have
a function called in a program, and the users could do something without
actually seeing the postgres user password.
We have tried puttin
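One hedged approach is to keep the password out of the scripts entirely and put it in ~/.pgpass, which libpq reads to skip the prompt, but only if the file's mode is 0600. The host, database, user, and password values below are hypothetical:

```shell
# Create a ~/.pgpass entry (format: hostname:port:database:username:password)
PGPASSFILE="$HOME/.pgpass"
echo 'localhost:5432:mydb:postgres:s3cret' > "$PGPASSFILE"
chmod 0600 "$PGPASSFILE"   # libpq ignores the file unless it is 0600
```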
postgresql 8.1.0
Hi,
I have checkpoint_segments set to 18, yet I have 38 segment files in
pg_xlog. Is there a reason for that?
Thanks in advance,
~DjK
# WRITE AHEAD LOG
#---
# - Settings -
#fsync = on
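In 8.1 pg_xlog normally holds up to (2 * checkpoint_segments + 1) segment files, and recycled segments can push it slightly higher, so 38 files with this setting is within the expected range:

```shell
# Expected upper bound on pg_xlog segment files for checkpoint_segments = 18
checkpoint_segments=18
max_segments=$(( 2 * checkpoint_segments + 1 ))
echo "$max_segments"   # prints 37
```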
I have a large database (20 Gb) on PostgreSQL 8.1.0 where I need to make a
copy of the structure and also copy the data from a few of the tables
(approximately 40 out of 140, resulting database is approximately 10 Mb)
which have dependencies on each other. The problem I am having is finding
the
Thanks Milen!
I use -P in bash shell scripts. It's used with pgbench;
see pgbench --help.
~DjK
Hi,
When I was running 8.0.X on linux, I didn't have to include the postgres -P
password in my shell scripts (.sh). After I switched to 8.1.X, I've had to
include the -P PASSWORD ** in all my scripts. I'm thinking I'll just
add PGPASSWORD as a postgres environment variable instead. I di
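That approach can be sketched as below; the value is hypothetical, and 8.1's libpq picks the variable up so psql and pg_dump stop prompting:

```shell
# Export the password once; child psql/pg_dump processes inherit it.
export PGPASSWORD='s3cret'
echo "${PGPASSWORD:+password is set}"   # prints "password is set"
```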