Hello
I have encountered some PG crashes when dealing with too much
data.
It seems that PG tries to do its task as quickly as it can and will use as
much resource as it can.
Later I tried cgroups to limit resource usage, to keep PG from consuming too
much memory etc. too quickly.
And PG wo
On 13-06-19 10:16 AM, jmfox180 wrote:
so i tried to do a
/usr/local/pgsql.old/bin/postgres -D /usr/local/pgsql.old/data start
LOG: database system was shut down at 2013-06-19 08:10:51 CST
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
it got stuck there ^^
then i tried to
/usr/local/pgsql.o
On 13-06-18 02:06 PM, jmfox180 wrote:
so after trying i advance just a little bit but when running pg_upgrade for a
check or upgrade
i get:
pg_control values:
First log file ID after reset: 0
First log file segment after reset: 2
pg_control version number: 922
Catalog version number: 201204301
Datab
On Sat, Jun 15, 2013 at 5:46 AM, Joshua D. Drake wrote:
> The type of shared memory postgresql uses is rarely used by other systems.
> However, as I recall 9.3 resolves the shmmax issue as a whole so it won't
> be a problem.
>
Yes, by reducing system v shared memory consumption:
http://git.postgre
On 06/14/2013 01:47 PM, Andreas wrote:
On 14.06.2013 20:55, Peter Geoghegan wrote:
On Fri, Jun 14, 2013 at 11:55 AM, Andreas wrote:
How can I get more memory for PG on openSUSE 12.3 ?
http://www.postgresql.org/docs/9.2/static/kernel-resources.html
OK I think that did it :)
I'm just wondering what point it could be to set t
On Fri, Jun 14, 2013 at 11:55 AM, Andreas wrote:
> How can I get more memory for PG on openSUSE 12.3 ?
http://www.postgresql.org/docs/9.2/static/kernel-resources.html
--
Regards,
Peter Geoghegan
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscr
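The kernel-resources page linked above boils down to raising the SysV shared memory limits on pre-9.3 systems. A sketch of the relevant sysctl settings — the values below are illustrative placeholders, not tuned recommendations; shmmax must at least cover PostgreSQL's shared memory segment (roughly shared_buffers plus overhead):

```
# /etc/sysctl.conf -- illustrative values only
kernel.shmmax = 4294967296   # largest single shared memory segment, in bytes (4 GB here)
kernel.shmall = 1048576      # total shared memory allowed, in 4 kB pages (4 GB here)
```

Apply with `sysctl -p` (or reboot). As noted elsewhere in this thread, 9.3 largely sidesteps this by drastically reducing System V shared memory consumption.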
My bad, I'll re-post this to pgsql-general with more data on my running
environment
On Thu, Feb 21, 2013 at 11:16 AM, Ned Wolpert wrote:
> Folks-
>
> I'm doing a postmortem on a corruption event we had. I have an idea on
> what happened, but not sure. I figure I'd share what happened and s
On 02/06/2013 08:09 AM, Dev Kumkar wrote:
Hello Everyone,
I am using postgres 9.2 and when executing function dblink facing a
fatal error while trying to execute dblink_connect as follows:
SELECT * FROM dblink_connect('host=127.0.0.1 port=5432
dbname=postgres password=test')
ERROR: co
On 29.01.2013 18:36, Lonni J Friedman wrote:
On Tue, Jan 29, 2013 at 8:32 AM, Heikki Linnakangas
wrote:
Thanks. I'm afraid I didn't get any wiser from the log output. Since this is
a test system, could you reduce the test case into something smaller and
self-contained?
Sorry, I don't underst
On 26.01.2013 01:28, Lonni J Friedman wrote:
On Tue, Jan 15, 2013 at 2:57 AM, Heikki Linnakangas
wrote:
That process sounds correct. Since you're using pg_basebackup -x option, you
don't even need to copy the WAL logs, although it shouldn't do any harm
either. The tar file should contain ever
On 09.01.2013 20:28, Lonni J Friedman wrote:
Greetings,
I'm running postgres-9.2.2 in a Linux-x86_64 cluster with 1 master and
several hot standby servers. Since upgrading to 9.2.2 from 9.1.x a
few months ago, I switched from generating a base backup on the
master, to generating it on a dedicate
On Sun, 23 Dec 2012 11:08:32 -0500, Kevin Grittner wrote:
> Kong Man wrote:
>
>> The postgresql.conf file has always been using the default value, which
>> is 'warning'.
>
> Maybe it's not using the postgresql.conf file you think it is. Does this
> show the file you've been looking at?:
>
> SHO
Thank you Tom for the suggestion and the workaround.
I think we will be able to hold off on the temporary workaround.
Would the 2nd process in the deadlock show up in pg_locks? (It didn't)
An insert from another table that i created to test the datafile did show up
as blocked.
I got confirmation
amacvar writes:
> In addition here is a backtrace of the process while it hung again today:
> (thanks to our SA Eric)
> #0 0x00396c8d4627 in semop () from /lib64/libc.so.6
> #1 0x005dc313 in PGSemaphoreLock ()
> #2 0x00616586 in LWLockAcquire ()
> #3 0x
In addition here is a backtrace of the process while it hung again today:
(thanks to our SA Eric)
#0 0x00396c8d4627 in semop () from /lib64/libc.so.6
#1 0x005dc313 in PGSemaphoreLock ()
#2 0x00616586 in LWLockAcquire ()
#3 0x004a718c in ginHeapTupleF
On 07/06/2012 01:06 PM, Akash wrote:
@@Craig
I could only wait for 6 hours. I cancelled the process after that.
No locks on any of the tables when running.
That's a very ambiguous statement. I'm assuming you are saying "our code
does not take any explicit locks on those tables using LOCK TABLE
@@Craig
I could only wait for 6 hours. I cancelled the process after that.
No locks on any of the tables when running.
Yes, It was progressing. Of the 15 tables, when I cancelled after 6 hours,
3rd table was getting populated (when checked in pg_stat_activity).
Below are some changes I have in
* Peter Cheung wrote:
I have installed Postgresql using One click installer on a Windows
Server. How can I check whether GSSAPI support is enabled so I can use
SSPI for Windows Authentication?
GSSAPI support is not enabled in the one-click installer, but you also
do not need it for SSPI. Wh
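For reference, SSPI is switched on in pg_hba.conf rather than at build time. A hedged sketch — the address, database, and user columns below are illustrative and should match your own setup:

```
# pg_hba.conf -- authenticate local Windows logins via SSPI (illustrative line)
host    all    all    127.0.0.1/32    sspi
```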
Mitesh Shah wrote:
> *pg_dump: server version: 9.1.2; pg_dump version: 9.0.5*
> *pg_dump: aborting because of server version mismatch*
This is not a bug. Use a version of pg_dump which is at least as
new as the server. The older version of pg_dump is unlikely to be
able to recognize everythi
On Wed, 14 Mar 2012 13:24:36 +0200, Khangelani Gama wrote:
> thanks, the issue we have is that we have many Linux users having root
> access into the system. So they're able to access the DB by just going
> in as "su - superusername". If this user is able to make any updates
> inside the database
On Sat, Mar 03, 2012 at 08:27:19PM +0200, Peter Eisentraut wrote:
> On ons, 2012-02-29 at 16:25 -0500, Ray Stell wrote:
> > On Wed, Feb 29, 2012 at 03:55:13PM -0500, Ray Stell wrote:
> > > http://www.postgresql.org/docs/9.1/static/release-9-1-2.html
> > > This must be repeated in each
> > > databas
On ons, 2012-02-29 at 16:25 -0500, Ray Stell wrote:
> On Wed, Feb 29, 2012 at 03:55:13PM -0500, Ray Stell wrote:
> > http://www.postgresql.org/docs/9.1/static/release-9-1-2.html
> > This must be repeated in each
> > database to be fixed.
>
> including postgres, template0, and template1?
Yes, if y
On Wed, Feb 29, 2012 at 03:55:13PM -0500, Ray Stell wrote:
> http://www.postgresql.org/docs/9.1/static/release-9-1-2.html
> This must be repeated in each
> database to be fixed.
including postgres, template0, and template1?
on rebuilding the information_schema:
http://www.postgresql.org/docs/9.1/static/release-9-1-2.html
...
Since the view definition is installed by initdb, merely upgrading will
not fix the problem. If you need to fix this in an existing installation,
you can (as a superuser) drop the information_sche
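The procedure the release note describes can be sketched in psql as follows; the path to information_schema.sql varies by installation (it lives under your SHAREDIR), so the path shown is an assumption:

```sql
-- as a superuser, in each database to be fixed:
DROP SCHEMA information_schema CASCADE;
\i /usr/local/pgsql/share/information_schema.sql  -- adjust to your SHAREDIR
```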
On Wed, 07 Dec 2011 09:13:10 -0500, Gene Poole wrote:
> This has to do with my personal home environment.
>
> I'm running Oracle 11gR2 DBMS with 4 instances using a single home
> directory. For each of the instances I'm using LVM file systems with 10
> logical volumes defined (/dbmsu00 used for
On 7 December 2011, 15:13, Gene Poole wrote:
> This has to do with my personal home environment.
>
> I'm running Oracle 11gR2 DBMS with 4 instances using a single home
> directory. For each of the instances I'm using LVM file systems with 10
> logical volumes defined (/dbmsu00 used for the install
Re: Виктор Столбин 2011-09-23 <4e7c7da8.6060...@gmail.com>
> Hello!
> I'm trying to create a cluster with an existing data directory on Debian
> 2.6.26-2-686-bigmem. I use PostgreSQL 8.3 installed from apt-get and
> PostgreSQL 9.1 installed from
> [OpenSCG](http://www.openscg.org/se/oscg_download.jsp?fi
On Fri, 2011-09-23 at 16:38 +0400, Виктор Столбин wrote:
> Hello!
First, this list is not for pg_*cluster, but rather for clustering, high
availability and replication solutions
> I'm trying to create a cluster with an existing data directory on Debian
> 2.6.26-2-686-bigmem. I use PostgreSQL 8.3 installe
SELECT a.attname, pg_catalog.format_type(a.atttypid, a.atttypmod)
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = (SELECT pg_class.oid
FROM pg_class INNER JOIN pg_namespace
ON (pg_class.relnamespace = pg_namespace.oid
AND
Thank you for the help... I will give this a try!
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/All-the-functionality-I-need-is-pgsql2shp-exe-isolated-installation-tp4675350p4678033.html
Sent from the PostgreSQL - admin mailing list archive at Nabble.com.
I have set stats_temp_directory to postgresql/data/pg_stat_tmp, but this
message still appears. I also created the pgstat.tmp file in that location
manually, but the file gets removed automatically within a few minutes.
postgresql/data/pg_stat_tmp> touch pgstat.tmp
postgresql/data/pg_stat_tmp> l
Simon Riggs writes:
> On Thu, Jun 16, 2011 at 7:44 AM, bakkiya wrote:
>> Space and permissions are not an issue.
> Then presumably you have your stats_temp_directory parameter set to a
> path where that is not true.
But the error message is showing the specific file path that the stats
collector tri
No, we run postgres with the novell user only.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/could-not-open-temporary-statistics-file-pg-stat-tmp-pgstat-tmp-tp4493683p4493834.html
Sent from the PostgreSQL - admin mailing list archive at Nabble.com.
On 06/16/2011 08:44 AM, bakkiya wrote:
>
> postgresql/data/pg_stat_tmp> touch a
> novell@s210:/pub/var/opt/novell/sentinel/3rdparty/postgresql/data/pg_stat_tmp>
> ls -rtl
> total 96
-rw------- 1 novell novell 91898 2011-06-15 23:36 pgstat.stat
> -r
On Thu, Jun 16, 2011 at 7:44 AM, bakkiya wrote:
> Space and permissions are not an issue.
Then presumably you have your stats_temp_directory parameter set to a
path where that is not true.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training
postgresql/data/pg_stat_tmp> df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdb1      197G   16G  171G   9% /pub
postgresql/data/pg_stat_tmp> touch a
novell@s210:/pub/var/opt/novell/sentinel/3rdparty/postgresql/data/pg_stat_tmp>
ls -rtl
total 96
-rw------- 1 novell nove
Quoting Mark Stosberg :
5. Finally, I'll drop the indexes on the parent table and
truncate it.
Luckily I noticed the problem with TRUNCATE and partitioning before my
work got to production.
TRUNCATE cascades automatically and silently to child tables, which was
not my intent.
This is menti
Scott Marlowe writes:
> On Wed, May 4, 2011 at 11:04 AM, Mark Stosberg wrote:
>> Further, since TRUNCATE permanently and instantly deletes mass amounts
>> of data, I would hope that it would provide "safety" by default, but
>> only truncating one table unless I specify otherwise.
The reason it w
On Wed, May 4, 2011 at 11:04 AM, Mark Stosberg wrote:
> It is not as findable as it could be then. Besides scanning the page, I
> also searched for "child", "parent" and "partition", and none of those
> words are mentioned. Neither is "inherit". Pulling out "ONLY" to have
> its own "Parameter" su
On 05/04/2011 12:54 PM, Scott Marlowe wrote:
> On Wed, May 4, 2011 at 10:48 AM, Mark Stosberg wrote:
>>
5. Finally, I'll drop the indexes on the parent table and
truncate it.
>>
>> Luckily I noticed the problem with TRUNCATE and partitioning before my
>> work got to production.
>>
>> TRU
On Wed, May 4, 2011 at 10:48 AM, Mark Stosberg wrote:
>
>>> 5. Finally, I'll drop the indexes on the parent table and
>>> truncate it.
>
> Luckily I noticed the problem with TRUNCATE and partitioning before my
> work got to production.
>
> TRUNCATE cascades automatically and silently to child tabl
>> 5. Finally, I'll drop the indexes on the parent table and
>> truncate it.
Luckily I noticed the problem with TRUNCATE and partitioning before my
work got to production.
TRUNCATE cascades automatically and silently to child tables, which was
not my intent.
This is mentioned here:
http://wiki.
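The gotcha being described is easy to demonstrate; a minimal sketch with hypothetical table names:

```sql
CREATE TABLE activity (id bigint, created_at timestamptz);
CREATE TABLE activity_2011_04 () INHERITS (activity);

TRUNCATE activity;       -- truncates the parent AND every child table
TRUNCATE ONLY activity;  -- truncates just the parent; children keep their rows
```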
On 04/27/2011 03:35 PM, Mark Stosberg wrote:
In particular, I wanted to check whether the UPDATE statement would
alter all the rows automatically, or if the underlying trigger would
cause all the rows to be processed one at a time.
It appears from my test that the result of the UPDATE was going to
a
On Wed, Apr 27, 2011 at 6:26 PM, Scott Marlowe wrote:
> I had a similar problem about a year ago. The parent table had about
> 1.5B rows each with a unique ID from a bigserial. My approach was to
> create all the child tables needed for the past and the next month or
> so. Then, I simply did so
I had a similar problem about a year ago. The parent table had about
1.5B rows each with a unique ID from a bigserial. My approach was to
create all the child tables needed for the past and the next month or
so. Then, I simply did something like:
begin;
insert into table select * from only tabl
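A hedged sketch of the move-rows-from-ONLY-the-parent pattern under discussion; the table and column names are hypothetical, since the original statement is cut off above:

```sql
BEGIN;
-- read from ONLY the parent so already-moved child rows aren't re-selected,
-- then remove the moved rows from the parent:
INSERT INTO activity_2011_04
  SELECT * FROM ONLY activity
  WHERE created_at >= '2011-04-01' AND created_at < '2011-05-01';
DELETE FROM ONLY activity
  WHERE created_at >= '2011-04-01' AND created_at < '2011-05-01';
COMMIT;
```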
On Wed, Apr 27, 2011 at 04:17:16PM -0400, Mark Stosberg wrote:
> On 04/27/2011 10:48 AM, Mark Stosberg wrote:
> >
> > Hello,
> >
> > I'm working on moving a table with over 30 million rows to be
> > partitioned. The table sees several inserts per second. It's
> > essentially an activity log
On 04/27/2011 10:48 AM, Mark Stosberg wrote:
>
> Hello,
>
> I'm working on moving a table with over 30 million rows to be
> partitioned. The table sees several inserts per second. It's
> essentially an activity log that only sees insert activity and is
> lightly used for reporting, such that
> Similar posting on partition tables; take these inputs before going
> forward with a partition table.
>
> http://archives.postgresql.org/pgsql-general/2011-04/msg00808.php
>
> Best solution given by Greg Smith as well Vick.
Thanks for the replies.
Today I reviewed the section on partitioning fro
On Wed, Apr 20, 2011 at 8:39 PM, Mlondolozi Ncapayi wrote:
> Thank Sir, I have managed to uninstall it.
> Now I installed PostgreSQL 8.4.4 and PostGIS 1.4.2.
>
> Can you please send me command prompts to load shapefiles using Windows 7.
>
> I want to load cities.shp into template_postgis14 [dbname
On Wed, Apr 20, 2011 at 12:32 PM, Mlondolozi Ncapayi wrote:
> Hi there
>
> I installed PostgreSQL 8.4 and now I want to delete/uninstall it
> completely to start a new fresh installation.
> Can you please give me clear instructions on how to do that or maybe a
> script that I can run.
> I am doi
I too am seeing the same issue. Does a work-around exist?
Bob
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Intallation-of-postgres-9-0-3-gives-an-error-on-already-installed-postgres-8-4-on-the-same-machine-tp3396389p4298956.html
Sent from the PostgreSQL - admin maili
Gerhard Hintermayer wrote:
> This should of course be rsync -a ... . Much better now :-)
Yeah, you need to pick the right options. The other important thing
is to use a daemon, but from the :: in the remote target
specification, it appears you're already on that. :-)
-Kevin
This should of course be rsync -a ... . Much better now :-)
Sorry for this extra round.
On Tue, Apr 12, 2011 at 2:32 PM, Gerhard Hintermayer <
gerhard.hinterma...@gmail.com> wrote:
> my basebackup is done via the following ($1 is the parameter for the
> server where the basebackup is taken from)
I managed to move my indices to btree and now at least data/queries
are consistent after hot standby takeover.
What I'm still worried about is the time it takes to rsync the
basebackup to another machine (~10 mins over 100MBit LAN).
sent 4669370 bytes received 3287764 bytes 13590.32 bytes/sec
t
Gerhard Hintermayer wrote:
> Just checked my indices, looks like the only table in all my 5
> DB's that has a hash index is the one I ran tests on.
Well, actually that's pretty lucky. If you'd tested with other
indexes, you might have gone live with the broken index. I would
seriously consid
Just checked my indices, looks like the only table in all my 5 DB's
that has a hash index is the one I ran tests on. Should play in the
lottery this week :-) Will check this tomorrow and recreate the
affected hash indices. As the history of this db is quite long, I
really just can't remember how t
Gerhard Hintermayer wrote:
> I have e.g. a table with:
> Indexes:
> "idx_auftrag_l1" hash (a_nr)
Any *hash* index will need to be rebuilt. Like that one.
> Seeing this and reading the docs about the caveats I see no other
> solution as to REINDEX all of my DB's :-(
As the docs say,
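Rebuilding can be done per index or per database; a sketch using the index name quoted in the message above ("mydb" is a placeholder for the database name):

```sql
REINDEX INDEX idx_auftrag_l1;  -- rebuild the one hash index named above
REINDEX DATABASE mydb;         -- or rebuild everything in the current database
```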
On Mon, Apr 11, 2011 at 7:25 PM, Kevin Grittner
wrote:
> Gerhard Hintermayer wrote:
>
>> Because tests & docs say so:
>>
> http://www.postgresql.org/docs/9.0/static/continuous-archiving.html#CONTINUOUS-ARCHIVING-CAVEATS
>
> I asked because I didn't remember any mention of hashed indexes. I
> sti
Gerhard Hintermayer wrote:
> Because tests & docs say so:
>
http://www.postgresql.org/docs/9.0/static/continuous-archiving.html#CONTINUOUS-ARCHIVING-CAVEATS
I asked because I didn't remember any mention of hashed indexes. I
still don't know for sure that you have any, much less that you have
Because tests & docs say so:
http://www.postgresql.org/docs/9.0/static/continuous-archiving.html#CONTINUOUS-ARCHIVING-CAVEATS
;-) Docs say that this might be fixed sometime.
Also when I try a SELECT statement on the new primary, which makes use
of an index, I get an empty result, even though the t
Gerhard Hintermayer wrote:
> Unfortunately I had to insert 2.1 reindex database [for all
> databases] after creating the trigger file on the new dedicated
> primary
Why?
-Kevin
On Thu, Apr 7, 2011 at 11:40 AM, Gerhard Hintermayer
wrote:
> Hi,
> I'm trying to set up at least 3 servers using hot standby streaming
> replication. I'd like to have one primary and 2 secondary (on 2
> different locations in case of a desaster in the server room).
> A primary
> B secondary 1
> C
On Apr 9, 4:31 pm, sfr...@snowman.net (Stephen Frost) wrote:
> Running a database-wide 'vacuum freeze;' will either succeed (and all
> will be well), or it'll generate the errors in the message (in which
> case you'll need to restore the CLOG files, as at the wiki page, and
> then re-run the 'vacuu
--- On Thu, 4/7/11, H S wrote:
From: H S
Subject: Re: [ADMIN] Re: [ADMIN] connecting to OS user in the same db
To: "raghu ram"
Date: Thursday, April 7, 2011, 9:07 PM
CREATE USER statement to create and configure a database user, which is an
account through which you can lo
Have you tried moving the FROM ... WHERE ... into a sub-select? GROUP BY uses
HAVING, not WHERE.
- Reply message -
From: "French, Martin"
Date: Wed, Apr 6, 2011 7:44 pm
Subject: [ADMIN] Out Of Memory 8.1
To:
Hi All,
I am having problems with a query on 8.1 running on
RHEL 5.4
16GB
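The sub-select suggestion above, illustrated with a hypothetical query (the original 8.1 query isn't shown in this preview):

```sql
-- aggregate-level filter with HAVING:
SELECT customer_id, count(*)
FROM orders
GROUP BY customer_id
HAVING count(*) > 10;

-- row-level filter pushed into a sub-select before aggregation:
SELECT o.customer_id, count(*)
FROM (SELECT customer_id FROM orders WHERE status = 'open') AS o
GROUP BY o.customer_id;
```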
On Thu, Apr 7, 2011 at 8:42 AM, H S wrote:
> Hi Sirs,
>
> I would like more than one OS user can access the same DB in the same
> cluster.
>
> For a user like postgres I had these commands:
>
> # adduser postgres
> # mkdir /usr/local/pgsql/data
> # chown postgres /usr/local/pgsql/data
> # su -pos
How recent is the vacuum in this DB?
- Reply message -
From: "Rajendra prasad"
Date: Thu, Mar 31, 2011 11:01 pm
Subject: [ADMIN] Index size growing
To:
Hi,
I have a 50 GB database folder for postgres. Out of 50 GB, 20 GB has been
occupied by 8 indexes of a single table, growing up to
On Thu, Mar 3, 2011 at 2:16 AM, Heikki Linnakangas
wrote:
> On 03.03.2011 09:12, daveg wrote:
>>
>> Question: what would be the consequence of simply patching out the setting
>> of this flag? Assuming that the incorrect PD_ALL_VISIBLE flag is the only
>> problem (big assumption perhaps) then simpl
On 03.03.2011 09:12, daveg wrote:
Question: what would be the consequence of simply patching out the setting
of this flag? Assuming that the incorrect PD_ALL_VISIBLE flag is the only
problem (big assumption perhaps) then simply never setting it would at least
avoid the possibility of returning wr
On Mon, Feb 28, 2011 at 10:32 PM, Greg Stark wrote:
> On Tue, Mar 1, 2011 at 1:43 AM, David Christensen wrote:
>> Was this cluster upgraded to 8.4.4 from 8.4.0? It sounds to me like a known
>> bug in 8.4.0 which was fixed by this commit:
>>
>
> The reproduction script described was running vacu
On Tue, Mar 1, 2011 at 1:43 AM, David Christensen wrote:
> Was this cluster upgraded to 8.4.4 from 8.4.0? It sounds to me like a known
> bug in 8.4.0 which was fixed by this commit:
>
The reproduction script described was running vacuum repeatedly. A
single vacuum ought to be sufficient to cl
Wicked! Great answer. Thanks.
- Reply message -
From: "Frank Heikens"
Date: Thu, Feb 24, 2011 3:54 pm
Subject: [ADMIN] sequence numbers under pg concurrence model
To: "scorpda...@hotmail.com"
Cc: "pgsql-admin@postgresql.org"
Op 24 feb 2011, om 20:49 heeft scorpda...@hotmail.com he
Hi!
On 01/03/11 22:24, Tom Lane wrote:
I wonder whether you're failing to copy the backup_label file as part of
the base backup. The presence of that file is what tells the slave
postmaster where it has to start recovering from.
regards, tom lane
Thank you so much
Imre Oolberg wrote:
> I am starting to think more in the direction of what Kevin Grittner
> suggested: that I am missing something in my procedure and, instead of
> following the recovery path so to say, I am doing something else which
> strangely ends up with a working database process but actually
> misses some d
Imre Oolberg writes:
> First of all, thanks for your time dealing with my situation. I must
> stress that I have WAL archived starting from before issuing
> pg_start_backup, WAL archives generated before pg_start/stop_backup and
> also some WAL archive files generated after pg_stop_backup (and
Hi!
On 01/03/11 20:56, Tom Lane wrote:
Scott Ribe writes:
Unless I misread his earlier post, he is starting the backup, copying files, stopping
the backup, and expecting those transactions (between start & stop) to be applied,
even though files changed during that time (including WAL fil
Scott Ribe writes:
> Unless I misread his earlier post, he is starting the backup, copying files,
> stopping the backup, and expecting those transactions (between start & stop)
> to be applied, even though files changed during that time (including WAL
> files) have not all been copied. Again, u
On Jan 3, 2011, at 11:42 AM, Tom Lane wrote:
> That's either incorrect or poorly worded.
Let's go with poorly worded then. (Actually I don't do this, I use streaming
replication, where I don't have to copy the WAL files because the standby picks
up the ones that are needed.)
Unless I misread h
Scott Ribe writes:
> On Jan 3, 2011, at 11:22 AM, Imre Oolberg wrote:
>> But http://www.postgresql.org/docs/9.0/interactive/continuous-archiving.html
>> suggests to use tar or rsync and I guess that PostgreSQL recovery with WAL
>> files takes care of these inconsistencies that are created during
Scott Ribe wrote:
> Yes, but the database is recovered to the consistent state as of
> the pg_start_backup command, as I pointed out to you before.
> Results of transactions that commit after the pg_start_backup
> command will not be in the backed up database.
I doubt you. The transactions be
On Jan 3, 2011, at 11:22 AM, Imre Oolberg wrote:
> But http://www.postgresql.org/docs/9.0/interactive/continuous-archiving.html
> suggests to use tar or rsync and I guess that PostgreSQL recovery with WAL
> files takes care of these inconsistencies that are created during copying
> filesystem,
Hi!
Thanks for the answer.
On 01/03/11 18:36, pasman pasmański wrote:
The problem exists when you do a base backup with pg_backup?
Maybe rsync skips some data?
Yes, the problem exists using pg_start/stop_backup as I described.
I am sure that 'rsync -avH ...' and 'tar cvf ...' will skip at l
The problem exists when you do a base backup with pg_backup?
Maybe rsync skips some data?
pasman
27.10.2010 t...@sss.pgh.pa.us wrote:
> =?utf-8?Q?Martin_Povolny?= writes:
>> I had 5 databases; 4 dumped OK, but the 5th, the largest, failed to dump: I
>> was unable to
>> make a dump in the default 'tar' format. I got this message:
>> pg_dump: [tar archiver] archive member too large for tar forma
Tom,
Thanks for getting in touch. Unfortunately, I thought to myself "why
not drop the db in single mode under database postgres", which I did,
and which worked, and thus, I can no longer produce the error, nor can
I query the "phantom" table as you suggested. I can say that when I
tried to vacuu
Robert Burgholzer writes:
> So, as I mentioned, I have tried to get this straightened out by
> vacuuming all in --single mode, but to no avail. I executed the
> following command, to see which tables were in trouble:
> SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r';
> And f
So, as I mentioned, I have tried to get this straightened out by
vacuuming all in --single mode, but to no avail. I executed the
following command, to see which tables were in trouble:
SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r';
And found a table listed that DOES NOT ex
Al Eridani wrote:
> Is it possible to modify the size of pg_xlog? How? Where?
The size of each file within the directory, or the number of files
in the directory?
-Kevin
This is an English language list, Mariano.
The question is
Is it possible to modify the size of pg_xlog? How? Where?
--
Al
2010/6/2 mario martinez hernandez :
> Good afternoon.
> My name is Mariano.
> My question is:
> Is it possible to modify the size of pg_xlog? How? Where?
> Thanks
> Un
1- Of course, we could. Problem appeared in both 8.1.8 and 8.1.15 after
an initial upgrade.
Today the database is not online due to a migration to another technology.
It's not our database (a customer one) so we have no access to it at
this moment.
2- One of the major on-line shops of Spain. This onl
Try this:
pg_dump -a source_db | psql target_db
Sam
From: pgsql-admin-ow...@postgresql.org
[mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Khangelani Gama
Sent: Tuesday, 25 May 2010 5:45 PM
To: pgsql-admin@postgresql.org
Subject: [ADMIN] Quickest command
Thanks for the ideas
-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: Friday, April 09, 2010 8:03 PM
To: pgsql-admin@postgresql.org; Khangelani Gama
Subject: RE: [ADMIN] Re: database restoration problem- data became incorrect
caused by incorrect date in
Khangelani Gama wrote:
> the backups we have uses pg_dump command. The automatic
> backup dumps file happen daily and gets saved in a certain
> directory
No off-site copies or archives to fall back on? If not, my ideas
are:
(1) I hope you made a file copy as soon as you found the problem.
te in server was
incorrect.
Thanks
-Original Message-
From: pgsql-admin-ow...@postgresql.org
[mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Kevin Grittner
Sent: Friday, April 09, 2010 4:14 PM
To: pgsql-admin@postgresql.org; Khangelani Gama
Subject: [ADMIN] Re: database restor
Khangelani Gama wrote:
> We are still using 7.3.4 PostgreSQL
Do you really mean that, or is that a typo?
What sort of backups do you have? (pg_dump, file system copy, etc.)
-Kevin
To add on to previous comments:
There is only one database in the server and the backup server also has
incorrect data because all the wrong data replicated to the backup server.
From: pgsql-admin-ow...@postgresql.org
[mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Khangelani Gama
Sen
Please don't cc two of the lists here. It makes things difficult for
users who only subscribe to one list or the other who reply--their post
to the other list will be held for moderation. And that's a pain for
the moderators too. In this case, either the pgsql-admin or
pgsql-performance list
1 - 100 of 590 matches