Hi. I was promoting PostgreSQL to an AIX/Oracle shop yesterday; they are
looking to switch to open source to cut their licensing costs, and I was
asked how large a database PostgreSQL supports. Is there an upper
bound on database size and if so, what is it?
Aleksey Tsalolikhin
laurenz.a...@wien.gv.at
wrote:
Aleksey Tsalolikhin wrote:
Hi. I was promoting PostgreSQL to an AIX/Oracle shop yesterday; they
are looking to switch to open
source to cut their licensing costs, and I was asked how large a database
PostgreSQL supports. Is
there an upper bound on database size
Hi, Kenroy. Can you build a test suite so that you can run a performance
test on each platform? I see you will have different hardware and
operating systems.
Best,
Aleksey
On Sun, Mar 24, 2013 at 11:40 AM, Kenroy Bennett bennettk9...@gmail.comwrote:
On an hourly basis 13 tables with
How can I mix psql commands and SQL commands on the shell command line, please?
$ psql
psql (9.2.1)
Type "help" for help.
ddcKeyGen= \timing \\ select count(*) from auth_users;
Timing is on.
count
---
276
(1 row)
Time: 1.730 ms
$ psql -c \timing \\ select count(*) from auth_users;
Timing
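One answer (a sketch, not taken from the thread itself): psql happily mixes meta-commands and SQL when they arrive on stdin, so the pair can be piped in instead of passed to -c. The auth_users table is the one from the session above; the final pipe to psql of course needs a reachable server.

```shell
# Build a script mixing a psql meta-command with SQL; printing it here
# stands in for piping it to psql (which needs a running server):
script='\timing
select count(*) from auth_users;'
printf '%s\n' "$script"
# To actually run it:  printf '%s\n' "$script" | psql
```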
Thank you very much, Adrian and Pavel. And I really appreciate the
documentation reference, Adrian!
Yours fondly,
Aleksey
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
On Fri, Sep 21, 2012 at 12:45 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Fri, Sep 21, 2012 at 11:41 AM, Victor Yegorov vyego...@gmail.com wrote:
It seems that this also matches your explanation, correct me if I'm wrong.
In general, doing select ctid... is a poor way of figuring out
where
On Sun, Sep 2, 2012 at 10:08 PM, Peter Eisentraut pete...@gmx.net wrote:
On Wed, 2012-08-29 at 10:31 -0700, Aleksey Tsalolikhin wrote:
What is the difference between C and en_US.UTF8, please?
There are many differences, but here is a simple one:
$ (echo a; echo A; echo b; echo B) | LC_ALL=C
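The pipeline above is cut off at the sort; a completed sketch (the en_US.UTF8 half assumes that locale is installed on the machine):

```shell
# C locale: strict byte order, so uppercase (A=0x41, B=0x42) sorts
# before lowercase (a=0x61, b=0x62):
(echo a; echo A; echo b; echo B) | LC_ALL=C sort
# prints: A B a b (one per line)

# en_US.UTF8: dictionary order, cases interleaved; typically: a A b B
(echo a; echo A; echo b; echo B) | LC_ALL=en_US.UTF8 sort
```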
Why do I have holes in my pages?
Postgres 8.4.12
select ctid from big_table on my master shows that pages have
holes in them.
Here is example for page 431665:
(431665,2)
(431665,5)
(431665,8)
(431665,11)
(431665,14)
(431665,17)
(431665,20)
(431665,23)
Current DB size is 400 GB and it
On Thu, Sep 20, 2012 at 12:34 PM, Bill Moran wmo...@potentialtech.com wrote:
In response to Aleksey Tsalolikhin atsaloli.t...@gmail.com:
Current DB size is 400 GB and it takes up 2.7 TB (on a 6.6 TB filesystem).
I expect that the first thing that others are going to ask
is what is telling
On Thu, Sep 20, 2012 at 1:53 PM, John R Pierce pie...@hogranch.com wrote:
On 09/20/12 1:34 PM, Aleksey Tsalolikhin wrote:
Right on. I got that out of my pgstatspack report.
\l+ in psql tells me the same thing - 400 GB
it might be interesting to see the output of...
du -hs $PGDATA
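For reference, the two numbers being compared come from different places: \l+ reports pg_database_size(), while du sees everything under $PGDATA (WAL, other databases, leftover files). A sketch of the catalog side:

```sql
-- What \l+ reports for the current database:
select pg_size_pretty(pg_database_size(current_database()));

-- Largest relations, to compare against the du output:
select relname, pg_size_pretty(pg_total_relation_size(relid))
from pg_catalog.pg_statio_user_tables
order by pg_total_relation_size(relid) desc
limit 10;
```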
On Wed, Sep 5, 2012 at 7:38 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Having said that, a kill -9 on an individual backend (*not* the
postmaster) should be safe enough, if you don't mind the fact that
it'll kill all your other sessions too.
Got it, thanks.
Why will it kill all your other
Got it, thanks, Kevin, Tom.
So what about this process that was in 'notify interrupt waiting
waiting' status after I SIGTERM'ed it? Is the double 'waiting'
expected?
Aleksey
On Wed, Sep 5, 2012 at 10:03 AM, Tom Lane t...@sss.pgh.pa.us wrote:
That sounded a bit fishy to me too. But unless you can reproduce it in
something newer than 8.4.x, nobody's likely to take much of an interest.
The LISTEN/NOTIFY infrastructure got completely rewritten in 9.0, so
any bugs in
Hi.
We use LISTEN/NOTIFY quite a bit but today something unusual (bad) happened.
Number of processes waiting for a lock just started to go up up up.
I finally found the object being locked was pg_listener, which,
RhodiumToad on IRC kindly informed me, happens during LISTEN/NOTIFY. The
process that
BTW, after I signalled TERM, the process status changed from
notify interrupt waiting
to
notify interrupt waiting waiting
which I thought looked kind of odd.
Then I signalled KILL.
Aleksey
On Tue, Sep 4, 2012 at 6:21 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
Hi.
We use
On Tue, Sep 4, 2012 at 7:21 PM, John R Pierce pie...@hogranch.com wrote:
On 09/04/12 7:09 PM, Aleksey Tsalolikhin wrote:
BTW, after I signalled TERM, the process status changed from
notify interrupt waiting
to
notify interrupt waiting waiting
which I thought looked kind of odd.
Then I
Hi, Seref. You might want to take a look at Stado:
http://www.slideshare.net/jim_mlodgenski/scaling-postresql-with-stado
Best,
-at
On Wed, Aug 29, 2012 at 9:45 AM, Merlin Moncure mmonc...@gmail.com wrote:
citext unfortunately doesn't allow for index optimization of LIKE
queries, which IMNSHO defeats the whole purpose. So the best way
remains to use lower() ...
this will be index optimized and fast as long as you
On Sun, May 6, 2012 at 10:49 AM, Thomas Kellerer spam_ea...@gmx.net wrote:
You can use multi-line comments with /* .. */ to send this information to
the server:
SELECT /* Query Author: Bob Programmer.
Purpose: Pull the student ID number, we'll need it to enroll the
student for
standard... is anybody
on this list involved with the SQL standard? What do you think about the
idea?
Best regards,
Aleksey Tsalolikhin
Hi. When pg_dump runs, our application becomes inoperative (too
slow). I was going to ask if nice'ing the postgres backend process
that handles the COPY would help but I just realized probably the
pg_dump takes out locks when it runs and nice'ing it would just make
it run longer...
However the
On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
Is there any way to consolidate the pages on the slave without taking
replication offline?
Filip Rembiałkowski suggested: maybe CLUSTER?
Greg Williamson suggested: pg_reorg
Thank you, Filip and Greg
On Fri, Mar 16, 2012 at 2:03 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
I *think* you can get away with only sufficient free space to store the
*new* table and indexes
Yeah; I don't have that much free space. Just 30 GB short. :(
Depending on your schema and which tables
On Tue, Mar 13, 2012 at 5:14 AM, Vick Khera vi...@khera.org wrote:
I'll bet what happened was postgres re-wrote your table for you,
effectively doing a compaction. You can get similar effect by doing
an alter table and changing an INTEGER field to be INTEGER.
Postgres does not optimize that
Executive summary: Why would the TOAST table on the slave have 4x the
page count of the master? Is there a way to compact it if I don't
have enough disk space to duplicate the table? How do I prevent this
situation from recurring?
On Wed, Mar 14, 2012 at 7:38 PM, Scott Marlowe
On Wed, Mar 14, 2012 at 8:06 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
Executive summary: Why would the TOAST table on the slave have 4x the
page count of the master?
Hypothesis: the pages of the TOAST table on the slave have more
unused space in them than the pages
On Wed, Mar 14, 2012 at 9:57 PM, Scott Marlowe scott.marl...@gmail.com wrote:
Are you sure you're checking the toast table that goes with whatever
parent table?
Yep. I find out the relation id of the TOAST table:
select reltoastrelid from pg_class where relname = 'parent_table_name';
Find
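The two-step lookup being described can be collapsed into one join (a sketch; parent_table_name stands in for the real table):

```sql
-- Parent table, its TOAST table, and both page counts in one query:
select c.relname  as parent,
       t.relname  as toast_table,
       c.relpages as parent_pages,
       t.relpages as toast_pages
from pg_class c
join pg_class t on t.oid = c.reltoastrelid
where c.relname = 'parent_table_name';
```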
from
production... if anybody has any other suggestions, I am all ears.
Yours very truly,
Aleksey
On 3/8/12, Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, Mar 8, 2012 at 1:10 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
On Thu, Mar 8, 2012 at 12:10 PM, Aleksey Tsalolikhin
On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x
My biggest table measures 154 GB on the origin, and 533 GB on
the slave.
Why is my slave bigger than my master? How can I compact
On Thu, Mar 8, 2012 at 12:10 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x
My biggest table measures 154 GB on the origin, and 533
On Thu, Feb 9, 2012 at 12:57 AM, Achilleas Mantzios
ach...@smadev.internal.net wrote:
On Thu 09 Feb 2012 02:32:37 Aleksey Tsalolikhin wrote:
I got an alert from check_postgres.pl today on a long-running query on
our production database, but our PostgreSQL 8.4.9 server log, which is
configured
On Tue, Mar 6, 2012 at 7:05 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x
The origin database data/base directory is 197 GB in size.
The slave database data/base directory is 562 GB in size and is
over 75
Dear Joshua,
You wrote:
Try disabling replication on that table and clustering the table and then
re-enabling
replication.
...
Alternatively you could disable replication on that table, truncate the
table, and then
re-enable replication for that table. A concern would be is that it is
On Wed, Mar 7, 2012 at 8:12 PM, Joshua D. Drake j...@commandprompt.com wrote:
check_postgres --action=bloat returns OK [after VACUUM FULL]. So it's not
bloat.
What else could it be?
I would have to double check but I think
check_postgres --action=bloat only checks for dead space, not
We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x
The origin database data/base directory is 197 GB in size.
The slave database data/base directory is 562 GB in size and is
over 75% filesystem utilization which has set off the disk free siren.
My biggest table* measures 154
I got an alert from check_postgres.pl today on a long-running query on
our production database, but our PostgreSQL 8.4.9 server log, which is
configured to log queries over 5 seconds long
(log_min_duration_statement = 5000) does not show the query.
check_postgres.pl showed:
Date/Time: Wed Feb 8
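One likely explanation: log_min_duration_statement only logs a statement when it completes, so a query that check_postgres catches mid-flight may never appear in the log at all. A sketch for watching it live on 8.4 (procpid and current_query became pid and query in 9.2):

```sql
-- Currently running statements, longest-running first (8.4 column names):
select procpid,
       now() - query_start as runtime,
       current_query
from pg_stat_activity
where current_query <> '<IDLE>'
order by runtime desc;
```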
Hi,
We are using Postgres 8.4.8 and it is a very fine database.
We just noticed our pg_dumps (generated with
pg_dump -Fc -n public $our_db_name) does not
include triggers on tables in database $our_db_name.
Should it? (We'd like it to, we are counting on pg_dump to
backup all our data,
On Tue, Sep 13, 2011 at 6:55 PM, Joshua D. Drake j...@commandprompt.com wrote:
On 09/13/2011 04:01 AM, Aleksey Tsalolikhin wrote:
Now that you have streaming replication both async and sync, are you
working on multi-master replication? *excited* Or what's the
roadmap?
I would take a look
Congratulations on the release of 9.1.0!
Lots of great features, I for one can't wait to try out unlogged
tables, that should help a lot in our environment.
Now that you have streaming replication both async and sync, are you
working on multi-master replication? *excited* Or what's the
Hi,
We use the fine Bucardo check_postgres Nagios plugin,
and it reported a CRITICAL level spike in database locks
(171 locks).
I looked closely at my database logs and found an even bigger spike
just a few minutes earlier (208 locks).
I saw 8 EXCLUSIVE locks on table X. All of these
Hi. I use the following query (from
http://wiki.postgresql.org/wiki/Lock_Monitoring)
to monitor locks; and I've got an ExclusiveLock that does not have a relation
name associated with it. What is locked with the ExclusiveLock in this case,
please? (it's between d and e tables below)
psql -U
Dear George,
Do you see this issue on 9.0.3, the current and the recommended 9.x version?
Best,
Aleksey
On Tue, Mar 15, 2011 at 11:38 AM, George Woodring
george.woodr...@iglass.net wrote:
We recently upgraded from 8.3.something to 9.0.1. With 9.0.1, we have a
huge spike in vacuums every 8
On Sat, Mar 5, 2011 at 5:12 PM, Ogden li...@darkstatic.com wrote:
On Mar 5, 2011, at 7:07 PM, Bret Fledderjohn wrote:
I am using A2 Hosting (www.a2hosting.com) which offers 8.4... They are
inexpensive and so far reliable.
Wow, that's super cheap. Is there some catch - $5.57 / month for
On Fri, Mar 4, 2011 at 7:19 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On Friday, March 04, 2011 5:11:04 pm Aleksey Tsalolikhin wrote:
On Fri, Mar 4, 2011 at 4:45 PM, Adrian Klaver adrian.kla...@gmail.com
wrote:
On Friday, March 04, 2011 2:03:23 pm Aleksey Tsalolikhin wrote:
On Fri
On Fri, Mar 4, 2011 at 7:53 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
Weird. The pgstattuple data shows that the tables are essentially the same,
the
only difference being the dead tuples, as expected, on the production table.
The
TOAST size information shows approximately a
On Fri, Mar 4, 2011 at 4:45 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On Friday, March 04, 2011 2:03:23 pm Aleksey Tsalolikhin wrote:
On Fri, Mar 4, 2011 at 7:53 AM, Adrian Klaver adrian.kla...@gmail.com
wrote:
What is the data being stored in the table?
For the main part, it's
On Tue, Mar 1, 2011 at 7:24 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Adrian Klaver adrian.kla...@gmail.com writes:
Looks like the TOAST compression is not working on the second machine. Not
sure
how that could come to be. Further investigation underway:)
Somebody carelessly messed with the
On Sun, Feb 27, 2011 at 2:52 AM, Alban Hertroys
dal...@solfertje.student.utwente.nl wrote:
Thank you for your kind replies.
I noticed in your table definition that you seem to store timestamps in
text-fields. Restoring those from text-fields shouldn't make any difference,
but perhaps your
OK, just to recap:
database A has a table that is 50 GB in size (according to:
SELECT relname as Table,
pg_size_pretty(pg_total_relation_size(relid)) As Size from
pg_catalog.pg_statio_user_tables ORDER BY
pg_total_relation_size(relid) DESC;
)
I pg_dump -Fc this table, which gives me a
Our disk service times and % utilization (according to sar -d) while
running pg_dump are low. For example:
01:23:08 AM  DEV      tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
01:23:09 AM  sda  1473.00      0.00  98128.00     66.62      0.41   0.28   0.03
Hi. Thanks for your replies.
How do I check the fillfactor on the table, please?
(http://www.postgresql.org/docs/8.4/static/sql-createtable.html tells me how
to set it, but I haven't found yet how to read it.)
Same CPU, same filesystem, same blocksize - identical systems. Same model
of
On Thu, Feb 24, 2011 at 11:46 AM, John R Pierce pie...@hogranch.com wrote:
On 02/24/11 11:02 AM, Aleksey Tsalolikhin wrote:
How do I check the fillfactor on the table, please?
its in the field reloptions in pg_class. so...
select reloptions from pg_class where relname='tablename
Hi. We're running Postgres 8.4.4 everywhere.
I already have a pg_dump -Fc of the big table from the source, now
I am running a pg_dump -Fc on the recipient, to see if the size is different.
Then I will run a pg_dump as text, so I can diff the two files if they are
different in size.
Thanks!!
I read in the pg_dump man page that pg_dump does not block other users
accessing the database (readers or writers).
In practice, if I pg_dump our 100 GB database, our application, which
is half Web front end and half OLTP, at a certain point, slows to a
crawl and the Web interface becomes
database. Would like to
understand what is going on. And would like to not have such a swell of data
upon transfer. Is there anything I can do, please?
Best,
Aleksey
On Tue, Feb 22, 2011 at 12:44 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
Hi. Last week our 60 GB database (per psql
Hi. Last week our 60 GB database (per psql \l+) was (re-)replicated to
the DR site using SlonyI, and arrived 109 GB in size which caused a
problem as it filled up the filesystem on the DR server - we expected the
DR database to be the same size. Mystery.
Now just past weekend we upgraded our
Thank you for the discussion.
I'm on Postgres 8.4, and the hardware between Slony master and slave
is identical,
as is the autovacuum config.
We do have transactions that fail to commit, transactions that roll back.
I'm glad to have some idea of the cause of the difference in table size
between
On Mon, Jan 31, 2011 at 7:52 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Jan 31, 2011 at 5:54 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
Situation: Disk usage on production server root filesystem is at 68%
utilization (80 GB used), on DR is at 51% (56 GB used). We
Situation: Disk usage on production server root filesystem is at 68%
utilization (80 GB used), on DR is at 51% (56 GB used). We use
Slony1-1.2.x to keep the DR up to date. I would like to account for
the 24 GB difference.
Symptoms:
1. Database size reported by psql -c '\l+'
Hi. Could you please consider logging size of reply, like Apache
httpd does with its logfile?
We've started having intermittent spikes in network usage (the network
interface on the
DB server is getting maxed out) and I suspect there is a single very
large query (just a
working hypothesis at
How to see what SQL queries are associated with pg_locks, please?
Could somebody help with the query? I can then add it to
http://wiki.postgresql.org/wiki/Lock_Monitoring
Best,
-at
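A sketch of such a join for 8.4, where pg_stat_activity.procpid matches pg_locks.pid (from 9.2 on, the columns are pid and query):

```sql
-- Which backend and SQL statement holds or awaits each lock:
select l.locktype,
       l.relation::regclass,
       l.granted,
       a.procpid,
       a.current_query
from pg_locks l
join pg_stat_activity a on a.procpid = l.pid
order by l.granted, a.procpid;
```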
Yesterday, I had twelve thousand 'cache lookup failed for type N'
messages, like this:
2010-09-20 00:00:00 PDT ERROR: cache lookup failed for type 14237017
2010-09-20 00:00:00 PDT CONTEXT: SQL statement INSERT INTO
mycluster.sl_log_2 (log_origin, log_xid, log_tableid, log_actionseq,
log_cmdtype,
On Tue, Sep 21, 2010 at 2:06 PM, Jens Wilke j...@wilke.org wrote:
On Dienstag, 21. September 2010, Aleksey Tsalolikhin wrote:
Yesterday, I had twelve thousand cache lookup failed for type N
messages, like this:
What does type 14237017 mean?
pg_type oid
Dear Jens,
I am trying
On Tue, Sep 21, 2010 at 3:10 PM, Jens Wilke j...@wilke.org wrote:
On Dienstag, 21. September 2010, Aleksey Tsalolikhin wrote:
So PostgreSQL was trying to look up a row in a system table and
did not find it in a cache.
yes,
select * from pg_type where oid =14237017
Thank you.
Did you
Hi. I've just discovered the check_postgres utility and am running
all the tests against my database.
The last_analyze test comes out critical - many tables unanalyzed
for 8 weeks.
I am running PostgreSQL 8.4.4 with default autovacuum settings. I
thought autovacuum was
supposed to take care of
I just ran the last_autovacuum test of check_postgres, and it reported
that 7 of my 100 tables were last autovacuumed more than 1 day ago;
the oldest autovacuum was 7 weeks ago.
8 more tables in pg_catalog were last autovacuumed more than 1 day ago.
Thanks,
-at
I am sorry for the multiple posts; just noticed there are two tests
for analyze: last_analyze and last_autoanalyze
last_autoanalyze matches last_autovacuum - 7 weeks ago
Aleksey
On Fri, Jun 11, 2010 at 08:43:53AM +0200, Adrian von Bidder wrote:
Just speculation, I've not tried this. Perhaps pipe the output of pg_dump
through software that bandwidth-limits the throughput?
Perhaps. However, moving the pg_dump to a Slony slave has solved my problem.
Thanks!!
On Thu, Jun 10, 2010 at 1:25 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
PostgreSQL 8.4.2 response time slows during pg_dump of a 14 GB database.
...
Is there any way I can keep my response fast but still get my pg_dump?
Or should I move my pg_dump to a Slony-I slave?
Thank you
On Thu, Jun 10, 2010 at 3:41 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
So are you writing the backup to the same disk(s) that support the db? What
happens if you do the dump from a different machine? You'll increase network
traffic, of course, but reduce your disk load. And you
PostgreSQL 8.4.2 response time slows during pg_dump of a 14 GB database.
Specifically, I am monitoring how long it takes to login to our Web
app - usually under 2 seconds, but when pg_dump is running, that
changes to 2 - 18 seconds.
Is there any way I can keep my response fast but still get my
Dear Steve,
Thanks for your suggestions!
I ended up switching from the Enterprise DB installer to the
Postgres PGDG84 YUM repository, since (a) it provides PostgreSQL 8.4.4
AND Slony 1.x (unlike Enterprise DB installer which only has
Slony 2.0.2 which has a potential data loss problem), and
Hi. What does PGDG mean, as in PGDG84, the PostgreSQL repository
containing PostgreSQL 8.4.x, etc?
Thanks,
Aleksey
Hi.
We've been running PostgreSQL 8.4.2 on CentOS 5.4 (64-bit),
installed with the EnterpriseDB installer. This has been excellent.
Now we have our first CentOS 5.5 server (64-bit) and I installed
PostgreSQL 8.4.4 using the EnterpriseDB installer, and it is unable to
start the database
Hi. Need some help getting WAL log archiving going, please.
PostgreSQL 8.4.2
archive_command = '/usr/local/bin/rsync -e /usr/bin/ssh %p
postg...@remoteserver:directory/%f /dev/null'
I am able to login to remoteserver as user postgres using key-based
authentication (trust relationship exists).
Hi. I just found pg_dump errors in my Apache httpd log and am really
confused. Has anybody seen this before?
My syslog.conf does not mention the httpd error_log.
How did the errors get in there?
# grep pg_dump /var/log/httpd/error_log
pg_dump: [archiver] could not open output file:
Thank you for your replies. We don't have a CMS and none of our PHP
scripts call pg_dump so I am concerned about the security of our
system.
On Tue, Nov 3, 2009 at 1:00 PM, Russell Smith mr-r...@pws.com.au wrote:
Aleksey Tsalolikhin wrote:
# grep pg_dump /var/log/httpd/error_log
pg_dump
Hi. I am trying to build pgbench on CentOS 5.3 x86_64.
make complains that it cannot find -lpgport
# cd contrib
# make all
...
make[1]: Entering directory `/home/tsalolia/postgresql-8.3.7/contrib/pgbench'
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline
-Wdeclaration-after-statement
On Wed, Aug 12, 2009 at 4:19 PM, Tom Lanet...@sss.pgh.pa.us wrote:
Aleksey Tsalolikhin atsaloli.t...@gmail.com writes:
Hi. I am trying to build pgbench on CentOS 5.3 x86_64.
make complains that it cannot find -lpgport
# cd contrib
# make all
You need to make the rest of the tree first
On Thu, Apr 2, 2009 at 1:23 AM, Craig Ringer
cr...@postnewspapers.com.au wrote:
You might want to add this as a comment on the interactive version of the
online documentation, too, so it's not lost when revising the docs for 8.4 /
8.5 .
Done, sir. Thanks!
Aleksey
--
Aleksey Tsalolikhin
Dear Tom,
Thanks for your reply and insight! I much appreciate it. I certainly look
forward to getting off FC6! In the meantime, I did get it to work -
I remembered SELinux protects /home directories especially.
So I moved postgres user's home directory from /home/postgres
to /data/postgres,
Hi. We're trying to implement two-phase commit and did not find a
complete working example in the manual. We found examples of the
separate pieces, but not the sequence in which to put them together.
Then we found this text,
PREPARE TRANSACTION is used in place of regular COMMIT to perform
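For what it's worth, a minimal end-to-end sequence assembled from those pieces (a sketch: 'mygid' is an arbitrary global transaction id, accounts is a hypothetical table, and max_prepared_transactions must be greater than zero):

```sql
begin;
insert into accounts (id, balance) values (1, 100);  -- hypothetical table
prepare transaction 'mygid';  -- takes the place of COMMIT; state is persisted
-- later, possibly from a different session:
commit prepared 'mygid';      -- or: rollback prepared 'mygid';
```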
is blocking the connection.
I think I'll ask on the SELinux list as well. But if anybody here
has a clue, please give me a shout.
Best,
-at
--
Aleksey Tsalolikhin
UNIX System Administrator
I get stuff done!
http://www.verticalsysadmin.com/
LinkedIn - http://www.linkedin.com/in/atsaloli
On Tue, Mar 31, 2009 at 6:35 PM, David Wilson david.t.wil...@gmail.com wrote:
On Tue, Mar 31, 2009 at 9:18 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
Could not create directory '/home/postgres/.ssh'.
Host key verification failed.
Have you tested ssh node2 as the postgres user