Re: [GENERAL] error postgresql remote access windows 7

2013-05-27 Thread chiru r
Arvind,

It seems this is a firewall issue. The port on the server side (where
PostgreSQL is installed; 5432 by default) is not open for access to the
Postgres instance from client machines. Please stop the firewall on the
Windows 7 machine and try to connect from the client again.
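
Rather than stopping the firewall entirely, you can instead just open the
port. A sketch for Windows 7 (run from an elevated command prompt; the
rule name is arbitrary and the port should match your install):

netsh advfirewall firewall add rule name="PostgreSQL 5432" dir=in action=allow protocol=TCP localport=5432

You can then test from the client with something like:

psql -h <server-ip> -U postgres -d postgres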

Best Regards,
Chiru


On Mon, May 27, 2013 at 11:01 AM, Arvind Singh arvin...@hotmail.com wrote:

 I have installed PostgreSQL server 9.3 on a Windows 7 network for the
 first time.

 The systems are connected on wifi network.

 Server is installed and working normally.

 configured pghba with
 host    all    all    0.0.0.0/0    md5
 host    all    all    ::1/128      md5

 configured postgresql.conf with
 listen_addresses = '*'


 I am using pgAdmin III to access the server:
 - From the server it works fine
 - But from the client it cannot find the server.

 The client can access shared folders on the server, and even ping is OK
 from both sides.

 Any hint or a checklist would help.

 regards
 arvind



[GENERAL] Seq Scan cost shown to be (cost=10000000000.00..10000000001.10)

2013-05-27 Thread Amit Langote
Hello,

I set enable_seqscan=off and also accidentally dropped the only index
on a table (actually, via drop extension pg_bigm cascade) and observe
the following:

postgres=# explain select * from testdata where name like '%gi%';
                                  QUERY PLAN
------------------------------------------------------------------------------
 Seq Scan on testdata  (cost=10000000000.00..10000000001.10 rows=2 width=71)
   Filter: (name ~~ '%gi%'::text)
(2 rows)

Although I suspect the combination (dropped index + enable_seqscan=off)
causes this, is the cost shown in the explain output some kind of
default maximum for such abnormal cases?



--
Amit Langote


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Seq Scan cost shown to be (cost=10000000000.00..10000000001.10)

2013-05-27 Thread Sergey Konoplev
On Mon, May 27, 2013 at 12:42 AM, Amit Langote amitlangot...@gmail.com wrote:
 I set enable_seqscan=off and also accidentally dropped the only index
[...]
  Seq Scan on testdata  (cost=10000000000.00..10000000001.10 rows=2 width=71)
[...]
 Although I suspect the combination (dropped index + enable_seqscan=off)
 causes this, is the cost shown in the explain output some kind of
 default maximum for such abnormal cases?

When you set enable_xxx=off, it does not actually disable the xxx
operation; it just sets the operation's startup cost to a very high
value (10000000000).
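
The value comes from the planner's disable_cost constant, which is 1.0e10
in src/backend/optimizer/path/costsize.c. A minimal sketch to see it,
assuming a table with no usable index:

SET enable_seqscan = off;
EXPLAIN SELECT * FROM testdata WHERE name LIKE '%gi%';
-- startup and total cost are inflated by 10000000000 because the
-- seq scan is the only plan left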

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Seq Scan cost shown to be (cost=10000000000.00..10000000001.10)

2013-05-27 Thread Victor Yegorov
2013/5/27 Amit Langote amitlangot...@gmail.com

 Although I suspect the combination (dropped index + enable_seqscan=off)
 causes this, is the cost shown in the explain output some kind of
 default maximum for such abnormal cases?


When one uses “enable_” settings to adjust planner behavior, PostgreSQL
just assigns really high costs to the affected operations (like the one
you see).

As a SeqScan is the only possible way to execute your query, it is
still chosen.

Somewhat related thread:
http://www.postgresql.org/message-id/4d5b06ac.2020...@lelarge.info


-- 
Victor Y. Yegorov


Re: [GENERAL] Seq Scan cost shown to be (cost=10000000000.00..10000000001.10)

2013-05-27 Thread Amit Langote
 Although I suspect the combination (dropped index + enable_seqscan=off)
 causes this, is the cost shown in the explain output some kind of
 default maximum for such abnormal cases?

 When you set enable_xxx=off, it does not actually disable the xxx
 operation; it just sets the operation's startup cost to a very high
 value (10000000000).


Oh, okay, thanks!

--
Amit Langote


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Seq Scan cost shown to be (cost=10000000000.00..10000000001.10)

2013-05-27 Thread Amit Langote
 When one uses “enable_” settings to adjust planner behavior, PostgreSQL
 just assigns really high costs to the affected operations (like the one
 you see).

 As a SeqScan is the only possible way to execute your query, it is
 still chosen.


I get it. Thanks!

--
Amit Langote


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Document Management System (DMS)

2013-05-27 Thread P. Broennimann
Hi there

I am looking for an open-source document management system (DMS) based on
PostgreSQL.

Anyone has experience with such tools?

Thanks & cheers,
Peter


Re: [GENERAL] Success stories of PostgreSQL implementations in different companies

2013-05-27 Thread Wolfgang Keller
 Even with that, some clients are being encouraged to change to
 PostgreSQL to lower their companies' technology costs, but very
 often they ask if there are success stories of PostgreSQL
 implementations in companies in our region or around the world,
 success stories (if possible) with some information like the number of
 concurrent users, some hardware specs, or storage size.

Not a company, but a pretty big installation, I guess: French Caisse
Nationale des Allocations Familiales (welfare agency) is running on
PostgreSQL:

123 local offices all over France
11 million families and 30 million people as customers
69 bio EUR annual turnover
168 databases, 4TB all databases together, largest database is 250 GB
1 bio SQL statements a day

Sincerely,

Wolfgang


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Success stories of PostgreSQL implementations in different companies

2013-05-27 Thread ocalderon
Thank you Wolfgang, just one question: what does "bio" mean? In the part
that says 69 bio EUR...

Regards.
Sent from my BlackBerry® wireless device from Telecom.

-Original Message-
From: Wolfgang Keller felip...@gmx.net
Sender: pgsql-general-owner@postgresql.org
Date: Mon, 27 May 2013 17:15:41
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Success stories of PostgreSQL implementations in
 different companies

 Even with that, some clients are being encouraged to change to
 PostgreSQL to lower their companies' technology costs, but very
 often they ask if there are success stories of PostgreSQL
 implementations in companies in our region or around the world,
 success stories (if possible) with some information like the number of
 concurrent users, some hardware specs, or storage size.

Not a company, but a pretty big installation, I guess: French Caisse
Nationale des Allocations Familiales (welfare agency) is running on
PostgreSQL:

123 local offices all over France
11 million families and 30 million people as customers
69 bio EUR annual turnover
168 databases, 4TB all databases together, largest database is 250 GB
1 bio SQL statements a day

Sincerely,

Wolfgang


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Understanding postgresql logs

2013-05-27 Thread igivanoff
I have a PostgreSQL 9.0/9.2 which from time to time hits memory issues.
I know the best approach is to monitor DB performance and activity, but
in the log files I see error messages similar to:


TopMemoryContext: 221952 total in 17 blocks; 7440 free (41 chunks); 214512
used
  TopTransactionContext: 8192 total in 1 blocks; 6384 free (0 chunks); 1808
used
CurTransactionContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  Type information cache: 24240 total in 2 blocks; 3744 free (0 chunks);
20496 used
  RI compare cache: 24576 total in 2 blocks; 15984 free (5 chunks); 8592
used
  RI query cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used
  TableSpace cache: 8192 total in 1 blocks; 3216 free (0 chunks); 4976 used
  Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
  MessageContext: 65536 total in 4 blocks; 7200 free (8 chunks); 58336 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512
used
  smgr relation table: 57344 total in 3 blocks; 34320 free (10 chunks);
23024 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks);
32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 8160 free (1 chunks); 32 used
  Relcache by OID: 24576 total in 2 blocks; 11792 free (3 chunks); 12784
used
  CacheMemoryContext: 1359224 total in 23 blocks; 271648 free (1 chunks);
1087576 used
. . .
pg_amproc_fam_proc_index: 3072 total in 2 blocks; 1224 free (2 chunks);
1848 used
pg_foreign_server_name_index: 3072 total in 2 blocks; 1704 free (4
chunks); 1368 used
pg_attribute_relid_attnam_index: 3072 total in 2 blocks; 1568 free (2
chunks); 1504 used
pg_conversion_oid_index: 3072 total in 2 blocks; 1704 free (4 chunks);
1368 used
pg_user_mapping_user_server_index: 3072 total in 2 blocks; 1568 free (2
chunks); 1504 used
pg_conversion_name_nsp_index: 3072 total in 2 blocks; 1568 free (2
chunks); 1504 used
pg_authid_oid_index: 3072 total in 2 blocks; 1656 free (3 chunks); 1416
used
pg_auth_members_member_role_index: 3072 total in 2 blocks; 1568 free (2
chunks); 1504 used
pg_tablespace_oid_index: 3072 total in 2 blocks; 1656 free (3 chunks);
1416 used
pg_database_datname_index: 3072 total in 2 blocks; 1656 free (3 chunks);
1416 used
pg_auth_members_role_member_index: 3072 total in 2 blocks; 1568 free (2
chunks); 1504 used
pg_database_oid_index: 3072 total in 2 blocks; 1656 free (3 chunks);
1416 used
pg_authid_rolname_index: 3072 total in 2 blocks; 1656 free (3 chunks);
1416 used
  MdSmgr: 8192 total in 1 blocks; 5440 free (0 chunks); 2752 used
  tokenize file cxt: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  hba parser context: 3072 total in 2 blocks; 512 free (2 chunks); 2560 used
  LOCALLOCK hash: 24576 total in 2 blocks; 13920 free (4 chunks); 10656 used
  Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
  ErrorContext: 8192 total in 1 blocks; 8160 free (4 chunks); 32 used
  1 tm:2013-05-18 11:21:58.274 UTC db:VCDB pid:29515 ERROR:  out of
memory
  2 tm:2013-05-18 11:21:58.274 UTC db:VCDB pid:29515 DETAIL:  Failed on
request of size 40.

I've searched around for documentation/books/posts on this topic, but I
failed to find detailed information about what TopMemoryContext,
TopTransactionContext, CurTransactionContext, etc. are, and how those
values should be interpreted.

Can someone point me to a book or documentation that might be helpful?

Thanks!



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Understanding-postgresql-logs-tp5757000.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to know the exact time to which the database was recovered

2013-05-27 Thread Nikhil G Daddikar

Hello,

We use the basebackup + WAL files strategy to back up our database, i.e.
take a basebackup every day and copy WAL files to a remote server every
15 minutes. In case of a disaster on the master, we'd recover the
database on the slave. If this happens, I would like to tell the
customer the exact time up to which the database was recovered. How do I
get this timestamp?
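
One function I came across that might be relevant here (I have not
verified this myself) is pg_last_xact_replay_timestamp(), which on a
standby reports the commit time of the last transaction replayed during
recovery:

postgres=# SELECT pg_last_xact_replay_timestamp();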


Thanks.
Nikhil


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] search_path for database vs user vs session

2013-05-27 Thread Moshe Jacobson
How can I show the value of search_path for the database, the user and the
session?
I ask because I cannot explain the following:

$ psql -U postgres -d ises
psql (9.1.4)
Type "help" for help.

postgres@moshe=devmain:ises=# show search_path;
    search_path
-------------------
 public, audit_log
(1 row)

postgres@moshe=devmain:ises=# alter database ises set search_path to
public, auditlog;
ALTER DATABASE

postgres@moshe=devmain:ises=# \q
$ psql -U postgres -d ises
psql (9.1.4)
Type "help" for help.

postgres@moshe=devmain:ises=# show search_path;
    search_path
-------------------
 public, audit_log
(1 row)


Thanks.

-- 
Moshe Jacobson
Nead Werx, Inc. | Manager of Systems Engineering
2323 Cumberland Parkway, Suite 201 | Atlanta, GA 30339
mo...@neadwerx.com | www.neadwerx.com

Quality is not an act, it is a habit. -- Aristotle


Re: [GENERAL] search_path for database vs user vs session

2013-05-27 Thread Adrian Klaver

On 05/27/2013 11:29 AM, Moshe Jacobson wrote:

How can I show the value of search_path for the database, the user and
the session?
I ask because I cannot explain the following:

$ psql -U postgres -d ises
psql (9.1.4)
Type "help" for help.

postgres@moshe=devmain:ises=# show search_path;
     search_path
 -------------------
  public, audit_log
(1 row)

postgres@moshe=devmain:ises=# alter database ises set search_path
to public, auditlog;
ALTER DATABASE

postgres@moshe=devmain:ises=# \q
$ psql -U postgres -d ises
psql (9.1.4)
Type "help" for help.

postgres@moshe=devmain:ises=# show search_path;
     search_path
 -------------------
  public, audit_log
(1 row)


Is the below what you are looking for?

http://www.postgresql.org/docs/9.2/static/runtime-config-client.html

The current effective value of the search path can be examined via the 
SQL function current_schemas (see Section 9.25). This is not quite the 
same as examining the value of search_path, since current_schemas shows 
how the items appearing in search_path were resolved.


Section 9.25:

http://www.postgresql.org/docs/9.2/static/functions-info.html
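
For example, a quick sketch (the schemas listed will of course depend on
your actual settings):

ises=# SELECT current_schemas(true);
        current_schemas
-------------------------------
 {pg_catalog,public,audit_log}
(1 row)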




Thanks.




--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] search_path for database vs user vs session

2013-05-27 Thread Moshe Jacobson
On Mon, May 27, 2013 at 2:47 PM, Adrian Klaver adrian.kla...@gmail.com wrote:

 Is the below what you are looking for?

 http://www.postgresql.org/docs/9.2/static/runtime-config-client.html

 http://www.postgresql.org/docs/9.2/static/functions-info.html


Adrian,

I'd like to know how to see the search_path setting attached to a
particular user/role independent of the session.

Thanks.

-- 
Moshe Jacobson
Nead Werx, Inc. | Manager of Systems Engineering
2323 Cumberland Parkway, Suite 201 | Atlanta, GA 30339
mo...@neadwerx.com | www.neadwerx.com

Quality is not an act, it is a habit. -- Aristotle


Re: [GENERAL] search_path for database vs user vs session

2013-05-27 Thread Moshe Jacobson
On Mon, May 27, 2013 at 3:07 PM, Moshe Jacobson mo...@neadwerx.com wrote:

 I'd like to know how to see the search_path setting attached to a
 particular user/role independent of the session.


Oh, and I'd also like to see the current setting of the database so I know
what will happen if I clear the user setting.


-- 
Moshe Jacobson
Nead Werx, Inc. | Manager of Systems Engineering
2323 Cumberland Parkway, Suite 201 | Atlanta, GA 30339
mo...@neadwerx.com | www.neadwerx.com

Quality is not an act, it is a habit. -- Aristotle


Re: [GENERAL] search_path for database vs user vs session

2013-05-27 Thread Joe Conway

On 05/27/2013 12:16 PM, Moshe Jacobson wrote:
 On Mon, May 27, 2013 at 3:07 PM, Moshe Jacobson
 mo...@neadwerx.com wrote:
 
 I'd like to know how to see the search_path setting attached to a 
 particular user/role independent of the session.
 
 
 Oh, and I'd also like to see the current setting of the database so
 I know what will happen if I clear the user setting.

create user foo;
create schema bar;
create schema foobar;
alter user foo set search_path to bar;
alter database contrib_regression set search_path to foobar;
select * from pg_db_role_setting ;
 setdatabase | setrole |      setconfig
-------------+---------+----------------------
           0 |   16401 | {search_path=bar}
       16384 |       0 | {search_path=foobar}
(2 rows)
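
And to clear those settings again later, RESET works the same way (a
quick sketch using the same objects):

alter user foo reset search_path;
alter database contrib_regression reset search_path;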

Is that what you are looking for?

HTH,

Joe

-- 
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting,  24x7 Support


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] search_path for database vs user vs session

2013-05-27 Thread bricklen
On Mon, May 27, 2013 at 12:16 PM, Moshe Jacobson mo...@neadwerx.com wrote:

 Oh, and I'd also like to see the current setting of the database so I know
 what will happen if I clear the user setting.


I think you can find some of what you are looking for here:
http://www.postgresql.org/docs/9.2/static/view-pg-settings.html
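
For instance, a quick sketch against that view:

select name, setting, source from pg_settings where name = 'search_path';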


[GENERAL] Most efficient way to initialize a standby server

2013-05-27 Thread Edson Richter
I have two distant servers between which I would like to configure
asynchronous replication.


Servers are running 9.2.4.

Since the 9.0 days I have used a script with rsync for the transfer.
Sometimes the servers get out of sync (due to heavy processing on the
master database and high network latency), and I have to reinitialize
the standby server.


Lately, this script takes about an hour to copy all the data (23GB) over
to the standby server, and I would like to know if there is a more
efficient way (perhaps using pg_basebackup?) to reinitialize the standby
server.


I've looked in the wiki pages and also at pg_basebackup, but information
on how to get the data transferred to the remote standby server is vague.


Or should I run pg_basebackup from the standby server? Is the PostgreSQL
protocol more efficient than rsync?



Thanks,

Edson



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Most efficient way to initialize a standby server

2013-05-27 Thread Sergey Konoplev
Try this step-by-step instruction
https://code.google.com/p/pgcookbook/wiki/Streaming_Replication_Setup.
I constantly update it when discovering useful things, including low
bandwidth issues.

On Mon, May 27, 2013 at 5:08 PM, Edson Richter edsonrich...@hotmail.com wrote:
 Since the 9.0 days I have used a script with rsync for the transfer.
 Sometimes the servers get out of sync (due to heavy processing on the
 master database and high network latency), and I have to reinitialize
 the standby server.

The WAL stream is not compressed and is quite bloated by its nature. You
can use an SSH tunnel with compression, described in the instruction
mentioned above, and redirect your replication through it.

 Lately, this script takes about an hour to copy all the data (23GB) over
 to the standby server, and I would like to know if there is a more
 efficient way (perhaps using pg_basebackup?) to reinitialize the standby
 server.

AFAIK pg_basebackup does not use compression either when transferring
data. In this case you can also use a compressed SSH tunnel with
pg_basebackup, or rsync with compression enabled.

I would also like to recommend not setting the compression level too
high, because your CPU might become a bottleneck, which can lead to even
worse transfer speed than without compression. I usually set the
compression level to 1 and it works quite well.
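
For example (a sketch; the hostnames and data directory paths are
assumptions, adjust them to your setup):

# rsync with light compression (--compress-level implies --compress)
rsync -av --compress-level=1 --delete \
    master:/var/lib/pgsql/9.2/data/ /var/lib/pgsql/9.2/data/

# or run pg_basebackup through a compressed SSH tunnel
# (local port 5433 forwarded to the master's 5432)
ssh -C -N -L 5433:localhost:5432 master &
pg_basebackup -h localhost -p 5433 -D /var/lib/pgsql/9.2/data -P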

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Most efficient way to initialize a standby server

2013-05-27 Thread Joshua D. Drake


On 05/27/2013 05:43 PM, Sergey Konoplev wrote:


Try this step-by-step instruction
https://code.google.com/p/pgcookbook/wiki/Streaming_Replication_Setup.
I constantly update it when discovering useful things, including low
bandwidth issues.

On Mon, May 27, 2013 at 5:08 PM, Edson Richter edsonrich...@hotmail.com wrote:

Since the 9.0 days I have used a script with rsync for the transfer.
Sometimes the servers get out of sync (due to heavy processing on the
master database and high network latency), and I have to reinitialize
the standby server.


I think the use of PITRTools is probably up your alley here.

JD



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Most efficient way to initialize a standby server

2013-05-27 Thread Edson Richter

Em 27/05/2013 21:43, Sergey Konoplev escreveu:

Try this step-by-step instruction
https://code.google.com/p/pgcookbook/wiki/Streaming_Replication_Setup.
I constantly update it when discovering useful things, including low
bandwidth issues.


Thanks. This is a good idea, of course!
I also have a lot of lessons learned that I should write down somewhere.



On Mon, May 27, 2013 at 5:08 PM, Edson Richter edsonrich...@hotmail.com wrote:

Since the 9.0 days I have used a script with rsync for the transfer.
Sometimes the servers get out of sync (due to heavy processing on the
master database and high network latency), and I have to reinitialize
the standby server.

The WAL stream is not compressed and is quite bloated by its nature. You
can use an SSH tunnel with compression, described in the instruction
mentioned above, and redirect your replication through it.


Ok, I've set up a compressed, secure VPN tunnel that I use for replication.
It is very stable and compressed. This should be enough.




Lately, this script takes about an hour to copy all the data (23GB) over
to the standby server, and I would like to know if there is a more
efficient way (perhaps using pg_basebackup?) to reinitialize the standby
server.

AFAIK pg_basebackup does not use compression either when transferring
data. In this case you can also use a compressed SSH tunnel with
pg_basebackup, or rsync with compression enabled.

I would also like to recommend not setting the compression level too
high, because your CPU might become a bottleneck, which can lead to even
worse transfer speed than without compression. I usually set the
compression level to 1 and it works quite well.


Good to know. I was thinking of using level 9 (I have two decent Xeon
processors with 8 cores each), but I think only one core is used by the
gzip algorithm.


Thanks for all your tips, I'll do some testing. If I discover anything
useful, I'll share it as well.


Regards,

Edson



--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Most efficient way to initialize a standby server

2013-05-27 Thread Edson Richter

Em 28/05/2013 00:03, Joshua D. Drake escreveu:


On 05/27/2013 05:43 PM, Sergey Konoplev wrote:


Try this step-by-step instruction
https://code.google.com/p/pgcookbook/wiki/Streaming_Replication_Setup.
I constantly update it when discovering useful things, including low
bandwidth issues.

On Mon, May 27, 2013 at 5:08 PM, Edson Richter 
edsonrich...@hotmail.com wrote:
Since the 9.0 days I have used a script with rsync for the transfer.
Sometimes the servers get out of sync (due to heavy processing on the
master database and high network latency), and I have to reinitialize
the standby server.


I think the use of PITRTools is probably up your alley here.

JD

Assume I know nothing about PITRTools (which I really don't!); can you
elaborate a bit more on your suggestion?


Thanks,

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to evaluate disk space needed by a table

2013-05-27 Thread 高健
Hello:

I created a table, and found that the file created for that table is about
10 times larger than I estimated!
The following is what I did:

postgres=# create table tst01(id integer);
CREATE TABLE
postgres=#

postgres=# select oid from pg_class where relname='tst01';
  oid
-------
 16384
(1 row)
Then I can see the file now:
[root@lex base]# ls ./12788/16384
./12788/16384

I heard that one integer value uses 4 bytes,
so I thought that 2048 records with only one column of integer data type
would use a little more than 8K (2048 records * 4 bytes per integer +
headers).

But in fact they use so much more:
After I run this:
postgres=# insert into tst01 values(generate_series(1,2048));
INSERT 0 2048
postgres=#

I found that the file 16384 is now 80KB!
[root@lex base]# ls -lrt ./12788/16384
-rw--- 1 postgres postgres 81920 May 28 11:54 ./12788/16384
[root@lex base]# ls -lrt -kb ./12788/16384
-rw--- 1 postgres postgres 80 May 28 11:54 ./12788/16384
[root@lex base]#

Then I tried again and inserted another 2048 records:
postgres=# insert into tst01 values(generate_series(2049,4096));
INSERT 0 2048
postgres=#
And found that the file is now 152KB!
[root@lex base]# ls -lrt -kb ./12788/16384
-rw--- 1 postgres postgres 152 May 28 11:56 ./12788/16384
[root@lex base]#

Before this, I had thought that the headers and other structures would
use only a little space.
But what I found is about 10 times the space I estimated.

So, is there any method to correctly estimate the disk space a table
will need, given the table's column data types and an estimated number
of records?


Re: [GENERAL] How to evaluate disk space needed by a table

2013-05-27 Thread Stephen Frost
* 高健 (luckyjack...@gmail.com) wrote:
 So, is there any method to correctly estimate the disk space a table
 will need, given the table's column data types and an estimated number
 of records?

The simplest might be to do exactly what you did- create the table and
then check the size with a subset of records.  It won't be exactly
linear from there but it'd provide a good estimate.

Otherwise, you need to consider the various additional fields which
PostgreSQL adds to every tuple to keep track of visibility and other
information.  Then you have to add in the page header and the other bits
of the tuple header beyond the system columns.  Lastly, you need to
figure out how many tuples will actually fit on a page based on their
size, because there will be gaps if the tuple doesn't fit exactly into
the remaining space in the page.

btw, there are helper functions to get disk usage-
pg_total_relation_size() and pg_relation_size() come to mind, though
there are also others.
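
For example, a rough sketch of the load-a-sample-and-extrapolate
approach:

postgres=# insert into tst01 select generate_series(1, 2048);
postgres=# select pg_size_pretty(pg_relation_size('tst01'));
postgres=# select pg_relation_size('tst01') / 2048.0 as bytes_per_row;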

Thanks,

Stephen




[GENERAL] How to check if Postgresql files are OK

2013-05-27 Thread Nikhil G Daddikar

Folks,

I was using PostgreSQL 8.x in a development environment when one day I
started getting all kinds of low-level errors while running queries, and
eventually I had to reinstall. Maybe it was salvageable, but since it was
a test database anyway, it didn't matter.


We use PostgreSQL 9 on our production server, and I was wondering if
there is a way to know when pages get corrupted. I see that there is
some kind of checksum maintained from 9.3 on, but until then is there a
way to be notified quickly when such a thing happens? I use
basebackup+rsync of WAL files as a disaster recovery solution. Will this
be useful when such a scenario occurs?


Thanks.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to evaluate disk space needed by a table

2013-05-27 Thread Raghavendra
On Tue, May 28, 2013 at 9:48 AM, 高健 luckyjack...@gmail.com wrote:

 Hello:

 I created a table, and found that the file created for that table is
 about 10 times larger than I estimated!
 The following is what I did:

 postgres=# create table tst01(id integer);
 CREATE TABLE
 postgres=#

 postgres=# select oid from pg_class where relname='tst01';
   oid
 -------
  16384
 (1 row)
 Then I can see the file now:
 [root@lex base]# ls ./12788/16384
 ./12788/16384

 I heard that one integer value uses 4 bytes,
 so I thought that 2048 records with only one column of integer data type
 would use a little more than 8K (2048 records * 4 bytes per integer +
 headers).


You heard right. As others said, there are various hidden fields added to
every tuple (ctid, xmin, xmax, cmin, cmax). All of these occupy some bytes
in the page. Take your example.

As expected for an integer column, each value occupies 4 bytes:

postgres=# select pg_column_size(id) from tst01 limit 1;
 pg_column_size
----------------
              4
(1 row)

When you calculate the row size...

postgres=# select pg_column_size(t) from tst01 t limit 1;
 pg_column_size
----------------
             28
(1 row)

Here there are 24 bytes of row header plus the 4 bytes of integer data.
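
You can also look at some of those hidden columns yourself (a sketch;
the ctid/xmin values will differ on your system):

postgres=# select ctid, xmin, xmax from tst01 limit 1;
 ctid  | xmin | xmax
-------+------+------
 (0,1) | 1001 |    0
(1 row)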

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


Re: [GENERAL] How to check if Postgresql files are OK

2013-05-27 Thread Stephen Frost
Nikhil,

* Nikhil G Daddikar (n...@celoxis.com) wrote:
 We use PostgreSQL 9 on our production server, and I was wondering if
 there is a way to know when pages get corrupted.

It's not great, but there are a few options. The first is to use pg_dump
across the entire database and monitor the PG logs to see if it barfs
about anything. Another thing you can do is write a script which pulls
out all of the data from each table using an ORDER BY that matches some
index on the table; PG will generally use an in-order index traversal,
which will validate the index and the heap, again, to some extent.
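
For example (a sketch; the database and table names are placeholders):

# force a full read of every table, discarding the output
pg_dump your_db > /dev/null

# walk one table in index order
psql -d your_db -c 'select * from your_table order by your_indexed_col' > /dev/null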

 I see that there is
 some kind of checksum maintained from 9.3 on, but until then is there a
 way to be notified quickly when such a thing happens? I use
 basebackup+rsync of WAL files as a disaster recovery solution. Will this
 be useful when such a scenario occurs?

It really depends.  Having multiple backups over time will limit the
risk that corruption gets propagated to a slave system.  Also, there is
a CRC on the WAL records which are shipped, which helps a bit, but there
are still cases where corruption can get you.  The best thing is to have
frequent, tested, backups.

Thanks,

Stephen




Re: [GENERAL] Most efficient way to initialize a standby server

2013-05-27 Thread Joshua D. Drake


On 05/27/2013 08:13 PM, Edson Richter wrote:



I think the use of PITRTools is probably up your alley here.

JD


Assume I know nothing about PITRTools (which I really don't!); can you
elaborate a bit more on your suggestion?


It is an open source tool specifically for working with PITR / Streaming
Replication / Hot Standby:


https://public.commandprompt.com/projects/pitrtools/wiki

JD




Thanks,

Edson






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general