Re: [GENERAL] schema or database

2015-04-12 Thread Michael Cheung
Thanks for your suggestion.
I'd like to use schemas as you suggest.

yours, michael

On Mon, 13 Apr 2015 11:20:59 +0900
Ian Barwick i...@2ndquadrant.com wrote:

 On 13/04/15 11:08, Michael Cheung wrote:
  hi, all;

  I am new here and I need some suggestions.

  I have many similar databases, one storing data for each customer.
  The structure of the databases is almost the same.
  As I use the same application to control all this data, I can only use
  one database user to connect to these databases.
  And I have no need to query tables of different customers together.

  I wonder which I should use to store the data: different schemas or
  different databases?

  I'd like to know the advantages and disadvantages of using schemas or
  databases.
 
 If, as you say, access to the database is via a single application database
 user, it will probably make more sense to use multiple schemas rather than
 multiple databases. Keeping everything in one database will simplify
 administration (e.g. making backups - you'll just need to dump the one
 database rather than looping through a variable number) and will make life
 easier if you ever need to do some kind of query involving multiple
 customers. There will also be less overhead when adding a new schema vs
 adding a new database.
 
 
 Regards
 
 Ian Barwick
 
 -- 
  Ian Barwick   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services
 
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Limiting user from changing its own attributes

2015-04-12 Thread Jim Nasby

On 4/11/15 4:11 PM, Sameer Kumar wrote:

Pg_settings currently has an upper bound column - though it is a
view and that value cannot be changed that I know of.


I guess that upper bound column is more of the limit that is imposed by
the system for a parameter, i.e. the system-imposed limit or valid range
of values for a parameter. I don't think one can update that.


Correct.
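
For reference, those system-imposed bounds are exposed by pg_settings as
min_val and max_val; an illustrative query:

SELECT name, setting, min_val, max_val, context
FROM pg_settings
WHERE name = 'work_mem';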


But if it could I suspect that whatever catalog you would change to
affect it would only cause a global change. There is no ALTER DATABASE,
ALTER ROLE, or postgresql.conf way to change that value.

Oh ok... any way of achieving that? Is there no EVENT trigger for ALTER USER?


There is not, but as David mentioned there are far more ways to modify 
settings than just ALTER ROLE. Attempting to lock that down won't help 
you at all.


Unfortunately, there's no hook support for doing something special when 
GUCs change, though it might be possible to do something here via 
planner hooks. That would be pretty complicated and would need to be 
done in C.


It doesn't look like SELinux would help either.

So basically, there is currently no way to restrict someone changing 
GUCs, other than GUCs that are marked as superuser-only.
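
To illustrate the scope of the problem, any unprivileged role can already
do either of the following (the role name and value here are hypothetical):

SET work_mem = '1GB';                      -- session-level override
ALTER ROLE app_user SET work_mem = '1GB';  -- per-role default; ordinary
                                           -- roles can set their own

Only parameters whose context is superuser ('suset' in pg_settings) are
protected from this.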

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Limiting user from changing its own attributes

2015-04-12 Thread Sameer Kumar
 On Mon, 13 Apr 2015 11:35 Jim Nasby jim.na...@bluetreble.com wrote:

On 4/11/15 4:11 PM, Sameer Kumar wrote:
 Pg_settings currently has an upper bound column - though it is a
 view and that value cannot be changed that I know of.


 I guess that upper bound column is more of the limit that is imposed by
 the system for a parameter, i.e. the system-imposed limit or valid range
 of values for a parameter. I don't think one can update that.

Correct.

 But if it could I suspect that whatever catalog you would change to
 affect it would only cause a global change. There is no ALTER DATABASE,
 ALTER ROLE, or postgresql.conf way to change that value.

 Oh ok... any way of achieving that? Is there no EVENT trigger for ALTER
 USER?

There is not, but as David mentioned there are far more ways to modify
settings than just ALTER ROLE. Attempting to lock that down won't help
you at all.

Unfortunately, there's no hook support for doing something special when
GUCs change, though it might be possible to do something here via
planner hooks. That would be pretty complicated and would need to be
done in C.

It doesn't look like SELinux would help either.

So basically, there is currently no way to restrict someone changing
GUCs, other than GUCs that are marked as superuser-only.

 Is there anything expected in any of the near-future releases?


Re: [GENERAL] Benchmarking partitioning triggers and rules

2015-04-12 Thread Jim Nasby

On 4/8/15 4:58 PM, Tim Uckun wrote:

So is there a third and even faster way of doing this?


Please don't top-post.


On Wed, Apr 8, 2015 at 4:08 PM, Adrian Klaver adrian.kla...@aklaver.com wrote:

On 04/07/2015 07:49 PM, Tim Uckun wrote:

I understand that there is overhead involved in parsing the strings and
such.  The amount of overhead was surprising to me but that's another
matter.  What I am really puzzled about is the difference between the
statements

EXECUTE 'INSERT INTO ' || quote_ident(partition_name) || ' SELECT ($1).*' USING NEW;

and

EXECUTE 'INSERT INTO ' || quote_ident(partition_name) || ' VALUES (($1).*)' USING NEW;



Offhand I would say because in the first case you are doing a SELECT
and in the second you are just doing a substitution.



They both do string interpolation but one is significantly faster than
the other. Is there a third and even faster way?

I am using RDS so I can't really do stored procs in C.


There are only 3 ways you can steer data into the correct partition:

1) Outside the database (ie: the application itself knows what partition
table to use)
2) Rules
3) Triggers

#1 will almost always be faster (I say almost because if something has 
to decide row-by-row it may be faster for the database to do it).


#2 and #3 depend on the workload. Rules are parsed once PER STATEMENT, 
so if you're using a single INSERT to load a lot of rows they might well 
be faster than triggers. But as mentioned, they're a real PITA to use 
and they don't work at all with COPY.


When it comes to triggers, you will get different performance depending 
on the trigger language used as well as how you write the trigger 
itself. Presumably a trigger function written in C will be faster than 
anything else. I would expect plpgsql to be the next fastest after that, 
but that's just a guess.
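
As a concrete sketch of the trigger approach in plpgsql (the table and
column names here are hypothetical; it assumes a parent table
"measurements" partitioned monthly by "logdate"):

CREATE OR REPLACE FUNCTION route_insert() RETURNS trigger AS $$
DECLARE
    partition_name text;
BEGIN
    -- pick the partition from the row's date, e.g. measurements_2015_04
    partition_name := 'measurements_' || to_char(NEW.logdate, 'YYYY_MM');
    EXECUTE 'INSERT INTO ' || quote_ident(partition_name) ||
            ' VALUES (($1).*)' USING NEW;
    RETURN NULL;  -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER route_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE route_insert();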


As for the difference between SELECT and VALUES above, that's probably 
due to a difference in parsing or in the number of transforms the NEW 
has to go through in the executor. My guess is that because SELECT is 
much more general purpose than VALUES it's both more expensive to parse 
as well as more expensive to execute. If you really want to know for 
certain, connect gdb to a database backend on your laptop/desktop, fire 
off an insert (to load plpgsql into backend memory), set a gdb 
breakpoint on exec_stmt_block(), and see what's different between the 
two use cases.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] schema or database

2015-04-12 Thread Michael Cheung
Thanks for your additional comment.
It is clearer now; I'd better use schemas rather than databases.

yours, michael



On Sun, 12 Apr 2015 19:24:30 -0700
John R Pierce pie...@hogranch.com wrote:

 On 4/12/2015 7:20 PM, Ian Barwick wrote:
  If, as you say, access to the database is via a single application database
  user, it will probably make more sense to use multiple schemas rather than
  multiple databases. Keeping everything in one database will simplify
  administration (e.g. making backups - you'll just need to dump the one
  database rather than looping through a variable number) and will make life
  easier if you ever need to do some kind of query involving multiple
  customers. There will also be less overhead when adding a new schema vs
  adding a new database.
 
 and less overhead in connections, as one client connection can serve multiple
 customers.
 
 -- john r pierce, recycling bits in santa cruz
 
 
 
 -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Limiting user from changing its own attributes

2015-04-12 Thread Jim Nasby

On 4/12/15 11:55 PM, Sameer Kumar wrote:


On Mon, 13 Apr 2015 11:35 Jim Nasby jim.na...@bluetreble.com wrote:

On 4/11/15 4:11 PM, Sameer Kumar wrote:
  Pg_settings currently has an upper bound column - though it is a
  view and that value cannot be changed that I know of.
 
 
  I guess that upper bound column is more of the limit that is imposed by
  the system for a parameter, i.e. the system-imposed limit or valid range
  of values for a parameter. I don't think one can update that.

Correct.

  But if it could I suspect that whatever catalog you would change to
  affect it would only cause a global change. There is no ALTER DATABASE,
  ALTER ROLE, or postgresql.conf way to change that value.

  Oh ok... any way of achieving that? Is there no EVENT trigger for
  ALTER USER?

There is not, but as David mentioned there are far more ways to modify
settings than just ALTER ROLE. Attempting to lock that down won't help
you at all.

Unfortunately, there's no hook support for doing something special when
GUCs change, though it might be possible to do something here via
planner hooks. That would be pretty complicated and would need to be
done in C.

It doesn't look like SELinux would help either.

So basically, there is currently no way to restrict someone changing
GUCs, other than GUCs that are marked as superuser-only.

Is there anything expected in any of the near-future releases?


No. I suspect the community would support at least a hook for GUC 
changes, if not a full-on permissions system. A hook would make it 
fairly easy to add event trigger support.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] schema or database

2015-04-12 Thread Michael Cheung
hi, all;

I am new here and I need some suggestions.

I have many similar databases, one storing data for each customer.
The structure of the databases is almost the same.
As I use the same application to control all this data, I can only use
one database user to connect to these databases.
And I have no need to query tables of different customers together.

I wonder which I should use to store the data: different schemas or
different databases?

I'd like to know the advantages and disadvantages of using schemas or
databases.

Thanks

michael



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] schema or database

2015-04-12 Thread John R Pierce

On 4/12/2015 7:20 PM, Ian Barwick wrote:

If, as you say, access to the database is via a single application database
user, it will probably make more sense to use multiple schemas rather than
multiple databases. Keeping everything in one database will simplify
administration (e.g. making backups - you'll just need to dump the one
database rather than looping through a variable number) and will make life
easier if you ever need to do some kind of query involving multiple
customers. There will also be less overhead when adding a new schema vs
adding a new database.


and less overhead in connections, as one client connection can serve 
multiple customers.


--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Limiting user from changing its own attributes

2015-04-12 Thread Sameer Kumar
On Mon, Apr 13, 2015 at 1:03 PM Jim Nasby jim.na...@bluetreble.com wrote:

 On 4/12/15 11:55 PM, Sameer Kumar wrote:
 
 On Mon, 13 Apr 2015 11:35 Jim Nasby jim.na...@bluetreble.com wrote:
 
  On 4/11/15 4:11 PM, Sameer Kumar wrote:
    Pg_settings currently has an upper bound column - though it is a
    view and that value cannot be changed that I know of.

    I guess that upper bound column is more of the limit that is imposed by
    the system for a parameter, i.e. the system-imposed limit or valid range
    of values for a parameter. I don't think one can update that.
 
  Correct.
 
    But if it could I suspect that whatever catalog you would change to
    affect it would only cause a global change. There is no ALTER DATABASE,
    ALTER ROLE, or postgresql.conf way to change that value.

    Oh ok... any way of achieving that? Is there no EVENT trigger for
    ALTER USER?
 
  There is not, but as David mentioned there are far more ways to modify
  settings than just ALTER ROLE. Attempting to lock that down won't help
  you at all.
 
  Unfortunately, there's no hook support for doing something special when
  GUCs change, though it might be possible to do something here via
  planner hooks. That would be pretty complicated and would need to be
  done in C.
 
  It doesn't look like SELinux would help either.
 
  So basically, there is currently no way to restrict someone changing
  GUCs, other than GUCs that are marked as superuser-only.
 
  Is there anything expected in any of the near-future releases?

 No. I suspect the community would support at least a hook for GUC
 changes, if not a full-on permissions system. A hook would make it
 fairly easy to add event trigger support.


I hope someone out there is listening :)

I hope I have made my concern clear: I currently don't have a way to stop
users from changing the parameter values for their own settings, which
allows each user to set inappropriate values, e.g. for work_mem.


Regards
Sameer


Re: [GENERAL] schema or database

2015-04-12 Thread Ian Barwick
On 13/04/15 11:08, Michael Cheung wrote:
 hi, all;

 I am new here and I need some suggestions.

 I have many similar databases, one storing data for each customer.
 The structure of the databases is almost the same.
 As I use the same application to control all this data, I can only use
 one database user to connect to these databases.
 And I have no need to query tables of different customers together.

 I wonder which I should use to store the data: different schemas or
 different databases?

 I'd like to know the advantages and disadvantages of using schemas or
 databases.

If, as you say, access to the database is via a single application database
user, it will probably make more sense to use multiple schemas rather than
multiple databases. Keeping everything in one database will simplify
administration (e.g. making backups - you'll just need to dump the one
database rather than looping through a variable number) and will make life
easier if you ever need to do some kind of query involving multiple
customers. There will also be less overhead when adding a new schema vs
adding a new database.
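
A minimal sketch of the multi-schema approach (the schema and table names
are illustrative):

-- one schema per customer, all accessible to the single application user
CREATE SCHEMA customer_a;
CREATE SCHEMA customer_b;

-- the application selects the customer per session:
SET search_path TO customer_a;
SELECT * FROM orders;  -- resolves to customer_a.orders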


Regards

Ian Barwick

-- 
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with slow table update

2015-04-12 Thread Jim Nasby

On 4/9/15 6:18 PM, Pawel Veselov wrote:

Hi.

I have a plpgsql procedure that updates a few similar tables.
For some reason, updates on one of the tables take a lot longer than
updates on the other ones. The difference is, say, 7 seconds vs. 80
milliseconds.

The procedure uses cursors and record variables to do the updates. For
example:

 update r_agrio_total set
   unserved = unserved + (agrow->'unserved')::numeric(38),
   r_brkconn = mush_brk_conn(r_brkconn, q_item.r_brkconn),
 where
   tagid = _tagid and
   unitid = (akey->'unitid')::numeric and
   placement = (akey->'placement')::numeric and
   device_type = (akey->'device_type')::numeric;

There is another table (xq_agr) that is read record by record, and for
each of those records, such an update is executed.

I was trying to EXPLAIN ANALYZE the updates to see where the time could
be spent.
There are only 24 rows in the bad table, and 3,400 rows in the good
table. So, for the bad table, most of the updates will be on the same
rows. The times were measured on processing 100 original records.

When I'm analyzing pure update statements, I don't see anything strange.

bad table: explain analyze update r_agrio_total set unconfirmed =
unconfirmed +0 where tagid = 1000 and unitid = 1000 and placement = 0
and device_type = 100;

RESULT:
  Update on r_agrio_total  (cost=0.42..4.46 rows=1 width=321) (actual
time=0.253..0.253 rows=0 loops=1)
->  Index Scan using tag_r_agrio_total on r_agrio_total
  (cost=0.42..4.46 rows=1 width=321) (actual time=0.037..0.041 rows=1
loops=1)
  Index Cond: (tagid = 1000::numeric)
  Filter: ((unitid = 1000::numeric) AND (placement = 0::numeric)
AND (device_type = 100::numeric))
  Rows Removed by Filter: 7
  Total runtime: 0.282 ms

good table: explain analyze update r_agrio_hourly set unconfirmed =
unconfirmed +0 where tagid = 1000 and unitid = 1000 and placement = 0
and device_type = 100 and rowdate = '2015-02-23T13';

RESULT:
  Update on r_agrio_hourly  (cost=0.42..17.36 rows=6 width=329) (actual
time=0.102..0.102 rows=0 loops=1)
->  Index Scan using u_r_agrio_hourly on r_agrio_hourly
  (cost=0.42..17.36 rows=6 width=329) (actual time=0.047..0.048 rows=1
loops=1)
  Index Cond: ((tagid = 1000::numeric) AND (unitid =
1000::numeric) AND ((rowdate)::text = '2015-02-23T13'::text) AND
(device_type = 100::numeric) AND (placement = 0::numeric))
  Total runtime: 0.135 ms

When I try doing it with WITH statement (really, to apply the actual
data that the plpgsql function uses), there is something strange in the
bad table.

explain analyze
with SRC as (select * from xq_agr where id = 914830)
 update r_agrio_total set
   unconfirmed = unconfirmed +
(SRC.r_agrio->'unconfirmed')::numeric(38)
 from SRC
 where
   tagid = (SRC.r_agrio->'key'->'tagid')::numeric and
   unitid = (SRC.r_agrio->'key'->'unit')::numeric and
   placement = (SRC.r_agrio->'key'->'placement')::numeric and
   device_type = (SRC.r_agrio->'key'->'device_type')::numeric;

RESULT:
  Update on r_agrio_total  (cost=8.91..32777.51 rows=19331 width=409)
(actual time=0.107..0.107 rows=0 loops=1)
CTE src
  ->  Index Scan using xq_agr_pkey on xq_agr  (cost=0.42..8.44
rows=1 width=379) (actual time=0.026..0.027 rows=1 loops=1)
        Index Cond: (id = 914830)
->  Nested Loop  (cost=0.46..32769.07 rows=19331 width=409) (actual
time=0.107..0.107 rows=0 loops=1)
  ->  CTE Scan on src  (cost=0.00..0.02 rows=1 width=88) (actual
time=0.032..0.033 rows=1 loops=1)
  ->  Index Scan using u_r_agrio_total on r_agrio_total
      (*cost=0.46..32285.78 rows=19331* width=321) (actual time=0.001..0.001
rows=0 loops=1)
        Index Cond: ((tagid = (((src.r_agrio -> 'key'::text) ->
'tagid'::text))::numeric) AND (unitid = (((src.r_agrio -> 'key'::text)
-> 'unit'::text))::numeric) AND (device_type = (((src.r_agrio ->
'key'::text) -> 'device_type'::text))::numeric) AND (placement =
(((src.r_agrio -> 'key'::text) -> 'placement'::text))::numeric))
  Total runtime: 0.155 ms

explain analyze
with SRC as (select * from xq_agr where id = 914830)
 update r_agrio_hourly set
   unconfirmed = unconfirmed +
(SRC.r_agrio->'unconfirmed')::numeric(38)
 from SRC
 where
   tagid = (SRC.r_agrio->'key'->'tagid')::numeric and
   unitid = (SRC.r_agrio->'key'->'unit')::numeric and
   placement = (SRC.r_agrio->'key'->'placement')::numeric and
   device_type = (SRC.r_agrio->'key'->'device_type')::numeric and
   rowdate = (SRC.r_agrio->'key'->'rowdate');

RESULT:
  Update on r_agrio_hourly  (cost=8.91..52.91 rows=20 width=417) (actual
time=0.123..0.123 rows=0 loops=1)
CTE src
  ->  Index Scan using xq_agr_pkey on xq_agr  (cost=0.42..8.44
rows=1 width=379) (actual time=0.023..0.024 rows=1 loops=1)
        Index Cond: (id = 914830)
->  Nested Loop  (cost=0.47..44.47 rows=20 width=417) 

Re: [GENERAL] Re: Hot standby problems: consistent state not reached, no connection to master server.

2015-04-12 Thread Adrian Klaver

On 04/12/2015 08:25 AM, Ilya Ashchepkov wrote:

On Sun, 12 Apr 2015 08:10:48 -0700
Adrian Klaver adrian.kla...@aklaver.com wrote:


On 04/12/2015 07:47 AM, Ilya Ashchepkov wrote:

Hello.

I'm setting up hot standby slave.
It recovers from wal archive files, but I can't connect to it:
$ psql
psql: FATAL:  the database system is starting up

On master:
# select name,setting from pg_settings where name like 'wal_level';
     name    |   setting
 -----------+-------------
  wal_level | hot_standby


My slave recovery.conf:
$ cat recovery.conf
# Note that recovery.conf must be in $PGDATA directory.
# It should NOT be located in the same directory as postgresql.conf

 # Specifies whether to start the server as a standby. In streaming
 # replication, this parameter must be set to on.
 standby_mode  = 'on'

 # Specifies a connection string which is used for the standby
 # server to connect with the primary.
 primary_conninfo  = 'host=192.168.0.101 port=5432
 user=replication password=*'

 # Specifies a trigger file whose presence should cause streaming
 # replication to end (i.e., failover).
 trigger_file = '/media/psqlbak/101/main/standup'

 # Specifies a command to load archive segments from the WAL
 # archive. If wal_keep_segments is a high enough number to retain
 # the WAL segments required for the standby server, this may not be
 # necessary. But a large workload can cause segments to be recycled
 # before the standby is fully synchronized, requiring you to start
 # again from a new base backup.
 restore_command = '/usr/lib/postgresql/9.3/bin/pg_standby
 -t /tmp/pgsql.trigger.5432 /media/psqlbak/101/wals/main/ %f %p %r'

 I tried commenting out 'restore_command' in recovery.conf on the slave;
 then the slave connects to the master and starts receiving data, but I
 think that's not a very good way. What should I change to receive data
 over the connection and reach a consistent state on the slave?


What have you set for hot_standby on the standby server?:

http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY



Oh! I missed this! Thank you!
Now the slave reaches a consistent state some time after starting, but there
is still no connection to the master server and it is still restoring WAL
files.


Not quite sure what you are getting at.

You are not seeing the streaming connection happening?

If a connection is not being made:

1) Does the replication user have REPLICATION rights?
2) Is the pg_hba.conf on the master set up to allow a connection from 
the standby for user replication and database replication?
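
For reference, a minimal sketch of such an entry in the master's
pg_hba.conf (the address range is illustrative; reload the master after
editing):

host    replication    replication    192.168.0.0/24    md5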


Where are the WAL files coming from?


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Hot standby problems: consistent state not reached, no connection to master server.

2015-04-12 Thread Ilya Ashchepkov
Hello.

I'm setting up hot standby slave.
It recovers from wal archive files, but I can't connect to it:
$ psql
psql: FATAL:  the database system is starting up

On master:
# select name,setting from pg_settings where name like 'wal_level';
   name    |   setting
-----------+-------------
 wal_level | hot_standby


My slave recovery.conf:
$ cat recovery.conf
# Note that recovery.conf must be in $PGDATA directory.
# It should NOT be located in the same directory as postgresql.conf

# Specifies whether to start the server as a standby. In streaming replication,
# this parameter must be set to on.
standby_mode  = 'on'

# Specifies a connection string which is used for the standby server to connect
# with the primary.
primary_conninfo  = 'host=192.168.0.101 port=5432 user=replication 
password=*'

# Specifies a trigger file whose presence should cause streaming replication to
# end (i.e., failover).
trigger_file = '/media/psqlbak/101/main/standup'

# Specifies a command to load archive segments from the WAL archive. If
# wal_keep_segments is a high enough number to retain the WAL segments
# required for the standby server, this may not be necessary. But
# a large workload can cause segments to be recycled before the standby
# is fully synchronized, requiring you to start again from a new base backup.
restore_command = '/usr/lib/postgresql/9.3/bin/pg_standby -t 
/tmp/pgsql.trigger.5432 /media/psqlbak/101/wals/main/ %f %p %r'

I tried commenting out 'restore_command' in recovery.conf on the slave; then
the slave connects to the master and starts receiving data, but I think
that's not a very good way. What should I change to receive data over the
connection and reach a consistent state on the slave?



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Hot standby problems: consistent state not reached, no connection to master server.

2015-04-12 Thread Ilya Ashchepkov
On Sun, 12 Apr 2015 08:10:48 -0700
Adrian Klaver adrian.kla...@aklaver.com wrote:

 On 04/12/2015 07:47 AM, Ilya Ashchepkov wrote:
  Hello.
 
  I'm setting up hot standby slave.
  It recovers from wal archive files, but I can't connect to it:
  $ psql
  psql: FATAL:  the database system is starting up
 
  On master:
  # select name,setting from pg_settings where name like 'wal_level';
     name    |   setting
  -----------+-------------
   wal_level | hot_standby
 
 
  My slave recovery.conf:
  $ cat recovery.conf
  # Note that recovery.conf must be in $PGDATA directory.
  # It should NOT be located in the same directory as postgresql.conf
 
  # Specifies whether to start the server as a standby. In streaming
  # replication, this parameter must be set to on.
  standby_mode  = 'on'
 
  # Specifies a connection string which is used for the standby
  # server to connect with the primary.
  primary_conninfo  = 'host=192.168.0.101 port=5432
  user=replication password=*'

  # Specifies a trigger file whose presence should cause streaming
  # replication to end (i.e., failover).
  trigger_file = '/media/psqlbak/101/main/standup'
 
  # Specifies a command to load archive segments from the WAL
  # archive. If wal_keep_segments is a high enough number to retain
  # the WAL segments required for the standby server, this may not be
  # necessary. But a large workload can cause segments to be recycled
  # before the standby is fully synchronized, requiring you to start
  # again from a new base backup.
  restore_command = '/usr/lib/postgresql/9.3/bin/pg_standby
  -t /tmp/pgsql.trigger.5432 /media/psqlbak/101/wals/main/ %f %p %r'
 
  I tried commenting out 'restore_command' in recovery.conf on the slave;
  then the slave connects to the master and starts receiving data, but I
  think that's not a very good way. What should I change to receive data
  over the connection and reach a consistent state on the slave?
 
 What have you set for hot_standby on the standby server?:
 
 http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY
 

Oh! I missed this! Thank you!
Now the slave reaches a consistent state some time after starting, but there
is still no connection to the master server and it is still restoring WAL
files.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hot standby problems: consistent state not reached, no connection to master server.

2015-04-12 Thread Adrian Klaver

On 04/12/2015 07:47 AM, Ilya Ashchepkov wrote:

Hello.

I'm setting up hot standby slave.
It recovers from wal archive files, but I can't connect to it:
$ psql
psql: FATAL:  the database system is starting up

On master:
# select name,setting from pg_settings where name like 'wal_level';
   name    |   setting
-----------+-------------
 wal_level | hot_standby


My slave recovery.conf:
$ cat recovery.conf
# Note that recovery.conf must be in $PGDATA directory.
# It should NOT be located in the same directory as postgresql.conf

# Specifies whether to start the server as a standby. In streaming replication,
# this parameter must be set to on.
standby_mode  = 'on'

# Specifies a connection string which is used for the standby server to connect
# with the primary.
primary_conninfo  = 'host=192.168.0.101 port=5432 user=replication 
password=*'

# Specifies a trigger file whose presence should cause streaming replication to
# end (i.e., failover).
trigger_file = '/media/psqlbak/101/main/standup'

# Specifies a command to load archive segments from the WAL archive. If
# wal_keep_segments is a high enough number to retain the WAL segments
# required for the standby server, this may not be necessary. But
# a large workload can cause segments to be recycled before the standby
# is fully synchronized, requiring you to start again from a new base backup.
restore_command = '/usr/lib/postgresql/9.3/bin/pg_standby -t 
/tmp/pgsql.trigger.5432 /media/psqlbak/101/wals/main/ %f %p %r'

I tried commenting out 'restore_command' in recovery.conf on the slave; then
the slave connects to the master and starts receiving data, but I think
that's not a very good way. What should I change to receive data over the
connection and reach a consistent state on the slave?


What have you set for hot_standby on the standby server?:

http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY
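
For reference, a minimal sketch of the relevant setting (postgresql.conf on
the standby, followed by a restart of the standby):

hot_standby = on    # allow read-only connections while in recovery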

--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general