Re: [GENERAL] Pgbasebackup help

2015-12-04 Thread Yelai, Ramkumar IN BLR STS
Thanks David,

This helped me to understand the WAL importance.

Finally, we decided to use the "stream" option to copy the WAL files during the 
backup, as mentioned in the help.

Enabled these options in postgresql.conf:

wal_level = hot_standby
max_wal_senders = 2

Though I do get the WAL that is archived during the backup process, I am 
deleting the archived records after time T3 myself while restoring the backup, 
using our SQL procedures.

Since I am not using archive recovery or a standby replica (I am restoring the 
data folder onto the same server, not a standby server), I can't use the 
recovery.conf options to recover up to time T3.  Hence I forcefully deleted the 
records, as mentioned earlier.

Please let me know if there is any way to replay the WAL up to time T3; I would 
be interested in using it.

Thanks once again.

Regards,
Ramkumar.

-Original Message-
From: David Steele [mailto:da...@pgmasters.net] 
Sent: Friday, December 04, 2015 6:26 PM
To: Yelai, Ramkumar IN BLR STS; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Pgbasebackup help

On 12/3/15 12:59 AM, Yelai, Ramkumar IN BLR STS wrote:
> What I wanted to achieve is a simple copy of the data folder. I can't shut down 
> the database during the backup, can't use a file system copy of the data folder 
> as it creates inconsistency, and don't want to use pg_dump.
>
> Hence I decided to use pg_basebackup for copying the base backup, and I don't 
> want to replay the WAL.

Replaying WAL is *not* optional.  Each restore will have to replay at least one 
WAL segment to become consistent, depending on write volume during the backup.

> Anyway, pg_basebackup puts a checkpoint before copying the data folder. For me 
> it is enough to restore up to the checkpoint.

This won't work - the database keeps running and making changes after the 
checkpoint.

> I saw this link 
> http://blog.veritech.io/2014/10/automated-backup-for-postgresql-cluster.html.
>
> In this link too, I have not seen that they enabled archive_mode. Archive 
> mode is not necessary as long as you are streaming your WAL files to pg_xlog.

These instructions are for bringing up a replica.  Even if this is OK for your 
purposes, it still would not get you a database at time T3. 
You are supposing that because this method does not use archiving, archiving is 
not needed in your case, but point-in-time recovery requires it.

> Also, even if I have all the WAL files, how do I restore up to time T3? I am 
> looking at pgbackrest to learn how to restore a backup up to time T3.

To restore to time T3 you would select a backup that ended *before* T3 and then 
use point-in-time recovery to play forward to T3.
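In stock PostgreSQL, that kind of point-in-time recovery is driven by a recovery.conf placed in the restored data directory, roughly like the following sketch (the archive path and target time are placeholder assumptions):

```
# recovery.conf (PostgreSQL 9.x), in the restored data directory
restore_command = 'cp /mnt/server/archivedir/%f %p'   # archive path is an assumption
recovery_target_time = '2015-12-04 10:00:00'          # stand-in for time T3
recovery_target_inclusive = false                     # stop just before T3
```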

That should be explained pretty clearly in the user guide - if there's 
something you don't understand then it would be helpful to know so I can 
improve the guide.

--
-David
da...@pgmasters.net


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Pgbasebackup help

2015-12-03 Thread Yelai, Ramkumar IN BLR STS
HI,

Thanks David.

What I wanted to achieve is a simple copy of the data folder. I can't shut down 
the database during the backup, can't use a file system copy of the data folder 
as it creates inconsistency, and don't want to use pg_dump. 

Hence I decided to use pg_basebackup for copying the base backup, and I don't 
want to replay the WAL. 

Anyway, pg_basebackup puts a checkpoint before copying the data folder. For me 
it is enough to restore up to the checkpoint.

I saw this link 
http://blog.veritech.io/2014/10/automated-backup-for-postgresql-cluster.html. 

In this link too, I have not seen that they enabled archive_mode. Archive mode 
is not necessary as long as you are streaming your WAL files to pg_xlog. 

By using this option in pg_basebackup, the backup becomes a standalone backup:

--xlog-method=stream
"Includes the required transaction log files (WAL files) in the backup. This 
will include all transaction logs generated during the backup. If this option 
is specified, it is possible to start a postmaster directly in the extracted 
directory without the need to consult the log archive, thus making this a 
completely standalone backup."
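For reference, an invocation using that option might look like the following sketch (host, user, and target directory are assumptions):

```
# Streams WAL over a second replication connection while copying the data
# directory, so the result can start without consulting a WAL archive.
pg_basebackup -h localhost -U postgres -D /backup/base --xlog-method=stream -P
```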

Also, even if I have all the WAL files, how do I restore up to time T3? I am 
looking at pgbackrest to learn how to restore a backup up to time T3. 

Our private DBMS backup does not have any data after time T3, hence I don't 
want the postgresql backup to contain any transactions after time T3.

Please let me know if my understanding is correct, and whether I am doing the 
backup the correct way.

Regards,
Ramkumar.

-Original Message-
From: David Steele [mailto:da...@pgmasters.net] 
Sent: Thursday, December 03, 2015 1:27 AM
To: pgsql-general@postgresql.org; Yelai, Ramkumar IN BLR STS
Subject: Re: [GENERAL] Pgbasebackup help

On 11/30/15 6:28 AM, Yelai, Ramkumar IN BLR STS wrote:
> Hi All,
>  
> I need some help in postgresql base backup.
>  
> We are currently using multiple DBMS in our project and postgresql is 
> one of them. Our private DBMS keeps the online data and postgresql 
> keeps online as well as historical data.
>  
> At present, we are doing the backup/restore process for our project. 
> So we planned to use Pg_basebackup instead of  pg_dump.
>  
> Below is the backup steps.
>  
> Time-T1   = Start the backup of private DBMS.
> Time-T2   = Finished the private backup DBMS.
> Time-T3   = Start the pg_basebackup.
> Time-T4   = End the Pg_basebackup.
>  
> Here the requirement is that we don't want to restore the data after 
> Time-T3.  But when I followed this approach 
> https://opensourcedbms.com/dbms/point-in-time-recovery-pitr-using-pg_basebackup-with-postgresql-9-2/, 
> I am still getting the information archived from Time-T3 to Time-T4.
>  
> It seems the WAL archives hold all the transactions that happened 
> between Time T3 and Time T4.

This is the expected behavior.  The WAL generated during the backup must be 
replayed to make the database consistent so T4 is the earliest you can possibly 
stop recovery.

> Also, I don't want to enable archive_mode = on as it requires maintaining 
> archive files.

As it turns out, archiving would be the solution to your problem.  If you were 
archiving you could restore a *previous* backup and then replay WAL to exactly 
T3.  There might be some jitter from clock differences but it should have the 
desired effect.
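For context, the archiving being suggested here is enabled with settings along these lines (the archive directory is an assumption):

```
# postgresql.conf
wal_level = archive        # or hot_standby
archive_mode = on
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
```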

I've done this to keep development databases reasonably in sync with each other, 
and in practice it works quite well.

> So I decided to enable only these parameters.
>  
> Postgresql.conf
> -
> wal_level = hot_standby
> max_wal_senders = 1
>  
> And added replication permissions for the current user in pg_hba.conf.
>  
> It does what I need. In the backup I did not have the data between 
> T3-T4.

There's not enough detail here for me to make out what you are doing.  Is 
there still a pg_basebackup going on or are you just copying files?

If archive_mode is not enabled then wal_level = hot_standby is likely ignored.

What you end up with may start, but I doubt it's consistent.  I don't see how 
you could use pg_basebackup without archiving and end up at T3 with a 
consistent cluster.

Here's a tutorial I wrote for pgBackRest that covers point in time recovery and 
goes into a bit more detail than the article you cited:

http://www.pgbackrest.org/user-guide.html#pitr

This method requires archive_mode to be enabled, which I believe is the correct 
way to achieve the desired result.

--
-David
da...@pgmasters.net





[GENERAL] Pgbasebackup help

2015-12-01 Thread Yelai, Ramkumar IN BLR STS
Hi All,

I need some help in postgresql base backup.

We are currently using multiple DBMS in our project and postgresql is one of 
them. Our private DBMS keeps the online data and postgresql keeps online as 
well as historical data.

At present, we are doing the backup/restore process for our project. So we 
planned to use Pg_basebackup instead of  pg_dump.

Below are the backup steps.

Time-T1   = Start the backup of private DBMS.
Time-T2   = Finished the private backup DBMS.
Time-T3   = Start the pg_basebackup.
Time-T4   = End the Pg_basebackup.

Here the requirement is that we don't want to restore the data after Time-T3.  
But when I followed this approach 
https://opensourcedbms.com/dbms/point-in-time-recovery-pitr-using-pg_basebackup-with-postgresql-9-2/,
I am still getting the information archived from Time-T3 to Time-T4.

It seems the WAL archives hold all the transactions that happened between 
Time T3 and Time T4.

Also, I don't want to enable archive_mode = on as it requires maintaining 
archive files.

So I decided to enable only these parameters.

Postgresql.conf
-
wal_level = hot_standby
max_wal_senders = 1

And added replication permissions for the current user in pg_hba.conf.

It does what I need. In the backup I did not have the data between T3-T4.

Is this correct, or is there anything I am missing?

Please let me know.

With best regards,
Ramkumar Yelai

Siemens Technology and Services Private Limited
CT DC AA I HOUSE DEV GL4
84, Hosur Road
Bengaluru 560100, Indien
Tel.: +91 80 33136494
Fax: +91 80 33133389
Mobil: +91 9886182031
mailto:ramkumar.ye...@siemens.com
http://www.siemens.co.in/STS

Registered Office: 130, Pandurang Budhkar Marg, Worli, Mumbai 400 018. 
Telephone +91 22 39677000. Fax +91 22 39677075. Other Offices: Bangalore, 
Chennai, Gurgaon, Noida, Kolkata, Pune. Corporate Identity 
number:U9MH1986PLC093854




[GENERAL] Pgbouncer compile in VS2013

2015-06-24 Thread Yelai, Ramkumar IN BLR STS
Hi All,

I wanted to compile pgbouncer in Visual studio 2013-64bit.

I have not found any help in the forums.  Also, the pgbouncer source mentions 
that it has not been tested with Visual Studio.

Please let me know whether it is possible to compile it with VS2013 and have it 
work correctly.

With best regards,
Ramkumar Yelai





Re: [GENERAL] user constructed where clause

2015-06-12 Thread Yelai, Ramkumar IN BLR STS
Thanks  David for the nice suggestion.

Text search would not meet my requirement, because the user wants to provide a 
condition similar to a WHERE clause.

I thought of letting Postgres query the data based only on the time column. That 
result would be stored in a cursor, and 5000 rows fetched sequentially when the 
user asks. Once the result is received by the UI, the UI would do the complete 
query processing.

I am not sure about this idea, but is it possible to apply a where clause to the 
cursor result (in batches of 5000)?
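As a sketch of the batching idea (using the PCLogTable from the original post): note that FETCH itself takes no WHERE clause, so any filtering has to live in the cursor's defining query or be done client-side.

```
BEGIN;

-- The defining query is the only place a WHERE clause can apply;
-- FETCH just returns the next rows from this result.
DECLARE log_cur CURSOR FOR
    SELECT * FROM PCLogTable
    ORDER BY LOG_TIME DESC;

FETCH 5000 FROM log_cur;   -- first batch for the UI
FETCH 5000 FROM log_cur;   -- next batch when the user asks

CLOSE log_cur;
COMMIT;
```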

Thanks & Regards,
Ramkumar.

From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Wednesday, June 10, 2015 1:48 AM
To: Yelai, Ramkumar IN BLR STS
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] user constructed where clause

On Tue, Jun 9, 2015 at 4:48 AM, Yelai, Ramkumar IN BLR STS 
<ramkumar.ye...@siemens.com> wrote:
Now, the requirement is that if the user provides filter information based on 
any column from the web UI, this filter will let the user construct the “where 
clause” and provide it to postgresql.
In a month this table exceeds millions of records. If I use the user-constructed 
query then it could take a lot of time, as the user may not include an indexed 
column in the user-constructed query.

​The first solution that comes to mind is to construct a document containing 
the relevant information, index that, and then provide a single search input 
field that is used to construct a text search query that you apply against the 
indexed document.

In short, forget the fact that there are fields and just index and search the 
content.

Add additional controls to the UI for just those fields that are indexed.
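A minimal sketch of that suggestion against the table from the original post (which columns to fold into the document is an assumption):

```
-- Build one indexed tsvector "document" from the searchable fields.
ALTER TABLE PCLogTable ADD COLUMN log_doc tsvector;

UPDATE PCLogTable
   SET log_doc = to_tsvector('english',
                   coalesce(LOG_NOTE, '') || ' ' || coalesce(LOG_STATUS, ''));

CREATE INDEX log_doc_idx ON PCLogTable USING gin (log_doc);

-- The UI's single search box then becomes one text-search predicate.
SELECT * FROM PCLogTable
 WHERE log_doc @@ plainto_tsquery('english', 'user search words');
```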

David J.



[GENERAL] user constructed where clause

2015-06-09 Thread Yelai, Ramkumar IN BLR STS
Hi All,

I have one requirement in my project and don't know how to achieve.

My project is receiving the log information from PC in the network, and that 
information is stored in the below table.

CREATE TABLE PCLogTable
(
  LOG_ID serial NOT NULL,
  LOG_USER_ID integer DEFAULT 0,
  LOG_TYPE_ID integer,
  LOG_PC_ID integer NOT NULL,
  LOG_PORT text,
  LOG_NOTE text,
  LOG_ACK boolean,
  LOG_TIME timestamp without time zone,
  LOG_NON_PENDING_STATUS text,
  LOG_STATUS text,
  LOG_MONITORED_STATE text,
  LOG_RSE_RAISE_TIMESTAMP text,
  LOG_ADD_INFO jsonb,
  CONSTRAINT LOG_ID PRIMARY KEY (LOG_ID),
  CONSTRAINT LOG_TYPE_ID FOREIGN KEY (LOG_TYPE_ID)
  REFERENCES LogTextTable (LOG_TYPE_ID) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE
)

CREATE INDEX log_table_index
  ON PCLogTable
  USING btree
  (LOG_TIME DESC, LOG_PC_ID);

At present, I have written a query to get the latest 5000 log records from this 
table, and it executes in 15 seconds, run periodically.

Now, the requirement is that if the user provides filter information based on 
any column from the web UI, this filter will let the user construct the where 
clause and provide it to postgresql.
In a month this table exceeds millions of records. If I use the user-constructed 
query then it could take a lot of time, as the user may not include an indexed 
column in the user-constructed query.

Also, they want to see all the records that match the user-constructed query.

With best regards,
Ramkumar Yelai





[GENERAL] reltoastidxid altenates in postgresql 9.4

2015-03-10 Thread Yelai, Ramkumar IN BLR STS
Hi All,

I am using the following code to determine how much disk space could be saved 
after deleting certain tables (passed as a parameter to this function):

CREATE OR REPLACE FUNCTION Get_Tables_Recovery_Size(IN tableNames text[])
RETURNS TABLE(table_size bigint)
AS
$$
BEGIN
RETURN QUERY
(
    SELECT COALESCE(SUM(ALLTABLE.totalsize), 0)::bigint
    FROM
    (
        SELECT relname,
               (tablesize + indexsize + toastsize + toastindexsize) AS totalsize
        FROM
        (
            SELECT ns.nspname, cl.relname,
                   pg_relation_size(cl.oid) AS tablesize,
                   COALESCE(
                       (SELECT SUM(pg_relation_size(indexrelid))::bigint
                        FROM pg_index WHERE cl.oid = indrelid), 0
                   ) AS indexsize,
                   CASE
                       WHEN reltoastrelid = 0 THEN 0
                       ELSE pg_relation_size(reltoastrelid)
                   END AS toastsize,
                   CASE
                       WHEN reltoastrelid = 0 THEN 0
                       ELSE pg_relation_size((SELECT reltoastidxid FROM pg_class ct
                                              WHERE ct.oid = cl.reltoastrelid))
                   END AS toastindexsize
            FROM pg_class cl,
                 pg_namespace ns
            WHERE pg_relation_size(cl.oid) != 0
              AND cl.relnamespace = ns.oid
              AND ns.nspname NOT IN ('pg_catalog', 'information_schema')
              AND cl.relname IN (SELECT table_name FROM information_schema.tables
                                 WHERE table_type = 'BASE TABLE')
        ) ss
        WHERE relname IN (SELECT $1[i] FROM generate_subscripts($1, 1) g(i))
    ) ALLTABLE
);
END;
$$ LANGUAGE plpgsql;

After migrating to 9.4, I am getting an error that reltoastidxid is not present 
in pg_class. This column was removed as part of the REINDEX CONCURRENTLY work:
http://www.postgresql.org/message-id/e1uurj8-0001au...@gemulon.postgresql.org

Would you please tell me how to modify this code?
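The thread does not record an answer, but one possible 9.4-compatible rework (a sketch, not the author's final code) is to drop the reltoastidxid lookup and let pg_total_relation_size() count the heap, indexes, TOAST table, and TOAST index in one call:

```
-- pg_total_relation_size() already includes indexes and TOAST (with its
-- index), replacing the per-component CASE logic in the original function.
SELECT COALESCE(SUM(pg_total_relation_size(cl.oid)), 0)::bigint AS table_size
FROM pg_class cl
JOIN pg_namespace ns ON cl.relnamespace = ns.oid
WHERE cl.relkind = 'r'
  AND ns.nspname NOT IN ('pg_catalog', 'information_schema')
  AND cl.relname = ANY ($1);   -- $1 = the text[] of table names
```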

With best regards,
Ramkumar Yelai






[GENERAL] Timstamp to Json conversion issue

2015-01-13 Thread Yelai, Ramkumar IN BLR STS
Hi

I have migrated my PostgreSQL 9.2 schemas to PostgreSQL 9.4 (only the table 
schema was migrated, not the data).  I used the following SQL to convert table 
output to JSON in 9.2:

select array_to_json(array_agg(row_to_json(R.*)))::text from (  select  ID,  
TIME from SN_TestTable )R;

In 9.2, I used to get this result:  [{"id":1,"time":"2015-01-13 12:09:45.348"}]

But the same code in 9.4 produces this result:  
[{"id":1,"time":"2015-01-13T12:09:45.348"}] . A "T" separator is added between 
the date and the time.

It seems the JSON conversion follows ISO 8601 for timestamps. The issue can be 
resolved by sending the TIME column as text instead of timestamp without time zone.

But how do I fix this problem without converting to text?
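For reference, the text-based workaround mentioned above might look like this (the to_char format string is an assumption chosen to match the 9.2 output):

```
-- to_char() renders the timestamp in the old space-separated style;
-- this is the cast-to-text workaround, not a change to to_json's format.
select array_to_json(array_agg(row_to_json(R.*)))::text
from (
    select ID,
           to_char(TIME, 'YYYY-MM-DD HH24:MI:SS.MS') as time
    from SN_TestTable
) R;
```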

With best regards,
Ramkumar Yelai





[GENERAL] Postgresql Service and Server synch up issue.

2013-11-15 Thread Yelai, Ramkumar IN BLR STS
Hi All,



We are using PostgreSQL 9.2 as the main backend server for our project and have 
been using it since last year. We have configured postgresql as a Windows 
service.  Until last week we had no issues starting the server from the service 
management console, but last week we hit an issue where, while rebooting, the 
service timed out during server startup and pg_ctl.exe closed.



This issue has been faced by a few people, as described in the links below.



http://www.postgresql.org/message-id/flat/e1usoey-00024n...@wrigleys.postgresql.org#e1usoey-00024n...@wrigleys.postgresql.org

http://postgresql.1045698.n5.nabble.com/Unreliable-quot-pg-ctl-w-start-quot-again-td5435767.html



Based on the links, I analyzed the pg_ctl.c code and found the following:



write_eventlog(EVENTLOG_INFORMATION_TYPE, _("Waiting for server startup...\n"));

if (test_postmaster_connection(true) != PQPING_OK)
{
    write_eventlog(EVENTLOG_ERROR_TYPE, _("Timed out waiting for server startup\n"));
    pgwin32_SetServiceStatus(SERVICE_STOPPED);
    return;
}



test_postmaster_connection(bool do_checkpoint)
{
    ...
    for (i = 0; i < wait_seconds; i++)
    {
        ...
        if (i >= 5)
        {
            struct stat statbuf;

            if (stat(pid_file, &statbuf) != 0)
                return PQPING_NO_RESPONSE;

            if (found_stale_pidfile)
            {
                write_stderr(_("\n%s: this data directory appears to be running a pre-existing postmaster\n"),
                             progname);
                return PQPING_NO_RESPONSE;
            }
        }
    }
    ...
}



It starts checking after only 5 seconds whether postmaster.pid is available. If 
it is not, it logs "Timed out waiting for server startup", followed by "The 
Postgres service entered the stopped state".



As per the links, I can change the 5 to 20 or some other value to avoid the timeout.



Please help me here: how can I resolve this issue in a better way?



Thanks & Regards,

Ramkumar.











Re: [GENERAL] Pgbouncer help

2013-08-28 Thread Yelai, Ramkumar IN BLR STS

Thanks Jeff,

As I understand from your point, instead of connecting to the PostgreSQL port, I 
should connect to the PgBouncer port. 

I am using libpq library functions to connect to PostgreSQL, and the code 
changes would be like this:

Previous code:

sprintf(conninfo, "user=%s password=%s dbname=%s hostaddr=%s port=%d",
        PG_USER, PG_PASS, PG_DB, PG_HOST, PG_PORT);
conn = PQconnectdb(conninfo);

New code:

sprintf(conninfo, "user=%s password=%s dbname=%s hostaddr=%s port=%d",
        PG_USER, PG_PASS, PG_DB, PG_HOST, PG_BOUNCER_PORT);
conn = PQconnectdb(conninfo);


-Original Message-
From: Jeff Janes [mailto:jeff.ja...@gmail.com] 
Sent: Tuesday, August 27, 2013 11:10 PM
To: Yelai, Ramkumar IN BLR STS
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Pgbouncer help

On Tue, Aug 27, 2013 at 1:34 AM, Yelai, Ramkumar IN BLR STS 
ramkumar.ye...@siemens.com wrote:
 HI



> In our current project, we are opening several postgresql connections. 
> A few connections are frequently used and a few are occasionally used. 
> Hence we plan to adopt a connection pooling method to avoid opening more 
> connections.  We plan to use PgBouncer.  Most pgbouncer examples 
> show how to configure it, but they do not explain how to use it from C++.
>
> Please provide an example of how to use it from C++.

pgbouncer is designed to look (to the client) just like a normal postgresql 
server.

If you want all connections to the database to go through pgbouncer, you can 
move the real server to a different port, and then start up pgbouncer on that 
vacated port.  In this case, the clients do not need to make any changes at all 
to their configuration.

If you want to keep the real server on the same port as it currently is and to 
use a special port to go through pgbouncer, then you need to change the clients 
to use that new port number.  You do this the same way you would change the 
client to use a different port if that different port were a regular postgresql 
server.

Cheers,

Jeff


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Pgbouncer help

2013-08-28 Thread Yelai, Ramkumar IN BLR STS
Thanks for your great inputs.

Let me see, how to handle these situations in our project.

Regards,
Ramkumar

-Original Message-
From: Steve Crawford [mailto:scrawf...@pinpointresearch.com] 
Sent: Wednesday, August 28, 2013 1:09 AM
To: Jeff Janes
Cc: Yelai, Ramkumar IN BLR STS; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Pgbouncer help

On 08/27/2013 10:40 AM, Jeff Janes wrote:
 On Tue, Aug 27, 2013 at 1:34 AM, Yelai, Ramkumar IN BLR STS 
 ramkumar.ye...@siemens.com wrote:
 HI



 In our current project, we are opening several postgresql connection. 
 Few connections are frequently used and few are occasionally used. 
 Hence we plan to adapt connection pool method to avoid more 
 connection to open.  We plan to use Pgbouncer.  Most of the 
 pgbouncer example shows how to configure, but they are not explaining how to 
 use in C++.



 Please provide me a example, how to use it in C++.
 pgbouncer is designed to look (to the client) just like a normal 
 postgresql server
However...

Since clients are reusing previously accessed server sessions, be sure to 
consider the implication of the different pool types and reset options.

For example, if you have multi-statement transactions you cannot, of course, 
use statement-level pooling since the server connection is released after the 
statement.

And if you set any runtime parameters (set time zone to..., set statement 
timeout..., etc.) then you will probably need to use session-level pooling and 
you will need to set server_reset_query appropriately otherwise you risk ending 
up either having parameters set to values you did not expect by a previously 
connected client or having parameters you set disappear when your next 
statement is assigned to a different server connection.

A similar issue exists if you use temporary tables: you need to be sure to stick 
with the same server connection while your processing needs the temporary table, 
and you need to clean it up when you release the connection so it doesn't use 
extra resources and doesn't interfere with statements issued by a subsequent 
client.

For more, see the following if you haven't read them already:
http://pgbouncer.projects.pgfoundry.org/doc/config.html
http://wiki.postgresql.org/wiki/PgBouncer
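As an illustration of those points, a session-pooling pgbouncer.ini fragment might look like this (values are assumptions, not recommendations):

```
; pgbouncer.ini (sketch)
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = session              ; statement pooling breaks multi-statement transactions
server_reset_query = DISCARD ALL ; clears SET parameters and temp tables on release
```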

Cheers,
Steve





[GENERAL] Pgbouncer help

2013-08-27 Thread Yelai, Ramkumar IN BLR STS
HI



In our current project, we are opening several postgresql connections. A few 
connections are frequently used and a few are occasionally used. Hence we plan 
to adopt a connection pooling method to avoid opening more connections.  We plan 
to use PgBouncer.  Most pgbouncer examples show how to configure it, but they do 
not explain how to use it from C++.



Please provide an example of how to use it from C++.



Thanks & Regards,

Ramkumar



Re: [GENERAL] Parallel Insert and Delete operation

2012-11-07 Thread Yelai, Ramkumar IN BLR STS
Ramkumar Yelai wrote:
[is worried that a database might become inconsistent if conflicting INSERTs 
and DELETEs occur]
> @Albe - I got your first point. I am a little skeptical about the second
> point because Postgres could have avoided this lock by using MVCC. Please
> correct me if I am wrong.

Which lock could have been avoided?

PostgreSQL locks rows when the data change.
That has little to do with MVCC.

If you INSERT into a table that has a foreign key, the referenced row in the 
referenced table gets a SHARE lock that conflicts with the EXCLUSIVE lock 
required for a DELETE.
So they cannot execute concurrently.

Yours,
Laurenz Albe

Thanks very much, Albe.

I was not aware that the DELETE would take that lock.




Re: [GENERAL] Parallel Insert and Delete operation

2012-11-05 Thread Yelai, Ramkumar IN BLR STS
Thanks Albe Laurenz, Moshe Jacobson

@Albe - I got your first point. I am a little skeptical about the second point 
because Postgres could have avoided this lock by using MVCC. Please correct me 
if I am wrong?
@Jacobson - It could be possible that a foreign key violation may arise, but 
when it arises the procedure could stop and roll back the transaction. Please 
correct me if I am wrong?

Here, I want both operations to run successfully, so what do I have to do? I 
would like to do a prototype test on this, so please tell me whether it is 
possible from pgAdmin or whether I have to write some example code for it.

Thanks & Regards,
Ramkumar

From: Moshe Jacobson [mailto:mo...@neadwerx.com]
Sent: Wednesday, October 31, 2012 9:01 PM
To: Albe Laurenz
Cc: Yelai, Ramkumar IN BLR STS; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Parallel Insert and Delete operation

It is also possible that you will get a foreign key violation exception on the 
process inserting into table 2, but you will not get database inconsistency.
On Wed, Oct 31, 2012 at 9:33 AM, Albe Laurenz 
<laurenz.a...@wien.gv.at> wrote:
Yelai, Ramkumar IN BLR STS wrote:
> Sent: Wednesday, October 31, 2012 12:40 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Parallel Insert and Delete operation

 Hi All,

 Please clarify me the following example.

 I have 2 tables

 Table1  - ( it has one primary key and few  columns )
 Table2  - ( it has one primary key and few columns.  It has one
foreign key, which refers table1
 primary key ).

 I have 2 operations, which are written in pl/pgsql procedure.

 Operation1() - Inserts the records to table2 at every hour basis.
 Operation2() - Delete the records from Table 1 and Table2 based on the
primary key.

> What if both operations are running at the same time for the same primary
> key.

 what I have to take care to run these two operations perfectly without
creating inconsistency in
 database.
With the foreign key in place there can be no entry in table2
that does not have a corresponding entry in table1.

Concurrency is solved with locks, so one of the concurrent
operations might have to wait until the other one is done.

That is handled by the database system automatically.

Yours,
Laurenz Albe





--
Moshe Jacobson
Nead Werx, Inc. | Senior Systems Engineer
2323 Cumberland Parkway, Suite 201 | Atlanta, GA 30339
mo...@neadwerx.com | www.neadwerx.com



[GENERAL] Parallel Insert and Delete operation

2012-10-31 Thread Yelai, Ramkumar IN BLR STS
Hi All,

Please clarify me the following example.

I have 2 tables

Table1  - (it has one primary key and a few columns)
Table2  - (it has one primary key and a few columns.  It has one foreign key, 
which refers to Table1's primary key).

I have 2 operations, which are written in pl/pgsql procedure.

Operation1() - Inserts records into Table2 on an hourly basis.
Operation2() - Deletes records from Table1 and Table2 based on the primary 
key.

What if both operations are running at the same time for the same primary key?

What do I have to take care of to run these two operations safely, without 
creating inconsistency in the database?
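The setup described above can be sketched as follows (hypothetical DDL, since the real definitions are not shown):

```
CREATE TABLE table1 (
    id   integer PRIMARY KEY,
    note text
);

CREATE TABLE table2 (
    id        integer PRIMARY KEY,
    table1_id integer NOT NULL REFERENCES table1 (id),  -- the foreign key
    payload   text
);

-- Operation1's INSERT takes a share lock on the referenced table1 row, which
-- conflicts with Operation2's DELETE of that row, so one transaction waits
-- for the other rather than producing an inconsistent state.
```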

Thanks & Regards,
Ramkumar




Re: [GENERAL] Re: Need help in reclaiming disk space by deleting the selected records

2012-10-01 Thread Yelai, Ramkumar IN BLR STS


-Original Message-
From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at] 
Sent: Friday, September 28, 2012 1:07 PM
To: Yelai, Ramkumar IN BLR STS; pgsql-general@postgresql.org
Cc: scrawf...@pinpointresearch.com; and...@2ndquadrant.com
Subject: RE: [GENERAL] Re: Need help in reclaiming disk space by deleting the 
selected records

Yelai, Ramkumar wrote:
 Anything exceeding a few hundred partitions is not considered a good
idea.
 The system needs to keep track of all the tables, and query planning
for such a partitioned table
 might be expensive.
 
 1440 is probably pushing the limits, but maybe somebody with more
experience can say more.

> By mistake I said 1440 tables, but that is incorrect; below is the total
> number of tables:
>
> 7 base tables x 120 months = 840 child tables.
>
> As per your statement, if I create this many tables it will affect
> performance. But as per the documentation
> (http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html),
> constraint_exclusion will improve query performance. Please clarify for me
> how query planning will be expensive.

The planner will have to decide which of the 840 tables to access.

> I have one more strategy: instead of creating 10 years of tables up front,
> I'd like to use batch processing - create 2 years of tables (240 tables),
> and when we go beyond 2 years, create the next 2 years of tables and update
> the trigger; or use 5 years (480 tables) instead of 2 years.
>
> The above approach will not create a more heavily partitioned table, and if
> the user wants space they can truncate the old tables. Please let me know
> if this is a good approach?

I don't understand that in detail.

I would recommend that you prototype some of these variants and run some 
performance tests.  That's the only good way to know what will perform well in 
your environment.

Yours,
Laurenz Albe

Thanks, Laurenz Albe.

After I went through the articles below, I understand query plans for 
partitioned tables and their limits.

http://stackoverflow.com/questions/6104774/how-many-table-partitions-is-too-many-in-postgres
http://postgresql.1045698.n5.nabble.com/Table-partitioning-td3410542.html


As per our functionality (the 7 tables represent 7 unique archiving logs), we 
will not be querying 840 tables or 7 base tables at the same time; i.e., each 
unique archiving-log table will have only 120 child tables, hence the planner 
will have to decide which of 120 tables to access.

In addition, at any time I will be reading only one partition table among the 
120 tables as per our computations.  In this computation we will not use any 
joins or combine the partition tables. 

As Laurenz said, I will build a prototype and check the query plans 
for our queries. 
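One way to check those plans is EXPLAIN with constraint_exclusion enabled; a properly pruned plan should touch only one child table (table and column names are illustrative):

```sql
SET constraint_exclusion = partition;  -- the default in 9.1

EXPLAIN ANALYZE
SELECT count(*)
  FROM archive_log
 WHERE logged_at >= '2013-05-01' AND logged_at < '2013-06-01';
-- Look for a single child scan in the plan. If all 120 children appear,
-- the WHERE clause is not matching the partitions' CHECK constraints.
```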

Please let me know if you have any points or suggestions.

Thanks & regards,
Ramkumar






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Need help in reclaiming disk space by deleting the selected records

2012-09-26 Thread Yelai, Ramkumar IN BLR STS
Thanks Steve and Andres,

I read these articles
http://www.linuxinsight.com/optimize_postgresql_database_size.html
http://yuval.bar-or.org/blog/2010/08/sluggish-postgresql-databases-and-reindexing-indexes/
http://www.if-not-true-then-false.com/2009/partitioning-large-postgresql-tables-and-handle-millions-of-rows-efficiently-and-quickly/

and I have some more questions on the Steve comments.

1.  Do I need to run REINDEX to reclaim space, or will autovacuum handle 
reindexing?
2.  CLUSTER, REINDEX and VACUUM FULL lock the table. Do we need to  
avoid database operations (SELECT, DELETE, INSERT) while doing disk cleanup? 
Just curious: what happens if I keep inserting while these commands are running?
3.  Do all three commands need additional disk space to do their 
work? Am I correct?
4.  Do all database servers (Oracle, SQL Server, MySQL) need 
downtime for this kind of disk cleanup?
5.  I am very happy to use TRUNCATE and table partitioning; they 
satisfy my requirements. But in order to cover 10 years (I currently have 
6 unique archiving tables), I have to create 1440 monthly 
tables. Will that cause any issues, and is there anything I need to consider 
carefully while doing this?
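For questions 1-3, a rough sketch of the commands involved and a way to watch the space they reclaim; the relation names are illustrative:

```sql
-- Size before cleanup (heap + indexes + TOAST).
SELECT pg_size_pretty(pg_total_relation_size('sn_sampltable'));

-- VACUUM FULL rewrites the whole table: it takes an exclusive lock and
-- needs extra disk for the copy. Plain VACUUM only marks space reusable.
VACUUM FULL sn_sampltable;

-- REINDEX rebuilds bloated indexes; autovacuum never rebuilds indexes.
REINDEX TABLE sn_sampltable;

-- With partitioning, discarding old data is a cheap TRUNCATE or DROP
-- of the child table instead of DELETE + VACUUM FULL.
TRUNCATE TABLE archive_log_2012_09;

-- Size after cleanup.
SELECT pg_size_pretty(pg_total_relation_size('sn_sampltable'));
```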

Thanks & Regards,
Ramkumar
_
From: Yelai, Ramkumar IN BLR STS
Sent: Thursday, September 13, 2012 7:03 PM
To: 'pgsql-general@postgresql.org'
Subject: Need help in reclaiming disk space by deleting the selected records


Hi All,

I am a beginner in PostgreSQL and databases. I have a requirement to 
reclaim disk space by deleting rows in a selected time span.  I went 
through the documents and articles on how to get the table size 
(http://wiki.postgresql.org/wiki/Disk_Usage).

But before letting the user delete, I have to show the size of the records in 
the selected time span, and I don't know how to calculate the size of the selected 
records.
In addition, I assume that after deleting the records I have to run the 
VACUUM FULL command to reclaim the space (please correct me if I am wrong, or 
let me know the best approach).

The table looks like this

CREATE TABLE IF NOT EXISTS SN_SamplTable
(
  ID integer NOT NULL,
  Data integer,
  CLIENT_COUNT_TIMESTAMP timestamp without time zone
);
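One way to approximate the size of the rows in the selected time span before deleting them is to sum the per-row sizes; this is only an estimate, since it ignores index, page-header and TOAST overhead (the time span is illustrative):

```sql
-- Approximate on-disk size of the rows in the selected time span.
SELECT pg_size_pretty(sum(pg_column_size(t.*))::bigint) AS approx_size
  FROM sn_sampltable AS t
 WHERE client_count_timestamp >= '2012-01-01'
   AND client_count_timestamp <  '2012-07-01';
```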

Please help me with how to proceed on this.

Thanks & Regards,
Ramkumar.




[GENERAL] Need help in reclaiming disk space by deleting the selected records

2012-09-13 Thread Yelai, Ramkumar IN BLR STS
Hi All,

I am a beginner in PostgreSQL and databases. I have a requirement to 
reclaim disk space by deleting rows in a selected time span.  I went 
through the documents and articles on how to get the table size 
(http://wiki.postgresql.org/wiki/Disk_Usage).

But before letting the user delete, I have to show the size of the records in 
the selected time span, and I don't know how to calculate the size of the selected 
records.
In addition, I assume that after deleting the records I have to run the 
VACUUM FULL command to reclaim the space (please correct me if I am wrong, or 
let me know the best approach).

The table looks like this

CREATE TABLE IF NOT EXISTS SN_SamplTable
(
  ID integer NOT NULL,
  Data integer,
  CLIENT_COUNT_TIMESTAMP timestamp without time zone
);

Please help me with how to proceed on this.

Thanks & Regards,
Ramkumar.