Re: [GENERAL] Oracle to PostgreSQL Migration - Need Information

2015-07-08 Thread dinesh kumar
Hi,

We recently completed a similar migration for one of our customers, using
only open-source tools.

We used the Pentaho Data Integration tool for the online DB migration, which
needed only minimal downtime thanks to a CDC (Change Data Capture) approach.
We also used the Ora2Pg tool to migrate the DB objects, with some manual
syntax modifications.

Regards,
Dinesh
manojadinesh.blogspot.com

On Wed, Jul 8, 2015 at 12:24 PM, Tim Clotworthy <
tclotwor...@bluestonelogic.com> wrote:

> Hello,
>
> I have a customer that is about to undertake a migration of an Oracle 11g
> database to PostgreSQL 9.x (exact version to be determined). I am talking
> not only of the migration of schemas and data, but also of a substantial
> codebase of Pl/SQL stored procedures, as well as many triggers.
>
> I don't think they know yet what they are up against. Everything I have
> read is that this is a very substantial effort. At this stage, they would
> be particularly interested in realistic and practical information on how
> to
> estimate the effort required as well as any best-practices or guidance on
> transition strategies.
>
> I have found official documentation on the PostgreSQL site for porting
> Pl/SQL to PL/pgSQL. This is excellent technical documentation. However,
> their success will require that they are well prepared and realistically
> understand the scope of the effort they are about to undertake.
>
> Thanks for any response!
>


Re: [GENERAL] Oracle to PostgreSQL Migration - Need Information

2015-07-08 Thread dinesh kumar
On Wed, Jul 8, 2015 at 1:20 PM, John R Pierce  wrote:

> On 7/8/2015 1:16 PM, dinesh kumar wrote:
>
>> We recently completed a similar migration for one of our customers, using
>> only open-source tools.
>>
>> We used the Pentaho Data Integration tool for the online DB migration,
>> which needed only minimal downtime thanks to a CDC (Change Data Capture)
>> approach. We also used the Ora2Pg tool to migrate the DB objects, with
>> some manual syntax modifications.
>>
>>
> that's the easy part.
>
> now what about the massive code base of pl/sql and triggers he mentioned ?
>
>
Yeah, we need to rewrite the business logic wherever it uses unsupported
features like autonomous transactions, packages, nested procedures, etc.

Regards,
Dinesh
manojadinesh.blogspot.com


>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Sorting CSV string and removing Duplicates

2015-07-27 Thread dinesh kumar
On Mon, Jul 27, 2015 at 12:53 PM, Alex Magnum  wrote:

> Hello,
>
> I have a csv string in a text field that is unsorted and contains
> duplicates.
> Is there a simple way to remove these and sort the string?
>
> E.g
> 2,18,20,23,1,27,1,2,8,16,17,18,20,22,23,27
>
> i tried string to array and unique but that did not work...
> Any suggestions on how to do this without writing a function?
>
> Any help is appreciated.
>
>
Are you looking for this?

postgres=# SELECT unnest(string_to_array(t, ',')) from test group by 1;
 unnest

 2
 18
 8
 20
 22
 16
 27
 17
 23
 1
(10 rows)


Regards,
Dinesh
manojadinesh.blogspot.com

Thanks
> A
>


Re: [GENERAL] Sorting CSV string and removing Duplicates

2015-07-27 Thread dinesh kumar
On Mon, Jul 27, 2015 at 12:57 PM, dinesh kumar 
wrote:

> On Mon, Jul 27, 2015 at 12:53 PM, Alex Magnum 
> wrote:
>
>> Hello,
>>
>> I have a csv string in a text field that is unsorted and contains
>> duplicates.
>> Is there a simple way to remove these and sort the string?
>>
>> E.g
>> 2,18,20,23,1,27,1,2,8,16,17,18,20,22,23,27
>>
>> i tried string to array and unique but that did not work...
>> Any suggestions on how to do this without writing a function?
>>
>> Any help is appreciated.
>>
>>
> Are you looking for this?
>
> postgres=# SELECT unnest(string_to_array(t, ',')) from test group by 1;
>  unnest
> 
>  2
>  18
>  8
>  20
>  22
>  16
>  27
>  17
>  23
>  1
> (10 rows)
>
>
Or maybe something like this:

postgres=# WITH sortedstring as
postgres-# (
postgres(# SELECT unnest(string_to_array(t, ','))::int from test group by 1
ORDER BY 1
postgres(# ) SELECT array_agg(unnest) FROM sortedstring;
  array_agg
--
 {1,2,8,16,17,18,20,22,23,27}
(1 row)
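
If you want the sorted, de-duplicated result back as a single CSV string, a
one-statement variant is also possible (a sketch, untested, using the sample
string from the original post):

SELECT array_to_string(array_agg(DISTINCT v ORDER BY v), ',')
FROM unnest(string_to_array(
       '2,18,20,23,1,27,1,2,8,16,17,18,20,22,23,27', ',')::int[]) AS v;

This should return 1,2,8,16,17,18,20,22,23,27 as a single text value.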


Regards,
Dinesh
manojadinesh.blogspot.com


>
> Regards,
> Dinesh
> manojadinesh.blogspot.com
>
> Thanks
>> A
>>
>
>


Re: [GENERAL] Fwd: postgresql databases disappearing and reappearing and disappearing on localhost - Postgresql x64 9.3 on Windows 8.1 (64)

2015-08-04 Thread dinesh kumar
On Tue, Aug 4, 2015 at 2:28 PM, Killian Driscoll 
wrote:

> I am using postgresql 9.3 with pgadmin III, and Access 2013 as a front end
> for data entry using (ODBC connection) linked tables, on a Windows 8.1 (64).
>
OK.

> I have one main database I am developing on a localhost:5432, with four
> other test databases I had on the same localhost. A few weeks ago the four
> test databases disappeared from the list in pgadmin III and using the shell
> command to list the databases, only the main database showed up.
>
Could you be more specific about this? When you listed the databases from
the shell, did you see only the one database, or all of them?

> A week ago, the four test databases reappeared and were apparently
> functional (in pgadmin I viewed the tables and they appeared fine). I
> turned off pgadmin and restarted, and the four test databases disappeared
> again.
>
Did you restart the DB server after shutting down pgAdmin?

> Today, I turned on pgadmin and the four appeared again - I took a
> screenshot to confirm I am not hallucinating (!). Again, I turned off
> pgadmin, and restarted and they disappeared again.
>
Regards,
Dinesh


Re: [GENERAL] Unexpected query result

2015-10-05 Thread dinesh kumar
Hi,

On Mon, Oct 5, 2015 at 5:02 AM, Begin Daniel  wrote:

> In order to process a large amount of data I need to run a procedure using
> parallel batch processes.
> The query I run in each process is expected to ...
>
>
It seems you are trying to achieve the same thing we did. See my blog entry,
which may help you in this scenario.
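
As a side note: on PostgreSQL 9.5 (not yet released at the time of this
thread) and later, this claim-a-batch pattern can be handled natively with
FOR UPDATE SKIP LOCKED. A minimal sketch (untested), assuming the same
wait4processing table and procedure:

WITH ids AS
(
    SELECT id FROM wait4processing
    ORDER BY id
    LIMIT 500
    FOR UPDATE SKIP LOCKED  -- each worker claims a disjoint batch
),
deleted AS
(
    DELETE FROM wait4processing w
    USING ids
    WHERE w.id = ids.id
    RETURNING w.id
)
SELECT build_contributions_clusters(min(id), max(id)) FROM deleted;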

> 1- select a bunch of ids (500) from a table (wait4processing) containing
> the list of all records to process
> 2- remove selected records from wait4processing table in order to
> eliminate duplicate processing
> 3- run the procedure (build_contributions_clusters) over the range of
> selected ids
>
> --The query I use:
> With ids as( delete from wait4processing where id in( select id from
> wait4processing limit 500)  returning id)
> select build_contributions_clusters(min(id),max(id)) from ids;
>
> The query runs properly if I send it sequentially (wait for the completion
> of the query before sending it again) but it doesn't work when sending
> multiple occurrences in parallel.  Seems from the results I got that the
> first query received by the server runs properly but the following ones try
> to process the same first 500 records even if deleted - the
> build_contributions_clusters procedure receives NULL values instead of the
> following 500 records.
>
> Since I am almost certain it is the expected behavior, I would like to
> understand why, and I would also appreciate seeing alternative queries
> to do the job.
>
> Thanks :-)
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] Database size on disk

2015-10-05 Thread dinesh kumar
On Mon, Oct 5, 2015 at 12:47 PM, Quiroga, Damian 
wrote:

> Hi,
>
>
>
> I’m looking for some advice on how to determine the DB size on disk to
> trigger some cleanup tasks (deleting old data) if some threshold is reached.
>
>
>
> Let’s say that for simplicity I don’t want to ask for the size on disk to
> the OS, but rather to PostgreSQL itself. And I only have a single database
> other than the defaults (‘template’, etc).
>
>
>
> In that scenario is using pg_database_size() reasonable? Does that
> function return the actual size on disk or is it a calculation? Is there a
> better alternative?
>
>
>

Yes, pg_database_size() reports the actual size on disk. IIRC, internally it
walks the database directory and sums the file sizes, much as "du" does.




> Thanks,
>
>
>
> Damian
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] Pattern match against array elements?

2015-10-12 Thread dinesh kumar
On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster 
wrote:

> Is there any way to do a pattern match against the elements of an array in
> postgresql (9.4 if the version makes a difference)? I have a grouped query
> that, among other things, returns an array of values, like:
>
> SELECT lognum, array_agg(flightnum) as flightnums FROM logs GROUP BY
> lognum;
>
> Where the flightnum field is a varchar containing either a text string or
> a three-or-four digit number. Now say I want to select all logs that have a
> flight number starting with an '8' (so '800' or '8000' series flights). My
> first thought was to do something like this:
>
> SELECT * FROM (SELECT lognum,array_agg(flightnum) as flightnums FROM logs
> GROUP BY lognum) s1 WHERE '8%' like ANY(flightnums);
>
> But while this doesn't give an error, it also doesn't return any results.
> I'm guessing that this is because the wildcard is on the left of the
> operator, and needs to be on the right. Of course, turning it around to be:
>
> WHERE ANY(flightnum) like '8%'
>
> gives me a syntax error. So is there any way I can run this query such
> that I get any rows containing a flight number that starts with an 8 (or
> whatever)?
>
>
Are you looking for this?

SELECT lognum, array_agg(flightnum) FROM logs WHERE flightnum ~ '^8' GROUP
BY lognum;
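
Note that this aggregates only the matching flight numbers. If you instead
want the full flightnums array for every log that contains at least one
match, a HAVING clause with bool_or() keeps the whole group (a sketch,
untested):

SELECT lognum, array_agg(flightnum) AS flightnums
FROM logs
GROUP BY lognum
HAVING bool_or(flightnum LIKE '8%');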




> ---
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293 <#>
> ---
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] Creating Report for PieChart

2015-10-13 Thread dinesh kumar
+Adding to Scott

On Tue, Oct 13, 2015 at 6:25 PM, Scott Mead  wrote:

>
>
>
> On Tue, Oct 13, 2015 at 9:14 PM, Scott Mead  wrote:
>
>>
>>
>> On Oct 13, 2015, at 19:56, Alex Magnum  wrote:
>>
>> Hello,
>> I need to process some statistics for a pie chart (json) where I only
>> want to show a max of 8 slices. If I have more data points like in below
>> table I need to combine the extra ones into a slice called others. If
>> there are 8 or fewer, I use them as is.
>>
>> I am currently doing this with a plperl function which works well but was
>> just wondering out of curiosity if that could be done within an SQL query.
>>
>> Anyone having done something similar who could point me in the right
>> direction?
>>
>>
>> SELECT count(*),country_name FROM stats_archive WHERE id=400 GROUP BY
>> country_name ORDER BY COUNT DESC;
>>  count |   country_name
>> ---+---
>>302 | Malaysia
>> 65 | Singapore
>> 57 | Thailand
>> 26 | Indonesia
>> 15 | France
>> 14 | United States
>> 14 | India
>> 13 | Philippines
>> 12 | Vietnam
>> 10 | Republic of Korea
>> 10 | Canada
>>  7 | Australia
>>  6 | Brazil
>>  6 | Czech Republic
>>  5 | Switzerland
>>  4 | Saudi Arabia
>>  3 | Ireland
>>  3 | Japan
>>  3 | Sweden
>>  3 | South Africa
>>  3 | Belarus
>>  3 | Colombia
>>  3 | United Kingdom
>>  1 | Peru
>>
>>
>>country_name  | count | perc
>> -+---+---
>>  Malaysia|   302 |  51.4
>>  Singapore   |65 |  11.0
>>  Thailand|57 |   9.7
>>  Indonesia   |26 |   4.4
>>  France  |15 |   2.6
>>  United States   |14 |   2.4
>>  India   |14 |   2.4
>>  Others  |95 |  16.1
>>  Total   |   588 |   100
>>
>> Thanks a lot for any suggestions
>>
>> I would use rank to get a rank number for each record.
>>
>
>   Sorry, Sent the last one from my phone, here's an example:
>
>
>
> Use 'rank' to generate the rank order of the entry.
>
> postgres=# select country, count(1) num_entries,
> rank() over (order by count(1) DESC)
> from test GROUP by country ORDER BY num_entries DESC;
>  country | num_entries | rank
> -+-+--
>  US  |  20 |1
>  CA  |  15 |2
>  SP  |   8 |3
>  IT  |   7 |4
> (4 rows)
>
> There's probably an easier way to do this without a sub-select, but, it
> works.
>
> postgres=# SELECT country, num_entries, rank
>FROM (select country, count(1) num_entries,
>rank() over (order by count(1) DESC)
>FROM test GROUP by country
> ) foo WHERE rank < 4;
>
>  country | num_entries | rank
> -+-+--
>  US  |  20 |1
>  CA  |  15 |2
>  SP  |   8 |3
> (3 rows)
>
> postgres=#
>
>

Not sure which PG version you are using, but if you are on 9.4 or later, you
can use aggregate FILTER clauses as below.

postgres=# SELECT * FROM stats_archive ;
 cname
---
 I
 I
 U
 J
 K
(5 rows)

postgres=# WITH total AS
(
SELECT COUNT(*) cnt, cname, ROW_NUMBER() OVER() FROM
stats_archive GROUP BY 2
)
SELECT 'others' as cname, sum(cnt) filter (where row_number >2) FROM total
UNION
SELECT cname, cnt FROM total WHERE row_number<=2;
 cname  | sum
+-
 J  |   1
 I  |   2
 others |   2
(3 rows)
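
Putting the rank() and FILTER ideas together for the original requirement
(top N slices plus an "Others" slice, with percentages), a rough sketch
(untested) against the stats_archive table from the original post:

WITH ranked AS (
    SELECT country_name,
           COUNT(*) AS cnt,
           RANK() OVER (ORDER BY COUNT(*) DESC) AS rnk
    FROM stats_archive
    WHERE id = 400
    GROUP BY country_name
)
SELECT CASE WHEN rnk <= 8 THEN country_name ELSE 'Others' END AS country_name,
       SUM(cnt) AS cnt,
       ROUND(100.0 * SUM(cnt) / SUM(SUM(cnt)) OVER (), 1) AS perc
FROM ranked
GROUP BY 1, (rnk <= 8)
ORDER BY (rnk <= 8) DESC, 2 DESC;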

--
> Scott Mead
> OpenSCG
> www.openscg.com
>
>>
>>
>> Alex
>>
>>
>


-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] converting in() clause into a with prefix?

2015-10-16 Thread dinesh kumar
On Fri, Oct 16, 2015 at 11:18 AM, Benjamin Smith 
wrote:

> I have a horribly-performing query similar to below, and I'd like to
> convert
> it to use a "WITH mytable as ( ... ) " without having to re-architect my
> code.
> For some reason, using a WITH prefix seems to generally work much faster
> than
> IN() sub clause even allowing identical results. (runs in 1/4th the time)
>
> Is there a PG native function that can convert the listing format of in()
> clause to row-level results from a WITH prefix? I see the array* functions
> but
> they seem to work with arrays like "array[1,2,3]" and unnest seems to drill
> right through nested arrays and flattens every single element to a new row,
> regardless of depth.  EG: the following two lines are equivalent:
>
> select unnest(array[1,2,2,3]);
> select unnest(array[array[1,2],array[2,3]]);
>
> I'd expect the latter to put out two rows as
>
> 1, 2
> 2, 3
>
> Thanks for your input, clarifying pseudo code examples below (PHP). We're
> running 9.4.4 on CentOS 6.
>
> Ben
>
>
> // DESIRED END RESULT PSUEDO CODE
> $query = "
> WITH mytable AS
> (
> unnest(". $in .", school_id, building_id)
> )
> SELECT
> id,
> name
> FROM mytable
> JOIN classes ON
> (
> mytable.school_id = classes.school_id
> AND mytable.building_id = classes.building_id
> )" ;
>
>
>
Ignore this approach if you have already tried something like the below.

PREPARE stmt(record[]) AS WITH incla AS (
SELECT * FROM (SELECT UNNEST(ARRAY[$1])) f
)
SELECT * FROM incla WHERE (1,2) IN (unnest);

EXECUTE stmt(ARRAY[(1,2), (2,1)]);
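
Another option on 9.4: unnest() accepts multiple parallel arrays in the
FROM clause and yields proper multi-column rows. A sketch (untested; the id
values are just a small sample from your list):

WITH mytable(school_id, building_id) AS
(
    SELECT * FROM unnest(ARRAY[291, 291, 200], ARRAY[189, 192, 455])
)
SELECT classes.id, classes.name
FROM mytable
JOIN classes ON classes.school_id = mytable.school_id
            AND classes.building_id = mytable.building_id;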




> // CURRENT CODE EXAMPLE (PHP)
> $query = "
> SELECT
> id,
> name
> FROM classes
> WHERE
> (classes.school_id, classes.building_id) IN (" . $in . ")";
>
>
> // EXAMPLE RESULT (small list)
> SELECT
> id,
> name
> FROM classes
> WHERE
> (classes.school_id, classes.building_id) IN ((291,189),(291,192),
> (291,191),(291,195),(291,198),(291,197),(291,188),(291,184),(291,199),
> (291,193),(291,185),(291,194),(291,-2),(291,186),(291,183),(291,187),
> (291,200),(291,190),(291,196),(291,202),(200,455),(200,433),(200,442),
> (200,440),(200,445),(200,438),(200,437),(200,450),(200,452),(200,459),
> (200,456),(200,432),(200,441),(200,460),(200,446),(200,447),(200,448),
> (200,449),(200,434),(200,443),(200,-2),(200,444),(200,457),(200,458),
> (200,451),(200,454),(200,453),(246,-1),(246,190),(246,186),(246,188),
> (246,195),(246,196),(246,197),(246,192),(246,-2),(246,184),(246,189),
> (246,193),(246,194),(246,198),(246,191),(246,199),(246,187),(246,185),
> (246,183),(246,200),(63,-1),(63,269),(63,267),(63,264),(63,258),(63,126),
> (63,270),(63,262),(63,127),(63,134),(63,263),(63,271),(63,272),(63,265),
> (63,-2),(63,259),(63,260),(63,266),(9,-1),(9,283),(9,192),(9,266),(9,276),
> (9,313),(9,294),(9,311),(9,298),(9,299),(9,300),(9,312),(9,310),(9,263),
> (9,272),(9,305),(9,314),(9,152),(9,284),(9,277),(9,260),(9,271),(9,278),
> (9,315),(9,287),(9,289),(9,316),(9,288),(9,290),(9,259),(9,258),(9,269),
> (9,279),(9,265),(9,273),(9,280),(9,64),(9,264),(9,281),(9,317),(9,304),
> (9,303),(9,322),(9,267),(9,282),(9,318),(9,306),(9,295),(9,319),(9,301),
>
> (9,302),(9,43),(9,268),(9,275),(9,-2),(9,262),(9,296),(9,297),(9,320),(9,286),
> (9,274),(9,307),(9,308),(9,321),(9,270),(9,261),(9,309),(9,292),(9,293),
>
> (9,285),(9,291),(94,-1),(94,280),(94,285),(94,268),(94,265),(94,278),(94,283),
> (94,287),(94,288),(94,289),(94,284),(94,281),(94,110),(94,262),(94,259),
> (94,273),(94,270),(94,266),(94,258),(94,269),(94,274),(94,263),(94,290),
> (94,272),(94,264),(94,260),(94,279),(94,261),(94,286),(94,275),(94,277),
> (94,291),(94,109),(94,-2),(94,257),(111,-1),(111,452),(111,486),(111,404),
> (111,456),(111,455),(111,485),(111,484),(111,483),(111,457),(111,441),
> (111,468),(111,430),(111,442),(111,446),(111,490),(111,440),(111,466),
> (111,491),(111,449),(111,454),(111,461),(111,492),(111,469),(111,465),
> (111,458),(111,426),(111,443),(111,493),(111,488),(111,475),(111,481),
> (111,482),(111,474),(111,477),(111,478),(111,476),(111,479),(111,480),
> (111,471),(111,463),(111,450),(111,470),(111,494),(111,447),(111,448),
> (111,495),(111,-2),(111,473),(111,496),(111,429),(111,487),(111,497),
> (111,167),(111,444),(111,451),(111,431),(111,445),(111,166),(111,453),
> (111,467),(111,462),(111,464),(111,472),(111,489),(334,-1),(334,188),
> (334,189),(334,203),(334,201),(334,200),(334,193),(334,192),(334,191),
> (334,202),(334,185),(334,190),(334,204),(334,186),(334,187),(334,197),
> (334,199),(334,205),(334,184),(334,194),(334,-2),(334,196),(334,183),
> (334,195),(334,206),(334,198),(334,207),(201,-1),(201,445),(201,442),
> (201,446),(201,468),(201,458),(201,455),(201,456),(201,469),(201,454),
> (201,453),(201,466),(201,474),(201,443),(201,441),(201,440),(201,447),
> (201,470),(201,460),(201,459),(201,461),(201,471),(201,465),(201,463),
> (201,448),(201,451),(201,472),(201,457),(201,-2),(201,462),(201,46

Re: [GENERAL] PSQL Tools

2015-10-18 Thread dinesh kumar
On Sun, Oct 18, 2015 at 7:04 AM,  wrote:

> Hello
>
> Is anyone aware of any tools like TOAD that are available for Postgresql?
>
>
pgAdmin fits here.


> Regards
>
> John Wiencek
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] function null composite behavior

2015-10-26 Thread dinesh kumar
On Mon, Oct 26, 2015 at 3:34 PM, Rikard Pavelic  wrote:

> On Sun, 25 Oct 2015 22:31:03 +0100
> Rikard Pavelic  wrote:
>
> > I assume there is no way to get sql like result from plpgsql
> > function?
>


> I should try harder ;)
>
> Managed to get it working using array and array_agg.
>
> Yeah, that works.

Maybe this is what you are looking for.


> Regards,
> Rikard
>
> --
> Rikard Pavelic
> https://dsl-platform.com/
> http://templater.info/
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] Query regarding

2015-11-04 Thread dinesh kumar
Hi,

On Wed, Nov 4, 2015 at 1:21 AM, M Tarkeshwar Rao <
m.tarkeshwar@ericsson.com> wrote:

> Hi all,
>
> We have started converting some oracle sql scripts to postgres, but are
> facing some issues with creating tables.
>
> I have some common doubts in the create table script, i.e.
>
> ALTER TABLE employee
> DROP PRIMARY KEY CASCADE;
>
> DROP employee CASCADE CONSTRAINTS;
>
> CREATE TABLE employee
> (
>   LABEL_IMP  VARCHAR2(50 BYTE)
> )
> TABLESPACE DATA_TB
> PCTUSED    0
> PCTFREE    10
>

For PCTFREE, we need to use FILLFACTOR, which is the inverse setting:
PCTFREE 10 corresponds roughly to FILLFACTOR 90.
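
For example, the CREATE TABLE above maps roughly to the following in
Postgres (a sketch; it assumes the data_tb tablespace already exists, and
VARCHAR2 becomes varchar):

CREATE TABLE employee
(
    label_imp varchar(50)
)
WITH (fillfactor = 90)
TABLESPACE data_tb;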

> INITRANS   1
> MAXTRANS   255
> STORAGE(
> INITIAL  5M
> NEXT 5M
> MINEXTENTS   1
> MAXEXTENTS   UNLIMITED
> PCTINCREASE  0
> BUFFER_POOL  DEFAULT
>    )
> LOGGING
>

The default is a logged table. We can specify CREATE UNLOGGED TABLE if we do
not want the table's changes written to WAL.


> NOCOMPRESS
>

Postgres supports column-level storage control:

ALTER TABLE ... ALTER COLUMN ... SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }

> NOCACHE
>



> NOPARALLEL
> MONITORING;
>
>
> I am trying to find replacements for the above keywords highlighted in
> BOLD in postgres, but am unable to find them on google.
> Could you please let us know whether these parameters are managed
> internally in postgres, or what the alternatives are.
>
> Regards,
> Tarkeshwar
>
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] SQL conversion tool

2015-11-18 Thread dinesh kumar
On Wed, Nov 18, 2015 at 10:41 AM, Sachin Srivastava  wrote:

> Hi,
>
> Please inform which is the best tool for SQL conversion because I have to
> migration Oracle database into PostgreSQL.
>
>
Pentaho is one tool you should have a look at. Talend also works great.

You can check my blog post
<http://manojadinesh.blogspot.de/2014/12/heterogeneous-database-sync.html>
for native Oracle support.


> Regards,
> SS
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] SQL conversion tool

2015-11-18 Thread dinesh kumar
On Wed, Nov 18, 2015 at 11:24 AM, Thomas Kellerer 
wrote:

> Sachin Srivastava schrieb am 18.11.2015 um 10:41:
>
> > Please inform which is the best tool for SQL conversion because I have
> to migration Oracle database into PostgreSQL.
>
> Ora2Pg works quite well http://ora2pg.darold.net/
>
+1


>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] SQL conversion tool

2015-11-18 Thread dinesh kumar
On Wed, Nov 18, 2015 at 10:58 AM, dinesh kumar 
wrote:

> On Wed, Nov 18, 2015 at 10:41 AM, Sachin Srivastava <
> ssr.teleat...@gmail.com> wrote:
>
>> Hi,
>>
>> Please inform which is the best tool for SQL conversion because I have to
>> migration Oracle database into PostgreSQL.
>>
>>
> Pentaho is the tool you need to have a look. Also, talend works great.
>
> You can check my blog
> <http://manojadinesh.blogspot.de/2014/12/heterogeneous-database-sync.html>
> post for native Oracle support.
>
>

My bad. I thought the question was about data migration; for data migration
the above tools work.

For SQL conversion, Ora2Pg is the tool we have.


> Regards,
>> SS
>>
>
>
>
> --
>
> Regards,
> Dinesh
> manojadinesh.blogspot.com
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


[GENERAL] Re: [GENERAL] “Loop” in plpgsql Function - PostgreSQL 9.2

2016-01-18 Thread dinesh kumar
Hello,

On Mon, Jan 18, 2016 at 1:37 PM, drum.lu...@gmail.com 
wrote:

> I've created a function that allows me to do a huge update.
>
> But I need to limit this function. I need to do 50k rows (example) and
> then stop it. After that I need to continue from the rows that I've
> stopped... I'll have to modify the call function *select batch_number()* as
> well.
>
> How can I do that? Using for?
>
> The function below is already working, but, the table has 40m rows. And
> it's taking years.
>
>
Do you need to run the function on staging (not production)? I mean, do you
want to run this batch process in a single transaction?

If not: I had a similar problem, where I needed to implement a function
that we could run in multiple sessions. I ran this function on one of the
BI servers, where we have around 5 million records.

See this link about the implementation details.

If your question was about "using loops", then please ignore my comments.

FUNCTION:
>
> CREATE or REPLACE FUNCTION batch_number()
> RETURNS INTEGER AS $$
> declare
>batch_num integer;
>offset_num integer;begin
> offset_num = 0;
> batch_num = 1;
>
> while (select true from gorfs.nfs_data where batch_number is null limit 
> 1) loop
> with ids(id) as
> (
> select
> file_id
> from
> gorfs.nfs_data
> order by
> file_id
> offset offset_num
> limit 1000
> )
> update
> gorfs.nfs_data
> set
> batch_number = batch_num
> from ids
> where file_id = ids.id;
>
> offset_num = offset_num + 1000;
> batch_num = batch_num + 1;
> end loop;
>
> return batch_num;end
> $$ language 'plpgsql';
>
>
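
As a rough sketch (untested) of the chunked version: key the loop on
batch_number IS NULL instead of OFFSET, and stop after max_rows rows, so a
later call resumes where the previous one left off.

CREATE or REPLACE FUNCTION batch_number_chunk(max_rows integer)
RETURNS INTEGER AS $$
declare
   batch_num integer;
   done integer;
begin
    done = 0;

    -- resume numbering after the last assigned batch
    select coalesce(max(batch_number), 0) + 1 into batch_num
    from gorfs.nfs_data;

    while done < max_rows loop
        with ids(id) as
        (
            select file_id
            from gorfs.nfs_data
            where batch_number is null
            order by file_id
            limit 1000
        )
        update gorfs.nfs_data
        set batch_number = batch_num
        from ids
        where file_id = ids.id;

        exit when not found;  -- nothing left to number

        done = done + 1000;
        batch_num = batch_num + 1;
    end loop;

    return batch_num;
end
$$ language 'plpgsql';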


-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] sync the data's from catalog table

2017-11-13 Thread Dinesh kumar
Hi,

Whenever the postgres user tries to modify a user account's password column
in the pg_authid table, we want to maintain a trigger on the catalog table
(pg_authid) that pops up "the password column has been restricted and must
not be modified". Is the above scenario possible in postgres? If it is,
please guide me on how to proceed. And one more question: can we restrict
the postgres user from modifying the pg_authid table? Please share your
thoughts on it.

Thanks


On Mon, Nov 13, 2017 at 2:12 PM, Laurenz Albe 
wrote:

> Dinesh kumar wrote:
> > How can I sync the data's from pg_authid to manually created table (user
> table) whenever the update or insert happens on pg_authid table.
>
> You cannot do this, because you cannot define triggers on catalog tables.
>
> The question is:
> Why do you want to do this? What are you trying to achieve?
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] Insert query hangs what could be the reason

2014-06-19 Thread dinesh kumar
On Thu, Jun 19, 2014 at 1:59 PM, M Tarkeshwar Rao <
m.tarkeshwar@ericsson.com> wrote:

> Hi,
>
> Insert query hangs what could be the reason. Is there any way to find out?
> Any timeout feature there with query which can be set at client or server
> end?
>
>
It might be due to concurrent primary key/unique key modifications. Try
enabling log_lock_waits in postgresql.conf, which gives you more information
about lock waits.
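
Regarding the timeout question: yes, statement_timeout cancels any
statement that runs longer than the given time, and it can be set per
session, per role, or server-wide. For example (app_user is a hypothetical
role name):

SET statement_timeout = '30s';
ALTER ROLE app_user SET statement_timeout = '30s';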

Regards,
Dinesh
manojadinesh.blogspot.com


> Regards
> Tarkeshwar
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] BAKUP ISSUE

2014-07-09 Thread dinesh kumar
Hi,

On Wed, Jul 9, 2014 at 2:43 PM, Ramesh T wrote:
>
>> The problem is when I run pg_stop_backup(), it returns an error in
>> this way:
>>
>> postgres=# select pg_stop_backup();
>>
>>
>> NOTICE:  pg_stop_backup cleanup done, waiting for required WAL segments
>> to be archived
>> WARNING:  pg_stop_backup still waiting for all required WAL segments to
>> be archived (60 seconds elapsed)
>> HINT:  Check that your archive_command is executing properly.
>> pg_stop_backup can be canceled safely, but the database backup will not
>> be usable without all the WAL segments.
>>
>> I changed archive_command in postgresql.conf to archive_command = 'copy
>> "%p" "C:\Program Files\PostgreSQL\ramesh %f"' and it still returns the
>> same error..
>>
>
I believe you have to write the archive_command on Windows like below,
doubling the backslashes, due to the escape-sequence behavior of quoted
strings in postgresql.conf:

archive_command = 'copy "%p" "C:\\Program Files\\PostgreSQL\\ramesh\\%f"'

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] postgresql93-9.3.5: deadlock when updating parent table expected?

2015-02-18 Thread dinesh kumar
Hi,

If you feel FOR UPDATE is taking too much time, then I believe we can solve
this kind of issue using advisory locks.
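
A minimal sketch (untested), assuming a file table with a volume_id column
as in the reported schema (the name column is made up for illustration):
take a transaction-scoped advisory lock on the volume id before inserting,
so concurrent inserts for the same volume are serialized without touching
the parent row.

BEGIN;
SELECT pg_advisory_xact_lock(42);  -- 42 = the volume_id being inserted
INSERT INTO file (volume_id, name) VALUES (42, 'example');
COMMIT;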

Regards,
Dinesh
manojadinesh.blogspot.com


On Wed, Feb 18, 2015 at 10:45 AM, Bill Moran 
wrote:

> On Wed, 18 Feb 2015 18:30:09 +
> Dmitry O Litvintsev  wrote:
> >
> > Yes, deadlock occurs when there are multiple processes insert
> > concurrently into file table with the same volume id field.
> > I used "sometimes"  as opposed to "all the time".
>
> I resonded in that way since I frequently hear people complaining,
> "we're seeing all kinds of deadlocks" as if the whole world is
> collapsing under the deadlocks, but when pressed for more information
> it turns out they're only seeing a few deadlocks per hour when the
> system is under the heaviest load -- that scenario is hardly
> unexpected. As a result, having more detailed information than
> just "sometimes" helps to understand what's really going on.
>
> > I think you advise to retry transaction or add select for update prior
> > to insert. I will pursue this (together with upgrade to 9.3.6 suggested
> by
> > Alvaro).
>
> The nice thing about a retry strategy is that it always works.
> The problem with a retry strategy is that it's easy to do wrong
> (i.e. it may be more than just the transaction that needs to
> restart ... depending on what data has changed, calculations may
> need to be redone, the user requeried for certain information,
> etc).
>
> The problem with the SELECT ... FOR UPDATE is that it's a bit
> slower, and can be significantly slower unders some circumstances,
> but it's easier to implement correctly.
>
> The good news form Alvaro is that this is probably happening more
> frequently than necessary because of the bug he mentioned ... so
> upgrading may cause the problem to happen infrequently enough that
> you don't really care about it. The solutions I suggest are still
> relevent, they just might not be as immediately important.
>
> --
> Bill Moran
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Fwd: Data corruption after restarting replica

2015-02-18 Thread dinesh kumar
Hi,

On Mon, Feb 16, 2015 at 2:44 AM, Novák, Petr  wrote:

> Hello,
>
> sorry for posting to second list, but as I've received  no reply
> there, I'm trying my luck here.
>
> Thanks
> Petr
>
>
> -- Forwarded message --
> From: Novák, Petr 
> Date: Tue, Feb 10, 2015 at 12:49 PM
> Subject: Data corruption after restarting replica
> To: pgsql-b...@postgresql.org
>
>
> Hi all,
>
> we're experiencing data corruption after switching streamed replica to
> primary.
> This is not the first time I've encountered this issue, so I'l try to
> describe it in more detail.
>
> For this particular cluster we have 6 servers in two datacenters (3 in
> each). There are two instances running on each server, each with its
> own port and datadir. On the first two servers in each datacenter one
> instance is primary and the other is replica for the primary from the
> other server. Third server holds two offsite replicas from the other
> datacenter (for DR purposes)
>
> Each replica was set up by taking pg_basebackup from primary
> (pg_basebackup -h  -p 5430 -D /data2/basebackup -P -v -U
>  -x -c fast). Then directories from initdb were replaced with
> the ones from basebackup (only the configuration files remained) and
> the replica started and was successfully connected to primary. It was
> running with no problem keeping up with the primary. We were
> experiencing some connection problem between the two datacenters, but
> replication didn't break.
>
> Then we needed to take one datacenter offline due to hardware
> maintenance. So I've switched the applications down, verified that no
> more clients were connected to primary, then shut the primary down and
> restarted replica without recovery.conf and the application were
> started using the new db with no problem. Other replica even
> successfully reconnected to this new primary.
>
>
Before restarting the replica, did you make sure that all master
transactions had been applied on the replica node?
May we know why you restarted the replica without recovery.conf? Did you
want to maintain the same timeline for the xlogs, or was there some other
specific reason?

Regards,
Dinesh
manojadinesh.blogspot.com


> Few hours from the switch lines appeared in the server log (which
> didn't appear before), indicating a corruption:
>
> ERROR:  index "account_username_key" contains unexpected zero page at
> block 1112135
> ERROR:  right sibling's left-link doesn't match: block 476354 links to
> 1062443 instead of expected 250322 in index "account_pkey"
>
> ..and many more reporting corruption in several other indexes.
>
> The issue was resolved by creating new indexes and dropping the
> affected ones, although there were already some duplicities in the
> data, that has to be resolved, as some of the indexes were unique.
>
> This particular case uses Postgres 9.1.14 on both primary and replica.
> But I've experienced similar behavior on 9.2.9. OS Centos 6.6 in all
> cases. This may mean, that there can be something wrong with our
> configuration or the replication setup steps, but I've set up another
> instance using the same steps with no problem.
>
> Fsync related setting are at their defaults. Data directories are on
> RAID10 arrays, with BBUs. Filesystem is ext4 mounted with nobarrier
> option.
>
> Database is fairly large ~120GB with several 50mil+ tables, lots of
> indexes and FK constraints. It is mostly queried,
> updates/inserts/deletes are only several rows/s.
>
> Any help will be appreciated.
>
> Petr Novak
>
> System Engineer
> Avast s.r.o.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] pg_Restore

2013-01-21 Thread dinesh kumar
Hi Bhanu,

Yes, the quoted suggestion below is a faster approach to follow.

I don't know if that helps, but have you tried creating a template database
> and doing DROP DATABASE xxx; CREATE DATABASE xxx TEMPLATE mytemplate;
> instead of restoring a dump every time?
>
> Maybe that is faster.
>
>
If you are taking the dump from one cluster and restoring it into another
cluster, then make sure your pg_restore uses the parallel option "-j",
follow the parameters Raghav mentioned, and tune wal_buffers to a value of
around 32 to 64 MB. Also, if possible, keep your dump file on a different
partition than PGDATA, which can improve the I/O balance.
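
For example, a parallel restore from a custom-format dump might look like
this (the path and database name are hypothetical):

pg_restore -j 4 -d mydb /mnt/other_disk/mydb.dump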

Thanks.

Best Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] WARNING: pgstat wait timeout

2013-02-04 Thread dinesh kumar
Hi,

As of now, I have found the following cases where we can expect this kind
of WARNING message in pg_log.

Case 1 { Huge I/O }

==

When the postgresql autovacuum process is not able to get the required I/O
to write the statistics to "stats_temp_location", we can get this kind of
WARNING message. As discussed, the huge I/O may be caused by a checkpoint
occurring on the database.


How to log checkpoint information

===

1) Edit the postgresql.conf file to set log_checkpoints = on

2) SELECT pg_reload_conf();


Case 2 {Invalid stats_temp_location}

==

When "stats_temp_location" is set to an invalid path, we can also expect
this kind of WARNING message. If you want to change this location to some
other place, we need to follow the approach below.


1) Edit the postgresql.conf file to set stats_temp_location = ''

2) SELECT pg_reload_conf();


Case 3 {Invalid Localhost IP}

==

There might be a chance that we have an invalid localhost IP. Please check
the localhost entries in the hosts file and rectify anything that is wrong.

Once we make any changes in this file, we need to restart the PostgreSQL
cluster for them to take effect on the autovacuum worker processes.



I hope someone will add more on this.


Regards,

Dinesh Kumar

manojadinesh.blogspot.com





On Mon, Feb 4, 2013 at 5:24 PM, Jake Stride  wrote:

> I have had some issues with a database on EC2 and I have restored it to a
> new instance. When vacuuming the database I am getting the following in the
> logs;
>
> WARNING:  pgstat wait timeout
>
> Is this normal/acceptable?
>
> Thanks
>


Re: [GENERAL] help me to clear postgres problem

2013-03-26 Thread dinesh kumar
IIRC, a service failure creates event log entries in Windows. You can also
check the BitRock installer log files in the %TEMP% location.

Regards,
Dinesh
manojadinesh.blogspot.com

On Mon, Mar 25, 2013 at 5:05 PM, jayaram s <123jaya...@gmail.com> wrote:

> Hello
> I have installed PostgreSQL 8.4.1 in my PC. For the requirement of data
> migration I again want to install "PostgreSQL enterprise DB  9.2".
> I couldn't install it because
> I have selected the option "postgresql compatible" in "configuration
> mode". So the prompt wants me to enter a "password". I entered my existing
> postgres password "postgres", but I couldn't install. An error message was
> displayed: "service user account 'postgres' could not be created". Please
> help me to clear the problem.
>
> --
> With Regards
>
> Jayaram
>
>


[GENERAL] PostgreSQL registry entries for apt-get/yum/rpm install

2013-04-15 Thread dinesh kumar
Dear Sirs,

Apologizes, if this is a duplicate question.

I would like to request your valuable input on this. I would like to know
about the PostgreSQL registry entries when we install it through
apt-get/yum/rpm. I mean, when we install the EnterpriseDB PostgreSQL
one-click installer, it creates a registry file "/etc/postgres-reg.ini" with
entries like "Port, Data directory, Superuser, etc.". Does an
apt-get/yum/rpm based postgresql installation create the same kind of
registry file? If so, would you mind sharing the details about the location
of those files?

Thanks in advance.

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] PostgreSQL registry entries for apt-get/yum/rpm install

2013-05-02 Thread dinesh kumar
Thank you Jasen/Karsten for your guidance.

Best Regards,
Dinesh
manojadinesh.blogspot.com

On Sat, Apr 20, 2013 at 3:43 PM, Jasen Betts  wrote:

> On 2013-04-15, dinesh kumar  wrote:
> >
> > Dear Sirs,
> >
> > Apologizes, if this is a duplicate question.
> >
> > Would like to request you to share your valuable inputs on this. I would
> > like to know the PostgreSQL registry entries when we install it through
> > apt-get/yum/rpm.
>
> yes, the packaged versions of postgres are recorded as being present
> but this is not in a public data store like the windows registry.
>
> See the documentation for rpm (or dpkg etc, as apropriate)
> for how to test for the presence of postgresql.
>
> --
> ⚂⚃ 100% natural
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Position() Bug ? In PostgreSQL 9.2

2013-05-02 Thread dinesh kumar
Hello Team,

I would like to know whether the following behavior is a BUG or expected
behavior. If this is a duplicate case, then kindly ignore it.

postgres=# SELECT version();
                           version
--------------------------------------------------------------
 PostgreSQL 9.2.3, compiled by Visual C++ build 1600, 32-bit
(1 row)


postgres=# select position('P' in 'PostgreSQL');
 position
--
1
(1 row)


postgres=# select position('' in 'PostgreSQL'); -- position() with an empty
substring is returning 1
 position
--
1
(1 row)

Kindly let me know if I am missing anything here.

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] Position() Bug ? In PostgreSQL 9.2

2013-05-02 Thread dinesh kumar
Hi Tom,

Thank you very much for the clarification.

Let me add an empty-string validation before passing the value to
position() from the API.

Thank you once again.

Regards,
Dinesh
manojadinesh.blogspot.com
On 2 May 2013 19:19, "Tom Lane"  wrote:

> dinesh kumar  writes:
> > postgres=# select position('' in 'PostgreSQL'); *// position(Substring as
> > an empty string) is returning 1.*
> >  position
> > --
> > 1
> > (1 row)
>
> This is correct according to the SQL standard:
>
> <position expression> determines the first position, if any, at which
> one string, S1, occurs within another, S2. If S1 is of length zero, then
> it occurs at position 1 (one) for any value of S2. If S1 does not occur
> in S2, then zero is returned.
>
> regards, tom lane
>


Re: [GENERAL] Postgres 8.4 stopped working---Please urgent help needed

2013-06-09 Thread dinesh kumar
On Sun, Jun 9, 2013 at 3:34 AM,  wrote:

>   Hi All,
>
>
>
> I have a problem with PostgreSQL 8.4. It was working perfectly fine but
> after the restart (within a couple of minutes)  of my system whenever I
> login to PostgreSQL 8.4 (localhost:5432) it gives error as under:
>
>
>
> Server doesn't listen
>
> The server doesn't accept connections: the connection library reports
>
> could not connect to server: Connection refused (0x274D/10061) Is the
> server running on host "localhost" (::1) and accepting TCP/IP connections
> on port 5432? could not connect to server: Connection refused
> (0x274D/10061) Is the server running on host "localhost" (127.0.0.1)
> and accepting TCP/IP connections on port 5432?
>
> If you encounter this message, please check if the server you're trying to
> contact is actually running PostgreSQL on the given port. Test if you have
> network connectivity from your client to the server host using ping or
> equivalent tools. Is your network / VPN / SSH tunnel / firewall configured
> correctly?
>
> For security reasons, PostgreSQL does not listen on all available IP
> addresses on the server machine initially. In order to access the server
> over the network, you need to enable listening on the address first.
>
> For PostgreSQL servers starting with version 8.0, this is controlled using
> the "listen_addresses" parameter in the postgresql.conf file. Here, you can
> enter a list of IP addresses the server should listen on, or simply use '*'
> to listen on all available IP addresses. For earlier servers (Version 7.3
> or 7.4), you'll need to set the "tcpip_socket" parameter to 'true'.
>
> You can use the postgresql.conf editor that is built into pgAdmin III to
> edit the postgresql.conf configuration file. After changing this file, you
> need to restart the server process to make the setting effective.
>
> If you double-checked your configuration but still get this error message,
> it's still unlikely that you encounter a fatal PostgreSQL misbehaviour. You
> probably have some low level network connectivity problems (e.g. firewall
> configuration). Please check this thoroughly before reporting a bug to the
> PostgreSQL community.
>
>
>
>
>
> I checked the pg_log file and it shows following:
>
> 2013-06-08 20:16:58 CEST LOG:  database system was shut down at 2013-06-08
> 20:10:02 CEST
>
> 2013-06-08 20:16:58 CEST FATAL:  the database system is starting up
>
> 2013-06-08 20:16:58 CEST LOG:  database system is ready to accept
> connections
>
> 2013-06-08 20:16:58 CEST LOG:  autovacuum launcher started
>
>
>
> I am unable to make out what is going wrong. Then I try to restart the
> Postgres service but it does not get started and gives error continuously.
> The error is:
>
> Windows could not start the PostgreSQL Server 8.4 service on Local
> Computer.
>
> Error 1053: The service did not respond to the start or control request in
> a timely fashion.
>
>
>

As per the logs, it seems the DB server has started, and the problem might
be due to an invalid port number, the listen_addresses setting, or your
firewall blocking the connections. Regarding the service failure in Windows,
you need to check the Windows event logs in the Event Viewer.

Best Regards,
Dinesh
manojadinesh.blogspot.com


[GENERAL] PostgreSQL 9.3 pg_dump issue

2013-06-25 Thread dinesh kumar
Hi All,

Greetings for the day.

Recently, I installed PostgreSQL 9.3 on my local Ubuntu machine to test the
new features in PG 9.3, and I created a few test tables as well. Now I am
trying to take a dump with the 9.3 binaries and am getting the below error.

postgres@test-ubuntu:/opt/PostgreSQL/9.3/bin$ ./pg_dump -f /tmp/pg_9.3.sql
-s -d postgres -p 5434
Password:
cannot duplicate null pointer (internal error)

postgres@test-ubuntu:/opt/PostgreSQL/9.3/bin$ ./psql -p 5434 -U postgres
postgres
Password for user postgres:
psql.bin (9.3beta1)
Type "help" for help.

postgres=# SELECT VERSION();

version
---
 PostgreSQL 9.3beta1 on i686-pc-linux-gnu, compiled by gcc (GCC) 4.1.2
20080704 (Red Hat 4.1.2-52), 32-bit
(1 row)

Then I tried to take the dump with pg_dumpall, and the result is as below.

postgres@test-ubuntu:/opt/PostgreSQL/9.3/bin$ ./pg_dumpall -f
/tmp/pg_9.3.sql -p 5434
Password:
Password:
cannot duplicate null pointer (internal error)
pg_dumpall: pg_dump failed on database "postgres", exiting
postgres@test-ubuntu:/opt/PostgreSQL/9.3/bin$

Below are the log entries.
==
2013-06-25 18:24:05 IST LOG:  duration: 0.017 ms  statement: SET
search_path = public, pg_catalog
2013-06-25 18:24:05 IST LOG:  duration: 0.305 ms  statement: SELECT
sequence_name, start_value, increment_by, CASE WHEN increment_by > 0 AND
max_value = 9223372036854775807 THEN NULL  WHEN increment_by < 0 AND
max_value = -1 THEN NULL  ELSE max_value END AS max_value, CASE WHEN
increment_by > 0 AND min_value = 1 THEN NULL  WHEN increment_by < 0 AND
min_value = -9223372036854775807 THEN NULL  ELSE min_value END AS
min_value, cache_value, is_cycled FROM seq
2013-06-25 18:24:05 IST LOG:  duration: 0.117 ms  statement: SELECT
attname, attacl FROM pg_catalog.pg_attribute WHERE attrelid = '16438' AND
NOT attisdropped AND attacl IS NOT NULL ORDER BY attnum
2013-06-25 18:24:05 IST LOG:  duration: 0.018 ms  statement: SET
search_path = public, pg_catalog
2013-06-25 18:24:05 IST LOG:  duration: 0.075 ms  statement: SELECT
last_value, is_called FROM seq
2013-06-25 18:24:05 IST LOG:  unexpected EOF on client connection with an
open transaction
2013-06-25 18:24:05 IST DEBUG:  unexpected EOF on client connection
2013-06-25 18:24:14 IST DEBUG:  autovacuum: processing database "postgres"

I would like to request your input on this error. Is this a bug in the 9.3
pg_dump binary?

Thanks in advance.

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] Postgres HA

2017-02-23 Thread dinesh kumar
Hi Dylan,

On Thu, Feb 23, 2017 at 4:28 AM, Dylan Luong 
wrote:

> Hi
>
>
>
> I am a DBA at the University of South Australia. For PostgreSQL High
> Availability, we currently have setup a Master/Slave across two datacenters
> using PostgreSQL (WAL) streaming replication. We use an LTM (load balancer)
> server that sits between the application servers and the PostgreSQL server
> that directs connections to the Master (and the Slave if failover occurs).
> We also have watchdog processes on the PostgreSQL servers that polls the
> LTM to determine who is Master and perform automatic failover if required.
> I am looking at options to improve our high availability.
>
> I would like to know how other organizations in different industries
> (other than education) setup High Availability on their PostgreSQL
> environments.
>

Below is the approach we followed to achieve maximum HA with async streaming
replication.

1. Create an instance "I" with "N" nodes.

2. Set up replication among the "N" nodes so that "N-1" nodes point to 1
master.

3. Configure 2 physical replication slots for each of the "N-1" nodes (see
the sketch after this list).

4. One replication slot is for receiving the archives, and one is for doing
the replication. (We suspected the recovery process is slower than the
receiver process.)

5. Configure a parallel WAL uploader (a customized program) from the master
to the wal-backup server. (We needed this for PITR.)

6. Implement a quorum on the "N-1" slaves so that one of them becomes master
in the worst case. (This guarantees data availability as per our RTO
settings.)

7. A watchdog process updates the pgbouncer configuration to point to the
latest master.

8. Use Consul service discovery for identifying the master and for
master/slave heartbeats.

9. Once failover is completed, the "N-1" nodes (including the old master)
follow the new master by doing a fresh refresh. (We are planning to use
pg_rewind.)

10. Cover the split-brain problems by removing the service discovery keys
from Consul. (It's delivering promising results, but we need to spend more
time on this.)

The approach we took is similar to yours, but we needed to handle multiple
slaves rather than a single one. In case you are looking for open-source
tools to implement on your production servers, consider adding repmgr, pgHA,
and PAF to your list. These open-source tools are great and deliver the
results as demonstrated.
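
For reference, the physical replication slots from step 3 are created on
the master and wired up on each standby roughly like this (the slot name is
hypothetical):

SELECT pg_create_physical_replication_slot('standby1_slot');

-- and in the standby's recovery.conf:
-- primary_slot_name = 'standby1_slot'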


> What  tools do you use. Are they commercial licensed products? How is the
> architecture setup and how do you do recovery of new slave.
>
> Your information is greatly appreciated.
>
>
>
> Regards
>
> Dylan
>
>
>
> *Dylan Luong*
>
> *Information Strategy & Technology Services*
>
> University of South Australia
>
> A Building, Room E2-07, Mawson Lakes Campus
>
> MAWSON LAKES
>
> South Australia  5095
>
>
>
> Email: dylan.lu...@unisa.edu.au
>
> Phone:+61 8 83023629 <+61%208%208302%203629>
>
> Fax: +61 8 83023577 <+61%208%208302%203577>
>
> WWW:http://www.unisa.edu.au
>
>
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


[GENERAL] sync the data's from catalog table

2017-11-12 Thread Dinesh kumar
Hi Team,

How can I sync the data from pg_authid to a manually created table (a user
table) whenever an update or insert happens on the pg_authid table?

Thanks

Dinesh Kumar


Re: [GENERAL] when do I analyze after concurrent index creation?

2013-10-18 Thread dinesh kumar
Hi,

When we create an index, I believe the postgres engine itself updates its
catalog about the index's availability.

"ANALYZE" helps you find the right plan according to the number of rows
selected. I don't think "ANALYZE" takes care of updating the index entries.

I might be wrong here; I hope someone will give you better information.

Thanks,
Dinesh
manojadinesh.blogspot.com



On Fri, Oct 18, 2013 at 1:13 AM, AI Rumman  wrote:

> Hi,
>
> I have a very basic question.
> If I create index concurrently, then do I need to analyze the table? If
> yes, when?
> Please let me know.
>
> Thanks.
>
>


Re: [GENERAL] when do I analyze after concurrent index creation?

2013-10-20 Thread dinesh kumar
On Fri, Oct 18, 2013 at 9:12 PM, Tom Lane  wrote:

> dinesh kumar  writes:
> > When we create an index, i believe the postgres engine it self update
> it's
> > catalog about the index availability.
>
> ANALYZE normally collects statistics about the contents of table columns.
> Thus, adding (or removing) an index does not create any reason to
> re-ANALYZE.
>
> However ... if you have an index on an expression (not just a simple
> column value), that cues ANALYZE to collect statistics about the values of
> that expression.  So re-analyzing is useful after creating such an index,
> to give the planner a better idea of when to use that index.
>
>
Thank you Tom.


> Whether you used CREATE INDEX CONCURRENTLY or some other way of creating
> the index doesn't matter at all.
>
> regards, tom lane
>

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] Theory question

2013-11-12 Thread dinesh kumar
On Tue, Nov 12, 2013 at 6:57 AM, Jayadevan M wrote:

> Hi,
> What are the real differences between the bgwriter and checkpointer
> process? Both of them write data from the buffer to the data files, right?
> Is it just a matter of 'when' they write?
>

I believe, "Checkpoint" is one of the responsible duty of bgwriter process.
AFAIK from PG 9.2 onwards, we have another independent process called
"CKPTR process" which independently works. "Bgwriter" simply move the
shared_buffer's pages into local disk on the basis of LRU method. Where as
"CKPTR process" moves all the dirty buffers into local disk, and make a
transaction log sequence, at which all the data files have been updated to
reflect this information.

Best Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] select Xpath is returning values with {}

2013-11-12 Thread dinesh kumar
On Tue, Nov 12, 2013 at 8:46 PM, gajendra s v  wrote:

> Hi All,
>
> How to remove {} from output
>
> I am using below query
>
> select (xpath('//Grading_Automated',(select xmlgrading from km_course_mast
> where id='10'))) from km_course_mast where id='10'
>
> The out is below
>
> "{"
>
> "}"
>
>
The xpath() return type is an array. PG returns the array data enclosed in
"{}". If you want to remove these "{}" braces, then either use
array_to_string(xpath(...), '') or trim the braces from the xpath output.
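
For example, against the query from the post (a sketch, untested):

select array_to_string(
         xpath('//Grading_Automated', xmlgrading), '')
from km_course_mast where id = '10';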

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] pgadmin III query

2013-12-09 Thread Dinesh Kumar
Hi,

On Fri, Dec 6, 2013 at 4:34 PM, Peter Kroon  wrote:

> Thanks, but i need a non command line option.
>
>
We can do this with a function that wraps the SQL queries pgAdmin itself
issues against the database.

=> Log all the queries by setting log_min_duration_statement = 0.
=> Do SELECT pg_reload_conf();
=> Do a refresh on a table in pgAdmin's browser.
=> Collect all the queries it performed from the log.
=> Create a custom function with those queries.

Regards,
Dinesh


>
> 2013/12/6 Ian Lawrence Barwick 
>
>> 2013/12/6 Peter Kroon :
>> > When you click on a table in the "Object browser" you'll see in the "SQL
>> > pane" the sql that is needed to create that table.
>> >
>> > Which function can I call to get that SQL?
>>
>> You can use the pg_dump command line function for this:
>>
>>   pg_dump -s -t name_of_table name_of_database
>>
>> Regards
>>
>> Ian Barwick
>>
>
>


Re: [GENERAL] pgadmin III query

2013-12-09 Thread Dinesh Kumar
Hi Peter,

On Mon, Dec 9, 2013 at 7:03 PM, Peter Kroon  wrote:

> Hi Dinesh,
>
>
> >Get all the queries what it has performed.
>
> How and where?
> When I run "select * from pg_stat_activity" I get the same result with and
> without "log_minduration_statement=0"
>
>
By setting the parameter log_min_duration_statement to 0, postgres will log
all queries in the pg_log file.

I hope the following steps help you with this; make sure you have enabled
the logging_collector.

1. Modify the above parameter on the required postgres cluster.

2. Run SELECT pg_reload_conf(); on the same machine.

3. Go to the pg_log file location, and do tail -f on the current pg_log file.

4. Go to pgAdmin, and refresh any table.

5. Check the tail -f output. There you will find all the SQL queries that
have been executed from pgAdmin.

6. Collect those queries, and make your own custom function in the pl/pgsql
language.

Regards,
Dinesh


> Could you provide a more detailed step by step guide?
>
> Best,
> Peter
>
>
>
> 2013/12/9 Dinesh Kumar 
>
>> Hi,
>>
>> On Fri, Dec 6, 2013 at 4:34 PM, Peter Kroon  wrote:
>>
>>> Thanks, but i need a non command line option.
>>>
>>>
>> We can do this with a function which is having the sql queries of pgAdmin
>> raised against the database.
>>
>> => Log all the queries by enabling "log_minduration_statement=0".
>> => Do SELECT pg_reload_conf();
>> => Do a refresh on a table of pgAdmin's browser.
>> => Get all the queries what it has performed.
>> => Create a custom function with those queries.
>>
>> Regards,
>> Dinesh
>>
>>
>>>
>>> 2013/12/6 Ian Lawrence Barwick 
>>>
>>>> 2013/12/6 Peter Kroon :
>>>> > When you click on a table in the "Object browser" you'll see in the
>>>> "SQL
>>>> > pane" the sql that is needed to create that table.
>>>> >
>>>> > Which function can I call to get that SQL?
>>>>
>>>> You can use the pg_dump command line function for this:
>>>>
>>>>   pg_dump -s -t name_of_table name_of_database
>>>>
>>>> Regards
>>>>
>>>> Ian Barwick
>>>>
>>>
>>>
>>
>


Re: [GENERAL] [pgadmin-support] Lost database

2013-12-09 Thread Dinesh Kumar
Hi Steve,

On Tue, Dec 10, 2013 at 11:08 AM, Steve Erickson
wrote:

>  While prepping a second server today the /var/lib/postgres/ directory
> got deleted.  Yes, everything.  We’re running on Ubuntu 10.4 and Postgres
> 8.4.  We immediately shutdown everything but now need to recover the
> directory tree.  No, the customer did not do any backups – the database was
> just under 1 TB and they didn’t want to take the time.
>
>
>
> We’re looking into using extundelete or ext4magic to try to recover the
> files and, if successful, are there any further steps we need to take to
> bring up the database again?
>
>
>
> If we’re unsuccessful, we do have a filesystem backup from several months
> ago that we can apply and then go through the tedium of trying to fill in
> the missing blanks for those months.  Since there are so many rows, is
> there a tool or procedure we can use to verify the integrity of the data
> once Postgres is back up and running?
>
>
>
> I don’t expect any miracles, but want to be sure there isn’t less
> intensive alternative out there.
>
>
>
> Thanks,
>
>
>
 Steve Erickson
>

I would like to forward this question to the pgsql-general mailing list,
since it's a PostgreSQL-related question.

Regards,
Dinesh


Re: [GENERAL] pgadmin III query

2013-12-09 Thread Dinesh Kumar
Hi Peter,

On Mon, Dec 9, 2013 at 7:52 PM, Peter Kroon  wrote:

> Hi Dinesh,
>
> SELECT pg_reload_conf();
> Did not do the job, I had to restart the server.
> I managed to collect the queries and there are a lot of them to show the
> SQL that is needed to create the given table.
> Does postgresql have any plan on making their own function for this?
>

I am not the right person to answer this question. I hope someone will give
you an answer for this.

I am not sure whether this works for you or not:
http://vibhorkumar.wordpress.com/2010/11/01/pg_get_tabledef-function-in-postgres-plus/

Regards,
Dinesh


> Best,
> Peter
>
>

> 2013/12/9 Dinesh Kumar 
>
>> Hi Peter,
>>
>> On Mon, Dec 9, 2013 at 7:03 PM, Peter Kroon  wrote:
>>
>>> Hi Dinesh,
>>>
>>>
>>> >Get all the queries what it has performed.
>>>
>>> How and where?
>>> When I run "select * from pg_stat_activity" I get the same result with
>>> and without "log_minduration_statement=0"
>>>
>>>
>> By setting this parameter log_min_duration_statement to 0, postgres will
>> log all the queries, in the pg_log file.
>>
>> Hope the following steps helps you on this, and make sure you have
>> enabled the logging_collector.
>>
>> 1. Modify the above parameter on the required postgres cluster.
>>
>> 2. Do SELECT pg_reload_conf(); on the same machine.
>>
>> 3. And go to pg_log file location, and do tail -f current pg_log file.
>>
>> 4. Go to pgadmin, and refresh on any table.
>>
>> 5. Check the tail -f file output. There you will find all the sql
>> queries, which have been executed from pgAdmin.
>>
>> 6. Collect those queries, and make your own custom function with pl/pgsql
>> language.
>>
>> Regards,
>> Dinesh
>>
>>
>>> Could you provide a more detailed step by step guide?
>>>
>>> Best,
>>> Peter
>>>
>>>
>>>
>>> 2013/12/9 Dinesh Kumar 
>>>
>>>> Hi,
>>>>
>>>> On Fri, Dec 6, 2013 at 4:34 PM, Peter Kroon  wrote:
>>>>
>>>>> Thanks, but i need a non command line option.
>>>>>
>>>>>
>>>> We can do this with a function which is having the sql queries of
>>>> pgAdmin raised against the database.
>>>>
>>>> => Log all the queries by enabling "log_minduration_statement=0".
>>>> => Do SELECT pg_reload_conf();
>>>> => Do a refresh on a table of pgAdmin's browser.
>>>> => Get all the queries what it has performed.
>>>> => Create a custom function with those queries.
>>>>
>>>> Regards,
>>>> Dinesh
>>>>
>>>>
>>>>>
>>>>> 2013/12/6 Ian Lawrence Barwick 
>>>>>
>>>>>> 2013/12/6 Peter Kroon :
>>>>>> > When you click on a table in the "Object browser" you'll see in the
>>>>>> "SQL
>>>>>> > pane" the sql that is needed to create that table.
>>>>>> >
>>>>>> > Which function can I call to get that SQL?
>>>>>>
>>>>>> You can use the pg_dump command line function for this:
>>>>>>
>>>>>>   pg_dump -s -t name_of_table name_of_database
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Ian Barwick
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


Re: [GENERAL] [pgadmin-support] Database has been killed after increasing memory

2013-12-23 Thread Dinesh Kumar
Hi,

+Adding the pgsql-general list. Hopefully someone will provide more
information.

On Mon, Dec 23, 2013 at 1:56 PM, kolsze...@gmail.com wrote:

> Hi
>
> I Have problem with "postmaster" processes which unexpectedly consume very
> big amount of memory, about 200MB!!!, and as a consequence whole available
> memory in Linux, and as a consequence Linux kills postmasters processes :(
>
> my configuration:
> 8 Core
> 8 GB RAM
> max_connections = 60
> work_mem = 32MB
> shared_buffers = 2G
>
> After an investigation, I suspect, than some of my queries which contains
> many INNER JOIN's (about 30, and one of them about 60) probably causes
> memory problem.
>
> What are postgres limitations about JOINS in one query?
> How can I estimate memory needed for postgres?
>

Using EXPLAIN (ANALYZE, BUFFERS) on the suspect queries. Keep in mind that
work_mem is a per-sort/per-hash limit rather than a per-query one, so a
single statement with 30-60 joins can allocate many multiples of work_mem.
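
For example (the EXPLAIN options syntax is 9.0+; the query here is just an
illustrative stand-in for one of your own):

EXPLAIN (ANALYZE, BUFFERS)
SELECT relkind, count(*) FROM pg_class GROUP BY relkind;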

> How can I protect my postgres server, when i don't know exactly how
> complicated  queries creates my customer, e.g. in reports?
>

Using the pgBadger/pgFouine tools, you will get a detailed statistical report.


Regards,
Dinesh

>
>
>
>
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/Database-has-been-killed-after-increasing-memory-tp5784404.html
> Sent from the PostgreSQL - pgadmin support mailing list archive at
> Nabble.com.
>
>
> --
> Sent via pgadmin-support mailing list (pgadmin-supp...@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgadmin-support
>


Re: [GENERAL] [pgadmin-support] Database has been killed after increasing memory

2013-12-23 Thread Dinesh Kumar
+Adding postgres general


Hi,


On Mon, Dec 23, 2013 at 9:34 PM, Krzysztof Olszewski wrote:

> Hi
>
> thanx for your answer
>
> "Using EXPLAIN ANALYZE BUFFERS"
> how it helps me with "How can I estimate memory needed for postgres" ?
>

Using this information, I believe we can estimate the memory usage of a
query. See EXPLAIN (ANALYZE, BUFFERS):
http://www.postgresql.org/docs/9.1/static/sql-explain.html

> "Using pgbadger/pgfounie"
> how it helps me with  "How can I protect my postgres server, when i don't
> know exactly ..." ?
>
By configuring the required logging at the server level, we can get complete
statistics using pgBadger. Find a sample report here:
http://dalibo.github.io/pgbadger/samplev4.html

Thanks,
Dinesh

>
>
> 2013/12/23 Dinesh Kumar 
>
>> Hi,
>>
>> +Adding postgresql general list. Hope, some one will provide more
>> information.
>>
>> On Mon, Dec 23, 2013 at 1:56 PM, kolsze...@gmail.com > > wrote:
>>
>>> Hi
>>>
>>> I Have problem with "postmaster" processes which unexpectedly consume
>>> very
>>> big amount of memory, about 200MB!!!, and as a consequence whole
>>> available
>>> memory in Linux, and as a consequence Linux kills postmasters processes
>>> :(
>>>
>>> my configuration:
>>> 8 Core
>>> 8 GB RAM
>>> max_connections = 60
>>> work_mem = 32MB
>>> shared_buffers = 2G
>>>
>>> After an investigation, I suspect, than some of my queries which contains
>>> many INNER JOIN's (about 30, and one of them about 60) probably causes
>>> memory problem.
>>>
>>> What are postgres limitations about JOINS in one query?
>>> How can I estimate memory needed for postgres?
>>>
>>
>> Using EXPLAIN ANALYZE BUFFERS
>>
>>> How can I protect my postgres server, when i don't know exactly how
>>> complicated  queries creates my customer, e.g. in reports?
>>>
>>
>> Using the pgBadger/pgFouine tools, you will get a detailed statistical
>> report.
>>
>>
>> Regards,
>> Dinesh
>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://postgresql.1045698.n5.nabble.com/Database-has-been-killed-after-increasing-memory-tp5784404.html
>>> Sent from the PostgreSQL - pgadmin support mailing list archive at
>>> Nabble.com.
>>>
>>>
>>> --
>>> Sent via pgadmin-support mailing list (pgadmin-supp...@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgadmin-support
>>>
>>
>>
>


Re: [GENERAL] authentication failure

2014-01-03 Thread dinesh kumar
Hi,

On Fri, Jan 3, 2014 at 3:32 PM, Jayadevan M wrote:

> Nope -
> psql -W
> psql: FATAL:  password authentication failed for user "postgres"
>
>
The user's password might have expired. Set the user's local authentication
method to trust in pg_hba.conf, reload the PostgreSQL instance, and check
the "pg_user" view for the password expiry date (the valuntil column). If
the password has expired, alter the user's expiration time.
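
For example (a minimal sketch; run it as a superuser once you can connect):

SELECT usename, valuntil FROM pg_user WHERE usename = 'postgres';

-- if valuntil has passed, extend or clear it:
ALTER USER postgres VALID UNTIL 'infinity';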

Regards,
Dinesh
manojadinesh.blogspot.com


>
> On Fri, Jan 3, 2014 at 2:49 PM, Ashesh Vashi <
> ashesh.va...@enterprisedb.com> wrote:
>
>> Try "psql -W" for prompting the password forcefully.
>>
>>
>> On Fri, Jan 3, 2014 at 2:46 PM, Jayadevan M 
>> wrote:
>>
>>> Hi,
>>> I am trying to login from psql and consistently getting a
>>> "psql: FATAL:  password authentication failed for user "xyz"" for all
>>> users. I am not being prompted for a password at all. I faced a similar
>>> issue sometime ago because there was a .pgpass file and it had wrong
>>> entries. This time there is no .pgpass file.
>>> Any clues? How do I trouble-shoot?
>>> Other possibly relevant info - this is a chrooted environment. I
>>> upgraded the OS recently in this env and faced some issues with /dev/null
>>> /proc etc not being present and so on. Some issues there?
>>> The database itself is running in the same server in the non-chroot
>>> environment. I am also running a python application which uses psycopg2 and
>>> that is working fine.
>>> Regards,
>>> Jayadevan
>>>
>>>
>>
>>
>> --
>> --
>>
>> Thanks & Regards,
>>
>> Ashesh Vashi
>> EnterpriseDB INDIA: Enterprise PostgreSQL 
>> Company
>>
>>
>>
>> *http://www.linkedin.com/in/asheshvashi*
>>
>
>


Re: [GENERAL] authentication failure

2014-01-03 Thread dinesh kumar
On Fri, Jan 3, 2014 at 5:13 PM, Jayadevan M wrote:

> I am able to login from the non-chroot environment. So it is not an issue
> with pg_hba.conf and not an issue of password expiration. Is there a debug
> psql option?
>
> OK.

Have you checked the PGPASSWORD environment variable in the environment you
are trying to log in from? If it is set, psql uses it instead of prompting,
which matches the symptom you describe.
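
A quick check (PGPASSWORD is the standard libpq environment variable):

echo $PGPASSWORD      # should normally be empty
unset PGPASSWORD
psql -U postgres -W   # -W forces a password prompt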

Regards,
Dinesh
manojadinesh.blogspot.com


>
>
> On Fri, Jan 3, 2014 at 5:09 PM, Chris Curvey wrote:
>
>>
>>
>>
>> On Fri, Jan 3, 2014 at 4:16 AM, Jayadevan M 
>> wrote:
>>
>>> Hi,
>>> I am trying to login from psql and consistently getting a
>>> "psql: FATAL:  password authentication failed for user "xyz"" for all
>>> users. I am not being prompted for a password at all. I faced a similar
>>> issue sometime ago because there was a .pgpass file and it had wrong
>>> entries. This time there is no .pgpass file.
>>> Any clues? How do I trouble-shoot?
>>> Other possibly relevant info - this is a chrooted environment. I
>>> upgraded the OS recently in this env and faced some issues with /dev/null
>>> /proc etc not being present and so on. Some issues there?
>>> The database itself is running in the same server in the non-chroot
>>> environment. I am also running a python application which uses psycopg2 and
>>> that is working fine.
>>> Regards,
>>> Jayadevan
>>>
>>
>> Could it be a problem with your pg_hba.conf?  Perhaps password
>> authentication is not enabled there?
>>
>>
>>
>> --
>> I asked the Internet how to train my cat, and the Internet told me to get
>> a dog.
>>
>
>


Re: [GENERAL] break table into portions for writing to separate files

2014-05-01 Thread dinesh kumar
Hi,

Does the below kind of approach work for you? I haven't tested this at
scale, but the idea is to create a PL/pgSQL function that takes three
parameters, "from date", "to date" and "interval", and COPYs one time slice
per loop iteration. Table, column, and path names below are placeholders
for your own:

CREATE OR REPLACE FUNCTION dump_slices(from_ts timestamptz,
                                       to_ts   timestamptz,
                                       step    interval)
RETURNS void AS $$
DECLARE
    slice_start timestamptz := from_ts;
BEGIN
    WHILE slice_start < to_ts LOOP
        -- one server-side COPY per slice, each writing its own CSV file
        EXECUTE format(
            $f$COPY (SELECT * FROM my_table
                      WHERE timestamp_column >= %L
                        AND timestamp_column <  %L)
               TO %L WITH CSV HEADER$f$,
            slice_start,
            LEAST(slice_start + step, to_ts),
            '/tmp/slice_' || to_char(slice_start, 'YYYYMMDD_HH24MISS') || '.csv');
        slice_start := slice_start + step;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
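
Calling it for a day's data in 20-minute slices would then look like this
(note that a server-side COPY ... TO writes files on the database server,
not the client, and requires superuser):

SELECT dump_slices('2014-05-01 00:00', '2014-05-02 00:00', '20 minutes');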

Thanks,
Dinesh
manojadinesh.blogspot.com



On Thu, May 1, 2014 at 11:20 PM, Seb  wrote:

> Hello,
>
> I've been looking for a way to write a table into multiple files, and am
> wondering if there are some clever suggestions.  Say we have a table
> that is too large (several Gb) to write to a file that can be used for
> further analyses in other languages.  The table consists of a timestamp
> field and several numeric fields, with records every 10th of a second.
> It could be meaningfully broken down into subsets of say 20 minutes
> worth of records.  One option is to write a shell script that loops
> through the timestamp, selects the corresponding subset of the table,
> and writes it as a unique file.  However, this would be extremely slow
> because each select takes several hours, and there can be hundreds of
> subsets.  Is there a better way?
>
> Cheers,
>
> --
> Seb
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Monitoring Streaming Replication in 9.2

2014-05-16 Thread dinesh kumar
Hi,

Here is a blog post with a good explanation of this.

If you want the lag in seconds, execute something like this on the standby
(subtracting in this order gives a positive number):

SELECT EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp());
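
One caveat: when no new transactions arrive, pg_last_xact_replay_timestamp()
stands still and the lag appears to grow. A common workaround (a sketch, not
from the blog above) is to report zero whenever the receive and replay
positions match:

SELECT CASE
         WHEN pg_last_xlog_receive_location() = pg_last_xlog_replay_location()
         THEN 0
         ELSE EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp())
       END AS lag_seconds;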


Regards,
Dinesh
manojadinesh.blogspot.com



On Fri, May 16, 2014 at 9:37 PM, J Adams  wrote:

>
> Newb question here. I have streaming replication working with 9.2 and I'm
> using Bucardo's check_postgres.pl to monitor replication. I see that it
> runs this query on the slave:
>
> SELECT pg_last_xlog_receive_location() AS receive,
> pg_last_xlog_replay_location() AS replay
>
> That returns hex, which is then converted to a number in the script.
>
> My question is this: what does that number represent? Is it just the log
> position? If so, how does the log position translate to queries? Does one
> log position = one query? (I did say this was a newb question.)
>
> How do I determine a meaningful alert threshold for that value? Is there a
> reliable way to monitor replication lag in seconds? How do other people
> handle this?
>
>


[GENERAL] Threads With Libpq Issue

2012-08-01 Thread dinesh kumar
Respected All,

This is my first post on pgsql-general. If this is not the place for this
kind of question, please point me to where I should post.

I have a quick question regarding pthreads with the PostgreSQL 9.0 libpq:
I'm hitting a problem when combining the two. The program behaves as
outlined below.

Connection_To_PG()
{
/* Making a connection to PG 9.0 -- one PGconn shared by both threads */
}

void* Independent_Thread1(void *arg)
{
while(1)
{
sleep(5);
/* Doing 1 Insert Operation on Table A over the shared connection */
}
}

void* Independent_Thread2(void *arg)
{
while(1)
{
sleep(5);
/* Doing 1 Insert Operation on Table B over the shared connection */
}
}

int main()
{
pthread_t Ind1, Ind2;
Connection_To_PG();
pthread_create(&Ind1, NULL, &Independent_Thread1, NULL);
pthread_create(&Ind2, NULL, &Independent_Thread2, NULL);
if (pthread_join(Ind1, NULL) == 0)
{
printf("Ind1 is completed");
}
if (pthread_join(Ind2, NULL) == 0)
{
printf("Ind2 is completed");
}
return 0;
}


Problem Description:

When I run the above program, it inserts 2 to 10 records and then hangs.
Sometimes it runs for more than 15 minutes, sometimes only 2 to 3 minutes.
I set the PostgreSQL log level to DEBUG5, and the log also stops showing
progress after 2 to 3 minutes.

Can someone advise me where the mistake is in the above source code, and
why it is not functioning properly? I also get the following messages while
the libpq program runs:

* Message **
message type 0x31 arrived from server while idle
message type 0x32 arrived from server while idle
message type 0x6e arrived from server while idle
message type 0x43 arrived from server while idle
message type 0x5a arrived from server while idle

However, the sample threaded program below (i.e. without libpq) works fine
without any issues.

#include 
#include 
#include 

void* thread1()
{
while(1){
printf("Thread1\n");
}
}

void* thread2()
{
while(1){
printf("Thread2\n");
}
}

int main()
{
int status;
pthread_t tid1,tid2;
pthread_create(&tid1,NULL,&thread1,NULL);
pthread_create(&tid2,NULL,&thread2,NULL);
pthread_join(tid1,NULL);
pthread_join(tid2,NULL);
return 0;
}


Thanks in advance ...

Best Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] Threads With Libpq

2012-08-01 Thread dinesh kumar
Hi Merlin/Alban,

Thank you very much for your guidance. I will either use the asynchronous
libpq functions or a thread mutex to keep the queries synchronized.

Best Regards,
Dinesh
manojadinesh.blogspot.com


On Wed, Aug 1, 2012 at 7:33 PM, Merlin Moncure  wrote:

> On Wed, Aug 1, 2012 at 8:09 AM, Alban Hertroys  wrote:
> > On 1 Aug 2012, at 14:32, dinesh kumar wrote:
> >
> >> Respected All,
> >>
> >> This is my first request/post in PG-Generals. If it is not the place
> for these kind of queries, then please guide me where i need to be.
> >>
> >> I have a quick question regarding "pthread" with PostgreSQL 9.0 Libpq.
> I'm facing a problem with "Pthread" and libpq. Please find the below
> program behavoiur.
> >>
> >> Connection_To_PG()
> >> {
> >> /* Making a connection to PG 9.0 */
> >> }
> >>
> >> void* Independent_Thread1()
> >> {
> >> while(1)
> >> {
> >> sleep(5);
> >> /* Doing 1 Insert Operation on Table A*/
> >> }
> >> }
> >>
> >> void* Independent_Thread2()
> >> {
> >> while(1)
> >> {
> >> sleep(5);
> >> /*Doing 1 Insert Operation on Table B*/
> >> }
> >>
> >> main()
> >> {
> >> pthread Ind1,Ind2;
> >> Connection_TO_PG();
> >> pthread_create(&Ind1,NULL,&Independent_Thread1,NULL);
> >> pthread_create(&Ind2,NULL,&Independent_Thread2,NULL);
> >> if(pthread_join(Ind1,NULL)<0)
> >> {
> >> printf("Ind1 is completed");
> >> }
> >> if(pthread_join(Ind2,NULL)<0)
> >> {
> >> printf("Ind2 is completed");
> >> }
> >> }
> >
> > You need a separate connection per thread or you need to synchronise
> your queries onto the single central connection, meaning that other threads
> need to be blocked (from performing queries) while any thread is performing
> a query.
> >
> > Alban Hertroys
> >
> > --
> > Screwing up is an excellent way to attach something to the ceiling.
>
> Yeah.  Also, OP left out the most important detail, namely where and
> how the connection object stored.  If the objective is to try and make
> two concurrent actions on the database, I'd consider giving
> asynchronous queries a whirl:
> http://www.postgresql.org/docs/8.1/static/libpq-async.html.  Basically
> you pair a PQsendQuery with a PQgetResult.  It's a lot easier to code
> than multi-threaded libpq and tends to be more robust in my
> experience.  If you must use threads, you'll want to keep a connection
> with each thread instance -- I'd avoid any temptation to use a client
> side connection pool.
>
> merlin
>


Re: [GENERAL] Threads With Libpq Issue

2012-08-02 Thread dinesh kumar
Thanks Craig/Reid ..

Yes it was duplicated ... Sorry

Regards,
Dinesh

On Thu, Aug 2, 2012 at 6:20 AM, Craig Ringer  wrote:

> Argh, disregard. Your post was duplicated, and this copy only just reached
> me.
>


Re: [GENERAL] Threads With Libpq

2012-08-02 Thread dinesh kumar
Hi All,

It seems semaphores satisfied my requirements. Below is the modified
behaviour.

sem_t Flag;    /* semaphore shared by both threads */

Connection_To_PG()
{
/* Making a connection to PG 9.0 -- one PGconn shared by both threads */
}

void* Independent_Thread1(void *arg)
{
while(1)
{
sleep(5);
/* Doing 1 Insert Operation on Table A */
sem_post(&Flag);    /* signal thread 2 that the connection is free */
}
}

void* Independent_Thread2(void *arg)
{
while(1)
{
sleep(5);
sem_wait(&Flag);    /* wait until thread 1 has finished its insert */
/* Doing 1 Insert Operation on Table B */
}
}

int main()
{
pthread_t Ind1, Ind2;
sem_init(&Flag, 0, 0);    /* start at 0 so thread 2 waits first */
Connection_To_PG();
pthread_create(&Ind1, NULL, &Independent_Thread1, NULL);
pthread_create(&Ind2, NULL, &Independent_Thread2, NULL);
if (pthread_join(Ind1, NULL) == 0)
{
printf("Ind1 is completed");
}
if (pthread_join(Ind2, NULL) == 0)
{
printf("Ind2 is completed");
}
return 0;
}

Thank you all for the wonderful guidance on this ...

Best Regards,
Dinesh
manojadinesh.blogspot.com


On Wed, Aug 1, 2012 at 8:34 PM, dinesh kumar wrote:

> Hi Merlin/Alban,
>
> Thank you very much for your guidance .. Sure will use Async Llibpq
> Functionalities or Will use Thread Mutex for making the the queries to be
> synchronize ...
>
> Best Regards,
> Dinesh
> manojadinesh.blogspot.com
>
>
> On Wed, Aug 1, 2012 at 7:33 PM, Merlin Moncure  wrote:
>
>> On Wed, Aug 1, 2012 at 8:09 AM, Alban Hertroys 
>> wrote:
>> > On 1 Aug 2012, at 14:32, dinesh kumar wrote:
>> >
>> >> Respected All,
>> >>
>> >> This is my first request/post in PG-Generals. If it is not the place
>> for these kind of queries, then please guide me where i need to be.
>> >>
>> >> I have a quick question regarding "pthread" with PostgreSQL 9.0 Libpq.
>> I'm facing a problem with "Pthread" and libpq. Please find the below
>> program behavoiur.
>> >>
>> >> Connection_To_PG()
>> >> {
>> >> /* Making a connection to PG 9.0 */
>> >> }
>> >>
>> >> void* Independent_Thread1()
>> >> {
>> >> while(1)
>> >> {
>> >> sleep(5);
>> >> /* Doing 1 Insert Operation on Table A*/
>> >> }
>> >> }
>> >>
>> >> void* Independent_Thread2()
>> >> {
>> >> while(1)
>> >> {
>> >> sleep(5);
>> >> /*Doing 1 Insert Operation on Table B*/
>> >> }
>> >>
>> >> main()
>> >> {
>> >> pthread Ind1,Ind2;
>> >> Connection_TO_PG();
>> >> pthread_create(&Ind1,NULL,&Independent_Thread1,NULL);
>> >> pthread_create(&Ind2,NULL,&Independent_Thread2,NULL);
>> >> if(pthread_join(Ind1,NULL)<0)
>> >> {
>> >> printf("Ind1 is completed");
>> >> }
>> >> if(pthread_join(Ind2,NULL)<0)
>> >> {
>> >> printf("Ind2 is completed");
>> >> }
>> >> }
>> >
>> > You need a separate connection per thread or you need to synchronise
>> your queries onto the single central connection, meaning that other threads
>> need to be blocked (from performing queries) while any thread is performing
>> a query.
>> >
>> > Alban Hertroys
>> >
>> > --
>> > Screwing up is an excellent way to attach something to the ceiling.
>>
>> Yeah.  Also, OP left out the most important detail, namely where and
>> how the connection object stored.  If the objective is to try and make
>> two concurrent actions on the database, I'd consider giving
>> asynchronous queries a whirl:
>> http://www.postgresql.org/docs/8.1/static/libpq-async.html.  Basically
>> you pair a PQsendQuery with a PQgetResult.  It's a lot easier to code
>> than multi-threaded libpq and tends to be more robust in my
>> experience.  If you must use threads, you'll want to keep a connection
>> with each thread instance -- I'd avoid any temptation to use a client
>> side connection pool.
>>
>> merlin
>>
>
>


Re: [GENERAL] Check PostgreSQL status using MS-DOS bat file?

2012-08-17 Thread dinesh kumar
Hi ,

Dave's instructions are helpful for finding the status of the server.
However, I also have the script below, which is essentially a "pg_ping" for
Windows; I hope it helps with the implementation.


@ECHO OFF

set PSQL="C:\Program Files\PostgreSQL\9.1\bin"

set DBNAME="template1"

set USER="postgres"

set PORT="5432"

set RES="Not Pinging"

%PSQL%\psql -Atq -c "SELECT 'ping'" -p %PORT% -U %USER% %DBNAME% > _Res.txt

set /p RES=<_Res.txt

echo %RES%

IF "%RES%"=="ping" (echo No need to raise any exception) ELSE (echo PostgreSQL seems not to be pinging, need to raise an exception)


Best Regards,
Dinesh
manojadinesh.blogspot.com

On Fri, Aug 17, 2012 at 4:32 PM, Dave Page  wrote:

> [Please keep the mailing list CC'd]
>
> On Fri, Aug 17, 2012 at 11:52 AM, Loughrey, Hugh
>  wrote:
> > Hi Dave,
> >
> > Thanks for the message below. The script you forwarded looks to be for
> an instance in which the DB is running of a windows box, apologies I should
> have mentioned, we currently run PostgreSQL on a Linux box. Does this mean
> calling pg_ctl is not an option? Or do we need to install additional
> drivers?
>
> pg_ctl only checks the status of an instance running on the local
> machine. To check on a remote linux box from windows using pg_ctl,
> you'd have to run pg_ctl on the linux box, probably over SSH (look for
> Putty for an SSH client for Windows).
>
> An easier option might be to run a psql command on the remote
> database. Just run something like "SELECT 1" and check you actually
> get a 1 back, and not a connection error.
>
> > We'd be using FME to push data from other databases into PostgreSQL,
> however before running our FME scripts we'd want to check the DB is up and
> running. If the DB is running, then run the FME scripts, if not...don't run
> the FME scripts. All of this needs to be controlled via a windows server
> 2008 box.
> >
> > I appreciate your help.
> >
> > Regards
> >
> > Hugh
> >
> > -Original Message-
> > From: Dave Page [mailto:dp...@pgadmin.org]
> > Sent: 15 August 2012 16:30
> > To: Bruce Momjian
> > Cc: Loughrey, Hugh; pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] Check PostgreSQL status using MS-DOS bat file?
> >
> > On Wed, Aug 15, 2012 at 4:04 PM, Bruce Momjian  wrote:
> >> On Wed, Aug 15, 2012 at 09:52:17AM +, Loughrey, Hugh wrote:
> >>> Hi All,
> >>>
> >>> I want to write a MS-DOS command to check that the PostgreSQL
> >>> database is up and running and able to accept data being pushed to
> >>> it. From a bit of reading I’ve identified the pg_ctl status command,
> >>> but can this be incorporated into a *.bat file and can the resulting
> >>> status be recorded in a *.txt file? If so does anyone have a command
> which would enable this?
> >>
> >> As I remember it is pretty tricky to call pg_ctl from a Windows batch
> >> file.  I know the Windows installers do it somehow --- you might want
> >> to downlaod it and see if you can find the shell script they use.
> >> Dave Page might know more --- CC'ing him.
> >
> > You shouldn't try to start/stop the server with pg_ctl if it's
> configured to run as a service (use "net start xxx", "net stop xxx"
> > for that), but you can check the status:
> >
> > C:\>"C:\Program Files\PostgreSQL\9.2\bin\pg_ctl.exe" -D "C:\Program
> Files\PostgreSQL\9.2\data" status
> > pg_ctl: server is running (PID: 1040)
> > C:/Program Files/PostgreSQL/9.2/bin/postgres.exe "-D" "C:/Program
> Files/PostgreSQL/9.2/data"
> >
> >
> > C:\>net stop postgresql-x64-9.2
> > The postgresql-x64-9.2 service is stopping.
> > The postgresql-x64-9.2 service was stopped successfully.
> >
> >
> > C:\>"C:\Program Files\PostgreSQL\9.2\bin\pg_ctl.exe" -D "C:\Program
> Files\PostgreSQL\9.2\data" status
> > pg_ctl: no server running
> >
> > --
> > Dave Page
> > Blog: http://pgsnake.blogspot.com
> > Twitter: @pgsnake
> >
> > EnterpriseDB UK: http://www.enterprisedb.com The Enterprise PostgreSQL
> Company
> > “Any opinion expressed in this e-mail or any attached files are those of
> the individual and not necessarily those of Hoople Ltd. You should be aware
> that Hoople Ltd. monitors its email service. This e-mail and any attached
> files are confidential and intended solely for the use of the addressee.
> This communication may contain material protected by law from being passed
> on. If you are not the intended recipient and have received this e-mail in
> error, you are advised that any use, dissemination, forwarding, printing or
> copying of this e-mail is strictly prohibited. If you have received this
> e-mail in error please contact the sender immediately and destroy all
> copies of it.
>
>
>
> --
> Dave Page
> Blog: http://pgsnake.blogspot.com
> Twitter: @pgsnake
>
> EnterpriseDB UK: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] crosstab

2012-09-04 Thread dinesh kumar
Hi ,

I believe we need dynamic SQL to generate a crosstab with a variable number
of columns. I tried this earlier and wrote the calendar query below for the
year 2011, including holidays.

Crosstab Example
================
select
min(case when extract(month from d.dd)= 1 then
'Jan'
when extract(month from d.dd)= 2 then
'Feb'
when extract(month from d.dd)=3 then
'Mar'
when extract(month from d.dd)= 4 then
'Apr'
when extract(month from d.dd)= 5 then
'May'
when extract(month from d.dd)= 6 then
'Jun'
when extract(month from d.dd)= 7 then
'Jul'
when extract(month from d.dd)= 8 then
'Aug'
when extract(month from d.dd)=9 then
'Sep'
when extract(month from d.dd)= 10 then
'Oct'
when extract(month from d.dd)= 11 then
'Nov'
when extract(month from d.dd)= 12 then
'Dec'
end
) as MONTH,min(case when extract(DOW from d.dd)=0 then
(extract(day from d.dd))
end) as SUN,
min(case when extract(DOW from d.dd)=1 then
(extract(day from d.dd))
end) as MON,
min(case when extract(DOW from d.dd)=2 then
(extract(day from d.dd))
end) as TUE,
min(case when extract(DOW from d.dd)=3 then
(extract(day from d.dd))
end) as WED,
min(case when extract(DOW from d.dd)=4 then
(extract(day from d.dd))
end) as THU,
min(case when extract(DOW from d.dd)=5 then
(extract(day from d.dd))
end) as FRI,
min(case when extract(DOW from d.dd)=6 then
(extract(day from d.dd))
end) as SAT,
min(case when extract(day from d.dd)=26 and extract(month from d.dd)=1 then
'Republic Day <---> 26th January 2011 <--> Wednessday'
when extract(day from d.dd)=16 and extract(month from d.dd)=2 then
'Id-E-Milad <--> 16th February 2011 <--> Wednessday'
when extract(day from d.dd)=19 and extract(month from d.dd)=2 then
'Chhatrapati Shivaji Maharaj Jayanthi <--> 19th February 2011 <--> Saturday'
when extract(day from d.dd)=2 and extract(month from d.dd)=3 then
'Mahashivratri <--> 2nd March 2011<--> Wednessday'
when extract(day from d.dd)=4 and extract(month from d.dd)=4 then
'Gudi Padwa <--> 4th April 2011 <--> Tuesday'
when extract(day from d.dd)=12 and extract(month from d.dd)=4 then
'Ram Navmi - 12th April - Tuesday'
when extract(day from d.dd)=14 and extract(month from d.dd)=4 then
'Dr. Babasaheb Ambedkar Jayanthi - 14th April - Thursday'
when extract(day from d.dd)=16 and extract(month from d.dd)=4 then
'Mahavir Jayanthi - 16th April - Saturday'
when extract(day from d.dd)=22 and extract(month from d.dd)=4 then
'Good Friday - 22nd April - Friday'
when extract(day from d.dd)=17 and extract(month from d.dd)=5 then
'Buddha Pournima - 17th May - Tuesday'
when extract(day from d.dd)=15and extract(month from d.dd)=8 then
'Independence day - 15th August - Monday'
when extract(day from d.dd)=19 and extract(month from d.dd)=8 then
'Parsi New Year - 19th August - Friday'
when extract(day from d.dd)=31 and extract(month from d.dd)=8 then
'Ramzan Id(Id-Ul-Fitar)(Shawal-1) - 31st August - Wednessday'
when extract(day from d.dd)=1 and extract(month from d.dd)=9 then
'Ganesh Chaturthi - 1st September - Thursday'
when extract(day from d.dd)=6 and extract(month from d.dd)=10 then
'Dasara - 6th October - Thursday'
when extract(day from d.dd)=26 and extract(month from d.dd)=10 then
'Diwali Amavasya (Laxmi Pujan) - 26th October - Wednessday'
when extract(day from d.dd)=27 and extract(month from d.dd)=10 then
'Diwali (Balipratipada) - 27th October - Thursday'
when extract(day from d.dd)=7 and extract(month from d.dd)=11 then
'Bakri Id (iD-Ul-Zun) - 7th November - Monday'
when extract(day from d.dd)=10 and extract(month from d.dd)=11 then
'Guru Nanank Jayanthi - 10th November - Thursday'
when extract(day from d.dd)=6 and extract(month from d.dd)=12 then
'Moharam - 6th December - Tuesday'
when extract(day from d.dd)=20 and extract(month from d.dd)=3 then
'Holi - 20th March - Sunday'
when extract(day from d.dd)=1 and extract(month from d.dd)=5 then
'Maharastra Day - 1st May - Sunday'
when extract(day from d.dd)=2 and extract(month from d.dd)=10 then
'Mahatma Gandhi Jayanthi - 2nd October - Sunday'
when extract(day from d.dd)=25 and extract(month from d.dd)=12 then
'Christmas - 25th December - Sunday'

end) as Holidays
from
(
select
date_trunc('year',current_date)::date + a.si - 1 as dd
from
(
select generate_series(1,cast((extract('day' from
date_trunc('year',current_date)-date_trunc('year',current_date-365))) as
int)) as si) as a
) as d
group by extract(week from d.dd),extract(month from d.dd)
order by extract(month from d.dd),1,2,3,4,5,6,7;


OUTPUT


month | sun | mon | tue | wed | thu | fri | sat | holidays
---+-+-+-+-+-+-+-+
Jan | 2 | | | | | | 1 |
Jan | 9 | 3 | 4 | 5 | 6 | 7 | 8 |
Jan | 16 | 10 | 11 | 12 | 13 | 14 | 15 |
Jan | 23 | 17 | 18 | 19 | 20 | 21 | 22 |
Jan | 30 | 24 | 25 | 26 | 27 | 28 | 29 | Republic Day <---> 26th January
2011 <--> Wednessday
Jan | | 31 | | | | | |
Feb | 6 | | 1 | 2 | 3 | 4 | 5 |
Feb | 13 | 7 | 8 | 9 | 10 | 11 | 12 |
Feb | 20 | 14 | 15 | 16 | 17 | 18

Re: [GENERAL] crosstab

2012-09-04 Thread dinesh kumar
This is a really good feature (crosstab) to implement.
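
For a fixed set of output columns, tablefunc's crosstab() keeps the query
much shorter than the CASE approach; a minimal sketch, assuming a
hypothetical table sales(year int, month int, amount numeric):

CREATE EXTENSION tablefunc;

SELECT * FROM crosstab(
    'SELECT year, month, amount FROM sales ORDER BY 1, 2',
    'SELECT m FROM generate_series(1, 3) m'
) AS ct (year int, jan numeric, feb numeric, mar numeric);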

Best Regards,
Dinesh
manojadinesh.blogspot.com


On Wed, Sep 5, 2012 at 12:21 AM, Vincent Veyron  wrote:

> Le mardi 04 septembre 2012 à 08:39 -0700, punnoose a écrit :
> > hi all
> > How could i use crostab to display variable number of columns. in the
> output
> > There could be variable number of columns
>
> see the documentation for Additional Supplied Modules, in your case
> tablefunc :
>
> http://www.postgresql.org/docs/9.1/static/tablefunc.html
>
>
> --
> Vincent Veyron
> http://marica.fr/
> Gestion informatique des sinistres d'assurances et des dossiers
> contentieux pour le service juridique
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] RE: [GENERAL] INSERT. RETURNING for copying records

2012-09-07 Thread dinesh kumar
Hi David,

I am not sure RETURNING alone offers you the following behavior:

< What I'm looking for >

+--+-+

| original_rid | rid |

+--+-+

| 1| 4   |

| 2| 5   |

| 3| 6   |

+--+-+

I believe the following example gives you the desired results once the
insert completes:


postgres=# SELECT * FROM TEST;
 t |   t1
---+
 1 | Dinesh
 2 | Dinesh
 3 | Kumar
 4 | Kumar
 5 | Manoja
(5 rows)

postgres=# SELECT MIN(T),MAX(T),T1 FROM TEST GROUP BY T1 HAVING
MIN(T)!=MAX(T);
 min | max |   t1
-+-+
   1 |   2 | Dinesh
   3 |   4 | Kumar
(2 rows)
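
On 9.1+, a data-modifying CTE can capture the old-to-new mapping directly
by polling the serial's sequence, along the lines David sketches below (a
minimal, untested sketch against the testing table from this thread):

WITH src AS (
    SELECT rid AS original_rid,
           nextval('testing_rid_seq') AS new_rid,
           category, name, fk_parent
    FROM testing
    WHERE category = 'cat1'
), ins AS (
    INSERT INTO testing (rid, category, name, fk_parent)
    SELECT new_rid, category, name, fk_parent FROM src
)
SELECT original_rid, new_rid FROM src;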

Best Regards,
Dinesh
manojadinesh.blogspot.com


On Sat, Sep 8, 2012 at 12:49 AM, David Johnston  wrote:

> From: pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of Michael Sacket
> Sent: Friday, September 07, 2012 2:09 PM
> To: PG-General Mailing List
> Subject: [GENERAL] INSERT… RETURNING for copying records
>
> Good Afternoon,
>
> I'm attempting to write a function that will duplicate a few records, but
> the catch is I need to have a mapping of the original pk to the new pk.  I
> know I can use the RETURNING clause to get the new ids... but how to map
> that to the original ones is escaping me.
>
> < Setup >
>
> CREATE TABLE testing (rid serial PRIMARY KEY, category text NOT NULL, name
> text NOT NULL, fk_parent int4);
>
> INSERT INTO testing (category, name, fk_parent) VALUES ('cat1', 'one',
> NULL), ('cat1', 'one.one', 1), ('cat1', 'one.two', 1);
>
> SELECT * FROM testing;
> +-+--+-+---+
> | rid | category | name| fk_parent |
> +-+--+-+---+
> | 1   | cat1 | one | NULL  |
> | 2   | cat1 | one.one | 1 |
> | 3   | cat1 | one.two | 1 |
> +-+--+-+---+
>
> < Duplicating the records >
>
> INSERT INTO testing (category, name, fk_parent) (select category, name,
> fk_parent from testing where category='cat1') returning rid, category,
> name, fk_parent;
> +-+--+-+---+
> | rid | category | name| fk_parent |
> +-+--+-+---+
> | 4   | cat1 | one | NULL  |
> | 5   | cat1 | one.one | 1 |
> | 6   | cat1 | one.two | 1 |
> +-+--+-+---+
>
> < What I'm looking for >
> +--+-+
> | original_rid | rid |
> +--+-+
> | 1| 4   |
> | 2| 5   |
> | 3| 6   |
> +--+-+
>
> < This doesn't work >
>
> INSERT INTO testing (category, name, fk_parent) select category, name,
> fk_parent from testing as original where category='cat1' returning rid,
> category, name, fk_parent, original.rid;
>
> Specifically, my goal is to be able to duplicate a subset of records and
> map any referenced foreign keys to the new ones from the copies. I could
> write a pl/pgsql function to loop through the records and build the mapping
> as I go, but I was thinking there might be a better way.  Any thoughts?
>
> Thanks!
>
> Michael
>
>
> Two thoughts (syntax not validated):
>
> INSERT INTO …. VALUES (non-id-cols, id)
> SELECT [non-id-cols], nextval('sequence') AS new_id FROM testing
> RETURNING id, new_id
>
> There is no reason to delay the assignment of the ID until the time of
> insert; by polling the sequence manually you get the same effect but at a
> time when you have not forgotten what the old value was.
>
> If for some reason you have to let the ID be auto-generated you likely
> need to identify the "natural key" for the record and then:
>
> WITH ins (
>   INSERT …. RETURNING newid, (natural_key_cols) AS naturalkey
> )
> SELECT *
> FROM ins
> JOIN testing ON
> ins.naturalkey = (testing.natural_key cols)
>
> If there is no natural key then this method is ambiguous in the presence
> of multiple otherwise identical records.
>
> David J
>


Re: [GENERAL] Again, problem with pgbouncer

2012-10-03 Thread dinesh kumar
Hi,

Do you have a "MYSITE_MYSITE" user in your database?

Please log in to the database directly (I mean, without pgbouncer) and
check once:

SELECT * FROM pg_user WHERE usename ~~* 'MYSITE_MYSITE';

Also please check the admin_users list in your pgbouncer.ini.
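
If the database user exists but pgbouncer still reports "No such user", it
can also help to connect to pgbouncer's admin console (the virtual
"pgbouncer" database) and inspect what it actually loaded from the authfile
(a sketch; the connecting user must be listed in admin_users):

psql -p 6389 -U MYSITE_pgbouncer pgbouncer
pgbouncer=# SHOW USERS;
pgbouncer=# SHOW DATABASES;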

Best Regards,
Dinesh
manojadinesh.blogspot.com


On Wed, Oct 3, 2012 at 11:31 PM, Phoenix Kiula wrote:

> On Tue, Oct 2, 2012 at 5:27 PM, Phoenix Kiula 
> wrote:
> > On Tue, Oct 2, 2012 at 11:29 AM, Phoenix Kiula 
> wrote:
> >> On Tue, Oct 2, 2012 at 12:59 AM, Phoenix Kiula 
> wrote:
>  Could you please check permission of /var/run/pgbouncer/ directory. If
>  pgbouncer directory does not have "postgres" user permissions,please
> assign
>  it and then start the pgbouncer.
> >>>
> >>>
> >>> The /var/run/pgbouncer/ directory has
> >>>
> >>>chown -R postgres:postgres ..
> >>>
> >>> The port number everywhere is already 6789.
> >>>
> >>> What else?
> >>
> >>
> >>
> >> And just to be safe, I also added pgbouncer user to postgres group:
> >>
> >>
> >> usermod -a -G postgres pgbouncer
> >>
> >>
> >> Now when I restart the pgbouncess service, it fails. The log has this
> message:
> >>
> >>
> >> 2012-10-01 23:25:24.004 21037 FATAL
> >> Cannot open logfile: '/var/log/pgbouncer.log':
> >> Permission denied
> >>
> >>
> >> That file is owned by "postgres:postgres" as indicated in a gazillion
> >> threads and documentation online (none of which is comprehensive) but
> >> just to be sure I also did this:
> >>
> >>
> >> chown :postgres /var/log/pgbouncer.log
> >>
> >>
> >> Still the same permission error. Seriously, why can't the log message
> >> be a little more useful? Why can't it say clearly WHICH USER is
> >> looking for permission to the log file? Both "pgbouncer" and
> >> "postgres" have permissions (through the group "postgres") on that
> >> file. So which is it?
> >
> >
> >
> > I made the port number 6389 everywhere. I changed the permissions of
> > the pgbouncer.log to:
> >
> >chown pgbouncer:postgres /var/log/pgbouncer.log
> >
> > Now at least the service starts. But when I try and connect via the
> > pgbouncer ID:
> >
> >psql -p 6389 -U  snipurl_snipurl snipurl
> >
> > I get this error:
> >
> >psql: ERROR:  No such user: MYSITE_MYSITE
> >
> > And yet, the authfile has this:
> >
> > "MYSITE_MYSITE" ""
> > "MYSITE_MYSITE" ""
> > "postgres" ""
> > "MYSITE_pgbouncer" ""
> >
> >
> > The authfile permissions are:
> >
> >283377983 -rw-r--r-- 1 pgbouncer postgres 262 Apr 14 11:15
> > /var/lib/pgsql/pgbouncer.txt
> >
> >
> > What else?
>
>
>
>
> No response.
>
> Is there anyone who can help me with pgbouncer?
>
> What are the permissions for the authfile, etc?
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Postgresql - 8.3 Replication in windows

2012-11-02 Thread dinesh kumar
Hi ,

On Windows we do not have the scp/rsync utilities that are usually used to
ship archives to a remote/slave server, so a plain copy to a shared
location is the common approach.

I hope the settings below will help you while setting up replication.

Primary
==
archive_command = 'copy %p \\Archive_Location\\%f'

Slave
=
trigger_file = 'C:\\.trigger.5432'
restore_command = 'copy \\Archive_Location\\%f %p'
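
(Note that the trigger_file recovery parameter only exists from 9.0 on; for
an 8.3 warm standby, the usual tool is the contrib pg_standby program,
which polls the archive and supports a trigger file of its own, roughly
along these lines:

restore_command = 'pg_standby -t C:\\trigger.5432 \\Archive_Location %f %p %r')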

Also, make sure the primary's Windows firewall accepts connections from the
slave server.

Best Regards,
Dinesh
manojadinesh.blogspot.com

On Fri, Nov 2, 2012 at 1:35 PM, John R Pierce  wrote:

> On 11/02/12 12:53 AM, raghu ram wrote:
>
>>
>>
>> Can you please tell me , how to set up replication in win7,
>> without slony tool..
>>
>>
>> Below URL provides a configuration steps of Slony Replication on Windows
>> Operating System:
>>
>> http://www.mokisystems.com/**blog/setting-up-slony-on-**windows/
>>
>
> he asked, how to setup replication without slony.
>
>
>
>
> --
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/**mailpref/pgsql-general
>