[GENERAL] passing multiple records to json_populate_recordset

2013-09-23 Thread Raphael Bauduin
Hi,

I'm experimenting with the json data type and functions in 9.3.
I'm storing json objects of this form in the event column:
{type: 'event_type', products: [ {id: 45, 'type': 3, 'gender': 'F', ...}, ...,
{} ] }

I can issue this query, but notice the limit 1:

select * from json_populate_recordset(null::product, (select
event->'products' from events limit 1));

The result is:

 type | gender |  id
------+--------+-------
   41 | F  | 40003
   41 | F  | 60043
   41 | F  | 27363
   41 | F  | 27373
   41 | F  | 28563

But all these products come from one event.
Is there a way to return the products from several events?, eg with a limit
2 rather than limit 1?

Thanks

Raph


Re: [GENERAL] Tree structure

2013-09-23 Thread Rémi Cura
Be careful, you have a number of limitations with recursive CTEs (I'm
thinking of UPDATE and so on).
You can work around them with plpgsql but it might be painful.

You forgot a solution:
if you need powerful graph features,
use postgres as a database and a SPARQL-speaking frontend.
It may be a bit of overkill ;-)

Cheers,
Rémi-C


2013/9/23 Kaare Rasmussen 

> Hi Alban
>
>
>  4. Using a recursive common table expression (CTE).
>> http://www.postgresql.org/docs/9.2/static/queries-with.html
>>
>
> Yes, you're right. In fact that's what I'm testing a way to replace, as
> I'm not confident in the performance in all situations. My fault entirely;
> I should have said so from the start.


Re: [GENERAL] passing multiple records to json_populate_recordset

2013-09-23 Thread Raphael Bauduin
On Mon, Sep 23, 2013 at 11:31 AM, Raphael Bauduin  wrote:

> Hi,
>
> I'm experimenting with the json data type and functions in 9.3.
> I'm storing json objects of this form in the event column:
> {type: 'event_type', products: [ {id: 45, 'type': 3, 'gender': 'F', ...}, ...,
> {} ] }
>
> I can issue this query, but notice the limit 1:
>
> select * from json_populate_recordset(null::product, (select
> event->'products' from events limit 1));
>
> The result is (edited for conciseness):
>
>  type | gender |  id
> ------+--------+-------
>41 | F  | 40003
>41 | F  | 60043
>41 | F  | 27363
>41 | F  | 27373
>41 | F  | 28563
>
> But all these products come from one event.
> Is there a way to return the products from several events?, eg with a
> limit 2 rather than limit 1?
>
>
Some more info, after searching further.

This query

  select json_populate_record(null::product,
           row_to_json(json_populate_recordset(null::product, event->'products')))
  from (select * from events limit 2) as foo;

returns what I want but not in the format I want (why?):
   json_populate_record
--
 (33,61,M,3,51,12,54893)
 (20,61,M,3,1,15,59623)
 (17,61,M,3,453,12,59283)
 (30,61,M,3,51,19,55713)
 (26,61,M,3,51,19,54963)

I managed to get the results as json:

select row_to_json(json_populate_recordset(null::product, event->'products'))
from (select * from events limit 2) as foo;

                                        row_to_json
--------------------------------------------------------------------------------------------
 {"price_advantage":33,"type":61,"gender":"M","status":3,"brand":51,"price":12,"id":54893}
 {"price_advantage":20,"type":61,"gender":"M","status":3,"brand":1,"price":15,"id":59623}
 {"price_advantage":17,"type":61,"gender":"M","status":3,"brand":453,"price":12,"id":59283}

but I don't manage to get the results in table form, like in the first
json_populate_recordset query I listed (with limit 1). Any suggestions?

Thanks

Raph


Re: [GENERAL] passing multiple records to json_populate_recordset

2013-09-23 Thread Merlin Moncure
On Mon, Sep 23, 2013 at 8:25 AM, Raphael Bauduin  wrote:
>
>
> On Mon, Sep 23, 2013 at 11:31 AM, Raphael Bauduin  wrote:
>>
>> Hi,
>>
>> I'm experimenting with the json data type and functions in 9.3.
>> I'm storing json objects of this form in the event column:
>> {type: 'event_type', products: [ {id: 45, 'type': 3, 'gender': 'F', ...}, ...,
>> {} ] }
>>
>> I can issue this query, but notice the limit 1:
>>
>> select * from json_populate_recordset(null::product, (select
>> event->'products' from events limit 1));
>>
>> The result is (edited for conciseness):
>>
>>
>>  type | gender |  id
>> ------+--------+-------
>>41 | F  | 40003
>>41 | F  | 60043
>>41 | F  | 27363
>>41 | F  | 27373
>>41 | F  | 28563
>>
>> But all these products come from one event.
>> Is there a way to return the products from several events?, eg with a
>> limit 2 rather than limit 1?
>>
>
> Some more info, after searching further.
>
> This query
>
>   select
> json_populate_record(null::product,row_to_json(json_populate_recordset(null::product,event->'products')))
> from (select * from events limit 2) as foo ;
>
> returns what I want but not in the format I want (why?):

you need to use LATERAL.

here's a summary of the technique (see lateral version -- you don't
need to use recursion).

http://www.reddit.com/r/PostgreSQL/comments/1hwu8i/postgresql_recursive_common_table_expression_and/caywoxw

merlin




[GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus

Query

SELECT * FROM toode
WHERE toode in (SELECT toode FROM tempkaive)
OR toode in (SELECT toode FROM tempalgsemu)

stops working after upgrading to 9.3 RTM on Windows from an earlier version.

Task Manager shows that the postgres.exe process has a constant 13% CPU usage
(this is an 8-core computer) and its private working set memory is 16 MB.


PgAdmin shows that this query is running.

The toode field type is char(20) and it is the toode table's primary key.

tempkaive and tempalgsemu are temporary tables created earlier in this
transaction. They do not have indexes.

toode is a real table which has 509873 records.
The tempkaive temp table is probably bigger than the toode table, and the
tempalgsemu temp table is smaller than toode.


How to fix this or find the reason?
How to rewrite the query so that it works?

The ANALYZE command was executed but the problem persists.
I tested it running Postgres 9.3 RTM on 32-bit Windows 7 and 64-bit Windows
2008 R2 servers.

In both cases the same problem occurs.
Only a single user is using the database and only this query is running.


Locks window shows:

 7840 | toy | 53749 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
 7840 | toy | 53652 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
 7840 | toy | 54605 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
 7840 | toy | 54608 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
 7840 | toy | 49799 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
 7840 | admin | 7/13375 | 7/13375 | ExclusiveLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
 7840 | toy | 53750 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)


Andrus. 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Christoph Berg
Re: Andrus 2013-09-23 
> SELECT * FROM toode
> WHERE toode in (SELECT toode FROM tempkaive)
> OR toode in (SELECT toode FROM tempalgsemu)

Generally, WHERE IN (SELECT) should be rewritten as WHERE EXISTS
(SELECT):

SELECT * FROM toode o
WHERE EXISTS (SELECT toode FROM tempkaive i WHERE o.toode = i.toode)
OR EXISTS (SELECT toode FROM tempalgsemu i WHERE o.toode = i.toode)

Also, ANALYZEing the tables after the upgrade might help if this has
not yet been done.
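
For example, a minimal sketch using the table names from the query above
(temporary tables are never visited by autovacuum, so they have no
statistics until analyzed by hand):

ANALYZE tempkaive;
ANALYZE tempalgsemu;
ANALYZE toode;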

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Jayadevan M
Could you please post EXPLAIN for that query?
How 'fat' are the temporary tables - just a couple of columns or really
wide?



On Mon, Sep 23, 2013 at 7:08 PM, Andrus  wrote:

> Query
>
> SELECT * FROM toode
> WHERE toode in (SELECT toode FROM tempkaive)
> OR toode in (SELECT toode FROM tempalgsemu)
>
> stops working after upgrading to 9.3 RTM on Windows from an earlier version.
>
> Task Manager shows that the postgres.exe process has a constant 13% CPU usage
> (this is an 8-core computer) and its private working set memory is 16 MB.
>
> PgAdmin shows that this query is running.
>
> The toode field type is char(20) and it is the toode table's primary key.
>
> tempkaive and tempalgsemu are temporary tables created earlier in this
> transaction. They do not have indexes.
> toode is a real table which has 509873 records.
> The tempkaive temp table is probably bigger than the toode table, and the
> tempalgsemu temp table is smaller than toode.
>
> How to fix this or find the reason?
> How to rewrite the query so that it works?
>
> The ANALYZE command was executed but the problem persists.
> I tested it running Postgres 9.3 RTM on 32-bit Windows 7 and 64-bit
> Windows 2008 R2 servers.
> In both cases the same problem occurs.
> Only a single user is using the database and only this query is running.
>
>
> Locks window shows:
>
> 7840 | toy | 53749 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
> 7840 | toy | 53652 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
> 7840 | toy | 54605 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
> 7840 | toy | 54608 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
> 7840 | toy | 49799 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
> 7840 | admin | 7/13375 | 7/13375 | ExclusiveLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
> 7840 | toy | 53750 | admin | 7/13375 | AccessShareLock | Yes | 2013-09-23 15:57:08+03 | SELECT * FROM toode WHERE toode in (SELECT toode FROM tempkaive) OR toode in (SELECT toode FROM tempalgsemu)
>
> Andrus.
>
>


Re: [GENERAL] passing multiple records to json_populate_recordset

2013-09-23 Thread Adrian Klaver

On 09/23/2013 06:25 AM, Raphael Bauduin wrote:



Some more info, after searching further.

This query

   select
json_populate_record(null::product,row_to_json(json_populate_recordset(null::product,event->'products')))
from (select * from events limit 2) as foo ;

returns what I want but not in the format I want (why?):


Maybe try:

select  * from 
json_populate_record(null::product,row_to_json(json_populate_recordset(null::product,event->'products'))) 
from (select * from events limit 2) as foo ;




Raph



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tree structure

2013-09-23 Thread Chris Travers
On Sun, Sep 22, 2013 at 9:48 PM, Kaare Rasmussen  wrote:

> Hi Alban
>
>
>  4. Using a recursive common table expression (CTE).
>> http://www.postgresql.org/docs/9.2/static/queries-with.html
>>
>
> Yes, you're right. In fact that's what I'm testing a way to replace, as
> I'm not confident in the performance in all situations. My fault entirely;
> I should have said so from the start.


It might be helpful for you to discuss what sorts of concerns you have and
how they fit into the specifics of your data.  Trees are an area where
different uses may have different recommended solutions.  I gave my
thoughts on performance on trees above.  There are a few really bad areas I
can think of.  For example, if you had a ten-layer deep scan where each
scan pulled around 10% or so of the table, you might be looking at 10
sequential scans and a fair bit of CPU time.  If the result set was very
large, you might see things written to disk.  There are a number of gotchas.

This being said, *usually* I find that recursive CTEs are one of the
better solutions out there for trees and I think they will perform better
in more situations than many of the other solutions.
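
As an illustration, here is a minimal sketch of the recursive-CTE approach
over a hypothetical parent/child table (all names are made up for the example):

CREATE TABLE node (
    id        int PRIMARY KEY,
    parent_id int REFERENCES node(id),  -- NULL for root nodes
    name      text NOT NULL
);

-- fetch node 1 together with all of its descendants
WITH RECURSIVE subtree AS (
    SELECT id, parent_id, name
    FROM node
    WHERE id = 1
  UNION ALL
    SELECT n.id, n.parent_id, n.name
    FROM node n
    JOIN subtree s ON n.parent_id = s.id
)
SELECT * FROM subtree;

An index on node(parent_id) keeps each recursion step an index scan instead
of the repeated sequential scans described above.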

Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more.shtml


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus

Hi,

Thank you.


Generally, WHERE IN (SELECT) should be rewritten as WHERE EXISTS
(SELECT):
SELECT * FROM toode o
WHERE EXISTS (SELECT toode FROM tempkaive i WHERE o.toode = i.toode)
   OR EXISTS (SELECT toode FROM tempalgsemu i WHERE o.toode = i.toode)


I rewrote it. It now hangs on this line:

SELECT * FROM toode o
WHERE EXISTS (SELECT toode FROM tempkaive i WHERE o.toode = i.toode)
   OR EXISTS (SELECT toode FROM tempalgsemu i WHERE o.toode = i.toode)


I waited 18 minutes but the query is still running.

Maybe it would take an extremely long time.
How to make it work?

The toode table structure is below. It contains 509873 records.
tempkaive and tempalgsemu are big temporary tables created earlier in this
transaction. They do not have indexes and have a lot of records.

Andrus.

CREATE TABLE firma1.toode
(
 grupp character(1),
 toode character(20) NOT NULL,
 ribakood character(20),
 ribakood2 character(20),
 ribakood3 character(20),
 nimetus character(50),
 yhik character(6),
 myygikood character(4),
 tykke numeric(9,2),
 liik character(10),
 kontonr character(10),
 engnimetus character(50),
 rusnimetus character(50),
 finnimetus character(50),
 lvlnimetus character(50),
 markused text,
 myygihind numeric(15,5),
 jaehind numeric(15,2),
 katteprots numeric(6,2),
 paritoluri character(2),
 ostuhind numeric(15,5),
 valmyygih numeric(15,5),
 valraha character(3),
 ovalraha character(3),
 aktsiis numeric(10,5),
 kogpak numeric(9,4) NOT NULL DEFAULT 0,
 soodkogus numeric(8,1),
 vaikkogus numeric(12,4),
 hinne numeric(8,2),
 yhikuteise numeric(9,4),
 norm numeric(8,4),
 soetaeg date,
 soetarve character(25),
 algmaksumu numeric(12,2),
 kasutaja character(12),
 kulum character(10),
 kulukonto character(10),
 oper character(3),
 objekt1 character(10),
 objekt2 character(10),
 objekt3 character(10),
 objekt4 character(10),
 objekt5 character(10),
 objekt6 character(10),
 objekt7 character(10),
 objekt8 character(10),
 objekt9 character(10),
 parimenne date,
 asukoht character(25),
 minkogus numeric(12,4),
 masin character(5),
 ryhm character(10),
 klass character(5),
 kaubasumma text,
 tasusumma text,
 pangateen ebool,
 analoog character(20),
 taara character(20),
 taara2 character(20),
 taarakaal numeric(9,5),
 taara2kaal numeric(9,5),
 hankija character(12),
 hinnak character(5),
 eelminekuu ebool,
 distribute ebool,
 plaanhind numeric(15,5),
 "timestamp" character(14) NOT NULL DEFAULT to_char(now(), 
'MMDDHH24MISS'::text),
 atimestamp character(14) NOT NULL DEFAULT to_char(now(), 
'MMDDHH24MISS'::text),

 username character(10),
 changedby character(10),
 kgasuvi numeric(2,0),
 ktasuvi numeric(2,0),
 kgatalv numeric(2,0),
 ktatalv numeric(2,0),
 kylmik numeric(2,0),
 tkmkoef numeric(3,1),
 paak numeric(4,0),
 kassakeeld ebool,
 kaalukaup ebool,
 saadakaalu ebool,
 sailivusae numeric(2,0),
 kaubakood character(10),
 netomass numeric(12,4),
 seisund character(1),
 tootjakood character(40),
 klassif3 numeric(7,0),
 prots1 numeric(6,2),
 prots2 numeric(6,2),
 prots3 numeric(6,2),
 ale1 numeric(8,2),
 ale2 numeric(8,2),
 ale3 numeric(8,2),
 tootja character(10),
 soomes numeric(12,4),
 originaal character(20),
 eekjaehind numeric(15,2),
 amordipiir numeric(12,2),
 pant character(20),
 hulgihind numeric(12,2),
 transportw ebool,
 tykke2 numeric(9,2),
 tootjaviit character(40),
 CONSTRAINT toode_pkey PRIMARY KEY (toode),
 CONSTRAINT toode_changedby_fkey FOREIGN KEY (changedby)
 REFERENCES kasutaja (kasutaja) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_kasutaja_fkey FOREIGN KEY (kasutaja)
 REFERENCES firma1.klient (kood) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_kaubakood_fkey FOREIGN KEY (kaubakood)
 REFERENCES nomenkla (kood) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_kontonr_fkey FOREIGN KEY (kontonr)
 REFERENCES firma1.konto (kontonr) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_kulukonto_fkey FOREIGN KEY (kulukonto)
 REFERENCES firma1.konto (kontonr) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_kulum_fkey FOREIGN KEY (kulum)
 REFERENCES firma1.konto (kontonr) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_liik_fkey FOREIGN KEY (liik)
 REFERENCES firma1.artliik (liik) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED,
 CONSTRAINT toode_myygikood_fkey FOREIGN KEY (myygikood)
 REFERENCES firma1.myygikoo (myygikood) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_objekt1_fkey FOREIGN KEY (objekt1)
 REFERENCES firma1.yksus1 (yksus) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT t

Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus
Hi,

thank you.
> Could you please post EXPLAIN for that query?
As recommended, I changed the query to use EXISTS:

SELECT * FROM toode o
WHERE EXISTS (SELECT toode FROM tempkaive i WHERE o.toode = i.toode)
   OR EXISTS (SELECT toode FROM tempalgsemu i WHERE o.toode = i.toode)

It still hangs in the same way. The EXPLAIN output for this query is:

Seq Scan on toode o  (cost=0.00..172913763.23 rows=382319 width=1681)
  Filter: ((SubPlan 1) OR (alternatives: SubPlan 2 or hashed SubPlan 3))
  SubPlan 1
    ->  Seq Scan on tempkaive i  (cost=0.00..4566.52 rows=14 width=0)
          Filter: (o.toode = toode)
  SubPlan 2
    ->  Seq Scan on tempalgsemu i_1  (cost=0.00..348.98 rows=27 width=0)
          Filter: (o.toode = toode)
  SubPlan 3
    ->  Seq Scan on tempalgsemu i_2  (cost=0.00..335.58 rows=5358 width=84)


> How 'fat' are the temporary tables - just a couple of columns or really wide?

tempalgsemu has 14 columns
tempkaive has 31 columns

Structures are below. The toode structure was posted in a separate letter.

Andrus.

tempalgsemu:

Field  Field Name  Type       Width  Dec  Nulls
    1  ID          Integer        4       Yes
    2  LAONR       Numeric        4       Yes
    3  KUUPAEV     Date           8       Yes
    4  KELLAAEG    Character      5       Yes
    5  OSAK        Character     10       Yes
    6  TOODE       Character     20       Yes
    7  PARTII      Character     15       Yes
    8  KOGUS       Numeric       14    4  Yes
    9  HIND        Numeric       17    5  Yes
   10  KULUM       Numeric       17    5  Yes
   11  TEGKOGUS    Numeric       14    4  Yes
   12  STKUUPAEV   Date           8       Yes
   13  KLIENT      Character     12       Yes
   14  MASIN       Character      5       Yes
** Total **                      156

 
tempkaive:

Field  Field Name  Type       Width  Dec  Nulls
    1  DOKTYYP     Character      1       Yes
    2  DOKUMNR     Integer        4       Yes
    3  KUUPAEV     Date           8       Yes
    4  KELLAAEG    Character      5       Yes
    5  RAHA        Character      3       Yes
    6  EXCHRATE    Numeric       16    8  Yes
    7  KLIENT      Character     12       Yes
    8  ID          Integer        4       Yes
    9  TOODE       Character     20       Yes
   10  PARTII      Character     15       Yes
   11  KULUPARTII  Character     15       Yes
   12  KOGPAK      Numeric       11    4  Yes
   13  KOGUS       Numeric       14    4  Yes
   14  HIND        Numeric       17    5  Yes
   15  MYYGIKOOD   Character      4       Yes
   16  YHIK        Character      6       Yes
   17  NIMETUS     Character     50       Yes
   18  HINNAK      Character      5       Yes
   19  TKOGUS      Numeric       20    6  Yes
   20  UKOGUS      Numeric       20    6  Yes
   21  KUSTPARTII  Character     15       Yes
   22  KAUBASUMMA  Numeric       17    5  Yes
   23  KULUOBJEKT  Character     10       Yes
   24  FIFOEXPENS  Logical        1       Yes
   25  KULUM       Numeric       17    5  Yes
   26  SKAUBASUMM  Numeric       17    5  Yes
   27  ST          Numeric        3       Yes
   28  VM          Numeric        3       Yes
   29  VKAUBASUMM  Numeric       20    6  Yes
   30  YKSUS       Character     10       Yes
   31  SIHTYKSUS   Character     10       Yes
** Total **                      378




Re: [GENERAL] passing multiple records to json_populate_recordset

2013-09-23 Thread Merlin Moncure
On Mon, Sep 23, 2013 at 8:33 AM, Merlin Moncure  wrote:
> On Mon, Sep 23, 2013 at 8:25 AM, Raphael Bauduin  wrote:
>>
>>
>> On Mon, Sep 23, 2013 at 11:31 AM, Raphael Bauduin  wrote:
>>>
>>> Hi,
>>>
>>> I'm experimenting with the json data type and functions in 9.3.
>>> I'm storing json objects of this form in the event column:
>>> {type: 'event_type', products: [ {id: 45, 'type': 3, 'gender': 'F', ...}, ...,
>>> {} ] }
>>>
>>> I can issue this query, but notice the limit 1:
>>>
>>> select * from json_populate_recordset(null::product, (select
>>> event->'products' from events limit 1));
>>>
>>> The result is (edited for conciseness):
>>>
>>>
>>>  type | gender |  id
>>> ------+--------+-------
>>>41 | F  | 40003
>>>41 | F  | 60043
>>>41 | F  | 27363
>>>41 | F  | 27373
>>>41 | F  | 28563
>>>
>>> But all these products come from one event.
>>> Is there a way to return the products from several events?, eg with a
>>> limit 2 rather than limit 1?
>>>
>>
>> Some more info, after searching further.
>>
>> This query
>>
>>   select
>> json_populate_record(null::product,row_to_json(json_populate_recordset(null::product,event->'products')))
>> from (select * from events limit 2) as foo ;
>>
>> returns what I want but not in the format I want (why?):
>
> you need to use LATERAL.
>
> here's a summary of the technique (see lateral version -- you don't
> need to use recursion).
>
> http://www.reddit.com/r/PostgreSQL/comments/1hwu8i/postgresql_recursive_common_table_expression_and/caywoxw

follow up:

Raphael hit me up off list for more detail so I thought I'd post the query here:

select p.*
from (select event from events limit 10) src
CROSS JOIN LATERAL json_populate_recordset(null::product, src.event->'products') p;

merlin




Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread bricklen
On Mon, Sep 23, 2013 at 8:06 AM, Andrus  wrote:

>
> >Could you please post EXPLAIN for that query?
>

Could you also post the results of the following query?

SELECT name, current_setting(name), source
FROM pg_settings
WHERE source NOT IN ('default', 'override');

Also, what is the total memory in the server?


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus
I fixed the issue by creating indexes for the temporary tables before running the query:

create index on tempalgsemu(toode);
create index on temphetkes(toode);

SELECT * FROM toode o
WHERE EXISTS (SELECT toode FROM tempkaive i WHERE o.toode = i.toode)
   OR EXISTS (SELECT toode FROM tempalgsemu i WHERE o.toode = i.toode);

Is this the best fix?

Andrus.



[GENERAL] streaming replication not working

2013-09-23 Thread John DeSoi
I have a 9.2.4 (CentOS 6.5) standby that has been working with no problems 
using log shipping. I wanted to add streaming replication which I thought would 
be as simple as adding primary_conninfo to recovery.conf and restarting the 
standby. But on restart there is no message or error about connecting to the 
primary for replication. pg_stat_replication is empty on the primary and I 
don't see any errors on the primary either. 

Here is what I have on the standby:

postgresql.conf

hot_standby = on
max_wal_senders = 2
wal_level = hot_standby


recovery.conf

standby_mode = 'on'
trigger_file = '/pgsql/9.2/data/failover.trigger'
primary_conninfo = 'host=localhost port=21333 user=postgres'
restore_command = 'pg_standby -t /pgsql/9.2/data/failover.trigger 
/shared/pgbackup %f %p %r'
archive_cleanup_command = 'pg_archivecleanup /shared/pgbackup %r'


I have a ssh tunnel setup on localhost and have verified the replication user 
can connect to the primary.

Am I missing something obvious? Do I have to back up the primary again to make 
this change? 

Thanks,

John DeSoi, Ph.D.





Re: [GENERAL] Tree structure

2013-09-23 Thread Merlin Moncure
On Fri, Sep 20, 2013 at 6:29 AM, Kaare Rasmussen  wrote:
> Hi
>
> I'm trying to determine the best way to represent a simple tree structure
> (like a file/dir tree or a uri path). I guess that's done a zillion times
> before; I just don't seem to be able to find the right solution. I have one
> special request, that I'd like to find all 'shorter' paths, i.e. given
> 'a/b/c/d' it'll find
>
> a
> a/b
> a/b/c
> - but not
> b
> a/c
> b/a
>
> There are a number of options to test.
>
> 1. As strings
>   There's no dedicated function (@>)
>   WHERE clause should read something like 'a/b/c/d' LIKE column || '%',
> which is both ugly and (I guess) non indexable
>   Perhaps regex indexes would work, but not efficient and not optimal
>
> 2. As array of strings
>   My favorite, would be elegant. A GIN index on the whole array would make
> for fast performance
>   Alas @> treats the arrays as a set, not an array
>   WHERE col @> 'a/b/c/d' would find all of the above rows, including a, a/c,
> b/a, etc.
>
> 3. ltree contrib
>   The only option that actually works and uses index
>   @> works as I want it to.
>   But the single segments can only be alphanumeric and underscore
>   ltree only supports GIST
>   there's a length limit.
>
> The last option is the one I'm using right now, but I hope that somebody can
> point out where I'm wrong with regards to the other options, or tell me that
> there is a better solution somewhere I didn't look.

All materialized-path approaches will face index limits, so be warned.
For my money, I'd be using ltree if you need complex indexed searching,
or some TEXT+btree for simple searching (LIKE '/foo/bar%') on the
'full path' side.  Matview-type approaches are faster to search but
slower to update than a recursive structure where you use
parent/child relationships + a recursive CTE to query.
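
To illustrate the ltree side, a minimal sketch (hypothetical table; note
that ltree uses '.' as the separator, so 'a/b/c/d' becomes 'a.b.c.d'):

CREATE EXTENSION ltree;

CREATE TABLE paths (path ltree PRIMARY KEY);
CREATE INDEX paths_gist_idx ON paths USING GIST (path);

INSERT INTO paths VALUES ('a'), ('a.b'), ('a.b.c'), ('b'), ('b.a');

-- ancestors of a.b.c.d: returns a, a.b and a.b.c, but not b or b.a
SELECT path FROM paths WHERE path @> 'a.b.c.d'::ltree;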

merlin




Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread bricklen
On Mon, Sep 23, 2013 at 8:33 AM, Andrus  wrote:

> Hi!
>
> > Could you also post the results of the following query?
> > SELECT name, current_setting(name), source
> > FROM pg_settings
> > WHERE source NOT IN ('default', 'override');
>
> In real server where problem is:
>
> 21  shared_buffers 2400MB configuration file
>


What are effective_cache_size and work_mem set to? The defaults? They are
good candidates to be increased. effective_cache_size could be set to (for
example) 10GB, depending on how much memory gets consumed by the other
application(s) running on that server.

The EXPLAIN ANALYZE plan of your query will show if work_mem needs to be
increased, as there will be a line saying something like "External merge:
disk sort" (or something like that, can't recall the exact message off the
top of my head).



>
>  In development computer from where explain was posted and problem with
> copy of database also occurs:
>
> "shared_buffers";"128MB";"configuration file"
>


You likely want to bump that up closer to 1GB.



>
> > Also, what is the total memory in the server?
>


>  In devel computer where tests are performed, 4 GB
> Real server  has 16 GB RAM
> Real server  is for  Postgres for this database and ASP.NET MVC3
> application which uses this same database from postgres.
>


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus
Hi,
>>21  shared_buffers 2400MB configuration file
>What are effective_cache_size and work_mem set to? The defaults? 

Yes. 

> They are good candidates to be increased. effective_cache_size could be set to
> (for example) 10GB, depending on how much memory gets consumed by the other
> application(s) running on that server.

There are 10 human users and one web service user. The Windows Task Manager
cached value shows 10 GB in the evening when nobody is working on the server.

I changed those to

effective_cache_size= 10GB
work_mem = 400MB

Hope that this is OK.

> The EXPLAIN ANALYZE plan of your query will show if work_mem needs to be
> increased, as there will be a line saying something like "External merge:
> disk sort" (or something like that, can't recall the exact message off the
> top of my head).


After adding the indexes, the log contains

LOG:  duration: 11045.000 ms  statement: create index on 
tempkaive(toode);create index on tempalgsemu(toode);SELECT * FROM toode o WHERE 
 exists (SELECT toode FROM tempkaive i where o.toode=i.toode) OR EXISTS (SELECT 
toode FROM tempalgsemu i WHERE o.toode = i.toode)


  In development computer from where explain was posted and problem with copy 
of database also occurs:

  "shared_buffers";"128MB";"configuration file"



> You likely want to bump that up closer to 1GB.

I changed it. Thank you very much.

Andrus.

Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread bricklen
On Mon, Sep 23, 2013 at 9:12 AM, Andrus  wrote:

>
> work_mem = 400MB
>
> Hope that this is OK.
>

For cluster-wide setting you will probably want to drop that significantly
-- start lower, as in somewhere around 10MB and work up from there as
necessary. For the queries you are finding slow (the reason for these
emails) you can set work_mem specifically for the session.
Eg.
set work_mem to '400MB';

reset work_mem;


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus
Hi!
>Could you also post the results of the following query?
>SELECT name, current_setting(name), source
>FROM pg_settings
>WHERE source NOT IN ('default', 'override');

In real server where problem is:

 1  DateStyle                   ISO, DMY               session
 2  default_text_search_config  pg_catalog.simple      configuration file
 3  extra_float_digits          2                      session
 4  lc_messages                 Estonian_Estonia.1257  configuration file
 5  lc_monetary                 Estonian_Estonia.1257  configuration file
 6  lc_numeric                  Estonian_Estonia.1257  configuration file
 7  lc_time                     Estonian_Estonia.1257  configuration file
 8  listen_addresses            *                      configuration file
 9  log_destination             stderr                 configuration file
10  log_line_prefix             %t %u %d               configuration file
11  log_lock_waits              on                     configuration file
12  log_min_duration_statement  10s                    configuration file
13  log_min_error_statement     warning                configuration file
14  log_temp_files              2000kB                 configuration file
15  log_timezone                Europe/Helsinki        configuration file
16  logging_collector           on                     configuration file
17  max_connections             100                    configuration file
18  max_stack_depth             2MB                    environment variable
19  port                        5432                   configuration file
20  search_path                 firma1, public         session
21  shared_buffers              2400MB                 configuration file
22  TimeZone                    Europe/Helsinki        configuration file

In development computer from where explain was posted and problem with copy of 
database also occurs:
 
"application_name";"pgAdmin III - Query Tool";"client"
"bytea_output";"escape";"session"
"client_encoding";"UNICODE";"session"
"client_min_messages";"notice";"session"
"DateStyle";"ISO, DMY";"session"
"default_text_search_config";"pg_catalog.simple";"configuration file"
"lc_messages";"Estonian_Estonia.1257";"configuration file"
"lc_monetary";"Estonian_Estonia.1257";"configuration file"
"lc_numeric";"Estonian_Estonia.1257";"configuration file"
"lc_time";"Estonian_Estonia.1257";"configuration file"
"listen_addresses";"*";"configuration file"
"log_destination";"stderr";"configuration file"
"log_line_prefix";"%t ";"configuration file"
"log_timezone";"Europe/Helsinki";"configuration file"
"logging_collector";"on";"configuration file"
"max_connections";"100";"configuration file"
"max_stack_depth";"2MB";"environment variable"
"port";"5432";"configuration file"
"shared_buffers";"128MB";"configuration file"
"TimeZone";"Europe/Helsinki";"configuration file"

 > Also, what is the total memory in the server?

In devel computer where tests are performed, 4 GB
Real server  has 16 GB RAM
Real server  is for  Postgres for this database and ASP.NET MVC3 application 
which uses this same database from postgres.
Can settings on the real server be changed to increase performance?


Andrus.


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus
Hi,
> For cluster-wide setting you will probably want to drop that significantly --
> start lower, as in somewhere around 10MB and work up from there as necessary.
> For the queries you are finding slow (the reason for these emails) you can
> set work_mem specifically for the session.
>
> Eg.
>
> set work_mem to '400MB';
>
> reset work_mem;

I changed it to 10MB.
The problematic commands now run in less than 10 seconds.

http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

says that the max reasonable value on Windows for shared_buffers is 512MB.

Is my setting shared_buffers = 2400MB reasonable on Windows?

Andrus.

Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread bricklen
On Mon, Sep 23, 2013 at 9:50 AM, Andrus  wrote:

>  http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
> wrote that max reasonable value in Windows for shared_buffers is 512MB
> Is my setting shared_buffers= 2400MB reasonable in Windows ?
>

Someone else will hopefully answer that question, I have never run
Postgresql on Windows.


Re: [GENERAL] streaming replication not working

2013-09-23 Thread Ray Stell

On Sep 23, 2013, at 11:20 AM, John DeSoi  wrote:
> 
> Am I missing something obvious? Do I have to back up the primary again to 
> make this change? 

you didn't mention a pg_hba.conf rule.  did you add one for the replication 
user?



[GENERAL] How to create recurrence schedule event?

2013-09-23 Thread Quang Thoi
Hi,

I am trying to port Sybase code to PostgreSQL. I have looked into the
PostgreSQL documentation
but couldn't find good information on how to create an equivalent recurring
event
similar to the Sybase code below.

Can someone please help?

Sybase code:

create event PURGE_ORDER schedule
start time '00:00:00' every 30 minutes
   on ( 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday','Saturday', 
'Sunday' )

handler begin

 declare @value   varchar(128);
 call cleanup_order();
end;

Thanks,
QT


Re: [GENERAL] How to create recurrence schedule event?

2013-09-23 Thread Raymond O'Donnell
On 23/09/2013 19:53, Quang Thoi wrote:
> Hi,
> 
>  
> 
> I am trying to port sybase codes to posgresql. I have looked into the
> postgresql documentation
> 
> but couldn't find good information on how to  create an equivalent
> recurrence event
> 
> which similar to the sybase codes below.

There isn't a built-in scheduler in PostgreSQL - the usual advice is to
use cron to execute a query via the psql client, or to install and use
pgAgent.
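
For example, a minimal sketch of the cron route, assuming the Sybase handler
body is ported into a plpgsql function (names and paths here are hypothetical):

-- SQL side: wrap the handler body in a function
CREATE OR REPLACE FUNCTION purge_order() RETURNS void AS $$
BEGIN
    PERFORM cleanup_order();
END;
$$ LANGUAGE plpgsql;

-- crontab entry on the server, every 30 minutes, every day:
-- */30 * * * * psql -d mydb -c 'SELECT purge_order();'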

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




Re: [GENERAL] reading cvs logs with pgadmin queries

2013-09-23 Thread Dave Cramer
OK,

I have a little more information.

Yes, in isolation I can import these lines, however something happens after
69000 lines. These lines cause an error.



Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca


On Fri, Sep 20, 2013 at 9:59 AM, Adrian Klaver wrote:

> On 09/19/2013 06:04 PM, Dave Cramer wrote:
>
>> Adrian,
>>
>
>
>> 2) The exact error message is  ERROR:  extra data after last expected
>> column
>>
>>
>> considering ~39000 lines go in before this line I am fairly certain it is
>> the line.
>>
>
> New day, new start. I am not sure now that the line you showed is the
> problem. I isolated that line in a file and did a COPY into postgres log.
> Note, I changed the time zone because my installation did not recognize WST:
>
> test=# COPY postgres_log FROM '/home/postgres/pg_csv_test.csv' WITH csv;
> COPY 1
> test=# \x
> Expanded display is on.
> test=# SELECT * from postgres_log ;
> -[ RECORD 1 ]----------+----------------------------------------------
> log_time   | 2013-09-12 10:33:19.145-07
> user_name  | user
> database_name  | dbname
> process_id | 14581
> connection_from| 192.168.1.22:58840
> session_id | 523126d3.38f5
> session_line_num   | 1
> command_tag| SELECT
> session_start_time | 2013-09-12 10:28:35-07
> virtual_transaction_id | 6/503023
> transaction_id | 0
> error_severity | ERROR
> sql_state_code | 42P01
> message| relation "dds_stores" does not exist
>
> detail |
> hint   |
> internal_query |
> internal_query_pos |
> context|
> query  | WITH RECURSIVE recursive_stores(id, name) AS (
>
>| SELECT id, name FROM customer_store WHERE
> id IN (1, 280, 864, 2376)
>| UNION ALL
>| SELECT ss.id, ss.name FROM
> recursive_stores sss, customer_store
>| ss WHERE sss.id = ss.parent_store_id
>| ) SELECT
>| to_char(i.timestamp, 'Mon-YY') AS "Month
> Name",
>| s.name AS "Target",
>| COUNT(DISTINCT i.id) / (SELECT COUNT(
> dds_stores.id) FROM
>| dds_stores) AS "Ticket Count"
>| FROM
>| customer_store s
>| INNER JOIN printdata_workstation w ON
> s.id = w.store_id AND s.id
>| IN (SELECT recursive_stores.id FROM
> recursive_stores)
>| INNER JOIN printdata_report r ON w.id =
> r.workstation_id AND
>| r.package_id IS NOT NULL
>| INNER JOIN printdata_page p ON r.id =
> p.report_id
>| INNER JOIN printdata_item i ON p.id =
> i.page_id
>| WHERE
>| r.timestamp >= '2012-09-01' AND
> r.timestamp <= '2013-08-31'
>| GROUP BY
>| "Month Name",
>| "Target"
> query_pos  | 367
> location   |
> application_name   | pgAdmin III - Query Tool
>
>
>
>>
>>
>> Dave
>>
>
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] reading cvs logs with pgadmin queries

2013-09-23 Thread Alvaro Herrera
Dave Cramer escribió:
> OK,
> 
> I have a little more information.
> 
> Yes, in isolation I can import these lines, however something happens after
> 69000 lines. These lines cause an error.

Stray quotes in previous lines?  Perhaps check whether line 65999 was
imported correctly.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] streaming replication not working

2013-09-23 Thread John DeSoi

On Sep 23, 2013, at 1:00 PM, Ray Stell  wrote:

>> 
>> Am I missing something obvious? Do I have to back up the primary again to 
>> make this change? 
> 
> you didn't mention a pg_hba.conf rule.  did you add one for the replication 
> user?


You mean on the primary, right? Yes, I have one there. But even if I did not, I 
would expect to see a connection error in the log on the standby. No error or 
any indication the streaming replication process is running on the standby. 

John DeSoi, Ph.D.





Re: [GENERAL] reading cvs logs with pgadmin queries

2013-09-23 Thread Dave Cramer
I tried copying 500 lines above it and it still works :(

Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca


On Mon, Sep 23, 2013 at 4:15 PM, Alvaro Herrera wrote:

> Dave Cramer escribió:
> > OK,
> >
> > I have a little more information.
> >
> > Yes, in isolation I can import these lines, however something happens
> after
> > 69000 lines. These lines cause an error.
>
> Stray quotes in previous lines?  Perhaps check whether line 65999 was
> imported correctly.
>
> --
> Álvaro Herrerahttp://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [GENERAL] Building with MinGW issue

2013-09-23 Thread Muhammad Bashir Al-Noimi

On 09/23/2013 04:05 AM, Adrian Klaver wrote:

If it where me I would try:

 --with-includes=/c/OpenSSL-Win32/include
 --with-libraries=/c/OpenSSL-Win32/lib/MinGW 
Thanks a lot Adrian; I successfully configured PG with MinGW & OpenSSL,
but I faced a new obstacle with the make command:


In file included from ../../src/include/c.h:851:0,
 from ../../src/include/postgres_fe.h:25,
 from dirmod.c:21:
../../src/include/port.h:333:12: note: expected 'struct _stati64 *' but 
argument

 is of type 'struct stat *'
 extern int pgwin32_safestat(const char *path, struct stat * buf);
^
dirmod.c: At top level:
dirmod.c:630:1: error: conflicting types for 'pgwin32_safestat'
 pgwin32_safestat(const char *path, struct stat * buf)
 ^
In file included from ../../src/include/c.h:851:0,
 from ../../src/include/postgres_fe.h:25,
 from dirmod.c:21:
../../src/include/port.h:333:12: note: previous declaration of 
'pgwin32_safestat

' was here
 extern int pgwin32_safestat(const char *path, struct stat * buf);
^
make[2]: *** [dirmod.o] Error 1
make[2]: Leaving directory `/home/mbnoimi/postgresql-9.1.9/src/port'
make[1]: *** [all-port-recurse] Error 2
make[1]: Leaving directory `/home/mbnoimi/postgresql-9.1.9/src'
make: *** [all-src-recurse] Error 2

--
Best Regards,
Muhammad Bashir Al-Noimi







Re: [GENERAL] reading cvs logs with pgadmin queries

2013-09-23 Thread Adrian Klaver

On 09/23/2013 12:46 PM, Dave Cramer wrote:

OK,

I have a little more information.

Yes, in isolation I can import these lines, however something happens
after 69000 lines. These lines cause an error.


Is it the same error?

> The exact error message is  ERROR:  extra data after last expected column

If so I would say the problem is in the transition between line 69000
and 69001.



I wonder if you are getting bit by some variation of the below where 
partial lines are getting through in spite of the PK:


http://www.postgresql.org/docs/9.3/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG

"The table definition above includes a primary key specification. This 
is useful to protect against accidentally importing the same information 
twice. The COPY command commits all of the data it imports at one time, 
so any error will cause the entire import to fail. If you import a 
partial log file and later import the file again when it is complete, 
the primary key violation will cause the import to fail. Wait until the 
log is complete and closed before importing. This procedure will also 
protect against accidentally importing a partial line that hasn't been 
completely written, which would also cause COPY to fail."









Dave Cramer



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Building with MinGW issue

2013-09-23 Thread Adrian Klaver

On 09/23/2013 02:20 PM, Muhammad Bashir Al-Noimi wrote:

On 09/23/2013 04:05 AM, Adrian Klaver wrote:

If it where me I would try:

 --with-includes=/c/OpenSSL-Win32/include
 --with-libraries=/c/OpenSSL-Win32/lib/MinGW

Thanks a lot Adrian; I successfully configured PG with MinGW & OpenSSL
but I faced a new obstacle with make command:

In file included from ../../src/include/c.h:851:0,
  from ../../src/include/postgres_fe.h:25,
  from dirmod.c:21:
../../src/include/port.h:333:12: note: expected 'struct _stati64 *' but
argument
  is of type 'struct stat *'
  extern int pgwin32_safestat(const char *path, struct stat * buf);


Best guess is that you have a 32/64 bit mismatch problem. This is 
explained here.


http://www.postgresql.org/docs/9.3/interactive/install-windows-full.html#AEN26293


--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Building with MinGW issue

2013-09-23 Thread Muhammad Bashir Al-Noimi

On 09/24/2013 12:35 AM, Adrian Klaver wrote:
> Best guess is that you have a 32/64 bit mismatch problem. This is
> explained here.
>
> http://www.postgresql.org/docs/9.3/interactive/install-windows-full.html#AEN26293

But I build under Windows 7 32bit!

-- 
Best Regards,
Muhammad Bashir Al-Noimi




Re: [GENERAL] Building with MinGW issue

2013-09-23 Thread Adrian Klaver

On 09/23/2013 03:50 PM, Muhammad Bashir Al-Noimi wrote:

On 09/24/2013 12:35 AM, Adrian Klaver wrote:

Best guess is that you have a 32/64 bit mismatch problem. This is
explained here.

http://www.postgresql.org/docs/9.3/interactive/install-windows-full.html#AEN26293



But I build under Windows 7 32bit!


Reported here:

http://www.postgresql.org/message-id/ovpbg9.x5ovpbg9.pq9n.w333.g...@asuka.myrkraverk.com


Not sure what the fix is, short of doing the manual intervention shown 
in the first message.


You might want to post as bug here:

http://www.postgresql.org/support/submitbug/

or contact MinGW developers.


--
Best Regards,
Muhammad Bashir Al-Noimi




--
Adrian Klaver
adrian.kla...@gmail.com




[GENERAL] Howto import regularly some CSV files with variing names?

2013-09-23 Thread Andreas

Hi,

I need to import some log-files of an application running on a different
host.
This app can't talk to the db but only creates a daily dump in a remote
directory that is mountable via samba by a Linux host that runs the
db-server.


The import would be easy if the files had a constant name but the app
creates csv files with names like "ExportYYYYMMDD".


I could have cron use "find" to search for all the files in the
mounted directory.
But how can I pipe a SQL script into the db-server that takes the
filenames from "find" as a parameter?



Because of the somewhat limited intelligence of the application that
creates the logs I have to read the contents of the log into a temporary
table and insert from there only those lines that aren't already in the
actual log-table within the db.

I've got it all covered but the filenames that change from day to day. :(

I could copy each of those files into a temp directory and import from
there so that the sql script wouldn't have to deal with the date within the
file-name, but I'd rather store the names in a table so that the
script could skip all those files that already got imported previously.

So, how would I get the filenames into the sql-script?






Re: [GENERAL] Howto import regularly some CSV files with variing names?

2013-09-23 Thread Adrian Klaver

On 09/23/2013 05:19 PM, Andreas wrote:

Hi,

I need to import some log-files of an application running on a different
host.
This app can't talk to the db but only creates a daily dump in a remote
directory that is mountable via samba by a Linux host that runs the
db-server.

The import would be easy if the files had a constant name but the app
creates csv files with names like "ExportYYYYMMDD".

I could have cron use "find" to search for all the files in the
mounted directory.
But how can I pipe a SQL script into the db-server that takes the
filenames from "find" as a parameter?

Because of the somewhat limited intelligence of the application that
creates the logs I have to read the contents of the log into a temporary
table and insert from there only those lines that aren't already in the
actual log-table within the db.
I've got it all covered but the filenames that change from day to day. :(

I could copy each of those files into a temp directory and import from
there so that the sql script wouldn't have to deal with the date within the
file-name, but I'd rather store the names in a table so that the
script could skip all those files that already got imported previously.

So, how would I get the filenames into the sql-script?


Do man on find and look for -exec.









--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Building with MinGW issue

2013-09-23 Thread Muhammad Bashir Al-Noimi

On 09/24/2013 01:29 AM, Adrian Klaver wrote:

Reported here:

http://www.postgresql.org/message-id/ovpbg9.x5ovpbg9.pq9n.w333.g...@asuka.myrkraverk.com 




Not sure what the fix is, short of doing the manual intervention shown 
in the first message.
I don't think this thread is related here because I'm using the MinGW 32-bit
version while he used the 64-bit version.




You might want to post as bug here:

http://www.postgresql.org/support/submitbug/

or contact MinGW developers. 
I think it's better to report it to the PostgreSQL guys, but until then, can
anyone help me here?

PostgreSQL cannot be built by MinGW under Windows 7 32-bit!

--
Best Regards,
Muhammad Bashir Al-Noimi





Re: [GENERAL] Howto import regularly some CSV files with variing names?

2013-09-23 Thread Andreas

Am 24.09.2013 02:25, schrieb Adrian Klaver:

On 09/23/2013 05:19 PM, Andreas wrote:


I need to import some log-files of an application [...]
The import would be easy if the files had a constant name but the app
creates csv files with names like "ExportYYYYMMDD".


So   how would I get the filenames into the sql-script?


Do man on find and look for -exec.



I could find the files and exec a shell script, but how can I have a SQL
script take the found filenames as a parameter?

The SQL script needs to create a temp table,
then COPY the file with the filename it got as a parameter into the temp
table,
then insert from there into the log-table.

How would I get the filenames into the SQL script?




Re: [GENERAL] How to create recurrence schedule event?

2013-09-23 Thread Michael Paquier
On Tue, Sep 24, 2013 at 3:55 AM, Raymond O'Donnell  wrote:
> There isn't a built-in scheduler in PostgreSQL - the usual advice is to
> use cron to execute a query via the psql client, or to install and use
> pgAgent.
Or with a 9.3 server to use a background worker that could do the job
for you as a cron would.
-- 
Michael




Re: [GENERAL] Building with MinGW issue

2013-09-23 Thread Adrian Klaver

On 09/23/2013 05:29 PM, Muhammad Bashir Al-Noimi wrote:

On 09/24/2013 01:29 AM, Adrian Klaver wrote:

Reported here:

http://www.postgresql.org/message-id/ovpbg9.x5ovpbg9.pq9n.w333.g...@asuka.myrkraverk.com



Not sure what the fix is, short of doing the manual intervention shown
in the first message.

I don't think this thread is related here because I'm using MinGW 32bit
while he used 64bit version.


Well, the error message is exactly the same, so my guess is that the cause
is the same.






You might want to post as bug here:

http://www.postgresql.org/support/submitbug/

or contact MinGW developers.

I think it's better to report it for Postgresql guys, but until that may
any one help me here.

Postgresql unable to built by MinGW under Windows7 32bit!


Sort of late in the thread to ask this, but any reason you are not using 
the precompiled binaries?







--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Howto import regularly some CSV files with variing names?

2013-09-23 Thread Adrian Klaver

On 09/23/2013 05:47 PM, Andreas wrote:

Am 24.09.2013 02:25, schrieb Adrian Klaver:

On 09/23/2013 05:19 PM, Andreas wrote:


I need to import some log files of an application [...]
The import would be easy if the files had a constant name, but the app
creates CSV files with names like "ExportMMDD".


So how would I get the filenames into the SQL script?


Run man find and look for -exec.



I could find the files and exec a shell script, but how can I have a SQL
script take the found filenames as a parameter?

The SQL script needs to create a temp table, then COPY the file whose
name it got as a parameter into the temp table, then insert from there
into the log table.

How would I get the filenames into the SQL script?


Just a thought:

1) Create a function that encapsulates the above logic where the 
argument is the file name.


2) Create a shell script. Use substitution to take the filename passed 
to the shell script, build the command string, and in the script do


psql -d db_name -U user_name -c "SELECT the_function('filename')"

2a) Use alternate language to do 2).
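
A minimal sketch of 1) and 2), assuming the permanent table is called 
log_table and the CSV columns match it (import_csv and log_table are 
placeholder names, and note that COPY FROM a server-side file requires 
superuser privileges):

CREATE OR REPLACE FUNCTION import_csv(p_file text) RETURNS void AS $$
BEGIN
    -- temp table with the same columns as the target, dropped at commit
    CREATE TEMP TABLE import_tmp (LIKE log_table) ON COMMIT DROP;
    -- COPY cannot take its filename from a variable, so build the
    -- statement dynamically; format('%L', ...) quotes the literal safely
    EXECUTE format('COPY import_tmp FROM %L WITH (FORMAT csv, HEADER)',
                   p_file);
    INSERT INTO log_table SELECT * FROM import_tmp;
END;
$$ LANGUAGE plpgsql;

With GNU find, the shell side can then be a one-liner:

find /path/to/exports -name 'Export*' -exec \
    psql -d db_name -U user_name -c "SELECT import_csv('{}')" \;
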
--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create recurrence schedule event?

2013-09-23 Thread hxreno1
pgAgent is a job scheduler for PostgreSQL which may be managed using 
pgAdmin. Prior to pgAdmin v1.9, pgAgent shipped as part of pgAdmin. From 
pgAdmin v1.9 onwards, pgAgent is shipped as a separate application.


On 09/24/2013 02:53 AM, Quang Thoi wrote:


Hi,

I am trying to port Sybase code to PostgreSQL. I have looked into the 
PostgreSQL documentation but couldn't find good information on how to 
create an equivalent recurring event similar to the Sybase code below.

Can someone please help?

Sybase code:

create event PURGE_ORDER schedule
start time '00:00:00' every 30 minutes
    on ( 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday',
         'Saturday', 'Sunday' )
handler begin
    declare @value varchar(128);
    call cleanup_order();
end;

Thanks,

QT
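
A rough equivalent of that schedule with cron, per the usual advice (a 
sketch only; mydb and dbuser are placeholders, psql must be able to 
authenticate non-interactively, and it assumes cleanup_order() has 
already been ported as a PostgreSQL function):

# crontab entry: run every 30 minutes, every day of the week
*/30 * * * * psql -d mydb -U dbuser -c 'SELECT cleanup_order();'
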





Re: [GENERAL] streaming replication not working

2013-09-23 Thread Ray Stell

On Sep 23, 2013, at 4:47 PM, John DeSoi wrote:
> 
> You mean on the primary, right?

right


> Yes, I have one there. But even if I did not, I would expect to see a 
> connection error in the log on the standby. No error or any indication the 
> streaming replication process is running on the standby. 


You're right, I was firing from the hip. Sure enough, it does toss an error:
2013-09-23 18:00:01 EDT,0,authentication FATAL:  28000: no pg_hba.conf entry 
for replication connection from host "xxx", user "repuser", SSL off

I'd guess a firewall issue? What happens with "telnet primary_host port"?
Maybe use tcpdump to see what's happening with the traffic.

Re: [GENERAL] streaming replication not working

2013-09-23 Thread Karl Denninger
On 9/23/2013 9:30 PM, Ray Stell wrote:
>
> On Sep 23, 2013, at 4:47 PM, John DeSoi wrote:
>>
>> You mean on the primary, right?
>
> right
>
>
>> Yes, I have one there. But even if I did not, I would expect to see a
>> connection error in the log on the standby. No error or any
>> indication the streaming replication process is running on the standby.
>
> you're right, I was firing from the hip.  sure enough, it does toss an
> error:
> 2013-09-23 18:00:01 EDT,0,authentication FATAL:  28000: no pg_hba.conf
> entry for replication connection from host "xxx", user "repuser", SSL off
>
> I'd guess a firewall issue? What happens with "telnet primary_host
> port"? Maybe use tcpdump to see what's happening with the traffic.

No, there is a missing line in pg_hba.conf that should look something
like this:

host    replication    repuser    xxx    trust

(where "xxx" is the hostname)

See the pg_hba.conf file for more examples.  Note that "replication" is
a special database tag, and a replicating connection must have one of
these entries defined, as "all" does not match it.

You can use "host", "hostssl" or "hostnossl". "trust" means that no
password is demanded, and for obvious reasons it should NOT be used for
anything other than a local connection that can be trusted implicitly. I
prefer not to use that method for anything but local socket connections,
and then only on a machine where no untrusted users sign in (e.g. only
admins are permitted general access). If you are connecting over an
insecure channel, or untrusted users are on the machine, then consider
SSL to encrypt the traffic, and either use md5 for the password or use a
certificate.
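
For example, a tighter entry for a standby at a known address could look 
like this (the address here is a placeholder):

hostssl    replication    repuser    192.168.0.10/32    md5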

You can reload the file without restarting postgres with "pg_ctl -D
data-directory reload" (where "data-directory" is the data directory
that contains the pg_hba.conf file and the rest of the cluster's data).
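
The same reload can also be triggered from a superuser SQL session:

SELECT pg_reload_conf();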

-- 
Karl Denninger
k...@denninger.net
/Cuda Systems LLC/




[GENERAL] SP to calc shipments vs receipts

2013-09-23 Thread Bret Stern
I have an inventory transaction table with several fields,
specifically:
part_no
trans_type
trans_qty

part_no | trans_type | trans_qty
abc     | REC        |      5000   (receipt)
abc     | REC        |       400   (receipt)
abc     | SHP        |      1000   (shipment)
abc     | ALL        |      1000   (allocated)

Looking for the best way to show the following totals with SQL:

on_hand | allocated | available
   3400 |      1000 |      4400

Thinking of writing a stored procedure that has the aggregate queries,
and returns the values defined above for this example.

Is this a recommended way?

B Stern







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SP to calc shipments vs receipts

2013-09-23 Thread John R Pierce

On 9/23/2013 10:13 PM, Bret Stern wrote:

I have an inventory transaction table with several fields,
specifically:
part_no
trans_type
trans_qty

part_no | trans_type | trans_qty
abc     | REC        |      5000   (receipt)
abc     | REC        |       400   (receipt)
abc     | SHP        |      1000   (shipment)
abc     | ALL        |      1000   (allocated)

Looking for the best way to show the following totals with SQL:

on_hand | allocated | available
   3400 |      1000 |      4400


select part_no,
       sum(case when trans_type='REC' then trans_qty else 0 end) as "on_hand",
       sum(case when trans_type='ALL' then trans_qty else 0 end) as "allocated",
       sum(case when trans_type='SHP' then trans_qty else 0 end) as "shipped"
from inventory_transaction_table
group by part_no;


except, your example output doesn't correlate with your sample input 
according to any rules I can see.



--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SP to calc shipments vs receipts

2013-09-23 Thread DDT
Hello, is the output calculated by the following rules?

on_hand   = SUM(receipt) - SUM(shipment) - SUM(allocated)
available = SUM(receipt) - SUM(shipment)

The SQL can be:

sum(case when trans_type='REC' then trans_qty
         when trans_type IN ('SHP', 'ALL') then -trans_qty
         else 0 end) as on_hand
sum(case when trans_type='REC' then trans_qty
         when trans_type = 'SHP' then -trans_qty
         else 0 end) as available

But I'm curious: if something is allocated and then shipped, will you 
delete the allocation record?




On 9/23/2013 10:13 PM, Bret Stern wrote:
> I have an inventory transaction table with several fields,
> specifically:
> part_no
> trans_type
> trans_qty
>
> part_no | trans_type | trans_qty
> abc     | REC        |      5000   (receipt)
> abc     | REC        |       400   (receipt)
> abc     | SHP        |      1000   (shipment)
> abc     | ALL        |      1000   (allocated)
>
> Looking for the best way to show the following totals with SQL:
>
> on_hand | allocated | available
>    3400 |      1000 |      4400

select part_no,
       sum(case when trans_type='REC' then trans_qty else 0 end) as "on_hand",
       sum(case when trans_type='ALL' then trans_qty else 0 end) as "allocated",
       sum(case when trans_type='SHP' then trans_qty else 0 end) as "shipped"
from inventory_transaction_table
group by part_no;


except, your example output doesn't correlate with your sample input 
according to any rules I can see.


-- 
john r pierce  37N 122W
somewhere on the middle of the left coast



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] SP to calc shipments vs receipts

2013-09-23 Thread Chris Travers
First, regarding the stored procedure recommendation, it depends on what
you are trying to do.  The decision to go with a stored procedure vs a view
depends on how this fits into the rest of your application.

Here is what I would do for the SQL though:

WITH base_agg AS (
    select part_no,
           sum(case when trans_type='REC' then trans_qty else 0 end) as received,
           sum(case when trans_type='ALL' then trans_qty else 0 end) as allocated,
           sum(case when trans_type='SHP' then trans_qty else 0 end) as shipped
    from inventory_transaction_table
    group by part_no
)
SELECT part_no,
       received - allocated - shipped as on_hand,
       allocated,
       received - shipped as available
FROM base_agg;
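
As a quick sanity check against the sample data (a sketch; table and 
column names as in the original post):

CREATE TEMP TABLE inventory_transaction_table
    (part_no text, trans_type text, trans_qty int);
INSERT INTO inventory_transaction_table VALUES
    ('abc','REC',5000), ('abc','REC',400),
    ('abc','SHP',1000), ('abc','ALL',1000);

-- the query above then returns:
--  part_no | on_hand | allocated | available
-- ---------+---------+-----------+-----------
--  abc     |    3400 |      1000 |      4400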


On Mon, Sep 23, 2013 at 11:01 PM, John R Pierce  wrote:

> On 9/23/2013 10:13 PM, Bret Stern wrote:
>
>> I have an inventory transaction table with several fields,
>> specifically:
>> part_no
>> trans_type
>> trans_qty
>>
>> part_no | trans_type | trans_qty
>> abc     | REC        |      5000   (receipt)
>> abc     | REC        |       400   (receipt)
>> abc     | SHP        |      1000   (shipment)
>> abc     | ALL        |      1000   (allocated)
>>
>> Looking for the best way to show the following totals with SQL:
>>
>> on_hand | allocated | available
>>    3400 |      1000 |      4400
>>
>
> select part_no,
>        sum(case when trans_type='REC' then trans_qty else 0 end) as "on_hand",
>        sum(case when trans_type='ALL' then trans_qty else 0 end) as "allocated",
>        sum(case when trans_type='SHP' then trans_qty else 0 end) as "shipped"
> from inventory_transaction_table
> group by part_no;
>
>
> except, your example output doesn't correlate with your sample input
> according to any rules I can see.
>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more.shtml