Returning Vector of Pairs with a PostgreSQL C Extension Function

2018-08-25 Thread TalGloz
Hello,

I want to return a vector of pairs using a C extension. This is the simple
code I have:

extern "C" {
Datum pair(PG_FUNCTION_ARGS){

// For text aka. character varying parameter
text *t1 = PG_GETARG_TEXT_PP(0);
text *t2 = PG_GETARG_TEXT_PP(1);
std::string localT1 = text_to_cstring(t1);
std::string localT2 = text_to_cstring(t2);

/* Construct the return value of the C extension to PostgreSQL */
// Returns a text pointer of the cstrings cast to text
//PG_RETURN_TEXT_P(cstring_to_text_with_len(localT1.c_str(), localT1.size()));
//PG_RETURN_TEXT_P(cstring_to_text_with_len(encodedLocalT1.c_str(), encodedLocalT1.size()));
//PG_RETURN_TEXT_P(cstring_to_text_with_len(outputParam.c_str(), outputParam.size()));

// Return vector of pairs
std::vector<std::pair<std::string, std::string>> ret;
ret.emplace_back(encodedLocalT1, encodedLocalT2);
PG_RETURN_ARRAYTYPE_P(ret);

};
PG_FUNCTION_INFO_V1(pair);
}

But it doesn't work like that. Even using this doesn't work:
ArrayType *array;
std::vector<std::pair<std::string, std::string>> ret;
ret.emplace_back(encodedLocalT1, encodedLocalT2);
array = ret;
PG_RETURN_POINTER(array);

Is it possible to return a vector of pairs? If not, then how can I return 2
strings in an array?

Best regards,
Tal






Re: Returning Vector of Pairs with a PostgreSQL C Extension Function

2018-08-25 Thread Tom Lane
TalGloz writes:
> I want to return a vector of pairs using a C extension.

You're going to have to work a lot harder than this:

> // Return vector of pairs
> std::vector<std::pair<std::string, std::string>> ret;
> ret.emplace_back(encodedLocalT1, encodedLocalT2);
> PG_RETURN_ARRAYTYPE_P(ret);

I do not know what the internal representation of std::vector is in
your C++ library, but it seems highly unlikely that it exactly matches
what Postgres arrays look like.  Nor has Postgres ever heard of a
std::pair.  You'd need to either create a composite type of two text
columns and then build an array of those, or else decide to return a
two-dimensional array to represent this case.  In any case, you must
use Postgres functions to construct the return value; the C++ library
is completely useless for that.

regards, tom lane
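
For concreteness, here is a minimal sketch of the text[] route Tom describes,
assuming the two strings are already in localT1/localT2 as in the original
function, and assuming the usual server headers (utils/array.h,
utils/builtins.h, utils/lsyscache.h, catalog/pg_type.h) are included; this is
an illustration of "use Postgres functions to construct the return value",
not tested code:

Datum elements[2];
int16 typlen;
bool typbyval;
char typalign;
ArrayType *array;

// Build text datums from the C strings using Postgres functions
elements[0] = PointerGetDatum(cstring_to_text(localT1.c_str()));
elements[1] = PointerGetDatum(cstring_to_text(localT2.c_str()));

// Look up the element-type properties for text and build a one-dimensional text[]
get_typlenbyvalalign(TEXTOID, &typlen, &typbyval, &typalign);
array = construct_array(elements, 2, TEXTOID, typlen, typbyval, typalign);
PG_RETURN_ARRAYTYPE_P(array);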



Re: Returning Vector of Pairs with a PostgreSQL C Extension Function

2018-08-25 Thread TalGloz
OK, so I think I'm on the right track with this code; my goal now is to
return a text array (text[]) with 2 fields:

extern "C" { 
Datum text_array(PG_FUNCTION_ARGS){ 

// For text aka. character varying parameter 
text *t1 = PG_GETARG_TEXT_PP(0); 
text *t2 = PG_GETARG_TEXT_PP(1); 
std::string localT1 = text_to_cstring(t1); 
std::string localT2 = text_to_cstring(t2); 

ArrayType  *array;
Datum*      elements[2];
int16       typlen;
bool        typbyval;
char        typalign;

elements[0] = CStringGetDatum(localT1.c_str());
elements[1] = CStringGetDatum(localT2.c_str());

get_typlenbyvalalign(TEXTOID, &typlen, &typbyval, &typalign);
array = construct_array(elements, 2, TEXTOID, typlen, typbyval, typalign);

PG_RETURN_ARRAYTYPE_P(array); 

}; 
PG_FUNCTION_INFO_V1(text_array); 
}

But I'm getting this:

g++ -std=c++17 -fPIC -Wall -Werror -g -O0 -pthread
-I/usr/pgsql-10/include/server -I/usr/local/include
-I/usr/local/include/cppcodec -o seal_diff_cpp.o -c seal_diff_cpp.cpp
In file included from seal_diff_cpp.cpp:2:
seal_diff_cpp.cpp: In function ‘Datum seal_diff_cpp(FunctionCallInfo)’:
/usr/pgsql-10/include/server/postgres.h:562:29: error: invalid conversion
from ‘Datum’ {aka ‘long unsigned int’} to ‘Datum*’ {aka ‘long unsigned
int*’} [-fpermissive]
 #define PointerGetDatum(X) ((Datum) (X))
~^~~~
/usr/pgsql-10/include/server/postgres.h:584:28: note: in expansion of macro
‘PointerGetDatum’
 #define CStringGetDatum(X) PointerGetDatum(X)
^~~
seal_diff_cpp.cpp:168:23: note: in expansion of macro ‘CStringGetDatum’
 elements[0] = CStringGetDatum(localT1.c_str());
   ^~~
/usr/pgsql-10/include/server/postgres.h:562:29: error: invalid conversion
from ‘Datum’ {aka ‘long unsigned int’} to ‘Datum*’ {aka ‘long unsigned
int*’} [-fpermissive]
 #define PointerGetDatum(X) ((Datum) (X))
~^~~~
/usr/pgsql-10/include/server/postgres.h:584:28: note: in expansion of macro
‘PointerGetDatum’
 #define CStringGetDatum(X) PointerGetDatum(X)
^~~
seal_diff_cpp.cpp:169:16: note: in expansion of macro ‘CStringGetDatum’
  elements[1] = CStringGetDatum(localT2.c_str());
^~~
seal_diff_cpp.cpp:171:26: error: cannot convert ‘Datum**’ {aka ‘long
unsigned int**’} to ‘Datum*’ {aka ‘long unsigned int*’}
  array = construct_array(elements, 2, TEXTOID, typlen, typbyval, typalign);
  ^~~~
In file included from /usr/pgsql-10/include/server/utils/acl.h:38,
 from
/usr/pgsql-10/include/server/catalog/objectaddress.h:18,
 from
/usr/pgsql-10/include/server/catalog/pg_publication.h:21,
 from /usr/pgsql-10/include/server/utils/rel.h:21,
 from seal_diff_cpp.cpp:6:
/usr/pgsql-10/include/server/utils/array.h:370:42: note:   initializing
argument 1 of ‘ArrayType* construct_array(Datum*, int, Oid, int, bool,
char)’
 extern ArrayType *construct_array(Datum *elems, int nelems,
   ~~~^
make: *** [Makefile:19: seal_diff_cpp.o] Error 1

Best regards,
Tal






Re: Returning Vector of Pairs with a PostgreSQL C Extension Function

2018-08-25 Thread Tom Lane
TalGloz writes:
> Datum*   elements[2];

Datum, not Datum*.

regards, tom lane
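
With that change the element array is declared as plain Datum values. A
sketch of the corrected lines (note that for a TEXTOID array the elements
would normally be built as text datums, e.g. with CStringGetTextDatum from
utils/builtins.h, rather than from bare c_str() pointers; that part is an
assumption, since the final working code is not shown in the thread):

Datum elements[2];  /* Datum values, not Datum pointers */

elements[0] = CStringGetTextDatum(localT1.c_str());
elements[1] = CStringGetTextDatum(localT2.c_str());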



Re: Returning Vector of Pairs with a PostgreSQL C Extension Function

2018-08-25 Thread TalGloz
Yes, that was the problem and now everything works.

Thanks,
Tal






Re: unorthodox use of PG for a customer

2018-08-25 Thread Gary M
Hmm, I usually don't participate in forums and accidentally sent my reply
to David Gauthier directly. This issue sounds interesting, so I'll give it
another try and send the reply to the group.

From my simple, engineering-hat perspective I feel this question is
difficult to answer without hard numbers, which include:

1) The amount of data stored in the DB, including indexes
2) Expected peak number and type of DB operations per second
3) Max latency the operations can consume
4) Number and type of CPUs on each workstation
5) Available RAM for each workstation
6) Data storage utilization (MB/s, IOPS)
7) Data storage capacity and type (SSD/spinning/network)
8) Type of network (number of networks and bandwidth)
9) Network bandwidth utilization
10) Swap space utilization during operation

From these numbers the approach and architecture can be determined.

For example, if the available RAM has sufficient capacity, the file system
cache and the PostgreSQL buffer cache can be tuned to keep all the data in
RAM, which is likely to significantly improve write and query performance.
Another option is to have a dedicated SSD for data and/or swap space.

I prefer these analytical "rules of 10" as a starting point to guide
alternatives.

b/r
gary


On Fri, Aug 24, 2018 at 12:18 PM, David Gauthier 
wrote:

> Hi Everyone:
>
> I'm going to throw this internal customer request out for ideas, even
> though I think it's a bit crazy.  I'm on the brink of telling him it's
> impractical and/or inadvisable.  But maybe someone has a solution.
>
> He's writing a script/program that runs on a workstation and needs to
> write data to a DB.  This process also sends work to a batch system on a
> server farm external to the workstation that will create multiple, parallel
> jobs/processes that also have to write to the DB as well. The workstation
> may have many of these jobs running at the same time.  And there are 58
> workstations, which all have/use locally mounted disks for this work.
>
> At first blush, this is easy.  Just create a DB on a server and have all
> those clients work with it.  But he's also adamant about having the DB on
> the same server(s) that ran the script AND on the locally mounted disk.  He
> said he doesn't want the overhead, dependencies and worries of anything
> like an external DB with a DBA, etc... . He also wants this to be fast.
>
> My first thought was SQLite.  Apparently, they now have some sort of
> multiple, concurrent write ability.  But there's no way those batch jobs on
> remote machines are going to be able to get at the locally mounted disk on
> the workstation. So I dismissed that idea.  Then I thought about having 58
> PG installs, one per workstation, each serving all the jobs pertaining to
> that workstation.  That could work.  But 58 DB instances?  If he didn't
> like the idea of one DBA, 58 can't be good.  Still, the DB would be on the
> workstation which seems to be what he wants.
>
> I can't think of anything better.  Does anyone have any ideas?
>
> Thanks in Advance !
>
>


Re: Linker errors while creating a PostgreSQL C extension function.

2018-08-25 Thread TalGloz
I removed the --no-undefined flag and it compiled and worked like a charm.
I had needed the flag to spot linker errors when linking with the external
*libseal.a* library.

As for my other question regarding the unused linker path
*-L"/usr/local/lib"*: I'm using Fedora 28 (Red Hat), and in this
distribution the default path for libraries is /usr/lib64, so I can either
keep this setting or change it to include the */usr/local/lib* path as well.

Best regards,
Tal







How can I use Postgres "ROLLBACK TO SAVEPOINT" with "FOR UPDATE SKIP LOCKED"?

2018-08-25 Thread Andrew Stuart
Postgres 10

I am trying to use "FOR UPDATE SKIP LOCKED" to make a work queue in
Postgres.

My goal is to be able to set status to 'complete' or 'failed' as the
outcome by using "ROLLBACK TO SAVEPOINT;" in the event that processing
fails.

I expected the code below to result in a final status of "failed", but it
appears the final status is "waiting".

BEGIN;
  DROP TABLE IF EXISTS foo;
  SELECT id
  INTO foo
  FROM jobs
  WHERE status = 'waiting'
  AND status != 'failed'
  ORDER BY created ASC
  FOR UPDATE SKIP LOCKED
  LIMIT 1;

UPDATE jobs SET status = 'failed' WHERE id = (SELECT id from foo)
RETURNING *;
SAVEPOINT blah;
UPDATE jobs SET status = 'complete' WHERE id = (SELECT id from foo)
RETURNING *;


ROLLBACK TO SAVEPOINT blah;


Can anyone please suggest what I can do so that either COMMIT results in a
status of 'complete' or ROLLBACK TO SAVEPOINT blah results in a status of
'failed'?

thanks!


Size of the table is growing abnormally in my database.

2018-08-25 Thread Raghavendra Rao J S V
Hi All,

One of our databases is 50 GB in size. One of its tables has 149444622
records. The size of that table is 14 GB and its indexes' size is 16 GB, so
the total size of the table and its indexes is 30 GB. I have performed the
below steps on that table.

reindex table table_name;

vacuum full verbose analyze on table_name;

But the size of the table and its indexes is still not reduced.
Please guide me on how to proceed further.

The structure of the table is as below.

    Column   |       Type       | Modifiers | Storage | Stats target | Description
 ------------+------------------+-----------+---------+--------------+-------------
  col1       | bigint           |           | plain   |              |
  col2       | double precision |           | plain   |              |
  col3       | double precision |           | plain   |              |
  col4       | double precision |           | plain   |              |
  col5       | double precision |           | plain   |              |
  col6date   | date             |           | plain   |              |
  tkey       | integer          |           | plain   |              |
  cid        | integer          |           | plain   |              |
  rtypeid    | integer          |           | plain   |              |
  rid        | integer          |           | plain   |              |
  ckey       | bigint           |           | plain   |              |
 Indexes:
     "idx_tab_cid" btree (cid)
     "idx_tab_ckey" btree (ckey)
     "idx_tab_col6date" btree (col6date)
     "idx_tab_rid" btree (rid)
     "idx_tab_rtype_id" btree (rtypeid)
     "idx_tab_tkey" btree (tkey)


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: Size of the table is growing abnormally in my database.

2018-08-25 Thread Adrian Klaver

On 08/25/2018 08:36 PM, Raghavendra Rao J S V wrote:

Hi All,

One of our database size is 50gb. Out of it one of the table has 
149444622 records. Size of that table is 14GB and its indexes size is 16GB.
Total size of the table and its indexes are 30GB. I have perfomred the 
below steps on that table.


reindex table table_name;

vacuum full verbose analyze on table_name;

But still the size of the table and its indexes size are not reduced. 
Please guid me. How to proceed further.


Rough approximation:

14,000,000,000 / 150,000,000 = 93 bytes/record.

I am not seeing an issue. If you want to reduce the size of the table,
remove rows.
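
As a rough cross-check from the column list below (a back-of-the-envelope
estimate, not a measurement):

  2 x bigint           = 16 bytes
  4 x double precision = 32 bytes
  1 x date             =  4 bytes
  4 x integer          = 16 bytes
  column data          = 68 bytes per row

Add the ~24-byte heap tuple header and the 4-byte line pointer per row, plus
alignment padding, and you land at roughly 96-100 bytes per row, which is in
line with the figure above.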





Structure of the table as below.

        Column           |       Type       | Modifiers | Storage | 
Stats target | Description

-+--+---+-+--+-
  col1| bigint           |           | plain   |              |
  col2 | double precision |           | plain   |              |
  col3| double precision |           | plain   |              |
  col4| double precision |           | plain   |              |
  col5| double precision |           | plain   |              |
  col6date| date             |           | plain   |              |
  tkey | integer          |           | plain   |              |
  cid | integer          |           | plain   |              |
  rtypeid | integer          |           | plain   |              |
  rid | integer          |           | plain   |              |
  ckey | bigint           |           | plain   |              |
Indexes:
     "idx_tab_cid" btree (cid)
     "idx_tab_ckey" btree (ckey)
     "idx_tab_col6date" btree (col6date)
     "idx_tab_rid" btree (rid)
     "idx_tab_rtype_id" btree (rtypid)
     "idx_tab_tkey" btree (tkey)


--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425



--
Adrian Klaver
adrian.kla...@aklaver.com



pg_dump order of operation

2018-08-25 Thread Ron

Hi,

In v8.4, I noticed that the tables seemed to be dumped in alphabetical 
order.  Not so much, though, in a multithreaded 9.6 dump of an 8.4 database; 
there's no pattern that I can discern.


In what order does the 9.6 pg_dump dump tables?

Thanks

--
Angular momentum makes the world go 'round.



Re: Size of the table is growing abnormally in my database.

2018-08-25 Thread Raghavendra Rao J S V
Thank you very much for your prompt response.

Please guide me on the below things:

How do I check whether rows got corrupted?

How do I check whether the table got corrupted?

How do I check which row occupies more space in the table?

Is this expected?

[image: image.png]

On Sun, 26 Aug 2018 at 09:46, Adrian Klaver 
wrote:

> On 08/25/2018 08:36 PM, Raghavendra Rao J S V wrote:
> > Hi All,
> >
> > One of our database size is 50gb. Out of it one of the table has
> > 149444622 records. Size of that table is 14GB and its indexes size is
> 16GB.
> > Total size of the table and its indexes are 30GB. I have perfomred the
> > below steps on that table.
> >
> > reindex table table_name;
> >
> > vacuum full verbose analyze on table_name;
> >
> > But still the size of the table and its indexes size are not reduced.
> > Please guid me. How to proceed further.
>
> Rough approximation:
>
> 14,000,000,000 / 150,000,000 = 93 bytes/record.
>
> I am not seeing an issue. If you want to reduce the size of the table
> remove rows.
>
>
> >
> > Structure of the table as below.
> >
> > Column   |   Type   | Modifiers | Storage |
> > Stats target | Description
> >
> -+--+---+-+--+-
> >   col1| bigint   |   | plain   |  |
> >   col2 | double precision |   | plain   |  |
> >   col3| double precision |   | plain   |  |
> >   col4| double precision |   | plain   |  |
> >   col5| double precision |   | plain   |  |
> >   col6date| date |   | plain   |  |
> >   tkey | integer  |   | plain   |  |
> >   cid | integer  |   | plain   |  |
> >   rtypeid | integer  |   | plain   |  |
> >   rid | integer  |   | plain   |  |
> >   ckey | bigint   |   | plain   |  |
> > Indexes:
> >  "idx_tab_cid" btree (cid)
> >  "idx_tab_ckey" btree (ckey)
> >  "idx_tab_col6date" btree (col6date)
> >  "idx_tab_rid" btree (rid)
> >  "idx_tab_rtype_id" btree (rtypid)
> >  "idx_tab_tkey" btree (tkey)
> >
> >
> > --
> > Regards,
> > Raghavendra Rao J S V
> > Mobile- 8861161425
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: Size of the table is growing abnormally in my database.

2018-08-25 Thread Paul Carlucci
There's a handful of hidden columns, like xmin and xmax, per row that you're
not accounting for, plus header info per page, reserved space, free space...
The physical size on disk is reasonable.

Otherwise you can reduce the number of rows by cleaning up and moving out
old data, reduce the width of each row by getting rid of any unused columns
or switching to narrower data types, or drop unused indexes.  If none of
that works for you, then you're going to have to adjust your disk budget.

On Sun, Aug 26, 2018, 12:37 AM Raghavendra Rao J S V <
raghavendra...@gmail.com> wrote:

> Thank you very much for your prompt response.
>
> Please guide me below things.
>
> How to check rows got corrupted?
>
> How to check table got corrupted?
>
> How to check which row is occupied more space in the table?
>
> Is this expected?
>
> [image: image.png]
>
> On Sun, 26 Aug 2018 at 09:46, Adrian Klaver 
> wrote:
>
>> On 08/25/2018 08:36 PM, Raghavendra Rao J S V wrote:
>> > Hi All,
>> >
>> > One of our database size is 50gb. Out of it one of the table has
>> > 149444622 records. Size of that table is 14GB and its indexes size is
>> 16GB.
>> > Total size of the table and its indexes are 30GB. I have perfomred the
>> > below steps on that table.
>> >
>> > reindex table table_name;
>> >
>> > vacuum full verbose analyze on table_name;
>> >
>> > But still the size of the table and its indexes size are not reduced.
>> > Please guid me. How to proceed further.
>>
>> Rough approximation:
>>
>> 14,000,000,000 / 150,000,000 = 93 bytes/record.
>>
>> I am not seeing an issue. If you want to reduce the size of the table
>> remove rows.
>>
>>
>> >
>> > Structure of the table as below.
>> >
>> > Column   |   Type   | Modifiers | Storage |
>> > Stats target | Description
>> >
>> -+--+---+-+--+-
>> >   col1| bigint   |   | plain   |  |
>> >   col2 | double precision |   | plain   |  |
>> >   col3| double precision |   | plain   |  |
>> >   col4| double precision |   | plain   |  |
>> >   col5| double precision |   | plain   |  |
>> >   col6date| date |   | plain   |  |
>> >   tkey | integer  |   | plain   |  |
>> >   cid | integer  |   | plain   |  |
>> >   rtypeid | integer  |   | plain   |  |
>> >   rid | integer  |   | plain   |  |
>> >   ckey | bigint   |   | plain   |  |
>> > Indexes:
>> >  "idx_tab_cid" btree (cid)
>> >  "idx_tab_ckey" btree (ckey)
>> >  "idx_tab_col6date" btree (col6date)
>> >  "idx_tab_rid" btree (rid)
>> >  "idx_tab_rtype_id" btree (rtypid)
>> >  "idx_tab_tkey" btree (tkey)
>> >
>> >
>> > --
>> > Regards,
>> > Raghavendra Rao J S V
>> > Mobile- 8861161425
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>
> --
> Regards,
> Raghavendra Rao J S V
> Mobile- 8861161425
>


Re: Size of the table is growing abnormally in my database.

2018-08-25 Thread Raghavendra Rao J S V
Ok, thanks.

On Sun 26 Aug, 2018, 10:46 AM Paul Carlucci, 
wrote:

> There's a handful of hidden columns like Xmin and Xmax per row that you're
> not accounting for, header info per page, reserve space, free space... The
> physical size on disk is reasonable.
>
> Otherwise you can reduce the number of rows by cleaning up and moving out
> old data, reduce the width of each row by getting rid of any unused columns
> or switching to narrower data types, or drop unused indexes.  If none of
> that works for you then you're going to have to adjust your disk budget.
>
> On Sun, Aug 26, 2018, 12:37 AM Raghavendra Rao J S V <
> raghavendra...@gmail.com> wrote:
>
>> Thank you very much for your prompt response.
>>
>> Please guide me below things.
>>
>> How to check rows got corrupted?
>>
>> How to check table got corrupted?
>>
>> How to check which row is occupied more space in the table?
>>
>> Is this expected?
>>
>> [image: image.png]
>>
>> On Sun, 26 Aug 2018 at 09:46, Adrian Klaver 
>> wrote:
>>
>>> On 08/25/2018 08:36 PM, Raghavendra Rao J S V wrote:
>>> > Hi All,
>>> >
>>> > One of our database size is 50gb. Out of it one of the table has
>>> > 149444622 records. Size of that table is 14GB and its indexes size is
>>> 16GB.
>>> > Total size of the table and its indexes are 30GB. I have perfomred the
>>> > below steps on that table.
>>> >
>>> > reindex table table_name;
>>> >
>>> > vacuum full verbose analyze on table_name;
>>> >
>>> > But still the size of the table and its indexes size are not reduced.
>>> > Please guid me. How to proceed further.
>>>
>>> Rough approximation:
>>>
>>> 14,000,000,000 / 150,000,000 = 93 bytes/record.
>>>
>>> I am not seeing an issue. If you want to reduce the size of the table
>>> remove rows.
>>>
>>>
>>> >
>>> > Structure of the table as below.
>>> >
>>> > Column   |   Type   | Modifiers | Storage |
>>> > Stats target | Description
>>> >
>>> -+--+---+-+--+-
>>> >   col1| bigint   |   | plain   |  |
>>> >   col2 | double precision |   | plain   |  |
>>> >   col3| double precision |   | plain   |  |
>>> >   col4| double precision |   | plain   |  |
>>> >   col5| double precision |   | plain   |  |
>>> >   col6date| date |   | plain   |  |
>>> >   tkey | integer  |   | plain   |  |
>>> >   cid | integer  |   | plain   |  |
>>> >   rtypeid | integer  |   | plain   |  |
>>> >   rid | integer  |   | plain   |  |
>>> >   ckey | bigint   |   | plain   |  |
>>> > Indexes:
>>> >  "idx_tab_cid" btree (cid)
>>> >  "idx_tab_ckey" btree (ckey)
>>> >  "idx_tab_col6date" btree (col6date)
>>> >  "idx_tab_rid" btree (rid)
>>> >  "idx_tab_rtype_id" btree (rtypid)
>>> >  "idx_tab_tkey" btree (tkey)
>>> >
>>> >
>>> > --
>>> > Regards,
>>> > Raghavendra Rao J S V
>>> > Mobile- 8861161425
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>
>>
>> --
>> Regards,
>> Raghavendra Rao J S V
>> Mobile- 8861161425
>>
>