[GENERAL] query locks up when run concurrently

2016-11-22 Thread azhwkd
Greetings!

I'm using PostgreSQL 9.5.5 on an Ubuntu 16.04.1 server
installation, installed through apt-get.

I have a query which if run alone usually completes in about 300ms.
When run in my application, this query constantly locks up and bogs
down all connections of the connection pool (in the application this
query is run up to 10 times in parallel with different parameters).
What's really weird is that I can re-run one of the hung queries from
the command line while it's hung, and it will complete as expected,
while the hung queries continue to use 100% CPU time.

The query in question is this:

insert into group_history ("group", id, sub_category, "date", aa, ab,
bb, ba, quantity, "hour")
(select
a."group",
a.id,
b.sub_category,
to_timestamp($2)::date as "date",
max(a.aa / a.quantity) as aa,
min(a.aa / a.quantity) as ab,
max(a.bb / a.quantity) as bb,
min(a.bb / a.quantity) as ba,
sum(a.quantity) as quantity,
extract('hour' from to_timestamp($2)) as "hour"
from tbla a
join tblb b on a.id = b.id
where a."group" = $1 and b."group" = $1
group by a."group", a.id, b.sub_category
);
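When the query wedges like this, a first diagnostic step is to check whether the stuck backends are waiting on a lock or actually burning CPU. A minimal sketch of the usual queries (9.5 column names; `pg_stat_activity` still has the boolean `waiting` column there, replaced by `wait_event` in 9.6):

```sql
-- Show what each backend is doing and whether it is blocked on a lock.
SELECT pid, state, waiting, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;

-- List ungranted lock requests together with a backend holding a lock
-- on the same object (a simplified pairing, not a full blocker analysis).
SELECT blocked.pid  AS blocked_pid,
       blocker.pid  AS blocker_pid,
       blocked.locktype,
       blocked.mode AS wanted_mode,
       blocker.mode AS held_mode
FROM pg_locks blocked
JOIN pg_locks blocker
  ON blocker.granted
 AND NOT blocked.granted
 AND blocker.locktype = blocked.locktype
 AND blocker.relation IS NOT DISTINCT FROM blocked.relation
 AND blocker.transactionid IS NOT DISTINCT FROM blocked.transactionid;
```

If `waiting` stays false while CPU sits at 100%, it is not lock contention in the usual sense, which is consistent with the perf profile below, where the time goes into heap and index access rather than lock waits.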

When I run perf on the system while the query is executing 10 times in
parallel, it looks like this:

Samples: 4M of event 'cpu-clock', Event count (approx.): 18972107951
Overhead Shared Object Symbol
17.95% postgres [.] heap_hot_search_buffer
5.64% postgres [.] heap_page_prune_opt
4.62% postgres [.] hash_search_with_hash_value
3.80% postgres [.] LWLockRelease
3.73% postgres [.] 0x002f420d
2.50% postgres [.] _bt_checkkeys
2.48% postgres [.] hash_any
2.45% postgres [.] 0x002f41e7
2.10% postgres [.] slot_getattr
1.80% postgres [.] ResourceOwnerForgetBuffer
1.58% postgres [.] LWLockAcquire
1.58% postgres [.] ReadBufferExtended
1.54% postgres [.] index_fetch_heap
1.47% postgres [.] MemoryContextReset
1.43% postgres [.] btgettuple
1.38% postgres [.] 0x002d710c
1.36% postgres [.] 0x002d70a5
1.35% postgres [.] ExecQual

Explain (Analyze, Verbose) Output

QUERY PLAN

--------------------------------------------------------------------
Insert on public.group_history (cost=10254.36..10315.16 rows=2432
width=62) (actual time=1833.967..1833.967 rows=0 loops=1)
-> Subquery Scan on "*SELECT*" (cost=10254.36..10315.16 rows=2432
width=62) (actual time=353.880..376.490 rows=6139 loops=1)
Output: "*SELECT*"."group", "*SELECT*".id,
"*SELECT*".sub_category, "*SELECT*"."when", "*SELECT*".aa,
"*SELECT*".ab, "*SELECT*".bb, "*SELECT*".ba, "*SELECT*".quantity,
"*SELECT*"."hour"
-> HashAggregate (cost=10254.36..10278.68 rows=2432 width=28)
(actual time=353.871..367.144 rows=6139 loops=1)
Output: a."group", a.id, b.sub_category, '2016-11-20'::date,
max((a.aa / a.quantity)), min((a.aa / a.quantity)), max((a.bb /
a.quantity)), min((a.bb / a.quantity)), sum(a.quantity), '21'::double
precision
Group Key: a."group", a.id, b.sub_category
-> Hash Join (cost=5558.64..10181.40 rows=2432 width=28)
(actual time=193.949..294.106 rows=30343 loops=1)
Output: a."group", a.id, a.aa, a.quantity, a.bb, b.sub_category
Hash Cond: (b.id = a.id)
-> Bitmap Heap Scan on public.auctions_extra b
(cost=685.19..4719.06 rows=30550 width=8) (actual time=56.678..111.038
rows=30343 loops=1)
Output: b.sub_category, b.id
Recheck Cond: (b."group" = 7)
Heap Blocks: exact=289
-> Bitmap Index Scan on auction_extra_pk
(cost=0.00..677.55 rows=30550 width=0) (actual time=55.966..55.966
rows=30343 loops=1)
Index Cond: (b."group" = 7)
-> Hash (cost=4280.62..4280.62 rows=30627 width=28)
(actual time=137.160..137.160 rows=30343 loops=1)
Output: a."group", a.id, a.aa, a.quantity, a.bb, a.id
Buckets: 16384 Batches: 4 Memory Usage: 638kB
-> Bitmap Heap Scan on public.tbla a
(cost=689.78..4280.62 rows=30627 width=28) (actual
time=58.530..117.064 rows=30343 loops=1)
Output: a."group", a.id, a.aa, a.quantity,
a.bb, a.id
Recheck Cond: (a."group" = 7)
Heap Blocks: exact=254
-> Bitmap Index Scan on tbla_pk
(cost=0.00..682.12 rows=30627 width=0) (actual time=57.801..57.801
rows=30343 loops=1)
Index Cond: (a."group" = 7)
Planning time: 0.475 ms
Trigger group_history_trigger: time=1442.561 calls=6139
Execution time: 1834.119 ms


group_history_trigger:

CREATE OR REPLACE FUNCTION public.group_history_partition_function()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
declare
_new_date timestamptz;
_tablename text;
_startdate text;
begin
-- Takes the current inbound "when" value and determines when
midnight is for the given date
_new_date := date_trunc('day', new."when");
_startdate := to_char(_new_date, '_MM_DD');
_tablename := 'group_history_'||_startdate;

-- Insert the current record into the correct partition
execute 'INSERT INTO public.' || quote_ident(_tablename) || ' VALUES ($1.*)
on conflict ("group", id, sub_category, "when", "hour") do
update set aa = excluded.aa,
ab = excluded.ab,
bb = excluded.bb,
ba = excluded.ba
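
The function body is cut off here in the archive. For reference, a minimal complete sketch of such a date-partitioning trigger; the 'YYYY_MM_DD' format string and everything after the ON CONFLICT clause are my reconstruction of the obvious pattern, not the poster's exact code:

```sql
CREATE OR REPLACE FUNCTION public.group_history_partition_function()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
declare
  _new_date  timestamptz;
  _tablename text;
  _startdate text;
begin
  -- Truncate the inbound "when" value to midnight of its day.
  _new_date  := date_trunc('day', new."when");
  _startdate := to_char(_new_date, 'YYYY_MM_DD');
  _tablename := 'group_history_' || _startdate;

  -- Route the row into the matching daily partition, upserting on the key.
  execute 'INSERT INTO public.' || quote_ident(_tablename) || ' VALUES ($1.*)
           on conflict ("group", id, sub_category, "when", "hour") do
           update set aa = excluded.aa,
                      ab = excluded.ab,
                      bb = excluded.bb,
                      ba = excluded.ba'
  using new;

  -- Returning NULL from a BEFORE trigger suppresses the insert into the
  -- parent table; the row now lives only in the partition.
  return null;
end;
$function$;
```

Note that the EXPLAIN output above attributes 1442 of the 1834 ms to this trigger, so the per-row dynamic EXECUTE is itself a large share of the cost.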

Re: [GENERAL] How to open PGStrom (an extension of PostgreSQL) in Netbeans?

2016-11-22 Thread Mark Anns
Yes, making the file is the problem. If you read my topic again, you
will see what the exact question is.



--
View this message in context: 
http://postgresql.nabble.com/How-to-open-PGStrom-an-extension-of-PostgreSQL-in-Netbeans-tp5931425p5931594.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Wal files - Question | Postgres 9.2

2016-11-22 Thread Venkata B Nagothi
On Wed, Nov 23, 2016 at 1:59 PM, Patrick B  wrote:

>
>
> 2016-11-23 15:55 GMT+13:00 Venkata B Nagothi :
>
>>
>>
>> On Wed, Nov 23, 2016 at 1:03 PM, Patrick B 
>> wrote:
>>
>>> Hi guys,
>>>
>>> I currently have a slave02 server that is replicating from another
>>> slave01 via Cascading replication. The master01 server is shipping
>>> wal_files (via ssh) to both slaves.
>>>
>>>
>>> I'm doing some tests on slave02 to test the recovery via wal_files...
>>> The goal here is to stop postgres, wait few minutes, start postgres again,
>>> watch it recovering from wal_files, once it's done see the streaming
>>> replication start working again.
>>>
>>> 1 - Stop postgres on slave02(streaming replication + wal_files)
>>> 2 - Wait for 5 minutes
>>> 3 - Start postgres - The goal here is to tail the logs to see if the
>>> wal_files are being successfully recovered
>>>
>>> However, when doing step3 I get these messages:
>>>
>>> cp: cannot stat '/walfiles/00021AF800A4': No such file or
>>> directory
>>>
>>> cp: cannot stat '/walfiles/00021AF800A5': No such file or
>>> directory
>>>
>>> cp: cannot stat '/walfiles/00021AF800A6': No such file or
>>> directory
>>> LOG:  consistent recovery state reached at 1AF8/AB629F90
>>> LOG:  database system is ready to accept read only connections
>>> LOG:  streaming replication successfully connected to primary
>>>
>>>
>>>
>>> still on slave01: *Sometimes the log_delay time is bigger.. sometimes
>>> is lower*
>>>
>>> SELECT CASE WHEN pg_last_xlog_receive_location() =
>>> pg_last_xlog_replay_location() THEN 0 ELSE EXTRACT (EPOCH FROM now() -
>>> pg_last_xact_replay_timestamp()) END AS log_delay;
>>>
>>>  log_delay
>>>
>>> ---
>>>
>>>   0.386863
>>>
>>>
>>>
>>> On master01:
>>>
>>> select * from pg_current_xlog_location();
>>>
>>>  pg_current_xlog_location
>>>
>>> --
>>>
>>>  1AF8/D3F47A80
>>>
>>>
>>>
>>> *QUESTION:*
>>>
>>> So.. I just wanna understand what's the risk of those errors... what's
>>> happening?
>>> *cp: cannot stat '/walfiles/00021AF800A5': No such file or
>>> director*y - Means it didn't find the file. However, the file exists on
>>> the Master, but it didn't start shipping yet. What are the consequences of
>>> that?
>>>
>>
>> That is just saying that the slave cannot find the WAL file. That should
>> not be of big importance. Eventually, that will vanish when the log file
>> gets shipped from the master. Also "cp: cannot stat" errors have been
>> fixed in 9.3, I believe.
>>
>
> Hi Venkata !
>
> Yeah that's fine.. the streaming replication is already working fine.
>
> But, as it didn't find/recover some of the wal_files, doesn't that mean
> that the DB isn't up-to-date?
>

Not necessarily. The standby periodically checks whether the WAL file it
is looking for is available at the restore_command location, and it emits
that message when the file is not yet available. These messages are
harmless.
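
Concretely, the message comes from the standby's restore_command itself; a typical 9.2-era standby configuration looks something like this (paths and host names are illustrative, not from the thread):

```
# recovery.conf on the standby (9.2 style; values are examples)
standby_mode     = 'on'
restore_command  = 'cp /walfiles/%f %p'
primary_conninfo = 'host=master01 port=5432 user=replicator'
```

`cp` prints "cannot stat" on stderr whenever it is asked for a segment that has not been shipped yet; the server simply falls back to streaming and retries the archive later.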

The link below might help:

https://www.postgresql.org/message-id/4DDC9515.203%40enterprisedb.com

Regards,
Venkata B N
Database Consultant

Fujitsu Australia


Re: [GENERAL] Wal files - Question | Postgres 9.2

2016-11-22 Thread Patrick B
2016-11-23 15:55 GMT+13:00 Venkata B Nagothi :

>
>
> On Wed, Nov 23, 2016 at 1:03 PM, Patrick B 
> wrote:
>
>> Hi guys,
>>
>> I currently have a slave02 server that is replicating from another
>> slave01 via Cascading replication. The master01 server is shipping
>> wal_files (via ssh) to both slaves.
>>
>>
>> I'm doing some tests on slave02 to test the recovery via wal_files... The
>> goal here is to stop postgres, wait few minutes, start postgres again,
>> watch it recovering from wal_files, once it's done see the streaming
>> replication start working again.
>>
>> 1 - Stop postgres on slave02(streaming replication + wal_files)
>> 2 - Wait for 5 minutes
>> 3 - Start postgres - The goal here is to tail the logs to see if the
>> wal_files are being successfully recovered
>>
>> However, when doing step3 I get these messages:
>>
>> cp: cannot stat '/walfiles/00021AF800A4': No such file or
>> directory
>>
>> cp: cannot stat '/walfiles/00021AF800A5': No such file or
>> directory
>>
>> cp: cannot stat '/walfiles/00021AF800A6': No such file or
>> directory
>> LOG:  consistent recovery state reached at 1AF8/AB629F90
>> LOG:  database system is ready to accept read only connections
>> LOG:  streaming replication successfully connected to primary
>>
>>
>>
>> still on slave01: *Sometimes the log_delay time is bigger.. sometimes is
>> lower*
>>
>> SELECT CASE WHEN pg_last_xlog_receive_location() =
>> pg_last_xlog_replay_location() THEN 0 ELSE EXTRACT (EPOCH FROM now() -
>> pg_last_xact_replay_timestamp()) END AS log_delay;
>>
>>  log_delay
>>
>> ---
>>
>>   0.386863
>>
>>
>>
>> On master01:
>>
>> select * from pg_current_xlog_location();
>>
>>  pg_current_xlog_location
>>
>> --
>>
>>  1AF8/D3F47A80
>>
>>
>>
>> *QUESTION:*
>>
>> So.. I just wanna understand what's the risk of those errors... what's
>> happening?
>> *cp: cannot stat '/walfiles/00021AF800A5': No such file or
>> director*y - Means it didn't find the file. However, the file exists on
>> the Master, but it didn't start shipping yet. What are the consequences of
>> that?
>>
>
> That is just saying that the slave cannot find the WAL file. That should
> not be of big importance. Eventually, that will vanish when the log file
> gets shipped from the master. Also "cp: cannot stat" errors have been
> fixed in 9.3, I believe.
>

Hi Venkata!

Yeah, that's fine.. the streaming replication is already working fine.

But, as it didn't find/recover some of the wal_files, doesn't that mean
that the DB isn't up-to-date?
Otherwise, what's the purpose of the wal_files, if not to contain the
data needed to bring the DB up to date?

Thanks!


Re: [GENERAL] Wal files - Question | Postgres 9.2

2016-11-22 Thread Venkata B Nagothi
On Wed, Nov 23, 2016 at 1:03 PM, Patrick B  wrote:

> Hi guys,
>
> I currently have a slave02 server that is replicating from another slave01
> via Cascading replication. The master01 server is shipping wal_files (via
> ssh) to both slaves.
>
>
> I'm doing some tests on slave02 to test the recovery via wal_files... The
> goal here is to stop postgres, wait few minutes, start postgres again,
> watch it recovering from wal_files, once it's done see the streaming
> replication start working again.
>
> 1 - Stop postgres on slave02(streaming replication + wal_files)
> 2 - Wait for 5 minutes
> 3 - Start postgres - The goal here is to tail the logs to see if the
> wal_files are being successfully recovered
>
> However, when doing step3 I get these messages:
>
> cp: cannot stat '/walfiles/00021AF800A4': No such file or
> directory
>
> cp: cannot stat '/walfiles/00021AF800A5': No such file or
> directory
>
> cp: cannot stat '/walfiles/00021AF800A6': No such file or
> directory
> LOG:  consistent recovery state reached at 1AF8/AB629F90
> LOG:  database system is ready to accept read only connections
> LOG:  streaming replication successfully connected to primary
>
>
>
> still on slave01: *Sometimes the log_delay time is bigger.. sometimes is
> lower*
>
> SELECT CASE WHEN pg_last_xlog_receive_location() =
> pg_last_xlog_replay_location() THEN 0 ELSE EXTRACT (EPOCH FROM now() -
> pg_last_xact_replay_timestamp()) END AS log_delay;
>
>  log_delay
>
> ---
>
>   0.386863
>
>
>
> On master01:
>
> select * from pg_current_xlog_location();
>
>  pg_current_xlog_location
>
> --
>
>  1AF8/D3F47A80
>
>
>
> *QUESTION:*
>
> So.. I just wanna understand what's the risk of those errors... what's
> happening?
> *cp: cannot stat '/walfiles/00021AF800A5': No such file or
> director*y - Means it didn't find the file. However, the file exists on
> the Master, but it didn't start shipping yet. What are the consequences of
> that?
>

That is just saying that the slave cannot find the WAL file. That should
not be of big importance. Eventually, that will vanish when the log file
gets shipped from the master. Also "cp: cannot stat" errors have been
fixed in 9.3, I believe.

Regards,

Venkata B N
Database Consultant

Fujitsu Australia


[GENERAL] Wal files - Question | Postgres 9.2

2016-11-22 Thread Patrick B
Hi guys,

I currently have a slave02 server that is replicating from another slave01
via Cascading replication. The master01 server is shipping wal_files (via
ssh) to both slaves.


I'm doing some tests on slave02 to test the recovery via wal_files... The
goal here is to stop postgres, wait a few minutes, start postgres again,
watch it recover from wal_files, and once it's done see the streaming
replication start working again.

1 - Stop postgres on slave02(streaming replication + wal_files)
2 - Wait for 5 minutes
3 - Start postgres - The goal here is to tail the logs to see if the
wal_files are being successfully recovered

However, when doing step3 I get these messages:

cp: cannot stat '/walfiles/00021AF800A4': No such file or
directory

cp: cannot stat '/walfiles/00021AF800A5': No such file or
directory

cp: cannot stat '/walfiles/00021AF800A6': No such file or
directory
LOG:  consistent recovery state reached at 1AF8/AB629F90
LOG:  database system is ready to accept read only connections
LOG:  streaming replication successfully connected to primary



still on slave01: *Sometimes the log_delay time is bigger.. sometimes it
is lower*

SELECT CASE WHEN pg_last_xlog_receive_location() =
pg_last_xlog_replay_location() THEN 0 ELSE EXTRACT (EPOCH FROM now() -
pg_last_xact_replay_timestamp()) END AS log_delay;

 log_delay

---

  0.386863



On master01:

select * from pg_current_xlog_location();

 pg_current_xlog_location

--

 1AF8/D3F47A80



*QUESTION:*

So.. I just want to understand the risk behind those errors... what's
happening?
*cp: cannot stat '/walfiles/00021AF800A5': No such file or
directory* - means the file wasn't found. However, the file exists on the
master, but it hasn't started shipping yet. What are the consequences of that?

Cheers
Patrick


Re: [GENERAL] max_connections limit violation not showing in pg_stat_activity

2016-11-22 Thread Kevin Grittner
On Tue, Nov 22, 2016 at 12:48 PM, Charles Clavadetscher
 wrote:

> We are using PostgreSQL 9.3.10 on RedHat (probably 6.x).

Is it possible to upgrade?  You are missing over a year's worth of
fixes for serious bugs and security vulnerabilities.

https://www.postgresql.org/support/versioning/

> Among other thing the database is the backend for a web application that
> expects a load of a some hundred users at a time (those are participans
> to online surveys that we use for computing economic indicators and
> access the system every month). The whole amount of people expected is
> above 5000, but we don't expect a too high concurrent access to the
> database. As mentioned a few hundreds at the beginning of the surveys.
>
> To be sure that we won't have problems with the peak times we created a
> load test using gatling that ramps up to 1000 users in 5 minutes in
> bunches of 10. At the beginning we had problems with the web server
> response that we were able to correct. Now we face problem with the
> max_connections limit of PostgreSQL. Currently it is set to the default
> of 100. We are going to look into it and either increase that limit or
> consider connections pooling.

On a web site with about 3000 active users, I found (through
adjusting the connection pool size on the production database and
monitoring performance) that we got best performance with a pool of
about 40 connections.  This was on a machine with 16 cores (never
count HT "threads" as cores), 512GB RAM, and a RAID with 40 drives
of spinning rust.

http://tbeitr.blogspot.com/2015/11/for-better-service-please-take-number.html

> What bothers me however is that running a query on pg_stat_activity with
> a watch of 1 seconds never shows any value higher than 37 of concurrent
> active connections.
>
> SELECT count(*) FROM pg_stat_activity; watch 1;

At the times when the resources are overloaded by more connections
than the resources can efficiently service -- well that's precisely
the time that a sleeping "monitoring" process is least likely to be
given a time slice to run.  If you can manage to get pgbadger to
run on your environment, and you turn on logging of connections and
disconnections, you will be able to get far more accurate
information.
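
A sketch of the postgresql.conf settings that make that analysis possible (values are examples, not a recommendation for this specific system):

```
# postgresql.conf - log every connection attempt and rejection
log_connections    = on
log_disconnections = on
log_line_prefix    = '%t [%p] %u@%d '   # timestamp, pid, user, database
```

With these on, every "FATAL: sorry, too many clients already" rejection lands in the log with a timestamp, so pgbadger (or even grep) can count peaks that a once-per-second poll of pg_stat_activity will miss.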

> Increasing max_connections has repercussions on the configuration
> of work_mem (if I remember well)

Each connection can allocate one work_mem allocation per plan node that
requires a sort, hash, CTE, etc.
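
A back-of-envelope worst case, with illustrative numbers (100 connections, the default work_mem of 4MB, and roughly 3 sort/hash nodes per query — all assumed values, not measurements from this system):

```sql
-- Worst-case work_mem usage: connections * nodes_per_query * work_mem.
SELECT 100 * 3 * 4 AS worst_case_mb;  -- 1200 MB on top of shared_buffers
```

This is why raising max_connections without revisiting work_mem (or adding a pooler) can push a box into swap under load.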

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] min/max_wal_size

2016-11-22 Thread Adrian Klaver

On 11/22/2016 12:51 PM, Torsten Förtsch wrote:

> Hi,
>
> I am a bit confused about min_wal_size and max_wal_size. Previously,
> there was this formula to estimate the max number of WAL segment files
> in pg_xlog/
> (https://www.postgresql.org/docs/9.4/static/wal-configuration.html):
>
>   (2 + checkpoint_completion_target) * checkpoint_segments + 1 or
> checkpoint_segments + wal_keep_segments + 1
>
> I don't exactly know what the operation "or" means. Before writing this

'Or' distinguishes between the case where wal_keep_segments is the
default of 0 and the case where you set it to some value > 0. In the
second case you are forcing Postgres to keep segments it would not by
default keep.

> email I always thought of wal_keep_segments as a parameter that
> configures how many segments to keep that would otherwise be deleted and
> checkpoint_segments as the number of WAL files the database is allowed
> to work with within a checkpoint_timeout interval.
>
> The formula above makes more or less sense. The database is allowed to
> write one set of WAL files during the checkpoint interval. While
> performing the checkpoint it needs the previous set of WAL files. I
> don't know where that checkpoint_completion_target comes in. But I trust

See the paragraph above the one with the equation for how
checkpoint_completion_target applies.

> the wisdom of the author of the documentation.
>
> Now, I have a database with very low write activity. Archive_command is
> called about once per hour to archive one segment. When the database was
> moved to PG 9.5, it was initially configured with insanely high settings
> for max_wal_size, min_wal_size and wal_keep_segments. I reset
> min/max_wal_size to the default settings of 80MB and 1GB and reduced
> wal_keep_segments to 150.
>
> I am seeing in pg_xlog the WAL segments from
>
> -rw--- 1 postgres postgres 16777216 Nov 17 04:01
> pg_xlog/0001000400F9
> ...
> -rw--- 1 postgres postgres 16777216 Nov 22 20:00
> pg_xlog/00010005008E
> -rw--- 1 postgres postgres 16777216 Nov 22 20:19
> pg_xlog/00010005008F
> -rw--- 1 postgres postgres 16777216 Nov 15 07:50
> pg_xlog/000100050090
> ...
> -rw--- 1 postgres postgres 16777216 Nov 15 07:52
> pg_xlog/000100060017
>
> As you can see, the files from 1/4/F9 to 1/5/8E are old. That are 150
> files which matches exactly wal_keep_segments. If I understand
> correctly, the file 1/5/8F is currently written. Further, the files from
> 1/5/90 to 1/6/17 seem to be old WAL files that have been renamed to be
> reused in the future. Their count is 136.
>
> Why does a database that generates a little more than 1 WAL file per
> hour and has a checkpoint_timeout of 30 minutes with a
> completion_target=0.7 need so many of them? The default value for
> min_wal_size is 80MB which amounts to 5 segments. That should be totally
> enough for this database.
>
> Is this because of the previously insanely high setting (min=1GB,
> max=9GB)? Should I expect this value to drop in a week's time? Or is
> there anything that I am not aware of?

Are you talking about the recycled files?

> Thanks,
> Torsten




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] min/max_wal_size

2016-11-22 Thread Torsten Förtsch
Hi,

I am a bit confused about min_wal_size and max_wal_size. Previously, there
was this formula to estimate the max number of WAL segment files in
pg_xlog/ (https://www.postgresql.org/docs/9.4/static/wal-configuration.html
):

  (2 + checkpoint_completion_target) * checkpoint_segments + 1 or
checkpoint_segments + wal_keep_segments + 1

I don't exactly know what the operation "or" means. Before writing this
email I always thought of wal_keep_segments as a parameter that configures
how many segments to keep that would otherwise be deleted and
checkpoint_segments as the number of WAL files the database is allowed to
work with within a checkpoint_timeout interval.

The formula above makes more or less sense. The database is allowed to
write one set of WAL files during the checkpoint interval. While performing
the checkpoint it needs the previous set of WAL files. I don't know where
that checkpoint_completion_target comes in. But I trust the wisdom of the
author of the documentation.
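
Plugging in illustrative values (the old default checkpoint_segments = 3, plus the completion_target of 0.7 and wal_keep_segments of 150 mentioned below — the first is an assumption), the two branches of the formula give:

```sql
-- Whichever branch is greater bounds the number of files in pg_xlog:
SELECT (2 + 0.7) * 3 + 1 AS by_checkpoints,    -- 9.1
       3 + 150 + 1       AS by_keep_segments;  -- 154
```

So with a nonzero wal_keep_segments, the second branch dominates, which is what the "or" is expressing.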

Now, I have a database with very low write activity. Archive_command is
called about once per hour to archive one segment. When the database was
moved to PG 9.5, it was initially configured with insanely high settings
for max_wal_size, min_wal_size and wal_keep_segments. I reset
min/max_wal_size to the default settings of 80MB and 1GB and reduced
wal_keep_segments to 150.

I am seeing in pg_xlog the WAL segments from

-rw--- 1 postgres postgres 16777216 Nov 17 04:01
pg_xlog/0001000400F9
...
-rw--- 1 postgres postgres 16777216 Nov 22 20:00
pg_xlog/00010005008E
-rw--- 1 postgres postgres 16777216 Nov 22 20:19
pg_xlog/00010005008F
-rw--- 1 postgres postgres 16777216 Nov 15 07:50
pg_xlog/000100050090
...
-rw--- 1 postgres postgres 16777216 Nov 15 07:52
pg_xlog/000100060017

As you can see, the files from 1/4/F9 to 1/5/8E are old. Those are 150
files, which matches wal_keep_segments exactly. If I understand correctly,
the file 1/5/8F is currently being written. Further, the files from 1/5/90
to 1/6/17 seem to be old WAL files that have been renamed for future reuse.
Their count is 136.
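
The counts can be checked by treating the last two parts of the segment name as a counter; since 9.3, each "high" value spans a full 256 segments (00 through FF). A quick check of the two figures:

```sql
-- Segments from 1/4/F9 through 1/5/8E, inclusive:
SELECT (5 - 4) * 256 + (x'8E'::int - x'F9'::int) + 1 AS old_kept_segments;  -- 150
-- Segments from 1/5/90 through 1/6/17, inclusive (the recycled ones):
SELECT (6 - 5) * 256 + (x'17'::int - x'90'::int) + 1 AS recycled_segments;  -- 136
```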

Why does a database that generates a little more than 1 WAL file per hour
and has a checkpoint_timeout of 30 minutes with a completion_target=0.7
need so many of them? The default value for min_wal_size is 80MB which
amounts to 5 segments. That should be totally enough for this database.

Is this because of the previously insanely high setting (min=1GB, max=9GB)?
Should I expect this value to drop in a week's time? Or is there anything
that I am not aware of?

Thanks,
Torsten


[GENERAL] max_connections limit violation not showing in pg_stat_activity

2016-11-22 Thread Charles Clavadetscher
Hello

We are using PostgreSQL 9.3.10 on RedHat (probably 6.x).

The database is hosted by an internal service provider and we have
superuser access to it over a PG client, e.g. psql, but not to the OS.
For that reason we only have access to the log files indirectly using
some of the built in system functions like pg_ls_dir, etc.

Among other things, the database is the backend for a web application that
expects a load of a few hundred users at a time (these are participants
in online surveys that we use for computing economic indicators; they
access the system every month). The total number of people expected is
above 5000, but we don't expect very high concurrent access to the
database. As mentioned, a few hundred at the beginning of the surveys.

To be sure that we won't have problems at peak times, we created a
load test using Gatling that ramps up to 1000 users in 5 minutes in
batches of 10. At the beginning we had problems with the web server
response, which we were able to correct. Now we face a problem with the
max_connections limit of PostgreSQL. Currently it is set to the default
of 100. We are going to look into it and either increase that limit or
consider connection pooling.

What bothers me, however, is that running a query on pg_stat_activity
with a watch of 1 second never shows more than 37 concurrent active
connections.

SELECT count(*) FROM pg_stat_activity; watch 1;

Due to that fact, it took us quite some time to figure out that the
bottleneck had become the database. We discovered it after looking into
the log files (as mentioned above this is not very straightforward, in
particular because the logs tend to become quite huge).

I assume that the peaks of requests violating the limit happen between
two calls of the query. Is there a better way to keep track of this kind
of problem? It felt a bit odd not to be able to discover the issue sooner.

And what would be a reasonable strategy to deal with the problem at
hand? Increasing max_connections has repercussions on the configuration
of work_mem (if I remember correctly) and on the amount of physical
memory that must be available on the system.

On Thursday we are going to have a meeting with our DB hosting provider
to discuss which improvements need to be made to meet the requirements
of our applications (the web application mentioned is not the only one
using the database, but it is the only one where we expect such peaks).

So I'd be very grateful for advice on this subject.

Thank you.
Regards
Charles

-- 
Swiss PostgreSQL Users Group
c/o Charles Clavadetscher
Treasurer
Motorenstrasse 18
CH – 8005 Zürich

http://www.swisspug.org

+---+
|     __  ___   |
|  /)/  \/   \  |
| ( / ___\) |
|  \(/ o)  ( o)   ) |
|   \_  (_  )   \ ) _/  |
| \  /\_/\)/|
|  \/ |
|   _|  |   |
|   \|_/|
|   |
| PostgreSQL 1996-2016  |
|  20 Years of Success  |
|   |
+---+




Re: [GENERAL] pg_basebackup on slave running for a long time

2016-11-22 Thread Subhankar Chattopadhyay
Thanks John, that clarifies the archive a lot!

On 22 Nov 2016 22:22, "John R Pierce"  wrote:

> On 11/22/2016 3:41 AM, Subhankar Chattopadhyay wrote:
>
>> John,
>>
>> Can you explain the Wal Archive procedure, how it can be setup so that
>> the slave never goes out of sync, even if master deletes the WAL
>> files?
>
> The WAL archive will typically be a separate file server that both the
> master and slave can reach...  it could be accessed via NFS or via scp or
> whatever is appropriate for your environment.   The master is configured
> with an archive command (cp in the case of nfs, or scp for ssh/scp, or
> whatever) to copy WAL segments to the archive.   The slave is set up with
> a recovery command (cp, scp, etc) to fetch from this same archive.
>
> The archive will continue to grow without limit if you don't do some
> cleanup on it.   One strategy is to periodically (weekly?  monthly?) do a
> base backup of the master (possibly by using rsync or another file copy
> method, rather than pg_basebackup), and keep 2 of these full backups, and
> all wal archives since the beginning of the oldest one.   With this backup
> + archive, you can initialize a new slave without bothering the master
> (rsync or scp or cp the latest backup, then let the slave recover from the
> wal archive).
>
> This backup+archive will also let you do point-in-time-recovery (aka
> PITR).   Say something catastrophic happens and the data in the master is
> bad after some point in time (maybe a jr admin accidentally clobbers key
> data, but the app kept running).   You can restore the last good base
> backup, and recover up to but not including the point in time of the
> transaction that clobbered your data.



-- 
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Streaming replication failover/failback

2016-11-22 Thread Israel Brewster
On Nov 18, 2016, at 5:48 AM, Jehan-Guillaume de Rorthais  wrote:
> 
> On Thu, 17 Nov 2016 08:26:59 -0900
> Israel Brewster  wrote:
> 
>>> On Nov 16, 2016, at 4:24 PM, Adrian Klaver 
>>> wrote:
>>> 
>>> On 11/16/2016 04:51 PM, Israel Brewster wrote:  
 I've been playing around with streaming replication, and discovered that
 the following series of steps *appears* to work without complaint:
 
 - Start with master on server A, slave on server B, replicating via
 streaming replication with replication slots.
 - Shut down master on A
 - Promote slave on B to master
 - Create recovery.conf on A pointing to B
 - Start (as slave) on A, streaming from B
 
 After those steps, A comes up as a streaming replica of B, and works as
 expected. In my testing I can go back and forth between the two servers
 all day using the above steps.
 
 My understanding from my initial research, however, is that this
 shouldn't be possible - I should need to perform a new basebackup from B
 to A after promoting B to master before I can restart A as a slave. Is
 the observed behavior then just a "lucky fluke" that I shouldn't rely  
>>> 
>>> You don't say how active the database is, but I going to say it is not
>>> active enough for the WAL files on B to go out for scope for A in the time
>>> it takes you to do the switch over.  
>> 
>> Yeah, not very - this was just in testing, so essentially no activity. So
>> between your response and the one from Jehan-Guillaume de Rorthais, what I'm
>> hearing is that my information about the basebackup being needed was
>> obsoleted with the patch he linked to, and as long as I do a clean shutdown
>> of the master, and don't do too much activity on the *new* master before
>> bringing the old master up as a slave (such that WAL files are lost)
> 
> Just set up wal archiving to avoid this (and have PITR backup as a side 
> effect).

Good point. Streaming replication may not *need* WAL archiving to work, but
having it can provide benefits beyond just replication. I'll have to look
more into the PITR backup though - that's something that sounds great to
have, but beyond the concept I have no clue how it works. :-)
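
For reference, the recovery.conf used in the failback recipe quoted above (old master A following new master B) is just the mirror image of the original one; a sketch with made-up host and slot names:

```
# recovery.conf on old master A, now a standby of B
standby_mode             = 'on'
primary_conninfo         = 'host=serverB port=5432 user=replicator'
primary_slot_name        = 'slot_a'   # the replication slot created on B
recovery_target_timeline = 'latest'   # follow B's new timeline after promotion
```

The `recovery_target_timeline = 'latest'` line matters here: promoting B starts a new timeline, and A has to be willing to follow it.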

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---



Re: [GENERAL] pg_basebackup on slave running for a long time

2016-11-22 Thread John R Pierce

On 11/22/2016 3:41 AM, Subhankar Chattopadhyay wrote:

John,

Can you explain the WAL archive procedure, how it can be set up so that
the slave never goes out of sync, even if the master deletes the WAL
files?


The WAL archive will typically be a separate file server that both the 
master and slave can reach...  it could be accessed via NFS or via scp 
or whatever is appropriate for your environment.   The master is 
configured with an archive command (cp in the case of nfs, or scp for 
ssh/scp, or whatever) to copy WAL segments to the archive.   The slave 
is set up with a recovery command (cp, scp, etc) to fetch from this same 
archive.
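
The archive and recovery commands above can be sketched in configuration; the NFS mount point `/mnt/wal_archive` is a hypothetical path:

```
# postgresql.conf on the master (9.x-era sketch, assuming an NFS mount
# at /mnt/wal_archive -- the path is hypothetical)
wal_level = hot_standby
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# recovery.conf on the slave (pre-PG10 file name)
standby_mode = 'on'
primary_conninfo = 'host=master port=5432 user=replicator'
restore_command = 'cp /mnt/wal_archive/%f %p'
```

The `test ! -f` guard keeps the archive command from silently overwriting an already-archived segment, which the PostgreSQL docs recommend.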


The archive will continue to grow without limit if you don't do some 
cleanup on it.   one strategy is to periodically (weekly?  monthly?) do 
a base backup of the master (possibly by using rsync or another file 
copy method, rather than pg_basebackup), and keep 2 of these full 
backups, and all wal archives since the beginning of the oldest one.
with this backup + archive, you can initialize a new slave without 
bothering the master (rsync or scp or cp the latest backup, then let the 
slave recover from the wal archive).


this backup+archive will also let you do point-in-time-recovery (aka 
PITR).   say something catastrophic happens and the data in the master 
is bad after some point in time (maybe a jr admin accidentally clobbers 
key data, but the app kept running).   you can restore the last good 
base backup, and recover up to but not including the point in time of 
the transaction that clobbered your data.
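
That recover-up-to-a-point step is expressed in recovery.conf on the restored base backup (a sketch; the timestamp is hypothetical):

```
# recovery.conf on the restored base backup (9.x; sketch only)
restore_command = 'cp /mnt/wal_archive/%f %p'

# stop replay just before the bad transaction
recovery_target_time = '2016-11-22 14:59:00'
recovery_target_inclusive = false
```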



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] pg_basebackup on slave running for a long time

2016-11-22 Thread John R Pierce

On 11/22/2016 2:34 AM, Subhankar Chattopadhyay wrote:

So, the question here is: while I apply the update on the slave, how do I know
if it will be able to catch up or if I need a WAL archive? Is there a
way I can determine this? In my case, while applying the update on the slave,
the db process will be stopped, so the query, even if it gives a correct
value, won't help. Can anybody help here?


if the slave is set up with the proper recovery commands to fetch from 
the WAL archive, then when the slave is woken up after a slumber it will 
attempt to recover as many WALs as it can from the archive before it 
resumes streaming.  This will happen automatically.
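
One way to watch the catch-up once the standby is back is to compare WAL positions (9.x function names; on PG 10+ these became the `pg_wal`/`lsn` variants):

```sql
-- on the master: current write position
SELECT pg_current_xlog_location();

-- on the standby: what has been received and what has been replayed;
-- if these keep advancing toward the master's position, it is catching up
SELECT pg_last_xlog_receive_location(),
       pg_last_xlog_replay_location();
```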


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] How to open PGStrom (an extension of PostgreSQL) in Netbeans?

2016-11-22 Thread John R Pierce

On 11/22/2016 12:29 AM, Mark Anns wrote:

Nope. I am not asking about installation instructions. I have installed it.
And I know how to run it from command line.

I just wanted to compile it in netbeans.


netbeans is a java-centric tool.   if you can get it to run the 
makefiles, it should work.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Postgresql 9.5 and Shell scripts/variables vs. C programming/defining a value to be used

2016-11-22 Thread John McKown
On Tue, Nov 22, 2016 at 8:22 AM, Poul Kristensen  wrote:

> I think I understand.
> When I use this in my code I get
> "undefined reference to `PQexecParms'
>

​The correct name is PQexecParams (note the last "a"). Sorry I missed that
when first looking.​ Also, just to be sure, did you include the argument
"-lpq" on the compile command to point to the PostgreSQL library for
linking?
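
A typical compile line looks something like this (a sketch; the file names are hypothetical, and `pg_config` must be on your PATH):

```
# hypothetical file names; pg_config locates the libpq headers/libraries
gcc -o myprog myprog.c \
    -I"$(pg_config --includedir)" \
    -L"$(pg_config --libdir)" -lpq
```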



> when compiling.
>
> references in main is
>
> const char *conninfo; /* connection string  to the database */
> PGconn *conn; /* connection to the database */
> PGresult *res; /* result of sql query */
> int   nFields;  /* print out the attribute names */
> int i, j; /* loop counters for printing the columns */
>
> Is there a reserved reference to use with
>
> Reserved res = PQexecParms(conn )
>
> Then I assume that I have to use another reference than res.
>
> Thanks.
>
> /Poul
>
>
-- 
Heisenberg may have been here.

Unicode: http://xkcd.com/1726/

Maranatha! <><
John McKown


Re: [GENERAL] Database migration to RDS issues permissions

2016-11-22 Thread Adrian Klaver

On 11/21/2016 03:34 PM, Fran ... wrote:

Hi Adrian,


I followed you link and I had again errors:


What was the command you used?




pg_restore: [archiver (db)] Error from TOC entry 4368; 2606 151317 FK
CONSTRAINT type_id_3940becf ownersuser
pg_restore: [archiver (db)] could not execute query: ERROR:  constraint
"type_id_3940becf" of relation "store" does not exist
Command was: ALTER TABLE ONLY public.store DROP CONSTRAINT
type_id_3940becf;


Can't DROP what does not exist. The end result is the same anyway. You 
can avoid this type of error with --if-exists.

pg_restore: [archiver (db)] Error from TOC entry 4273; 1259 1179680
INDEX profile_id owneruser
pg_restore: [archiver (db)] could not execute query: ERROR:  index
"profile_id" does not exist
Command was: DROP INDEX public.profile_id;


See above.


pg_restore: [archiver (db)] Error from TOC entry 4751; 0 0 COMMENT
EXTENSION plpgsql
pg_restore: [archiver (db)] could not execute query: ERROR:  must be
owner of extension plpgsql
Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural
language';


It is only failing to add a COMMENT, which is not necessarily fatal. Best 
guess is that plpgsql is actually installed; have you checked?



pg_restore: [archiver (db)] Error from TOC entry 4756; 0 0 USER MAPPING
USER MAPPING dwhuser SERVER pg_rest postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  role
"user" does not exist
Command was: CREATE USER MAPPING FOR user SERVER pg_rest OPTIONS (
password 'X',
"user" 'user'
);


This is probably because you could not import the global roles from your 
original database.




Regards.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Postgresql 9.5 and Shell scripts/variables vs. C programming/defining a value to be used

2016-11-22 Thread Tom Lane
Poul Kristensen  writes:
> When I use this in my code I get
> "undefined reference to `PQexecParms'
> when compiling.

IIRC, it's PQexecParams not PQexecParms

regards, tom lane




Re: [GENERAL] Postgresql 9.5 and Shell scripts/variables vs. C programming/defining a value to be used

2016-11-22 Thread Poul Kristensen
I think I understand.
When I use this in my code I get
"undefined reference to `PQexecParms'
when compiling.

references in main is

const char *conninfo; /* connection string  to the database */
PGconn *conn; /* connection to the database */
PGresult *res; /* result of sql query */
int   nFields;  /* print out the attribute names */
int i, j; /* loop counters for printing the columns */

Is there a reserved reference to use with

Reserved res = PQexecParms(conn )

Then I assume that I have to use another reference than res.

Thanks.

/Poul










2016-11-22 0:48 GMT+01:00 John McKown :

> On Mon, Nov 21, 2016 at 11:22 AM, Poul Kristensen 
> wrote:
>
>> Thank you for fast repons!
>>
>> The $1 substitution below. I assume that it refers to "joe's place". But
>> it is not very clear to me how "joe's place" will appear instead of $1
>> when running. Where is it possible to read more about this? There just
>> isn't much about substitution in C online. Any recommended books to buy?
>>
>>
>> /* Here is our out-of-line parameter value */
>> paramValues[0] = "joe's place";
>>
>> res = PQexecParams(conn,
>>"SELECT * FROM test1 WHERE t = $1",
>>1,   /* one param */
>>NULL,/* let the backend deduce param type */
>>paramValues,
>>NULL,/* don't need param lengths since text */
>>NULL,/* default to all text params */
>>1);  /* ask for binary results */
>> }
>>
>> /Poul
>>
>>
>>
> It is described better here: 
> https://www.postgresql.org/docs/9.6/static/libpq-exec.html
> than I can do. But I just noticed a mistake in your code, or maybe just
> something left out. I would say:
>
> char *value1 = "joe's place";
> char **paramValues = &value1; /* closer match to the documentation's syntax */
>
> // char *paramValues[] = {"joe's place"}; /* same as above, different syntax */
>
> // char *paramValues[1];           /* this looks to be missing */
> // paramValues[0] = "joe's place"; /* what you had */
> res = PQexecParams(conn,
>    "SELECT * FROM test1 WHERE t = $1",
>    1,    /* there is only 1 entry in the paramValues array */
>    NULL, /* let the backend deduce the param types */
>    paramValues, /* address of parameter value array */
>    NULL, /* don't need param lengths since text */
>    NULL, /* default to all text params */
>    1);   /* return all values as binary */
>
> Well, you have an array of pointers to characters called paramValues. The
> $1 refers to whatever is pointed to by paramValues[0], which is a pointer
> to value1, which is a C "string". Basically, in the second parameter, the
> command, the $n is used as a 1-based index into the paramValues[] array.
> This means that the actual C language array index is one less (since C
> arrays are 0-based). Which means that "$n" (n>=1) in the "command" string
> refers to the value pointed to by paramValues[n-1]. The nParams argument, 1
> in this case, tells PQexecParams how many entries there are in the
> paramValues[] array. I guess this is a type of validity check that the $n
> in the command string is not too large for the array.
>
> Note: please keep the discussion on the list, not to me personally. It may
> be of help to others (or maybe not, I don't know.)
>
> --
> Heisenberg may have been here.
>
> Unicode: http://xkcd.com/1726/
>
> Maranatha! <><
> John McKown
>



-- 
Med venlig hilsen / Best regards
Poul Kristensen
Linux-OS/Virtualizationexpert and Oracle DBA


Re: [GENERAL] pg_rewind and WAL size...

2016-11-22 Thread Michael Paquier
On Tue, Nov 22, 2016 at 9:40 PM,   wrote:
> Am I doing something wrong? Would tweaking some checkpoint parameters help
> reduce the WAL volume?

The answer to this question is likely yes. wal_log_hints forces a
full-page write for the first modification of a page after a checkpoint,
even if that modification only sets some hint bits. This generates extra
WAL for OLTP types of workloads where many pages are dirtied and flushed
at checkpoint.
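
One way to see where the extra volume goes is pg_xlogdump's stats mode (available from 9.5; the segment names below are hypothetical). The FPI columns show how much of the WAL is full-page images:

```
# summarize record types and full-page-image volume across a range of
# WAL segments (segment names are hypothetical; run against pg_xlog or
# the archive directory)
pg_xlogdump --stats=record 000000010000000000000010 000000010000000000000020
```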
-- 
Michael




[GENERAL] pg_rewind and WAL size...

2016-11-22 Thread marin

Hi,

I did a series of tests to see the WAL size impact of enabling data 
checksums/wal_log_hints on our databases (so that we can use pg_rewind 
to fix split-brain situations). Having a set of servers available the 
last few days, here are the results:


No data checksums and wal_log_hints=off :
createdb benchdisk
pgbench -i -s 1 benchdisk
-> This creates a WAL archive of 121GB
pgbench -c 32 -j 16 -t 10 benchdisk
-> The WAL archive is now 167GB. Increase of 167 - 121 = 46GB
pgbench -c 32 -j 16 -t 10 -N benchdisk
-> The WAL archive is now 209GB. Increase of 209 - 167 = 42GB


Data checksums or wal_log_hints=on :
createdb benchdisk
pgbench -i -s 1 benchdisk
-> This creates a WAL archive of 245GB
pgbench -c 32 -j 16 -t 10 benchdisk
-> The WAL archive is now 292GB. Increase of 292 - 245 = 47GB
pgbench -c 32 -j 16 -t 10 -N benchdisk
-> The WAL archive is now 334GB. Increase of 334 - 292 = 42GB

The tests run on two identical servers on a freshly initialized data 
folder.


During the testing, for the read-write and simple-write tests I expected 
some additional WAL volume (a couple of percent is tolerable), but the 
100% increase when creating the test database is very disturbing. I 
assume a dump/restore would behave the same way.


Am I doing something wrong? Would tweaking some checkpoint parameters 
help reduce the WAL volume?


I was planning to turn data checksums on (for data integrity) for a 
larger migration, but with these numbers I would have to turn it off and 
use wal_log_hints after the loading.
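
For reference, the two options compared above are toggled quite differently (a sketch; the data directory path is hypothetical):

```
# data checksums must be chosen at initdb time (cluster-wide, permanent
# in 9.x -- there is no way to switch them on later)
initdb --data-checksums -D /var/lib/postgresql/9.5/main

# wal_log_hints, by contrast, is a postgresql.conf setting and only
# needs a server restart to change:
#   wal_log_hints = on
```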


Regards,
Mladen Marinović




Re: [GENERAL] pg_basebackup on slave running for a long time

2016-11-22 Thread Subhankar Chattopadhyay
John,

Can you explain the WAL archive procedure, how it can be set up so that
the slave never goes out of sync, even if the master deletes the WAL
files?

On Tue, Nov 22, 2016 at 4:04 PM, Subhankar Chattopadhyay
 wrote:
> So, the question here is: while I apply the update on the slave, how do I know
> if it will be able to catch up or if I need a WAL archive? Is there a
> way I can determine this? In my case, while applying the update on the slave,
> the db process will be stopped, so the query, even if it gives a correct
> value, won't help. Can anybody help here?
>
> On Mon, Nov 21, 2016 at 12:40 PM, John R Pierce  wrote:
>> On 11/20/2016 11:00 PM, Subhankar Chattopadhyay wrote:
>>
>> Yes, so if the slave is behind I need to start over with pg_basebackup. I saw
>> according to the documentation this query gives us the replication state.
>> Can somebody tell me if this would be sufficient to know if I need to start
>> over the backup?
>>
>>
>>
>>
>> if the slave is behind but is catching up, no, restarting replication would
>> be overkill.  only if the slave gets so far behind that it can't catch up,
>> and in that case, a wal archive would be a better choice than a new base
>> backup.
>>
>> I've never run into these problems as I run on dedicated hardware servers,
>> which don't have all these reliability and performance problems.   a
>> complete server failure requiring a full rebuild is something that would
>> happen less than annually.
>>
>>
>>
>> --
>> john r pierce, recycling bits in santa cruz
>
>
>
> --
>
>
>
>
> Subhankar Chattopadhyay
> Bangalore, India



-- 




Subhankar Chattopadhyay
Bangalore, India




Re: [GENERAL] pg_basebackup on slave running for a long time

2016-11-22 Thread Subhankar Chattopadhyay
So, the question here is: while I apply the update on the slave, how do I know
if it will be able to catch up or if I need a WAL archive? Is there a
way I can determine this? In my case, while applying the update on the slave,
the db process will be stopped, so the query, even if it gives a correct
value, won't help. Can anybody help here?

On Mon, Nov 21, 2016 at 12:40 PM, John R Pierce  wrote:
> On 11/20/2016 11:00 PM, Subhankar Chattopadhyay wrote:
>
> Yes, so if the slave is behind I need to start over with pg_basebackup. I saw
> according to the documentation this query gives us the replication state.
> Can somebody tell me if this would be sufficient to know if I need to start
> over the backup?
>
>
>
>
> if the slave is behind but is catching up, no, restarting replication would
> be overkill.  only if the slave gets so far behind that it can't catch up,
> and in that case, a wal archive would be a better choice than a new base
> backup.
>
> I've never run into these problems as I run on dedicated hardware servers,
> which don't have all these reliability and performance problems.   a
> complete server failure requiring a full rebuild is something that would
> happen less than annually.
>
>
>
> --
> john r pierce, recycling bits in santa cruz



-- 




Subhankar Chattopadhyay
Bangalore, India




[GENERAL] Replication of a database or schemas from a database

2016-11-22 Thread Johann Spies
We would like to have a master (read/write) version of a database (or a
schema or two) on one server and a readonly version of the same
database.  The only changes on the second one may be to duplicate changes
to views, materialized views and indexes that also happened on the first
one.

We work with different versions of data of which the content version in
production will not change except for the changes described in the previous
paragraph.

Almost all the replication/load-sharing solutions I have read about work at
the cluster/server level.

I have seen one person referring to Slony or Londiste for a situation like
this, but also suggesting that it might be an in-house option from PG 9.5
onwards.

Do I have to use something like Slony or Londiste or can it be done with a
standard 9.6 installation?

We would like read queries to be run on both servers in a distributed way
if possible.

Recommendations for a solution would be welcomed.

Regards
Johann

-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


Re: [GENERAL] How to open PGStrom (an extension of PostgreSQL) in Netbeans?

2016-11-22 Thread Mark Anns
Nope. I am not asking about installation instructions. I have installed it.
And I know how to run it from command line.

I just wanted to compile it in netbeans.



--
View this message in context: 
http://postgresql.nabble.com/How-to-open-PGStrom-an-extension-of-PostgreSQL-in-Netbeans-tp5931425p5931431.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] How to open PGStrom (an extension of PostgreSQL) in Netbeans?

2016-11-22 Thread John R Pierce

On 11/21/2016 11:28 PM, Mark Anns wrote:

Where am I missing something? How can I do it? It needs CUDA also, I think.


did you read the installation instructions?  it says nothing about 
netbeans, it says to use make and gcc etc.


http://strom.kaigai.gr.jp/install.html#install-os


don't ask me any questions about this, I've never done it, but 15 
seconds with google found that page.




--
john r pierce, recycling bits in santa cruz


