Re: [GENERAL] Problems with the time in data type timestamp without time zone

2017-10-19 Thread Achilleas Mantzios

On 18/10/2017 18:02, Root2 wrote:

Hi,

I have a program that saves information in a DB Postgresql need to extract
data from date and time of that DB but when I retrieve the date and time
information is always ahead 3 hours, the type of data that has that field is
timestamp without time zone,


Your program is Java? Anyways, if your local TZ is +3 (like mine at the moment) 
it would be likely the case.



I appreciate your time and attention.

Best regards.



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread Achilleas Mantzios

On 18/10/2017 17:34, Igal @ Lucee.org wrote:

On 10/18/2017 6:24 AM, Ron Johnson wrote:

On 10/17/2017 11:17 AM, Tom Lane wrote:

Ron Johnson  writes:

Where can I look to see (roughly) how much more RAM/CPU/disk needed when
moving from 8.4 and 9.2?

It's entirely possible you'll need *less*, as you'll be absorbing the
benefit of several years' worth of performance improvements. But this
is such a workload-dependent thing that there's no general answer.


XML stored in blobs (not sure whether text or bytea) and b-tree indexes.



A bit off-topic here, but why upgrade to 9.6 when you can upgrade to 10.0?


Had the same question, we are moving from 9.3 -> 10.0 near the start of summer 
(I hope).
10.0's pg_upgrade supports 8.4 . One reason to upgrade in smaller steps is 
maybe to grasp the new changes / features better?



Obviously you're not one to upgrade often so shouldn't you take advantage of all of the 
new features and improvements when "finally" (to use your own word) upgrading?

Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Using Substitution Variables In PostgreSQL

2017-10-16 Thread Achilleas Mantzios

On 16/10/2017 17:08, Osahon Oduware wrote:

Hi All,

I wanted to find out how to use a substitution variable in an SQL statement that would 
cause the user to be prompted for a value. Something similar to the ampersand 
(&&) in ORACLE.

For example, given the SQL statement below:
    SELECT ,,
    FROM 
    WHERE  = 35

I want the user to be prompted for the value in the WHERE (filter) clause, e.g.
    SELECT ,,
    FROM 
    WHERE  = ?

I would be glad if someone could point me in the right direction.


Just write a bash script that asks for values and then use the -v feature of 
psql .


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Strange checkpoint behavior - checkpoints take a long time

2017-10-10 Thread Achilleas Mantzios

Hello Vladimir,

maybe your update triggered auto_vacuum on those tables ? Default 
autovacuum_freeze_max_age is exactly set at 200,000,000 . Did you check your 
vacuum stats afterwards (pg_stat_*_tables) ?
Can you show the code which performed the deletes?

On 10/10/2017 16:56, Vladimir Nicolici wrote:


I experimented some more with the settings this weekend, while doing some large write operations (deleting 200 million records from a table), and I realized that the database is capable of 
generating much more WAL than I estimated.


And it seems that spikes in write activity, when longer than a few minutes, can cause the checkpoint process to “panic” and start a checkpoint earlier, and trying to complete it as soon as possible, 
estimating, correctly, that if that level of activity continues it will hit the max_wal_size limit.


Based on that, I reduced the checkpoint_timeout from 30 minutes to 20 minutes, while keeping max_wal_size at 144GB . Alternatively I could have increased the maximum WAL size more, but I’m not sure 
it’s a good idea to set it higher than the shared buffers, which are also set at 144GB. After this change, on Monday all checkpoints were triggered by “time”, I didn’t have any more checkpoints 
triggered by “xlog”.


I also set checkpoint_completion_target to 0.5 to see if our hardware can handle concentrating the write activity for 20 minutes in just 10 minutes, and that worked very well too, checkpoints 
finished on time. The %util (busy%) for the SSDs as reported by sar was around 20% when not doing a checkpoint, and 60% during the checkpoint, so it seems the hardware will be able to handle future 
increases in activity just fine.


The lesson I learned here is that max_wal_size needs to be configured based on the **maximum** volume of wal the database can generate in the checkpoint_timeout interval. Initially I had it set 
based on the **average** volume of wal generated in that interval, setting it to 3 times that average, but that was not enough, triggering the unexpected behavior.


Thanks,

Vlad



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Using cp to back up a database?

2017-10-09 Thread Achilleas Mantzios

On 09/10/2017 17:13, Michael Paquier wrote:

On Mon, Oct 9, 2017 at 11:09 PM, Achilleas Mantzios
 wrote:

In all pg conferences I have been, ppl scream : do not use pg_dump for
backups :)

It depends on what you are trying to achieve, pg_dump can be fine for
small-ish databases. By relying on both logical (pg_dump) and physical
backups (base backups) brings more insurance in face of a disaster.


Basically yes, it depends on size. Where I work in order to do a proper pg_dump 
would take ~ one week. Business can't wait for that long.


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using cp to back up a database?

2017-10-09 Thread Achilleas Mantzios

On 09/10/2017 16:51, Larry Rosenman wrote:


If you want a consistent database (you **REALLY** do), pg_dump is the correct 
tool.



In all pg conferences I have been, ppl scream : do not use pg_dump for backups 
:)


--

Larry Rosenman http://www.lerctr.org/~ler <http://www.lerctr.org/%7Eler>

Phone: +1 214-642-9640 E-Mail: l...@lerctr.org 
<mailto:l...@lerctr.org>

US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106

*From: * on behalf of Ron Johnson 

*Date: *Monday, October 9, 2017 at 8:41 AM
*To: *"pgsql-general@postgresql.org" 
*Subject: *[GENERAL] Using cp to back up a database?

Hi,

v8.4.20

This is what the current backup script uses:

/usr/bin/psql -U postgres -c "SELECT pg_start_backup('Incrementalbackup',true);"

cp -r /var/lib/pgsql/data/* $dumpdir/data/

/usr/bin/psql -U postgres template1 -c "SELECT pg_stop_backup();"



Should it use rsync or pg_dump instead?

Thanks


--
World Peace Through Nuclear Pacification



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL

2017-09-19 Thread Achilleas Mantzios

On 19/09/2017 20:09, Tom Lane wrote:

chiru r  writes:

We are looking  for User profiles in ope source PostgreSQL.
For example, If a  user password failed n+ times while login ,the user
access has to be blocked few seconds.
Please let us know, is there any plan to implement user profiles in feature
releases?.

Not particularly.  You can do that sort of thing already via PAM,
for example.

Or LDAP as in our case.


regards, tom lane




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] JDBC: logical replication and LSN feedback

2017-09-19 Thread Achilleas Mantzios

On 19/09/2017 16:37, Yason TR wrote:

Hi all,

I am developing an application which connects to a logical replication slot, to 
consume the WAL events. These WAL events are then forwarded to a MQ broker.

The heart of the code can be seen as:

while (true) {
 Connection connection = null;
 PGReplicationStream stream = null;

 try {
 connection = 
DriverManager.getConnection("jdbc:postgresql://localhost:5432/db", properties);
 stream = 
connection.unwrap(PGConnection.class).getReplicationAPI().replicationStream().logical().withSlotName("slot").start();

 while (true) {
 final ByteBuffer buffer = stream.read();

 // ... MQ logic here ... omitted ...

 stream.setAppliedLSN(stream.getLastReceiveLSN());
 stream.setFlushedLSN(stream.getLastReceiveLSN());
 }
 } catch (final SQLException e) {
 // ... log exception ... omitted ...
 } finally {
 // ... close stream and connection ... omitted ...
 }
}

I notice some behavior which I cannot explain and would like to understand so I 
can alter my code:

- When I restart the application, I notice that the application is retrieving 
the last event from the previous run again. The result is that this event is 
sent twice to the MQ broker after a restart of the application. Why is that? 
Isn't calling `setAppliedLSN(stream.getLastReceiveLSN())` and/or 
`setFlushedLSN(stream.getLastReceiveLSN())` enough to acknowledge an event, so 
it will removed from the WAL log and it will not be resent?

- When receiving an event, the corresponding LSN from that event (which is sent 
in the payload) is not the same as the result of `stream.getLastReceivedLSN()`. 
Why is that? Which one should I use? Maybe this is correlated to my first 
question.

- What is the difference between `setAppliedLSN(LSN)` and `setFlushedLSN(LSN)`? 
The Javadocs are not really helpful here.


The stages of a wal location generally go like : sent -> write -> flush -> 
replay , at least in terms of physical replication.
I guess applied=replayed ?

Note that from the docs : 
https://jdbc.postgresql.org/documentation/head/replication.html#logical-replication
it says :
"
In the event that replication has been restarted, it's will start from last 
successfully processed LSN that was sent via feedback to database.
"



FYI, I also asked this question on 
https://stackoverflow.com/questions/46301578/postgres-jdbc-logical-replication-lsn-feedback.

Thanks a lot and kind regards,

Yason TR




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] CREATE TABLE LIKE including all not including storage parameters?

2017-09-05 Thread Achilleas Mantzios

On 05/09/2017 13:26, Michael Paquier wrote:

On Tue, Sep 5, 2017 at 6:45 PM, Achilleas Mantzios
 wrote:

Am I doing something wrong here?

I didn't sat that :) but you are right, STORAGE settings for copied col defs != 
tables's storage parameters. The key here is that LIKE deals with columns only, 
not other parameters of the table.

 From the documentation:
https://www.postgresql.org/docs/devel/static/sql-createtable.html
STORAGE settings for the copied column definitions will be copied only
if INCLUDING STORAGE is specified. The default behavior is to exclude
STORAGE settings, resulting in the copied columns in the new table
having type-specific default settings. For more on STORAGE settings,
see Section 66.2.

And in this case storage parameters refer to column-specific settings,
not table-level storage parameters, which are defined here by toast:
https://www.postgresql.org/docs/devel/static/storage-toast.html



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] CREATE TABLE LIKE including all not including storage parameters?

2017-09-05 Thread Achilleas Mantzios

On 05/09/2017 11:56, Milen Blagojevic wrote:

Hi all,

I am using CREATE TABLE LIKE  for creating partitions :

Lets say this is my main table:

\d+ test_tabl
Table "public.test_tabl"
Column|Type | Modifiers | Storage  | Stats 
target | Description
--+-+---+--+--+-
 id   | integer | not null  | plain|
  |
 test_name| character varying(10)   |   | extended |
  |
 test_value   | numeric(19,3)   |   | main |
  |
 time_created | timestamp without time zone |   | plain|
  |
Indexes:
"test_tabl_pkey" PRIMARY KEY, btree (id)
"ix_test_tabl_time_created" btree (time_created)
Child tables: test_tabl_20170905
Options: fillfactor=75


I am creating new partitions with following query:

create table test_tabl_20170906 (like test_tabl INCLUDING ALL) inherits 
(test_tabl);

\d+ test_tabl_20170906
   Table "public.test_tabl_20170906"
Column|Type | Modifiers | Storage  | Stats 
target | Description
--+-+---+--+--+-
 id   | integer | not null  | plain|
  |
 test_name| character varying(10)   |   | extended |
  |
 test_value   | numeric(19,3)   |   | main |
  |
 time_created | timestamp without time zone |   | plain|
  |
Indexes:
"test_tabl_20170906_pkey" PRIMARY KEY, btree (id)
"test_tabl_20170906_time_created_idx" btree (time_created)
Inherits: test_tabl

According to PostgreSQL documentation:
INCLUDING ALL is an abbreviated form of INCLUDING DEFAULTS INCLUDING 
CONSTRAINTS INCLUDING INDEXES INCLUDING STORAGE INCLUDING COMMENTS.

But in this case child table didn't inherit filfactor (behaviour is the same 
for autovacuum parameters)

Version is 9.4.13:
  version

 PostgreSQL 9.4.13 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.5 
20150623 (Red Hat 4.8.5-11), 64-bit




Same in PostgreSQL 10beta3 on x86_64-pc-linux-gnu.
testdb=# create table dad(foo text) WITH (fillfactor=99, 
autovacuum_freeze_max_age=2000);
CREATE TABLE
testdb=# create table dadkid1 (like dad INCLUDING STORAGE);
CREATE TABLE
testdb=# \d+ dadkid1
 Table "public.dadkid1"
 Column | Type | Collation | Nullable | Default | Storage  | Stats target | 
Description
+--+---+--+-+--+--+-
 foo| text |   |  | | extended |  |




Am I doing something wrong here?

Thanks in advance.

Regards,
Milen Blagojevic



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Table create time

2017-08-31 Thread Achilleas Mantzios

On 31/08/2017 18:20, Melvin Davidson wrote:

>you could just create an event trigger looking for CREATE TABLE as 
filter_value:

I have tried that. Unfortunately, I have been unable to extract the table name 
from the event because TG_TABLE_NAME is not
available during an event trigger, albeit perhaps I am missing something?

You need to use ddl_command_end event and then select from 
pg_event_trigger_ddl_commands() . Search for some example how to do this.

That being said, I still believe it is extra work that could easily be avoided 
and should be added to the postgresql catalogs simply to be
more feature competitive with  Oracle & SQL Server, as well as a boost to the 
PostgreSQL community.

On Thu, Aug 31, 2017 at 10:54 AM, Michael Paquier mailto:michael.paqu...@gmail.com>> wrote:

On Thu, Aug 31, 2017 at 10:21 PM, Melvin Davidson mailto:melvin6...@gmail.com>> wrote:
> Wolfgang, as David said, a column in pg_class for the creation time of a 
table does not exist. I long ago requested that feature as it is
> in other DB's (Oracle & MS SQL Server), but the main reason that it was 
not done was that no one was interested in doing it.

Is there any need for a column in pg_class for that? You could just
create an event trigger looking for CREATE TABLE as filter_value:
https://www.postgresql.org/docs/9.6/static/sql-createeventtrigger.html 
<https://www.postgresql.org/docs/9.6/static/sql-createeventtrigger.html>
And then have this event trigger just save the timestamp value of
now() in a custom table with the name and/or OID of the relation
involved.
--
Michael




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Table create time

2017-08-31 Thread Achilleas Mantzios

On 31/08/2017 16:12, Achilleas Mantzios wrote:

On 31/08/2017 14:03, haman...@t-online.de wrote:

On 31/08/2017 09:56, haman...@t-online.de wrote:

Hi,

is there a way to add a table create (and perhaps schema modify) timestamp to 
the system?
I do occasionally create semi-temporary tables (meant to live until a problem 
is solved, i.e. longer
than a session) with conveniently short names.

In FreeBSD you'd do smth like this to find the file creation time :
ls -lU /data/PG_9.3_201306121/16425/12344

where 12344 is the filenode of the relation in question. In ext4 you may do 
this albeit with more difficulty.


Hello Achilleas,

many thanks for responding. There are two problems;
a) accessing the filesystem will likely require some extra effort (e.g. 
installing an untrusted programming
language)

No need for this. You may use builtin pg_stat_file function . I see it supports a 
"OUT creation timestamp with time zone" parameter.


Sorry, just tested that against both FreeBSD pgsql9.3 and Ubuntu/ext4 10beta3, 
and .creation returns null in all tests. So yes you might need to write your 
own function .


b) a dump/restore will modify the dates

That would be a problem, but this is not a common use case. Anyways you can 
always write an event trigger and store some message in a log file. This should 
survive dump/restores .



best regards
Wolfgang Hamann







--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table create time

2017-08-31 Thread Achilleas Mantzios

On 31/08/2017 14:03, haman...@t-online.de wrote:

On 31/08/2017 09:56, haman...@t-online.de wrote:

Hi,

is there a way to add a table create (and perhaps schema modify) timestamp to 
the system?
I do occasionally create semi-temporary tables (meant to live until a problem 
is solved, i.e. longer
than a session) with conveniently short names.

In FreeBSD you'd do smth like this to find the file creation time :
ls -lU /data/PG_9.3_201306121/16425/12344

where 12344 is the filenode of the relation in question. In ext4 you may do 
this albeit with more difficulty.


Hello Achilleas,

many thanks for responding. There are two problems;
a) accessing the filesystem will likely require some extra effort (e.g. 
installing an untrusted programming
language)

No need for this. You may use builtin pg_stat_file function . I see it supports a 
"OUT creation timestamp with time zone" parameter.

b) a dump/restore will modify the dates

That would be a problem, but this is not a common use case. Anyways you can 
always write an event trigger and store some message in a log file. This should 
survive dump/restores .



best regards
Wolfgang Hamann





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table create time

2017-08-31 Thread Achilleas Mantzios

On 31/08/2017 09:56, haman...@t-online.de wrote:

Hi,

is there a way to add a table create (and perhaps schema modify) timestamp to 
the system?
I do occasionally create semi-temporary tables (meant to live until a problem 
is solved, i.e. longer
than a session) with conveniently short names.

In FreeBSD you'd do smth like this to find the file creation time :
ls -lU /data/PG_9.3_201306121/16425/12344

where 12344 is the filenode of the relation in question. In ext4 you may do 
this albeit with more difficulty.


Also, is there a simple query to identify tables without a table comment? (so a 
weekly cron
could remind me of tables that are already a few days old but have no 
explanatory comment)

I am running PG 9.3

Best regards
Wolfgang Hamann






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Begginers question

2017-08-16 Thread Achilleas Mantzios

On 16/08/2017 13:46, Alex Samad wrote:



On 16 August 2017 at 16:16, Michael Paquier mailto:michael.paqu...@gmail.com>> wrote:

On Wed, Aug 16, 2017 at 2:32 PM, Alex Samad mailto:a...@samad.com.au>> wrote:
> 1) why did it fill up this time and not previously
> I add this
> archive_command = '/bin/true'
> wal_keep_segments = 1000 # <<< I'm guessing its this
>
> 2) how do I fix up, can I just remove the files from the pg_xlog directory

Don't do that. those files are managed by Postgres so you may finish
with a corrupted cluster. Instead you should lower the value of


Too late, its okay its a learning experience.

wal_keep_segments, reload the server parameters, and then enforce two
checkpoints to force WAL segments to be recycled. Note that this


how do I force check points

checkpoint ;
(the ; is not meant as a smiley or whatever )


depends also on the values of checkpoint_segments
(max_wal_size/min_wal_size in Postgres 9.5 and onwards).
--
Michael


thanks



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Receive a string in Composite-type Arguments

2017-08-11 Thread Achilleas Mantzios

On 11/08/2017 16:09, Fabiana Zioti wrote:


Hi!

I am studying how to program extensions for PostgreSQL in C language.

In the example to return to return Composite-type Arguments, from the 
PostgreSQL 9.6 documentation, I could not make the cash from a cstring to the 
HeapTupleHeader type.

That is, instead of the function receive as a parameter a table:

CREATE FUNCTION c_overpaid (emp, integer) RETURNS boolean
AS DIRECTORY

SELECT name, c_overpaid (emp, 1500) AS overpaid
From emp
WHERE name = 'Bill' OR name = 'Sam';

And in the code in C receive the argument as:

HeapTupleHeader t = PG_GETARG_HEAPTUPLEHEADER (0);


The sql function would be defined as:

CREATE FUNCTION c_overpaid (cstring, integer) RETURNS boolean
AS 'DIRECTORY / funcs', 'c_overpaid'
LANGUAGE C STRICT;

But how would I receive this argument, for example:
Char * str = PG_GETARG_CSTRING (0),

And do the conversion to the HeapTupleHeader type?

Why do you want to convert this to HeapTupleHeader since this is *not* a tuple but a string value? What do you want to achieve? IMHO you better start with simpler examples, make them run, and proceed 
as you understand more and more.



What do you guys suggest?

Thanks in advance



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-08-09 Thread Achilleas Mantzios

On 09/08/2017 15:27, ADSJ (Adam Sjøgren) wrote:

On 2017-06-21 Adam Sjøgren  wrote:


Adam Sjøgren  wrote:

Meanwhile, I can report that I have upgraded from 9.3.14 to 9.3.17 and
the errors keep appearing the log.

Just to close this, for the record: We haven't seen the errors since
2017-06-30. We upgraded to 9.3.17 (latest 9.3 point-release at the time
of writing) on 2017-06-10.

Whether this means that the affected rows gradually got overwritten
after switching to .17 and thus got fixed, or if something subtle in our
workflow changed, so we aren't hitting this anymore, or something else
entirely is the answer, we're not sure.

Glad you sorted it out! You have been consistent in your effort to chase this 
down, and reverted back with your findings to close the case. Thumbs up!

We didn't get to trying Alvaro Herrera's suggestion of removing
6c243f90ab6904f27fa990f1f3261e1d09a11853 before the errors stopped
appearing "by themselves".


   Best regards,

 Adam



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] monitoring PostgreSQL

2017-07-24 Thread Achilleas Mantzios

pgbadger is a very nice reporting tool, overall, albeit not exactly at the 
system side, but more to the DBA side.
For system level monitoring maybe take a look here : 
https://wiki.postgresql.org/wiki/Monitoring#check_postgres .
Also you might want to write a script that parses logs for FATAL and PANIC and 
sends out emails.

On 24/07/2017 14:27, PAWAN SHARMA wrote:

Hi All,


Please provide me a list of tools which we can use for monitoring PostgreSQL.

-Monitor all the services and health of server
-Able to send critical and warning alert on mail.

OS: Redhat-7
PostgreSQL Version: 9.5.7


-Pawan



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Dealing with ordered hierarchies

2017-07-24 Thread Achilleas Mantzios

On 24/07/2017 10:02, Tim Uckun wrote:

I have read many articles about dealing with hierarchies in postgres including 
nested sets, ltree, materialized paths, using arrays as parentage,  CTEs etc 
but nobody talks about the following scenario.

Say I have a hierarchy like this

1
1.1
1.1.1
1.1.2
1.2
1.3
2
2.1

In this hierarchy the order is very important and I want to run frequent(ish) 
re-ordering of both subsets and entire trees and even more frequent inserts.

Scenario 1: I want to insert a child into the 1.1 subtree. The next item should be 1.1.3 and I can't figure out any other way to do this other than to subquery the children and to figure out the max 
child ID, add one to it which is a race condition waiting to happen.


Scenario 2: I now decide the recently inserted item is the second most important so I reset the ID to 1.1.2 and then increment 1.1.2 (and possibly everything below).  Again this is both prone to 
race conditions and involves a heavy update.


Is there a better way to deal with this or is the complexity unavoidable?

Maybe you could try a hybrid approach with genealogical paths, represented by 
arrays, and a (possible bidirectional) linked list storing the siblings of the 
same parent.
Basically what you'd normally want is to convert your problem into something 
that can be represented in such a way that it can run fast on postgresql.


I should state that like most database reads will be much more frequent than 
writes and inserts will be more frequent than updates (re-ordering)



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming Replication archive_command is really needed?

2017-07-20 Thread Achilleas Mantzios

On 20/07/2017 23:07, Leonardo M. Ramé wrote:



El 20/07/17 a las 16:57, Andreas Kretschmer escribió:

On 20 July 2017 21:46:09 GMT+02:00, "Leonardo M. Ramé"  
wrote:

Hi, I wonder if archive_mode=on and archive_command parameters in
postgresql.conf are really needed for streaming replication between two

servers (master-slave).

Regards,


No.

Andreas



So, can I just comment those commands and remove the main/archive directory?

You'll lose the capability of PITR. Also what do you mean main/archive 
directory? I hope you don't mean data/pg_xlog .


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] log_filename

2017-07-18 Thread Achilleas Mantzios

On 18/07/2017 11:36, Walter Nordmann wrote:

May be i'm blind: what did you really do?  changed postgresql.conf?


He hardcoded it in the postgresql.conf .

regards
walter


Am 18.07.2017 um 10:31 schrieb Ahtesham Karajgi:

Add the version in the log_filename itself.

Below is the test case.

naveed=# show log_filename ;
  log_filename

 postgresql-%Y-%m-%d_%H%M%S.log
(1 row)

naveed=# select pg_reload_conf();
 pg_reload_conf

 t
(1 row)

naveed=# show log_filename ;
log_filename

 postgresql-9.5-%Y-%m-%d_%H%M%S.log
(1 row)

test=# \q
[postgres@localhost ~]$
[postgres@localhost ~]$ ls -lrth /usr/local/pgsql/data/pg_log | tail -2
-rw---. 1 postgres postgres 3.3K Jul 18 01:25 
postgresql-2017-07-18_00.log
-rw---. 1 postgres postgres 4.1K Jul 18 01:27 
postgresql-9.5-2017-07-18_012530.log
[postgres@localhost ~]$






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] RAM, the more the merrier?

2017-06-29 Thread Achilleas Mantzios

On 29/06/2017 17:19, Willy-Bas Loos wrote:

Hi,

We have a postgresql database that is now 1.4TB in disksize and slowly growing.
In the past, we've had (read) performance trouble with this database and the solution was to buy a server that can fit the db into memory. It had 0.5 TB of RAM and at the time it could hold all of 
the data easily.

Those servers are now old and the db has outgrown the RAM and we are doing more 
reads and writes too (but the problem has not yet returned).

So i am looking into buying new servers. I'm thinking of equipping it with 1TB of RAM and room to expand. So the database will not fit completely, but largely anyway. Also, if we can afford it, it 
will have SSDs instead of RAID10 SAS spindles.


But I've read that there is some kind of maximum to the shared_buffers, where 
increasing it would actually decrease performance.
Is 1TB of RAM, or even 2TB always a good thing?
And is there anything special that I should look out for when configuring such 
a server?
Or would it be much better to buy 2 smaller servers and tie them together 
somehow? (partitioning, replication, ...)


Our DB is also on the 1T+ range. It is hosted in a cloud VM, with only 32GB RAM 
but ultra fast SSD disks. No problems.
IIRC the "fit DB into RAM" was a trend many years back. The new recommendation for shared buffers is about 25% of RAM. Leaving the rest to be utilized mainly by the kernel cache, also by other 
programs in the system.

--
Willy-Bas Loos



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Accessing DB2 tables from postgresql

2017-06-27 Thread Achilleas Mantzios

On 27/06/2017 13:11, Swapnil Vaze wrote:

Hello,

I am trying to access few table present in DB2 LUW from postgres9.5 database.

I have installed unixODBC driver and connection to DB2 is working fine.
I have installed CartoDB/odbc_fdw foreign data wrappers.
I have user below commands to create foreign table:
$ create extension odbc_fdw;
$ CREATE USER MAPPING FOR postgres SERVER odbc_server_db2 options(odbc_UID 
'', odbc_PWD '');
$ CREATE FOREIGN TABLE odbc_table (CTGRY_ID INTEGER, CTGRY_CD 
VARCHAR(10),UPDT_TS TIMESTAMP) SERVER odbc_server_db2 OPTIONS (database 
'TESTV9', schema 'schema1', table 'table1' );



You have omitted the CREATE SERVER command. Can you query the mainframe using 
isql ?


All commands work fine, however when I try to select data from table it throws 
error:
$ select * from odbc_table;
ERROR:  Executing ODBC query

Can anyone help me here?
How can I access DB2 LUW or zOS database tables from postgres?


I used to work with MVS many years ago. Good luck with your project.
For better diagnosis open all logs in both machines (postgresql, odbc, MVS, 
DB2) and have a detailed view on them.



--
Thanks & Regards,
Swapnil Vaze



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] pglogical vs. built-in logical replication in pg-10

2017-06-23 Thread Achilleas Mantzios

On 22/06/2017 20:30, Andres Freund wrote:

On 2017-06-22 18:10:40 +0300, Achilleas Mantzios wrote:

Once again having pg_largeobject as a system-catalog prevents LOs
from working smoothly. Neither replication nor having LOs on a
different tablespace (by moving pg_largeobject) works.

I think logical decoding was designed for supporting DML SQL commands
(i.e. a finite set of commands) and not specific functions (lo_*)
which by nature can be arbitrary, infinite and version specific.

That's not really the reason. The first reason its currently unsupported
is that LOs are stored in a system catalog, and currently all system
catalogs are excluded from the change stream.  The second problem is how
exactly to represent the changes - we can't represent it as the whole LO
being changed, as that'd increase the volume of WAL and replicated
writes dramatically.  Thus we need to invent an API that can represent
creation, deletion, and writes to arbitrary offsets, for output plugins.


Thanx for the insight.




I wish PG in some future version will address these quirks so one can operate 
on LOs more smoothly.

You're welcome to help...


Greetings,

Andres Freund




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pglogical vs. built-in logical replication in pg-10

2017-06-22 Thread Achilleas Mantzios

On 22/06/2017 17:46, Andreas Joseph Krogh wrote:

På torsdag 22. juni 2017 kl. 15:25:20, skrev Achilleas Mantzios 
mailto:ach...@matrix.gatewaynet.com>>:

On 22/06/2017 13:38, Andreas Joseph Krogh wrote:

På torsdag 22. juni 2017 kl. 11:43:02, skrev Achilleas Mantzios 
mailto:ach...@matrix.gatewaynet.com>>:

On 22/06/2017 11:21, Andreas Joseph Krogh wrote:

Hi.
1. Why should one prefer built-in logical replication in pg-10 to 
pglogical, does it do anything pglogical doesn't?
It seems pglogical is more feature-rich...
2. As I understand built-in logical replication in pg-10 doesn't 
support large-objects, which we use a lot. Does pglogical replicate large 
objects? I cannot find any notes about
large-objects under "Limitations and Restrictions": 
https://www.2ndquadrant.com/en/resources/pglogical/pglogical-docs/

You may do a simple test, create a table with a largeobject and try to 
read the logical stream, if it cannot represent the lo_import, lo_open, 
lowrite, lo_close (and I 'd bet they can't be
encoded) then neither pglogical (being based on the same logical 
decoding technology) will support them

The point of email-lists like this is that one may share knowledge so one 
doesn't have to test everything one self, and can build on knowledge from 
others. I'm looking for an answer from
someone who's not betting, but knows.

I gave you enough knowledge already. Here's some more :
- go and install 10
- create a table containing one col with type oid (large object) and one 
bytea
- follow the simple setup here : 
https://www.postgresql.org/docs/10/static/logicaldecoding-example.html
- insert a row
- Do again : SELECT * FROM pg_logical_slot_get_changes('regression_slot', 
NULL, NULL);

Do you see any of your oid image data in the output? Do you see any of the bytea ? (the answer 
here in 9.5 is  "no"/"yes").
If in 10.0 is still the case, then you should think about moving to bytea.

Hm, it turns out it's not quite that simple...
Test-case:
create table drus(id bigint primary key, lo oid, data bytea);
SELECT * FROM pg_create_logical_replication_slot('my_slot', 'test_decoding');
INSERT INTO drus (id, lo, data) values(1, lo_import('/tmp/faktura_27.pdf'), 
decode('AAAEEE', 'hex'));

select * from drus;
┌┬─┬──┐
│ id │   lo│   data   │
├┼─┼──┤
│  1 │ 2873269 │ \xaaaeee │
└┴─┴──┘


SELECT * FROM pg_logical_slot_get_changes('my_slot', NULL, NULL);
┌┬──┬┐
│lsn │ xid  │  data 
 │
├┼──┼┤
│ B/E585B858 │ 9391 │ BEGIN 9391
 │
│ B/E586BE78 │ 9391 │ table public.drus: INSERT: id[bigint]:1 lo[oid]:2873269 
data[bytea]:'\xaaaeee' │
│ B/E586BF80 │ 9391 │ COMMIT 9391   
 │
└┴──┴┘
(3 rows)

So far so good, the oid-value (2873269) is apparently in the change-set, but...

If the data itself of the LO are not there then this is not so good.

Set up publication:
CREATE PUBLICATION bolle FOR ALL TABLES;
CREATE PUBLICATION

=== ON REPLICA ===
# create table on replica:
create table drus(id bigint primary key, lo oid, data bytea);
# create subscription:
CREATE SUBSCRIPTION mysub CONNECTION 'host=localhost port=5433 user=andreak 
dbname=fisk' PUBLICATION bolle;
NOTICE:  created replication slot "mysub" on publisher
CREATE SUBSCRIPTION
2017-06-22 16:38:34.740 CEST [18718] LOG:  logical replication apply worker for 
subscription "mysub" has started
2017-06-22 16:38:34.747 CEST [18720] LOG:  logical replication table synchronization worker for 
subscription "mysub", table "drus" has started
2017-06-22 16:38:35.746 CEST [18720] LOG:  logical replication table synchronization worker for 
subscription "mysub", table "drus" has finished

Looks good:
select * from drus;
┌┬─┬──┐
│ id │   lo│   data   │
├┼─┼──┤
│  1 │ 2873269 │ \xaaaeee │
└┴─┴──┘
(1 row)

...until :
SELECT lo_export(drus.lo, '/tmp/faktura.pdf') from drus where id = 1;
2017-06-22 16:40:04.967 CEST [18657] ERROR:  large object 2873269 does not exist
2017-06-22 16:40:04.967 CEST [18657] STATEMENT:  SELECT lo_export(drus.lo, 
'/tmp/faktura.pdf') from drus where id = 1;
ERROR:  large object 28732

Re: [GENERAL] pglogical vs. built-in logical replication in pg-10

2017-06-22 Thread Achilleas Mantzios

On 22/06/2017 13:38, Andreas Joseph Krogh wrote:

På torsdag 22. juni 2017 kl. 11:43:02, skrev Achilleas Mantzios 
mailto:ach...@matrix.gatewaynet.com>>:

On 22/06/2017 11:21, Andreas Joseph Krogh wrote:

Hi.
1. Why should one prefer built-in logical replication in pg-10 to 
pglogical, does it do anything pglogical doesn't?
It seems pglogical is more feature-rich...
2. As I understand built-in logical replication in pg-10 doesn't support 
large-objects, which we use a lot. Does pglogical replicate large objects? I 
cannot find any notes about large-objects
under "Limitations and Restrictions": 
https://www.2ndquadrant.com/en/resources/pglogical/pglogical-docs/

You may do a simple test, create a table with a largeobject and try to read 
the logical stream, if it cannot represent the lo_import, lo_open, lowrite, 
lo_close (and I 'd bet they can't be
encoded) then neither pglogical (being based on the same logical decoding 
technology) will support them

The point of email-lists like this is that one may share knowledge so one doesn't have to test everything one self, and can build on knowledge from others. I'm looking for an answer from someone 
who's not betting, but knows.

I gave you enough knowledge already. Here's some more :
- go and install 10
- create a table containing one col with type oid (large object) and one bytea
- follow the simple setup here : 
https://www.postgresql.org/docs/10/static/logicaldecoding-example.html
- insert a row
- Do again : SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, 
NULL);

Do you see any of your oid image data in the output? Do you see any of the bytea ? (the answer here 
in 9.5 is  "no"/"yes").
If in 10.0 is still the case, then you should think about moving to bytea.


Thanks.
--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
<https://www.visena.com>



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] pglogical vs. built-in logical replication in pg-10

2017-06-22 Thread Achilleas Mantzios

On 22/06/2017 11:21, Andreas Joseph Krogh wrote:

Hi.
1. Why should one prefer built-in logical replication in pg-10 to pglogical, 
does it do anything pglogical doesn't?
It seems pglogical is more feature-rich...
2. As I understand built-in logical replication in pg-10 doesn't support large-objects, which we use a lot. Does pglogical replicate large objects? I cannot find any notes about large-objects under 
"Limitations and Restrictions": https://www.2ndquadrant.com/en/resources/pglogical/pglogical-docs/
You may do a simple test, create a table with a largeobject and try to read the logical stream, if it cannot represent the lo_import, lo_open, lowrite, lo_close (and I 'd bet they can't be encoded) 
then neither pglogical (being based on the same logical decoding technology) will support them.

--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
<https://www.visena.com>



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Question about jsonb and data structures

2017-06-21 Thread Achilleas Mantzios

On 21/06/2017 01:01, Emilie Laffray wrote:

Hello,

I have been playing with Postgresql recently with a large table and I have 
started looking at reducing the number of rows in that table.
One idea to reduce the actual size, I thought I would "compress" the data 
structure into a JSON object (more on that later).
The table is pretty straightforward in itself
other_id integer
type_id integer
label_id integer
rank_id integer
value real

and the goal is to move to a data structure where we have
other_id integer
value jsonb

There are many things in the table that is not optimal for legacy reasons and I 
can't just get rid of them.

I looked at several json object data structure to see if I could make it work 
notably
[{"type":1,"label":1,"rank":1,"value":.5},{"type":1,"label":2,"rank":1,"value":.25}]
{"1":{"1":{"rank":1,"value":.5},"2":{"rank":1,"value":.25}}}

For better or worse, the first one would be the best for me as I can do a 
simple query like this using the GIN index built on top of value:
SELECT *
FROM mytable
WHERE value @> '[{"type":1,"rank":1,"label":2}]'

Effectively, I would want to extract the value corresponding to my condition 
using simple SQL aka not having to write a function extracting the json.

The experiment on the second data structure shows that it is not as convenient 
as I may need to perform search on either type, label, rank and various 
combinations of the fields.


Maybe you could try smth like :
test=# select * from lala;
 id |txt
+
  1 | one
  2 | two
  3 | ZZZbabaZZZ
  4 | ZZZbabaZZZ
  5 | ZZZbabaZZZ
  6 | ZZZbabaZZZ
  7 | ZZZbabaZZZ
  8 | ZZZbabaZZZ
  9 | ZZZbabaZZZ
 10 | ZZZbabaZZZ
 11 | ZZZbabaZZZ
 12 | ZZZbabaZZZ
 13 | ZZZbabaZZZ
(13 rows)

select foo.* FROM (select id,to_jsonb(lala) as jzon from lala) as foo where jzon @> 
'{"id":5}';


Am I missing something?

Thanks in advance,
Emilie Laffray



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table Updatable By Trigger Only

2017-06-20 Thread Achilleas Mantzios

On 20/06/2017 14:43, Osahon Oduware wrote:

Hi All,

I have a trigger on a PostGIS table (say table A) that automatically updates another PostGIS table (say table B). Also, users connect to these tables (table A and B) using QGIS. However, I want the 
updates to table B to be done by the trigger only (i.e. I don't want table B to be updated from QGIS).


I have tried revoking UPDATE permissions on table B, but this prevents the 
trigger from updating the table also as the trigger has to work with the 
permissions of the user.

*Is there a way of making table B updatable by the trigger only?*



Write an ON UPDATE trigger on table B, and inside the code check for pg_trigger_depth() . If this is == 1 (called by user UPDATE) then RAISE an exception. If it is >1 then it is called by the other 
trigger,



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-12 Thread Achilleas Mantzios

On 12/06/2017 10:46, Harry Ambrose wrote:

Hi,

> Their suggestion is to upload to Google Drive. That or use a third party 
site, like Dropbox.

I have uploaded the jar to dropbox, link below (please let me know if you have 
any issues downloading):

https://www.dropbox.com/s/96vm465i7rwhcf8/toast-corrupter-aio.jar?dl=0


After 2 full attempts, (and after bringing my poor - old workstation to its 
knees) it still does not produce the supposed ERROR :

update 16
update 18
Updated all
Attempting vacuum
Vacuum completed
Dropping the table
=
New attempt - number 3
Creating the table if it does not exist
Inserting the rows
Executing  0
Executing  4
^C

PostgreSQL version : 9.3.4


Best wishes,
Harry



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-11 Thread Achilleas Mantzios

On 09/06/2017 19:02, Harry Ambrose wrote:

Hi,
No error messages found.

- is your RAM ECC? Did you run any memtest?

Yes, memory is ECC. No error messages found.


So I guess you run memtest86+ and it reported that your memory is indeed ECC 
and also that it is working properly?

Best wishes,
Harry



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-09 Thread Achilleas Mantzios

On 09/06/2017 14:44, Harry Ambrose wrote:

Hi Tom,

Thanks for attempting to replicate the issue.

Anyway, the bad news is I couldn't reproduce the problem then and I can't
now.  I don't know if it's a timing issue or if there's something critical
about configuration that I'm not duplicating.  Can you explain what sort
of platform you're testing on, and what nondefault configuration settings
you're using?


Further details about the environment that I can replicate on below:

- Non default postgresql.conf settings:
checkpoint_segments = 192
checkpoint_completion_target = 0.9
checkpoint_timeout = 5min
wal_keep_segments = 256
wal_writer_delay = 200ms
archive_mode = on
archive_command = 'rsync -e ssh -arv  /wal/pg_xlog/%f postgres@:/wal/pg_xlog'
archive_timeout = 60
syslog_facility = 'LOCAL0'
log_statement = 'mod'
syslog_ident = 'postgres'
log_line_prefix = '%h %m  %p %c %u %a  %e '
log_timezone = 'GB'
track_activities = on
track_counts = on
datestyle = 'iso, mdy'
timezone = 'GB'
default_text_search_config = 'pg_catalog.english'
array_nulls = on
sql_inheritance = on
standard_conforming_strings = on
synchronize_seqscans = on
transform_null_equals = off
- Two node master/slave setup using streaming replication (without slots).
- CentOS 6.9 (2.6.32-696.el6.x86_64).
- PostgreSQL 9.4.10 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 
20120313 (Red Hat 4.4.7-17), 64-bit.
- 64GiB RAM.
- AMD Opteron(TM) Processor 6238.
- pg_default sat on 2 disk RAID-1 conifugration (ext3 filesystem).
- Custom tablespaces (where the errors occur) sat on 4 disk RAID-10 (ext3 
filesystem).
- All disks are HP 600G SAS 6.0Gbps with P420(i) controllers and battery backed 
cache enabled.


Maybe you could give some info on :
- your ext3 mkfs and mount options (journal, barriers, etc)
- your controller setup (battery should be working good and cache mode set to 
write back)
- your disks setup (write cache should be disabled)
- you should check your syslogs/messages for any errors related to storage
- is your RAM ECC? Did you run any memtest?
- is your CPU overheating ?
- have you experienced any crashes/freezes ?



Please let me know if you require further info.

Best wishes,
Harry




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-07 Thread Achilleas Mantzios

On 07/06/2017 17:49, Harry Ambrose wrote:

Hi,
Out of interest, are you using any tablespaces other than pg_default? I can 
only replicate the issue when using separately mounted tablespaces.
One lesson I learned from the BSD camp when dealing with random freezes and panics : when all else fails to give an answer it is time to start blaming my hardware. Are those tablespaces on any cheap 
SSD's ?


I have been investigating this quite extensively and everything I can find on the web suggests data corruption. However running the the following DO reports no errors and I can dump the database 
without issue.



You don't use index when pg_dump . If only the index is corrupted you can get 
away with dump/reload (but for big DBs this is unrealistic)

I also found the following has been reported: 
https://www.postgresql.org/message-id/20161201165505.4360.28...@wrigleys.postgresql.org

Best wishes,
Harry


On 7 Jun 2017, at 15:22, Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

On 07/06/2017 16:33, ADSJ (Adam Sjøgren) wrote:

Our database has started reporting errors like this:

  2017-05-31 13:48:10 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 14242189 in pg_toast_10919630
  ...
  2017-06-01 11:06:56 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 19573520 in pg_toast_10919630

(157 times, for different toast values, same pg_toast_nnn). pg_toast_10919630
corresponds to a table with around 168 million rows.

These went away, but the next day we got similar errors from another
table:

  2017-06-02 05:59:50 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 47060150 in pg_toast_10920100
  ...
  2017-06-02 06:14:54 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 47226455 in pg_toast_10920100

(Only 4 this time) pg_toast_10920100 corresponds to a table with holds
around 320 million rows (these are our two large tables).

The next day we got 6 such errors and the day after 10 such errors. On
June 5th we got 94, yesterday we got 111, of which one looked a little
different:

  2017-06-06 17:32:21 CEST ERROR:  unexpected chunk size 1996 (expected 1585) 
in final chunk 0 for toast value 114925100 in pg_toast_10920100

and today the logs have 65 lines, ending with these:

  2017-06-07 14:49:53 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 131114834 in pg_toast_10920100
  2017-06-07 14:53:41 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 131149566 in pg_toast_10920100

First try to find which tables those toast relations refer to :
select 10919630::regclass , 10920100::regclass ;
Are those critical tables? Can you restore them somehow?

Also you may consider
REINDEX TABLE pg_toast.pg_toast_10920100;
REINDEX TABLE pg_toast.pg_toast_10919630;
REINDEX TABLE ;
REINDEX TABLE ;

also VACUUM the above tables.

You might want to write a function which iterates over the damaged table's rows 
in order to identify the damaged row(s). And then do some good update to create 
a new version.


The database is 10 TB on disk (SSDs) and runs on a 48 core server with 3
TB RAM on Ubuntu 14.04 (Linux 3.18.13).

We are updating rows in the database a lot/continuously.

There are no apparent indications of hardware errors (like ECC) in
dmesg, nor any error messages logged by the LSI MegaRAID controller, as
far as I can tell.

We are running PostgreSQL 9.3.14 currently.

The only thing I could see in the release notes since 9.3.14 that might
be related is this:

 "* Avoid very-low-probability data corruption due to testing tuple
visibility without holding buffer lock (Thomas Munro, Peter Geoghegan,
Tom Lane)"

Although reading more about it, it doesn't sound like it would exhibit
the symptoms we see?

We have recently increased the load (to around twice the number of
cores), though, which made me think we could be triggering corner cases
we haven't hit before.

We will be upgrading to PostgreSQL 9.3.17 during the weekend, but I'd like to 
hear
if anyone has seen something like this, or have some ideas of how to
investigate/what the cause might be.


  Best regards,

Adam



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org 
<mailto:pgsql-general@postgresql.org>)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-07 Thread Achilleas Mantzios

On 07/06/2017 16:33, ADSJ (Adam Sjøgren) wrote:

Our database has started reporting errors like this:

   2017-05-31 13:48:10 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 14242189 in pg_toast_10919630
   ...
   2017-06-01 11:06:56 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 19573520 in pg_toast_10919630

(157 times, for different toast values, same pg_toast_nnn). pg_toast_10919630
corresponds to a table with around 168 million rows.

These went away, but the next day we got similar errors from another
table:

   2017-06-02 05:59:50 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 47060150 in pg_toast_10920100
   ...
   2017-06-02 06:14:54 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 47226455 in pg_toast_10920100

(Only 4 this time) pg_toast_10920100 corresponds to a table with holds
around 320 million rows (these are our two large tables).

The next day we got 6 such errors and the day after 10 such errors. On
June 5th we got 94, yesterday we got 111, of which one looked a little
different:

   2017-06-06 17:32:21 CEST ERROR:  unexpected chunk size 1996 (expected 1585) 
in final chunk 0 for toast value 114925100 in pg_toast_10920100

and today the logs have 65 lines, ending with these:

   2017-06-07 14:49:53 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 131114834 in pg_toast_10920100
   2017-06-07 14:53:41 CEST ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 131149566 in pg_toast_10920100

First try to find which tables those toast relations refer to :
select 10919630::regclass , 10920100::regclass ;
Are those critical tables? Can you restore them somehow?

Also you may consider
REINDEX TABLE pg_toast.pg_toast_10920100;
REINDEX TABLE pg_toast.pg_toast_10919630;
REINDEX TABLE ;
REINDEX TABLE ;

also VACUUM the above tables.

You might want to write a function which iterates over the damaged table's rows 
in order to identify the damaged row(s). And then do some good update to create 
a new version.


The database is 10 TB on disk (SSDs) and runs on a 48 core server with 3
TB RAM on Ubuntu 14.04 (Linux 3.18.13).

We are updating rows in the database a lot/continuously.

There are no apparent indications of hardware errors (like ECC) in
dmesg, nor any error messages logged by the LSI MegaRAID controller, as
far as I can tell.

We are running PostgreSQL 9.3.14 currently.

The only thing I could see in the release notes since 9.3.14 that might
be related is this:

  "* Avoid very-low-probability data corruption due to testing tuple
 visibility without holding buffer lock (Thomas Munro, Peter Geoghegan,
 Tom Lane)"

Although reading more about it, it doesn't sound like it would exhibit
the symptoms we see?

We have recently increased the load (to around twice the number of
cores), though, which made me think we could be triggering corner cases
we haven't hit before.

We will be upgrading to PostgreSQL 9.3.17 during the weekend, but I'd like to 
hear
if anyone has seen something like this, or have some ideas of how to
investigate/what the cause might be.


   Best regards,

 Adam



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inheritance and foreign keys

2017-05-25 Thread Achilleas Mantzios

The way I do it is the following :
- ensure a common sequence for the ID for all tables in the inheritance tree 
(usually one parent and one or more children)
- enforce normal FK constraints for all FK relations within the same 
"realm"/"tenant"/"schema" etc, i.e. where it makes sense
- for enforcing FK constraints between tables in different "realms", you should implement this as a pair of CONSTRAINT triggers which implement the two sides of the FK dependency. For the referencing 
tables you'd want to check upon INSERT or UPDATE, with smth like :


CREATE OR REPLACE FUNCTION 
public.accounting_docs_cases_fk_to_public_accounting_docs() RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
DECLARE
tmp INTEGER;
BEGIN
IF (TG_OP = 'DELETE') THEN
  RAISE EXCEPTION 'TRIGGER : % called on unsuported op : %',TG_NAME, 
TG_OP;
END IF;
SELECT ad.id INTO tmp FROM public.accounting_docs ad WHERE 
ad.id=NEW.acct_doc_id;
IF NOT FOUND THEN
  RAISE EXCEPTION '%''d % (id=%) with NEW.acct_doc_id (%) does not 
match any accounting_docs ',TG_OP, TG_TABLE_NAME, NEW.id, NEW.acct_doc_id USING 
ERRCODE = 'foreign_key_violation';
END IF;
RETURN NEW;
END
$$
;

-- here public.accounting_docs is a top level INHERITANCE table. Has 
bcompanyFOO.accounting_docs and bcompanyBAR.accounting_docs as inherited tables

CREATE CONSTRAINT TRIGGER accounting_docs_cases_fk_to_public_accounting_docs_tg 
AFTER INSERT OR UPDATE
ON public.accounting_docs_cases FROM public.accounting_docs DEFERRABLE FOR EACH 
ROW EXECUTE PROCEDURE 
public.accounting_docs_cases_fk_to_public_accounting_docs();

For the referenced tables you'd want to check upon UPDATE or DELETE with something
like :

CREATE OR REPLACE FUNCTION 
public.accounting_docs_fk_from_accounting_docs_cases() RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
DECLARE
tmp INTEGER;
BEGIN
IF (TG_OP = 'INSERT') THEN
  RAISE EXCEPTION 'TRIGGER : % called on unsuported op : %',TG_NAME, 
TG_OP;
END IF;
IF (TG_OP = 'DELETE' OR OLD.id <> NEW.id) THEN
  SELECT adc.id INTO tmp FROM accounting_docs_cases adc WHERE 
adc.acct_doc_id=OLD.id;
  IF FOUND THEN
RAISE EXCEPTION '%''d % (OLD id=%) matches existing 
accounting_docs_cases with id=%',TG_OP, TG_TABLE_NAME, OLD.id,tmp USING ERRCODE 
= 'foreign_key_violation';
  END IF;
END IF;
IF (TG_OP = 'UPDATE') THEN
RETURN NEW;
ELSE
RETURN OLD;
END IF;
END
$$
;

CREATE CONSTRAINT TRIGGER accounting_docs_fk_from_accounting_docs_cases AFTER 
DELETE OR UPDATE
ON public.accounting_docs FROM accounting_docs_cases DEFERRABLE FOR EACH ROW 
EXECUTE PROCEDURE accounting_docs_fk_from_accounting_docs_cases();

CREATE CONSTRAINT TRIGGER accounting_docs_fk_from_accounting_docs_cases AFTER 
DELETE OR UPDATE
ON bcompanyFOO.accounting_docs FROM accounting_docs_cases DEFERRABLE FOR EACH 
ROW EXECUTE PROCEDURE accounting_docs_fk_from_accounting_docs_cases();

CREATE CONSTRAINT TRIGGER accounting_docs_fk_from_accounting_docs_cases AFTER 
DELETE OR UPDATE
ON bcompanyBAR.accounting_docs FROM accounting_docs_cases DEFERRABLE FOR EACH 
ROW EXECUTE PROCEDURE accounting_docs_fk_from_accounting_docs_cases();


Note that this is still not a proper FK constraint, since that would require a true
common unique index across all tables of the inheritance tree, which is not
possible as of today.

On 25/05/2017 14:48, Jayadevan M wrote:

Hi,

I designed three tables so that one table inherits another, and the third table references the parent table. If a record is inserted into the third table and the value does exist in the parent table 
indirectly, because it is present in the inherited table, I still get an error.

Is some option available while creating the foreign key so that it will 
consider the data in the child tables also while doing a constraint validation?

create table myt(id serial primary key);
create table mytc (like myt);
alter table mytc inherit myt;
insert into myt values(1);
insert into mytc values(2);
select * from myt;
 id
----
  1
  2
(2 rows)

create table a (id integer references myt(id));
insert into a values(2);
ERROR:  insert or update on table "a" violates foreign key constraint 
"a_id_fkey"
DETAIL:  Key (id)=(2) is not present in table "myt".


Regards,
Jayadevan



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Suggestion to improve select pg_reload_conf()

2017-04-03 Thread Achilleas Mantzios

On 03/04/2017 10:31, Thomas Kellerer wrote:

I would like to suggest an improvement to the select pg_reload_conf() function.

Currently this will only return true or false indicating if reloading was 
successful.

I think it would be a "nice-to-have" if the function would also return the GUCs 
that have been changed, similar to what is being written to the logfile.

To not break existing code (e.g. scripts that only expect true/false), this could be done 
through an optional boolean parameter (e.g. named "verbose").

To my understanding pg_reload_conf just SIGHUPs the postmaster, so it might
not be so trivial.
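Not a substitute for the proposed feature, but since 9.5 you can get close from SQL after the reload; a sketch:

SELECT pg_reload_conf();
-- settings changed in the config files that still need a restart to take effect
SELECT name, setting FROM pg_settings WHERE pending_restart;
-- per-file entries that failed to apply, with the reason
SELECT sourcefile, name, setting, error FROM pg_file_settings WHERE error IS NOT NULL;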


Any thoughts?

Thomas






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Copy database to another host without data from specific tables

2017-03-07 Thread Achilleas Mantzios

On 07/03/2017 09:02, Panagiotis Atmatzidis wrote:

Hello,

I have 2 RDS instances on AWS running PSQL 9.4.7.

I want to make a clone of database1 which belongs to user1, to database2 which 
belongs to user2. Database1 has 20+ tables. I want to avoid copying the DATA 
sitting on 5 tables on database1 (many Gigs).

I've read one too many posts about how to perform the actions with "pg_dump" and 
"pg_restore" but I'm trying to figure out the easiest way to do this. The process I have 
in mind is this:

1) pg_dump the schema from DB1 to DB2 using --no-owner and pg_restore with 
--role=user2
2) pg_dump -Fc --no-owner --data-only -t 'table1' from DB1 and then restore with 
pg_restore -t 'table' --role=user2  to DB2
Are you talking about DBs on the same PgSQL cluster (RDS instance), or are you trying to clone a DB from the 1st RDS instance to the 2nd? In either case, you cannot avoid copying. But if we're talking
about the whole cluster

This procedure though is very time consuming (although it could be scripted). 
Is there any better / faster / safer way to do this?

you could design smth based on replication, have a warm/hot standby applying 
changes from the primary, and then implement smth like :
- promote (i.e. recover and start in a new timeline as a new primary)
- run a script to change ownership to user2.
So the idea is to have pre-copied the data, so that the whole final procedure is very fast. Note, however, that after you do that, you would have to re-setup replication again, and that would be 
costly (you can't avoid ultimately copying data). Maybe it could help if you tell us the whole use case.
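If the goal is simply "everything except the data of those 5 big tables", a single pg_dump pass may also do the job. A sketch, with hypothetical table names, assuming pg_dump 9.2+ for --exclude-table-data (which dumps the tables' definitions but skips their rows):

pg_dump -h host1 -U user1 -Fc \
    --exclude-table-data=big_table1 --exclude-table-data=big_table2 \
    -f database1.dump database1
pg_restore -h host2 -U user2 --no-owner --role=user2 -d database2 database1.dump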



Thanks.

--
Panagiotis (atmosx) Atmatzidis

email:  a...@convalesco.org
URL:http://www.convalesco.org
GnuPG ID: 0x1A7BFEC5
gpg --keyserver pgp.mit.edu --recv-keys 1A7BFEC5

"Everyone thinks of changing the world, but no one thinks of changing himself.” 
- Leo Tolstoy









--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG on SSD

2017-03-03 Thread Achilleas Mantzios

On 03/03/2017 01:31, Scott Marlowe wrote:

On Thu, Mar 2, 2017 at 12:42 PM, scott ribe  wrote:

Is it reasonable to run PG on a mirrored pair of something like the Intel SSD 
DC 3610 series? (For example:
http://ark.intel.com/products/82935/Intel-SSD-DC-S3610-Series-480GB-2_5in-SATA-6Gbs-20nm-MLC)
 I'd *hope* that anything Intel classifies as a "Data Center SSD" would be 
reasonably reliable, have actually-working power loss protection etc, but is that the 
case?


 From the spec sheet they certainly seem to be safe against power loss.
I'd still test by pulling the power cables while running benchmarks to
be sure.

I've used the other Intel enterprise class ssds with good results on
the power plug pull tests.




+ Intel not only markets this as "Data Center SSD", moreover this seems to be in the 
respective high-end range within "Data Center SSDs".

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming Replication Without Downtime

2017-02-20 Thread Achilleas Mantzios

On 21/02/2017 01:49, Venkata B Nagothi wrote:


On Tue, Feb 21, 2017 at 6:53 AM, Gabriel Ortiz Lour mailto:ortiz.ad...@gmail.com>> wrote:

Hi!

  Thanks for pointing out pg_basebackup

  The issue I'm facing now is about missing WAL files.

  What i'm doing:
# su postgres -c 'pg_basebackup -D /var/lib/postgresql/9.1/main/ -x -h 
master -U sa_rep' ; service postgresql start

Ok, but between pg_basebackup and service postgresql start you must configure the db as a standby. You need to copy a pre-configured recovery.conf into the data dir (or wherever debian demands) and
then start. In 9.3 there is --write-recovery-conf which does this for you.
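A minimal recovery.conf sketch for 9.1, reusing the sa_rep user from the command above (the host name is a placeholder):

# recovery.conf, placed in the standby's data directory
standby_mode = 'on'
primary_conninfo = 'host=master port=5432 user=sa_rep'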


  The idea is to call "postgresql start" as soon as pg_basebackup ends.
  But I'm getting the following error:

FATAL:could notreceive data fromWAL stream:FATAL:requested WAL segment 
XXX has already been removed

  Shoud I just increase 'wal_keep_segments' ?


Yes, that is the way to go. But you need to know what number to increase
wal_keep_segments to, and that depends purely on the rate at which WAL is
being generated. Which version of PostgreSQL are you using, by the way?

Apparently 9.1. From 9.2 on you can always use -X stream, which makes increasing
wal_keep_segments unnecessary.


  If I do so, do I need do restart the master or just a reload will do it?


No need to restart, "reload" will do.

Venkata B N
Database consultant

    -- Forwarded message --
From: *Achilleas Mantzios* mailto:ach...@matrix.gatewaynet.com>>
Date: 2017-02-17 11:20 GMT-02:00
Subject: Re: [GENERAL] Streaming Replication Without Downtime
To: pgsql-general@postgresql.org <mailto:pgsql-general@postgresql.org>


Gabriel you are thinking this in the correct way, but it's really :

pg_basebackup -D <datadir> --write-recovery-conf --progress
--xlog-method=stream -h <master>
then you just edit recovery.conf (if needed), tweak postgresql.conf (if
needed) and start the standby.

On 17/02/2017 15:09, Gunnar "Nick" Bluth wrote:

(sorry for the toppost, mobile device)
What you're looking for is pg_basebackup with - - xlog=stream, I guess.

Regards,

Nick


Am 17. Februar 2017 14:06:36 MEZ schrieb Gabriel Ortiz Lour 
 <mailto:ortiz.ad...@gmail.com>:

Hi all,
  I've been searching for a way to initialize a new Hot Standby node 
with Streaming Replication withou the need for stop or even restarting the 
master.
  Of course the master is already with the needed SR configs.

  I know I have to use pg_start_backup/pg_stop_backup, but i'd like 
some tips, or a link to some tutorial, with the order of the steps.

  I assume it will be something like:

-  configure Slave for SR
-  pg_start_backup()
-  rsync PGDATA to slave
-  start PG on the slave
-  pg_stop_backup()

  Anything i'm thinking wrong?

Thanks in advance,
Gabriel


-- 
This message was sent from my Android mobile phone with K-9 Mail.



    -- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Streaming Replication Without Downtime

2017-02-17 Thread Achilleas Mantzios

Gabriel you are thinking this in the correct way, but it's really :

pg_basebackup -D <datadir> --write-recovery-conf --progress
--xlog-method=stream -h <master>
then you just edit recovery.conf (if needed), tweak postgresql.conf (if needed)
and start the standby.

On 17/02/2017 15:09, Gunnar "Nick" Bluth wrote:

(sorry for the toppost, mobile device)
What you're looking for is pg_basebackup with - - xlog=stream, I guess.

Regards,

Nick


Am 17. Februar 2017 14:06:36 MEZ schrieb Gabriel Ortiz Lour 
:

Hi all,
  I've been searching for a way to initialize a new Hot Standby node with
Streaming Replication without the need to stop or even restart the master.
  Of course the master is already with the needed SR configs.

  I know I have to use pg_start_backup/pg_stop_backup, but i'd like some 
tips, or a link to some tutorial, with the order of the steps.

  I assume it will be something like:

-  configure Slave for SR
-  pg_start_backup()
-  rsync PGDATA to slave
-  start PG on the slave
-  pg_stop_backup()

  Anything i'm thinking wrong?

Thanks in advance,
Gabriel


--
This message was sent from my Android mobile phone with K-9 Mail.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Auto-Rollback option

2017-02-13 Thread Achilleas Mantzios

Take a look at ON_ERROR_STOP variable.

\set ON_ERROR_STOP 1
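For example, to have a whole patch file applied as one unit and rolled back on the first error (file and db names are placeholders):

psql -v ON_ERROR_STOP=1 --single-transaction -f patch.sql mydb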

On 13/02/2017 15:55, Małgorzata Hubert wrote:

Hi,
is there any way to set Auto-Rollback : ON automatically, either during the
installation process or via a query (maybe something like set autocommit = 'on')?
We need it to automatically close the transaction if an error occurs while
applying patches.

Thanks in advance for the answer.
Best regards,
Malgorzata Pomykacz



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Error dumping 9.4: could not parse numeric array

2017-01-03 Thread Achilleas Mantzios

Hi Devrim HNY

On 03/01/2017 13:19, Devrim Gündüz wrote:

Hi,

I'm trying to take backup on my laptop, but getting an error. This is
PostgreSQL 9.4.10 on Fedora 25, installed using the community RPMS.

pg_dump: could not parse numeric array "2281": too many numbers
  
I can see this string in src/bin/pg_dump/common.c, but no idea why this

happens. gdb also did not give much info:

You need to recompile with -g, then put a breakpoint in parseOidArray to see
the call stack.
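Something along these lines (the binary path is from your session; "yourdb" is a placeholder):

gdb /usr/pgsql-9.4/bin/pg_dump
(gdb) break parseOidArray
(gdb) run yourdb
(gdb) bt    # at the breakpoint, shows which caller handed over the bad array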


===
Reading symbols from /usr/pgsql-9.4/bin/pg_dump...Reading symbols from 
/usr/lib/debug/usr/pgsql-9.4/bin/pg_dump.debug...done.
done.
(gdb) run
Starting program: /usr/pgsql-9.4/bin/pg_dump
Missing separate debuginfos, use: dnf debuginfo-install glibc-2.24-4.fc25.x86_64
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
pg_dump: could not parse numeric array "2281": too many numbers
[Inferior 1 (process 13625) exited with code 01]
===

pg_dump -v output is:

===
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading user-defined functions
pg_dump: could not parse numeric array "2281": too many numbers
===

Any idea why this is happening, and how can I find a solution?

Thanks!

Regards,



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replacing an existing unique constraint (not UNIQUE INDEX) with primary key

2016-12-20 Thread Achilleas Mantzios

On 20/12/2016 12:27, Andreas Joseph Krogh wrote:

På tirsdag 20. desember 2016 kl. 11:02:27, skrev Achilleas Mantzios 
mailto:ach...@matrix.gatewaynet.com>>:

On 20/12/2016 11:43, Andreas Joseph Krogh wrote:

[snip]

BEGIN;
ALTER TABLE person ADD CONSTRAINT person_pk PRIMARY KEY (entity_id);
alter table person drop constraint person_entity_id_key CASCADE;
alter table phone add CONSTRAINT phone_fk FOREIGN KEY (person_entity_id) 
REFERENCES person(entity_id);
alter table address add CONSTRAINT address_fk FOREIGN KEY (person_id) 
REFERENCES person(entity_id);
COMMIT;

Yea, I was hoping to avoid having to manually add the FK's to the referencing 
tables (34).
Is there really no way to accomplish this without DROP CONSTRAINT ...  CASCADE, 
hacking the system-catalogs or something?

You may write a script to output those 34 FK constraints. Definitely safer than
hacking pg_constraint.conindid.
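A sketch of such a script, assuming all 34 FKs are single-column (conkey[1]) and should be re-pointed at person(entity_id); run it, inspect the generated DDL, then execute it:

SELECT format(
           'ALTER TABLE %s ADD CONSTRAINT %I FOREIGN KEY (%I) REFERENCES person(entity_id);',
           conrelid::regclass,
           conname || '_new',
           (SELECT attname FROM pg_attribute
             WHERE attrelid = conrelid AND attnum = conkey[1])
       )
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'person'::regclass;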

Thanks.
--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
<https://www.visena.com>



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Replacing an existing unique constraint (not UNIQUE INDEX) with primary key

2016-12-20 Thread Achilleas Mantzios

On 20/12/2016 11:43, Andreas Joseph Krogh wrote:

Hi all.
For historical reasons I have a table which at first had an "id"-column (the PK) and 
later got an "entity_id"-column (which is a UNIQUE CONSTRAINT).
I'm now trying to get rid of the "id"-column and make the "entity_id"-column the new PK. The tricky part is that both of these columns are referenced as FK's from /many/ tables, so 
disabling/removing FKs is not so easy. I'm facing a problem when issuing:

ALTER TABLE person ADD PRIMARY KEY USING INDEX person_entity_id_key;
ERROR:  index "person_entity_id_key" is already associated with a constraint
A full example of what I'm trying to do (replacing the PK of the 
"person"-table) is here:

DROP TABLE IF EXISTS phone;
DROP TABLE IF EXISTS address;
DROP TABLE IF EXISTS person;

CREATE TABLE person (
    id        BIGINT PRIMARY KEY,
    entity_id BIGINT NOT NULL UNIQUE,
    name      VARCHAR NOT NULL
);

CREATE TABLE address (
    id        BIGINT PRIMARY KEY,
    person_id BIGINT NOT NULL REFERENCES person (id)
);

CREATE TABLE phone (
    id               BIGINT PRIMARY KEY,
    person_entity_id BIGINT NOT NULL REFERENCES person (entity_id),
    number           VARCHAR NOT NULL
);

INSERT INTO person (id, entity_id, name) VALUES (1, 101, 'Andreas'), (2, 102, 'Santa');
INSERT INTO address (id, person_id) VALUES (1, 1), (2, 2);
INSERT INTO phone (id, person_entity_id, number) VALUES (1, 101, '1800555123'), (2, 102, '1800555456');

-- Drop the deprecated foreign key on address
ALTER TABLE address
    DROP CONSTRAINT address_person_id_fkey;

-- Update address and make person_id point to person.entity_id instead of person.id
UPDATE address a
SET person_id = p.entity_id
FROM person p
WHERE p.id = a.person_id;

ALTER TABLE address
    ADD FOREIGN KEY (person_id) REFERENCES person (entity_id);

-- Drop the deprecated id-column
ALTER TABLE person
    DROP COLUMN id;

-- Try to make new PK using the UNIQUE CONSTRAINT person_entity_id_key
ALTER TABLE person
    ADD PRIMARY KEY USING INDEX person_entity_id_key;
ERROR:  index "person_entity_id_key" is already associated with a constraint

BEGIN;
ALTER TABLE person ADD CONSTRAINT person_pk PRIMARY KEY (entity_id);
alter table person drop constraint person_entity_id_key CASCADE;
alter table phone add CONSTRAINT phone_fk FOREIGN KEY (person_entity_id) 
REFERENCES person(entity_id);
alter table address add CONSTRAINT address_fk FOREIGN KEY (person_id) 
REFERENCES person(entity_id);
COMMIT;



I see that if I had declared person.entity_id without the UNIQUE-keyword and 
instead created a UNIQUE INDEX:
create UNIQUE INDEX person_entity_id_key on person(entity_id);
Then the ADD PRIMARY KEY USING INDEX command would have succeeded.
I have lots of queries which use GROUP BY person.id and which now should use
GROUP BY person.entity_id, without having to also list all the other selected
columns from the person table (grouping by the PK makes the rest functionally
dependent).
How do I proceed with this?
Thanks.
--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
<https://www.visena.com>



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] When updating row that has TOAST column, is the TOAST column also reinserted ? Or just the oid to the value?

2016-12-13 Thread Achilleas Mantzios

Great info Albe!

On 13/12/2016 16:20, Albe Laurenz wrote:

Dorian Hoxha wrote:

When updating a row that has a TOAST column, is the TOAST column also inserted? Or
just the oid?

Say I have a 1MB value in the TOAST column, and I update the row by changing
another column; since every update is an insert, will it also reinsert the
toast column? The column that I will update has an index, so I think a HOT
update won't work in this case? The same question also applies when
full_page_writes is enabled.


Using 9.6.

The TOAST table will remain unchanged by the UPDATE; you can see that with the
"pageinspect" contrib module:

CREATE TABLE longtext (
id integer primary key,
val text NOT NULL,
other integer NOT NULL
);

INSERT INTO longtext VALUES (
4,
(SELECT string_agg(chr((random()*25+65)::integer), '')
FROM generate_series(1, 2000)),
42
);

SELECT reltoastrelid, reltoastrelid::regclass FROM pg_class
WHERE oid = 'longtext'::regclass;

 reltoastrelid |      reltoastrelid
---------------+-------------------------
         25206 | pg_toast.pg_toast_25203
(1 row)

Use "pageinspect" to see the tuples in the table and the TOAST table:

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS id, t_attrs[2] AS val, t_attrs[3] 
AS other
FROM heap_page_item_attrs(get_raw_page('longtext', 0), 25203);

 t_xmin | t_xmax | t_ctid |   id    |                 val                 |  other
--------+--------+--------+---------+-------------------------------------+---------
   2076 |      0 | (0,1)  | \x0400  | \x0112d407d0077b627662              | \x2a00
(1 row)

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS chunk_id, t_attrs[2] AS chunk_seq
FROM heap_page_item_attrs(get_raw_page('pg_toast.pg_toast_25203', 0), 
25206);

 t_xmin | t_xmax | t_ctid |  chunk_id  | chunk_seq
--------+--------+--------+------------+-----------
   2076 |      0 | (0,1)  | \x7b62     | \x
   2076 |      0 | (0,2)  | \x7b62     | \x0100
(2 rows)

Now let's UPDATE:

UPDATE longtext SET other = -1 WHERE id = 4;

Let's look at the tuples again:

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS id, t_attrs[2] AS val, t_attrs[3] 
AS other
FROM heap_page_item_attrs(get_raw_page('longtext', 0), 25203);

 t_xmin | t_xmax | t_ctid |   id    |                 val                 |  other
--------+--------+--------+---------+-------------------------------------+---------
   2076 |   2077 | (0,2)  | \x0400  | \x0112d407d0077b627662              | \x2a00
   2077 |      0 | (0,2)  | \x0400  | \x0112d407d0077b627662              | \x
(2 rows)

A new tuple has been entered, but "val" still points to chunk ID 0x627b
(this is a little-endian machine).

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS chunk_id, t_attrs[2] AS chunk_seq
FROM heap_page_item_attrs(get_raw_page('pg_toast.pg_toast_25203', 0), 
25206);

 t_xmin | t_xmax | t_ctid |  chunk_id  | chunk_seq
--------+--------+--------+------------+-----------
   2076 |      0 | (0,1)  | \x7b62     | \x
   2076 |      0 | (0,2)  | \x7b62     | \x0100
(2 rows)

The TOAST table is unchanged!

This was a HOT update, but it works the same for a non-HOT update:

UPDATE longtext SET id = 1 WHERE id = 4;

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS id, t_attrs[2] AS val, t_attrs[3] 
AS other
FROM heap_page_item_attrs(get_raw_page('longtext', 0), 25203);

 t_xmin | t_xmax | t_ctid |   id    |                 val                 |  other
--------+--------+--------+---------+-------------------------------------+---------
   2076 |   2077 | (0,2)  | \x0400  | \x0112d407d0077b627662              | \x2a00
   2077 |   2078 | (0,3)  | \x0400  | \x0112d407d0077b627662              | \x
   2078 |      0 | (0,3)  | \x0100  | \x0112d407d0077b627662              | \x
(3 rows)

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS chunk_id, t_attrs[2] AS chunk_seq
FROM heap_page_item_attrs(get_raw_page('pg_toast.pg_toast_25203', 0), 
25206);

 t_xmin | t_xmax | t_ctid |  chunk_id  | chunk_seq
--------+--------+--------+------------+-----------
   2076 |      0 | (0,1)  | \x7b620000 | \x
   2076 |      0 | (0,2)  | \x7b62     | \x0100
(2 rows)

Yours,
Laurenz Albe




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] After upgrade to 9.6: waiting column does not exist

2016-12-09 Thread Achilleas Mantzios

On 09/12/2016 10:04, John R Pierce wrote:

On 12/8/2016 11:59 PM, Johann Spies wrote:

Now, when running the 9.6 server the following error message shows up regularly 
in the log:

postgres@template1 ERROR:  column "waiting" does not exist at character 217


As far as I understand the following remark in the release notes, the column 
'waiting'  should not exist:

 *

Improve the pg_stat_activity 
<https://www.postgresql.org/docs/9.6/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW>
 view's information about what a process is waiting for (Amit Kapila, Ildus
Kurbangaliev)

Historically a process has only been shown as waiting if it was waiting for 
a heavyweight lock. Now waits for lightweight locks and buffer pins are also 
shown in pg_stat_activity. Also, the
type of lock being waited for is now visible. These changes replace the 
waiting column with wait_event_type and wait_event.


How do I correct this?


are you running some sort of monitoring software that's periodically looking at pg_stat_activity? Whatever query it is doing needs to be rewritten to take into account the pg_stat_activity changes
in 9.6
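For instance, the old boolean maps onto the new columns roughly like this (a sketch):

-- 9.5 and earlier:
SELECT pid, query FROM pg_stat_activity WHERE waiting;
-- 9.6 and later:
SELECT pid, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event IS NOT NULL;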




Also, it would help to modify your log line prefix (log_line_prefix) to include
the client's address, PID, and application name.


--
john r pierce, recycling bits in santa cruz



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] pgbasebackup necessary for master slave change?

2016-12-08 Thread Achilleas Mantzios

On 08/12/2016 11:33, Subhankar Chattopadhyay wrote:

Hi,

A few days back I had asked whether it is necessary to do a pg_basebackup for
every database update. From John I understood that it is unnecessary,
and that if the slave is syncing, even if it is catching up, it will be
able to sync without doing pg_basebackup. This is working for me as well.

However, in a failover scenario where the master goes down, I make the
slave the new master, and then the old master comes back as a slave
again: if I don't take a pg_basebackup from the new master, it cannot
follow the new master. This is kind of an overhead. Is there a way I can
make the old master follow the new master without having to do a full
backup?

pg_rewind
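i.e. something like the following on the old master (pg_rewind ships with 9.5+; the data dir and conninfo are placeholders, and the rewound cluster must have had wal_log_hints = on or data checksums enabled):

pg_rewind --target-pgdata=/var/lib/postgresql/data \
          --source-server='host=new_master port=5432 user=postgres dbname=postgres'
# then drop in a recovery.conf pointing at the new master and start the old master as a standby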




Subhankar Chattopadhyay
Bangalore, India





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming Replication delay getting bigger

2016-12-06 Thread Achilleas Mantzios

On 06/12/2016 04:20, Patrick B wrote:

Hi guys,

I've got some database servers in USA (own data center) and also @ AWS Japan.

*USA:*
master01
slave01 (Streaming Replication from master01 + wal_files)
slave02 (Streaming Replication from master01 + wal_files)

*Japan: (Cascading replication)*
slave03 (Streaming Replication from slave02 + wal_files)
slave04 (Streaming Replication from slave02)

*Running this query on slave02:*

select now() - pg_last_xact_replay_timestamp() AS replication_delay;
 replication_delay
---
 00:00:00.802012
(1 row)

*Same query on slave03 and slave04:*

select now() - pg_last_xact_replay_timestamp() AS replication_delay;
 replication_delay
---
 00:56:53.639516
(1 row)


*slave02:*

SELECT client_hostname, client_addr, 
pg_xlog_location_diff(pg_stat_replication.sent_location, 
pg_stat_replication.replay_location) AS byte_lag FROM pg_stat_replication;
 client_hostname | client_addr | byte_lag
-----------------+-------------+----------
                 | slave03     |  2097400
                 | slave04     |  3803888
(2 rows)


Why is that delay that big? Is it because networking issue? I tried to find out 
what the cause is, but couldn't find anything.

SCP and FTP (big files) between those servers are really fast, +1.0MB/s.


Are you sure the upstream does not produce WAL activity at a higher rate than 
0.5MB/s ?


I'm using PostgreSQL 9.2.14

Thanks!
Patrick.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Postgres Traffic accounting

2016-12-05 Thread Achilleas Mantzios

On 05/12/2016 14:35, William Ivanski wrote:


Does it need to be done by listening to network packets? You can get statistics 
by query and user with pgbadger.


I guess he'd have to use some tool like this :
https://www.vividcortex.com/resources/network-analyzer-for-postgresql
https://www.vividcortex.com/blog/2015/05/13/announcing-vividcortex-network-analyzer-mysql-postgresql/

This works by listening to the network, correlating network activity with
PIDs, and then mapping PIDs (via pg_stat_* or ps) to queries.



Em 9h41 Seg, 05/12/2016, basti mailto:mailingl...@unix-solution.de>> escreveu:

Hello,

I am trying to do traffic accounting for postgres using tcpdump and nfdump.
I can see what traffic is produced but can't see the query / activity that
produces it, because it is an SSL connection; using plain text is not an option.

I also tried to use tcap-postgres. It does not compile on my server and
is very old (year 2004).

Is there a way to see the network traffic of postgres sorted by query?

best regards
basti


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org 
<mailto:pgsql-general@postgresql.org>)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

--

William Ivanski




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Moving pg_xlog

2016-12-01 Thread Achilleas Mantzios

On 01/12/2016 15:55, Robert Inder wrote:

I'm running  Postgres9.4 in master/hot-standby mode on a few pairs of servers.

While recovering from A Bit Of Bother last week, I came across a
posting saying that pg_xlog should be on a separate partition.

I tried to find out more about this, by consulting the PostgresQL
documentation (i.e.
https://www.postgresql.org/docs/9.4/static/index.html )
But all I could find was a mention that "It is advantageous if the log
is located on a different disk from the main database files".

The questions:
1. WHY is this good?  Is it (just) to stop pg_xlog filling the
database disk/partition?  Or are there performance implications?
SPECIFICALLY: my database is currently in "/", which is on SSD.  Is it
better to move pg_xlog to another partition on the same SSD?  Or to a
physical disk or SAN?


Performance is the reason. You would benefit from moving pg_xlog to a different controller with its own write cache or to a different SSD with a write cache which is capacitor-backed. So in 
enterprise/server-class setups the above would boost the performance. Using the same SSD with a different partition won't give you much.




2. What are the implications for doing a base backup?  I believe I
read that putting pg_xlog on a different partition meant it would be
omitted from a file-system bulk copy (e.g. rsync), and this was a GOOD
thing, because the copy operation would be faster -- not copying
pg_xlog would not prevent the standby server from starting, because
the information it needed would be in the WAL files that would be
shipped separately.  Have I got that right?
Rsync does cross fs boundaries unless you give it the -x option. It is true that the files in pg_xlog won't be useful in the backup. However, the WAL files being shipped separately is not
something that happens by itself: you need to enable/implement WAL archiving. What you describe seems to be the "legacy" old-fashioned way, circa 9.0. pg_basebackup (9.1) is more convenient: it can create complete
standalone copies (without the need for any additional WALs), can use WAL streaming so that you don't depend on WAL archiving or wal_keep_segments, supports replication slots, can create a ready-to-go hot
standby, etc.
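For example, to build a ready-to-start hot standby in one command (host and user are placeholders; -R, i.e. --write-recovery-conf, needs 9.3+):

pg_basebackup -D /path/to/standby/data -X stream -R -P -h master_host -U replication_user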



Finally, the suggestion.

I'd really like to read an explicit discussion of this in the official
documentation, rather than just glean what I can from answers to
questions.
The possibility of moving pg_xlog to another disk is mentioned in the
documentation, but I almost missed it because it is in "the wrong
place".  It is in Section 29.5 -- "Reliability and the Write Ahead
Log" / "WAL Internals".  But I wasn't interested in anything INTERNAL:
I wanted to know where I should try to locate it/them.  So I'd looked
in "the obvious places" -- Section 18 (Server configuration), and in
particular 18.2 "File Locations".  Could I suggest that the motivation
for doing this, and the consequences for backups, should be discussed
in "the right place" -- in or near the section that talks about file
locations in the context of server configuration.


All I can tell you is I haven't found one single piece of free (or not so free) 
software with more complete documentation than pgsql.


Robert.




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Achilleas Mantzios

On 28/11/2016 19:21, Melvin Davidson wrote:

On Mon, Nov 28, 2016 at 10:50 AM, Adrian Klaver mailto:adrian.kla...@aklaver.com>> wrote:

On 11/28/2016 07:44 AM, Melvin Davidson wrote:





*To clarify, you cannot dump the pg_catalog schema. It is the main
control of how all other objects are


Actually you can. I would not have thought so, but I tried Achilleas's
suggestion and it worked. Example:

--
-- Name: pg_aggregate; Type: TABLE; Schema: pg_catalog; Owner: postgres
--

CREATE TABLE pg_aggregate (
aggfnoid regproc NOT NULL,
aggkind "char" NOT NULL,
aggnumdirectargs smallint NOT NULL,
aggtransfn regproc NOT NULL,
aggfinalfn regproc NOT NULL,
aggmtransfn regproc NOT NULL,
aggminvtransfn regproc NOT NULL,
aggmfinalfn regproc NOT NULL,
aggfinalextra boolean NOT NULL,
aggmfinalextra boolean NOT NULL,
aggsortop oid NOT NULL,
aggtranstype oid NOT NULL,
aggtransspace integer NOT NULL,
aggmtranstype oid NOT NULL,
aggmtransspace integer NOT NULL,
agginitval text,
aggminitval text
);

COPY pg_aggregate (aggfnoid 


and you do get errors:

pg_dump: WARNING: typtype of data type "any" appears to be invalid
pg_dump: WARNING: typtype of data type "anyarray" appears to be invalid
pg_dump: WARNING: typtype of data type "anyelement" appears to be invalid
pg_dump: WARNING: typtype of data type "anyenum" appears to be invalid
pg_dump: WARNING: typtype of data type "anynonarray" appears to be invalid
pg_dump: WARNING: typtype of data type "anyrange" appears to be invalid
pg_dump: WARNING: typtype of data type "cstring" appears to be invalid



Still not sure why you would want to, but you can.


stored in the cluster. There is no point in dumping it, and all its
tables and views are already clearly
documented.
https://www.postgresql.org/docs/9.6/static/catalogs.html 
<https://www.postgresql.org/docs/9.6/static/catalogs.html>

pg_catalog itself is generated with the initdb command when a new
postgresql cluster is generated.
https://www.postgresql.org/docs/9.6/static/creating-cluster.html 
<https://www.postgresql.org/docs/9.6/static/creating-cluster.html>
https://www.postgresql.org/docs/9.6/static/app-initdb.html* 
<https://www.postgresql.org/docs/9.6/static/app-initdb.html*>

--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



-- 
Adrian Klaver

adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>


*Hmmm, well you learn something new every day. Although you can view
the dump file, I'm pretty sure you cannot restore it to a database because,
by definition, those tables would already exist in the database.*


That's right, I was kind of suspicious about the OP's intentions. I agree, no
user should attempt to restore anything from this dump, unless (s)he is a pgsql
hacker and knows exactly what (s)he's doing.



--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] pg_dump system catalog

2016-11-25 Thread Achilleas Mantzios

On 25/11/2016 13:11, Juliano wrote:

Hi everyone,

How can I *use pg_dump* to do a backup of *"system catalogs"* only?



pg_dump --schema=pg_catalog



Regards,
Juliano




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] How pg_dump works

2016-10-10 Thread Achilleas Mantzios

On 10/10/2016 14:50, Moreno Andreo wrote:

Hi all,
I need to pg_dump a database to another server.
The particularity is that the database is bigger than the remaining space on disk. Obviously, this is not a problem because I'm dumping to another host, but I need to know whether the procedure streams data
to the remote host or the data is first packed up in temporary file(s) that are then sent to the remote host.

The latter would be a problem, because there is a good chance my
disk would fill up...

I've not found details on this in documentation.



In fact, in the docs there is all the info you might ever need. PostgreSQL 
project excels on that compared to *many* free-software projects. Now what you 
could do is something like :
createdb -h your_big_server your_database
pg_dump -h your_small_server your_database | psql -h your_big_server -f - 
your_database


Thanks
Moreno.








--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multi tenancy : schema vs databases

2016-10-02 Thread Achilleas Mantzios

On 30/09/2016 18:45, Rakesh Kumar wrote:

I've been reading this discussion with great interest, to see what other
Postgres experts think. :-)

I am bit disappointed that most of the replies are questioning why we are
doing what we are doing. Once again, we (db designers) have no choice
in that.  What I would like to know that which one is better :- multiple db
vs multiple schema.  Read few interesting arguments and noted that
connection pooling works better with multiple schemas than dbs. Anything else?


Since you missed it, I'll write it once again :

same organization (e.g. holding) -> schemas
different organizations -> DBs



thanks




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multi tenancy : schema vs databases

2016-09-30 Thread Achilleas Mantzios

Via schemata, if the tenants represent sub-entities of the same organization.
This gives the top-level management the ability to have a consolidated view of the
whole organization.

On 30/09/2016 12:06, Rakesh Kumar wrote:



From: Venkata B Nagothi 
Sent: Friday, September 30, 2016 02:48
To: Rakesh Kumar
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Multi tenancy : schema vs databases

On Fri, Sep 30, 2016 at 10:16 AM, Rakesh Kumar 
mailto:rakeshkumar...@outlook.com>> wrote:



From: Venkata B Nagothi mailto:nag1...@gmail.com>>
Sent: Thursday, September 29, 2016 17:25
To: Rakesh Kumar
Cc: pgsql-general@postgresql.org<mailto:pgsql-general@postgresql.org>
Subject: Re: [GENERAL] Multi tenancy : schema vs databases

On Fri, Sep 30, 2016 at 5:18 AM, Rakesh Kumar
<rakeshkumar...@outlook.com> wrote:

Hi

I would like to know which technique is better for supporting multi-tenancy
applications, going up to hundreds or even thousands of tenants.

1 - One database with difference schemas (one schema per tenant)
or
2 - One database per tenant.

Did you mean one database with-in a postgresql cluster ?

Yes.  Say something like this within a PG cluster

db4978
db6234
...
100s of such databases.

That would make things worse if you are going for one database per tenant. As
said by John just now, it would end up in a very complex and bad design,
contributing to very poor performance and high maintenance overhead.
A schema per tenant would be a good idea and its hard to say without knowing 
the data isolation levels you require for each tenant.


We require complete data isolation. Absolutely nothing should be shared between 
two tenants.

Why would multiple dbs be any worse than multiple schemas in performance?







--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-08-16 Thread Achilleas Mantzios

On 29/07/2016 21:06, Larry Rosenman wrote:

On 2016-07-29 12:59, Bruce Momjian wrote:

On Fri, Jul 29, 2016 at 07:49:36PM +0200, Maeldron T. wrote:

And yes, I hate upgrading PostgreSQL especially on FreeBSD where pg_upgrade
isn’t really an option.


Is that because it is hard to install the old and new clusters on the
same server on FreeBSD?


The current FreeBSD Ports collection ports only allow ONE version to be 
installed at a time.




In our installation, pgsql is one of the very few packages that we prefer to
deal with by hand.

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-28 Thread Achilleas Mantzios

On 28/07/2016 21:09, Edson Richter wrote:

Em 28/07/2016 13:07, Chris Travers escreveu:



On Thu, Jul 28, 2016 at 3:38 PM, Scott Marlowe mailto:scott.marl...@gmail.com>> wrote:

On Wed, Jul 27, 2016 at 9:51 AM, Geoff Winkless mailto:pgsqlad...@geoff.dj>> wrote:
> On 27 July 2016 at 15:22, Scott Mead mailto:sco...@openscg.com>> wrote:
>>
>>  "The bug we ran into only affected certain releases of Postgres 9.2 and
>> has been fixed for a long time now. However, we still find it worrisome 
that
>> this class of bug can happen at all. A new version of Postgres could be
>> released at any time that has a bug of this nature, and because of the 
way
>> replication works, this issue has the potential to spread into all of the
>> databases in a replication hierarchy."
>>
>>
>> ISTM that they needed a tire swing and were using a dump truck.  
Hopefully
>> they vectored somewhere in the middle and got themselves a nice sandbox.
>
>
> At least his bug got fixed. The last 2 bugs I reported to MySQL resulted 
in
> an initial refusal to accept any problem existed, followed by (once that
> particular strategy had run out of steam) the developer simply ignoring 
the
> bug until it was closed automatically by their bug system. As far as I'm
> aware those bugs still exist in the most recent version.

Best / worst MySQL bug was one introduced and fixed twice. Someone put
in a short cut that sped up order by by quite a bit. It also meant
that order by desc would actually get order by asc output. It was
inserted into the code due to poor oversight / code review practices,
then fixed about 9 months later, then introduced again, and again,
took about a year to fix.

The fact that it was introduced into a General Release mid stream with
no testing or real reviews speaks volumes about MySQL and its
developers. The fact that it took months to years to fix each time
does as well.



As for MySQL issues, personally I love the fact that a single query inserting a bunch of rows can sometimes deadlock against itself.  And I love the fact that this is obliquely documented as 
expected behavior.  May I mention I am *really glad* PostgreSQL doesn't go the whole multi-threaded backend route and that this is exhibit A as to why (I am sure it is a thread race issue between 
index and table updates)?


Sorry, I think this is a biased vision. Multi-threading will show as many problems as multi-process - both have to manage simultaneous access (or, at least, a correct semaphore implementation to serialize
writes and synchronize reads).

The fact is **on this point at least** is that Postgres is correctly 
implemented, and MySQL is faulty.
I've faced the "lost FK integrity hell" (caused by the problem above) with 
MySQL long before decided to migrate all systems to PostgreSQL.
My personal experience is that MySQL is excellent for data that is not sensitive (web site, e-mail settings, etc). Everything else goes to PostgreSQL (or Oracle, or MS SQL Server, or Sybase, or DB2 
- in *my* order of preference).




+1



Regards,

Edson Richter




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Achilleas Mantzios

On 27/07/2016 10:15, Condor wrote:

On 26-07-2016 21:04, Dorian Hoxha wrote:

Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/

On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe  wrote:


Honestly, I've never heard of anyone doing that. But it sounds like
they had good reasons.

https://eng.uber.com/mysql-migration/

Thoughts?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



They are right about upgrades.
It's hard to shut down a 1 TB database and wait a couple of days for pg_upgrade
to finish, with the database offline meanwhile.
In some distros, after an upgrade of the PG version you don't have the old binaries
and libraries, and need to do a full dump and restore, which takes time and disk space.


Our last 1TB upgrade from 9.0 -> 9.3 went like a charm in something like
seconds (with the -k option).
However, be warned that the planning and testing took one full week.
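For reference, the -k (--link) mode looks roughly like this (all paths are hypothetical; hard links make the data copy step almost instantaneous):

pg_upgrade -k \
    -b /usr/local/pgsql-9.0/bin -B /usr/local/pgsql-9.3/bin \
    -d /var/db/pgsql-9.0/data   -D /var/db/pgsql-9.3/data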




Regards,
Hristo S.








--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MediaWiki + PostgreSQL is not ready for production?

2016-07-19 Thread Achilleas Mantzios

On 19/07/2016 12:41, Andrew Sullivan wrote:

On Tue, Jul 19, 2016 at 12:37:10PM +0300, Achilleas Mantzios wrote:

Better to run, even slowly, than not run at all, or to require a special porting
team for every mysql client out there.


I'm not sure I agree.  If you teach every naïve user that, when they
compare Postgres to MySQL, MySQL always wins, what you teach them is
"Postgres performance sucks."


It seems we have already reached a verdict that mysql code migrated to PostgreSQL is slow, a far-fetched assumption in itself; even if we accept it, it is far more productive to have one
dedicated small project for this mysql2pgsql conversion rather than N dedicated small teams, one for every mysql client out there.




Best regards,

A




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MediaWiki + PostgreSQL is not ready for production?

2016-07-19 Thread Achilleas Mantzios

On 19/07/2016 12:05, Tatsuo Ishii wrote:

My colleague has been working on making the latest version of
WordPress
work with PostgreSQL (there used be a PostgreSQL plugin but it has not
been maintained and does not work with the latest version of
WordPress). I imagine there are someone who are attacking the classic
problem and I wonder as PostgreSQL community, what we can do for this.

IMHO the way to solve this is not running to catch up with the various
projects using a *SIMPLE* SQL backend, but rather to create a new
postgresql project providing a mysql compatibility layer, something
like a server side parser that would translate the mysql commands to
real SQL (PostgreSQL) statements.
Then only one (this) project should be maintained, with no work wasted
in specific client software.
So, PostgreSQL can continue to do what it knows to do best, with no
worries of not being natively compatible with simplistic yet
proprietary systems like mysql.

I'm not sure that's the best way ever. Sometimes the approach results
in lesser performance of PostgreSQL than MySQL because of the SQL is
not optimized for PostgreSQL. For toy project, that's fine. But for
serious project it might bring bad performance and users will be
disappointed and speak like "PostgreSQL is slower than MySQL". I saw
that with Zabbix, for example.


Better to run, even slowly, than not run at all, or to require a special porting
team for every mysql client out there.



Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MediaWiki + PostgreSQL is not ready for production?

2016-07-19 Thread Achilleas Mantzios

On 19/07/2016 10:20, Tatsuo Ishii wrote:

On 7/18/2016 9:14 PM, Tatsuo Ishii wrote:

I found following comment for using PostgreSQL with MediaWiki:

https://www.mediawiki.org/wiki/Compatibility#Database

"Anything other than MySQL or MariaDB is not recommended for
production use at this point."

This is a sad and disappointed statement for us. Should we help
MediaWiki community to enhance this?

the classic problem with any of these sorts of open source projects,
while you can convert the core system to postgres, there's a huge
community of contributed plugins, and many of these authors have zero
interest in anything but their default database, mysql/mariadb.  I ran
into this with Drupal, Wordpress, a couple different forum projects.
Drupal even tried to offer a database API so plugin developers
wouldn't touch SQL directly, but too many ignored it.

Yeah, that's a classic problem. The reason why I raise the particular
problem was, I hoped situations were better with MediaWiki since it
has been used for PostgreSQL official site. But the truth is even
MediaWiki is not an exception.

My colleague has been working on making the latest version of WordPress
work with PostgreSQL (there used be a PostgreSQL plugin but it has not
been maintained and does not work with the latest version of
WordPress). I imagine there are someone who are attacking the classic
problem and I wonder as PostgreSQL community, what we can do for this.


IMHO the way to solve this is not running to catch up with the various projects using a *SIMPLE* SQL backend, but rather to create a new postgresql project providing a mysql compatibility layer,
something like a server side parser that would translate the mysql commands to real SQL (PostgreSQL) statements.

Then only one (this) project should be maintained, with no work wasted in 
specific client software.
So, PostgreSQL can continue to do what it knows to do best, with no worries of 
not being natively compatible with simplistic yet proprietary systems like 
mysql.


This may not be the brightest idea but what about starting with
creating wiki page listing such that works? This will help users who
want to use PostgreSQL with WordPress, MediaWikim Drupal etc. (and
many plugins).

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage per session

2016-07-08 Thread Achilleas Mantzios

On 08/07/2016 17:07, amatv...@bitec.ru wrote:

Hi


The test performs about 11K lines of code
Oracle: about 5M
postgreSql: about 160M



Do you have 100 CPUs on this system which apparently doesn't have 16G
of RAM available for PG to use?

We can state as fact:
We currently work with Oracle.
Our code base is about 4000K lines of code.
In our last project we had:
3000 concurrent connections
200 active sessions
So 16G is very optimistic.
Of course we are thinking about buying hardware or software;
that's a different question.
So with this memory consumption it can actually be cheaper to buy Oracle.

If not, you should probably consider connection pooling to reduce the
number of PG sessions to something approaching the number of CPUs/cores
you have in the system.

It's possible only with an application server,


No, you can deploy PgPool or PgBouncer.
Apart from that, I just checked on my system: user sessions have a size of 16M,
not 160M.
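If an external pooler is acceptable, here is a minimal pgbouncer.ini sketch sized for the numbers quoted above (3000 clients, ~200 active backends; all values illustrative):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 3000
default_pool_size = 200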


for a local network a thick client has reasonable advantages.
  We just can't implement all functions on a thin client today.






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage per session

2016-07-08 Thread Achilleas Mantzios

On 08/07/2016 14:11, amatv...@bitec.ru wrote:

Hi.
We have tested postgreSql,Oracle,MSSqlServer.
The test performs about 11K lines of code
Memory usage per session:
Oracle: about 5M
MSSqlServer: about 4M
postgreSql: about 160M
The result for postgreSql is very sad (our typical business logic has about 30K
lines of code).
How can I reduce memory consumption per session?
Note, we can't move the business logic to an application server, as that would lead
to performance problems.

I can send the test script on request.

Test description:
PostgreSQL 9.5.3, compiled by Visual C build 1800, 64-bit


Visual C???
You will have to run PostgreSQL on a proper Unix system to test for performance.


At the same time we run 50 sessions that perform the following functions:
CREATE OR REPLACE FUNCTION perfa.func9
...
BEGIN
   svSql:='';
   PERFORM perfb."func91"();
   ...
END;

CREATE OR REPLACE FUNCTION perfb.func91
...
BEGIN
   PERFORM perfc."func911"();
   ...
END;

CREATE OR REPLACE FUNCTION perfc.func911 (
)
RETURNS void AS
$body$
DECLARE
   svSql BIGINT;
BEGIN
   svSql:=0;
   ...
   svSql:=svSql+10;
END;
$body$






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Switching roles as an replacement of connection pooling tools

2016-05-31 Thread Achilleas Mantzios

On 31/05/2016 17:23, Melvin Davidson wrote:

Actually, you do not need to SWITCH; you just need permission to change the search
path and gain access to all of user2's privs, which is exactly what SET ROLE user2 does.
There is no need for a password, since user1 is already connected to the DB.
Any superuser can GRANT the role to any other user.

Still, PgSQL logs report the original user everywhere. Not useful for auditing,
debugging, etc.



That being said, IMHO, I believe having a separate schema for every user is 
poor database design

I agree about this, there are much better ways to utilize schemata.


On Tue, May 31, 2016 at 10:18 AM, Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

On 31/05/2016 10:45, CN wrote:

I have a feeling that slight enhancement to commands "SET ROLE" or "SET
SESSION AUTHORIZATION" can obsolete and outperform external connection
pooling tools in some use cases.

Assume we are in the following situation:

- There are a million schemas each owned by a distinct role.
- Every role is not allowed to access any other schema except its own.

If command "SET SESSION AUTHORIZATION" is enhanced to accept two
additional arguments

PASSWORD 

, then a client simply establishes only one connection to server and do
jobs for a million roles.

Say I want to gain full access to "schema2", I simply issue these two
commands

SET SESSION AUTHORIZATION user2 PASSWORD p2;
SET SEARCH_PATH TO schema2,pg_category;

, where "p2" is the password associated with role "user2".

If the current role is superuser "postgres" and it wants to downgrade
itself to role "user3", then it simply sends these commands:

SET SESSION AUTHORIZATION user3;
SET SEARCH_PATH TO schema3,pg_category;

Does my points make sense?
Is it eligible for feature request?


I believe your thoughts are on the same line with an idea some people had about using 
connection pools on Java EE environments, in a manner that does not use a generic 
"catch all" user, but uses
the individual users sharing the security context from the app server.
This way one could have the benefits of the connection pool, and the 
benefits of fine-grained and rich PostgreSQL security framework, the ability to 
log user's activity, debug the system easier,
see real users on pg_stat_activity, on ps(1), on top(1) etc etc.
The way we do it currently is by having personalized connection pools for 
pgsql in jboss. It does the job from every aspect, except one : it sucks as far 
as performance is concerned. Every user
is tied to his/her number of connections. It creates a sandbox around each user, so 
that a "malicious" greedy user (with the help of a poorly designed app of 
course) can only bring down his own
pool, while others run unaffected, but still performance suffers. The idea 
would be to use a common pool of connections and assign users on demand as they 
are taken from the common pool, and
later also return them to the common pool, once closed.
Whenever I talked to PG ppl about it, they told me that redesigning the SET 
ROLE functionality so that it correctly applies all the security checks and 
also so that it results in reflecting the
effective user in all references in logs, sys views, OS (ps, top, etc) 
etc.. was hard to do, and the convo stopped right there.

With all the new and modern cloud-inspired paradigms out there, our
traditional architecture might not be of much interest any more; still, I would
love to make the above happen some time.

Best Regards,
CN



-- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org <mailto:pgsql-general@postgresql.org>)

To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Switching roles as a replacement for connection pooling tools

2016-05-31 Thread Achilleas Mantzios

On 31/05/2016 10:45, CN wrote:

I have a feeling that a slight enhancement to the commands "SET ROLE" or "SET
SESSION AUTHORIZATION" could obsolete and outperform external connection
pooling tools in some use cases.

Assume we are in the following situation:

- There are a million schemas each owned by a distinct role.
- Every role is not allowed to access any other schema except its own.

If command "SET SESSION AUTHORIZATION" is enhanced to accept two
additional arguments

PASSWORD <password>

, then a client simply establishes only one connection to the server and does
jobs for a million roles.

Say I want to gain full access to "schema2", I simply issue these two
commands

SET SESSION AUTHORIZATION user2 PASSWORD p2;
SET SEARCH_PATH TO schema2,pg_catalog;

, where "p2" is the password associated with role "user2".

If the current role is superuser "postgres" and it wants to downgrade
itself to role "user3", then it simply sends these commands:

SET SESSION AUTHORIZATION user3;
SET SEARCH_PATH TO schema3,pg_catalog;

Do my points make sense?
Is this eligible as a feature request?


I believe your thoughts are along the same lines as an idea some people had about using connection pools in Java EE environments in a manner that does not use a generic "catch all" user, but instead has 
the individual users share the security context from the app server.
This way one could have the benefits of the connection pool together with the benefits of PostgreSQL's fine-grained and rich security framework: the ability to log each user's activity, debug the system 
more easily, and see the real users in pg_stat_activity, in ps(1), in top(1), etc.
The way we do it currently is by having personalized connection pools for pgsql in JBoss. It does the job in every respect except one: it sucks as far as performance is concerned. Every user is 
tied to his/her own number of connections. It creates a sandbox around each user, so that a "malicious" greedy user (with the help of a poorly designed app, of course) can only bring down his own pool 
while others run unaffected, but performance still suffers. The idea would be to use a common pool of connections, assign users on demand as connections are taken from the common pool, and later 
return them to the common pool once closed.
Whenever I talked to PG ppl about it, they told me that redesigning the SET ROLE functionality, so that it correctly applies all the security checks and also reflects the 
effective user everywhere (logs, system views, the OS via ps, top, etc.), was hard to do, and the convo stopped right there.


With all the new and modern cloud-inspired paradigms out there, our traditional 
architecture might not be of much interest any more, but I would still love to make 
the above happen some time.


Best Regards,
CN




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to view creation date and time of a relation

2016-05-19 Thread Achilleas Mantzios

On 19/05/2016 10:57, Sameer Kumar wrote:



On Thu, May 19, 2016 at 3:29 PM John R Pierce mailto:pie...@hogranch.com>> wrote:

On 5/19/2016 12:18 AM, Shrikant Bhende wrote:


Our application executes some scripts whose code consists of DDL that 
creates a lot of objects in the database in various schemas; also, there are a lot of 
connections firing the same code. I am
able to locate the IP from which the script is initiated (which is causing 
more load on the database), but I would like to know whether I can pinpoint the 
relations created at a specific
date and time, or else do something through which I can get the creation date 
and time of the objects.


you would need to have postgres configured to log DDL, and set the log 
prefix to include timestamping; then you could scan those logs to get that 
information. It's not otherwise stored in the
database.


Logging is the best way of capturing these events.

You can probably find out the relfilenode from pg_class for a given relation name (relname) and then go to the data directory (or tablespace directory) --> db directory (mapped to the oid in pg_database) 
--> filename. Check the file's date/time to see when it was created.



Is this creation timestamp info exposed on Linux (ext4, xfs, etc.)? Last time I 
checked, this info was available on FreeBSD out of the box.

Though I don't think this infra has been built for this very purpose.
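
For what it's worth, a minimal sketch of that lookup done entirely from SQL 
(superuser assumed, table name illustrative); note that pg_stat_file() reports a 
creation timestamp only on Windows, and the modification time moves with every 
write, so this is an approximation at best:

-- path of the relation's main file, relative to the data directory
SELECT pg_relation_filepath('some_table');

-- stat the file without leaving psql
SELECT access, modification, change
FROM pg_stat_file(pg_relation_filepath('some_table'));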

-- 
john r pierce, recycling bits in santa cruz


--
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Thoughts on "Love Your Database"

2016-05-17 Thread Achilleas Mantzios

On 17/05/2016 20:38, Guyren Howe wrote:

On May 17, 2016, at 2:22 , Achilleas Mantzios  
wrote:

Sorry if I missed something, but what's wrong with pgadmin3?

Apart from its being awful, clunky, bug-ridden and crash-prone, nothing.


There is a solution for that: either find the bugs and submit patches, or pay 
the developers to fix the bugs or to make you a custom version,
or go buy some other tool, or write your own, or live with psql (like most 
people do).
On the occasions (once or twice a week) that I needed pgadmin during my last 16 
years with postgresql, it delivered just fine.
Some guys here use better tools to access pgsql, but they all cost money. I 
have paid for RazorSQL for accessing our MS SQL server,
and then felt lucky that I was accessing pgsql via psql (or even pgadmin3) all those 
years.

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Thoughts on "Love Your Database"

2016-05-17 Thread Achilleas Mantzios

On 17/05/2016 12:16, Geoff Winkless wrote:

On 17 May 2016 at 09:34, Pierre Chevalier Géologue
 wrote:

On this matter, I hear *very* often from such guys that the only reproach
they have to PostgreSQL is that it does not come with a slick GUI like
Access.  PGAdmin does not suit their needs at all: they want to design their
forms, directly write into the tables by using quick'n easy/dirty copy/paste
from/to their Excel (yes, it is still lying around).  I understand them,
somehow.
There are a few tools around, many proprietary ones, some Free/Libre ones,
but they are still looking for a sort of Holy Grail that would definitely
convince them.  A standard client tool that would come with any PostgreSQL
installation would please them.  Some sort of psqlGUI, I guess.

Why reinvent the wheel? I would say that putting the development
effort into the OpenOffice Base app would be time better spent.

Sorry if I missed something, but what's wrong with pgadmin3?

Geoff





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Thoughts on "Love Your Database"

2016-05-08 Thread Achilleas Mantzios

On 04/05/2016 15:55, Szymon Lipiński wrote:


at my previous jobs I was working with many programmers, and almost none of them understood SQL. The problem was even deeper. They didn't want to learn it. When I was working among java programmers, 
I noticed that they hated SQL, but there was no problem with learning HQL, which in fact is quite similar. I really have no idea why it is like that.




Similar here. IMHO it is called "job trends hype". Look at all the LinkedIn ads: less than 1% are about PgSQL/DBs, 99% are about app coders. OK, rough numbers, but they reflect reality. One of my past 
programmers (a fine kid, always looking to learn) now writes iOS and Android apps in another country. Another one, who didn't do very well with SQL but was a rock-star programmer otherwise, now writes 
JavaScript at another company.



--

regards Szymon Lipiński



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Issue during postgresql startup

2016-04-28 Thread Achilleas Mantzios

On 28/04/2016 10:21, Yogesh Sharma wrote:

Dear All,

Thanks for your support.

Could you please tell me how to recover my system?

I am facing below errors.
---
could not open relation 1663/16385/1299988: No such file or directory
---

This message occurs continuously in the system.

I have tried below procedure for the same but after recover this system, all DB 
tables are destroyed.
http://www.hivelogik.com/blog/?p=513

Please let me know if there is any solution.


Any recent backups?



Regards,
Yogesh








--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql color hostname prompt

2016-04-25 Thread Achilleas Mantzios

On 25/04/2016 16:55, Cal Heldenbrand wrote:

Hi everyone,

The default psql prompt can be a little frustrating when managing many hosts.  
Typing the wrong command on the wrong host can ruin your day.  ;-)

I whipped up a psqlrc and companion shell script to provide a colored prompt 
with the hostname of the machine you're connected to.  It works for both local 
sockets and remote connections too.

The only outside tool it requires is lsof, to determine the hostname of the 
remote socket.  Otherwise it uses plain stuff like awk / sed and bash tools.

If everyone gives this a thumbs up, I'd like to submit this for inclusion in 
the official postgres source.  (Maybe as an additional 
psqlrc.sample.color_hostname file or similar)


Hello, I have done that and it looked really nice, but unfortunately it resulted in a 
lot of garbled output when editing functions, with huge queries, up-arrow history, etc.
You might want to test with those before submitting.



Inline paste of the two files below.  Replace the paths with your environment:

/usr/local/pgsql/etc/psqlrc
==
-- PROMPT1 is the primary prompt
\set PROMPT1 '%[%033[1;31m%]%`/usr/local/pgsql/etc/psql_hostname.sh`%[%033[0m%] %n@%/%R%#%x '

-- PROMPT2 is the secondary (query continue) prompt
\set PROMPT2 '%[%033[1;31m%]%`/usr/local/pgsql/etc/psql_hostname.sh`%[%033[0m%] %n@%/%R %# '
==

/usr/local/pgsql/etc/psql_hostname.sh
==
#!/bin/bash
# Intelligently return local hostname, or remote server connection

# - list file descriptors of my parent PID (psql command)
# - include only FD #3, which is the postgres socket
# - print the NAME column
name=$(/usr/sbin/lsof -p $PPID -a -d 3 | tail -1 | awk '{print $9}')

if [[ "$name" == "socket" ]]; then
  # We're on the local socket
  hostname -f
else
  # Cut out the destination machine from the socket pair
  echo $( sed 's/.*->\(.*\):postgres/\1/' <<< $name )
fi
==

Thank you!

---
Cal Heldenbrand
   Web Operations at FBS
   Creators of flexmls <http://flexmls.com>® and Spark Platform 
<http://sparkplatform.com>
c...@fbsdata.com <mailto:c...@fbsdata.com>



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] what database schema version management system to use?

2016-04-06 Thread Achilleas Mantzios

On 06/04/2016 13:55, Alexey Bashtanov wrote:

Hi all,

I am searching for a proper database schema version management system.

My criteria are the following:
0) Open-source, supports postgresql
1) Uses psql to execute changesets (to have no problems with COPY, transaction 
management or sophisticated DDL commands, and to benefit from scripting)
2) Support repeatable migrations (SQL files that get applied every time they 
are changed, it is useful for functions or views tracking).

Reasonable?

But unfortunately it looks like each of Liquibase, Flyway, SqlHawk, MigrateDB, 
Schema-evolution-manager, Depesz's Versioning, Alembic and Sqitch does not 
satisfy some of those, right?

What DB VCS do you use and how does it related with the criteria listed above?
Do you have any idea what other systems to try?


Maybe use Git, then interface with something like TeamCity to apply your changes. Honestly, you are asking for too much. The classic problem is finding a tool that translates DDL diffs into ALTER commands, 
if you want to store pure DDL CREATE statements. I have watched many presentations by people in the same boat as you, and they all implemented their own solutions. Good luck with your solution and keep 
us posted; many people might benefit from this.




Regards,
  Alexey





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Horrible/never returning performance using stable function on WHERE clause

2016-03-29 Thread Achilleas Mantzios

Hello David

On 29/03/2016 14:04, David Rowley wrote:

On 29 March 2016 at 20:01, Achilleas Mantzios
 wrote:

We are using PostgreSQL 9.3. Shouldn't the optimizer use a single call to
get_machdef_sister_defids in get_machdef_sister_defids_maxrh ??

It shouldn't be up to the optimizer to evaluate a STABLE function.
Only IMMUTABLE functions will be evaluated during planning.
What's not that clear to me is if the planner might be able to work a
bit harder to create an "Initplan" for stable functions with Const
arguments. Right now I can't quite see a reason why that couldn't be
improved upon, after all, the documentation does claim that a STABLE
function during a "single table scan it will consistently return the
same result for the same argument values".

And to add here the docs 
(http://www.postgresql.org/docs/9.3/static/xfunc-volatility.html) also say :
"A STABLE function cannot modify the database and is guaranteed to return the same results given the same arguments for all rows within a single statement. *This category allows the optimizer to 
optimize multiple calls of the function to a single call*. In particular, it is safe to use an expression containing such a function in an index scan condition."

However, it would be quite simple for you to force the STABLE
function to be evaluated once, instead of once per row, just by
modifying your query to become:

select max(rh) into tmp from items where vslwhid=vvslid and
itoar(defid) ~ (select get_machdef_sister_defids(vdefid));

Viewing the EXPLAIN of this, you'll notice the InitPlan, which will
evaluate the function and allow the use of the output value as a
parameter in the main query.

That's true, this worked indeed. But I still cannot understand why there is a 
distinction between ~ get_machdef_sister_defids(...) and ~ (SELECT 
get_machdef_sister_defids(...)).
Why is the planner forced to a single evaluation in the second case and not in 
the first, since the input argument clearly does not depend on any query result 
(judging by the docs)?
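
For reference, the two forms side by side, as a generic sketch (hypothetical table t 
with an indexed column x, and a STABLE function f):

-- form 1: f(10) appears as a per-row qual; a single evaluation is
-- allowed but not guaranteed
SELECT * FROM t WHERE x = f(10);

-- form 2: the scalar subquery becomes an InitPlan, evaluated once,
-- and its result is fed into the scan as a parameter
SELECT * FROM t WHERE x = (SELECT f(10));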


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



[GENERAL] Horrible/never returning performance using stable function on WHERE clause

2016-03-29 Thread Achilleas Mantzios

Hello list,
I have written some functions to extract data from our DB, from a hierarchical structure. The problem is that if a function doing lookups is declared STABLE and used in the WHERE clause, the performance
is horrible. What I am trying to achieve: given a specific node in a hierarchical structure (a machine definition), find all its equivalent sister nodes, and then, for a specific instance of this
hierarchy, find the max RH (running hours) among all sister nodes. I am using some functions/operators from intarray. Here are the functions:


Compares two nodes for sister property:

CREATE OR REPLACE FUNCTION public.is_defid_sister_node(vdefid1 integer, vdefid2 
integer)
 RETURNS boolean
 LANGUAGE plpgsql
 STABLE
AS $function$DECLARE
vparents1 INTEGER[];
vparents2 INTEGER[];
descr1 TEXT;
descr2 TEXT;
i INTEGER;
BEGIN

SELECT 
COALESCE(partid,0)||'__'||regexp_replace(coalesce(description,''),'[nN][oO]([0-9]+)',''),parents
 into descr1,vparents1 FROM machdefs where defid=vdefid1;
SELECT 
COALESCE(partid,0)||'__'||regexp_replace(coalesce(description,''),'[nN][oO]([0-9]+)',''),parents
 into descr2,vparents2 FROM machdefs where defid=vdefid2;

IF (level(vparents1) = 0 AND level(vparents2) = 0) THEN
RETURN vdefid1=vdefid2;
ELSIF (level(vparents1) <> level(vparents2)) THEN
RETURN false;
ELSE
RETURN ((descr1=descr2) AND 
is_defid_sister_node(first(vparents1),first(vparents2)));
END IF;

END;$function$

Finds the set of sister nodes for a given node:

CREATE OR REPLACE FUNCTION public.get_machdef_sister_defids(vdefid integer)
 RETURNS INTEGER[]
 LANGUAGE plpgsql
 STABLE
AS $function$
DECLARE
tmp INTEGER[];
BEGIN

select (select array_agg(mdsis.defid) FROM machdefs mdsis WHERE mdsis.machtypeid=md.machtypeid AND level(mdsis.parents)=level(md.parents) AND last(mdsis.parents)=last(md.parents) AND 
is_defid_sister_node(mdsis.defid,md.defid)  ) INTO tmp from machdefs md where md.defid=vdefid;


IF (tmp IS NULL) THEN
tmp := '{}';
END IF;
RETURN tmp;

END;
$function$

Finds max RH for a given tree instance among all sister nodes of a given node :

CREATE OR REPLACE FUNCTION public.get_machdef_sister_defids_maxrh(vvslid 
INTEGER,vdefid INTEGER)
 RETURNS INTEGER
 LANGUAGE plpgsql
 STABLE
AS $function$
DECLARE
tmp INTEGER;
BEGIN
select max(rh) into tmp from items where vslwhid=vvslid and 
itoar(defid) ~ get_machdef_sister_defids(vdefid);
RETURN tmp;
END;
$function$


Query :
select get_machdef_sister_defids_maxrh(479,319435);

never ends (I have waited up to 2-3 minutes); however, *doing the wrong thing* 
and declaring get_machdef_sister_defids as IMMUTABLE makes the above call 
return fast:

# select get_machdef_sister_defids_maxrh(479,319435);
 get_machdef_sister_defids_maxrh
-
   10320
(1 row)

Time: 110.211 ms

We are using PostgreSQL 9.3. Shouldn't the optimizer use a single call to
get_machdef_sister_defids in get_machdef_sister_defids_maxrh ??

Defining get_machdef_sister_defids back to STABLE and forcing 
get_machdef_sister_defids_maxrh to only call get_machdef_sister_defids once 
makes things work again :

CREATE OR REPLACE FUNCTION public.get_machdef_sister_defids_maxrh(vvslid 
integer, vdefid integer)
 RETURNS integer
 LANGUAGE plpgsql
 STABLE
AS $function$
DECLARE
tmp INTEGER;
tmppars INTEGER[];
BEGIN
tmppars := get_machdef_sister_defids(vdefid);
select max(rh) into tmp from items where vslwhid=vvslid and 
itoar(defid) ~ tmppars;
RETURN tmp;
END;
$function$

# select get_machdef_sister_defids_maxrh(479,319435);
 get_machdef_sister_defids_maxrh
-----
   10320
(1 row)

Time: 111.318 ms

Is this expected ?

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FreeBSD x86 and x86_64

2016-02-23 Thread Achilleas Mantzios

On 24/02/2016 09:20, John R Pierce wrote:

On 2/23/2016 10:57 PM, Achilleas Mantzios wrote:
Of course it is. You can install PostgreSQL via packages, via ports or manually. I prefer the manual way and have been doing so for many years. FreeBSD peculiarities will start to bite (maybe, 
maybe not) once you start entering territories like threading, but you are far from that from what I gather.


postgres doesn't use threading, so there shouldn't be any issues there 


I was referring to projects like pljava.






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FreeBSD x86 and x86_64

2016-02-23 Thread Achilleas Mantzios

On 23/02/2016 23:02, John R Pierce wrote:

On 2/23/2016 12:45 PM, Larry Rosenman wrote:


The ports tree has postgresql:
...
I'm running 9.5.1 on both 11-CURRENT, and 10.x 


and I might add, postgres behaves very nicely in a FreeBSD Jail.



Quite off-topic, but did you ever manage to run postgres on identical jails, 
i.e. same user, same port? Was FreeBSD IPC ever jailified? Or did the recent 
switch to mmap resolve this?


I'm running pg 9.4.6 in a FreeNAS (FreeBSD 9.3) jail without any hassle, and 
very good performance, installed from ports via pkg install ...






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FreeBSD x86 and x86_64

2016-02-23 Thread Achilleas Mantzios

On 23/02/2016 11:47, MEERA wrote:

Hi all,

Any information regarding PostgreSQL support on FreeBSD platform?


Of course it is. You can install PostgreSQL via packages, via ports or manually. I prefer the manual way and have been doing so for many years. FreeBSD peculiarities will start to bite (maybe, maybe 
not) once you start entering territories like threading, but you are far from that from what I gather.




On Wed, Feb 17, 2016 at 12:26 PM, preeti soni mailto:preeti_soni...@yahoo.com>> wrote:

Hi,

There is no clear information available for FreeBSD supported versions.

Would you please let me know if Postgres is supported on both FreeBSD x86 
and x86_64?

Thanks in advance,

Preeti




--
thanks and regards,
Meera R Nair



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] foreign key to "some rows" of a second table

2016-02-22 Thread Achilleas Mantzios

On 22/02/2016 13:03, Chris Withers wrote:

Hi All,

So, I have a table that looks like this:

CREATE TABLE config (
    region    varchar(10),
    name      varchar(10),
    value     varchar(40)
);

Another looks like this:

CREATE TABLE tag (
    host      varchar(10),
    type      varchar(10),
    value     varchar(10)
);

What's the best way to set up a constraint on the 'config' table such that the 
'region' column can only contain values that exist in the 'tag' table's value 
column where the 'type' is 'region'?


Hi,
that's the reason CONSTRAINT TRIGGERS were introduced in PostgreSQL, I guess.
Just write an AFTER INSERT OR UPDATE trigger on config
that checks the integrity.
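
A minimal sketch against your quoted definitions (untested; note that it only guards 
changes to config -- deletes or updates on tag would need a companion trigger there):

CREATE OR REPLACE FUNCTION check_config_region() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM tag
                   WHERE type = 'region' AND value = NEW.region) THEN
        RAISE EXCEPTION 'config.region "%" has no matching tag row', NEW.region;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER config_region_check
    AFTER INSERT OR UPDATE ON config
    DEFERRABLE INITIALLY IMMEDIATE
    FOR EACH ROW EXECUTE PROCEDURE check_config_region();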



cheers,

Chris



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] pg_multixact issues

2016-02-11 Thread Achilleas Mantzios
1:55 000F
-rw-------   1 postgres dba   262144 Jan 22 07:50 0010
-rw-------   1 postgres dba   262144 Jan 26 16:35 0011
-rw-------   1 postgres dba   262144 Jan 29 10:16 0012
-rw-------   1 postgres dba   262144 Feb  3 13:17 0013
-rw-------   1 postgres dba   262144 Feb  3 16:13 0014
-rw-------   1 postgres dba   262144 Feb  4 08:24 0015
-rw-------   1 postgres dba   262144 Feb  5 13:20 0016
-rw-------   1 postgres dba   262144 Feb  8 11:26 0017
-rw-------   1 postgres dba   262144 Feb  8 11:46 0018
-rw-------   1 postgres dba   262144 Feb  8 12:25 0019
-rw-------   1 postgres dba   262144 Feb  8 13:19 001A
-rw-------   1 postgres dba   262144 Feb  8 14:23 001B
-rw-------   1 postgres dba   262144 Feb  8 15:32 001C
-rw-------   1 postgres dba   262144 Feb  8 17:01 001D
-rw-------   1 postgres dba   262144 Feb  8 19:19 001E
-rw-------   1 postgres dba   262144 Feb  8 22:11 001F
-rw-------   1 postgres dba   262144 Feb  9 01:44 0020
-rw-------   1 postgres dba   262144 Feb  9 05:57 0021
-rw-------   1 postgres dba   262144 Feb  9 10:45 0022
-rw-------   1 postgres dba    98304 Feb 10 13:35 0023

the members directory has 15723 files:
ls -l|wc -l
   15723



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Achilleas Mantzios

On 10/02/2016 12:40, Karsten Hilbert wrote:

On Wed, Feb 10, 2016 at 10:34:53AM +0200, Achilleas Mantzios wrote:

PG on tankers:


About checksums in our office master DB: that's a fine idea; too bad 
pg_upgrade doesn't cope with them

I am sure you have considered "failing over" the master to an
in-office slave which has got checksums turned on ?


Is that possible with standard streaming replication? As far as I know, the (master/hot standby) images have to be identical (no initdb involved). I guess you mean some sort of external 
(logical?) replication mechanism? We are trying to avoid initdb-and-restore for the obvious reasons.
But anyway, we have had streaming replication to a hot standby (non-checksum server) + WAL archiving for some years now. For 10+ years we survived (surprisingly!!) without those; we are better than ever 
now. BTW, the checksum feature would definitely make sense on our vessels, where vibrations and harsh conditions tend to affect hardware badly. Unfortunately, migrating from 8.3 is a huge 
project, which we cannot postpone forever and must deal with some day.




Karsten



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Achilleas Mantzios

On 10/02/2016 06:10, ioan ghip wrote:
I have a Firebird SQL database running on one of my servers which has about 50k inserts, about 100k updates and about 30k deletes every day. There are about 4 million records in 24 tables. I have a 
bunch of stored procedures, triggers, events and views that I'm using.
Firebird works fairly well, but from time to time the database gets corrupted, and I haven't yet figured out (after many years of running) what the reason is. When this happens I run "gfix -mend -full 
-ignore", back up and restore the db, and everything is fine until the next problem in a week, or a month.


I never used PostgreSQL. Yesterday I installed it on my development machine and 
after few tests I saw that it's fairly easy to use.

Does anyone have experience with both Firebird and PostgreSQL? Does PostgreSQL perform way better than Firebird? Is it worth the effort of moving away from Firebird? Would I gain stability and 
increased performance?


Thanks.



Hello,

we have been running over 100 PostgreSQL servers (8.3) on remote tanker vessels, 
in harsh conditions
under heavy vibration from both weather and machinery, on 
commodity PC workstations,
for years, and only one of them was damaged beyond repair (hardware failure, not 
PgSQL's fault).
In other cases, with databases corrupted due to heavily damaged disks, we 
managed to recover
and rescue all of the data except a few rows, which could be re-generated 
anyway.

PostgreSQL *is* a reliable DB.

About checksums in our office master DB: that's a fine idea; too bad 
pg_upgrade doesn't cope with them
(and upgrading without pg_upgrade is out of the question).

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] BSD initdb without ICU support and switch later

2016-01-29 Thread Achilleas Mantzios

On 29/01/2016 16:46, Maeldron T. wrote:

 Hello,

the ICU patch isn’t ready for PostgreSQL on FreeBSD.


https://people.freebsd.org/~girgen/postgresql-icu/readme.html



Is there any risk (more than 0) in executing the initdb without ICU
support and recompiling PostgreSQL later when the ICU patch is ready? I
mean any risk without making a dump and import before the switch.

If this is okay for sure, what should I do later when the ICU is
available? Do I have to reindex everything with the ICU patched database?

Thank you.

 M.






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Code of Conduct: Is it time?

2016-01-12 Thread Achilleas Mantzios

On 05/01/2016 18:47, Joshua D. Drake wrote:

Hello,

I had a hard time writing this email. I think Code of Conducts are non-essential, a waste of respectful people's time and frankly if you are going to be a jerk, our community will call you out on 
it. Unfortunately a lot of people don't agree with that. I have over the course of the last year seen more and more potential users very explicitly say, "I will not contribute to a project or attend 
a conference that does not have a CoC".


Some of us may be saying, "Well we don't want those people". I can't argue with some facts though. Ubuntu has had a CoC[1] since the beginning of the project and they grew exceedingly quick. Having 
walls in the hallway of interaction isn't always a bad thing.


In reflection, the only thing a CoC does is put in writing what behaviour we as 
a project already require, so why not document it and use it as a tool to 
encourage more contribution to our project?

Sincerely,

JD


1. http://www.ubuntu.com/about/about-ubuntu/conduct


Well, while I don't have a strong opinion, since after 16+ years I don't think I am 
going anywhere away from PostgreSQL, let me share my initial feelings about the 
community.
It was back in 2003: I had already spent 3 years with the database, was just starting to implement our own hierarchical solution based on postgresql arrays and the intarray contrib module, and was heavily 
hacking DBMirror, when someone (high ranking) on -sql called me a "newbie".
My immediate reaction was to start looking for alternatives. Obviously I failed (no OS DB was this good). At other times I had my favorite OS (FreeBSD) bashed by pgsql ppl, but I held on; I am still 
here, and ppl at pgsql conferences now talk about a company that has deployed over 100 pgsql installations on the seven seas, communicating over satellite via a hacked version of uucp and replicated via 
a heavily hacked version of DBMirror.


So while I think that a CoC might help beginners stay, I don't think it plays a 
major part, nor do I think that ppl will easily conform to it.

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] xa compatibility

2015-11-10 Thread Achilleas Mantzios

On 10/11/2015 09:34, Xaver Thum wrote:


Hi all,

is there an option (provided by Postgres) for accessing a Postgres DB via the 
standard XA interface?

I don't mean the usage of JDBC's class PGXADataSource,
but the usual XA methods xa_open, xa_prepare, xa_commit, ... of the XA standard.
  


You might look here : 
http://www.enterprisedb.com/postgres-plus-edb-blog/ahsan-hadi/edb-advances-xa-compatibility


Thanks in advance,
  
Xaver










--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Recursive Arrays 101

2015-11-05 Thread Achilleas Mantzios

On 04/11/2015 17:53, Rob Sargent wrote:

On 11/04/2015 03:03 AM, Achilleas Mantzios wrote:

Sorry for being kind of late to the party (I was at 2015.PgConf.EU!!) and for not 
having read most of the replies. What we have been doing successfully for this 
problem in our app is the following:
parents int[] -- where parents stores the path from the node to the root of the 
tree
and then have those indexes :
btree (first(parents))
btree (level(parents)) -- length
btree (last(parents))
gin (parents gin__int_ops) -- the most important

This has been described as the "genealogical tree" approach, and it works very well, 
IMHO much better
than nested sets.

Is there a more complete description of this approach available? By the title one might assume it could be applied to populations as opposed to phylogeny (the OP's use case).  Does it deal with 
consanguinity?  Does it perform well going "up" the tree (which is of course branched at every level)?


From here https://en.wikipedia.org/wiki/Phylogenetic_tree I assume that 
phylogenetic trees are normal
trees, and I see no reason why they could not be modeled with the genealogical 
approach described. The earliest paper
I based my work on was:
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CCUQFjABahUKEwiR6auUlvnIAhXGvhQKHVyDA-s&url=https%3A%2F%2Fdownload.samba.org%2Fpub%2Funpacked%2Fldb%2Fldb_sqlite3%2Ftrees.ps&usg=AFQjCNEktJsibP435MBki5cdGmO_CzKmwg&sig2=I9yC_tpyeWrEueDJTXbyAA&bvm=bv.106674449,d.d24&cad=rja

Finding the root is O(1). Going "up" the tree or finding common ancestry is 
reduced to the problem
of finding overlap/intersections/contains/contained between postgresql arrays.

The indexes, functions and operators provided by contrib/intarray were a basic 
element for the success of this
approach.
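
A minimal sketch of the layout (table name illustrative; first/last/level above are 
our own helpers, not part of intarray):

CREATE EXTENSION intarray;

CREATE TABLE node (
    id      serial PRIMARY KEY,
    parents int[] NOT NULL    -- path from this node up to the root
);

CREATE INDEX node_parents_gin ON node USING gin (parents gin__int_ops);

-- whole subtree under node 42: every node whose path contains 42
SELECT * FROM node WHERE parents @> '{42}';

-- the ancestors of a node are simply the elements of its own path
SELECT unnest(parents) FROM node WHERE id = 4711;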


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Recursive Arrays 101

2015-11-04 Thread Achilleas Mantzios

Sorry for being kind of late to the party (I was at 2015.PgConf.EU!!) and for not 
having read most of the replies. What we have been doing successfully for this 
problem in our app is the following:
parents int[] -- where parents stores the path from the node to the root of the 
tree
and then have those indexes :
btree (first(parents))
btree (level(parents)) -- length
btree (last(parents))
gin (parents gin__int_ops) -- the most important

This has been described as the "genealogical tree" approach, and it works very well, 
IMHO much better
than nested sets.

On 27/10/2015 14:46, David G. Johnston wrote:

On Monday, October 26, 2015, John R Pierce mailto:pie...@hogranch.com>> wrote:

On 10/26/2015 7:44 PM, David G. Johnston wrote:

​They both have their places.  It is usually quite difficult to 
automate and version control the manual work that goes into using command line 
tools.​


I hope you mean it's difficult to automate and version control the 
clickity-clicky work that goes into using GUI tools

automating shell scripts is trivial.   putting said shell scripts into 
version control is also trivial.


Yes, that is a typo on my part.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] checkpoints anatomy

2015-10-11 Thread Achilleas Mantzios

http://dba.stackexchange.com/questions/61822/what-happens-in-postgresql-checkpoint

and the now classic :
http://www.depesz.com/2011/07/14/write-ahead-log-understanding-postgresql-conf-checkpoint_segments-checkpoint_timeout-checkpoint_warning/

On 12/10/2015 04:39, Richardson Hinestroza wrote:

Hello, excuse me for my poor English; I am writing from Colombia and I am a 
PostgreSQL fan.

I want to know whether PostgreSQL checkpoints prevent current transactions from 
writing to the same page that is being flushed to disk by the checkpoint process.

I also want to know whether the PostgreSQL checkpoint uses the ARIES algorithm, and 
to learn technical details about PostgreSQL checkpoints.

I cannot find answers to my questions on the web.

I would appreciate your answer. Thanks a lot.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] DB on mSATA SSD

2015-04-23 Thread Achilleas Mantzios

On 23/04/2015 15:28, Vick Khera wrote:


On Thu, Apr 23, 2015 at 7:07 AM, Job mailto:j...@colliniconsulting.it>> wrote:

Are there some suggestions with SSD drives?
Putting the DB into RAM and backing up periodically to disk is a valid 
solutions?


I have some very busy databases on SSD-only systems. I think you're using SSDs 
that are not rated for server use.

Your strategy of using an in-RAM disk and backing up to the SSD is sensible, depending on what guarantees you need for the data to survive an unclean system shutdown. You will want to use a file system 
that allows you to snapshot and back up, or else use logical DB backups. Postgres 9.4 has some features that will make taking the backup from the file system much easier and cleaner, too.




FS snapshots are an option, but one should make sure that all file systems are 
snapshotted atomically, which is not very common unless you use ZFS or a similarly 
high-end FS.
Regarding filesystem-based backups, apart from pg_basebackup, which is a nice utility built on top of the existing "Continuous Archiving" philosophy, the underlying feature has been available 
since 8.*
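
For reference, the 8.x-era building blocks look like this (a sketch; WAL archiving 
must already be configured):

SELECT pg_start_backup('nightly');
-- ... copy the data directory at the filesystem level ...
SELECT pg_stop_backup();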


--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] PostgreSQL Inheritance and column mapping

2014-10-03 Thread Achilleas Mantzios

On 03/10/2014 05:54, Jim Nasby wrote:

On 10/2/14, 9:00 AM, Tom Lane wrote:

Achilleas Mantzios  writes:

Was there ever any discussion.thought about being able to follow a non-strict 
by name
column mapping between inherited tables and father tables?

No.  You could use a view with UNION ALL perhaps.

FWIW, I've had some less than stellar results with that (admittedly, back on 
8.4).

The other thing you could do is something like:

ALTER TABLE invoice_document RENAME TO invoice_document_raw;
ALTER TABLE invoice_document_raw RENAME invoice_no TO doc_no;
CREATE VIEW invoice_document AS
SELECT ...
, doc_no AS invoice_no
, ...
FROM invoice_document_raw
;

If you make that view writable then no one needs to know that you renamed the 
column in the underlying table.


That is a brilliant idea, thank you!
One problem is that the tables are part of a 100-node replication system based 
on a heavily hacked
version of DBMirror, over a non-TCP/IP satellite network. That would require 
rewriting rules
and deploying them across the remote nodes.
I would be afraid to run the ALTER TABLE ... RENAME TO command in this system.
So we could just bite the bullet and have our team rewrite all the programs.
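
For the record, spelled out against the invoice_document definition from my original 
post, the trick would look roughly like this (a sketch, untested):

ALTER TABLE invoice_document RENAME TO invoice_document_raw;
ALTER TABLE invoice_document_raw RENAME invoice_no TO doc_no;

CREATE VIEW invoice_document AS
SELECT id,
       doc_no AS invoice_no,
       date_entered,
       date_due
FROM invoice_document_raw;

-- and the INHERIT that used to fail would now succeed:
ALTER TABLE invoice_document_raw INHERIT purchase_document;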

--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL Inheritance and column mapping

2014-10-02 Thread Achilleas Mantzios

Hi,

Was there ever any discussion/thought about being able to follow a 
non-strict-by-name
column mapping between inherited tables and father tables?

If, for instance, someone wants to build a hierarchy on a schema of tables 
already defined
in an established production system, designed with no inheritance in mind, is 
there any workaround
or a way to map column names that are not common but are semantically similar?

E.g. Imagine we have the following tables in a legacy non-OO system :
create table receipt_document(id serial primary key,doc_no text, date_entered 
date);
create table invoice_document(id serial primary key,invoice_no text, 
date_entered date, date_due date);
.
.
Then at a (much) later date we decide we want to have an overview of all the 
documents having to do
with purchasing, or even add generic purchase documents for which no special 
application or structure exists
(as of yet)
We create the new generic table :
create table purchase_document(id serial primary key,doc_no text, date_entered 
date);
And then make this the father table to the two tables with the detailed data :

test=# alter table receipt_document INHERIT purchase_document ;
-- that works

test=# alter table invoice_document INHERIT purchase_document ;
ERROR:  child table is missing column "doc_no"

Here the problem is that invoice_document lacks the column doc_no, which 
semantically has the same meaning as invoice_no.

One workaround would be to rename the column and massively replace all 
occurrences of it in the applications.
However, I am just wondering whether it would be a good idea to extend the way PgSQL 
inheritance works and
have a mapping between columns as well, e.g. somehow denote that purchase_document.doc_no 
should be merged and "mapped" with invoice_document.invoice_no.

After all, generally speaking, invoices have invoice_no's while general docs 
have doc_no's, right?
So I think the above scenario can indeed be found many times in 
systems designed with no OO in mind.

--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why isn't Java support part of Postgresql core?

2014-09-15 Thread Achilleas Mantzios

On 15/09/2014 08:22, cowwoc wrote:

Hi,

Out of curiosity, why is Postgresql's Java support so poor? I am
specifically looking for the ability to write triggers in Java.

I took a look at the PL/Java project and it looked both incomplete and dead,
yet other languages like Javascript are taking off. I would have expected to
see very strong support for Java because it's the most frequently used
language on the server-side.


This is far from dead. It works perfectly with Java 1.7 and PostgreSQL 9.3,
but you may need a bit of extra homework plus some skills with
Maven.
If I managed to build this on a FreeBSD machine, on Linux it should be a piece of 
cake.
The docs suck, granted, but the community is very much alive and helpful.
We use it in a production environment. We had some really complex Java code that
we were unwilling to port to pl/pgsql, so we gave pl/java a try.
It was worth it, and it has proven to be really stable: no JVM crashes after 2 
years in production.




What's going on? Why isn't this a core language supported alongside SQL,
Perl and Python as part of the core project?

Thanks,
Gili



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Why-isn-t-Java-support-part-of-Postgresql-core-tp5819025.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SQL query runs fine on one platform (FreeBSD), but hangs on another (Windows)

2014-04-29 Thread Achilleas Mantzios

On 29/04/2014 12:54, David Noel wrote:

'health'<>'' (if that is what you have) is a boolean expression that
compares the
literal 'health' with the empty literal ''; since they are never equal, it is of course always true.

Ah. Gotcha. Thanks. I didn't know you could use a single double
quotation mark in a query -- I thought like in most languages that you
needed two of them for it to be valid.

But there are two of them: ' and ' make ''. If you use only one, psql (the parser) 
will complain.



Maybe *health* is a column name somewhere ? In this case it should be
written :
"health" <> '' (i.e. comparison between the value of column "health" and the
literal value '')

'health' is one of the accepted values of the page table's
"Classification" column.

Many thanks,

-David



--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SQL query runs fine on one platform (FreeBSD), but hangs on another (Windows)

2014-04-29 Thread Achilleas Mantzios

On 29/04/2014 12:39, David Noel wrote:

Ehh, to clarify I'm referring to the lone _double_ quotation mark at
the end of the condition 'health'<>''. I called it a "single quotation
mark" because it was a quotation mark all by itself, but realize that
could be misread. Single quotation marks are technically this: '

" (double quotation mark) designates a column name, table name, and rest of 
database objects.
' (single quotation mark) designates a text literal e.g. 'john', 'david', etc...
'health'<>'' (if that is what you have) means a boolean expression that 
compares the
literal 'health' with the empty literal '' which is of course always false.
Maybe *health* is a column name somewhere ? In this case it should be written :
"health" <> '' (i.e. comparison between the value of column "health" and the 
literal value '')
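
A two-line illustration of the difference:

SELECT 'health' <> '';               -- two literals: always true
-- SELECT "health" <> '' FROM page; -- would reference a *column* named health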


Sorry for the newbie spam -- I can't run
less-than/greater-than/quotation marks through Google for answers.

On 4/29/14, David Noel  wrote:

select p.*, s.NoOfSentences
from page p,
  lateral (select count(*) as NoOfSentences
   from sentence s
   where s."PageURL" = p."URL") s
where "Classification" like case ... end
order by "PublishDate" desc
limit 100;

Great. Thanks so much!

Could I make it even simpler and drop the case entirely?

select p.*, s.NoOfSentences
from page p,
  lateral (select count(*) as NoOfSentences
   from sentence s
   where s."PageURL" = p."URL") s
where "Classification" like 'health'
order by "PublishDate" desc
limit 100;

I'm not sure what "case WHEN 'health'<>'' THEN 'health' ELSE '%' end"
does. I follow everything just fine until I get to the 'health'<>''
condition. What does the single quotation mark mean? I can't seem to
find it in the documentation.

-David




--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SQL query runs fine on one platform (FreeBSD), but hangs on another (Windows)

2014-04-29 Thread Achilleas Mantzios

On 29/04/2014 09:59, David Noel wrote:

The query I'm running is:

"select page.*, coalesce((select COUNT(*) from sentence where
sentence."PageURL" = page."URL" group by page."URL"), 0) as
NoOfSentences from page WHERE "Classification" LIKE CASE WHEN ''<>''
THEN '' ELSE '%' END ORDER BY "PublishDate" DESC Offset 0 LIMIT 100"


In all honesty, this query is very badly written. It seems like it was ported 
from some other
system. The inner GROUP BY in the coalesce is redundant, since the result is 
always one row;
moreover, it is wrong, since coalesce expects a scalar value. It hits the eye at 
first sight.
Additionally, ''<>'' always returns false, so what's the purpose of the CASE 
statement?


I can post the table definitions if that would be helpful but I don't
have them on hand at the moment.

The gist of it though is that "page" and "sentence" are two tables.
page.URL maps to sentence.PageURL. The page table has the columns
"Classification", and "PublishDate". URL, PageURL, and Classification
are strings. PublishDate is a timestamp with timezone.

Both queries are run from a Java project using the latest JDBC driver.
The PostgreSQL Server versions it's being run on are 9.2 and 9.3. The
query executes and returns just fine when run on a FreeBSD-based
platform, but executes forever when run under Windows.

Does anyone have any idea why this might be happening? Are there
platform/syntax compatibility issues I'm triggering here that I'm
unaware of? Is there something wrong with the query?

We're going to try to test it under Linux too, but that system will
have to be set up first so it might be a while before we know those
results.

Any thoughts would be appreciated,


Try to re-write the query in a good form, and then run EXPLAIN ANALYZE on 
both systems
to see what's wrong.
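
As a starting point, something along these lines (a sketch: since ''<>'' is constant, 
the CASE collapses to LIKE '%', and a scalar count(*) subquery always returns exactly 
one non-NULL row, so the coalesce and the GROUP BY can both go):

SELECT p.*,
       (SELECT count(*)
        FROM sentence s
        WHERE s."PageURL" = p."URL") AS "NoOfSentences"
FROM page p
WHERE "Classification" LIKE '%'   -- matches any non-NULL value
ORDER BY "PublishDate" DESC
LIMIT 100;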



David Noel





--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux vs FreeBSD

2014-04-11 Thread Achilleas Mantzios

On 11/04/2014 15:05, Alban Hertroys wrote:
Although it is getting a bit specific, would you care to elaborate on why you would advise strongly against using ports or packages for Postgres on FreeBSD? That's a rather strong statement 
you're making, and so far the only argument I've seen is that there is no port for pl/java. I'm curious as to why you are so strongly set on custom-compiling Postgres. BTW, isn't the usual solution 
to a missing port to create your own (local) port? I can't claim I have ever done that (never needed to), but apparently that's the way to go about it. The obvious benefit is that it will fit in 
with the package management system, while you could even provide the port to others if you'd be willing to take responsibility for maintaining that port.


pl/java has nothing to do with this. The argument against using packages/ports 
for postgresql
upgrades is that upgrades in general involve:
- reading HISTORY thoroughly and understanding every bit of it, especially the 
migration part,
and the changes part
- backing up the current database
- installing the new binaries
- running pg_upgrade
- solving problems that pg_upgrade detects and trying again
- testing your in house C/Java/etc... functions
- testing your whole app + utilities against the new version

Now, tell me, how much of this can the /usr/ports/databases/postgresqlXX-server 
port do?
Would you trust the system to do this for you in an automated, maybe weekly, pkg 
upgrade task
that would handle e.g. cdrecord and postgresql-xxx in the same manner?

Now, about writing ports: I can tell you this is a PITA. It's a great concept, 
but you must truly
commit a slot of your life to maintaining the port you submitted. 
This could be fun at first,
but in the long run it is not easy.


Fair enough.


You are welcome :)


Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.






--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux vs FreeBSD

2014-04-11 Thread Achilleas Mantzios

On 11/04/2014 13:05, Alban Hertroys wrote:

On 11 Apr 2014, at 8:04, Achilleas Mantzios  
wrote:


Basically it goes beyond what ppl would describe as OS holly wars.
If one chooses to go by FreeBSD, then he better be prepared to handle the 
burden, both the part that is
imposed by the OS administration itself, as well as the part that is a side 
effect of the different base system.

Example of admin part :
Generally, compiling postgresql from source gives more freedom than being stuck with 
the OS's ports or PKGng
system (the latter being a very handy and welcome addition to FreeBSD).
Now what if e.g. the user wants pgsql software X (e.g. pgadmin3, p5-Pg, etc...) 
only to find out that most of those
ports need postgresql client as a dependency. He/she must be prepared to work 
his way through :
- manual installations (gmake config && gmake && gmake install)
- /usr/ports
- PKG binary installations
in decreasing order of freedom but increasing order of easiness, and in many 
cases work through a combination
of the above.

That argument holds for any package system on any OS I know of. Once you start 
custom compiling things outside the control of the package management system, 
you’re on your own.

I am not against FreeBSD in any way; as a matter of fact I have been striving for 
about 20 years
to keep it alive at least in my working environment, it being my primary 
development workstation.

Custom compiling may give more freedom, but it’s hardly ever necessary on 
FreeBSD. For example, the only ports that I ever had to custom compile were 
ports for software I was developing, which of course no package management 
system can keep track of.

Try to install/set up PgSQL-backed OpenLDAP with unixODBC when your KDE has 
iodbc as a prerequisite.
Or try to install pljava, for which of course no OS port / OS package / PgSQL 
extension exists yet.
Good luck with any of those.

In general, the various options the port Makefile provides for customisation 
are quite sufficient. It’s a plus to the ports system that you get any options 
at all.


Example of base system part :
Recently I had to install pl/java on my FreeBSD workstation. There was a 
problem with libthr: postgresql had to be recompiled
with -lpthread set explicitly in 
/usr/local/src/postgresql-9.3.4/src/backend/Makefile; without this, the backend 
would simply hang
at the very first invocation of a Java function. This came after closely 
following email exchanges in various hackers groups
on both the pgsql and FreeBSD lists, describing the issue as accurately as 
possible, helping debug as much as possible, talking
to the right people, giving them an incentive to answer back, etc.

It seems to me that the reason you were custom compiling Postgres in the first 
place was a problem with the port. I’m sure tracking down the problem wasn’t 
easy, but that is not really relevant to the topic. Ports break sometimes (on 
any OS) and it would have been sufficient to contact the port maintainer about 
the issue.

No, I wasn't compiling postgresql from the standard distribution because of a 
problem with the port
(although the port had the same exact behavior).
I always run postgresql compiled by hand, since I see no reason to sacrifice my 
peace of mind
for the short-lived joy of going with the ports or PKGng system.
As a matter of fact, PostgreSQL is among the few software packages for which I would 
advise strongly
against using ports or pkgs of any kind. It might work in Debian; I would not risk 
it in FreeBSD.

For a quick (temporary) fix, you could probably have fixed the port by editing 
the port Makefile. With that, there’s no reason anymore to “custom compile” 
postgres and it leaves the dependency tracking of the port in place. Editing 
Makefiles is indeed not for everyone, but at least you _can_ do that on 
FreeBSD. Not every package management system will let you do that.

Sure, but the way to do this is not by simply editing a Makefile, but by 
writing an extra patch
inside /usr/ports/databases/postgresql93-server/files/, which is more burden 
than convenience.

And yes, I have edited Makefiles, although the need hasn’t risen recently.

With plain vanilla ports it is rarely needed.

I don't mean to scare the OP, but FreeBSD is not for everyone.

And that (again) could be said about any OS. Even Windows or OS X.
It depends on what you intend to use it for and what prior experience, 
preconceptions and expectations you might have.

Playing with words aside, going with FreeBSD is not for the average Ubuntu user.


Oh, and please try not to top-post when replying on this list.

I did so just for this message, because I did not feel it appropriate to quote 
anything that the previous
poster wrote.

On 11/04/2014 00:50, Jan Wieck wrote:

On 04/10/14 17:25, Christofer C. Bell wrote:

I'm not wanting to get after anyone here, but I want it on the record
that I am not the source of the above quote discouraging the use of
Ubuntu in a server role.  That would be Bruce Momjian.

Re: [GENERAL] Linux vs FreeBSD

2014-04-10 Thread Achilleas Mantzios

Basically it goes beyond what ppl would describe as OS holly wars.
If one chooses to go by FreeBSD, then he better be prepared to handle the 
burden, both the part that is
imposed by the OS administration itself, as well as the part that is a side 
effect of the different base system.

Example of admin part :
Generally, compiling postgresql from source gives more freedom than being stuck with 
the OS's ports or PKGng
system (the latter being a very handy and welcome addition to FreeBSD).
Now what if e.g. the user wants pgsql software X (e.g. pgadmin3, p5-Pg, etc...) 
only to find out that most of those
ports need postgresql client as a dependency. He/she must be prepared to work 
his way through :
- manual installations (gmake config && gmake && gmake install)
- /usr/ports
- PKG binary installations
in decreasing order of freedom but increasing order of easiness, and in many 
cases work through a combination
of the above.

Example of the base system part:
Recently I had to install pl-java on my FreeBSD workstation. There was a
problem with libthr: postgresql had to be recompiled with -lpthread explicitly
set in /usr/local/src/postgresql-9.3.4/src/backend/Makefile; without this the
backend would simply hang at the very first invocation of a java function.
This came after closely following email exchanges with various hackers groups
on both the pgsql and FreeBSD lists, describing the issue as accurately as
possible, helping debug as much as possible, talking to the right people,
giving them incentive to answer back, etc.
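A minimal sketch of that kind of workaround, assuming a stock source tree (the
exact placement of the flag inside the Makefile is illustrative):

$ cd /usr/local/src/postgresql-9.3.4
$ echo 'LIBS += -lpthread' >> src/backend/Makefile   # placement illustrative
$ gmake clean && gmake && gmake install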

I don't mean to scare the OP, but FreeBSD is not for everyone.

On 11/04/2014 00:50, Jan Wieck wrote:

On 04/10/14 17:25, Christofer C. Bell wrote:

I'm not wanting to get after anyone here, but I want it on the record
that I am not the source of the above quote discouraging the use of
Ubuntu in a server role.  That would be Bruce Momjian. While Bruce is
entitled to his opinion, it's not one I agree with and I don't want a
Google search years from now to tie my name to that viewpoint.


Who (in their right mind) would ever think of anything but BSD in a server role?




Jan




--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux vs FreeBSD

2014-04-04 Thread Achilleas Mantzios

As a side note, when we migrated the exact same pgsql 8.3 system from Linux
kernel 2.6 to 3.6, we experienced a dramatic slowdown of almost 6 times.
Linux kernels were known to have issues around those releases, I recall.
We had to set synchronous_commit to off; this gave a huge boost,
but it was no longer "apples" vs "apples".
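For reference, a hedged sketch of that change (paths are illustrative; the
setting trades durability of the most recent commits for commit latency, it
does not risk corruption):

$ echo 'synchronous_commit = off' >> $PGDATA/postgresql.conf
$ pg_ctl -D $PGDATA reload
$ psql -c "SHOW synchronous_commit"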

On 04/04/2014 07:03, François Beausoleil wrote:

Hi all!

Does PG perform that much better on FreeBSD? I have some performance issues on 
a Ubuntu 12.04 which I'd like to resolve. iowait varies a lot, between 5 and 
50%. Does FreeBSD better schedule I/O, which could alleviate some of the 
issues, or not at all? I have no experience administering FreeBSD, but I'm 
willing to learn if I'll get some performance enhancements out of the switch.

Our workload is lots of data import, followed by many queries to summarize 
(daily and weekly reports). Our main table is a wide table that represents 
Twitter and Facebook interactions. Most of our reports work on a week's worth 
of data (table is partitioned by week), and the tables are approximately 25 GB 
plus 5 GB of indices, per week. Of course, while reports are ongoing, we're 
also importing next week's data.

The host is a dedicated hardware machine at online.fr: 128 GB RAM, 2 x 3TB disk 
in RAID 1 configuration.

I started thinking of this after reading "PostgreSQL pain points" at 
https://lwn.net/Articles/591723/. In the comments, bronson says FreeBSD does not exhibit 
the same problems (slow fsync, double buffering). On the list here, I've read about 
problems with certain kernel versions on Ubuntu.

I'm not expecting anything magical, just some general guidelines and hints.
Did anybody do the migration, and were they happier afterwards?

Thanks for any hints!
François Beausoleil

$ uname -a
Linux munn.ca.seevibes.com 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 
UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

$ psql -U postgres -c "select version()"
                           version
-------------------------------------------------------------
 PostgreSQL 9.1.11 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit

/proc/cpuinfo says: 8 CPUs, identified as "Intel(R) Xeon(R) CPU E5-2609 0 @ 
2.40GHz"




--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux vs FreeBSD

2014-04-04 Thread Achilleas Mantzios

FreeBSD is OK if you are experienced. As a system it requires much more
maturity from the admin than, let's say, Ubuntu, which is targeted at a larger
user base.
I'd say explore your other Linux options first, since you already have
experience with Linux.
FreeBSD has a much steeper learning curve.

On 04/04/2014 07:03, François Beausoleil wrote:

Hi all!

Does PG perform that much better on FreeBSD? I have some performance issues on 
a Ubuntu 12.04 which I'd like to resolve. iowait varies a lot, between 5 and 
50%. Does FreeBSD better schedule I/O, which could alleviate some of the 
issues, or not at all? I have no experience administering FreeBSD, but I'm 
willing to learn if I'll get some performance enhancements out of the switch.

Our workload is lots of data import, followed by many queries to summarize 
(daily and weekly reports). Our main table is a wide table that represents 
Twitter and Facebook interactions. Most of our reports work on a week's worth 
of data (table is partitioned by week), and the tables are approximately 25 GB 
plus 5 GB of indices, per week. Of course, while reports are ongoing, we're 
also importing next week's data.

The host is a dedicated hardware machine at online.fr: 128 GB RAM, 2 x 3TB disk 
in RAID 1 configuration.

I started thinking of this after reading "PostgreSQL pain points" at 
https://lwn.net/Articles/591723/. In the comments, bronson says FreeBSD does not exhibit 
the same problems (slow fsync, double buffering). On the list here, I've read about 
problems with certain kernel versions on Ubuntu.

I'm not expecting anything magical, just some general guidelines and hints.
Did anybody do the migration, and were they happier afterwards?

Thanks for any hints!
François Beausoleil

$ uname -a
Linux munn.ca.seevibes.com 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 
UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

$ psql -U postgres -c "select version()"
                           version
-------------------------------------------------------------
 PostgreSQL 9.1.11 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit

/proc/cpuinfo says: 8 CPUs, identified as "Intel(R) Xeon(R) CPU E5-2609 0 @ 
2.40GHz"




--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL with ZFS on Linux

2014-01-16 Thread Achilleas Mantzios

http://www.unix-experience.fr/2013/2451/

FreeBSD is also a very mature platform for ZFS/postgresql.
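
For anyone trying this, a hedged sketch of commonly cited ZFS dataset settings
for a PostgreSQL data directory (pool/dataset names are illustrative; benchmark
before adopting):

$ zfs create -o recordsize=8k -o compression=lz4 -o atime=off tank/pgdata
  # recordsize=8k matches PostgreSQL's 8 kB page size
$ zfs get recordsize,compression,atime tank/pgdata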

On 16/01/2014 11:57, Sébastien Lorion wrote:

On Thu, Jan 16, 2014 at 4:22 AM, Sébastien Lorion <s...@thestrangefactory.com> wrote:

Hello,

Since ZFS on Linux (http://zfsonlinux.org/) has been declared production 
ready last March (v0.6.1), I am curious if anyone is using it with PostgreSQL 
on production servers (either main or
backup) and if so, what is their experience so far ?

Thank you,

Sébastien


FYI, a recent (Sept. 2013) presentation I found about using ZoL in production 
(albeit, not with PostgreSQL) and the current status of the project:

http://lanyrd.com/2013/linuxcon-north-america/scqmfb/



--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt


