Re: [GENERAL] journaled FS and WAL

2016-10-14 Thread Michael Paquier
On Fri, Oct 14, 2016 at 11:27 PM, Albe Laurenz  wrote:
> After a successful commit, the WAL file and its metadata are on disk.
> Moreover, the file metadata won't change (except for the write and access
> timestamps) because WAL files are created with their full size and never
> extended, so no WAL file should ever get "lost" because of partial metadata
> writes.

This behavior also depends on the value of wal_sync_method: with fdatasync,
for example, the metadata is not flushed. As Albe already mentioned, that
does not matter for WAL segments, but the choice does impact performance.
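
For example, you can check and change the setting from SQL (just a sketch;
ALTER SYSTEM needs 9.4 or newer, and this parameter takes effect on a
configuration reload):

SHOW wal_sync_method;                    -- often fdatasync on Linux
ALTER SYSTEM SET wal_sync_method = 'fdatasync';
SELECT pg_reload_conf();                 -- apply the change without a restart
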
-- 
Michael




Re: [GENERAL] [Bucardo-general] doubts about target db?

2016-10-14 Thread Periko Support
Thanks Rakesh.

Let me explain my reason for having a replica server.

We run Odoo server 7.x on Ubuntu 14.

We are now working on a BI system (Pentaho), and we want to run those
heavy reports on the replica while leaving the current production
load on the master.

This is why I was thinking of an M-S setup.

Why not use the PostgreSQL built-in replication tools? We tried, but
Odoo has a behavior where running a report tries to update some internal
tables, which keeps us from running our reports on the slave (streaming
replica), because the slave is in read-only mode.

So we started searching for a solution and came to Bucardo.

Now, the BI server will be used just for that.

What setup do you recommend for my case, M-M or M-S?

Thinking about the future, I could see an M-M-S setup: if master-1
fails we still have the 2nd master, nothing is lost, and our
current S keeps running. Does Bucardo let me do this?

Back to the Bucardo setup: during setup, how do I tell Bucardo
whether it is M-M or M-S?

Sorry for so many questions; I want to understand all this and write my
own manual. I see a lot of Bucardo settings in the man page, but not
many examples.

YouTube and Google are giving me a lot of info.

Thanks for your time and help!!!

On Fri, Oct 14, 2016 at 8:54 AM, Rakesh Kumar
 wrote:
>>One other thing: with Bucardo, what is the difference between a
>>master-master replica and a master-slave one? In the end, in both
>>cases the user has a 2nd DB with R/W permission?
>
> I think in M-M mode, updates from both dbs will be replicated to the other
> db; in other words, they will be eventually consistent.
> In M-S mode, even though slaves can be updated by the app, their updates
> will not be replicated back to the master. While updates from M->S happen
> all the time, updates from S will not go to M.
>




Re: [GENERAL] [Bucardo-general] doubts about target db?

2016-10-14 Thread Rakesh Kumar
>One other thing: with Bucardo, what is the difference between a
>master-master replica and a master-slave one? In the end, in both
>cases the user has a 2nd DB with R/W permission?

I think in M-M mode, updates from both dbs will be replicated to the other
db; in other words, they will be eventually consistent.
In M-S mode, even though slaves can be updated by the app, their updates
will not be replicated back to the master. While updates from M->S happen
all the time, updates from S will not go to M.
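
As far as I recall, the difference is expressed in the role you give each
database when you define the sync; a rough sketch with made-up names
(Bucardo 5 syntax from memory, so double-check against the docs):

# master-slave: changes flow only from A to B
bucardo add sync ms_sync relgroup=mygroup dbs=A:source,B:target

# master-master: both sides are sources, so changes flow both ways
bucardo add sync mm_sync relgroup=mygroup dbs=A:source,B:source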





Re: [GENERAL] [Bucardo-general] doubts about target db?

2016-10-14 Thread Periko Support
Thanks Rosser for your answer.

One other thing: with Bucardo, what is the difference between a
master-master replica and a master-slave one? In the end, in both cases
the user has a 2nd DB with R/W permission?

Thanks.

On Thu, Oct 13, 2016 at 9:40 PM, Rosser Schwarz
 wrote:
> On Thursday, October 13, 2016, Periko Support 
> wrote:
>>
>> 1) My target db must have all the tables created without data on it before
>> sync?
>
>
> Yes, tables must exist on the target. Generally, it's easiest to have them
> empty, but the bigger concern is conflicts with data from the master.
>
>>
>> 2) In a master-master or master slave, bucardo is best to be reside in the
>> main master side?
>
>
>  It doesn't really matter, as long as it can reach all the dbs it's
> managing. I've run it on the master, on the slave, and on another host
> entirely.
>
> rls
>
>
>
> --
> :wq




Re: [GENERAL] Multiple multithreaded insert

2016-10-14 Thread FarjadFarid(ChkNet)
Hi, 

 

Personally, where possible, I would always use the internal features of any
DB engine. These are pre-compiled, tested for performance, and ingrained
into the system, so they can naturally be expected to be slightly faster
than even triggers. In the case of your question, why not use the serial
data type?
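
A minimal sketch of what I mean (untested; note that sequence-backed values
can leave gaps, because a rolled-back insert never returns its number):

create table testpr(
    id serial primary key,
    priority serial    -- drawn from its own sequence, no trigger needed
);

-- concurrent inserts each get a distinct, increasing priority
insert into testpr default values;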

 

As to the threads: in any OS, unsynchronised threads cannot guarantee the
sequence of operations.

 

Hope this helps. 

 

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Арсен Арутюнян
Sent: 14 October 2016 14:13
To: pgsql-general 
Subject: [GENERAL] Multiple multithreaded insert

 

Hi, everyone! 

I have a table:

create table testpr(id serial,priority integer,unique(priority) DEFERRABLE, 
primary key(id));

and a trigger which, when a row is inserted into this table, automatically
sets priority to the maximum existing value +1:

CREATE OR REPLACE FUNCTION PriorityCheck() RETURNS trigger AS $$
BEGIN
    NEW.priority := (SELECT coalesce(max(priority),0)+1 FROM testpr);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER InsertTrigger BEFORE INSERT ON testpr FOR EACH ROW
EXECUTE PROCEDURE PriorityCheck();

Will the priority values be sequential when multiple threads insert into
the table concurrently?

That is:

Thread 1: insert into testpr(priority) values (1), (1), (1), (1), (1), (1),
(1);

Thread 2: insert into testpr(priority) values (2), (2), (2), (2), (2), (2),
(2);

The result (priority):

Thread 1: (1) (2) (3) (4) (5) (6) (7)

Thread 2: (8) (9) (10) (11) (12) (13) (14)

-- 

Arsen Arutyunyan



Re: [GENERAL] Multiple multithreaded insert

2016-10-14 Thread Scott Marlowe
On Fri, Oct 14, 2016 at 7:12 AM, Арсен Арутюнян  wrote:
> Hi, everyone!
>
> I have a table:
>
> create table testpr(id serial,priority integer,unique(priority) DEFERRABLE,
> primary key(id));
>

This:

> and a trigger which, when added to this table, automatically sets priority
> as the maximum value +1

Leads to THIS:

> The result (priority):
>
> Thread 1: (1) (2) (3) (4) (5) (6) (7)
>
> Thread 2: (8) (9) (10) (11) (12) (13) (14)

If you have to have monotonically increasing priorities with no gaps,
that's the price you pay, unless you can pre-allocate them or
something.

Basically max(id)+1 is a db anti-pattern.
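
If the numbers really must be dense and ordered, the usual workaround is a
single-row counter table, which at least makes the serialization explicit.
A sketch (all names invented for illustration):

CREATE TABLE priority_counter (last_priority integer NOT NULL);
INSERT INTO priority_counter VALUES (0);

CREATE OR REPLACE FUNCTION next_priority() RETURNS integer AS $$
    -- the row lock taken by UPDATE queues concurrent callers, and a
    -- rollback rolls the counter back too, so the numbering stays gap-free
    UPDATE priority_counter
       SET last_priority = last_priority + 1
    RETURNING last_priority;
$$ LANGUAGE sql;

-- usage: INSERT INTO testpr(priority) VALUES (next_priority());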




[GENERAL] Multiple multithreaded insert

2016-10-14 Thread Арсен Арутюнян

Hi, everyone! 
I have a table:
create table testpr(id serial,priority integer,unique(priority) DEFERRABLE, 
primary key(id));
and a trigger which, when a row is inserted into this table, automatically
sets priority to the maximum existing value +1:
CREATE OR REPLACE FUNCTION PriorityCheck() RETURNS trigger AS $$
BEGIN
    NEW.priority := (SELECT coalesce(max(priority),0)+1 from testpr);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER InsertTrigger BEFORE INSERT on testpr for EACH ROW 
EXECUTE PROCEDURE PriorityCheck();
Will the priority values be sequential when multiple threads insert into
the table concurrently?
That is:
Thread 1: insert into testpr(priority) values (1), (1), (1), (1), (1), (1),
(1);
Thread 2: insert into testpr(priority) values (2), (2), (2), (2), (2), (2),
(2);
The result (priority):
Thread 1: (1) (2) (3) (4) (5) (6) (7)
Thread 2: (8) (9) (10) (11) (12) (13) (14)
--  Arsen Arutyunyan

Re: [GENERAL] journaled FS and WAL

2016-10-14 Thread Albe Laurenz
t.dalpo...@gmail.com wrote:
>   two questions related to the WAL.
> 
> 1) I read in the docs that a journaled FS is not important, since the WAL
> is itself a journal. But who guarantees that the WAL is written correctly?
> I know that it's sequential and that a partial WAL update can be discarded
> after a restart. But can I be sure that, without a journaled FS, a crash
> during a WAL update cannot corrupt anything already written to the WAL
> before my commit?

At commit time, the WAL is "synchronized": PostgreSQL instructs the operating
system to write the data to the physical medium (not just to a memory cache)
and to report success only if that write completed.

After a successful commit, the WAL file and its metadata are on disk.
Moreover, the file metadata won't change (except for the write and access
timestamps) because WAL files are created with their full size and never
extended, so no WAL file should ever get "lost" because of partial metadata
writes.

> 2) Let's suppose that I have one database and one table of 10 rows,
> each 256 bytes. Now, in a single SQL commit, I update row 10, row 3
> and row 8. How much should I expect the WAL to grow by (supposing
> no WAL segments are deleted)? I could guess 8192x3, but I'm not sure.

It will be that much immediately after a checkpoint, but for subsequent writes
to the same disk block only the actually changed parts of the data block will
be written to WAL.
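
If you want to check empirically, you can diff the WAL insert position
around the transaction; a sketch using the pre-10 function names ("mytable"
and the positions shown are only placeholders):

SELECT pg_current_xlog_location();   -- note the value, e.g. '0/16B2D50'
BEGIN;
UPDATE mytable SET val = val WHERE id IN (3, 8, 10);
COMMIT;
SELECT pg_current_xlog_location();   -- note the new value
-- bytes of WAL written between the two positions:
SELECT pg_xlog_location_diff('0/16B6C50', '0/16B2D50');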

Yours,
Laurenz Albe



Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES

2016-10-14 Thread Kevin Grittner
On Thu, Oct 13, 2016 at 5:26 PM, Thomas Munro
 wrote:
> On Fri, Oct 14, 2016 at 2:04 AM, Kevin Grittner  wrote:

>> Where do you see a problem if REPEATABLE READ handles INSERT/ON
>> CONFLICT without error?

> I think the ON CONFLICT
> equivalent might be something like the following (rather contrived)
> schedule, which happily commits if you comment out Peter's check:
>
> (1)  postgres=# create table bank_account (id int primary key, cash int);
> (1)  CREATE TABLE
> (1)  postgres=# begin transaction isolation level serializable ;
> (1)  BEGIN
>
> (2)  postgres=# begin transaction isolation level serializable ;
> (2)  BEGIN
>
> (1)  postgres=# select * from bank_account where id = 1;
> (1)  ┌────┬──────┐
> (1)  │ id │ cash │
> (1)  ├────┼──────┤
> (1)  └────┴──────┘
> (1)  (0 rows)
>
> (2)  postgres=# insert into bank_account values (1, 100);
> (2)  INSERT 0 1
>
> (1)  postgres=# insert into bank_account values (1, 200) on conflict do 
> nothing;
> (1)  ...waits for tx2...
>
> (2)  postgres=# commit;
> (2)  COMMIT
>
> (1)  INSERT 0 0
> (1)  postgres=# commit;
> (1)  COMMIT
>
> If tx1 ran before tx2, then it would have succeeded in inserting (1,
> 200), and tx2 would have failed with unique_violation.  If tx2 ran
> before tx1, then tx1's SELECT command would have seen (1, 100) and
> possibly taken a different course of action.  So this schedule is
> non-serializable, right?

Right.  This is a case that needs something done if we take out the
rather overzealous check that is there now.  Thanks for finding an
example.  The trick now is to generalize to find the boundaries of
what is a problem and what isn't, so we can know what we are aiming
for as an "ideal" solution, and compare possible solutions for how
close they come.

> If you remove ON CONFLICT DO NOTHING, then tx1 gets a unique_violation
> after tx2 commits, which is similar to the last case in
> read-write-unique-4.spec.  To be able to produce a cycle that SSI can
> detect, perhaps an INSERT containing an implicit uniqueness check
> would need to be modelled as a read followed by a write.  I couldn't
> make that work, but I'm not sure if it's sensible anyway: wouldn't
> overlapping transactions consisting of just a single INSERT with the
> same key then produce a false positive, instead of unique_violation in
> one transaction?

If two transactions simultaneously attempted an INSERT of the same
key, one would block (as it would now) and if the other
successfully committed the blocked transaction would then get a
serialization failure error.  If the transactions did not overlap
you would get a duplicate key error.  That would arguably be nicer
behavior than we have now.  I think that if, within a serializable
transaction, we internally add a predicate lock for each page as we
descend to the point of insertion on a unique index, we might get
exactly that behavior.

-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[GENERAL] journaled FS and WAL

2016-10-14 Thread t.dalpo...@gmail.com

Hi,
 two questions related to the WAL.

1) I read in the docs that a journaled FS is not important, since the WAL
is itself a journal. But who guarantees that the WAL is written correctly?
I know that it's sequential and that a partial WAL update can be discarded
after a restart. But can I be sure that, without a journaled FS, a crash
during a WAL update cannot corrupt anything already written to the WAL
before my commit?


2) Let's suppose that I have one database and one table of 10 rows,
each 256 bytes. Now, in a single SQL commit, I update row 10, row 3
and row 8. How much should I expect the WAL to grow by (supposing
no WAL segments are deleted)? I could guess 8192x3, but I'm not sure.


Regards
Pupillo




Re: [GENERAL] DB Corruption after running out of disk space?

2016-10-14 Thread Deven Phillips
NVM, I guess that I just didn't wait long enough for the re-index operation
to complete... We're good now...

On Fri, Oct 14, 2016 at 6:45 AM, Deven Phillips 
wrote:

> Here's hoping someone can help me... Overnight, our primary database
> server (part of a master/slave replication pair) ran out of disk space. I
> have cleaned up space now, but while performing some VACUUM ANALYZE
> commands I find that there is some corruption.
>
> I'm getting messages like: ERROR:  missing chunk number 0 for toast value
> 304627805 in pg_toast_16421
>
> I searched around and previous posts to the mailing list suggested a
> re-index of the pg_toast.pg_toast_16421 table, which I did to no effect.
> Still getting the error.
>
> Any suggestions or ideas would be appreciated.
>
> Thanks in advance!
>
> Deven Phillips
>


[GENERAL] DB Corruption after running out of disk space?

2016-10-14 Thread Deven Phillips
Here's hoping someone can help me... Overnight, our primary database server
(part of a master/slave replication pair) ran out of disk space. I have
cleaned up space now, but while performing some VACUUM ANALYZE commands I
find that there is some corruption.

I'm getting messages like: ERROR:  missing chunk number 0 for toast value
304627805 in pg_toast_16421

I searched around and previous posts to the mailing list suggested a
re-index of the pg_toast.pg_toast_16421 table, which I did to no effect.
Still getting the error.
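
For reference, the reindex I ran (using the toast table named in the error)
was simply:

REINDEX TABLE pg_toast.pg_toast_16421;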

Any suggestions or ideas would be appreciated.

Thanks in advance!

Deven Phillips