Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-18 Thread drum.lu...@gmail.com
> SELECT 'test_' || substring(uuid_generate_v4()::text, 1, 1);
>
> Or use a sequence, or just compute "ROW_NUMBER() OVER ()" and tack that on.
>
> David J.

Well.. I was able to do it by using:

> INSERT INTO dm.billables_links (billable_id) VALUES ((SELECT billable_id
> FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%'));
> INSERT INTO dm.billables_links (mobiuser_id) VALUES ((SELECT id FROM
> public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND name_last LIKE
> 'Dson%'));



The problem is that I need to do that at the same time, because of a
constraint:

ALTER TABLE dm.billables_links
  ADD CONSTRAINT cc_one_and_only_one_target CHECK ((("customer_id" IS
NOT NULL)::integer + ("role_id" IS NOT NULL)::integer + ("mobiuser_id"
IS NOT NULL)::integer) = 1);

I'm having trouble creating that SQL... can anyone help, please?

FYI - It has to be in the same transaction because the mobiuser_id must go
to the selected billable_id on the first select.
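
Something like the following should work - a hedged sketch of a single
INSERT that fills both columns at once, assuming each scalar subquery
returns exactly one row. Because both values land in the same row,
mobiuser_id is the one and only non-NULL target column, so the CHECK
constraint is satisfied:

INSERT INTO dm.billables_links (billable_id, mobiuser_id)
VALUES (
    (SELECT billable_id FROM junk.wm_260_billables2
      WHERE info ILIKE '%Alisha%'),
    (SELECT id FROM public.ja_mobiusers
      WHERE name_first LIKE 'Alisha%' AND name_last LIKE 'Dson%')
);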


Re: [GENERAL] Request - repeat value of \pset title during \watch iterations

2016-03-18 Thread David G. Johnston
On Thursday, March 17, 2016, Michael Paquier wrote:

> On Fri, Mar 18, 2016 at 8:16 AM, Tom Lane wrote:
> >> David Steele writes:
> >> On 3/17/16 7:00 PM, Tom Lane wrote:
> >>> The message I saw was post-1-March.  If it was in fact submitted in
> >>> time for 2016-03, then we owe it a review.
> >
> >> I meant to add the CF record and forgot:
> >> https://commitfest.postgresql.org/9/480
> >> It was added 2016-01-13 by Michael Paquier.
> >
> > OK, so it looks like David's 10-Mar patch was actually just a repost of
> > Michael's 28-Jan patch, which was already in the queue to be reviewed in
> > 2016-03 (and hasn't yet been).  So the addition to 2016-09 was simply
> > erroneous and should be deleted.
>
> My mistake I guess. I should have mentioned as well on this thread
> that I registered it.
>
>
And I didn't look hard enough on the CF site...sorry 'bout that.  All good
now and I've learned for next time.

David J.


[GENERAL] Drop only temporary table

2016-03-18 Thread Durumdara
Dear PG Masters!

As I have experienced, I can create a normal and a temp table with the same name.

create table x (kod integer);

CREATE TEMPORARY TABLE x (kod integer);

select tablename from pg_tables where schemaname='public'
 union all
select c.relname from pg_class c
join pg_namespace n on n.oid=c.relnamespace
where
n.nspname like 'pg_temp%'
and c.relkind = 'r'
and pg_table_is_visible(c.oid);

---

I can see two x tables.

As far as I can see, the DROP TABLE statement doesn't have a "temporary"
option to say which one to eliminate - the real one or the temporary one.

The same question arises with other DDL/DML statements - which is the
destination table, the real one or the temporary one?

"insert into x(kod) values(1)"

So what do you think about this problem?

I want to avoid removing any real table when closing a resource (= dropping
its temporary table).
How do I force "drop only temporary"? A prefix, an option, etc.?
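
A sketch of the usual answer (standard PostgreSQL behavior, using the x
table from the example above): temporary tables live in the session's
pg_temp schema, so schema-qualifying the name removes the ambiguity:

DROP TABLE pg_temp.x;   -- drops only the temporary x
DROP TABLE public.x;    -- drops only the permanent x

-- An unqualified "x" resolves to the temporary table first, because the
-- session's temporary schema is searched before the rest of the
-- search_path by default.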

Thanks for your help!

dd


[GENERAL] Deleting schema - saving up space - PostgreSQL 9.2

2016-03-18 Thread drum.lu...@gmail.com
Hi all,

I've got four servers:

1 - Master
2 - Slave Hot Standby (Same hardware)
3 - Slave Hot Standby (Same hardware)
4 - Slave Hot Standby (VM - Very slow machine)

On the master server, I've got a schema named "GORFS" with 80 GB, according
to this SQL:

> SELECT schema_name,
>pg_size_pretty(sum(table_size)::bigint),
>(sum(table_size) / pg_database_size(current_database())) * 100
> FROM (
>   SELECT pg_catalog.pg_namespace.nspname as schema_name,
>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>   FROM   pg_catalog.pg_class
>  JOIN pg_catalog.pg_namespace ON relnamespace =
> pg_catalog.pg_namespace.oid
> ) t
> GROUP BY schema_name
> ORDER BY schema_name


In that schema we have all the users' files: photos, notes, docs, etc.

We're migrating them to an NFS server, taking them out of the DB to save
space and also for performance reasons.

*QUESTION:*

Once the migration is completed, how can I remove the schema from the DB
and reclaim the space?

1 - The problem here is that a VACUUM FULL will lock the whole DB for
writes, am I right? My DB is 1.7 TB, so it will take a while, and the
system can't be offline.

   1. Migrate the files to the NFS server
   2. Delete the schema from the MASTER DB
   3. Put the slaves into read-only mode
   4. Run Vacuum FULL into the MASTER DB
   5. Once the vacuum is done, do a DUMP from the MASTER to the slaves
   (excluding the GORFS schema of course)

2 - I think this is the recommended option, but I have some questions
about doing this.

   1. Put a slave as a new Master
   2. Do a dump excluding the GORFS schema in the OLD master
   3. DELETE the old DB from the old master
   4. IMPORT the new dump file to the old master
   5. Turn the old master into the NEW master (*whatever has changed on the
   slave that became master - how will those changes get into the new master?*
   )
   6. Import the dump into the other slaves and make them re-sync from the
   new master
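
For what it's worth, a hedged note on the drop itself: dropping a schema
with CASCADE unlinks the data files of its tables when the transaction
commits, so that space goes back to the filesystem without a VACUUM FULL -
VACUUM FULL would only help compact bloat inside the tables that remain:

DROP SCHEMA gorfs CASCADE;  -- drops every object in the schema; the files
                            -- behind its tables are removed at commit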


Thank you.
Lucas


Re: [GENERAL] Partition

2016-03-18 Thread Sándor Daku
On 18 March 2016 at 10:55, Leonardo M. Ramé  wrote:

> Hi, I have read and re-read the Partitioning chapter (
> http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html), but I
> still don't see how to implement this use case:
>
> One table storing current data, let's call it the "master table", then one
> or more partitions with old data.
>
> For example, the master table is this:
>
> create table log(
>   idlog serial not null,
>   date_time timestamp not null default now(),
>   log varchar(255),
>   primary key(idlog)
> );
>
> The documentation says the master table should be empty, then a trigger
> must evaluate a condition, the date_time field for example, and insert the
> data in the corresponding table. This is a *rare* condition, because in the
> log example, new content is created without knowing its date and time in
> advance. For example:
>
> insert into log(log) values('log this please.');
>
> The date_time column will set the now() value.
>
> Now, by following the example, to create a child table I'll do
>
> create table log_old( ) inherits (log);
>
> This raises the first question: how can I define a *dynamic* check
> for "older than X days"? Is this possible?
>
> An idea (didn't test):
>
> check (date_time::date < now()::date - '30 day'::interval)
>
> Then, the trigger, after each insert should *move* old data to log_old.
>
> The only problem I see here is the master table isn't empty, but contains
> current data. The question is: will it work as expected? I mean, when I do
> "select * from log", will I get a *union* of new and old data?
>
>
I'm quite (but not completely) sure the dynamic constraint won't work.
Also, the log data will - I guess - always be current, so nothing would go
to the _old table, unless you keep nudging the records and use an update
trigger to move the data around.
Oh, and you should keep the parent table empty.
The correct way would be to define fixed date ranges for the child tables
and keep adding new ones as time advances (and dropping old ones if you
want):
log        -> parent
log_201603 -> child of log, check date_time > '2016-03-01'
              and date_time <= '2016-04-01'
log_201604 -> child of log, check date_time > '2016-04-01'
              and date_time <= '2016-05-01'
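
A minimal sketch of that layout in DDL - hedged, using the 9.1-style
inheritance partitioning from the linked documentation; the names and
ranges mirror the example above:

CREATE TABLE log_201603 (
    CHECK (date_time > '2016-03-01' AND date_time <= '2016-04-01')
) INHERITS (log);

CREATE TABLE log_201604 (
    CHECK (date_time > '2016-04-01' AND date_time <= '2016-05-01')
) INHERITS (log);

-- "select * from log" on the parent returns the union of the parent's own
-- rows and all children, and with constraint_exclusion the planner skips
-- children whose CHECK range excludes the queried date_time.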

Or take a look at the pg_partman extension, which promises to do the legwork
for you.

Regards,
Sándor.





Re: [GENERAL] Confusing deadlock report

2016-03-18 Thread Albe Laurenz
Thomas Kellerer wrote:
>> Can you determine what statements were executed in these transactions before 
>> the deadlock?
>> It was probably one of these that took the conflicting lock.
> 
> Unfortunately not. Statement logging is not enabled on that server 
> (space-constrained).
> 
> And while we know the statements that can possibly be executed by these parts 
> of the application,
> several of them depend on the actual data, so it's hard to tell which path 
> the two transactions
> actually used.

But that's where the solution to your problem must be...

Look at all statements that modify "alpha" and could be in the same transaction
with the INSERT to "bravo".

Yours,
Laurenz Albe



[GENERAL] which db caused postgres to stop serving due to wraparound prevention?

2016-03-18 Thread Steve Kehlet
Sorry, seems like such a noob problem, but I'm stumped. This is postgres
9.4.5. I'll post my custom settings if desired but I don't think they're
needed.

We recently had an issue where the autovacuumer wasn't starting because
postgres couldn't resolve the hostname 'localhost' (we had bad perms on
/etc/hosts). We're still working on getting that fixed on all affected
boxes.

In the meantime: today, one particular database unexpectedly stopped
serving with this error:

2016-03-17 12:31:52 EDT [5395]: [787-1] ERROR:  database is not accepting
commands to avoid wraparound data loss in database with OID 0
2016-03-17 12:31:52 EDT [5395]: [788-1] HINT:  Stop the postmaster and
vacuum that database in single-user mode.
You might also need to commit or roll back old prepared
transactions.

What has me confused is that I ran the following query to keep an eye on
this, and it seemed fine; the max(age(datfrozenxid)) was only about 330
million:

postgres=# select datname,age(datfrozenxid) from pg_database;
  datname  |    age
-----------+-----------
 mydb      | 330688846
 postgres  | 215500760
 template1 | 198965879
 template0 | 146483694
 mydb2     | 175585538
(5 rows)

We shut down postgres, started it in single-user mode, and VACUUMed each
database. Then postgres started up fine, and the crisis is averted, for now.

However my understanding must be wrong: I thought we could just look around
for max(age(datfrozenxid)), make sure it's "low" (<2 billion), and be sure
that this wouldn't happen. What am I misunderstanding?

And then, I can't tell which db has OID 0:

postgres=# SELECT oid,datname from pg_database;
    oid     |  datname
------------+-----------
      16422 | mydb
      12921 | postgres
          1 | template1
      12916 | template0
 1575433129 | mydb2
(5 rows)
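
Since the HINT also mentions old prepared transactions, one hedged check is
to list them - a forgotten prepared transaction holds back the xid horizon
without showing up in age(datfrozenxid):

SELECT gid, prepared, owner, database
FROM   pg_prepared_xacts
ORDER  BY prepared;
-- Anything old here can be finished with COMMIT PREPARED 'gid'
-- or ROLLBACK PREPARED 'gid'.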

Thank you for your help!


[GENERAL] Insert data in two columns same table

2016-03-18 Thread drum.lu...@gmail.com
I'm trying to insert data from TABLE A into TABLE B.

1 - Select billable_id from dm.billable
2 - Select mobiuser_id from ja_mobiusers
3 - Insert the billable_id and the mobiuser_id into the dm.billables_links
table.


*FYI -* It has to be in the same transaction because the mobiuser_id must
go to the selected billable_id on the first select.

Well... it would be something like:

> INSERT INTO dm.billables_links (billable_id) VALUES ((SELECT billable_id
> FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%'));
> INSERT INTO dm.billables_links (mobiuser_id) VALUES ((SELECT id FROM
> public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND name_last LIKE
> 'Dson%'));



The problem is that I need to do that at the same time, because of a
constraint:

ALTER TABLE dm.billables_links
  ADD CONSTRAINT cc_one_and_only_one_target CHECK ((("customer_id" IS
NOT NULL)::integer + ("role_id" IS NOT NULL)::integer + ("mobiuser_id"
IS NOT NULL)::integer) = 1);

I'm having trouble creating that SQL... can anyone help, please?


Re: [GENERAL] spurious /dev/shm related errors on insert

2016-03-18 Thread rob stone
On Fri, 2016-03-18 at 10:45 -0400, Michael Charnoky wrote:
> I should have noted: the application is using PostgreSQL 9.5.0,
> running on Ubuntu 14.04
> 
> Mike
> 
> On Fri, Mar 18, 2016 at 10:41 AM, Michael Charnoky wrote:
> > Hi, I'm seeing random errors from an application that is performing
> > DB inserts. The error happens spuriously and looks like this from
> > the application side:
> > 
> > could not open file "/dev/shm/postgres_apptable_47861701461760" for
> > reading: No such file or directory
> > 
> > The PostgreSQL logs show:
> > 
> > 2016-03-18 07:25:01 UTC ERROR:  could not open file
> > "/dev/shm/postgres_apptable_47861701461760" for reading: No such
> > file or directory
> > 2016-03-18 07:25:01 UTC STATEMENT:  COPY urltable FROM
> > '/dev/shm/postgres_apptable_47861701461760' USING DELIMITERS '#'
> > WITH NULL AS '\null';
> > 
> > Any clues? I couldn't find any similar issues reported by other
> > users. Thanks!
> > 
> > Mike
> > 
> > 


Running 9.5.1 installed from Debian packages: when Postgres is started,
it creates this file in /dev/shm:


-rw--- 1 postgres postgres 2316 Mar 19 05:24 PostgreSQL.1804289383

It's binary.

Bye.




Re: [GENERAL] Question about shared_buffer cache behavior

2016-03-18 Thread Rakesh Kumar
PG loads data into shared_buffers at the block level. Most likely it is
because the second SQL selects a different set of rows (from different
blocks) than the first SQL.

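A hedged way to verify where the blocks are coming from on the repeated
runs is EXPLAIN's BUFFERS option (table and column names illustrative):

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM mytable WHERE indexed_col = 42;

-- "Buffers: shared hit=N" counts blocks found in shared_buffers;
-- "read=N" counts blocks fetched from the OS page cache or disk.
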
On Fri, Mar 18, 2016 at 4:24 PM, Paul Jones  wrote:
> In Postgres 9.5.1 with a shared_buffer cache of 7Gb, a SELECT from
> a single table that uses an index appears to read the table into the
> shared_buffer cache.  Then, as many times as the exact same SELECT is
> repeated in the same session, it runs blazingly fast and doesn't even
> touch the disk.  All good.
>
> Now, in the *same* session, if a different SELECT from the *same* table,
> using the *same* index is run, it appears to read the entire table from
> disk again.
>
> Why is this?  Is there something about the query that qualifies the
> contents of the share_buffer cache?  Would this act differently for
> different kinds of indexes?
>
> PJ




Re: [GENERAL] Insert data in two columns same table

2016-03-18 Thread Adrian Klaver

On 03/16/2016 07:07 PM, drum.lu...@gmail.com wrote:

I see a lot of other problems: you have 3 independent tables. Your 2
queries (selects) return 2 independent results; you can't use those for
an insert into the 3rd table. And I think you are looking for an update,
not an insert. So you have to define how your tables are linked together
(join).

Can you explain how these tables are linked together?

Hi Andreas!

Well...

There are two tables that I need to get data from (dm.billables /
public.ja_mobiusers), and a third table (dm.billables_links) into which I
need to insert data from those two tables.

The table dm.billables_links has four (important) columns:

*billable_id / customer_id / role_id / mobiuser_id*

I wanna add data there. The data is not there yet, so it's not an UPDATE.

*1 -* select the billable_id: (SELECT1)
SELECT billable_id FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%'

*2 -* select the mobiuser_id: (SELECT2)
SELECT id FROM public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND
name_last LIKE 'Dadryl%'

*3 -* Insert those two values into the dm.billables_links table (EXAMPLE):
INSERT INTO dm.billables_links (billable_id, mobiuser_id) VALUES
(SELECT1, SELECT2);
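
Inlining the two selects as a join makes that example runnable - a sketch,
assuming each filter matches exactly one row (the CROSS JOIN of two
one-row results is then a single row):

INSERT INTO dm.billables_links (billable_id, mobiuser_id)
SELECT b.billable_id, m.id
FROM   junk.wm_260_billables2 b
CROSS  JOIN public.ja_mobiusers m
WHERE  b.info ILIKE '%Alisha%'
  AND  m.name_first LIKE 'Alisha%'
  AND  m.name_last LIKE 'Dadryl%';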


CREATE TABLE
*billables*
 (
 billable_id BIGINT DEFAULT
"nextval"('"dm"."billables_billable_id_seq"'::"regclass") NOT
 NULL,
 account_id BIGINT NOT NULL,
 code CHARACTER VARYING(64) NOT NULL,
 info "TEXT",
 CONSTRAINT pk_billables PRIMARY KEY (billable_id),
 CONSTRAINT uc_billable_code_unique_per_account UNIQUE
("account_id", "code"),
 );
CREATE TABLE
*billables_links*
 (
 billable_link_id BIGINT DEFAULT
"nextval"('"dm"."billables_links_billable_link_id_seq"'::
 "regclass") NOT NULL,
 billable_id BIGINT NOT NULL,
 customer_id BIGINT,
 role_id BIGINT,
 mobiuser_id BIGINT,
 CONSTRAINT pk_billables_links PRIMARY KEY
(billable_link_id),
 CONSTRAINT fk_billable_must_exist FOREIGN KEY
(billable_id) REFERENCES billables
 (billable_id),
 CONSTRAINT cc_one_and_only_one_target CHECK
((("customer_id" IS NOT NULL)::INTEGER +
("role_id" IS NOT NULL)::INTEGER +
("mobiuser_id" IS NOT NULL)::INTEGER) = 1)


Would it not be easier if, instead of customer_id, role_id and mobiuser_id,
you had id_type ('customer', 'role', 'mobi') and user_id (id)? Then you
could eliminate the CHECK, which as far as I can see is just restricting
entry to one user id anyway.



 );
CREATE TABLE
*ja_mobiusers*
 (
 id BIGINT DEFAULT
"nextval"('"ja_mobiusers_id_seq"'::"regclass") NOT NULL,
 clientid BIGINT DEFAULT 0,
[...]
 PRIMARY KEY (id),
 CONSTRAINT fk_account_must_exist FOREIGN KEY
(clientid) REFERENCES ja_clients (id)
 );

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [HACKERS] [GENERAL] Request - repeat value of \pset title during \watch iterations

2016-03-18 Thread Tom Lane
David Steele  writes:
> On 3/17/16 7:00 PM, Tom Lane wrote:
>> The message I saw was post-1-March.  If it was in fact submitted in
>> time for 2016-03, then we owe it a review.

> I meant to add the CF record and forgot:
> https://commitfest.postgresql.org/9/480
> It was added 2016-01-13 by Michael Paquier.

OK, so it looks like David's 10-Mar patch was actually just a repost of
Michael's 28-Jan patch, which was already in the queue to be reviewed in
2016-03 (and hasn't yet been).  So the addition to 2016-09 was simply
erroneous and should be deleted.

regards, tom lane




[GENERAL] Schema Size - PostgreSQL 9.2

2016-03-18 Thread drum.lu...@gmail.com
Hi all,

Can you please provide me a query that tells me how much space a schema
takes in my DB?

I'm using this one, but I don't think it's right:

> SELECT schema_name,
>pg_size_pretty(sum(table_size)::bigint),
>(sum(table_size) / pg_database_size(current_database())) * 100 as a
> FROM (
>   SELECT pg_catalog.pg_namespace.nspname as schema_name,
>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>   FROM   pg_catalog.pg_class
>  JOIN pg_catalog.pg_namespace ON relnamespace =
> pg_catalog.pg_namespace.oid
> ) t
> GROUP BY schema_name
> ORDER BY schema_name
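
A hedged variant that also counts indexes and TOAST data -
pg_relation_size() measures only a relation's main fork, while
pg_total_relation_size() includes indexes and TOAST, which is usually what
"how big is this schema" means:

SELECT n.nspname AS schema_name,
       pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint) AS total_size,
       round(sum(pg_total_relation_size(c.oid))
             / pg_database_size(current_database())::numeric * 100, 2) AS pct
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  c.relkind = 'r'
GROUP  BY n.nspname
ORDER  BY n.nspname;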


Re: [GENERAL] Confusing deadlock report

2016-03-18 Thread Albe Laurenz
Thomas Kellerer wrote:
> we have a strange (at least to me) deadlock situation which does not seem to 
> fall into the "usual"
> deadlock category.
> 
> The error as reported in the Postgres log file is this:
> 
> 2016-03-12 13:51:29.305 CET [23912]: [1-1] user=arthur,db=prod,app=[unknown] 
> ERROR: deadlock detected
> 2016-03-12 13:51:29.305 CET [23912]: [2-1] user=arthur,db=prod,app=[unknown] 
> DETAIL: Process 23912
> waits for ShareLock on transaction; blocked by process 24342.
> Process 24342 waits for ShareLock on transaction 39632974; blocked by 
> process 23912.
> Process 23912: UPDATE alpha SET some_flag = $1 WHERE (id = $2)
> Process 24342: INSERT INTO bravo (..., alpha_id) VALUES ($1, $2, $3, 
> $4, $5, $6, $7, $8, $9,
> $10)
> 
> (I have "obfuscated" the table names)
> 
> 
> Process 24342 did update table alpha in an earlier step, but a different row 
> than Process 23912
> updated.
> Table bravo has a foreign key to table alpha.
> 
> My understanding of the deadlock report is that the statements shown in the 
> log are the actual
> statements on which the two processes were waiting.
> 
> What I think is unusual in this situation is the INSERT statement that is 
> part of the deadlock
> situation.
> 
> The only way I can think of for a deadlock to happen during an insert is 
> if process 23912 had
> inserted a row into bravo with the same PK value that process 24342 is trying 
> to insert. But process
> 23912 never even touches that table, so I am a bit confused about how this 
> can happen.
> 
> Can the foreign key between bravo and alpha play a role here? With some 
> simple test setups I could not
> get the insert to wait even if it was referencing the row that the other 
> process has updated.
> 
> This happened on 9.3.10 running on Debian

The probable culprit is a foreign key between these tables.

What foreign keys are defined?
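
A minimal sketch of how a foreign key produces exactly this shape of
deadlock (hypothetical statements - the point is that an INSERT into
"bravo" takes a row-level lock, effectively SELECT ... FOR KEY SHARE, on
the referenced "alpha" row, and that lock conflicts with FOR UPDATE or
with an UPDATE that changes a key column):

-- session 1
BEGIN;
SELECT * FROM alpha WHERE id = 1 FOR UPDATE;

-- session 2
BEGIN;
SELECT * FROM alpha WHERE id = 2 FOR UPDATE;
INSERT INTO bravo (alpha_id) VALUES (1);  -- FK check waits for session 1

-- session 1 again
INSERT INTO bravo (alpha_id) VALUES (2);  -- FK check waits for session 2
                                          -- => deadlock detected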

Yours,
Laurenz Albe



Re: [GENERAL] ON CONFLICT DO NOTHING RETURNING

2016-03-18 Thread Peter Geoghegan
On Fri, Mar 18, 2016 at 9:14 AM, Jeff Janes  wrote:
> He wants to retrieve a value from the conflicting row.  Now getting
> the value that caused the conflict should be easy, because you
> provided it in the first place.   But he wants a value from a
> different column of the conflicting row than the column(s) on which
> there is conflict.  DO NOTHING RETURNING returns no rows.  Which is
> reasonable, because nothing was inserted.  But it isn't what he wants.

I see.

> I think the dummy update is his best bet, but it does seem like there
> should be a better way.  Maybe ON CONFLICT DO SELECT where the select
> operates over the target row.

Seems reasonable.
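
For reference, a minimal sketch of the dummy-update workaround mentioned
above (hypothetical table and columns): a no-op ON CONFLICT DO UPDATE makes
RETURNING yield the conflicting row as well:

INSERT INTO users (email, name)
VALUES ('a@example.com', 'Alice')
ON CONFLICT (email) DO UPDATE
    SET email = excluded.email   -- no-op assignment, just to "touch" the row
RETURNING id;                    -- id of the inserted OR pre-existing row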


-- 
Peter Geoghegan




[GENERAL] "Tardis" approach for history tables

2016-03-18 Thread Iván Perdomo
Hi all,

I'm trying to follow Magnus Hagander's [1] approach for keeping history
tables: basically, a copy of each table in a `history` schema, plus an
extra time-range column for the validity period of a row.

I made a basic setup [2] and the version logging part is working "as
expected": on each INSERT|UPDATE|DELETE my history table updates the
`_validrange`.

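A minimal sketch of the general pattern described above (assumed from the
description, not the exact code in [2]; the "items" table and its "id"
column are hypothetical):

CREATE TABLE history.items (LIKE public.items);
ALTER TABLE history.items ADD COLUMN _validrange tstzrange NOT NULL;

CREATE FUNCTION public.items_history() RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        -- close the open validity period of the previous version
        UPDATE history.items
           SET _validrange = tstzrange(lower(_validrange), now())
         WHERE id = OLD.id AND upper(_validrange) IS NULL;
    END IF;
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        -- open a new validity period for the current version
        INSERT INTO history.items
        SELECT NEW.*, tstzrange(now(), NULL);
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_history
    AFTER INSERT OR UPDATE OR DELETE ON public.items
    FOR EACH ROW EXECUTE PROCEDURE public.items_history();
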
I intend to use this approach in a "usual" web application, where
operations (INSERT|UPDATE|DELETE) happen frequently but not in high
volume (as in many cases, reads are more frequent). Another difference is
that we're not going to use an ORM, just plain SQL via JDBC.

My question is what edge case I'm missing by commenting out part of the
checks when dealing with the UPDATE operation [3].

I have tested the code with a single client making a transaction and
updating the row more than once, and I got the expected result. Only
the last operation gets reflected in the history table.

[1] http://www.hagander.net/talks/tardis_orm.pdf
[2] https://gist.github.com/iperdomo/74890685f3c76b0fc139
[3]
https://gist.github.com/iperdomo/74890685f3c76b0fc139#file-tardis-sql-L53-L58

Thanks for your support,

-- 
Iván


