Re: [GENERAL] Indexed views like SQL Server - NOT Materialized Views

2015-06-09 Thread William Dunn
Though I'm sure you've already looked into it, for your specific issue of
getting row counts:
- In PostgreSQL 9.2 and above this operation can be made much faster with
index-only scans, so ensure you are on a recent version and do your count on
an indexed column of a candidate key (for example, the primary key).
- An approximate row count is maintained in pg_stat_user_tables; if an
approximate value is acceptable, you can obtain one there very quickly.
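
(For illustration, a minimal sketch of both options, assuming a hypothetical
"topics" table whose primary key is "id"; the names are illustrative:)

-- Exact count; on 9.2+ this can be satisfied by an index-only scan of the
-- primary key index (provided the visibility map is mostly up to date):
SELECT count(id) FROM topics;

-- Fast approximate count from the statistics collector:
SELECT n_live_tup FROM pg_stat_user_tables WHERE relname = 'topics';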

As for PostgreSQL implementing Microsoft SQL Server features:
In general, Microsoft SQL Server is famous for its lack of standards
compliance, while PostgreSQL is famously ANSI/ISO standards compliant. If a
non-standard SQL Server feature has not been adopted by Oracle, DB2, or
the standards, PostgreSQL is unlikely to adopt it unless the feature is
very highly desired or a contributor has a deep interest. However, it is
more likely for non-standard features to be implemented as a PostgreSQL
plug-in.
On Jun 9, 2015 7:28 PM, "inspector morse" 
wrote:

> SQL Server has a feature called Indexed Views that are similar to
> materialized views.
>
> Basically, the Indexed View supports COUNT/SUM aggregate queries. You
> create a unique index on the Indexed View and SQL Server automatically
> keeps the COUNT/SUM up to date.
>
> Example:
> CREATE VIEW ForumTopicCounts
> AS
> SELECT  ForumId, COUNT_BIG(*) AS TopicsCount
> FROM Topics
> GROUP BY ForumId
>
> CREATE UNIQUE CLUSTERED INDEX idx ON ForumTopicCounts(ForumId);
>
> After doing that, if you add or delete a topic from the Topics table, SQL
> Server automatically keeps the count updated, and it's fast because of
> the unique index.
>
>
> Doing the same thing in PostgreSQL using materialized views is slow, and
> the developer has to manually issue a REFRESH MATERIALIZED VIEW command.
> The alternative is to write additional SQL to update count
> columns, which is unnecessary work.
>
>
> Do you know when PostgreSQL will implement such a feature? Counting is
> already slow in PostgreSQL, and adding a feature similar to SQL Server's
> would really help.
>


[GENERAL] Indexed views like SQL Server - NOT Materialized Views

2015-06-09 Thread inspector morse
SQL Server has a feature called Indexed Views that are similar to
materialized views.

Basically, the Indexed View supports COUNT/SUM aggregate queries. You
create a unique index on the Indexed View and SQL Server automatically
keeps the COUNT/SUM up to date.

Example:
CREATE VIEW ForumTopicCounts
AS
SELECT  ForumId, COUNT_BIG(*) AS TopicsCount
FROM Topics
GROUP BY ForumId

CREATE UNIQUE CLUSTERED INDEX idx ON ForumTopicCounts(ForumId);

After doing that, if you add or delete a topic from the Topics table, SQL
Server automatically keeps the count updated, and it's fast because of
the unique index.


Doing the same thing in PostgreSQL using materialized views is slow, and the
developer has to manually issue a REFRESH MATERIALIZED VIEW command. The
alternative is to write additional SQL to update count
columns, which is unnecessary work.


Do you know when PostgreSQL will implement such a feature? Counting is
already slow in PostgreSQL, and adding a feature similar to SQL Server's
would really help.
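
(For reference, a minimal sketch of the trigger-maintained counter
alternative mentioned above; the lower-case table and column names are
hypothetical, mirroring the SQL Server example, and this sketch ignores the
race where two sessions insert the first topic for a forum concurrently:)

CREATE TABLE forum_topic_counts (
  forumid integer PRIMARY KEY,
  topicscount bigint NOT NULL DEFAULT 0
);

CREATE FUNCTION maintain_topic_count() RETURNS trigger AS
$$
BEGIN
  IF TG_OP = 'INSERT' THEN
    -- bump the counter, creating the row on first use
    UPDATE forum_topic_counts SET topicscount = topicscount + 1
      WHERE forumid = NEW.forumid;
    IF NOT FOUND THEN
      INSERT INTO forum_topic_counts (forumid, topicscount)
        VALUES (NEW.forumid, 1);
    END IF;
    RETURN NEW;
  ELSE  -- TG_OP = 'DELETE'
    UPDATE forum_topic_counts SET topicscount = topicscount - 1
      WHERE forumid = OLD.forumid;
    RETURN OLD;
  END IF;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER topics_count_trigger
  AFTER INSERT OR DELETE ON topics
  FOR EACH ROW EXECUTE PROCEDURE maintain_topic_count();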


[GENERAL] The purpose of the core team

2015-06-09 Thread Bruce Momjian
There has been some confusion by old and new community members about the
purpose of the core team, and this lack of understanding has caused some
avoidable problems.  Therefore, the core team has written a core charter
and published it on our website:

http://www.postgresql.org/developer/core/

Hopefully this will be helpful to people.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [GENERAL] user constructed where clause

2015-06-09 Thread David G. Johnston
On Tue, Jun 9, 2015 at 4:48 AM, Yelai, Ramkumar IN BLR STS <
ramkumar.ye...@siemens.com> wrote:

>  Now, the requirement is that if the user provides filter information based
> on every column from the web UI, this filter will let the user construct the
> “where clause” and provide it to PostgreSQL.
> In a month this table exceeds millions of records. If I use the
> user-constructed query, then it could take a lot of time, as the user may not
> include an indexed column in the user-constructed query.
>

The first solution that comes to mind is to construct a document
containing the relevant information, index that, and then provide a single
search input field that is used to construct a text search query that you
apply against the indexed document.

In short, forget the fact that there are fields and just index and search
the content.

Add additional controls to the UI for just those fields that are indexed.
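
(A minimal sketch of this approach, assuming the "PCLogTable" definition
from the original post and picking a few of its text columns for
illustration:)

CREATE INDEX pclogtable_fts_idx ON "PCLogTable" USING gin (
  to_tsvector('english',
    coalesce("LOG_NOTE", '') || ' ' ||
    coalesce("LOG_STATUS", '') || ' ' ||
    coalesce("LOG_PORT", '')));

-- The WHERE expression must match the index expression for the GIN index
-- to be used:
SELECT *
FROM "PCLogTable"
WHERE to_tsvector('english',
        coalesce("LOG_NOTE", '') || ' ' ||
        coalesce("LOG_STATUS", '') || ' ' ||
        coalesce("LOG_PORT", ''))
      @@ plainto_tsquery('english', 'terms from the single search input');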

David J.


[GENERAL] user constructed where clause

2015-06-09 Thread Yelai, Ramkumar IN BLR STS
Hi All,

I have one requirement in my project and don't know how to achieve it.

My project receives log information from PCs on the network, and that
information is stored in the table below.

CREATE TABLE "PCLogTable"
(
  "LOG_ID" serial NOT NULL,
  "LOG_USER_ID" integer DEFAULT 0,
  "LOG_TYPE_ID" integer,
  "LOG_PC_ID" integer NOT NULL,
  "LOG_PORT" text,
  "LOG_NOTE" text,
  "LOG_ACK" boolean,
  "LOG_TIME" timestamp without time zone,
  "LOG_NON_PENDING_STATUS" text,
  "LOG_STATUS" text,
  "LOG_MONITORED_STATE" text,
  "LOG_RSE_RAISE_TIMESTAMP" text,
  "LOG_ADD_INFO" jsonb,
  CONSTRAINT "LOG_ID" PRIMARY KEY ("LOG_ID"),
  CONSTRAINT "LOG_TYPE_ID" FOREIGN KEY ("LOG_TYPE_ID")
  REFERENCES "LogTextTable" ("LOG_TYPE_ID") MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE
);

CREATE INDEX log_table_index
  ON "PCLogTable"
  USING btree
  ("LOG_TIME" DESC, "LOG_PC_ID");

At present, I have written a query to get the latest 5000 log records from
this table; it runs periodically and executes in 15 seconds.
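
(The query itself is not shown; a hypothetical reconstruction that the index
above would support might look like this:)

SELECT *
FROM "PCLogTable"
ORDER BY "LOG_TIME" DESC
LIMIT 5000;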

Now, the requirement is that if the user provides filter information based on
every column from the web UI, this filter will let the user construct the
"where clause" and provide it to PostgreSQL.
In a month this table exceeds millions of records. If I use the
user-constructed query, then it could take a lot of time, as the user may not
include an indexed column in the user-constructed query.

Also, they want to see all the records that match the user-constructed query.

With best regards,
Ramkumar Yelai

Siemens Technology and Services Private Limited
CT DC AA I HOUSE DEV GL4
84, Hosur Road
Bengaluru 560100, Indien
Tel.: +91 80 33136494
Fax: +91 80 33133389
Mobil: +91 9886182031
mailto:ramkumar.ye...@siemens.com
http://www.siemens.co.in/STS





Re: [GENERAL] Postgresql BDR Replication Setup Issue

2015-06-09 Thread Ravi Tammineni
Hi Ian,

Thank you so much for your help.

After changing hot_standby = on to hot_standby = off, everything is working
fine. We had copied all parameters from our current production environment.
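
(For context, a minimal sketch of the relevant postgresql.conf settings on a
BDR 0.9-era node; the values below are illustrative, not taken from this
setup:)

shared_preload_libraries = 'bdr'
wal_level = 'logical'
track_commit_timestamp = on   # requires the BDR-patched PostgreSQL 9.4
max_wal_senders = 10
max_replication_slots = 10
hot_standby = off             # the setting at issue in this thread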


Thanks
ravi

-Original Message-
From: Ian Barwick [mailto:i...@2ndquadrant.com] 
Sent: Monday, June 08, 2015 4:17 PM
To: Ravi Tammineni; tomas.von...@2ndquadrant.com; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Postgresql BDR Replication Setup Issue

Hi

On 15/06/09 2:31, Ravi Tammineni wrote:
> We are setting up 2-node PostgreSQL BDR replication at our place and I
> am getting the following error after the bdr_init_copy command.
(...)
> 
> Could not update security label: ERROR:  cannot execute DELETE in a 
> read-only transaction

(...)

> 
> == Data from Node2 log file
> 
> < 2015-06-08 10:08:45.957 PDT >LOG:  entering standby mode

It looks like you're trying to run BDR on a streaming replication standby.


Regards

Ian Barwick


-- 
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services






Re: [GENERAL] Inserting from multiple processes?

2015-06-09 Thread Dave Johansen
On Mon, Jun 8, 2015 at 10:15 AM, Dave Johansen 
wrote:

> I'm using Postgres 8.4 on RHEL 6 and I need to support inserting data into
> a table from multiple processes with there occasionally being duplicates
> from the different processes. Here's a simple example table:
> CREATE TABLE test (tutc TIMESTAMP WITHOUT TIME ZONE, id INTEGER, value
> INTEGER, PRIMARY KEY (tutc, id));
> If I do the following query from 2 processes, then it's fine:
> INSERT INTO test (tutc, id, value) SELECT '2015-01-01 01:02:03', 4, 5
> WHERE NOT EXISTS (SELECT 1 FROM test WHERE tutc='2015-01-01 01:02:03' AND
> id=4);
>
> But if I put the operation in a transaction, then the second process will
> block until the transaction of the first is committed (which is fine), but
> then the insert fails with a "duplicate key value violation". I'm guessing
> that this is because the transaction is making it so that the SELECT only
> sees the values from before the transaction of the second process began.
>
> Using an "upsert" type of function, like the one shown in the
> documentation ( see
> http://www.postgresql.org/docs/8.4/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE
> ) seems like it might work, but I don't need to support updating and was
> hoping to not have to use a custom function. So is there some way to catch
> the unique_violation exception without creating a function? Or some other
> solution to this?
>

For the sake of documentation, here's the function that I used to
accomplish this:
CREATE FUNCTION insert_test_no_dup(tutc_ TIMESTAMP WITHOUT TIME ZONE,
  id_ INTEGER, value_ INTEGER) RETURNS VOID AS
$$
BEGIN
  BEGIN
    INSERT INTO test(tutc, id, value) VALUES (tutc_, id_, value_);
    RETURN;
  EXCEPTION WHEN unique_violation THEN
    -- do nothing because the record already exists
  END;
END;
$$
LANGUAGE plpgsql;
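
(As an aside: PostgreSQL 9.5 and later can do this without a function, using
the ON CONFLICT clause; a minimal sketch, though not applicable to the 8.4
installation above:)

INSERT INTO test (tutc, id, value)
VALUES ('2015-01-01 01:02:03', 4, 5)
ON CONFLICT (tutc, id) DO NOTHING;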

Both using a rule and using a trigger had the same issue with transactions.
Here are the declarations, for documentation:
CREATE RULE ignore_duplicate_inserts AS
  ON INSERT TO test
  WHERE (EXISTS (SELECT 1 FROM test WHERE tutc=new.tutc AND id=new.id))
  DO INSTEAD NOTHING;

CREATE FUNCTION tf_insert_test_ignore_duplicates() RETURNS trigger
AS $$
DECLARE
  found BOOLEAN;
BEGIN
  SELECT 1 INTO found FROM test WHERE tutc=new.tutc AND id=new.id;
  IF found THEN
    RETURN NULL;
  ELSE
    RETURN new;
  END IF;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER insert_test_before BEFORE INSERT ON test
  FOR EACH ROW EXECUTE PROCEDURE tf_insert_test_ignore_duplicates();


Re: [GENERAL] BDR: DDL lock problem with function inside a schema

2015-06-09 Thread sym39

On 09/06/2015 02:29, Craig Ringer wrote:

>> Questions:
>> ---
>> - Is it a known bug? If yes, should I report it (and where)?
>
> Please open a report on github for this -
> https://github.com/2ndQuadrant/bdr/issues

=> https://github.com/2ndQuadrant/bdr/issues/87

>> - How to recover from such a DDL lock problem without recreating the
>>   database? In other words, what is the proper way to cleanly restart
>>   both nodes?
>
> Restarting both nodes should be sufficient, and I am surprised to see
> that is not the case here. This needs more investigation.

>> - Is there a well-known method to properly detach and attach nodes?
>
> I'm not sure I understand.
>
> If you mean join and remove nodes, join with bdr.bdr_group_join,
> remove with bdr.bdr_part_by_node_names.

I was trying to find a way to recover the situation. I thought that
detaching all nodes with bdr.bdr_part_by_node_names(), then reattaching
them with bdr.bdr_group_join(), could help, but I did not succeed in
doing it.

In fact, this question is orthogonal to / not related to the problem: I
just would like to detach/attach a node dynamically. Is that well
supported / planned to be supported?
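
(For reference, a minimal sketch of those two calls, assuming the BDR 0.9
function signatures and hypothetical node names and DSNs:)

-- remove a node from the group:
SELECT bdr.bdr_part_by_node_names(ARRAY['node2']);

-- re-join it; run on the node being joined:
SELECT bdr.bdr_group_join(
  local_node_name := 'node2',
  node_external_dsn := 'host=node2 dbname=mydb',  -- hypothetical DSN
  join_using_dsn := 'host=node1 dbname=mydb'      -- hypothetical DSN
);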


Thanks and Regards,
Sylvain