Hi all,
I've noticed that not all my functions are tracked by the pg_stat_user_functions view.
The interesting thing is that similar functions in a different database are tracked
correctly.
query:
SELECT p.* FROM pg_proc p
LEFT JOIN pg_stat_user_functions stat
ON (p.OID = stat.funcid)
INNER JOIN pg_language l
ON (l.oid =
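An aside not confirmed by the (truncated) thread, but worth checking first: function statistics are only collected according to the track_functions setting, and that would explain why similar functions are tracked in one database but not another cluster's. A quick sketch:

```sql
-- Function statistics are gathered per the track_functions GUC
-- (none | pl | all).  'pl' counts only procedural-language functions;
-- plain SQL functions need 'all', and SQL functions simple enough to
-- be inlined by the planner are never tracked at all.
SHOW track_functions;
SET track_functions = 'all';
```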
On 2013-01-16, Robert James srobertja...@gmail.com wrote:
Is there a recommended, high performance method to check for subdomains?
Something like:
- www.google.com is subdomain of google.com
- ilikegoogle.com is not subdomain of google.com
There are many ways to do this (lowercase and
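One common idiom (a sketch; not necessarily what the truncated reply went on to suggest): lowercase both sides, then a name is a subdomain of the parent if it equals the parent or ends with `'.' || parent`:

```sql
-- www.google.com is a subdomain of google.com: true
SELECT lower('www.google.com') = lower('google.com')
       OR lower('www.google.com') LIKE '%.' || lower('google.com');
-- ilikegoogle.com is not: false
SELECT lower('ilikegoogle.com') = lower('google.com')
       OR lower('ilikegoogle.com') LIKE '%.' || lower('google.com');
```

For high performance on big tables, a right-anchored LIKE cannot use an ordinary btree index; the usual trick is to store the domain reversed and use a left-anchored pattern over an index built with text_pattern_ops.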
On 2013-01-16, Marcel van Pinxteren marcel.van.pinxte...@gmail.com wrote:
From the Microsoft site I learned
http://msdn.microsoft.com/en-us/library/ms188046(v=sql.105).aspx
that they combine collation and
Tom Lane wrote:
b) If the length in characters of X is not equal to the length
in characters of Y, then the shorter string is effectively
replaced, for the purposes of comparison, with a copy of itself
that has been extended to the length of the longer string by
Thanks for the quick response. I would like to add a patch to Postgres to do a
simple check for one table, since this is straightforward. Unfortunately, I
am not too familiar with the Postgres source code. So my question is: what
are the source files that I need to check, and can you please
Desired behaviour:
1. If there is a row with 'ABC' (in a unique column) in the table, a row
with 'abc' should not be allowed
2. If I do SELECT * FROM aTable WHERE aColumn = 'ABC', I should see a row
with 'abc' as well (if there is one in the table)
This has been described in this mailing list a
You could look into running the DB on an OS that does support case
insensitive collation. It'll likely perform better too.
On 16 January 2013 20:40, Marcel van Pinxteren
marcel.van.pinxte...@gmail.com wrote:
From the Microsoft site I learned
Marcel van Pinxteren, 18.01.2013 14:13:
Desired behaviour:
1. If there is a row with 'ABC' (in a unique column) in the table, a row with
'abc' should not be allowed
That's an easy one:
create unique index on foo (lower(the_column));
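That index covers requirement 1; for requirement 2 the lookup has to be folded the same way so the index can be used. A sketch on the same hypothetical table:

```sql
-- Case-insensitive lookup that can use the functional index above:
SELECT * FROM foo WHERE lower(the_column) = lower('ABC');

-- Alternatively, the citext extension gives case-insensitive
-- comparison and uniqueness without rewriting every query:
-- CREATE EXTENSION citext;
-- ALTER TABLE foo ALTER COLUMN the_column TYPE citext;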
--
Sent via pgsql-general mailing list
Yes, my mistake, I meant to ask about the fifteen_min_stacked_view definition, and
the Postgres parameters from the postgresql.conf configuration file, at least those
modified from their default settings and related to resource consumption and query
tuning.
Regards,
Igor Neyman
-Original Message-
Hello,
I have been fighting a problem with an update rule on a view. I have a view
that combines two tables where the 'sub' table (scont) can have several rows
per row in the 'top' table (icont). The view combines these to show only one
record per row in the top table. To be able to
Albe Laurenz laurenz.a...@wien.gv.at writes:
I tested not only with string literals, but also comparing
table columns of the respective types.
I came up with the following table of semantics used for
comparisons:
| CHAR(n)=CHAR(n) | VARCHAR(n)=VARCHAR(n) | CHAR(n)=VARCHAR(n) |
On Jan 18, 2013, at 8:10 AM, Igor Neyman iney...@perceptron.com wrote:
Yes, my mistake, I meant to ask about the fifteen_min_stacked_view definition,
and the Postgres parameters from the postgresql.conf configuration file, at least
those modified from their default settings and related to resource
Leif Jensen l...@crysberg.dk writes:
I have been fighting a problem with an update rule on a view. I have a
view that combines two tables where the 'sub' table (scont) can have several
rows per row in the 'top' table (icont). The view combines these to show only
one record per row in the
Kirk,
Are you doing un-pivoting in most of your queries?
Did you try normalized design for fifteen_minute table?
Is there specific reason for de-normalization?
Regards,
Igor Neyman
-Original Message-
From: Kirk Wythers [mailto:kwyth...@umn.edu]
Sent: Friday, January 18, 2013 10:50 AM
On Jan 18, 2013, at 10:05 AM, Igor Neyman iney...@perceptron.com wrote:
Kirk,
Are you doing un-pivoting in most of your queries?
Did you try normalized design for fifteen_minute table?
Is there specific reason for de-normalization?
Regards,
Igor Neyman
Thanks Igor. The only reason I'm
On Wed, 16 Jan 2013 23:42:23 +0100
T. E. Lawrence t.e.lawre...@icloud.com wrote:
On 15.01.2013, at 17:32, Jeff Janes jeff.ja...@gmail.com wrote:
T.E., Fortunately in point releases from August 2012 (9.0.9, 9.1.5,
etc.), the default server log settings will log both the cancel and
the
Kirk,
Are you limited to pure SQL or procedural language (PgPlSQL) allowed?
If PgPlSQL is allowed, you could normalize fifteen_min table, break it into
several tables (one for a_dc, another for a_dif, another for a_targettemp, and
so on...) and use dynamic sql inside PlPgSQL function to join
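A minimal sketch of that idea (the function, table, and column names here are all invented for illustration):

```sql
CREATE OR REPLACE FUNCTION get_metric(metric text)
RETURNS TABLE (ts timestamp, val double precision) AS $$
BEGIN
  -- Build the per-metric table name at runtime; format('%I') quotes
  -- the identifier safely against injection.
  RETURN QUERY EXECUTE format(
    'SELECT ts, val FROM %I', 'fifteen_min_' || metric);
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM get_metric('a_dc');
```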
I'd like to understand better why manually using a temp table can
improve performance so much.
I had one complicated query that performed well. I replaced a table
in it with a reference to a view, which was really just the table with
an inner join, and performance worsened by 2000x. Literally.
I'd like to better understand TIMESTAMP WITH TIME ZONE.
My understanding is that, contrary to what the name sounds like, the
time zone is never stored. It simply stores a UTC timestamp,
identical to what TIMESTAMP WITHOUT TIME ZONE stores.
And then the only difference is that WITH TIME ZONE
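The difference shows up at input and output. A quick session sketch (rendered output depends on the session's TimeZone setting):

```sql
-- timestamptz stores an absolute instant (internally UTC-based);
-- input is interpreted in, and output rendered in, the session zone.
SET timezone = 'UTC';
SELECT '2013-01-18 12:00:00'::timestamptz;     -- 2013-01-18 12:00:00+00
SET timezone = 'America/New_York';
SELECT '2013-01-18 12:00:00+00'::timestamptz;  -- 2013-01-18 07:00:00-05
-- timestamp (without time zone) ignores any zone given on input:
SELECT '2013-01-18 12:00:00+00'::timestamp;    -- 2013-01-18 12:00:00
```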
Hello
2013/1/18 Robert James srobertja...@gmail.com:
I'd like to understand better why manually using a temp table can
improve performance so much.
one possible effect: the statistics can be different for a temp table.
Did you look at the EXPLAIN ANALYZE output?
Regards
Pavel Stehule
I had one complicated query
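The statistics point can be made concrete (a sketch with invented table names):

```sql
-- Materializing the intermediate result gives the planner real row
-- counts and value distributions, instead of estimates made through
-- the view's joined subquery:
CREATE TEMP TABLE tmp_result AS
  SELECT t.id, t.payload
  FROM some_table t
  JOIN other_table o ON o.id = t.id;
ANALYZE tmp_result;  -- without this, the temp table has no statistics
EXPLAIN ANALYZE
  SELECT * FROM big_table b
  JOIN tmp_result r ON r.id = b.ref_id;
```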
On 01/18/2013 09:31 AM, Robert James wrote:
I'd like to better understand TIMESTAMP WITH TIME ZONE.
My understanding is that, contrary to what the name sounds like, the
time zone is never stored. It simply stores a UTC timestamp,
identical to what TIMESTAMP WITHOUT TIME ZONE stores.
And then
On Fri, Jan 18, 2013 at 6:13 AM, Marcel van Pinxteren
marcel.van.pinxte...@gmail.com wrote:
Desired behaviour:
1. If there is a row with 'ABC' (in a unique column) in the table, a row
with 'abc' should not be allowed
2. If I do SELECT * FROM aTable WHERE aColumn = 'ABC', I should see a row
On Fri, Jan 18, 2013 at 9:29 AM, Robert James srobertja...@gmail.com wrote:
I'd like to understand better why manually using a temp table can
improve performance so much.
I had one complicated query that performed well. I replaced a table
in it with a reference to a view, which was really
Jeff Janes jeff.ja...@gmail.com writes:
On Fri, Jan 18, 2013 at 9:29 AM, Robert James srobertja...@gmail.com wrote:
In other words: Since my query is 100% identical algebraically to not
using a temp table, why is it so much faster? Why can't the planner
work in the exact same order?
Unless you
On 1/18/13, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Janes jeff.ja...@gmail.com writes:
On Fri, Jan 18, 2013 at 9:29 AM, Robert James srobertja...@gmail.com
wrote:
In other words: Since my query is 100% identical algebraically to not
using a temp table, why is it so much faster? Why can't the
Robert James srobertja...@gmail.com writes:
On 1/18/13, Tom Lane t...@sss.pgh.pa.us wrote:
Whether that's the explanation is of course impossible to know from
the given (lack of) information.
What information would be helpful to post?
Both forms of the query, EXPLAIN ANALYZE output for both,
On 01/18/2013 09:31 AM, Robert James wrote:
I'd like to better understand TIMESTAMP WITH TIME ZONE.
My understanding is that, contrary to what the name sounds like, the
time zone is never stored. It simply stores a UTC timestamp,
identical to what TIMESTAMP WITHOUT TIME ZONE stores.
And then
Robert James wrote:
What information would be helpful to post?
That question comes up so often we have a page to help answer it. :-)
http://wiki.postgresql.org/wiki/SlowQueryQuestions
-Kevin
I have created a couple of PostgreSQL sample databases at
https://github.com/vrajmohan/pgsql-sample-data:
1. The supplier-part-project database from C J Date's book.
2. The Employee sample database from the MySQL project. This db
occupies 300MB and has ~4 million total rows.
Please let me
My Web searching foo fails me, and I don't see the answer in the postgres
docs so I hope someone here can point me in the proper direction.
There is a table for bacteriological data that contains two columns for
the date and time the water was collected and another two columns for the
date
I am sure that I am using separate threads with separate connection objects, and
libpq is compiled to be thread-safe.
I get an access violation both with SSL and without it (without SSL it seems to be
more stable; however, after several hundred connects/disconnects it fails).
J.
On 01/18/2013 03:31 PM, Rich Shepard wrote:
My Web searching foo fails me, and I don't see the answer in the
postgres
docs so I hope someone here can point me in the proper direction.
There is a table for bacteriological data that contains two columns for
the date and time the water was
I finally store the points in a structure with arrays, and pack it once a day.
create type t_point as (
x smallint,
y smallint,
hits integer
);
CREATE TABLE clickmap (
page_id integer,
date date,
points t_point[]
);
This method saves 6x more space than the previous one.
Thanks for all!
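For reference, appending a hit to such an array column might look like this (a sketch; not shown in the original mail):

```sql
-- Array concatenation appends one t_point composite to the packed
-- array for that page/day:
UPDATE clickmap
SET points = points || ROW(10, 20, 1)::t_point
WHERE page_id = 1 AND date = current_date;
```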
When I run a query, I only want the affected rows to be logged to the log
file. What can I do? Is there a parameter in postgresql.conf to do that?
Thanks in advance
Hi all,
I have two functions, one that does a calculation and another that calls it.
I call the calculation function from the main function in a for loop. This
loop may iterate more than 5000 times, but the updates made in the
calculation (sub) function happen only after the full execution of the main
function. I
On Fri, 18 Jan 2013, Adrian Klaver wrote:
How are they stored, as date and time type, strings, other?
Adrian,
ISO date and time.
A sample of the data would help also.
Example: 2012-10-29 | 10:19 | 2012-10-30 | 09:40
Rich
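If the goal is to treat each date/time pair as one moment, date + time addition does it directly (the column names below are assumed, not from the mail):

```sql
-- A date plus a time yields a timestamp:
SELECT DATE '2012-10-29' + TIME '10:19';   -- 2012-10-29 10:19:00
-- Applied to the (assumed) collection columns:
-- SELECT collect_date + collect_time AS collected_at FROM samples;
```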
On 01/18/2013 04:26 PM, Rich Shepard wrote:
On Fri, 18 Jan 2013, Adrian Klaver wrote:
How are they stored, as date and time type, strings, other?
Adrian,
ISO date and time.
A sample of the data would help also.
Example: 2012-10-29 | 10:19 | 2012-10-30 | 09:40
test= SELECT
On Jan 18, 2013, at 4:26 PM, Rich Shepard rshep...@appl-ecosys.com wrote:
On Fri, 18 Jan 2013, Adrian Klaver wrote:
How are they stored, as date and time type, strings, other?
Adrian,
ISO date and time.
A sample of the data would help also.
Example: 2012-10-29 | 10:19 |
On 2013-01-16, Meta Seller Dev/Admin metasel...@gmail.com wrote:
Hi! (I'm Chris Angelico posting from a work account - usually I'm here
under the name ros...@gmail.com.)
I've run into a bit of an awkwardness in PostgreSQL setup and am
hoping for some expert advice.
Several of the tables I
On 01/18/2013 04:26 PM, Rich Shepard wrote:
On Fri, 18 Jan 2013, Adrian Klaver wrote:
How are they stored, as date and time type, strings, other?
Adrian,
ISO date and time.
A sample of the data would help also.
Example: 2012-10-29 | 10:19 | 2012-10-30 | 09:40
Realized this
On 2013-01-18, Leif Jensen l...@crysberg.dk wrote:
I have been fighting a problem with an update rule on a view. I
have a view that combines two tables where the 'sub' table (scont) can
have several rows per row in the 'top' table (icont). The view
combines these to show only one record
On Wed, Jan 16, 2013 at 06:47:12PM -0800, Charles Porter wrote:
Bruce -
Yep. That's it. Thank you
Maximum data alignment: 4
Maximum data alignment: 8
There are several differences, including TOAST chunk 2000 vs 1996.
I take it that this means that I
Charles Porter charles.por...@gmail.com writes:
Yep. That's it. Thank you
Maximum data alignment: 4
Maximum data alignment: 8
There are several differences, including TOAST chunk 2000 vs 1996.
I take it that this means that I cannot use pg_upgrade.
Well, not if
On Thu, Jan 17, 2013 at 9:18 AM, classical_89 luongnx...@gmail.com wrote:
When I run a query, I only want the affected rows to be logged to the log
file. What can I do? Is there a parameter in postgresql.conf to do that?
Thanks in advance
AFAIK, there is no way to log only the affected
On Sat, Jan 19, 2013 at 12:53 PM, Raghavendra
raghavendra@enterprisedb.com wrote:
On Thu, Jan 17, 2013 at 9:18 AM, classical_89 luongnx...@gmail.comwrote:
When I run a query, I only want the affected rows to be logged to the log
file. What can I do? Is there a parameter in