olicy every time.
Thanks,
Joe
On 10/17/2017 03:06 PM, Tom Lane wrote:
Tomas Vondra writes:
On 10/17/2017 10:44 PM, Joe Carlson wrote:
What I was wondering is what is the performance differences between a
row level security implementation:
...
and an implementation where I add on the constr
re be excessive overhead from the CREATE/DROP POLICY
statements?
Thanks,
Joe
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
the patch originally I saw significant
improvements, e.g. 8x in early versions. The thread is here:
https://www.postgresql.org/message-id/flat/44C4451A.4010906%40joeconway.com#44c4451a.4010...@joeconway.com
Joe
--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development
My apologies, I was in the wrong email/forum; please disregard my email!
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Joe Proietti
Sent: Friday, September 30, 2016 8:03 AM
To: Jake Nielsen ; Tom Lane
Cc: pgsql-performance
of outdated statistics.
I have done an ANALYZE on the table but saw no changes.
Thanks,
Joe
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Jake Nielsen
Sent: Wednesday, September 28, 2016 2:11 PM
To: Tom Lane
Cc: pgsql-performance@postgresql.org
S
16 at 2:02 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Fri, Feb 26, 2016 at 12:43 PM, joe meiring
> wrote:
>
>> Also available on S.O.:
>>
>>
>> http://stackoverflow.com/questions/35658238/postgres-odd-behavior-with-indices
>>
Also available on S.O.:
http://stackoverflow.com/questions/35658238/postgres-odd-behavior-with-indices
I've got a datavalue table with ~200M rows or so, with indices on both
site_id and parameter_id. I need to execute queries like "return all sites
with data" and "return all parameters with data"
You're right, brain fart. Nevermind! :)
On Sat, Nov 7, 2015 at 4:00 PM, Tom Lane wrote:
> Joe Van Dyk writes:
> > I noticed that querying for
> >product_attributes @> '{"upsell":["true"]}'
> > is much slower than querying for
I noticed that querying for
product_attributes @> '{"upsell":["true"]}'
is much slower than querying for
product_attributes @> '{"upsell": 1}'
Is that expected? I have a gin index on product_attributes. I'm using 9.4.1.
explain analyze
select count(*) from products where product_attributes
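A minimal sketch of the comparison being described (the table name is from the post, but the index definition is an assumption on my part; `jsonb_path_ops` is the GIN operator class often recommended for `@>` containment queries):

```sql
-- Hypothetical reproduction of the jsonb containment comparison (9.4+).
CREATE TABLE products (
    id serial PRIMARY KEY,
    product_attributes jsonb
);

-- The default GIN opclass indexes keys and values separately;
-- jsonb_path_ops is typically faster for @> containment.
CREATE INDEX products_attrs_idx
    ON products USING gin (product_attributes jsonb_path_ops);

-- The two queries being compared:
EXPLAIN ANALYZE
SELECT count(*) FROM products
WHERE product_attributes @> '{"upsell": ["true"]}';

EXPLAIN ANALYZE
SELECT count(*) FROM products
WHERE product_attributes @> '{"upsell": 1}';
```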
aggregate
plans, but I suspect there is a lot to be gained there as well.
HTH,
Joe
- --
Joe Conway
On Sat, Jan 24, 2015 at 11:14 PM, Pavel Stehule
wrote:
>
>
> 2015-01-25 7:38 GMT+01:00 Joe Van Dyk :
>
>>
>>
>> On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule
>> wrote:
>>
>>> Hi
>>>
>>> this plan looks well
>>>
On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule
wrote:
> Hi
>
> this plan looks well
>
> Regards
>
> Pavel
>
Here's one that doesn't look as good: http://explain.depesz.com/s/SgT
Joe
>
> 2015-01-25 6:45 GMT+01:00 Joe Van Dyk :
>
>> Oops, di
Oops, didn't run vacuum analyze after deleting the events. Here is another
'explain analyze': http://explain.depesz.com/s/AviN
On Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk wrote:
> On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk wrote:
>
>> I have an events table
On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk wrote:
> I have an events table that records page views and purchases (type =
> 'viewed' or type='purchased'). I have a query that figures out "people who
> bought/viewed this also bought/viewed that".
>
>
I have an events table that records page views and purchases (type =
'viewed' or type='purchased'). I have a query that figures out "people who
bought/viewed this also bought/viewed that".
It worked fine, taking about 0.1 seconds to complete, until a few hours ago
when it started taking hours to c
On Mon, Dec 16, 2013 at 4:14 PM, Tom Lane wrote:
> Joe Van Dyk writes:
> > Hm, setting set join_collapse_limit = 9 seemed to fix the problem. Is
> that
> > my best/only option?
>
> Yup, that's what I was just about to suggest. You might want to use
> 10 or 12 in
Hm, setting set join_collapse_limit = 9 seemed to fix the problem. Is that
my best/only option?
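For reference, the setting can be changed per session or even per statement, so it need not be raised globally (a sketch, not the poster's exact commands):

```sql
-- Raise the limit for this session only; the default is 8, so a
-- nine-table join otherwise gets split at the collapse boundary.
SET join_collapse_limit = 9;

-- Or scope it to one transaction with SET LOCAL:
BEGIN;
SET LOCAL join_collapse_limit = 12;
-- ... run the large join here ...
COMMIT;
```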
On Mon, Dec 16, 2013 at 1:52 PM, Joe Van Dyk wrote:
> The actual query selects columns from each of those tables.
>
> If I remove the join on order_shipping_addresses, it's very fast.
The actual query selects columns from each of those tables.
If I remove the join on order_shipping_addresses, it's very fast. Likewise,
if I remove the join on skus, base_skus, or products, it's also very fast.
I'm pretty sure I have all the necessary indexes.
The below is also at
https://gist.g
manage so many databases, but I
would test it carefully before committing.
Joe
- --
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting, & 24x7 Support
On Fri, Apr 5, 2013 at 6:54 PM, Greg Williamson wrote:
> Joe --
>
> > From: Joe Van Dyk
> >To: pgsql-performance@postgresql.org
> >Sent: Friday, April 5, 2013 6:42 PM
> >Subject: Re: [PERFORM] slow joins?
> >
chased_items pi (cost=0.00..7.60 rows=2 width=4) (actual
time=0.005..0.005 rows=1 loops=8178)
Index Cond: (line_item_id = li.id)
Heap Fetches: 144
Total runtime: 103.442 ms
(11 rows)
On Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk wrote:
> On 9.2.4, running two
(
https://gist.github.com/joevandyk/df0df703f3fda6d14ae1/raw/c15cae813913b7f8c35b24b467a0c732c0100d79/gistfile1.txt
shows a non-wrapped version of the queries and plan)
On Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk wrote:
> On 9.2.4, running two identical queries except for the value of a col
On 9.2.4, running two identical queries except for the value of a column in
the WHERE clause. Postgres is picking very different query plans, the first
is much slower than the second.
Any ideas on how I can speed this up? I have btree indexes for all the
columns used in the query.
explain analyz
On 11/21/2012 09:28 AM, Craig James wrote:
>
>
> On Wed, Nov 21, 2012 at 9:25 AM, Joe Conway <m...@joeconway.com> wrote:
>
> On 11/21/2012 08:05 AM, Heikki Linnakangas wrote:
> > Rather than telling the planner what to do or not to do, I'd m
after sprinkling your
SQL with hints, you could easily periodically (e.g. after a Postgres
upgrade) test what would happen if the hints were removed.
Joe
--
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting, & 24x
I was always under the impression that pg_dump and pg_dumpall cause all data to
be read in to the buffers and then out, (of course squeezing out whatever may
be active). That is the big advantage to using PITR backups and using a tar or
cpio method of backing up active containers and shipping o
I have used this in the past ... run this against the database that you want to
inspect.
SELECT
current_database(), schemaname, tablename, /*reltuples::bigint,
relpages::bigint, otta,*/
ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS
tbloat,
CASE WHEN relpages <
Thanks for fixing the docs, but if that's the case, I shouldn't be
seeing the behavior that I'm seeing.
Should I flesh out this test case a little better and file a bug?
Thanks,
Joe
On Tue, Sep 21, 2010 at 4:44 PM, Tom Lane wrote:
> Joe Miller writes:
>> I was l
The biggest single problem with "select count(*)" is that it is
seriously overused. People use that idiom to establish existence, which
usually leads to a performance disaster in the application using it,
unless the table has no more than few hundred records. SQL language, of
which PostgreSQL offe
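The existence-check anti-pattern being criticized, and its usual fix, can be sketched like this (table and column names are hypothetical):

```sql
-- Anti-pattern: aggregates potentially many rows just to test existence,
-- with the application then checking count > 0.
SELECT count(*) FROM orders WHERE customer_id = 42;

-- Better: EXISTS can stop at the first matching row.
SELECT EXISTS (
    SELECT 1 FROM orders WHERE customer_id = 42
);
```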
less clear as concurrency was increased.
Joe
--
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting, & 24x7 Support
On Mon, Sep 20, 2010 at 6:28 PM, Kevin Grittner
wrote:
> Joe Miller wrote:
>
>> I can set up a cron job to run the ANALYZE manually, but it seems
>> like the autovacuum daemon should be smart enough to figure this
>> out on its own. Deletes can have as big an impact on
compared to the total number of tuples inserted or updated since
the last ANALYZE.
I guess that should be updated to read "inserted, updated or deleted".
On Mon, Sep 20, 2010 at 10:12 PM, Tom Lane wrote:
> Joe Miller writes:
>> The autovacuum daemon currently uses the num
flect reality. See
example below.
I can set up a cron job to run the ANALYZE manually, but it seems like
the autovacuum daemon should be smart enough to figure this out on its
own. Deletes can have as big an impact on the stats as inserts and
updates.
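Besides a cron job, one workaround is to lower the per-table analyze thresholds so the daemon fires after a smaller fraction of changes. This uses the storage-parameter syntax available since 8.4; the table name is hypothetical:

```sql
-- Ask autovacuum to analyze this table after roughly 1% of its rows
-- change, instead of the default 10%.
ALTER TABLE my_table SET (
    autovacuum_analyze_scale_factor = 0.01,
    autovacuum_analyze_threshold    = 100
);
```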
Joe Miller
---
testdb
ere id in (42, 4242, 424242);
explain analyze select * from testbit where status & 512 = 512;
QUERY PLAN
--
Index Scan using idx32 on testbit (cost=0.00..4712.62 rows=5000
width=22) (actual time=0.080..0.085 rows=3
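The plan above ("Index Scan using idx32") suggests one partial index per flag bit. A sketch of how such an index could be declared, with names assumed from the snippet:

```sql
-- Partial index whose predicate matches the bitmask test exactly,
-- so "WHERE status & 512 = 512" can use an index scan.
CREATE INDEX idx512 ON testbit (id) WHERE (status & 512) = 512;

EXPLAIN ANALYZE
SELECT * FROM testbit WHERE status & 512 = 512;
```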
there are also some good examples of array handling in PL/R, e.g.
pg_array_get_r() in pg_conversion.c
HTH,
Joe
--
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting, & Support
On 06/10/2010 01:21 PM, Anne Rosset wrote:
>>
> I tried that and it didn't make any difference. Same query plan.
A little experimentation suggests this might work:
create index item_rank_project on item_rank(project_id, rank) where
pf_id IS NULL;
Joe
On 06/10/2010 01:10 PM, Joe Conway wrote:
> try:
>
> create index item_rank_null_idx on item_rank(pf_id)
> where rank IS NOT NULL AND pf_id IS NULL;
oops -- that probably should be:
create index item_rank_null_idx on item_rank(project_id)
where rank IS NOT NULL AND pf_id I
any
> improvements: (seems that the index is not used)
> Filter: ((rank IS NOT NULL) AND (pf_id IS NULL) AND
> ((project_id)::text = 'proj2783'::text))
> Total runtime: 11.988 ms
> (6 rows)
>
> Time: 13.654 ms
try:
create index item_rank_
]
Sent: Monday, March 01, 2010 6:51 AM
To: Plugge, Joe R.
Cc: Scott Marlowe; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Autovacuum Tuning advice
storing all fields as varchar surely doesn't make:
- indices small,
- the thing fly,
- tables small.
...
---++-+---
public | log_events_y2010m02| 356 GB | 610 GB
-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com]
Sent: Monday, March 01, 2010 12:58 AM
To: Plugge, Joe R.
Cc: pgsql-performanc
I have a very busy system that takes about 9 million inserts per day and each
record gets updated at least once after the insert (all for the one same
table), there are other tables that get hit but not as severely. As suspected
I am having a problem with table bloat. Any advice on how to be m
your partitioning scheme
4) create one index as (ts, a)
5) use dynamically generated SQL and table names in the application
code to create (conditionally) and load the tables
But of course test both this and your proposed method and compare ;-)
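Steps 3-5 above might look roughly like this for a month-partitioned scheme (all names are hypothetical; this is inheritance-based partitioning, the mechanism available at the time):

```sql
-- Step 3: one child table per month, with a CHECK constraint so the
-- planner can exclude non-matching partitions (constraint_exclusion).
CREATE TABLE events_y2010m02 (
    CHECK (ts >= '2010-02-01' AND ts < '2010-03-01')
) INHERITS (events);

-- Step 4: one composite index per partition.
CREATE INDEX events_y2010m02_ts_a_idx ON events_y2010m02 (ts, a);
```

Step 5 would then generate the `CREATE TABLE`/`INSERT` statements dynamically in the application, substituting the table name for each period.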
Also you might consider PL/R for some of your analysis
On Mon, Oct 19, 2009 at 6:58 AM, Joe Uhl wrote:
I have a similar, recent thread titled Partitioned Tables and ORDER BY with
a decent break down. I think I am hitting the same issue Michal is.
Essentially doing a SELECT against the parent with appropriate constraint
columns in the WHERE clause
On Mon, Oct 12, 2009 at 10:14 AM, Michal Szymanski wrote:
We have performance problem with query on partitioned table when query
use order by and we want to use first/last rows from result set.
More detail description:
We have big table where each row is one telephone call (CDR).
Definitni
This seems like a pretty major weakness in PostgreSQL partitioning. I
have essentially settled on not being able to do queries against the
parent table when I want to order the results. Going to have to use a
Hibernate interceptor or something similar to rewrite the statements so
they hit spe
itions are setup correctly, this query also has
excellent performance:
select * from people where list_id = 'the_unique_list_id' and first_name
= 'JOE';
Here is the explain analyze for that:
Result (cost=0.00..963.76 rows=482 width=37739) (actual
time=6.031..25.394 rows=2319
rst). I've run dozens of
distributions and this works well for us (a startup with nontrivial
Linux experience). I imagine at a larger company it definitely would
not be an option.
Joe Uhl
n.r-project.org/web/views/HighPerformanceComputing.html
Joe
On Mar 20, 2009, at 4:58 PM, Scott Marlowe wrote:
On Fri, Mar 20, 2009 at 2:49 PM, Joe Uhl wrote:
On Mar 20, 2009, at 4:29 PM, Scott Marlowe wrote:
What does the cs entry on vmstat say at this time? If your cs is
skyrocketing then you're getting a context switch storm
On Mar 20, 2009, at 4:29 PM, Scott Marlowe wrote:
On Fri, Mar 20, 2009 at 2:26 PM, Joe Uhl wrote:
On Mar 17, 2009, at 12:19 AM, Greg Smith wrote:
On Tue, 17 Mar 2009, Gregory Stark wrote:
Hm, well the tests I ran for posix_fadvise were actually on a
Perc5 --
though
who knows if it was
On Mar 17, 2009, at 12:19 AM, Greg Smith wrote:
On Tue, 17 Mar 2009, Gregory Stark wrote:
Hm, well the tests I ran for posix_fadvise were actually on a Perc5
-- though
who knows if it was the same under the hood -- and I saw better
performance
than this. I saw about 4MB/s for a single drive
ly, Areca and Promise are good, Adaptec good, depending on
model, and the ones that Dell ships with their servers haven't had good
reviews/reports.
On 03/16/2009 01:10 PM, Joe Uhl wrote:
Here is vmstat 1 30. We are under peak load right now so I can
gather
information from the real deal :)
back to the
list on first try.
On Mar 16, 2009, at 3:52 PM, Alan Hodgson wrote:
On Monday 16 March 2009, Joe Uhl wrote:
Right now (not under peak load) this server is running at 68% CPU
utilization and its SATA raid 10 is doing about 2MB/s writes and
11MB/s reads. When I run dd I can hi
greatly appreciated.
Joe Uhl
on their own.
If the second one doesn't change often, can you pre-calculate it,
perhaps once a day?
Joe
ion I learned was that
adding the oids in the table adds a significant amount of space to the data
AND the index.
As you may gather from this we are relatively new on Postgres.
Any suggestions you can give me would be most helpful.
Cheers,
Joe
On Mon, Mar 10, 2008 at 11:16 AM, Gregory Stark &l
Magnus Hagander wrote:
> Ron St-Pierre wrote:
>
>> Joe Uhl wrote:
>>
>>> I realize there are people who discourage looking at Dell, but i've been
>>> very happy with a larger ball of equipment we ordered recently from
>>> them. Our databas
brutalized indexes to the SAS disks very soon.
If you do use Dell, get connected with a small business account manager
for better prices and more attention.
Joe
Ketema Harris wrote:
> I am trying to build a very robust DB server that will support 1000+
> concurrent users (already have se
total in the past 2 years.
Just my personal experience, i'd be happy to pass along the account
manager's information if anyone is interested.
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
e from someone but I
knew this list would provide some excellent ideas and feedback to get us
started.
Joe Uhl
[EMAIL PROTECTED]
On Thu, 9 Aug 2007 16:02:49 -0500, "Scott Marlowe"
<[EMAIL PROTECTED]> said:
> On 8/9/07, Joe Uhl <[EMAIL PROTECTED]> wrote:
> > We have
configured, but want to get some hardware ballparks in order to get
quotes and potentially request a trial unit.
Any thoughts or recommendations? We are running openSUSE 10.2 with
kernel 2.6.18.2-34.
Regards,
Joe Uhl
[EMAIL PROTECTED]
About six months ago, our normally fast postgres server started
having performance issues. Queries that should have been instant were
taking up to 20 seconds to complete (like selects on the primary key
of a table). Running the same query 4 times in a row would yield
dramatically different
Is there a reason you are not using PostGIS? The R-tree indexes are
designed for exactly this type of query and should be able to do it very
quickly.
Hope that helps,
Joe
> I have this table:
>
> CREATE TABLE test_zip_assoc (
> id serial NOT NULL,
> f_id integer DEF
ize is changing dramatically. I have attached an
example script.
Hope that helps,
Joe
#! /usr/bin/python
import psycopg
import sys

def fixName(name):
    return name[:19]

if len(sys.argv) > 1 and sys.argv[1] ==
of this list to
help make decisions for the new machine(s), was just very interested in
hearing feedback on software vs. hardware raid.
We will likely be using the 2.6.18 kernel.
Thanks for everyone's input,
Joe
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
input. As an additional question,
does anyone have any strong recommendations for vendors that offer both
consulting/training and support? We are currently speaking with Command
Prompt, EnterpriseDB, and Greenplum but I am certainly open to hearing
any other recommendations.
Thanks,
Joe
Yes it does:
SET EXPLAIN ON;
It writes the file to sqexplain.out
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Tuesday, January 09, 2007 9:13 AM
To: Gregory S. Williamson
Cc: [EMAIL PROTECTED]; pgsql-performance@postgresql.org
Subject:
ast night on PATCHES. Something like "insert
into abc (123); insert into abc (234); ..." actually seems to work
pretty well as long as you don't drive the machine into swapping. If
you're doing a very large number of INSERTs, break it up into bite-siz
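The bite-sized batching being described can be sketched either as several statements per transaction or, equivalently, a multi-row VALUES list (table name taken from the quoted example; exact column list is an assumption):

```sql
-- Batch many inserts per transaction instead of one per commit,
-- or one enormous statement:
BEGIN;
INSERT INTO abc VALUES (123);
INSERT INTO abc VALUES (234);
-- ... a few thousand more, then:
COMMIT;

-- Or a multi-row VALUES list: one statement, many rows.
INSERT INTO abc VALUES (123), (234), (345);
```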
Gabriele Turchi wrote:
On Sat, 15 Jul 2006 at 13:04 -0700, Joe Conway wrote:
Why not just periodically (once an hour?) run "ANALYZE registrazioni;"
during the day. This will only update the statistics, and should be very
low impact.
This is my "solution" t
n "ANALYZE registrazioni;"
during the day. This will only update the statistics, and should be very
low impact.
HTH,
Joe
great!
Thanks Markus and Tom!
On Jun 30, 2006, at 10:29 AM, Markus Schaber wrote:
Hi, Joe,
Joe Lester wrote:
Aggregate (cost=22695.28..22695.28 rows=1 width=0) (actual
time=2205.688..2205.724 rows=1 loops=1)
-> Seq Scan on purchase_order_items (cost=0.00..21978.08
rows=286882
wi
I have an index question. My table has 800K rows and I am doing a basic query on an indexed integer field which takes over 2 seconds to complete because it's ignoring the index for some reason. Any ideas as to why it's ignoring the index? I'm using postgres 8.0.2.
SELECT count(*) FROM purchase_order_i
have, but we
use a dedicated gigabit interface to the NetApp, with jumbo (9K) frames,
and an 8K NFS blocksize. We use this for both Oracle and Postgres when
the database resides on NetApp.
Joe
on already in a nearby reply -- see
preload_libraries on this page:
http://www.postgresql.org/docs/8.1/interactive/runtime-config-resource.html
Joe
having the database do aggregation and
sorting as you mentioned in your other email).
FWIW, I usually use timestamptz for both created and updated fields.
IIRC 'created' ended up as a DATE because MySQL 4 has a restriction about a
single TIMESTAMP colu
Filter: ((topic_id2 = 1252) OR (topic_id1 = 1252))
-> Seq Scan on topic t (cost=0.00..30.94 rows=494 width=216) (never
executed)
Total runtime: 0.000 ms
(13 rows)
The overall execution time for the Economists page for PostgreSQL is within 4%
of the MySQL time, so for the time being
rformance of SELECTs).
Yes indeed. When I added the REFERENCES to the schema and reran the conversion
scripts, aside from having to reorder the table creation and loading (they used
to be in alphabetical order), I also found a few referential integrity errors in
t
tests were repeated
five times and the quoted results are averages). Second, is that PostgreSQL's
performance appears to be much more consistent in certain queries. For example,
the query that retrieves the list of subtopics (the names and description of
economists), took 17 msec in P
icture in a local
Windows client/server environment?
Joe
ted
internally with PHP, at the heading level, before display.
Maybe there is some way to merge all the queries (some already fairly complex)
that fetch the data for the entries box but I believe it would be a monstrosity
with over 100 lines of SQL.
Thanks,
Joe
in, I'm using PostgreSQL 8.0.3, Apache 1.3.28, PHP 4.3.4, MySQL 4.0.16 and
I'm comparing both databases on XP (on a Pentium 4, 1.6 GHz, 256 MB RAM).
Thanks for any feedback.
Joe
itional tables that perhaps could be merged into the entry table (and that
would reduce the number of queries) but I do not want to make major changes to
the schema (and the app) for the PostgreSQL conversion.
Joe
ky
behavior on almost every page: the upper 1/2 or 2/3 of the page is displayed
first and you can see a blank bottom (or you can see a half-filled completion
bar). With MySQL each page is generally displayed in one swoop.
Joe
I'm a relative newcomer to PostgreSQL (but
not to relational databases), so I'm not sure if this belongs in the novice or
general lists.
Joe
Aditya wrote:
On Mon, Apr 11, 2005 at 10:59:51AM -0700, Joe Conway wrote:
Any particular reason? Our NetApp technical rep advised nfs over iSCSI,
IIRC because of performance.
I would mount the Netapp volume(s) as a block level device on my server
using
iSCSI (vs. a file-based device like NFS) so
ly opt for iSCSI over GigE with
a NetApp.
Any particular reason? Our NetApp technical rep advised nfs over iSCSI,
IIRC because of performance.
Joe
mydatabase < /path/to/contrib/scripts/tablefunc.sql
I have no idea where that would be on Mandrake, but you could probably do:
locate tablefunc.sql
On Fedora Core 1 I find it here:
/usr/share/pgsql/contrib/tablefunc.sql
Also find and read README.tablefunc.
HTH,
Joe
nion all select ''C'' union all select ''D'''
) as (product_id int, a int, c int, d int);
product_id | a | c | d
------------+------+------+------
906 | 3000 | 3000 | 1935
907 | 1500 | 1500 | 4575
924 | 6000 | 1575 |
mply don't
have, or work very poorly.
I never said I had a "bad experience" with Oracle. I pointed out the
gotchas. We have several large Oracle boxes running, several MSSQL, and
several Postgres -- they all have their strengths and weaknesses.
Nuff said -- this thread is way off t
hyperthreaded Intel CPU
count as 1 or 2 CPUs from a licensing standpoint? We were eventually
told 1, but that the decision was "subject to change in the future".
Joe
chine too.
The $5K edition is just there to get you hooked ;-) By the time you add
up what you really want/need, figure you'll spend a couple of orders of
magnitude higher, and then > 20% per year for ongoing
maintenance/upgrades/support.
Joe
://www.postgresql.org/docs/current/static/libpq-async.html
HTH,
Joe
s with IBM hardware, and found their sales
and support to be responsive.
Joe
width=16) (actual time=0.008..0.009 rows=1 loops=1)
Index Cond: (f2 = '2004-02-15'::date)
-> Index Scan using foo_2004_03_idx2 on foo_2004_03 foo
(cost=0.00..17.07 rows=5 width=16) (actual time=0.029..0.029 rows=0 loops=1)
Index Cond: (f2 = &
Christopher Browne wrote:
In the last exciting episode, [EMAIL PROTECTED] (Joe Conway) wrote:
That's exactly what we're doing, but using inherited tables instead of
a union view. With inheritance, there is no need to rebuild the view
each time a table is added or removed. Basical
hey have none it is very quick. In a
real life example I got the following results just this afternoon:
- aggregate row count = 471,849,665
- total number inherited tables = 216
(many are future dated and therefore contain no data)
- select one month's worth of data f
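A sketch of the month-select against inherited tables, with names borrowed from the quoted plan (`foo`, `f2`); with a CHECK constraint on each child and constraint exclusion enabled, the planner scans only the relevant children:

```sql
SET constraint_exclusion = on;

-- Despite hundreds of child tables, only the children whose CHECK
-- range overlaps the predicate are scanned:
EXPLAIN SELECT *
FROM foo
WHERE f2 >= '2004-02-01'::date
  AND f2 <  '2004-03-01'::date;
```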
Simon Riggs wrote:
Joe,
Your application is very interesting. I've just read your OSCON paper. I'd
like to talk more about that. Very similar to Kalido.
...but back to partitioning momentarily: Does the performance gain come from
partition elimination of the inherited tables under t
ind the time to work on it myself, but
for the moment I'm satisfied with the workarounds we've made.
Joe
[EMAIL PROTECTED] wrote:
Joe Conway <[EMAIL PROTECTED]> wrote on 15.09.2004, 06:30:24:
We're not completely done with our data conversion (from a commercial
RDBMSi), but so far the results have been excellent. Similar to what
others have said in this thread, the conversion involved re