I used VALUES as a replacement for the temporary table since, for this
application, it is a lot more useful.
The point is:
SELECT * FROM table WHERE value IN ( 1000 integers ): does 1000
comparisons for each row
SELECT * FROM table WHERE value IN ( VALUES (1000 integers) ): builds a
relation from the list, which the planner can hash once and probe per row
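A minimal sketch of the two forms (the table and the literal lists are placeholders):

```sql
-- Literal IN list: expanded to an = ANY array check,
-- comparing each row against the list of values.
EXPLAIN ANALYZE
SELECT * FROM test WHERE value IN (1, 2, 3);  -- imagine ~1000 literals

-- VALUES list: treated as a small relation, so the planner can
-- hash it once and probe it per row instead of scanning the list.
EXPLAIN ANALYZE
SELECT * FROM test WHERE value IN (VALUES (1), (2), (3));  -- same ~1000 values
```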
1. Does an indexed column on a table have to be a potential primary
key?
Nope, create as many indexes as you need/must/should.
I've been working with a couple of rather large tables where a common
select is on a foreign key called 'cntrct_id' (a varchar(9)).
However, the
Follow-up to my previous test, with an index this time:
EXPLAIN ANALYZE SELECT * FROM test WHERE value IN ( 1000 integers )
Bitmap Heap Scan on test (cost=3519.09..7156.83 rows=1000 width=4)
(actual time=5.843..8.897 rows=999 loops=1)
Recheck Cond: (value = ANY ('{0,...,999000}'::
Try creating a temporary table, populating with the list and joining
against it. That's probably your best bet for a long list of target
values.
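The temporary-table approach might look like this (table and column names are assumptions):

```sql
-- Load the target values into a temp table, then join against it.
CREATE TEMP TABLE targets (value integer NOT NULL);
INSERT INTO targets VALUES (1), (2), (3);  -- ... the long list of values
ANALYZE targets;                           -- give the planner row estimates

SELECT t.* FROM test t JOIN targets USING (value);
```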
Check:
forum_bench=> CREATE TABLE test (value INTEGER NOT NULL);
CREATE TABLE
forum_bench=> INSERT INTO test SELECT * FROM generate_seri
loaded by a single application and used by our customers with MS Access,
From my previous job, where we had an Access-based backoffice application,
you might want to learn enough to be ready to switch to something better
(ie postgres) when you feel the need.
Access is a booby trap setup
On Sun, 29 Apr 2007 22:33:37 +0200, Rich Shepard
<[EMAIL PROTECTED]> wrote:
On Sun, 29 Apr 2007, Martijn van Oosterhout wrote:
Then I'm afraid you haven't indicated your requirements properly. All I
can see is that the interval type does exactly what you want. It can store
days, weeks, mont
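For illustration, a few interval expressions (results depend on the current date):

```sql
SELECT interval '2 weeks 3 days';            -- stored as a single interval value
SELECT now() + interval '1 month';           -- date arithmetic with intervals
SELECT age(timestamp '2007-04-29',
           timestamp '2006-04-29');          -- yields an interval of 1 year
```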
Tom,
We were looking at the EXPLAIN results and noticed that it was converting
the 'wynn%' into fname >= 'wynn' AND fname < 'wyno'
Does this also work if the last character is a unicode character ?
---(end of broadcast)---
TIP 1: if
Any idea why using a variable v_streetName instead of a string 'wynn%'
behaves differently?
Yeah.
'wynn%' doesn't start with %, so LIKE can use the index.
But Postgres doesn't know that you know that the contents of this
variable never starts with '%'...
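A sketch of the difference (table, column, and index names are assumptions; in a non-C locale a btree index may need text_pattern_ops to support LIKE prefix scans):

```sql
CREATE INDEX people_fname_idx ON people (fname text_pattern_ops);

-- Literal prefix: rewritten to a range condition, so the index is usable:
--   fname >= 'wynn' AND fname < 'wyno'
EXPLAIN SELECT * FROM people WHERE fname LIKE 'wynn%';

-- Inside a function, a variable pattern could start with '%',
-- so the planner cannot assume a fixed prefix and may not use the index:
-- ... WHERE fname LIKE v_streetname;
```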
Thanks,
John
---
I don't see a way to remove the old index entries before inserting new
ones without creating a window where the index and table will be
inconsistent if vacuum fails.
VACUUM FULL is slow because it plays with the indexes...
CLUSTER is slow because it has to order the rows...
M
 name  | ids
-------+------------
 Peter | 2, 3, 4, 5
 Jack  | 100, 34, 3
Both name and ids are in text format.
If you really do not want to use a link table (user_id, flag_id), you
could use an array of ints instead of a string...
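A sketch of the integer-array alternative (table and column names are assumptions):

```sql
CREATE TABLE users (
    name text NOT NULL,
    ids  integer[] NOT NULL DEFAULT '{}'
);
INSERT INTO users VALUES ('Peter', '{2,3,4,5}'), ('Jack', '{100,34,3}');

-- Who has flag 3?
SELECT name FROM users WHERE 3 = ANY (ids);
```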
Hello,
Is there a way to know how many pages were hit by a Bitmap Heap scan ?
For instance :
Bitmap Heap Scan on posts
(cost=56.71..295.24 rows=2123 width=67)
(actual time=0.575..1.462 rows=2160 loops=1)
I'd like to know if it hit 2160 pages (ie I should really run CL
I'm aware of that and in my case I don't think it will be a
problem. It is for a type-ahead search web interface so actually it
only requires indexing all possible substrings starting from char 1,
ie. p, po, pos, post, postg, postgr, postgre, postgres, postgresq,
postgresql.
If you want to pr
You want trigram based search.
ie.
postgresql -> 'pos', 'ost', 'stg', 'tgr', 'gre', 'res', 'esq', 'sql'
searching for 'gresq' is searching for 'gre' and 'res' and 'esq' which
is good friends with bitmap scan. Then a little LIKE '%gresq%' to filter
the results.
P
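With the pg_trgm contrib module, a sketch (table and index names are assumptions; pg_trgm also pads words with spaces, so its trigram set differs slightly from the hand-made list above):

```sql
-- On recent versions; on older ones, load the pg_trgm contrib script instead.
CREATE EXTENSION pg_trgm;

CREATE INDEX posts_title_trgm ON posts USING gin (title gin_trgm_ops);

-- The bitmap index scan finds candidate rows via shared trigrams,
-- then the LIKE recheck filters out false positives.
SELECT * FROM posts WHERE title LIKE '%gresq%';

SELECT show_trgm('postgresql');  -- inspect the trigrams pg_trgm extracts
```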
tsearch2 versus mysql FULLTEXT in the context of a large forum.
I guess you know the answer already, but it never hurts to have nice
graphics to show your boss.
http://peufeu.free.fr/ftsbench/
I will upload new versions with more results, and maybe other engines, as
I ca
If you want embedded SQL, you'll probably have only 1 user at a time so
sqlite is a better choice.
But do you want embedded SQL ?
On Wed, 18 Apr 2007 22:48:52 +0200, Raymond Hurst <[EMAIL PROTECTED]>
wrote:
I have a requirement of incorporating a database in a hard disk drive.
OK, suppose in his function:
- if it does not exist, he creates the temp table with ON COMMIT DELETE
ROWS
- if it does exist, he truncates it just to be sure
So the next execution of the function will find the temp table, it will
have the same OID, all is well.
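That pattern might be sketched in PL/pgSQL like this (function and table names are assumptions):

```sql
CREATE OR REPLACE FUNCTION use_scratch() RETURNS void AS $$
BEGIN
    -- If the temp table already exists in this session, just empty it.
    TRUNCATE scratch;
EXCEPTION WHEN undefined_table THEN
    -- Otherwise create it; later calls in the same session will find it
    -- with the same OID, so cached plans remain valid.
    CREATE TEMP TABLE scratch (id integer) ON COMMIT DELETE ROWS;
END;
$$ LANGUAGE plpgsql;
```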
On Fri, 13 Apr 2007 12:15:30 +0200, Alexander Presber
<[EMAIL PROTECTED]> wrote:
Listmail schrieb:
Then, other languages will make you feel the pain of having to
quote all your arguments YOURSELF and provide all results as strings.
The most famous offender is PHP (this
On Fri, 13 Apr 2007 10:30:29 +0200, Tino Wildenhain <[EMAIL PROTECTED]>
wrote:
Joshua D. Drake schrieb:
Rick Schumeyer wrote:
Has anyone here used a postgres array with Rails? If so, how?
split()?
Err... there is no type mapping?
You know, some languages spoil us developers, so tha
If someone wants the schema change, react *now*. Later on we can only
append to it, and not change it :)
Since I like to complain...
Suppose you someday add another dot, or a "b" for beta, wouldn't it be
better to have
823
... or
Rules mess with queries. For data copying/archiving kinds of tasks,
triggers are a better bet, like you suggested in your original post.
Let me put that a different way: rules can *only* be used where data
integrity is not at stake. My own thinking is that it might be time to
make an officia
I love Open Source XD
http://ethan.tira-thompson.com/cvslog2web/
Note that this is overkill (but it would look SEXY on the site).
However, the original poster probably wants to know when to update his
servers, so he won't care about CVS commits...
If there were an RSS
Yeah yeah, but terminology aside, having 2 or three digits in each
attribute is just wrong!
Terminology aside, why? The unit is "8.1" not "8" and "1". It makes no
sense to say you're on version 8, in the given context, so why should the
XML data pretend there is?
//Magnus
Just prete
Well, the planner probably guessed that in your case it's faster to
scan the table than to use the index (indexes are not free). Did it
choose wrong?
Yes, see the other query in my post...
id IN ( 60 values ) => 0.582 ms (bitmap scan on the index : perfect)
join with V
Table definition and problem query is below. I'm surprised...
caillaudangers=> \d relations
       Table "public.relations"
 Column | Type | Modifiers
--------+------+-----------
 pare
Here you go.
Fetches versions and prints most recent minor for each major
Tests all mirrors for speed and prints out the 4 fastest (takes some
time)
http://www.crummy.com/software/BeautifulSoup/
Have a nice day !
#!/usr/bin/env python
# -*- coding: utf-8 -*-
I've never seen this before.
Is this PG specific or generic SQL that I've never been exposed to?
http://www.postgresql.org/docs/8.2/interactive/sql-values.html
VALUES conforms to the SQL standard, except that LIMIT and OFFSET are
PostgreSQL extensions.
It doesn't seem like much a
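For illustration, VALUES used on its own and as a subquery:

```sql
-- As a bare statement (acts like a SELECT of constant rows):
VALUES (1, 'one'), (2, 'two'), (3, 'three');

-- As a named subquery:
SELECT * FROM (VALUES (1, 'one'), (2, 'two')) AS v(num, word);
```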
On Fri, 06 Apr 2007 18:45:15 +0200, Markus Schiltknecht
<[EMAIL PROTECTED]> wrote:
Hi,
tom wrote:
Initially it seems that the WHERE IN (...) approach takes a turn for
the worse when the list gets very large.
Since I use this a lot on webpages, I thought maybe a little benchmark is
in
I have a choice of running:
SELECT bar FROM tokens WHERE foo IN ('apple','orange','biscuit') for
up to ~300 words
OR
SELECT bar FROM tokens WHERE foo = 'apple' up to ~300 times as a
prepared/cached SQL statement.
With new PG versions you can also use VALUES which will save you a
On 4/5/07, Michelle Konzack <[EMAIL PROTECTED]> wrote:
Am 2007-04-01 12:05:44, schrieb Leonel:
> and once you downloaded the packages do a :
>
> apt-get build-deps postgresql-8.1
Are you sure? -- It should be:
You don't have the build-dep for 8.2 in ubuntu dapper/ edgy
I recently
Today I rewrote a particularly nasty query involving a UNION ALL between
an active table and a huge archive table, some left joins, order by and
limit, and it went from 5 minutes to under one second ; however one query
became 4 with some glue in between.
EXPLAIN
SELECT * FROM (
SELECT 0
My personal view is that in general, binary files have no place in
databases. Filesystems are for files, databases are for data. My design
choice is to store the files in a filesystem and use the database to hold
metadata as well as a pointer to the file.
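That design might be sketched as a metadata table holding a pointer to each file (the schema is an assumption):

```sql
CREATE TABLE documents (
    id         serial PRIMARY KEY,
    filename   text NOT NULL,          -- original file name
    mime_type  text NOT NULL,
    size_bytes bigint NOT NULL,
    path       text NOT NULL UNIQUE,   -- pointer to the file on disk
    added_at   timestamptz NOT NULL DEFAULT now()
);
```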
If you *must* put files into the d
Any suggestions for finding out where all the time is being spent? I'm
- time spent updating indexes ? do you have a lot of them ?
- try with fsync off (not in production, but it will give you an idea)
You don't like filesystems ?
On Wed, 04 Apr 2007 07:44:57 +0200, Nikolay Moskvichev <[EMAIL PROTECTED]>
wrote:
Hi All!
Question is: how suitable is PG for storing about 2,000,000 binary files
of 0.5-2.0 MB each? We are not planning on a big number of clients or a
lot of updati
Hmmm [ studies query a bit more... ] I think the reason why that index
is so expensive to use is exposed here:
Index Cond: ((detect_time > (now() - '7
days'::interval)) AND (detect_time >= '2006-10-30
16:17:45.064793'::timestamp without time zone))
Evidently detect_time is
I designed a Java web application. The persistence layer is a
PostgreSQL database. The application needs user authentication.
I think it's a good choice to implement this authentication mechanism
via PostgreSQL login roles. So I can create several database login
roles and set the database permis
Are there any implications with possibly doing this? Will PG handle it?
Are there real-world systems using PG that have a massive amount of data
in them?
It's not how much data you have, it's how you query it.
You can have a table with 1000 rows and be dead slow if said rows are bi
On Tue, 03 Apr 2007 19:23:31 +0200, Tom Lane <[EMAIL PROTECTED]> wrote:
Listmail <[EMAIL PROTECTED]> writes:
It bitmapscans about half the table...
Which PG version is this exactly? We've fooled with the
choose_bitmap_and heuristics quite a bit ...
I can't help but think that the way this application writes data is
optimized for MySQL's transactionless table type, where lots of
simultaneous input streams writing at the same time to the same table
would be death.
Can you step back and work on how the app writes out data, so that it
opens a
Hello everyone !
I have this query :
annonces=> EXPLAIN ANALYZE SELECT * FROM annonces AS a WHERE
detect_time > CURRENT_TIMESTAMP - '7 DAY'::INTERVAL
AND detect_time >= '2006-10-30 16:17:45.064793'
AND vente
AND surface IS NOT NULL AND price IS NOT NULL
AND type_id IN
(1,12,24
Anyway, just to signal that tsearch2 crashes if SELECT is not granted
to pg_ts_dict (other tables give a proper error message when not
GRANTed).
I don't understand this. Are sure on this ? From prompt in your select
examples I see you have superuser's rights and you have successfully
>What will they think of next!
>http://krow.livejournal.com/502908.html
>I suppose it makes as much sense as the others, except why would you
>want to use mysql if the storage is in postgres?
If you've inherited data in a postgresql database this will allow
you to migrate it to the industry sta
OK, I've solved my problem... thanks for the hint !
Anyway, just to signal that tsearch2 crashes if SELECT is not granted to
pg_ts_dict (other tables give a proper error message when not GRANTed).
On Fri, 30 Mar 2007 13:20:30 +0200, Listmail <[EMAIL PROTECTED
Hello,
I have just ditched Gentoo and installed a brand new kubuntu system (was
tired of the endless compiles).
I have a problem with crashing tsearch2. This appeared both on Gentoo and
the brand new kubuntu.
I will describe all my install procedure, maybe I'm doing somet