On Tue, 27 Dec 2005 23:25:37 +0200, Eugene <[EMAIL PROTECTED]> wrote:
>SELECT * FROM ipdb2 WHERE '3229285376' BETWEEN ipfrom AND ipto;
If your [ipfrom, ipto] ranges are non-overlapping, you might be able
to exploit that fact by adding something like
... ORDER BY ipfrom DESC LIMIT 1
Servus
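A minimal sketch of that trick, using SQLite via Python as a stand-in for PostgreSQL. The table and column names (`ipdb2`, `ipfrom`, `ipto`) follow the thread; the `country` column and the sample ranges are invented for illustration. Because the ranges are assumed non-overlapping, only the row with the greatest `ipfrom` not exceeding the target can contain it, so the range test collapses to an index walk plus one check:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ipdb2 (ipfrom INTEGER, ipto INTEGER, country TEXT)")
con.executemany("INSERT INTO ipdb2 VALUES (?, ?, ?)", [
    (3229285000, 3229285300, 'AA'),
    (3229285301, 3229285500, 'BB'),
    (3229286000, 3229286100, 'CC'),
])
con.execute("CREATE INDEX ipdb2_from ON ipdb2 (ipfrom)")

target = 3229285376
# Instead of "WHERE ? BETWEEN ipfrom AND ipto" (hard to index well),
# fetch the single candidate row with the greatest ipfrom <= target ...
row = con.execute(
    "SELECT ipto, country FROM ipdb2 "
    "WHERE ipfrom <= ? ORDER BY ipfrom DESC LIMIT 1", (target,)).fetchone()
# ... and verify ipto afterwards, to guard against gaps between ranges.
country = row[1] if row and row[0] >= target else None
print(country)  # BB
```

The final `ipto` check matters: without it a target that falls into a hole between two ranges would be misattributed to the preceding range.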
On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy
<[EMAIL PROTECTED]> wrote:
> and because the number of possible search terms is so large, it
>would be nice if the entire index could somehow be preloaded into memory
>and encouraged to stay there.
Postgres does not have such a feature and I wouldn
On 02 Feb 2005 00:58:17 -0500, Greg Stark <[EMAIL PROTECTED]> wrote:
>I don't recall what the original motivation to rewrite the analyze sampling
>was, did having lots of dead tuples cause bad estimates in 7.4?
The original intention was to make analysing huge tables faster. But
the patch turned
On Tue, 31 Aug 2004 11:09:07 -0400, Joe Maldonado
<[EMAIL PROTECTED]> wrote:
>CREATE SCHEMA joe
>CREATE SEQUENCE joe_seq start 1
>CREATE TABLE joe_table (int id, varchar name)
>;
>
>and I get a syntax error for SEQUENCE.
This will work in 8.0.
http://www.postgresql.org/docs/7.4/static/sql
On Tue, 15 Jun 2004 13:06:40 -0700, Shane Wegner
<[EMAIL PROTECTED]> wrote:
>The index I created reads:
>create index books_idx1 on books(publisher_id,place_id,illustrator_id,
>edition_id,type_id,category_id,binding_id,id);
This index is useless, drop it. Is there an index on books(id)?
>The oth
On Fri, 4 Jun 2004 11:23:30 +0200, "Nagib Abi Fadel"
<[EMAIL PROTECTED]> wrote:
>According to those results the meta_data for a temp table are automatically
>removed when the table is destroyed
Yes, but the dead tuples remain. Try
VACUUM VERBOSE pg_class;
VACUUM VERBOSE pg_attrib
On Thu, 3 Jun 2004 09:04:43 +0200, "Nagib Abi Fadel"
<[EMAIL PROTECTED]> wrote:
>Let's say for example the variable is called "X". The view is called
>"t_view" and the temporary table is called "t_temp".
>Each time a user connects to the web, the application will initialize the
>variable X and it w
On Sun, 23 May 2004 23:32:48 +0700, David Garamond
<[EMAIL PROTECTED]> wrote:
>Actually, each record will be incremented probably only thousands of
>times a day. But there are many banners. Each record has a (bannerid,
>campaignid, websiteid, date, countrycode) "dimensions" and (impression,
>cli
On Wed, 05 May 2004 13:30:19 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
>Yup, that's a gotcha. Patch applied to 7.4 and HEAD branches.
Hey, that was a -r diff. It was not intended for you :-)
I had planned to send a -c diff to -patches, as soon as I have a
suitable regression test. I thought o
On Mon, 3 May 2004 20:47:31 -0400, Vikram Kulkarni
<[EMAIL PROTECTED]> wrote:
>test=# SELECT timeofday();
>             timeofday
>---------------------------------------
> Wed Dec 31 16:00:00.591964 1969 PST
>(1 row)
>[...]
>That obviously doesn't look right. Isn't timeofday() supposed to return
>t
On Wed, 28 Apr 2004 12:33:15 +0300, "Bogdan Vatkov"
<[EMAIL PROTECTED]> wrote:
>SQL error:
>ERROR: JOIN/ON clause refers to "vras_audio_records", which is not part of JOIN
In
SELECT ... FROM a, b LEFT JOIN c ON (a.id = ...)
the LEFT JOIN operator has higher precedence than the comma, so
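The usual fix for that precedence pitfall is to write every join explicitly, so that the table referenced in the ON clause is already part of the LEFT JOIN's left-hand input. A sketch with hypothetical tables `a`, `b`, `c`, run through SQLite via Python as a stand-in:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    CREATE TABLE c (a_id INTEGER);
    INSERT INTO a VALUES (1);
    INSERT INTO b VALUES (1);
    INSERT INTO c VALUES (1);
""")
# "FROM a, b LEFT JOIN c ON (a.id = c.a_id)" fails in PostgreSQL because the
# LEFT JOIN binds to b alone, leaving a out of scope in the ON clause.
# Writing the first join explicitly puts a inside the LEFT JOIN's left input:
rows = con.execute("""
    SELECT a.id, b.id, c.a_id
    FROM a CROSS JOIN b
    LEFT JOIN c ON (a.id = c.a_id)
""").fetchall()
print(rows)  # [(1, 1, 1)]
```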
On Thu, 29 Apr 2004 09:48:10 -0400 (EDT), Edmund Dengler
<[EMAIL PROTECTED]> wrote:
>=> explain analyze select * from replicated where rep_component = 'ps_probe' limit 1;
>---
On Mon, 05 Apr 2004 11:55:17 -0500, Wes Palmer
<[EMAIL PROTECTED]> wrote:
>> match, there's no need to fetch the heap tuple. Unfortunately the
>> planner doesn't believe that this is possible:
>> /* Don't believe estimates less than 1... */
>> if (tuples_fetched < 1.0)
>> tuples_fetched = 1.0;
>
>
On Sat, 03 Apr 2004 22:39:31 -0800, "Mooney, Ryan" <[EMAIL PROTECTED]>
wrote:
>Ok, so I ran a vacuum analyse. It took ~1.7 days to finish.
Just to make it clear: VACUUM and ANALYSE are two different commands.
VACUUM is for cleaning up. It has to visit every tuple in every page,
and if there
[time to move this to -hackers]
On Fri, 02 Apr 2004 11:16:21 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
>Manfred Koizar <[EMAIL PROTECTED]> writes:
>> The first step, however, (acquire_sample_rows() in analyze.c) has to
>> read more rows than finally end up in the s
On Thu, 01 Apr 2004 12:22:58 +0200, I wrote:
>BTW, ANALYSE is basically a constant time operation.
On closer inspection, this is not the whole truth. ANALY[SZ]E is a two
stage process: First it collects a sample of rows, then these rows are
examined to produce various statistics.
The cost of th
On Tue, 30 Mar 2004 17:48:14 -0800, "Mooney, Ryan" <[EMAIL PROTECTED]>
wrote:
>I have a single table that just went over 234GB in size with about 290M+
>rows.
That would mean ~ 800 bytes/row which, given your schema, is hard to
believe unless there are lots of dead tuples lying around.
>queries u
On Thu, 25 Mar 2004 20:56:35 +0100, Rob Hoopman <[EMAIL PROTECTED]> wrote:
>> > It appears that when adding more than 48 sub nodes
>> >to any node in the tree, craps out because of an INT8 column overflowing.
>>
>> AFAICS it doesn't depend on the number of siblings, but it fails when
>> the sum of
On Thu, 4 Dec 2003 17:56:33 - (GMT), "John Sidney-Woollett"
<[EMAIL PROTECTED]> wrote:
>It would be nice if nested transactions could be (optionally) decoupled
>from their enclosing transaction.
While I see your point how this could be useful for certain use cases,
unfortunately I don't have a
On Fri, 21 Nov 2003 20:22:17 +0900, Alex <[EMAIL PROTECTED]> wrote:
>Hi,
>is it possible to concatenate two rows (same or different data type)
>into one.
>like First Name, Last Name or ZIP City etc.
If you meant to concatenate two *columns*, it goes like
SELECT firstname || ' ' || las
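A runnable sketch of that column concatenation, using SQLite via Python as a stand-in; the `persons` table and its contents are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE persons (firstname TEXT, lastname TEXT)")
con.execute("INSERT INTO persons VALUES ('Ada', 'Lovelace')")
# || is the standard SQL string concatenation operator.
full, = con.execute(
    "SELECT firstname || ' ' || lastname FROM persons").fetchone()
print(full)  # Ada Lovelace
```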
On Thu, 20 Nov 2003 16:52:37 +0900, Alex <[EMAIL PROTECTED]> wrote:
>Is there an easy way to write a select statement that returns me the
>first free number or any within the range of 200?
>For example if 1-30, and 32-50 are occupied then I would like to fill in
>the new entry with id 3
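One common idiom for this gap-finding problem (an assumption on my part, since the original answer is cut off): look for the smallest occupied id whose successor is free. Sketched with SQLite via Python as a stand-in, using the numbers from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
# ids 1-30 and 32-50 occupied, as in the question
con.executemany("INSERT INTO t VALUES (?)",
                [(i,) for i in list(range(1, 31)) + list(range(32, 51))])
# Smallest id+1 that is not itself taken, capped at 200. Note this only
# finds gaps *after* an occupied id; if 1 itself is free it must be
# special-cased.
gap, = con.execute("""
    SELECT MIN(id + 1) FROM t
    WHERE id + 1 NOT IN (SELECT id FROM t) AND id + 1 <= 200
""").fetchone()
print(gap)  # 31
```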
On Fri, 14 Nov 2003 10:01:51 -0500 (EST), <[EMAIL PROTECTED]>
wrote:
>The Production database is the "real" data, and we periodically take a
>back up from Prod and re-instantiate QAT and DEV by dropping them and
>then restoring from the Prod backup.
> Not that OID's are in short supply,
>but I'm a
On Thu, 6 Nov 2003 00:50:06 +0900, Michael Glaesemann
<[EMAIL PROTECTED]> wrote:
>>> What I came up with was deleting and reinserting the relevant
>>> ordercharges rows
>>
>> This might have unwanted side effects (think ON DELETE CASCADE).
>
>Good point. At this stage in my PostgreSQL progress, I h
On Wed, 5 Nov 2003 02:58:28 +0900, Michael Glaesemann
<[EMAIL PROTECTED]> wrote:
>> UPDATE ordercharges
>>SET orderchargeasbilled = 0.065 * sale.orderchargeasbilled
>> FROM orders AS o, ordercharges AS sale
>> WHERE ordercharges.orderchargecode = 'S&H'
>>AND ordercharges.orderid = o.orde
On Fri, 24 Oct 2003 18:19:05 GMT, Ron <[EMAIL PROTECTED]>
wrote:
>When I try the following with my current database I
>get an error:
> giibdb=# ALTER TABLE project ADD CONSTRAINT company_is_ta FOREIGN
> KEY (companyID) REFERENCES tblCompanies(companyID);
> NOTICE: ALTER TABL
On Thu, 23 Oct 2003 16:02:03 GMT, Ron <[EMAIL PROTECTED]>
wrote:
> ALTER TABLE project ADD CONSTRAINT company_is_ta companyID
> REFERENCES company(companyID);
> (plus variations on the above, resulting in errors, all similar to:)
>ERROR: parser: parse error at or near "companyID" at c
On Fri, 17 Oct 2003 15:47:10 +0100, Nick Burrett <[EMAIL PROTECTED]>
wrote:
>CREATE TABLE fiveminute ( server CHAR(32),
> stamp TIMESTAMP,
> bytesin BIGINT CHECK (bytesin >= 0),
> bytesout BIGINT CHECK (bytesout >= 0));
>
On Fri, 17 Oct 2003 01:04:48 -0700 (PDT), Nagib Abi Fadel
<[EMAIL PROTECTED]> wrote:
>Is there a XOR logical operator in Postgresql
Yes: !=
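The "Yes: !=" answer works because, for two boolean operands, "not equal" is exactly exclusive or (PostgreSQL also spells the operator `<>`). A quick Python check of the truth table:

```python
# For booleans, inequality and exclusive or coincide -- which is why
# "!=" answers the question "is there a XOR operator?".
for a in (False, True):
    for b in (False, True):
        assert (a != b) == (a ^ b)
print("!= matches XOR on all four boolean combinations")
```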
Servus
Manfred
---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://
On Fri, 10 Oct 2003 13:59:38 -0500, "Thomas LeBlanc"
<[EMAIL PROTECTED]> wrote:
>Does a Unique Constraint build a unique index?
Yes.
>What is the difference?
A constraint is an abstract concept, an index is an implementation
detail.
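The same split between abstract constraint and backing index is visible in SQLite too, which is used here via Python as a stand-in (the automatic index and the `PRAGMA` are SQLite details; PostgreSQL likewise creates a unique index to enforce the constraint):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The UNIQUE constraint is declared at the logical level ...
con.execute("CREATE TABLE users (email TEXT UNIQUE)")
# ... but the engine enforces it through an index it builds behind the scenes.
idx = con.execute("PRAGMA index_list('users')").fetchall()
print(len(idx) >= 1)  # True: an automatic index backs the constraint

con.execute("INSERT INTO users VALUES ('a@example.com')")
try:
    con.execute("INSERT INTO users VALUES ('a@example.com')")
except sqlite3.IntegrityError:
    print("duplicate rejected by the unique index")
```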
Servus
Manfred
On Fri, 19 Sep 2003 20:06:39 -0400, Tom Lane <[EMAIL PROTECTED]>
wrote:
>Perhaps you should go back and study what
>pg_upgrade actually did.
Thanks for the friendly invitation. I did that.
> It needed only minimal assumptions about the
>format of either old or new catalogs. The reason is tha
On Fri, 19 Sep 2003 18:51:00 -0400, Tom Lane <[EMAIL PROTECTED]>
wrote:
>transfer the schema into the new installation using "pg_dump -s" and
>then push the user tables and indexes physically into place.
I'm more in favour of in-place upgrade. This might seem risky, but I
think we can expect user
On Fri, 19 Sep 2003 17:38:13 -0400, Tom Lane <[EMAIL PROTECTED]>
wrote:
>> A working pg_upgrade is *not* the first thing we need.
>
>Yes it is.
At the risk of being called a stubborn hairsplitter, I continue to say
that pg_upgrade is not the *first* thing we need. Maybe the second
...
> As you
On Fri, 19 Sep 2003 10:10:17 +0100, Rory Campbell-Lange
<[EMAIL PROTECTED]> wrote:
>The sent
>column shows the number of items sent to each recipient from each
>source. The received column [...] is summing the number of messages
> by recipient [and] need to be filtered by source too.
SELECT t_to A
On Thu, 11 Sep 2003 14:24:25 -0700, Sean Chittenden
<[EMAIL PROTECTED]> wrote:
>Agreed, but if anyone has a table with close to 1600 columns in a
>table...
This 1600 column limit has nothing to do with block size. It is
caused by the fact that a heap tuple header cannot be larger than 255
bytes,
On Thu, 21 Aug 2003 21:10:34 +0530, "Shridhar Daithankar"
<[EMAIL PROTECTED]> wrote:
>Point I am trying to make is to tune FSM and autovacuum frequency
>such that you catch all the dead tuples in RAM
You might be able to catch the pages with dead tuples in RAM, but
currently there's no way to kee
On Wed, 25 Jun 2003 15:29:20 -0400, Andrew Sullivan
<[EMAIL PROTECTED]> wrote:
>On Sat, Jun 14, 2003 at 07:33:19PM +0200, Sven Köhler wrote:
>>
>> "select *" should refelect the cosmetical order of the columns.
>
>Why? You asked for everything, and specified no order.
AFAICS it's a matter of sta