http://www.postgresql.org/docs/8.2/interactive/indexes-intro.html
states that "Indexes can moreover be used in join searches. Thus, an
index defined on a column that is part of a join condition can
significantly speed up queries with joins."
Does this mean that a condition like "WHERE ... [AND
On Sat, 03.11.2007 at 11:09:05 +0400, rihad wrote the following:
> Hi,
>
> Should an index be used on a created_at timestamp column if you know you
> will be using "ORDER BY created_at ASC|DESC" from time to time?
Yes. And you should use EXPLAIN.
Andreas
--
Andreas Kretschmer
Kontakt: Heyn
On Nov 2, 2007 8:45 PM, Kynn Jones <[EMAIL PROTECTED]> wrote:
> PS: As an aside to the list, as a programmer, when I'm starting out in a
> language, I learn more than I can say from reading source code written
> by the experts, but for some reason I have had a hard time coming
> across expertly writ
Hi,
Should an index be used on a created_at timestamp column if you know you
will be using "ORDER BY created_at ASC|DESC" from time to time?
Thanks.
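A minimal sketch of the answer (the table and index names here are hypothetical, not from the thread): a plain btree index can be scanned in either direction, so one index serves both ASC and DESC ordering, and EXPLAIN confirms whether the planner uses it.

```sql
-- A btree index supports both ORDER BY created_at ASC and DESC scans.
CREATE INDEX items_created_at_idx ON items (created_at);

-- Check what the planner actually chooses; with a small LIMIT an index
-- scan is likely, but for a full-table sort it may prefer a seqscan+sort.
EXPLAIN SELECT * FROM items ORDER BY created_at DESC LIMIT 10;
```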
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
Hey all,
I've got a program that uses PostgreSQL. In the past, one of the
trickier parts of installation and design was supporting various
versions and various types of SQL servers. So now that I am doing a
ground-up rewrite of the program, I wanted to use a dedicated
installation of PgSQL.
That was the problem: the permissions on /tmp were 0700 with my user
acct. as owner and staff as group. I changed the permissions to 0777
and now the server will start up okay.
Thank you for your help, Scott.
On Nov 02, 2007, at 10:05 PM, Scott Marlowe wrote:
On 11/2/07, Myshkin LeVine <[EMAIL PROTECTED]> wrote:
> Hi,
> I have a problem starting the server on Mac OSX 10.3.9. I compiled
> v8.2.5 from source. The compilation, regression testing, and
> installation all went smoothly with no errors. I installed PostgreSQL
> in the default location a
Jeff Amiel wrote:
>
> Bruce Momjian wrote:
>>>
>>> No, it isn't. Please add a TODO item about it:
>>> * Prevent long-lived temp tables from causing frozen-Xid advancement
>>>starvation
>
> Can somebody explain this one to me? Because of our auditing technique, we
> have many LONG lived temp
Hi,
I have a problem starting the server on Mac OSX 10.3.9. I compiled
v8.2.5 from source. The compilation, regression testing, and
installation all went smoothly with no errors. I installed PostgreSQL
in the default location and created a new user account named postgres.
I ran initdb which e
Chris Browne wrote:
> If I replicate your query, with extra columns, AND NAMES, I get the following:
>
> [EMAIL PROTECTED]:5433=# select random() as r1, random() as r2, random() as
> r3 from generate_series(1,10) order by random();
> r1 | r2 | r3
>
On Fri, 2007-11-02 at 14:45 -0700, Joshua D. Drake wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On Fri, 02 Nov 2007 13:49:27 -0700
> "Jeffrey W. Baker" <[EMAIL PROTECTED]> wrote:
>
> > Nested Loop Left Join (cost=13920.16..2257575559347.46
> > rows=3691992705807 width=128)
> >
"Jeffrey W. Baker" <[EMAIL PROTECTED]> writes:
> I was thinking that it might be nice to be able to tell postgres to
> refuse to execute any plan with an estimated cost above some threshold.
What you suggest has been suggested before, and I might think it was
a good idea if I trusted the planner's
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Fri, 02 Nov 2007 13:49:27 -0700
"Jeffrey W. Baker" <[EMAIL PROTECTED]> wrote:
> Nested Loop Left Join (cost=13920.16..2257575559347.46
> rows=3691992705807 width=128)
>
> After a call to ANALYZE, the same query gave me:
>
> Merge Left Join (cos
I was thinking that it might be nice to be able to tell postgres to
refuse to execute any plan with an estimated cost above some threshold.
For example, earlier today I produced this extremely bogus execution
plan with the following top line:
Nested Loop Left Join (cost=13920.16..2257575559347.46
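PostgreSQL has no setting that rejects a plan by estimated cost, but a runaway query can at least be bounded by wall-clock time with statement_timeout (a sketch; the 60-second value is an arbitrary example):

```sql
-- No planner-cost threshold exists, but runtime can be capped instead.
-- Any statement exceeding the timeout is aborted with an error.
SET statement_timeout = '60s';
-- ... run the suspect query here ...
RESET statement_timeout;
```

This is a different guarantee than a cost cutoff: the bogus plan still starts executing, it just cannot run unboundedly.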
Kynn Jones wrote:
So... not really an answer (other than I used a stored proc)
Actually, I'm interested in your solution.
Just to make sure I understood what you did: you bulk-populated (i.e.
with $dbh->do('COPY...'), $dbh->pg_putline(...), $dbh->pg_endcopy) the
referring tables, with their fk
On 11/1/07, Kynn Jones <[EMAIL PROTECTED]> wrote:
> Hi. This is a recurrent problem that I have not been able to find a
> good solution for. I have a large database that needs to be built from
> scratch roughly once every month. I use a Perl script to do this.
>
> The tables are very large, so I
> So... not really an answer (other than I used a stored proc)
Actually, I'm interested in your solution.
Just to make sure I understood what you did: you bulk-populated (i.e.
with $dbh->do('COPY...'), $dbh->pg_putline(...), $dbh->pg_endcopy) the
referring tables, with their fkey constraints disa
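A common shape for that kind of bulk load, sketched below with a hypothetical table name and file path. Note the caveat: DISABLE TRIGGER ALL also disables the foreign-key enforcement triggers, and the loaded rows are not re-validated when the triggers are re-enabled, so dropping and re-adding the constraint is the safer variant.

```sql
-- Sketch: suspend FK checking around a COPY bulk load (superuser required).
BEGIN;
ALTER TABLE child DISABLE TRIGGER ALL;   -- "child" is a hypothetical table
COPY child FROM '/path/to/child.dat';    -- placeholder server-side path
ALTER TABLE child ENABLE TRIGGER ALL;    -- does NOT re-check existing rows
COMMIT;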
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Fri, 02 Nov 2007 12:02:46 -0600
Dennis Muhlestein <[EMAIL PROTECTED]> wrote:
> I saw the announcement for a multi-master replication system on the
> Announcement list. What are your thoughts on this product? Has
> anyone tried it?
>
I know it
I saw the announcement for a multi-master replication system on the
Announcement list. What are your thoughts on this product? Has anyone
tried it?
The cybercluster link:
http://www.postgresql.at/english/downloads_e.html
-Dennis
On Fri, 2007-11-02 at 00:03 -0400, Kevin Hunter wrote:
> - More in line with the conversation with my friend, what/why is it that
> Postgres needs a DBA while MySQL doesn't? I highly suspect that the
> assumption that MySQL doesn't need a DBA is incorrect, but that's what
> was posed to me and I c
On Oct 30, 2007, at 3:42 PM, Simon Riggs wrote:
On Tue, 2007-10-30 at 08:25 -0600, Brian Wipf wrote:
I'm trying to take a base backup from the standby server in archive
recovery mode. I don't believe it's possible to connect to it to
issue
pg_start_backup/pg_stop_backup.
http://www.postgre
Kynn Jones wrote:
Hi. This is a recurrent problem that I have not been able to find a
good solution for. I have a large database that needs to be built from
scratch roughly once every month. I use a Perl script to do this.
The tables are very large, so I avoid as much as possible using
in-memo
Tom Lane wrote:
Kevin Hunter <[EMAIL PROTECTED]> writes:
However, I'm not a DBA and only minimally know what's involved in doing
the job, so I don't have "ammo" to defend (or agree?) with my friend
when he says that "Postgres requires a DBA and MySQL doesn't so that's
why they choose the latt
You know the old saying, tell a lie often enough and it becomes the
truth.
There are perceptions about databases out there that may not stand the
test of analysis. But that really doesn't matter. If you want to bring
down the perception, you need to use a different tack. And that has
nothing to
On Fri, Nov 02, 2007 at 01:36:51PM +0100, none none wrote:
> [EMAIL PROTECTED]
>
> Currently I am trying to implement a data structure for tracking my website's
> traffic.
> I'm using a key/value table which contains e.g. HTTP headers and/or GET/POST
> data.
> A request table (1 request per row) references
On Nov 1, 2007, at 18:57, Tom Lane wrote:
In the usual tradition of SQL99, the spec text is enormously less
readable than SQL92 was, but I *think* this says nearly the same thing
as what we do: a plain column reference in ORDER BY is first sought as
an output column name, and failing that sough
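That resolution rule is easy to see in a small self-contained query: the alias in ORDER BY binds to the output column, so the sort uses the already-computed value rather than evaluating the expression again per row.

```sql
-- "r" in ORDER BY resolves to the output-column alias first (SQL99 rule),
-- so rows are ordered by the random() value already computed in the SELECT.
SELECT random() AS r FROM generate_series(1, 5) ORDER BY r;
```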
On Fri, 2 Nov 2007, Kevin Hunter wrote:
I don't have "ammo" to defend (or agree?) with my friend when he says
that "Postgres requires a DBA and MySQL doesn't so that's why they
choose the latter."
A statement like this suggests a fundamental misunderstanding of what a
DBA does, and unfortuna
[EMAIL PROTECTED]
Currently I am trying to implement a data structure for tracking my website's
traffic.
I'm using a key/value table which contains e.g. HTTP headers and/or GET/POST
data.
A request table (1 request per row) references the key/value pairs using
a bigint[] array.
The problem seems to be t
2007/11/2, Ow Mun Heng <[EMAIL PROTECTED]>:
> pgstattuple... Okay... does anyone have a CentOS RPM for it? I don't really
> have access to a compiler on the (server) machine.
Don't you have the postgresql-contrib package for CentOS?
--
Filip Rembiałkowski
On Fri, Nov 02, 2007 at 10:04:06AM +0100, Christian Rengstl wrote:
> 1) Is it possible to refer to a column in a different table, i.e.
> entering a value in mytable.x should only be allowed if mytable2.y=1 for
> example? I know that it is possible to use triggers to do that, but I
> think adding a si
On Fri, 2 Nov 2007, carter ck wrote:
Hm. No. I am not using tsearch2. I have looked at the documentation, but not
sure how it benefits me.
I am searching for text such as mobile numbers and user names in my application.
Any idea of having tsearch2 accommodate different search fields? Way of
Hi list,
I have two questions concerning check constraints:
1) Is it possible to refer to a column in a different table, i.e.
entering a value in mytable.x should only be allowed if mytable2.y=1, for
example? I know that it is possible to use triggers to do that, but I
think adding a simple check cons
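A CHECK constraint can only examine the row being checked, so a cross-table rule like this generally ends up as a trigger. A sketch, using the table and column names from the question and a hypothetical function name (8.2-era EXECUTE PROCEDURE syntax):

```sql
-- Reject mytable.x unless some row in mytable2 has y = 1.
CREATE OR REPLACE FUNCTION check_mytable_x() RETURNS trigger AS $$
BEGIN
    IF NEW.x IS NOT NULL AND NOT EXISTS
       (SELECT 1 FROM mytable2 WHERE y = 1) THEN
        RAISE EXCEPTION 'x not allowed unless mytable2.y = 1';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_x_check BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE check_mytable_x();
```

Unlike a real foreign key, nothing re-checks existing rows in mytable if mytable2 later changes; that is the usual caveat with trigger-based "constraints".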
Hi,
So what would it look like? If you say "1520.50", what should it return?
Regards,
Moiz Kothari
On 11/2/07, Yogesh Arora <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> As I am working with decimal formats, is there any method to do it
> in the same way?
>
>
> On 11/2/07, Moiz Kothari < [
Hm. No. I am not using tsearch2. I have looked at the documentation, but not
sure how it benefits me.
I am searching for text such as mobile numbers and user names in my application.
Any idea of having tsearch2 accommodate different search fields? A way of
implementing it would be helpful too.
Hi,
You can use what Bruce has suggested and get the value you want.
eg.
select trim(substring(cash_words('1520'), 1, position('dollars' in
cash_words('1520'))-1));
will return:
One thousand five hundred twenty (but of course this example does not work for
decimals)
Regards,
Moiz Kothari
On 11
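For what it is worth, cash_words applied to a money value already spells out the cents, so the decimal case may not need any trimming at all (a sketch; exact output wording depends on the server's money formatting):

```sql
-- cash_words on a money value spells out dollars and cents together,
-- e.g. something like "One thousand five hundred twenty dollars and
-- fifty cents":
SELECT cash_words('1520.50'::money);
```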
On Thu, Nov 01, 2007 at 08:03:08PM -0700, Craig White wrote:
> *** begin pg_table_dump.scr ***
> #!/bin/sh
> #
> # Script to identify tables, backup schema and data separately and
> # then finally, vacuum each table
> #
> DB_NAME=MY_DB
> BACKUP_PATH=/home/backup/postgres/production
> MY_SCHEMA=publi