"Tom Lane" <[EMAIL PROTECTED]> wrote:
> "Gaetano Mendola" <[EMAIL PROTECTED]> writes:
> > I had one row duplicated with the same login and the same id_user,
> > was failing was the update of that row complaining about the duplicated
> > key.
>
> Oh. Your report was quite unclear; I thought you wer
On Mon, 08-sep-2003 at 16:29, Rhaoni Chiu Pereira wrote:
> Could anyone tell me a documentation that explains the " explain " result
> and how to analyze it ?
>
http://archives.postgresql.org/pgsql-performance/2003-09/msg0.php
Regards,
--
Alberto Caso Palomino
Adaptia Soluciones Integ
Hi List,
Could anyone tell me a documentation that explains the " explain " result
and how to analyze it ?
Best regards,
Rhaoni Chiu Pereira
Sistêmica Computadores
Visit us on the Web: http://sistemica.info
Fone/Fax : +55 51 3328 1122
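Until someone posts the archive link: the short version is that EXPLAIN prints the plan the optimizer chose, with estimated costs, and EXPLAIN ANALYZE additionally runs the query and reports actual times and row counts. A minimal sketch (the table name is borrowed from another message in this digest, purely as an illustration):

```sql
-- Estimated plan only: startup cost, total cost, estimated rows and width
EXPLAIN SELECT spcode, count(*) FROM callticket GROUP BY spcode;

-- Also executes the query, adding actual time and actual row counts,
-- so the planner's estimates can be checked against reality
EXPLAIN ANALYZE SELECT spcode, count(*) FROM callticket GROUP BY spcode;
```

Costs are in arbitrary units (roughly sequential page fetches); a large gap between estimated and actual rows usually means the table needs an ANALYZE.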
"Gaetano Mendola" <[EMAIL PROTECTED]> writes:
> I had one row duplicated with the same login and the same id_user,
> was failing was the update of that row complaining about the duplicated
> key.
Oh. Your report was quite unclear; I thought you were saying that
REINDEX had somehow built two copies
"Tom Lane" <[EMAIL PROTECTED]> wrote:
> "Gaetano Mendola" <[EMAIL PROTECTED]> writes:
> > I'm running Postgres 7.3.3 on a Linux Box
> > I know that seems impossible,
> > that I can not replicate the bug but
> > today without hardware failure, power down etc etc
> > I had a duplicate primary key + a
Can anyone point me to any stats on how well postgresql performs on
various OS's? I'm particularly interested in BSD variations and Linux.
There was recently a thread named "The results of my
PostgreSQL/filesystem performance" on the performance list, exactly about
this topic. The test results are
Can anyone point me to any stats on how well postgresql performs on
various OS's? I'm particularly interested in BSD variations and Linux.
--
Until later: Geoffrey [EMAIL PROTECTED]
The latest, most widespread virus? Microsoft end user agreement.
Think about it...
---
"Gaetano Mendola" <[EMAIL PROTECTED]> writes:
> I'm running Postgres 7.3.3 on a Linux Box
> I know that seems impossible,
> that I can not replicate the bug but
> today without hardware failure, power down etc etc
> I had a duplicate primary key + a duplicate unique index
> on one table. I already
I'm running Postgres 7.3.3 on a Linux box.
I know it seems impossible,
and I cannot replicate the bug, but
today, without hardware failure, power down, etc.,
I had a duplicate primary key + a duplicate unique index
on one table. I already had this "kind" of problem in another
table and I solved t
"Donald Fraser" <[EMAIL PROTECTED]> writes:
> My analysis at the time was that to access random records, performance
> deteriorated the further away the records that you were accessing were
> from the beginning of the index. For example using a query that had
> say OFFSET 25 would cause large d
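The usual workaround for that pattern is keyset pagination: remember the last key already returned instead of using an ever-growing OFFSET, so each page is a direct index probe. A sketch (the table and column names here are made up for illustration, not from the thread):

```sql
-- OFFSET paging: the backend still reads and discards the first
-- 25000 index entries before returning anything, so cost grows
-- with the offset
SELECT id, payload FROM items ORDER BY id LIMIT 100 OFFSET 25000;

-- Keyset paging: start each page after the last id already seen;
-- the index is entered at the right spot and per-page cost stays flat
SELECT id, payload FROM items WHERE id > 25000 ORDER BY id LIMIT 100;
```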
Hello,
is there a tool like webalizer for PostgreSQL that gives a nice overview
of all the logfiles?
Thank you
Daniel
On Mon, Sep 08, 2003 at 13:26:05 +0300,
Vasilis Ventirozos <[EMAIL PROTECTED]> wrote:
> This is a simple statement that i run
>
> core_netfon=# EXPLAIN select spcode,count(*) from callticket group by spcode;
> QUERY PLAN
> ---
On Mon, Sep 08, 2003 at 10:32:51 +0300,
Vasilis Ventirozos <[EMAIL PROTECTED]> wrote:
> Hi all, I work in a telco and I have a huge amount of data (50 million rows),
> but I see a lack of performance with huge tables in postgres,
> are 50 million rows the "limit" of postgres ? (with a good performance
On Mon, Sep 08, 2003 at 11:43:42 +0530,
Ravi T Ramachandra <[EMAIL PROTECTED]> wrote:
>
> SELECT * FROM TABLE A WHERE COL1 = 1 AND COL2 = 'ABC'.
>
> We have created index definition as follows
>
> CREATE INDEX IDX ON A(COL1, COL2);
>
> Explain on the above statement shows it is sequenti
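A common first check in this situation (a general sketch, not a diagnosis of this particular report) is stale planner statistics: without a recent ANALYZE the planner may estimate a sequential scan as cheaper than the composite index. The index quoted above can only be considered when its leading column appears in the WHERE clause:

```sql
-- Refresh planner statistics so the row estimates are realistic
ANALYZE a;

-- With (col1, col2) indexed, a predicate on the leading column col1
-- (here with col2 as well) lets the planner consider an index scan
EXPLAIN SELECT * FROM a WHERE col1 = 1 AND col2 = 'ABC';
```

If the plan still shows a sequential scan after ANALYZE, comparing the literal's type with the column's type is the next usual step, since a mismatched type can keep the index from being used on this era of PostgreSQL.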
On Mon, Sep 08, 2003 at 11:26:20 +0530,
Ramesh PAtel <[EMAIL PROTECTED]> wrote:
> Hi All
>
> I am working on PostgreSQL.
>
> I have one program, VET, and its database on PostgreSQL. This program
> runs at five different places, each with its own separate server, and the
> servers are not connected to each other.
> Now w
On Mon, 8 Sep 2003, Vasilis Ventirozos wrote:
This bit is simple and probably wants leaving alone, except for very
specific changes as you need them
> # Connection Parameters
> #
> tcpip_socket = true
> #ssl = false
>
> #max_connections = 32
> #superuser_reserved_connections = 2
>
> port =
The server is a dual Xeon 2.4 GHz HP with a 15k RPM SCSI disk and 2 GB of RAM
# Connection Parameters
#
tcpip_socket = true
#ssl = false
#max_connections = 32
#superuser_reserved_connections = 2
port = 5432
#hostname_lookup = false
#show_source_port = false
#unix_socket_directory =
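On a 2 GB machine the stock 7.3 memory settings are very conservative; a common starting point looks roughly like the fragment below (the numbers are illustrative guesses to tune from, not authoritative recommendations):

```
# Illustrative starting values for a 2 GB RAM box on 7.3
shared_buffers = 16384         # 128 MB (16384 x 8 kB pages)
sort_mem = 8192                # 8 MB per sort, per backend
vacuum_mem = 65536             # 64 MB while vacuuming
effective_cache_size = 131072  # ~1 GB, a hint about the OS disk cache
```

effective_cache_size changes no allocation; it only tells the planner how much of the database the kernel is likely caching, which makes index scans look cheaper.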
On Mon, 8 Sep 2003, Vasilis Ventirozos wrote:
> I use the default configuration file with tcpip enabled
> any suggestions about the configuration file ?
Post a copy of it to the list, along with the specs of the machine it is
running on, and I'm sure we'll all pipe in.
--
Sam Barnett-Cormack
I use the default configuration file with tcpip enabled
any suggestions about the configuration file ?
"Vasilis Ventirozos" <[EMAIL PROTECTED]> wrote:
> This is a simple statement that i run
>
> core_netfon=# EXPLAIN select spcode,count(*) from callticket group by spcode;
> QUERY PLAN
> ---
On Mon, 8 Sep 2003, Vasilis Ventirozos wrote:
> This is a simple statement that i run
>
> core_netfon=# EXPLAIN select spcode,count(*) from callticket group by spcode;
Well, yeah. Whatever you do, a complete seqscan and count is going to
take a long time, on the order of hours rather than days I
This is a simple statement that i run
core_netfon=# EXPLAIN select spcode,count(*) from callticket group by spcode;
QUERY PLAN
---
Aggregate (cost=2057275.91..2130712.22 rows
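When a grouped count over the whole table is needed regularly, the standard workaround on this era of PostgreSQL (a sketch; the summary table is invented here, and spcode is assumed to be an integer) is to keep per-group counts in a small side table, so the report becomes a cheap lookup instead of a full scan:

```sql
-- Hypothetical summary table holding one running count per spcode
CREATE TABLE callticket_counts (
    spcode  integer PRIMARY KEY,
    n       bigint NOT NULL DEFAULT 0
);

-- Seed it once from the existing data (one last full scan)
INSERT INTO callticket_counts
    SELECT spcode, count(*) FROM callticket GROUP BY spcode;

-- Reports then read the tiny table instead of 50M callticket rows
SELECT spcode, n FROM callticket_counts ORDER BY spcode;
```

A trigger on callticket would then increment or decrement n on every INSERT or DELETE to keep the counts current; that part is routine PL/pgSQL and omitted here.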
> On Mon, 8 Sep 2003, Vasilis Ventirozos wrote:
>
> > Hi all, I work in a telco and I have a huge amount of data (50 million rows),
> > but I see a lack of performance with huge tables in postgres,
> > are 50 million rows the "limit" of postgres ? (with a good performance)
> > I am expecting 2 billion records by 2004
On Mon, Sep 08, 2003 at 11:43:42AM +0530, Ravi T Ramachandra wrote:
> I recently setup postgres on a Linux box with 4GB Ram and 2.5 GHz
> processor.
Big box.
> We have created a database with 1.5 million rows in a
> table.
Small database.
> When we try to select rows from the table, it is taking
On Mon, 8 Sep 2003, Vasilis Ventirozos wrote:
> Hi all, I work in a telco and I have a huge amount of data (50 million rows),
> but I see a lack of performance with huge tables in postgres,
> are 50 million rows the "limit" of postgres ? (with a good performance)
> I am expecting 2 billion records by 2004
It's not a standard statement; I am trying to get statistics for the company,
and I see a lack of performance (the same statement on Informix runs well,
with the same indexes of course).
Vasilis Ventirozos
Could you give more detailed information?
What does explain say?
On Mon, 8 Sep 2003, Vasilis Ventirozos wrote:
> Hi all, I work in a telco and I have a huge amount of data (50 million rows),
> but I see a lack of performance with huge tables in postgres,
> are 50 million rows the "limit" of postgres
Hi all, I work in a telco and I have a huge amount of data (50 million rows),
but I see a lack of performance with huge tables in postgres.
Are 50 million rows the "limit" of postgres (with good performance)?
I am expecting 2 billion records by 2004, so I have to do something.
Does anyone have a huge d