Hello,
I am using Postgres with PHP and persistent connections.
For simple queries, parsing & preparing time is often longer than actual
query execution time...
I would like to execute a bunch of PREPARE statements on connection, to
prepare my most often-used small queries.
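A minimal sketch of what that could look like in plain SQL (the statement
name, table, and lookup column are hypothetical, just to illustrate the
pattern):

    -- Run once, right after the persistent connection is established.
    -- "get_user", "users", and "id" are made-up examples.
    PREPARE get_user (int) AS
        SELECT * FROM users WHERE id = $1;

    -- Later executions skip the parse/plan step entirely:
    EXECUTE get_user(42);

Note that prepared statements are per-session: on a reused persistent
connection a second PREPARE with the same name fails with a "prepared
statement already exists" error, so the setup code has to trap that or
track which connections have already been prepared.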
[EMAIL PROTECTED] ("sathiya psql") writes:
> On Tue, Mar 25, 2008 at 2:09 PM, jose
> javier parra sanchez <[EMAIL PROTECTED]> wrote:
>
> It's been said zillions of times on the mailing list. Using a select
> count(*) in postgres is slow, and probably will be slow for a long time.
>
> 1st: you should not use a ramdisk for this, it will slow things down as
> compared to simply having the table on disk. Scanning it the first time
> when on disk will load it into the OS IO cache, after which you will get
> memory speed.
>
Absolutely.
After getting some replies, I dropped ...
Hello Sathiya,
1st: you should not use a ramdisk for this, it will slow things down as
compared to simply having the table on disk. Scanning it the first time
when on disk will load it into the OS IO cache, after which you will get
memory speed.
2nd: you should expect the "SELECT COUNT(*)" to r...
In response to "sathiya psql" <[EMAIL PROTECTED]>:
> >
> > Yes. It takes your hardware about 3 seconds to read through 700M of ram.
> >
> > Keep in mind that you're not just reading RAM. You're pushing system
> > requests through the VFS layer of your operating system, which is treating
> > the RAM like a disk (with cylinder groups and inodes and blocks ...
In response to "sathiya psql" <[EMAIL PROTECTED]>:
> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
>                          QUERY PLAN
> --------------------------------------------------------- ...
Well, we're not running PostgreSQL on a NetApp over NFS, but a DB2
database. Nevertheless, it runs quite well. NFS is not a bad choice for
your database; the big memory buffer in front of the RAID 6 blocks makes
it all very quick, as if you were working directly on a 1+ TB ramdisk.
One important thing ...
>
> Yes. It takes your hardware about 3 seconds to read through 700M of ram.
>
> Keep in mind that you're not just reading RAM. You're pushing system
> requests through the VFS layer of your operating system, which is treating
> the RAM like a disk (with cylinder groups and inodes and blocks ...
sathiya psql wrote:
> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
>                          QUERY PLAN
> --------------------------------------------------------- ...
In response to "sathiya psql" <[EMAIL PROTECTED]>:
> Dear Friends,
> I have a table with 32 lakh records in it. The table size is nearly 700 MB,
> and my machine has 1 GB + 256 MB of RAM. I created the table space in
> RAM, and then created this table in that RAM.
>
> So now everything is in RAM ...
sathiya psql wrote:
EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
And your usual query is:
SELECT count(*) from call_log_in_ram;
?
If so, you should definitely build a summary table maintained by a
trigger to track the row count. That's VERY well explained in the
mailing list archives.
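A minimal sketch of that trigger-maintained summary table, with
hypothetical names (the archive threads have more complete variants that
also deal with concurrent writers):

    -- One-row table holding the current count.
    CREATE TABLE call_log_rowcount (n bigint NOT NULL);
    INSERT INTO call_log_rowcount SELECT count(*) FROM call_log_in_ram;

    CREATE FUNCTION call_log_rowcount_trig() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE call_log_rowcount SET n = n + 1;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE call_log_rowcount SET n = n - 1;
        END IF;
        RETURN NULL;  -- the return value of an AFTER trigger is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER call_log_rowcount_maint
        AFTER INSERT OR DELETE ON call_log_in_ram
        FOR EACH ROW EXECUTE PROCEDURE call_log_rowcount_trig();

The price is that every insert or delete now updates the same counter
row, which serializes concurrent writers; that trade-off is part of why
there is no free, always-accurate count(*) under MVCC.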
EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
                         QUERY PLAN
---------------------------------------------------------
 Aggregate  (cost=90760.80..90760.80 rows= ...
sathiya psql wrote:
Yes, many times I need to process all the records;
often I need to use count(*).
So what to do? (Those trigger options I know already, but I will do
counts on different parameters.)
*** PLEASE *** post the output of an EXPLAIN ANALYSE on one or more of
your queries.
On Mon, Mar 24, 2008 at 3:37 PM, Andreas Kretschmer
<[EMAIL PROTECTED]> wrote:
> petchimuthu lingam <[EMAIL PROTECTED]> wrote:
>
> > Hi friends,
> >
> > I am using postgresql 8.1, I have shared_buffers = 5, now I execute the
> > query, it takes 18 seconds to do a sequential scan; when I r...
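As an aside: in 8.1, shared_buffers is a count of 8 kB buffers, not a
number of bytes, so "shared_buffers = 5" (if that is literally the
setting, and not a truncated larger number) would be only 40 kB of shared
cache. A sketch of more typical 8.1-era values for a machine with about
1 GB of RAM (illustrative numbers only; they need tuning against the real
workload):

    # postgresql.conf (8.1 syntax: plain buffer/page counts, no memory units)
    shared_buffers = 25000          # 25000 x 8 kB = ~200 MB
    effective_cache_size = 87500    # ~700 MB expected in the OS cache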
>
> Show us the EXPLAIN ANALYZE. There is no problem with a large number
> of records, as long as you're not expecting to process all of them all
> the time.
Yes, many times I need to process all the records;
often I need to use count(*).
So what to do? (Those trigger options I know already, but I will do
counts on different parameters.)
sathiya psql wrote:
> I have 1 GB RAM with Pentium Celeron.
> 50 lakh records, and postgres performance is not good.
>
> It takes 30 seconds for simple queries.
Show us the EXPLAIN ANALYZE. There is no problem with a large number
of records, as long as you're not expecting to process all of them all
the time.
sathiya psql wrote:
> So now everything is in RAM; if I do a count(*) on this table it returns
> 327600 in 3 seconds. Why is it taking 3 seconds? Because I am sure that
> no disk I/O is happening.
It has to scan every page and examine visibility for every record. Even
if there's no I/O ...
>
> the maximum number of records in one PostgreSQL table is unlimited:
>
I am asking about good performance, not just the limit...
If I have half a crore records, how will the performance be?
>
> http://www.postgresql.org/about/
>
> [for some values of unlimited]
>
> Some further help:
>
> googling for: ...
Sathiya,
the maximum number of records in one PostgreSQL table is unlimited:
http://www.postgresql.org/about/
[for some values of unlimited]
Some further help:
googling for:
postgresql limits site:postgresql.org
leads you to this answer quite quickly, while googling for
maximum number of rows i...
OK, finally I am changing my question.
To get a quick response from PostgreSQL, what is the maximum number of
records I can have in a table in PostgreSQL 8.1?
hubert depesz lubaczewski wrote:
> On Tue, Mar 25, 2008 at 02:05:20PM +0530, sathiya psql wrote:
>> Any Idea on this ???
>
> yes. don't use count(*).
>
> if you want whole-table row count, use triggers to store the count.
>
> it will be slow, regardless of whether it's in ram or on hdd.
In other words, ...
On Tue, Mar 25, 2008 at 02:05:20PM +0530, sathiya psql wrote:
> Any Idea on this ???
yes. don't use count(*).
if you want whole-table row count, use triggers to store the count.
it will be slow, regardless of whether it's in ram or on hdd.
depesz
--
quicksil1er: "postgres is excellent, but li...
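With a trigger-maintained counter like the sketch earlier in the thread
(hypothetical table name), the whole-table count becomes a one-row read
instead of a scan:

    SELECT n FROM call_log_rowcount;  -- constant-time lookup, no table scan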
On Tue, Mar 25, 2008 at 2:09 PM, jose javier parra sanchez <
[EMAIL PROTECTED]> wrote:
> It's been said zillions of times on the mailing list. Using a select
> count(*) in postgres is slow, and probably will be slow for a long
> time. So that function is not a good way to measure performance.
>
Yes, but ...
Dear Friends,
I have a table with 32 lakh records in it. The table size is nearly 700 MB,
and my machine has 1 GB + 256 MB of RAM. I created the table space in
RAM, and then created this table in that RAM.
So now everything is in RAM; if I do a count(*) on this table it returns
327600 in 3 seconds.