- "Stephen Frost" wrote:
| From: "Stephen Frost"
| To: "Rajesh Kumar. Mallah"
| Cc: pgsql-performance@postgresql.org
| Sent: Thursday, May 24, 2012 9:27:37 PM
| Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait and
CPU is idle
|
| From: "Steve Crawford"
| To: "Rajesh Kumar. Mallah"
| Cc: "Andy Colson" , "Claudio Freire"
, pgsql-performance@postgresql.org
| Sent: Thursday, May 24, 2012 9:23:47 PM
| Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait and
Dear Andy,
Following the discussion on load average we are now investigating some
other parts of the stack (other than the db).
Essentially we are bumping up the limits (on the appserver) so that more requests
go to the DB server.
|
| Maybe you are hitting some locks? If it's not IO, it would help to know whether
| processes are in a waiting state, to have a little more insight.
I will read more on the process status and try to keep a close
eye on it. I shall respond after a few hours.
regds
mallah.
|
- "Claudio Freire" wrote:
| From: "Claudio Freire"
| To: "Rajesh Kumar. Mallah"
| Cc: pgsql-performance@postgresql.org
| Sent: Thursday, May 24, 2012 9:23:43 AM
| Subject: Re: [PERFORM] High load average in 64-core server, no I/O wait and CPU is idle
|
| On
Would it be worthwhile to partition the host hardware
into 4 equal virtual environments, i.e. 1 for the master (r/w) and 3 slaves (r/o),
and distribute the r/o load on the 3 slaves?
regds
mallah
Thanks for the thought but it (-C) does not work .
>
>
> BTW, I think you should use -C option with pgbench for this kind of
> testing. -C establishes connection for each transaction, which is
> pretty much similar to the real world application which do not use
> connection pooling. You will be s
It looks like pgbench cannot be used for testing with pgbouncer if the number of
pgbench clients exceeds pool_size + reserve_pool_size of pgbouncer;
pgbench keeps waiting, doing nothing. I am using the pgbench from postgresql 8.1.
Are there changes to pgbench in this respect?
regds
Rajesh Kumar Mallah.
On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith wrote:
> Rajesh Kumar Mallah wrote:
>
>> the no of clients was 10 ( -c 10) carrying out 1 transactions each
>> (-t 1) .
>> pgbench db was initilised with scaling factor -s 100.
>>
>> since client co
I get less performance (even if no clients are waiting);
without pooling the dbserver CPU usage increases but the performance of the apps
also becomes good.
Regds
Rajesh Kumar Mallah.
On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith wrote:
> Rajesh Kumar Mallah wrote:
>
>> the no of clients was
Nice suggestion to try.
I will put pgbouncer on raw hardware and run pgbench from the same hardware.
regds
rajesh kumar mallah.
> Why in VM (openvz container) ?
>
> Did you also try it in the same OS as your appserver ?
>
> Perhaps even connecting from the appserver via unix sockets
note: my postgresql server & pgbouncer were not in a virtualised environment
in the first setup; only the application server has many openvz containers.
I am curious why,
in spite of 0 clients waiting, pgbouncer introduces a drop in tps.
Warm Regds
Rajesh Kumar Mallah.
CTO - tradeindia.com.
Keywords: pgbouncer performance
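For reference, one way to confirm the "clients waiting" figure is pgbouncer's admin
console (a sketch; you connect to the special "pgbouncer" database, by default on port 6432):

  SHOW POOLS;   -- cl_active / cl_waiting show active vs queued client connections per pool
  SHOW STATS;   -- per-database request counters, useful to compare against pgbench tps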
On Mon, Jul 12, 2010 at 6:11 PM, Kevin Grittner wrote:
> Craig Ringer wrote:
>
> > So rather than asking "
1. about how much data are you loading? row count or
GB of data etc.
2. how many indexes are you creating?
regds
Rajesh Kumar Mallah.
Rajesh Kumar Mallah wrote:
>
> > I had set it to 128kb
> > it does not really work, I even tried your next suggestion. I am in a
> > virtualized
> > environment, particularly OpenVZ, where echo 3 > /proc/sys/vm/drop_caches
> > does not work inside the virtual container, i di
On Thu, Jul 1, 2010 at 10:07 PM, Craig Ringer
wrote:
> On 01/07/10 17:41, Rajesh Kumar Mallah wrote:
> > Hi,
> >
> > this is not really a performance question , sorry if its bit irrelevant
> > to be posted here. We have a development environment and we want
> > t
Dear Sri,
Please post at least the EXPLAIN ANALYZE output. There is also a nice posting
guideline on how to post query optimization questions:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
On Thu, Jul 1, 2010 at 10:49 AM, Srikanth Kata wrote:
>
> Please tell me What is the best
the i/o bandwidth. I think you should check exactly when
the max cpu utilisation
is taking place.
regds
Rajesh Kumar Mallah.
On Sat, Jun 26, 2010 at 3:55 AM, Deborah Fuentes wrote:
> Hello,
>
> When I run an SQL to create new tables and indexes is when Postgres
> consumes
analysis a trivial problem. We want the subsequent runs
of the query to take similar times as the first run so that we can work
on optimizing the calling patterns to the database.
regds
Rajesh Kumar Mallah.
The way to make this go faster is to set up the actually recommended
> infrastructure for full text search, namely create an index on
> (co_name_vec)::tsvector (either directly or using an auxiliary tsvector
> column). If you don't want to maintain such an index, fine, but don't
> expect full text
Dear Tom/Kevin/List
thanks for the insight, i will check the suggestion more closely and post
the results.
regds
Rajesh Kumar Mallah.
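For what it's worth, a minimal sketch of the index Tom suggests, using an auxiliary
tsvector column (the table and column names are assumptions based on queries quoted
elsewhere in these threads, not the actual schema):

  ALTER TABLE profiles ADD COLUMN co_name_tsv tsvector;
  UPDATE profiles SET co_name_tsv = to_tsvector('english', co_name);
  CREATE INDEX profiles_co_name_tsv_idx ON profiles USING gin (co_name_tsv);
  -- queries must then match against the indexed column:
  SELECT profile_id, co_name FROM profiles WHERE co_name_tsv @@ to_tsquery('plastic & tubes');

A GiST index works too on older releases, and the column needs a trigger (or
application logic) to keep it updated on insert/update.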
On Mon, Jun 28, 2010 at 5:09 PM, Yeb Havinga wrote:
> Rajesh Kumar Mallah wrote:
>
>> Dear List,
>>
>> just by removing the order by co_name reduces the query time dramatically
>> from ~ 9 sec to 63 ms. Can anyone please help.
>>
> The 63 ms query result
Dear List,
just removing the "order by co_name" reduces the query time dramatically
from ~9 sec to 63 ms. Can anyone please help?
Regds
Rajesh Kumar Mallah.
explain analyze SELECT * from ( SELECT
a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name
from
-
Looks like most of the graph space is filled with (.) or (?) and very
few active queries (long-running queries > 1s). On a busy day and busy hour
I shall check and post again. The script presented depends only
on perl, DBI and DBD::Pg.
script pasted here:
http://pastebin.com/mrj
Dear List,
Today has been good since morning. Although it is a lean day
for us, the indications are nice. I thank everyone who shared
the concern. I think the most significant change has been reducing
shared_buffers from 10G to 4G; this has led to reduced memory
usage and some breathing space
Dear Greg/Kevin/List,
Many thanks for the comments regarding the params. I am however able to change and
experiment on production in a certain time window; when that arrives I shall post
my observations.
Rajesh Kumar Mallah.
Tradeindia.com - India's Largest B2B eMarketPlace.
commit nor rollback.
On 6/25/10, Tom Molesworth wrote:
> On 25/06/10 16:59, Rajesh Kumar Mallah wrote:
>> when i reduce max_connections i start getting errors, i will see again
>> concurrent connections
>> during business hours. lot of our connections are in > transactio
Dear Craig,
Also check the possibility of installing sysstat on your system;
it goes a long way in collecting the system stats. You may
consider increasing the frequency of data collection by
changing the interval of the cron job manually in /etc/cron.d/.
Normally it is */10; you may make it */2 for
Dear List,
pgtune suggests the following:
(current values are shown in brackets, with how they were set); (*) indicates a significant
difference from the current value.
default_statistics_target = 50 # pgtune wizard 2010-06-25 (current 100
via default)
(*) maintenance_work_mem = 1GB # pgtune wizard 2010-06-25 (16MB v
I changed shared_buffers from 10G to 4G;
swap usage has almost become nil.
# free
             total       used       free     shared    buffers     cached
Mem:      32871276   24575824    8295452          0      11064   22167324
-/+ buffers/cache:    2397436   30473840
Swap:      4192912
g business hours.
Warm Regds
Rajesh Kumar Mallah.
On Fri, Jun 25, 2010 at 4:58 PM, Yeb Havinga wrote:
> Rajesh Kumar Mallah wrote:
>>
>> A scary phenomenon is being exhibited by the server , which is the server
>> is slurping all the swap suddenly
>> 8 1 4192912 9
A scary phenomenon is being exhibited by the server: it is suddenly
slurping all the swap. Some of the relevant sar -r output:
10:30:01 AM  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbswpfree  kbswpused  %swpused  kbswpcad
10:40:01 AM     979068   31892208     97.02
010 at 10:55 PM, Rajesh Kumar Mallah
wrote:
> On Thu, Jun 24, 2010 at 8:57 PM, Kevin Grittner
> wrote:
>> I'm not clear whether you still have a problem, or whether the
>> changes you mention solved your issues. I'll comment on potential
>> issues that leap out a
und and 90% of syscalls being
lseek(XXX, 0, SEEK_END) = YYY
>
> Rajesh Kumar Mallah wrote:
>
>> 3. we use xfs and our controller has BBU , we changed barriers=1
>> to barriers=0 as i learnt that having barriers=1 on xfs and fsync
>> as the sync method, the
riable class names
general.report_level = ''
general.disable_audittrail2 = ''
general.employee=''
Also I would like to apologize that some of the discussions on this problem
inadvertently became private between me & Kevin.
On Thu, Jun 24, 2010 at 12:10 AM, Rajes
On 6/23/10, Kevin Grittner wrote:
> Rajesh Kumar Mallah wrote:
>> PasteBin for the vmstat output
>> http://pastebin.com/mpHCW9gt
>>
>> On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah
>> wrote:
>>> Dear List ,
>>>
>>> I observe th
PasteBin for the vmstat output
http://pastebin.com/mpHCW9gt
On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah
wrote:
> Dear List ,
>
> I observe that my postgresql (ver 8.4.2) dedicated server has turned cpu
> bound and there is a high load average in the server > 50 usuall
(excess
ram can be used for disk block caching).
if it's cpu bound, add more cores or higher speed cpus;
if it's io bound, put in better raid arrays & controller.
regds
mallah.
On Thu, Mar 12, 2009 at 4:22 PM, Nagalingam, Karthikeyan
wrote:
> Hi,
> Can you guide me, Where is the entry point
There has been an error in the tests: the dataset size was not 2*MEM, it
was 0.5*MEM.
I shall redo the tests and post results.
indles will reduce perf.
I also have a SATA SAN from which I can boot,
but the server needs to be rebuilt in that case too.
I may give it a shot.
regds
-- mallah.
> I heard plenty of stories where this actually sped up performance. One
> noticeable is case of youtube servers.
>
41.0
xfs_ra256    14642.7
xfs_ra512    14415.6
xfs_ra1024   14541.6
the value does not seem to have much effect
unless it is totally disabled.
regds
mallah.
Detailed bonnie++ figures.
http://98.129.214.99/bonnie/report.html
On Wed, Feb 18, 2009 at 1:22 PM, Rajesh Kumar Mallah
wrote:
> the raid10 volume was benchmarked again
> taking into consideration the above points
>
> # fdisk -l /dev/sda
> Disk /dev/sda: 290.9 GB, 290984034304 bytes
than the ending sections. Considering this, is it worth
creating a special tablespace at the beginning of the drives?
If at all done, what kind of data objects should be placed
towards the beginning: WAL, indexes, frequently updated tables
or sequences?
regds
mallah.
>On Tue, Feb 17, 2009 at 9:49
On Tue, Feb 17, 2009 at 5:15 PM, Matthew Wakeling wrote:
> On Tue, 17 Feb 2009, Rajesh Kumar Mallah wrote:
>>
>> sda6 --> xfs with default formatting options.
>> sda7 --> mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
>> sda8 --> ext3 (default)
>>
&g
The URL of the result is
http://98.129.214.99/bonnie/report.html
(sorry if this was a repost)
On Tue, Feb 17, 2009 at 2:04 AM, Rajesh Kumar Mallah
wrote:
> BTW
>
> our Machine got build with 8 15k drives in raid10 ,
> from bonnie++ results its looks like the machine is
>
th=512 /dev/sda7
sda8 --> ext3 (default)
it looks like the mkfs.xfs options sunit=128 and swidth=512 did not improve
io throughput as such in the bonnie++ tests.
it looks like ext3 with default options performed worst in my case.
regds
-- mallah
NOTE: observations made in this post are interpret
It's nice to know the evolution of autovacuum, and I understand that
the suggestion/requirement of "autovacuum at lean hours only"
was defeating the whole idea.
regds
--rajesh kumar mallah.
On Fri, Feb 13, 2009 at 11:07 PM, Chris Browne wrote:
> mallah.raj...@gmail.com (Rajesh
JITSU Model: MBC2073RC Rev: D506
Type: Direct-Access ANSI SCSI revision: 05
thanks
regds
-- mallah
On Wed, Feb 11, 2009 at 11:30 PM, Brad Nicholson
wrote:
> On Wed, 2009-02-11 at 22:57 +0530, Rajesh Kumar Mallah wrote:
>> On Wed, Feb 11, 2009 at 10:03 PM, Grzegorz Jaśkiewicz
>> wrote:
>> > On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
>> > wrote:
&g
On Wed, Feb 11, 2009 at 10:03 PM, Grzegorz Jaśkiewicz wrote:
> On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
> wrote:
>
>>> vacuum_cost_delay = 150
>>> vacuum_cost_page_hit = 1
>>> vacuum_cost_page_miss = 10
>>> vacuum_cost
On Wed, Feb 11, 2009 at 7:11 PM, Guillaume Cottenceau wrote:
> Rajesh Kumar Mallah writes:
>
>> Hi,
>>
>> Is it possible to configure autovacuum to run only
>> during certain hours ? We are forced to keep
>> it off because it pops up during the peak
>> q
Hi,
Is it possible to configure autovacuum to run only
during certain hours ? We are forced to keep
it off because it pops up during the peak
query hours.
Regds
rajesh kumar mallah.
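For what it's worth, a possible middle ground (my assumption, not something settled
in this thread) is to leave autovacuum on but throttle it via the cost-based delay
settings so it is gentler during peak hours, e.g. in postgresql.conf:

  autovacuum = on
  autovacuum_naptime = 10min              # wake up less often
  autovacuum_vacuum_cost_delay = 20ms     # sleep between chunks of vacuum work
  autovacuum_vacuum_cost_limit = 200      # amount of work allowed between sleeps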
eiver_uid = 1320721)
Filter: (generated_date >= 2251)
Total runtime: 0.082 ms
(5 rows)
tradein_clients=>
On Wed, Feb 11, 2009 at 6:07 PM, Rajesh Kumar Mallah
wrote:
> thanks for the hint,
>
> now the peak hour is over and the same scan is taking 71 ms in place of 8
> ms
thanks for the hint,
now the peak hour is over and the same scan is taking 71 ms in place of 8 ms,
and the total query time is also acceptable. But it is surprising that the scan
was taking so long consistently at that point in time. I shall test again under
similar circumstances tomorrow.
Is i
r_uid) CLUSTER
"rfis_part_2009_01_sender_uid" btree (sender_uid)
Check constraints:
"rfis_part_2009_01_generated_date_check" CHECK (generated_date >=
3289 AND generated_date <= 3319)
"rfis_part_2009_01_rfi_id_check" CHECK (rfi_id >= 12344252 AND
rfi_id <= 126
On Tue, Feb 10, 2009 at 9:09 PM, Tom Lane wrote:
> Rajesh Kumar Mallah writes:
>> On Tue, Feb 10, 2009 at 6:36 PM, Robert Haas wrote:
>>> I'm guessing that the problem is that the selectivity estimate for
>>> co_name_vec @@ to_tsquery('plastic&tubes
> Can't use an undefined value as an ARRAY reference at
> /usr/lib/perl5/site_perl/5.8.8/Test/Parser/Dbt2.pm line 521.
>
> Can someone please give inputs to resolve this issue? Any help on this will
> be appreciated.
519 sub transactions {
520 my $self = shift;
521 return @{$self->{data}->
=0 loops=7)
Recheck Cond: (trade_leads.profile_id = pm.profile_id)
Filter: ((status)::text = 'm'::text)
-> Bitmap Index Scan on trade_leads_profile_id
(cost=0.00..3.41 rows=47 width=0) (actual time=1.285..1.285 rows=0
loops=7)
e_leads.profile_id = pm.profile_id)
Filter: ((status)::text = 'm'::text)
-> Bitmap Index Scan on trade_leads_profile_id
(cost=0.00..3.41 rows=47 width=0) (actual time=73.579..73.579 rows=0
loops=7)
Index Cond: (trade_leads.profile_id = pm.profile_id)
Total runtime: 1530.137 ms
regds
mallah.
ray...
Where exactly is the limitation of 32 drives?
The datasheet of the 1680 states support for up to 128 drives
using enclosures.
regds
rajesh kumar mallah.
hosting the data; I am hiring the storage primarily for
storing base backups and log archives for a PITR implementation,
as the rental of a separate machine was higher than the SATA SAN.
Regds
mallah.
On 5/31/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:
> i am still not clear what is the best way of throwing in more
> disks into the system.
> does more stripes means more performance (mostly) ?
> also
Sorry for posting and disappearing.
I am still not clear on what is the best way of throwing more
disks into the system.
Does more stripes mean more performance (mostly)?
Also, is there any rule of thumb about the best stripe size? (8k, 16k, 32k...)
regds
mallah
On 5/30/07, [EMAIL PROTECTED
i got 2 options
1. create a new mirror
   D5 raid1 D6 --> MD2
   MD0 raid0 MD1 raid0 MD2 --> MDF (final)
OR
2. D1 raid1 D2 raid1 D5 --> MD0
   D3 raid1 D4 raid1 D6 --> MD1
   MD0 raid0 MD1 --> MDF (final)
thanks, hope my question is clear now.
Regds
mallah.
In the stripe of mirrors
?
also, do single channel or dual channel controllers make a lot
of difference in raid10 performance?
regds
mallah.
[offtopic]
hmm, quite a long thread; below are the posting stats:
Total Messages: 87    Total Participants: 27
-
19  Daniel van Ham Colchete
12  Michael Stone
 9  Ron
 5  Steinar H. Gunderson
 5  Alexander Staubo
 4  Tom Lane
 4  Greg
of slowdown though.
Regds
mallah.
The JDBC experts would tell better how it's done with JDBC.
Will it bring a performance improvement compared to the SELECT UNION solution?
COPY is quite fast.
Regds
mallah.
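For reference, a minimal COPY sketch (the table name and file path are placeholders,
not from the thread):

  -- load from a file readable by the server process:
  COPY mytable FROM '/tmp/mytable.tsv';
  -- or stream rows over the client connection:
  COPY mytable (id, name, price) FROM STDIN;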
many thanks in advance,
Jens Schipkowski
ils are
not appreciated by many people. if possible pls avoid it.
Regds
mallah.
On 12/11/06, Ravindran G - TLS, Chennai. <[EMAIL PROTECTED]> wrote:
Hello,
How to get Postgresql Threshold value ?. Any commands available ?.
What is meant my threshold value ?
We have a view in our database.
CREATE view public.hogs AS
SELECT pg_stat_activity.procpid, pg_stat_activity.usename,
pg_stat_activity.current_query
FROM ONLY pg_stat_activity;
Select current_query from public.hogs helps us to spot errant queries
at times.
regds
mallah.
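A usage sketch (assuming the 8.x pg_stat_activity columns used in the view above,
where idle backends report '<IDLE>' as their current query):

  SELECT procpid, usename, current_query
  FROM public.hogs
  WHERE current_query NOT LIKE '<IDLE>%';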
On 12/7/06
On 12/6/06, asif ali <[EMAIL PROTECTED]> wrote:
Hi,
I have a "product" table having 350 records. It takes approx 1.8 seconds to
get all records from this table. I copies this table to a "product_temp"
table and run the same query to select all records; and it took 10ms(much
faster).
I did "VACU
On 12/6/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Rajesh Kumar Mallah" <[EMAIL PROTECTED]> writes:
> Startup time of a clean shutdown database is constant. But we still
> face problem when it comes to shutting down. PostgreSQL waits
> for clients to finish graceful
On 12/5/06, Tom Lane <[EMAIL PROTECTED]> wrote:
Jean Arnaud <[EMAIL PROTECTED]> writes:
> Is there a relation between database size and PostGreSQL restart
duration ?
No.
> Does anyone now the behavior of restart time ?
It depends on how many updates were applied since the last checkpoint
befo
functions are *NOT* slower than RAW SQL.
Regds
mallah.
4. fsync can also be turned off while loading a huge dataset, but seek others'
comments too (and study the docs) as I am not sure about the reliability. I think
it can make a lot of difference.
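A sketch of what that looks like (my wording, not from the thread); only do this for
a load you can redo from scratch, and turn it back on afterwards:

  # postgresql.conf, temporarily, during the bulk load only
  fsync = off

then reload the server, run the load, set fsync = on again and reload.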
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Rajesh Kumar Mallah wrote:>> I'd r
what is the query? use LIMIT or a restricting where clause.
regds
mallah.
On 4/10/06, soni de <[EMAIL PROTECTED]> wrote:
Hello,
I have difficulty in fetching the records from the database.
Database table contains more than 1 GB data.
For fetching the records it is taking more than 1 hour and that's w
sorry for the post, I saw the other replies only after posting.
On 4/10/06, Rajesh Kumar Mallah <[EMAIL PROTECTED]> wrote:
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Hi
I'm currently upgrading a Postgresql 7.3.2 database to 8.1. I'd run pg_dump | gzip >
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Hi
I'm currently upgrading a Postgresql 7.3.2 database to 8.1. I'd run pg_dump | gzip >
sqldump.gz on the old system. That took about 30 hours and gave me a 90GB zipped
file. Running
cat sqldump.gz | gunzip | psql into the 8.1 database seems to take
3. it's not a performance question; it should have been addressed to pgsql-sql, I think.
4. it's not good etiquette to address an email to someone and mark Cc to a list.
kind regds
mallah.
>> > BEGIN > > SELECT a1,a2,a3,a4,a5
applicable to your case.
Regds
Rajesh Kumar Mallah
On 4/3/06, Kenji Morishige <[EMAIL PROTECTED]> wrote:
> I am using postgresql to be the central database for a variety of tools for
> our testing infrastructure. We have web tools and CLI tools that require
> access
> to machine
On 9/29/05, Gavin Sherry <[EMAIL PROTECTED]> wrote:
> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
>
> > > > Number of Copies | Update perl Sec
> > > >
> > > > 1 --> 119
> > > > 2 ---> 59
> > > > 3 ---> 3
On 9/28/05, Gavin Sherry <[EMAIL PROTECTED]> wrote:
> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
>
> > Hi
> >
> > While doing some stress testing for updates in a small sized table
> > we found the following results. We are not too happy about the speed
>
The table was vacuum analyzed during the tests.
total number of records in table: 93
-----
Regds
Rajesh Kumar Mallah.
Hi,
GiST indexes take a long time to create compared
to normal indexes. Is there any way to speed them up?
(for example by modifying sort_mem or something, temporarily)
Regds
Mallah.
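For reference, a sketch of the kind of temporary bump meant above; on later releases
the relevant knob for index builds is maintenance_work_mem (how much it helps a GiST
build in particular is version-dependent), and the index statement here is just the
one quoted elsewhere in this archive:

  SET maintenance_work_mem = '512MB';
  CREATE INDEX co_name_index_idx ON profiles USING gist (co_name_index public.gist_txtidx_ops);
  RESET maintenance_work_mem;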
% improvement in performance
for certain queries. None, everything works just fine.
Regds
Mallah.
Have you checked Tsearch2?
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/
It is the most feature-rich full text search system available
for postgresql. We are also using the same system in
the revamped version of our website.
Regds
Mallah.
Mark Stosberg wrote:
Hello,
I work for
le sleep and does it relate
to the apparent poor performance? Is it a problem with the disk
hardware? I know that at night this query will run reasonably fast.
I am running on decent hardware.
mallah.
1:41pm up 348 days, 21:10, 1 user, load average: 11.59, 13.69, 11.49
85 processes: 83 sl
Bill Moran wrote:
Rajesh Kumar Mallah wrote:
Hi,
The problem was solved by reloading the Table.
the query now takes only 3 seconds. But that is
not a solution.
If dropping/recreating the table improves things, then we can reasonably
assume that the table is pretty active with updates/inserts
Richard Huxton wrote:
On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:
The problem is that i want to know if i need a Hardware upgrade
at the moment.
Eg i have another table rfis which contains ~ .6 million records.
SELECT count(*) from rfis where sender_uid >
, hardware or dead rows.
I already did vacuum full on the table but it still did not
have that effect on performance.
In fact the last figures were after doing a vacuum full.
Can there be a more elegant solution to this problem?
Regds
Mallah.
Richard Huxton wrote:
On Thursday 15 April 2004 08:10
The relation size for this table is 1.7 GB
tradein_clients=# SELECT public.relation_size ('general.rfis');
+---------------+
| relation_size |
+---------------+
| 1,762,639,872 |
+---------------+
(1 row)
Regds
mallah.
Rajesh Kumar Mallah wrote:
The problem is that
804 records per second. Is it acceptable
performance on the hardware below:
RAM: 2 GB
DISKS: ultra160, 10K, 18 GB
Processor: 2 * 2.0 GHz Xeon
What kind of upgrades should be put on the server for it to become
reasonably fast?
Regds
mallah.
Richard Huxton wrote:
On Wednesday 14 April 2004 18
Hi,
I have .5 million rows in a table. My problem is select count(*) takes
ages.
VACUUM FULL does not help. Can anyone please tell me
how I can enhance the performance of the setup?
Regds
mallah.
postgresql.conf
--
max_fsm_pages = 55099264 # min
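For what it's worth, the usual workaround (not something settled in this thread) when
an exact count is not required is the planner's row estimate, which is instant:

  SELECT reltuples::bigint AS estimated_rows
  FROM pg_class
  WHERE relname = 'your_table';   -- table name is a placeholder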
Richard Huxton wrote:
On Wednesday 14 April 2004 18:53, Rajesh Kumar Mallah wrote:
Hi
I have .5 million rows in a table. My problem is select count(*) takes
ages. VACUUM FULL does not help. can anyone please tell me
how to i enhance the performance of the setup.
SELECT count(*) from
t ;
drop index ;
insert into forecastelement select * from temp_table ;
commit;
create indexes
Analyze forecastelement ;
note that DISTINCT ON keeps only the first row from each set of rows sharing the
same values of the specified columns. Kindly go through the DISTINCT ON manual
before trying the queries.
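A tiny illustration of DISTINCT ON (the column names here are invented for the
example; the thread only names the forecastelement table):

  -- keep one row per (station_id, valid_time), preferring the newest issue_time
  SELECT DISTINCT ON (station_id, valid_time) *
  FROM forecastelement
  ORDER BY station_id, valid_time, issue_time DESC;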
table
as more and more applications will access the same table.
Any ideas whether it is better to split the table application-wise, or is it ok?
Regds
mallah.
greetings!
on a dedicated pgsql server, is putting pg_xlog
on the same drive as the OS almost equivalent to putting it on a separate
drive?
in both cases the actual data files are on a separate
drive.
regds
mallah
Greetings!
Why does creation of gist indexes take significantly more time
than a normal btree index? Can any configuration changes lead to faster index
creation?
query:
CREATE INDEX co_name_index_idx ON profiles USING gist (co_name_index
public.gist_txtidx_ops);
regds
mallah
scott.marlowe wrote:
On Tue, 13 Jan 2004, David Shadovitz wrote:
We avert the subsequent execution of count(*) by passing the
value of count(*) as a query parameter through the link in page
numbers.
Mallah, and others who mentioned caching the record count