On 07/07/2010 06:06 PM, Ryan Wexler wrote:
PostgreSQL was previously running on a single-CPU Linux machine with 2 gigs of
memory and a single SATA drive (v8.3). Basically a desktop with Linux on it.
I experienced slow performance.
So, I finally moved it to a real server. A dual-Xeon Cento
On Wed, Jul 7, 2010 at 4:06 PM, Ryan Wexler wrote:
> PostgreSQL was previously running on a single-CPU Linux machine with 2 gigs
> of memory and a single SATA drive (v8.3). Basically a desktop with Linux on
> it. I experienced slow performance.
>
> So, I finally moved it to a real server. A dua
Ryan Wexler writes:
> PostgreSQL was previously running on a single-CPU Linux machine with 2 gigs
> of memory and a single SATA drive (v8.3). Basically a desktop with Linux on
> it. I experienced slow performance.
> So, I finally moved it to a real server. A dual-Xeon CentOS machine with
> 6
PostgreSQL was previously running on a single-CPU Linux machine with 2 gigs
of memory and a single SATA drive (v8.3). Basically a desktop with Linux on
it. I experienced slow performance.
So, I finally moved it to a real server. A dual-Xeon CentOS machine with
6 gigs of memory and RAID 10, po
Jerry Gamache writes:
> On 8.1, I have a very interesting database where the distribution of
> some values in a multi-million-row table is logarithmic (i.e. the most
> frequent value is an order of magnitude more frequent than the next
> ones). If I analyze the table, the statistics become ex
On 8.1, I have a very interesting database where the distribution of
some values in a multi-million-row table is logarithmic (i.e. the most
frequent value is an order of magnitude more frequent than the next
ones). If I analyze the table, the statistics become extremely skewed
towards the mos
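For the skewed-distribution case described above, one common remedy is to raise the per-column statistics target so ANALYZE samples more rows and records more most-common values. A minimal sketch, assuming hypothetical table and column names (none appear in the thread):

```sql
-- Track up to 500 most-common values for the skewed column
-- ("big_table" and "skewed_col" are made-up names for illustration).
ALTER TABLE big_table ALTER COLUMN skewed_col SET STATISTICS 500;
ANALYZE big_table;

-- Inspect what the planner now believes about the distribution:
SELECT most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'big_table' AND attname = 'skewed_col';
```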
On Wed, Jul 7, 2010 at 9:31 AM, Michal Fapso wrote:
> thank you for your help. I tried to cluster the table on
> hyps_wordid_index and the query execution time dropped from 4.43 to
> 0.19 seconds which is not that far from Lucene's performance of 0.10
> second.
Dang. Nice!
--
Robert Haas
Enter
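The speedup reported above comes from physically reordering the table to match the index, so that rows for the same word id land on adjacent pages. A hedged sketch of the clustering step: only the index name hyps_wordid_index appears in the thread, the table name "hyps" is a guess, and the `USING` syntax requires PostgreSQL 8.3 or later.

```sql
-- Rewrite the table in hyps_wordid_index order, then refresh statistics.
-- Table name "hyps" is an assumption; CLUSTER ... USING needs 8.3+.
CLUSTER hyps USING hyps_wordid_index;
ANALYZE hyps;
```

Note that CLUSTER is a one-time rewrite; the ordering degrades as new rows are inserted, so it may need to be repeated periodically.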
Hi Eliot,
Would you mind posting your code for reference. It is nice to
have working examples when trying to figure out how it all fits
together.
Regards,
Ken
On Wed, Jul 07, 2010 at 03:23:12PM -0400, Eliot Gable wrote:
> Thanks again for all the input and suggestions from people. I have this
>
On Jul 6, 2010, at 8:25 PM, Scott Marlowe wrote:
> Tell us what you can about your hardware setup.
Sorry, I made the bad assumption that the hardware setup would be
irrelevant--dunno why I thought that.
My hardware setup is 2 FusionIO 160GB drives in a RAID-1 configuration, running
on an HP DL
Thanks again for all the input and suggestions from people. I have this
sorting algorithm re-implemented in C now, and it runs in somewhere under
2 ms now, though it is difficult to get a more accurate measure. There may be
some additional optimizations I can come up with, but for now, this will
work
Sorry, I forgot to mention that archive_mode is "off" and commented out, and
archive_command is '' and commented out.
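For reference, the settings described read roughly like this in postgresql.conf (a sketch of an 8.3-style configuration with WAL archiving left disabled, which is the default):

```
# WAL archiving left at its defaults, i.e. disabled:
#archive_mode = off
#archive_command = ''
```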
Thanks for following up!
-- Richard
On Jul 7, 2010, at 1:58, Mark Kirkwood wrote:
> On 07/07/10 13:10, Richard Yen wrote:
>>
>> This leads me to believe that there was a sudd
Greg Smith wrote:
> Kevin Grittner wrote:
> > I don't know at the protocol level; I just know that write barriers
> > do *something* which causes our controllers to wait for actual disk
> > platter persistence, while fsync does not
>
> It's in the docs now:
> http://www.postgresql.org/docs/9.0/s
Hello again,
At last, I checked the same query with the same data on my desktop
computer. Just after loading the data, the queries were slow; I launched a
vacuum analyse which collected good stats on the main table, and the query
became quick (~200ms). Now 1 classic SATA-disk computer is faster than
our
Hi Robert,
thank you for your help. I tried to cluster the table on
hyps_wordid_index and the query execution time dropped from 4.43 to
0.19 seconds which is not that far from Lucene's performance of 0.10
second.
Thanks a lot!
Miso Fapso
On 6 July 2010 02:25, Robert Haas wrote:
> On Thu, Jul 1,
On Mon, Jul 5, 2010 at 2:07 AM, Octavio Alvarez
wrote:
> Hello.
>
> I have a tree-like table with a three-field PK (name, date, id) and one
> parent field.
> It has 5k to 6k records as of now, but it will hold about 1 million records.
>
> I am trying the following WITH RECURSIVE query:
>
> WITH RE
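The query is cut off above. As a hedged sketch only, a recursive traversal of a tree-like table with a (name, date, id) PK and a parent field might look like the following; the table name and the parent-to-id join are assumptions, not the poster's actual schema:

```sql
-- Hypothetical walk from the roots down through all descendants.
-- "nodes" and the parent -> id relationship are assumed for illustration.
WITH RECURSIVE tree AS (
    SELECT name, date, id, parent
    FROM nodes
    WHERE parent IS NULL          -- start at the root rows
  UNION ALL
    SELECT n.name, n.date, n.id, n.parent
    FROM nodes n
    JOIN tree t ON n.parent = t.id
)
SELECT * FROM tree;
```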
On Wed, 7 Jul 2010, JOUANIN Nicolas (44) wrote:
It seems to work fine (same execution plan and shorter duration) after:
- setting default_statistics_target to 100
- full vacuum with analyze
Don't do VACUUM FULL.
Matthew
--
I suppose some of you have done a Continuous Maths course. Yes? Contin
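Matthew's advice above is the usual one: a plain VACUUM (optionally with ANALYZE) reclaims dead space for reuse and refreshes planner statistics without VACUUM FULL's exclusive lock and table rewrite. A minimal sketch:

```sql
-- Plain vacuum plus statistics refresh; no exclusive lock needed.
VACUUM ANALYZE;              -- whole database
VACUUM ANALYZE some_table;   -- or one table ("some_table" is hypothetical)
```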
Richard Yen wrote:
> the pg_xlog partition filled up (33GB)
> checkpoint_segments=16
> a sudden flurry of write activity
Was this database bulk-loaded in some way (like by restoring the
output of pg_dump, for example)? If so, all rows inserted into all
tables would have the same (or very n
It seems to work fine (same
execution plan and shorter duration) after:
- setting default_statistics_target to 100
- full vacuum with analyze
- reindexdb
Thanks.
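The three steps above can be sketched with the standard client tools; the database name "mydb" is an assumption, and default_statistics_target itself is set in postgresql.conf rather than on the command line:

```shell
# 1. In postgresql.conf:  default_statistics_target = 100  (then reload)
# 2. Vacuum with analyze (plain, not FULL, is usually preferred):
vacuumdb --analyze mydb
# 3. Rebuild the indexes:
reindexdb mydb
```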
Original message
Subject: Re: [PERFORM] Two different execution plan for the same request
From: Guillaume Smet
P
Hi Nicolas,
On Wed, Jul 7, 2010 at 10:47 AM, JOUANIN Nicolas (44)
wrote:
> There were no modification made on the database except a restart yesterday
> evening and a vacuumdb --analyse ran at night.
It's not really surprising considering you probably kept the
default_statistics_target at 10 (it
On 07/07/10 13:10, Richard Yen wrote:
This leads me to believe that there was a sudden flurry of write activity that
occurred, and the process that would flush WAL files to /db/data/ couldn't keep
up, thereby filling up the disk. I'm wondering if anyone else out there might
be able to give m
JOUANIN Nicolas (44) wrote:
The strange thing is that this morning explain analyze now gives a
much better duration :
There were no modification made on the database except a restart
yesterday evening and a vacuumdb --analyse ran at night.
If the earlier bad query was run on a fresh imported
Please find attached schema of
tables and indexes involved.
Here is the pre-prod. environment config file:
listen_addresses = '*'
max_connections = 200
shared_buffers = 1024MB
work_mem = 24MB
maintenance_work_mem = 128MB
max_stack_depth = 6MB
max_fsm_pages = 204800
wal_buffers = 921MB
checkpoin
JOUANIN Nicolas (44) wrote:
Hi,
I'm having trouble with some SQL requests which have different execution
plans when run on two different servers. One server is the development
environment, the other one is the pre-production env.
Both servers run PostgreSQL 8.3.0 on Linux and:
- both databases cont
Hi,
I'm having trouble with some SQL requests which have different execution plans
when run on two different servers. One server is the development
environment, the other one is the pre-production env.
Both servers run PostgreSQL 8.3.0 on Linux and:
- both databases contain the same data (pg_dump/p