er states (full DIRECT or fully cached).
Rgds,
-Dimitri
On 5/5/11, Robert Haas wrote:
> On Sat, Apr 30, 2011 at 4:51 AM, Hsien-Wen Chu
> wrote:
>> since the default block size is 8k, and it consists of many
>> tuples/lines; as I understand it, i
e how closely it is related, but have you tried preprepare?
https://github.com/dimitri/preprepare
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes t
.
Do you intend to run queries across multiple simulations at once? If
yes, you want to avoid multiple databases. Other than that, I'd go with a
naming convention like samples_ and maybe some
inheritance to ease querying multiple simulations.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr P
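The naming-convention-plus-inheritance idea above can be sketched like this (table and simulation names are hypothetical, not from the thread):

```sql
-- Parent table holding the common structure; one child per simulation.
CREATE TABLE samples (
    sim_name  text NOT NULL,
    t         double precision,
    value     double precision
);

-- Hypothetical per-simulation children following the samples_<name> convention.
CREATE TABLE samples_run_a () INHERITS (samples);
CREATE TABLE samples_run_b () INHERITS (samples);

-- Querying the parent scans all simulations at once;
-- querying a child touches only that simulation.
SELECT count(*) FROM samples;        -- all simulations
SELECT count(*) FROM samples_run_a;  -- just one
```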
You may also try Sun's F5100 (flash storage array) - you can
easily get 700 MB/s with just a single I/O stream (single process), so
with just 2 streams you'll get your throughput.. - The array has 2TB
total space and max throughput should be around 4GB/s..
Rgds,
-Dimitri
On 11/1
gresql.org/wiki/PgBouncer
http://preprepare.projects.postgresql.org/README.html
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
;VACUUM FORCE TABLE" will simply be aware of what he's
doing and be sure that none of the active transactions will ever
access this table.
What do you think?.. ;-)
Rgds,
-Dimitri
On 8/22/10, Robert Haas wrote:
> On Sat, Aug 21, 2010 at 9:49 AM, Alexandre de Arruda Paes
> w
n the
oldest transaction = we have a problem in PG.. Otherwise it works as
expected to match MVCC.
Rgds,
-Dimitri
On 8/21/10, Scott Marlowe wrote:
> No, it means it can't clean rows that are younger than the oldest
> transaction currently in progress. if you started a transaction 5
&
So, does it mean that VACUUM will never clean dead rows if you have
non-stop transactional activity in your PG database???... (24/7 OLTP,
for example)
Rgds,
-Dimitri
On 8/19/10, Kevin Grittner wrote:
> Alexandre de Arruda Paes wrote:
>> 2010/8/18 Tom Lane
>
>>> There&
e are many
posts in blogs about optimal compiler options to use).. - don't
hesitate to try and don't forget to share here with others :-))
Rgds,
-Dimitri
On 8/11/10, Joseph Conway wrote:
> With a 16 CPU, 32 GB Solaris Sparc server, is there any conceivable
> reason to use
Craig Ringer writes:
> 9.0 has application_name to let apps identify themselves. Perhaps a
> "pooled_client_ip", to be set by a pooler rather than the app, could be
> added to address this problem in a way that can be used by all poolers
> new and existing, not just any new in-core pooling system.
Greg Smith writes:
> Craig James wrote:
>> By using "current" and encouraging people to link to that, we could
>> quickly change the Google pagerank so that a search for Postgres would
>> turn up the most-recent version of documentation.
>
> How do you propose to encourage people to do that?
Wh
Tom Lane writes:
> I agree with the comments to the effect that this is really a packaging
> and documentation problem. There is no need for us to re-invent the
> existing solutions, but there is a need for making sure that they are
> readily available and people know when to use them.
On this t
such resistance to implement hints within SQL
queries in PG?..
Rgds,
-Dimitri
On 7/9/10, Robert Haas wrote:
> On Fri, Jul 9, 2010 at 6:13 AM, damien hostin
> wrote:
>>> Have you tried running ANALYZE on the production server?
>>>
>>> You might also want to try ALT
t...@exquisiteimages.com writes:
> I am wondering how I should architect this in PostgreSQL. Should I follow
> a similar strategy and have a separate database for each client and one
> database that contains the global data?
As others have said already, there are more problems to foresee in doing so that
t
Tom Lane writes:
> The problem with a system-wide no-WAL setting is it means you can't
> trust the system catalogs after a crash. Which means you are forced to
> use initdb to recover from any crash, in return for not a lot of savings
> (for typical usages where there's not really much churn in t
Hi,
Josh Berkus writes:
> a) Eliminate WAL logging entirely
> b) Eliminate checkpointing
> c) Turn off the background writer
> d) Have PostgreSQL refuse to restart after a crash and instead call an
> external script (for reprovisioning)
Well I guess I'd prefer a per-transaction setting, allowing
"Pierre C" writes:
> The same is true of a web server : 1000 active php interpreters (each eating
> several megabytes or more) are not ideal for performance !
>
> For php, I like lighttpd with php-fastcgi : the webserver proxies requests
> to a small pool of php processes, which are only busy whil
Balkrishna Sharma writes:
> I will have a web application having postgres 8.4+ as backend. At any given
> time, there will be max of 1000 parallel web-users interacting with the
> database (read/write)
> I wish to do performance testing of 1000 simultaneous read/write to
> the database.
See abo
. But
once you've re-copied your files again - the right order is applied
again.
BTW, 8K is recommended for OLTP workloads, but for DW you may stay
with 128K without problem.
Rgds,
-Dimitri
On 5/10/10, Josh Berkus wrote:
> On 5/9/10 1:45 AM, Dimitri wrote:
>> Josh,
>>
>&g
Josh,
it'd be great if you explained how you changed the record size to
128K - as this size is assigned on file creation and cannot be
changed later - I suppose that you made a backup of your data and then
performed a full restore.. is it so?
Rgds,
-Dimitri
On 5/8/10, Josh Berkus
Greg Smith writes:
> However, that doesn't actually solve any of the problems I was talking about
> though, which is why I'm not even talking about that part. We need the glue
> to pull out software releases, run whatever testing tool is appropriate, and
> then save the run artifacts in some stan
Corin writes:
> I'm running quite a large social community website (250k users, 16gb
> database). We are currently preparing a complete relaunch and thinking about
> switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database server
> is a dual dualcore operton 2216 with 12gb ram running
Greg Smith writes:
> I'm not sure how to make progress on similar ideas about
> tuning closer to the filesystem level without having something automated
> that takes over the actual benchmark running and data recording steps; it's
> just way too time consuming to do those right now with every too
Tom Lane writes:
> "Davor J." writes:
>> Now, if one takes a subquery for "1", the optimizer evaluates it first
>> (let's say to "1"), but then searches for it (sequentially) in every
>> partition, which, for large partitions, can be very time-consuming and goes
>> beyond the point of partitio
Bob Dusek writes:
> So, pgBouncer is pretty good. It doesn't appear to be as good as
> limiting TCON and using pconnect, but since we can't limit TCON in a
> production environment, we may not have a choice.
You can still use pconnect() with pgbouncer, in transaction mode, if
your application is
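A minimal pgbouncer setup for the transaction-mode suggestion above might look like this (database name, ports and paths are hypothetical):

```ini
; pgbouncer.ini - transaction pooling lets persistent client
; connections (e.g. PHP pconnect) share a small server pool.
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000
```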
Jesper Krogh writes:
> I have a "message queue" table, that contains in the order of 1-10m
> "messages". It is implemented using TheSchwartz:
> http://search.cpan.org/~bradfitz/TheSchwartz-1.07/lib/TheSchwartz.pm
One way to approach queueing efficiently with PostgreSQL is to rely on
PGQ. New upco
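As a rough sketch of the PGQ approach (assuming the pgq extension from Skytools is installed; queue and consumer names are made up, and the batch id returned by next_batch will vary):

```sql
-- Create a queue and register a consumer.
SELECT pgq.create_queue('msg_queue');
SELECT pgq.register_consumer('msg_queue', 'worker_1');

-- Producers insert events.
SELECT pgq.insert_event('msg_queue', 'msg', 'hello world');

-- A consumer processes events in batches.
SELECT pgq.next_batch('msg_queue', 'worker_1');  -- returns a batch id (or NULL)
SELECT * FROM pgq.get_batch_events(1);           -- events of that batch
SELECT pgq.finish_batch(1);                      -- acknowledge the batch
```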
Scott Marlowe writes:
> That's a lot of work to get to COPY.
Well, yes. I thought about it this way only after having read that OP is
uneasy with producing another format from his source data, and
considering it's a one-shot operation.
Ah, tradeoffs, how to find the right one!
--
dim
Hi,
Ben Brehmer writes:
> By "Loading data" I am implying: "psql -U postgres -d somedatabase -f
> sql_file.sql". The sql_file.sql contains table creates and insert
> statements. There are no
> indexes present nor created during the load.
>
> OS: x86_64-unknown-linux-gnu, compiled by GCC gcc (G
Brian Karlak writes:
> I have a simple queuing application written on top of postgres which I'm
> trying to squeeze some more performance out of.
Have you tried to write a custom PGQ consumer yet?
http://wiki.postgresql.org/wiki/PGQ_Tutorial
Regards,
--
dim
Cédric Villemain writes:
>> If you want the latest and greatest, then you can use Debian testing.
>
> testing and sid are usually the same with a 15 days delay.
And receive no out-of-band security updates, so you keep the holes for
3 days when lucky, and 10 to 15 days otherwise, when choosing
tes
Tom Lane writes:
> It's worth your time to learn how to do this on whatever system you
> prefer to use. Then, if you're ever in a situation where you really
> need patch XYZ right now, you can easily add that patch to the package
> sources and rebuild a custom version that will play nicely within
astro77 writes:
> Kevin Grittner wrote:
>> I would try to minimize how many XML values it had to read, parse, and
>> search. The best approach that comes to mind would be to use tsearch2
>> techniques (with a GIN or GiST index on the tsvector) to identify
>> which rows contain 'fdc3da1f-060f-4c34
Hi,
Heikki Linnakangas writes:
> Joshua Rubin wrote:
>> We "hardcode" the parts of the where clause so that the prepared plan
>> will not vary among the possible partitions of the table. The only
>> values that are bound would not affect the planner's choice of table.
>
> Then you would benefit f
Merlin Moncure writes:
> like joining the result to another table...the planner can see
> 'through' the view, etc. in a function, the result is fetched first
> and materialized without looking at the rest of the query.
I thought the planner would "see through" SQL language functions and
inline t
Try tsung, dig the archives for a pg specific howto. Tsung is open
source and supports multiple protocols.
Regards,
--
dim
Le 31 juil. 2009 à 08:50, Chris a écrit :
Hi,
Everyone says "load test using your app" - out of interest how does
everyone do that at the database level?
I've trie
Hi,
Le 16 juil. 09 à 11:52, Andres Freund a écrit :
If I interpret those findings correctly, the execution is approx. as
fast as DB2,
only DB2 is doing automated plan caching while pg is not.
If it _really_ is necessary that this is that fast, you can prepare
the query
like I showed.
A for
Also consider ON UPDATE triggers that you might want to run anyway
--
dim
Le 25 juin 2009 à 07:45, Craig Ringer a
écrit :
On Wed, 2009-06-24 at 21:03 -0700, Chris St Denis wrote:
This sounds like something that should just be on by default, not a
trigger. Is there some reason it would
Hi,
Le 24 juin 09 à 18:29, Alvaro Herrera a écrit :
Oleg Bartunov wrote:
On Wed, 24 Jun 2009, Chris St Denis wrote:
Is tsvector_update_trigger() smart enough to not bother updating a
tsvector if the text in that column has not changed?
no, you should do the check yourself. There are several exa
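A hand-rolled check along those lines might look like this (table and column names are hypothetical):

```sql
-- Only recompute the tsvector when the text actually changed.
CREATE OR REPLACE FUNCTION docs_tsv_update() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' OR NEW.body IS DISTINCT FROM OLD.body THEN
        NEW.tsv := to_tsvector('english', coalesce(NEW.body, ''));
    END IF;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER docs_tsv_trg
    BEFORE INSERT OR UPDATE ON docs
    FOR EACH ROW EXECUTE PROCEDURE docs_tsv_update();
```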
Hi,
Shaul Dar writes:
> 1. A staging server, which receives new data and updates the DB
> 2. Two web servers that have copies of the DB (essentially read-only)
> and answer user queries (with load balancer)
[...]
> Suggestions?
I'd consider WAL Shipping for the staging server and some trigger
"Markus Wanner" writes:
> If anybody has ever tried their systems, I'd like to hear back. I wish such
> an offering would exist for Europe (guess that's just a matter of time).
http://www.niftyname.org/
http://lost-oasis.fr/
It seems to be coming very soon, in France :)
--
dim
Віталій Тимчишин writes:
> I'd prefer ALTER VIEW SET ANALYZE=true; or CREATE/DROP ANALYZE ;
> Also it should be possible to change statistics target for analyzed
> columns.
Yeah, my idea was ALTER VIEW ENABLE ANALYZE; but that's an easy
point to solve if the idea proves helpful.
> Such a stat
Hi,
Le 6 juin 09 à 10:50, Simon Riggs a écrit :
On Wed, 2009-06-03 at 21:21 -0400, Robert Haas wrote:
But, we're not always real clever about selectivity. Sometimes you
have to fake the planner out, as discussed here.
[...]
Fortunately, these kinds of problems are fairly rare, but they can
re are any PG scalability limits, an
integrated pooler will in most cases be more performant than an external one;
if there are no PG scalability limits - it'll still help to size PG
most optimally according to HW or OS capacities..
Rgds,
-Dimitri
On 6/3/09, Kevin Grittner wrote:
> Dimitri wrote:
&
A few weeks ago I tested a customer application on 16 cores with Oracle:
- 20,000 sessions in total
- 70,000 queries/sec
without any problem on a mid-range Sun box + Solaris 10..
Rgds,
-Dimitri
On 6/3/09, Kevin Grittner wrote:
> James Mansion wrote:
>
>> I'm sure most
"Kenneth Cox" writes:
> On Tue, 02 Jun 2009 05:26:41 -0400, Dimitri Fontaine
> wrote:
>> I'd recommand having a look at tsung which will be able to replay a
>> typical application scenario with as many concurrent users as you want
>> to: http://arch
Hi,
"Peter Sheats" writes:
> I’m about to set up a large instance on Amazon EC2 to be our DB server.
>
> Before we switch to using it in production I would like to simulate
> some load on it so that I know what it can handle and so that I can
> make sure I have the optimal settings in the conf
Hi,
Le 27 mai 09 à 19:57, Alan McKay a écrit :
I have done some googling and found a few things on the matter. But
am looking for some suggestions from the experts out there.
Got any good pointers for reading material to help me get up to speed
on PostgreSQL clustering? What options are avai
Hi,
Greg Smith writes:
> I keep falling into situations where it would be nice to host a server
> somewhere else. Virtual host solutions and the mysterious cloud are no good
> for the ones I run into though, as disk performance is important for all the
> applications I have to deal with.
A fre
On 5/19/09, Merlin Moncure wrote:
> On Tue, May 19, 2009 at 11:53 AM, Dimitri wrote:
>> On 5/19/09, Merlin Moncure wrote:
>>> On Mon, May 18, 2009 at 6:32 PM, Dimitri wrote:
>>>> Thanks Dave for correction, but I'm also curious where the time is
>>>
On 5/19/09, Scott Carey wrote:
>
> On 5/19/09 3:46 AM, "Dimitri" wrote:
>
>> On 5/19/09, Scott Carey wrote:
>>>
>>> On 5/18/09 3:32 PM, "Dimitri" wrote:
>>>
>>>> On 5/18/09, Scott Carey wrote:
>>>>> Gre
On 5/19/09, Merlin Moncure wrote:
> On Mon, May 18, 2009 at 6:32 PM, Dimitri wrote:
>> Thanks Dave for correction, but I'm also curious where the time is
>> wasted in this case?..
>>
>> 0.84ms is displayed by "psql" once the result output is printed,
;s dramatically dropping down..
Rgds,
-Dimitri
On 5/19/09, Simon Riggs wrote:
>
> On Tue, 2009-05-19 at 14:00 +0200, Dimitri wrote:
>
>> I may confirm the issue with hash join - it's repeating both with
>> prepared and not prepared statements - it's curious because i
ith 32 sessions it's 18ms, etc..
I've retested on 24 isolated cores, so any external secondary effects
are avoided.
Rgds,
-Dimitri
On 5/19/09, Dimitri wrote:
> On 5/19/09, Simon Riggs wrote:
>>
>> On Tue, 2009-05-19 at 00:33 +0200, Dimitri wrote:
>>> >
>
On 5/19/09, Simon Riggs wrote:
>
> On Tue, 2009-05-19 at 00:33 +0200, Dimitri wrote:
>> >
>> > In particular, running the tests repeatedly using
>> >H.REF_OBJECT = '01'
>> > rather than varying the value seems likely to benefit MyS
On 5/19/09, Scott Carey wrote:
>
> On 5/18/09 3:32 PM, "Dimitri" wrote:
>
>> On 5/18/09, Scott Carey wrote:
>>> Great data Dimitri!
>>
>> Thank you! :-)
>>
>>>
>>> I see a few key trends in the poor scalability:
No, Tom, the query cache was off.
I put it always explicitly off on MySQL as it has scalability issues.
Rgds,
-Dimitri
On 5/19/09, Tom Lane wrote:
> Simon Riggs writes:
>> In particular, running the tests repeatedly using
>> H.REF_OBJECT = '01'
>>
On 5/18/09, Simon Riggs wrote:
>
> On Mon, 2009-05-18 at 20:00 +0200, Dimitri wrote:
>
>> >From my point of view it needs first to understand where the time is
>> wasted on a single query (even when the statement is prepared it runs
>> still slower comparing t
On 5/18/09, Scott Carey wrote:
> Great data Dimitri!
Thank you! :-)
>
> I see a few key trends in the poor scalability:
>
> The throughput scales roughly with %CPU fairly well. But CPU used doesn't
> go past ~50% on the 32 core tests. This indicates lock contentio
Thanks Dave for correction, but I'm also curious where the time is
wasted in this case?..
0.84ms is displayed by "psql" once the result output is printed, and I
got similar time within my client (using libpq) which is not printing
any output..
Rgds,
-Dimitri
On 5/18/09, Dave
vailable test time will be very limited..
Best regards!
-Dimitri
On 5/18/09, Simon Riggs wrote:
>
> On Thu, 2009-05-14 at 20:25 +0200, Dimitri wrote:
>
>> # lwlock_wait_8.4.d `pgrep -n postgres`
>
>>Lock IdMode Combined Time
Hi Scott,
let me now finish my report and regroup all the data together, and then
we'll continue the discussion as it moves more into the debug/profile phase..
- it'll not be polite on my part to send tons of attachments to
the mailing list :-)
Rgds,
-Dimitri
On 5/13/09, Scott Carey wro
It's absolutely great!
It'll not help here because the think time is 0,
but for any kind of solution with a pooler it's a must to try!
Rgds,
-Dimitri
On 5/13/09, Dimitri Fontaine wrote:
> Hi,
>
> Le 13 mai 09 à 18:42, Scott Carey a écrit :
>>> will not help, as e
aphs, pgsql, and other). I'll publish it on my web site and
send you a link.
Rgds,
-Dimitri
On 5/14/09, Simon Riggs wrote:
>
> On Tue, 2009-05-12 at 14:28 +0200, Dimitri wrote:
>
>> As problem I'm considering a scalability issue on Read-Only workload -
>> only selects
Hi,
Le 13 mai 09 à 18:42, Scott Carey a écrit :
will not help, as each client is *not* disconnecting/reconnecting
during the test, and PG is holding up well even with 256 users. And the TPS
limit is reached already at 64 users - I don't think a pooler will help
here.
Actually, it might help a little. Post
db engine):
http://dimitrik.free.fr/db_STRESS_MySQL_540_and_others_Apr2009.html#note_5442
Rgds,
-Dimitri
On 5/13/09, Kevin Grittner wrote:
> Glenn Maynard wrote:
>> I'm sorry, but I'm confused. Everyone keeps talking about
>> connection pooling, but Dimitri has said
your position with a pooler, but I also want you to think
about the idea that a 128-core system will become a commodity server very
soon, and to use these cores at their full power you'll need a
database engine capable of running 256 users without a pooler, because a
pooler will not help you here anymore..
Rgds
- 2 another clients are started => 4 in total
- sleep ..
...
... ===> 256 in total
- sleep ...
- kill clients
So I'm even able to monitor how each new client impacts all the others. The
test kit is quite flexible to prepare any kind of stress situation.
Rgds,
-Dimitri
On 5/12/09, Gle
s talking about this as an 'unoptimal' solution, the
> fact is there is no evidence that a connection pooler will fix the
> scalability from 16 > 32 cores.
> Certainly a connection pooler will help most results, but it may not fix the
> scalability problem.
>
> A q
On 5/12/09, Robert Haas wrote:
> On Tue, May 12, 2009 at 1:00 PM, Dimitri wrote:
>> On MySQL there is no changes if I set the number of sessions in the
>> config file to 400 or to 2000 - for 2000 it'll just allocate more
>> memory.
>
> I don't care whether t
Hi,
Le 12 mai 09 à 18:32, Robert Haas a écrit :
implement this same logic internally? IOW, when a client disconnects,
instead of having the backend exit immediately, have it perform the
equivalent of DISCARD ALL and then stick around for a minute or two
and, if a new connection request arrives
No, they keep connections till the end of the test.
Rgds,
-Dimitri
On 5/12/09, Joshua D. Drake wrote:
> On Tue, 2009-05-12 at 17:22 +0200, Dimitri wrote:
>> Robert, what I'm testing now is 256 users max. The workload is growing
>> progressively from 1, 2, 4, 8 ... to 256
On 5/12/09, Stefan Kaltenbrunner wrote:
> Dimitri wrote:
>> Hi Stefan,
>>
>> sorry, I did not have a time to bring all details into the toolkit -
>> but at least I published it instead to tell a "nice story" about :-)
>
> fair point and appreciate
Good point! I missed it.. - will 20MB be enough?
Rgds,
-Dimitri
On 5/12/09, Julian v. Bock wrote:
> Hi
>
>>>>>> "D" == Dimitri writes:
>
> D> current postgresql.conf:
>
> D> #
> D> max_connections =
ago PostgreSQL outperformed MySQL on
the same test case, and there was nothing done within MySQL code to
improve it explicitly for db_STRESS.. And I'm staying pretty honest
when I'm testing something.
Rgds,
-Dimitri
On 5/12/09, Robert Haas wrote:
> On Tue, May 12, 2009 at 8:59 AM,
way to run it against any
database schema, it's only a question of time..
Rgds,
-Dimitri
On 5/12/09, Stefan Kaltenbrunner wrote:
> Dimitri wrote:
>> Folks, before you start to think "what a dumb guy doing a dumb thing" :-))
>> I'll explain you few details:
>>
_statistics_target to 5 ! -but this one I
found myself :-))
Probably checkpoint_timeout may be bigger now with the current
settings? - the goal here is to keep Read+Write TPS as stable as
possible and also avoid a long recovery in case of
system/database/other crash (in theory).
Rgds,
-Dimitri
)..
And yes, I'll try to profile on 32 cores, it makes sense.
Rgds,
-Dimitri
On 5/12/09, Heikki Linnakangas wrote:
> Dimitri wrote:
>> What I discovered so far with all your help:
>> - the impact of a planner
>> - the impact of the analyze target
>> - the i
et to
get a profit from idiots :-)) That's why I never betting in my life,
but every time telling the same story in such situation... Did you
like it? ;-))
However, no problem to give you a credit as well to all pg-perf list
as it provides a very valuable help! :-))
Rgds,
-Dimitri
On 5/12
on read+write workload! :-)
Any other comments are welcome!
Rgds,
-Dimitri
On 5/12/09, Dimitri Fontaine wrote:
> Hi,
>
> Dimitri writes:
>
>>>> So, why I don't use prepare here: let's say I'm testing the worst
>>>> stress case :-) Imagine y
It's just one of the test conditions - "what if there 2000 users?" - I
know I may use pgpool or others, but I also need to know the limits of
the database engine itself.. For the moment I'm limiting to 256
concurrent sessions, but config params are kept like for 2000 :-)
Rgd
Hi,
Dimitri writes:
>>> So, why I don't use prepare here: let's say I'm testing the worst
>>> stress case :-) Imagine you have thousands of such kind of queries -
>>> you cannot prepare all of them! :-)
>>
>> Thousands? Surely there'
ta and your indexes - don't
need to spend so much time.
Rgds,
-Dimitri
On 5/12/09, Heikki Linnakangas wrote:
> Dimitri wrote:
>> Now, as you see from your explanation, the Part #2 is the most
>> dominant - so why instead to blame this query not to implement a QUERY
>> PLAN
ry is recognized by
>> parser we simply *reuse* the same plan?..
>
> This has been discussed in the past, but it turns out that a real
> implementation is a lot harder than it seems.
Ok. If I remember well, Oracle has it and it helps a lot, but for
sure it's not easy t
is the most
dominant - so why, instead of blaming this query, not implement a QUERY
PLANNER CACHE??? - in a way that if any *similar* query is recognized by
the parser we simply *reuse* the same plan?..
Rgds,
-Dimitri
On 5/11/09, Aidan Van Dyk wrote:
> * Dimitri [090511 11:18]:
>> Folks, it'
tests it gives:
- on 8 cores: 14.000 TPS
- on 16 cores: 17.500 TPS
- on 32 cores: 15.000 TPS (regression)
Rgds,
-Dimitri
On 5/11/09, Simon Riggs wrote:
>
> On Mon, 2009-05-11 at 17:18 +0200, Dimitri wrote:
>
>> Yes, forget, MySQL is reaching 17.500 TPS here.
>
> Ple
!
Rgds,
-Dimitri
On 5/11/09, Scott Marlowe wrote:
> On Mon, May 11, 2009 at 10:31 AM, Dimitri wrote:
>> Hi Kevin,
>>
>> PostgreSQL: 8.3.7 & 8.4
>> Server: Sun M5000 32cores
>> OS: Solaris 10
>>
>> current postgresql.conf:
>>
>>
OK, it'll be better to avoid such an improvement :-)
Performance - yes, but not at any price :-)
Thank you!
Rgds,
-Dimitri
On 5/11/09, Kevin Grittner wrote:
> Dimitri wrote:
>
>> What about "full_page_writes" ? seems it's "on" by default. Does it
&g
ot; ? seems it's "on" by default. Does it
make sense to turn it off?..
Rgds,
-Dimitri
On 5/11/09, Kevin Grittner wrote:
> Dimitri wrote:
>
>> PostgreSQL: 8.3.7 & 8.4
>> Server: Sun M5000 32cores
>> OS: Solaris 10
>
> Does that have a battery
ween autovacuum runs
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.001
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
#
Rgds,
-Dimitri
On 5/11
;t it?
;-)
And what about scalability on 32 cores?..
Any idea?
Rgds,
-Dimitri
On 5/11/09, Tom Lane wrote:
> Dimitri writes:
>> Anyone may explain me why analyze target may have so huge negative
>> secondary effect?..
>
> If these are simple queries, maybe what you're
sec - it helped, throughput is more
stable now, but instead of big waves I now have short waves anyway..
What is the best combination of options here?..
Rgds,
-Dimitri
t else may limit concurrent SELECTs here?..
Yes, I forgot - MySQL is reaching 17.500 TPS here.
Rgds,
-Dimitri
On 5/7/09, Simon Riggs wrote:
>
> On Thu, 2009-05-07 at 20:36 +0200, Dimitri wrote:
>
>> I've simply restarted a full test with hashjoin OFF. Until 32
>> concurre
and not surpassing
6.000 TPS, while 8.4 uses 90% CPU and reaches 11.000 TPS..
At the same time, comparing 8.3 and 8.4 - the response time
is 2 times lower in 8.4, and it seems to me the main gain for 8.4 is
here.
I'll publish all details, I just need time :-)
Rgds,
-Dimitri
On 5
ect
0.007 0.127 ExecScan
...
Curiously, "memcpy" is at the top. I don't know if it's implicated in many
cases, but it probably makes sense to see if it may be optimized, etc..
Rgds,
-Dimitri
On 5/7/09, Euler Taveira de Oliveira wrote:
> Dimitri escreveu:
>>
6ms
Any idea why the planner is not choosing this plan from the beginning?..
Any way to keep this plan without having hashjoin disabled globally or
per session?..
Rgds,
-Dimitri
On 5/6/09, Simon Riggs wrote:
>
> On Wed, 2009-05-06 at 10:31 +0200, Dimitri wrote:
>
>> I've alrea
Hi Ken,
yes, I may do it, but I did not expect to get into profiling initially :-)
I expected there was just something trivial within the plan that I just
didn't know.. :-)
BTW, is there already an integrated profiler within the code? or do I
need external tools?..
Rgds,
-Dimitri
On 5/6/09, Ke
I supposed that with prepare-then-execute the query optimizer no
longer comes into play in the "execute" phase, or did I miss something?..
I forgot to say: the query cache is disabled on the MySQL side.
Rgds,
-Dimitri
On 5/6/09, Craig Ringer wrote:
> Dimitri wrote:
>> Hi Chris,
>&
- execute of the same prepared "select count(*) ..." took 0.68ms
So, where the time is going?...
Rgds,
-Dimitri
On 5/6/09, Ries van Twisk wrote:
>
> On May 6, 2009, at 7:53 AM, Richard Huxton wrote:
>
>> Dimitri wrote:
>>> I'll try to answer all mails
.3.7, but there is still room for improvement if such a
small query can go faster :-)
Rgds,
-Dimitri
On 5/6/09, Albe Laurenz wrote:
> Dimitri wrote:
>> I've run several tests before and now going in depth to understand if
>> there is nothing wrong. Due such a single query tim
t all my "lc_*" variables are set to "C"...
Rgds,
-Dimitri
On 5/6/09, Merlin Moncure wrote:
> On Wed, May 6, 2009 at 7:46 AM, Merlin Moncure wrote:
>> prepare history_stat(char(10) as
>
> typo:
> prepare history_stat(char(10)) as
>
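For reference, the corrected prepared statement would be used along these lines (the underlying table and predicate are not shown in the thread, so this shape is illustrative):

```sql
-- Plan once, execute many times with different parameters.
PREPARE history_stat(char(10)) AS
    SELECT count(*) FROM history WHERE ref_object = $1;

EXECUTE history_stat('0000000001');
EXECUTE history_stat('0000000002');
DEALLOCATE history_stat;
```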