Robert Haas wrote:
On Fri, Mar 20, 2009 at 7:39 PM, Jignesh K. Shah wrote:
Alvaro Herrera wrote:
So Simon's correct.
And perhaps this explains why Jignesh is measuring an improvement on his
benchmark. Perhaps a useful experiment would be to turn this behavior
of
Alvaro Herrera wrote:
Alvaro Herrera wrote:
So Simon's correct.
And perhaps this explains why Jignesh is measuring an improvement on his
benchmark. Perhaps a useful experiment would be to turn this behavior
off and compare performance. This lack of measurement is probably the
m...@bortal.de wrote:
Hi Greg,
thanks a lot for your hints. I changed my config and changed RAID6 to
RAID10, but whatever I do, the benchmark breaks down at a scaling
factor of 75, where the database is "only" 1126MB big.
Here are my benchmark results (scaling factor, DB size in MB, TPS) using
Robert Haas wrote:
Actually, the patch I submitted shows no overhead from what I have seen, and I
think it is useful; depending on the workload, it can be turned on even in
production.
Well, unless I'm misunderstanding something, waking all waiters every
time could lead to arbitrarily lo
On 03/18/09 17:25, Robert Haas wrote:
On Wed, Mar 18, 2009 at 1:43 PM, Scott Carey wrote:
It's worth ruling out given that even if the likelihood is small, the fix is
easy. However, I don't see the throughput drop from peak as more
concurrency is added that is the hallmark of this problem
On 03/18/09 17:16, Scott Carey wrote:
On 3/18/09 4:36 AM, "Gregory Stark" wrote:
"Jignesh K. Shah" writes:
In next couple of weeks I plan to test the patch on a different x64 based
system to do a sanity testing on lower number of cores and also try
On 03/18/09 08:06, Simon Riggs wrote:
On Wed, 2009-03-18 at 11:45 +, Matthew Wakeling wrote:
On Wed, 18 Mar 2009, Simon Riggs wrote:
I agree with that, apart from the "granting no more" bit.
The most useful behaviour is just to have two modes:
* exclusive-lock held - all other x
Simon Riggs wrote:
On Tue, 2009-03-17 at 17:41 -0400, Jignesh K. Shah wrote:
I did a quick test with the patch. Unfortunately it improves my numbers
even with the default setting 0 (not sure whether I should be pleased or
sad). Definitely no overhead; in fact it seems to help performance a bit.
NOTE
On 03/16/09 13:39, Simon Riggs wrote:
On Wed, 2009-03-11 at 22:20 -0400, Jignesh K. Shah wrote:
A tunable does not impact existing behavior
Why not put the tunable parameter into the patch and then show the test
results with it in? If there is no overhead, we should then be able to
see
Simon Riggs wrote:
On Wed, 2009-03-11 at 22:20 -0400, Jignesh K. Shah wrote:
A tunable does not impact existing behavior
Why not put the tunable parameter into the patch and then show the test
results with it in? If there is no overhead, we should then be able to
see that.
Can
On 03/16/09 11:08, Gregory Stark wrote:
"Jignesh K. Shah" writes:
Generally when there is a dead constant... signs of a classic bottleneck ;-) We
will be fixing one to get to another... but knocking down bottlenecks is the name of
the game, I think
Indeed. I think the bottle
decibel wrote:
On Mar 13, 2009, at 3:02 PM, Jignesh K. Shah wrote:
vmstat seems similar to wakeup some:
 kthr     memory              page               disk       faults    cpu
 r b w    swap     free       re mf pi po fr de sr s0 s1 s2 sd in sy cs us sy id
 63 0 0   45535728 38689856   0  14  0  0  0
decibel wrote:
On Mar 11, 2009, at 10:48 PM, Jignesh K. Shah wrote:
Fair enough. Well, I am now appealing to all who have fairly
decent-sized hardware and want to try it out and see whether there are
"gains", "no-changes" or "regressions" based on your workl
Simon Riggs wrote:
On Wed, 2009-03-11 at 16:53 -0400, Jignesh K. Shah wrote:
1200: 2000: Medium Throughput: -1781969.000 Avg Medium Resp: 0.019
I think you need to iron out bugs in your test script before we put too
much stock into the results generated. Your throughput should not
h of doing
little bursts rather than tie them to 1 exclusive only.
-Jignesh
Jignesh K. Shah wrote:
Now with a modified fix (not the original one that I proposed but
something that works like a heart valve: it opens and shuts to a minimum
default, thus controlling how many waiters ar
Somebody else asked a question: This is actually a two socket machine
(128) threads but one socket is disabled by the OS so only 64-threads
are available... The idea being let me choke one socket first with 100%
CPU ..
Forgot some data: with the second test above, CPU: 48% user, 18% sys,
35% id
Scott Carey wrote:
On 3/13/09 8:55 AM, "Kevin Grittner" wrote:
>>> "Jignesh K. Shah" wrote:
> usr sys wt idl sze
> 38 11 0 50 64
The fact that you're maxing out at 50% CPU utilization has me
wondering -- are there really
Now with a modified fix (not the original one that I proposed but
something that works like a heart valve: it opens and shuts to a minimum
default, thus controlling how many waiters are woken up)
Time:Users:Throughput: Response
60: 8: Medium Throughput: 7774.000 Avg Medium Resp: 0.004
120: 1
In general, I suggest that it is useful to run tests with a few
different types of pacing. Zero delay pacing will not have realistic
number of connections, but will expose bottlenecks that are
universal, and less controversial. Small latency (100ms to 1s) tests
are easy to make from the ze
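The pacing styles described above can be sketched with pgbench. The database name "bench", client counts, and rates below are illustrative assumptions, not from the original posts; the block is guarded so it degrades gracefully where no server is reachable.

```shell
# Sketch of zero-delay vs. rate-limited pacing, assuming a hypothetical
# "bench" database. pgbench -R adds sleeps between transactions, much
# like client think time.
run_paced() {
    # Zero-delay pacing: every client issues transactions back to back.
    pgbench -c 64 -j 8 -T 60 bench
    # Small-latency pacing: rate-limit the aggregate to ~200 TPS.
    pgbench -c 64 -j 8 -T 60 -R 200 bench
}
if command -v pgbench >/dev/null 2>&1 && pg_isready -q 2>/dev/null; then
    run_paced
else
    echo "pgbench or a running server is unavailable; skipping"
fi
```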
Gregory Stark wrote:
"Jignesh K. Shah" writes:
Scott Carey wrote:
On 3/12/09 11:37 AM, "Jignesh K. Shah" wrote:
In general, I suggest that it is useful to run tests with a few different
types of pacing. Zero delay pacing will not have realistic number of
c
Greg Smith wrote:
On Thu, 12 Mar 2009, Jignesh K. Shah wrote:
As soon as I get more "cycles" I will try variations of it but it
would help if others can try it out in their own environments to see
if it helps their instances.
What you should do next is see whether you can
Scott Carey wrote:
On 3/12/09 11:37 AM, "Jignesh K. Shah" wrote:
And again, this is the third time I am saying it: the test users also
have some latency built up in them, which is what is generally
exploited to get more users than the number of CPUs on the system, but
On 03/12/09 15:10, Alvaro Herrera wrote:
Tom Lane wrote:
Scott Carey writes:
They are not meaningless. It is certainly more to understand, but the test is
entirely valid without that. In a CPU bound / RAM bound case, as concurrency
increases you look for the throughput trend, the
On 03/12/09 13:48, Scott Carey wrote:
On 3/11/09 7:47 PM, "Tom Lane" wrote:
All I'm adding, is that it makes some sense to me based on my
experience in CPU / RAM bound scalability tuning. It was expressed
that the test itself didn't even make sense.
I was wrong in my understanding of wha
On 03/12/09 11:13, Kevin Grittner wrote:
Scott Carey wrote:
"Kevin Grittner" wrote:
I'm a lot more interested in what's happening between 60 and 180
than over 1000, personally. If there was a RAID involved, I'd put
it down to better use of the numerous spindles, but when it'
On 03/11/09 22:01, Scott Carey wrote:
On 3/11/09 3:27 PM, "Kevin Grittner" wrote:
I'm a lot more interested in what's happening between 60 and 180 than
over 1000, personally. If there was a RAID involved, I'd put it down
to better use of the numerous spindles, but when it's all
Tom Lane wrote:
Scott Carey writes:
If there is enough lock contention and a common lock case is a short-lived
shared lock, it makes perfect sense. Fewer readers are blocked waiting
on writers at any given time. Readers can 'cut' in line ahead of writers
within a certain scope (
Tom Lane wrote:
"Kevin Grittner" writes:
I'm wondering about the testing methodology.
Me too. This test case seems much too far away from real world use
to justify diddling low-level locking behavior; especially a change
that is obviously likely to have very negative effects in oth
On 03/11/09 18:27, Kevin Grittner wrote:
"Jignesh K. Shah" wrote:
Rerunning similar tests on a 64-thread UltraSPARC T2plus based
server config
(IO is not a problem... all in RAM .. no disks):
Time:Users:Type:TPM: Response Time
60: 100: Medium Throughput: 105
Hello All,
As you know, one of the things I have constantly been doing is
using benchmark kits to see how we can scale PostgreSQL on the
UltraSPARC T2 based 1-socket (64 threads) and 2-socket (128 threads)
servers that Sun sells.
During last PgCon 2008
http://www.pgcon.org/2008/schedul
Moving this thread to Performance alias as it might make more sense for
folks searching on this topic:
Greg Smith wrote:
On Tue, 9 Sep 2008, Amber wrote:
I read something from
http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html
saying that PostgreSQL can't give the correct
Based on feedback after the sessions I did a few more tests which might be
useful to share.
One point that was suggested: get each client to do more work and reduce
the number of clients. The igen benchmark was flexible, and what I did
was remove all think time from it and repeated the test till
Greg Smith wrote:
On Wed, 28 May 2008, Josh Berkus wrote:
shared_buffers: according to witnesses, Greg Smith presented at East that,
based on PostgreSQL's buffer algorithms, buffers above 2GB would not
really receive significant use. However, Jignesh Shah has tested that on
workloads with
Josh Berkus wrote:
Folks,
Subsequent to my presentation of the new annotated.conf at pgCon last week,
there's been some argument about the utility of certain memory settings
above 2GB. I'd like to hash those out on this list so that we can make
some concrete recommendations to users.
shar
Joshua D. Drake wrote:
On Mon, 28 Apr 2008 14:40:25 -0400
Gregory Stark <[EMAIL PROTECTED]> wrote:
We certainly can pass TPC-C. I'm curious what you mean by 1/4 though?
On similar hardware? Or the maximum we can scale to is 1/4 as large
as Oracle? Can you point me to the actual benchmark r
Franck Routier wrote:
Hi,
I am in the process of setting up a postgresql server with 12 SAS disks.
I am considering two options:
1) set up a 12-disk RAID 10 array to get maximum raw performance from
the system and put everything on it (i.e. the whole pg cluster, including
WAL, and every table
Greg Smith wrote:
On Fri, 15 Feb 2008, Peter Schuller wrote:
Or is it a matter of PostgreSQL doing non-direct I/O, such that
anything cached in shared_buffers will also be cached by the OS?
PostgreSQL only uses direct I/O for writing to the WAL; everything
else goes through the regular OS
Greg Smith wrote:
On Wed, 6 Feb 2008, Simon Riggs wrote:
For me, it would be good to see a --parallel=n parameter that would
allow pg_loader to distribute rows in "round-robin" manner to "n"
different concurrent COPY statements. i.e. a non-routing version.
Let me expand on this. In many of
index. (I have spent two days in a previous role trying to figure out why
a particular query plan on another database changed in production.)
Simon Riggs wrote:
On Tue, 2008-02-05 at 13:47 -0500, Jignesh K. Shah wrote:
That sounds cool to me too..
How much work is to make pg_bulkload to
Hi Heikki,
Is there a way such an operation can be spawned as a worker process?
Generally during such loading, which most people will do during
"off-peak" hours, I expect additional CPU resources to be available. By
delegating such additional work to worker processes, we should be able
to capitalize
That sounds cool to me too..
How much work is it to make pg_bulkload work on 8.3? An integrated
version is certainly more beneficial.
Especially, I think it will also help other setups like TPC-E,
where this is a problem.
Regards,
Jignesh
Simon Riggs wrote:
On Tue, 2008-02-05 at 1
One of the problems with "Empty Table optimization" is that if there are
indexes created then it is considered as no longer empty.
Commercial databases have options like "IRRECOVERABLE" clause along
with DISK PARTITIONS and CPU partitions for their bulk loaders.
So one option turns off loggi
Gregory Stark wrote:
Incidentally, we found some cases that Solaris was particularly bad at. Is
there anybody in particular who would be interested in hearing about them?
(Not meant to be a knock on Solaris; I'm sure there are other cases Linux or
BSD handle poorly too)
Send me the detai
er releases.. I have ideas
for a few of them.
Regards,
Jignesh
Gregory Stark wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
Then the power run, which is essentially running one query at a time, should
essentially be able to utilize the full system (specially mul
Doing it at low scales is not attractive.
Commercial databases are publishing at scale factors of 1000 (about 1TB)
to 10000 (10TB), with one at 30TB. So ideally, right now, tuning
should start at a 1000 scale factor.
Unfortunately I have tried that before with PostgreSQL the few of the
problem
Hi Simon,
I have some insight into TPC-H on how it works.
First of all I think it is a violation of TPC rules to publish numbers
without auditing them first. So even if I do the test to show the
better performance of PostgreSQL 8.3, I cannot post it here or any
public forum without doing goi
Hi David,
I have been running a few tests with 8.2.4 and here is what I have seen:
If fsync=off is not an option (and it should not be an option :-) ),
then the commit_delay=10 setting seems to help a lot in my OLTP runs.
Granted, it will delay your transactions a bit, but the gain is big
considering
ly better than the default ..
(Plus the risk of losing data is reduced from 600ms to 300ms)
Thanks.
Regards,
Jignesh
Tom Lane wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
So the ratio of reads vs writes to clog files is pretty huge..
It looks to me that
-2753028252512 27
#
So the ratio of reads vs writes to clog files is pretty huge..
-Jignesh
Jignesh K. Shah wrote:
Tom,
Here is what I did:
I started aggregating all read information:
First I also had added group by pid(arg0,arg1, pid) and the counts
were all coming as
Hi George,
I first saw the 4M/sec problem during an EAStress-type
run with only 150 connections.
I will try to do more testing today that Tom has requested.
Regards,
Jignesh
Gregory Stark wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
CLOG data
sive 15
Lock Id            Combined Time (ns)
WALInsertLock      55243
ProcArrayLock      69700140
#
What will help you find out why it is reading the same page again?
-Jignesh
Jignesh K. Shah wrote:
I agree with Tom.. somehow I think increasing
I agree with Tom... somehow I think increasing NUM_CLOG_BUFFERS just
pushes the symptom out to a later value. I promise to look more into it
before making any recommendations to increase NUM_CLOG_BUFFERS.
Because though "iGen" showed improvements in that area by increasing
num_clog_buffer
The problem I saw was first highlighted by EAStress runs with PostgreSQL
on Solaris with 120-150 users. I just replicated that via my smaller
internal benchmark that we use here to recreate that problem.
EAStress should be just fine to highlight it.. Just put pg_clog on
O_DIRECT or something s
I thought I will update this to the Performance alias too about our
testing with PG8.3beta1 on Solaris.
Regards,
Jignesh
Background:
We were using PostgreSQL 8.3beta1 testing on our latest Sun SPARC
Enterprise T5220 Server using Solaris 10 8/07. Generally for performance
benefits in Solar
Update on my testing 8.3beta1 on Solaris.
* CLOG reads
* Asynchronous Commit benefit
* Hot CPU Utilization
Regards,
Jignesh
Background:
We were using PostgreSQL 8.3beta1 testing on our latest Sun SPARC
Enterprise T5220 Server using Solaris 10 8/07 and Sun Fire X4200 using
Solaris 10 8/07.
change for high
number of users.
Thanks.
Regards,
Jignesh
Simon Riggs wrote:
On Fri, 2007-08-03 at 16:09 -0400, Jignesh K. Shah wrote:
This patch seems to work well (both with the 32 and 64 values, but not with 16
or the default 8).
Could you test at 24 please also? Tom has pointed
wrote:
On Thu, 2007-07-26 at 11:27 -0400, Jignesh K. Shah wrote:
However at 900 Users where the big drop in throughput occurs:
It gives a different top "consumer" of time:
postgres`LWLockAcquire+0x1c8
postgres`SimpleLruReadPage+0x1ac
Yes, quite a lot of transactions... But the piece that's slow is where it
is clearing things up in CommitTransaction().
I am not sure how ProcArrayLock is designed to work, and hence it is not
clear whether what we are seeing is what we expect.
Do we have some design doc on ProcArrayLock to understand its purp
.
Though I havent seen what we can do with ProcArrayLock problem.
Regards,
Jignesh
Jignesh K. Shah wrote:
Using CLOG Buffers 32 and the commit sibling check patch I still see a
drop at 1200-1300 users..
bash-3.00# ./4_lwlock_waits.d 18250
Lock Id            Mode      Count
Using CLOG Buffers 32 and the commit sibling check patch I still see a
drop at 1200-1300 users..
bash-3.00# ./4_lwlock_waits.d 18250
Lock Id            Mode      Count
XidGenLock         Shared    1
CLogControlLock    Shared    2
will try with your second patch.
Regards,
Jignesh
Simon Riggs wrote:
On Thu, 2007-07-26 at 17:17 -0400, Jignesh K. Shah wrote:
Lock Id            Combined Time (ns)
XidGenLock         194966200
WALInsertLock      517955000
CLogControlLock
Yes, I can try to break up the shared and exclusive time.
Also, yes, I use commit_delay=10; it helps a lot in reducing IO load.
I will try out the patch soon.
-Jignesh
Simon Riggs wrote:
On Thu, 2007-07-26 at 17:17 -0400, Jignesh K. Shah wrote:
Lock Id Combined Time
-0400, Jignesh K. Shah wrote:
BEAUTIFUL!!!
Using the patch I can now go up to 1300 users without dropping. But now
it still repeats at 1300-1350 users.
OK, can you try again with 16 and 32 buffers please? We need to know
how many is enough and whether this number needs to be variable
?
-Jignesh
Simon Riggs wrote:
On Thu, 2007-07-26 at 11:27 -0400, Jignesh K. Shah wrote:
However at 900 Users where the big drop in throughput occurs:
It gives a different top "consumer" of time:
postgres`LWLockAcquire+0x1c8
postgres`SimpleLruRead
Loop+0x63c
postgres`PostmasterMain+0xc40
166130
Maybe you all will understand more than I do about what it does here.
Looks like IndexNext has a problem at a high number of users to me, but I
could be wrong.
-Jignesh
Tom Lane wrote:
"Jignesh K. Shah" <[E
1900900
CheckpointStartLock 2392893700
bash-3.00#
-Jignesh
Tom Lane wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
Here is how I got the numbers..
I had about 1600 users login into postgresql. Then started the run with
500 users and using DTrace I started tracki
Here is how I got the numbers:
I had about 1600 users log in to PostgreSQL. Then I started the run with
500 users, and using DTrace I started tracking PostgreSQL locking "as
viewed from one user/connection". Echo statements indicate how many
users were active at that point and how the throughpu
http://blogs.sun.com/jkshah/entry/specjappserver2004_with_glassfish_v2_and
This time with 33% less app server hardware but the same setup for
PostgreSQL 8.2.4, with a 4.5% better score. There has been a reduction in
CPU utilization by PostgreSQL with the new app server, which means there
is potential
197 2800 8435 43 13 0 44
210 0 179 2485 7352 234 2233 3090 8237 43 12 0 44
220 0 813 6041 5963 4006 82 1125 4480 4442 25 42 0 33
230 0 162 2415 7364 225 2170 3550 7720 43 11 0 45
Tom Lane wrote:
"Jign
mutex
lock 0x10059e280 in Solaris Library call:
postgres`AllocSetDelete+0x98
postgres`AllocSetAlloc+0x1c4
I need to enable the DTrace probes on my builds
-Jignesh
Tom Lane wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
So follow that up --- try to
Tom Lane wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
Yes I did see increase in context switches and CPU migrations at that
point using mpstat.
So follow that up --- try to determine which lock is being contended
for. There's some very crude code in the source
Yes I did see increase in context switches and CPU migrations at that
point using mpstat.
Regards,
Jignesh
Tom Lane wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
There are no hard failures reported anywhere. Log min durations does
show that queries are now slowing
I forgot to add one more piece of information: I also tried the same
test with 64-bit PostgreSQL with 6GB shared_buffers, and the results are the
same; it drops around the same point, which to me sounds like a bottleneck.
More later
-Jignesh
Jignesh K. Shah wrote:
Awww Josh,
I was just
Awww Josh,
I was just enjoying the chat on the picket fence! :-)
Anyway, the workload is mixed (reads, writes) with simple to medium
queries. The workload is known to scale well. But in order to provide
substantial input I am still trying to eliminate things that can
bottleneck. Currently I hav
Can you list others that seemed out of place?
Thanks.
Regards,
Jignesh
Tom Lane wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
Heikki Linnakangas wrote:
May I ask you why you set max_prepared_transactions to 450, while
you're apparently not using
Heikki Linnakangas wrote:
May I ask you why you set max_prepared_transactions to 450, while
you're apparently not using prepared transactions, according to this
quote:
Recoverable 2-phase transactions were used to coordinate the
interaction between
the database server and JMS server usi
!!
-Jignesh
Philippe Amelant wrote:
Am I wrong, or is DB2 9.1 faster on less powerful hardware?
On Monday, July 9, 2007 at 11:57 -0400, Jignesh K. Shah wrote:
Hello all,
I think this result will be useful for performance discussions of
postgresql against other databases.
http
Hi Heikki,
Heikki Linnakangas wrote:
That's really exciting news!
I'm sure you spent a lot of time tweaking the settings, so let me ask
you something topical:
How did you end up with the bgwriter settings you used? Did you
experiment with different values? How much difference did it make?
Hello all,
I think this result will be useful for performance discussions of
postgresql against other databases.
http://www.spec.org/jAppServer2004/results/res2007q3/
More on Josh Berkus's blog:
http://blogs.ittoolbox.com/database/soup/archives/postgresql-publishes-first-real-benchmark-17470
On Solaris you just look at the mount options on the file system and see
if there is a forcedirectio option enabled. Generally, since PostgreSQL
doesn't use any special options for enabling direct I/O, that's a known way
to figure it out on Solaris. At least on Solaris, the performance over
buffered file
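The mount-option check described above can be scripted. This is a minimal sketch, assuming the Solaris convention of listing forcedirectio among the mount options; on non-Solaris systems it simply reports that nothing was found.

```shell
# Check whether any filesystem is mounted with forcedirectio (Solaris).
# On Linux/BSD the grep finds nothing and the else branch reports that.
if mount 2>/dev/null | grep forcedirectio; then
    echo "forcedirectio is enabled on the filesystems above"
else
    echo "no forcedirectio mounts found"
fi
```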
tried changing the bgwriter_delay parameter to 10. The
spike in I/O still occurs, although not on a consistent basis, and it
is only happening for a few seconds.
On 8/30/06, *Jignesh K. Shah* <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]> >
ueries waiting to be processed. After a few seconds,
everything seems to be gone. All writes that are not happening at the
time of this i/o jump are being processed very fast, thus do not show on
pg_stat_activity.
Thanks in advance for the reply,
Best,
J
On 8/29/06, *Jignesh K. Shah* <[EMAIL PRO
Also to answer your real question:
DTrace On Solaris 10:
# dtrace -s /usr/demo/dtrace/whoio.d
It will tell you the pids doing the io activity and on which devices.
There are more scripts in that directory, like iosnoop.d, iotime.d and others, which will also give
other details like file accesse
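The whoio.d invocation from the post, wrapped so the block is safe to paste on any system: it only prints the command (DTrace needs root and runs until interrupted), rather than executing it.

```shell
# Show how to run the demo script that attributes I/O to pids and devices.
# Printing instead of executing avoids hanging or failing on non-Solaris
# systems; on Solaris 10 the script ships in /usr/demo/dtrace.
cmd='dtrace -s /usr/demo/dtrace/whoio.d'
if command -v dtrace >/dev/null 2>&1; then
    echo "run as root: $cmd   # Ctrl-C to stop and print the summary"
else
    echo "DTrace not available here; on Solaris 10 run: $cmd"
fi
```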
Did you increase the checkpoint segments and change the default WAL sync
method to fdsync?
http://blogs.sun.com/jkshah/entry/postgresql_on_solaris_better_use
Try fdsync instead of fsync as mentioned in the entry.
Regards,
Jignesh
Junaili Lie wrote:
Hi everyone,
We have a postgresql 8.1 ins
ere are a few larger than
that. Most records are relatively small, consisting of mostly numbers
(id's and such).
The results presented here was with 25 concurrent connections.
Best regards,
Arjen
Jignesh K. Shah wrote:
Your user time is way too high for a T2000...
If you have a 6 core ma
's for semsys, like the memcpy-traces,
or are those of no interest here?
Best regards,
Arjen
On 16-5-2006 17:52, Jignesh K. Shah wrote:
Hi Arjen,
Can you send me your colleagues' names in a private email?
One of the drawbacks of the syscall.d script is relative timings and
hence if
Hi Jignesh,
Jignesh K. Shah wrote:
Hi Arjen,
Looking at your outputs...of syscall and usrcall it looks like
* Spending too much time in semsys which means you have too many
connections and they are contending to get a lock.. which is
potentially the WAL log lock
* llseek is high which
Hi Arjen,
Looking at your outputs...of syscall and usrcall it looks like
* Spending too much time in semsys which means you have too many
connections and they are contending to get a lock.. which is potentially
the WAL log lock
* llseek is high which means you can obviously gain a bit
ds help.
Just my two cents.
Regards,
Jignesh
Bruce Momjian wrote:
Jignesh K. Shah wrote:
Bruce,
Hard to answer that... People like me who know and love PostgreSQL and
Solaris find this an opportunity to make their favorite database
work best on their favorite operating syste
Bruce,
Hard to answer that... People like me who know and love PostgreSQL and
Solaris find this an opportunity to make their favorite database
work best on their favorite operating system.
Many times PostgreSQL has many things based on the assumption that it will
run on Linux, and it is le
For a DSS type workload with PostgreSQL where you end up with single
long running queries on postgresql with about 100GB, you better use
something like Sun Fire V40z with those fast Ultra320 internal drives.
This might be perfect low cost complete database in a box.
Sun Fire T2000 is great for
Hi Leigh
inline comments
Leigh Dyer wrote:
Luke Lonergan wrote:
Juan,
We've got a Sun Fire V40z and it's quite a nice machine -- 6x 15krpm
drives, 4GB RAM, and a pair of Opteron 850s. This gives us more than
enough power now for what we need, but it's nice to know that we can
shoehorn
06.03.2006, at 21:10 Uhr, Jignesh K. Shah wrote:
Like migrate all your postgresql databases to one T2000. You might
see that your average response time may not be faster but it can
handle probably all your databases migrated to one T2000.
In essence, your single thread performance will not speed up
Suggestions for benchmarks on Sun Fire T2000...
* Don't try DSS or TPC-H type tests with Postgres on a Sun Fire T2000.
Since such queries tend to have one connection, it will perform badly
with Postgres, since it will use only one hardware virtual CPU of the
available 32 virtual CPUs on the Sun Fire T
Actually fsync is not the default on Solaris (verify using "show all;").
(If you look closely in postgresql.conf it is commented out and
mentioned as the default, but "show all" tells a different story.)
In all my cases I saw the default as
wal_sync_method | open_datasync
Also I had se
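The check described above — trusting a live session rather than the commented-out value in postgresql.conf — can be sketched as below. The assumption that psql can reach a local server is mine; the block just prints the command when no server is reachable.

```shell
# Confirm the effective wal_sync_method from a running server.
# -A (unaligned) and -t (tuples only) print just the value.
if command -v psql >/dev/null 2>&1 && pg_isready -q 2>/dev/null; then
    psql -Atc "SHOW wal_sync_method;"
else
    echo "no reachable server; run: psql -c 'SHOW wal_sync_method;'"
fi
```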
What are your postgresql.conf parameters for the equivalent ones that I
suggested?
I believe your wal_buffers and checkpoint_segments could be bigger. If
that's the case, then yep, you are fine.
As for the background writer, I am seeing mixed results so far, so I'm not sure
about that.
But thanks for the
What version of Solaris are you using?
Do you have the recommendations while using COPY on Solaris?
http://blogs.sun.com/roller/page/jkshah?entry=postgresql_on_solaris_better_use
wal_sync_method = fsync
wal_buffers = 128
checkpoint_segments = 128
bgwriter_percent = 0
bgwriter_maxpages = 0
And
For people installing PostgreSQL on Solaris with the new package, it
will show a greatly improved experience to get PostgreSQL up and running,
which was quite an inhibitor in terms of "Love at First Sight". This will
now help people familiar with Solaris have a great first impression of
Postgre
lockstat is available in Solaris 9. That can help you determine if
there are any kernel-level locks that are occurring during that time.
Solaris 10 also has plockstat, which can be used to identify userland
locks happening in your process.
Since you have Solaris 9, try the following:
You ca
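The two Solaris tools mentioned above can be invoked roughly as follows. The commands are printed rather than executed, since both need root and a live process; the pid and the 10-second window are hypothetical.

```shell
# lockstat samples kernel lock activity for the duration of the child
# command; plockstat -A traces all userland lock events in one process.
PID=12345   # hypothetical postgres backend pid
echo "kernel locks (Solaris 9+):  lockstat -o /tmp/lockstat.out sleep 10"
echo "user locks  (Solaris 10+):  plockstat -A -p $PID"
```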
Hi Juan,
The Solaris 10 license is free. In fact, I believe you do receive the
media with the Sun Fire V20z. If you want support, then there are various
"pay" plans depending on the level of support. If not, your license
allows you Right to Use anyway, for free.
That said, I haven't done much testing
1 - 100 of 107 matches