Re: [PERFORM] Opteron vs. Xeon "benchmark"
On 23-Sep-06, at 9:49 AM, Guido Neitzer wrote:

> On 9/23/06, Dave Cramer <[EMAIL PROTECTED]> wrote:
>> 1) The database fits entirely in memory, so this is really only testing
>> CPU, not I/O, which should be taken into account IMO
>
> I don't think this is really the reason that MySQL broke down at ten or
> more concurrent connections. The RAM might be a factor, but I don't think
> so in this case, as the result matches exactly what we have seen in
> similar tests: MySQL performs quite well on easy queries with little
> concurrency. We don't have that case very often in my company ... we have
> at least ten to twenty connections to the db executing statements, and
> some fairly complex statements running very often.
>
> Nevertheless - a benchmark is a benchmark, nothing else. We prefer
> PostgreSQL for reasons other than higher performance (which it does
> deliver in lots of situations).
>
> cug
>
> --
> PostgreSQL Bootcamp, Big Nerd Ranch Europe, Nov 2006
> http://www.bignerdranch.com/news/2006-08-21.shtml

I should make myself clear: I like the results of the benchmark, but I wanted to keep things in perspective.

Dave

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [PERFORM] Update on high concurrency OLTP application and Postgres
Christian Storm wrote:

> What heuristic do you use to up the statistics for such a table?

At the moment, my rule of thumb is to check the ANALYZE VERBOSE messages to see whether all table pages are being scanned:

  INFO: "mytable": scanned xxx of yyy pages, containing ...

If xxx = yyy, then I keep statistics at the current level. When xxx is way less than yyy, I increase the numbers a bit and retry. It's probably primitive, but it seems to work well.

Beyond that, no heuristics - just try and see. For tables of ~10k pages, I set statistics to 100/200. For ~100k pages, I set them to 500 or more. I don't know the exact relation.

> Once you've changed it, what metric do you use to see if it helps or was effective?

I rerun an ANALYZE and see the results... :-) If you mean checking the usefulness: I can only see it under heavy load, when particular db queries run in the order of a few milliseconds. If I see normal queries taking longer and longer, or even appearing in the server's log (> 500 ms), then I know an ANALYZE is needed, or statistics should be set higher.

--
Cosimo

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
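A minimal sketch of the try-and-see loop Cosimo describes, in SQL (the table and column names are placeholders, and 200 is just an example target, not a recommendation):

```sql
-- Raise the per-column statistics target above the 8.x default of 10.
-- "mytable" and "mycol" are hypothetical names.
ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 200;

-- Re-sample and check how many pages get scanned:
ANALYZE VERBOSE mytable;
-- INFO:  "mytable": scanned xxx of yyy pages, containing ...
-- If xxx is still well below yyy, raise the target and repeat.
```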
Re: [PERFORM] Opteron vs. Xeon "benchmark"
On 9/23/06, Dave Cramer <[EMAIL PROTECTED]> wrote:

> 1) The database fits entirely in memory, so this is really only testing CPU, not I/O, which should be taken into account IMO

I don't think this is really the reason that MySQL broke down at ten or more concurrent connections. The RAM might be a factor, but I don't think so in this case, as the result matches exactly what we have seen in similar tests: MySQL performs quite well on easy queries with little concurrency. We don't have that case very often in my company ... we have at least ten to twenty connections to the db executing statements, and some fairly complex statements running very often.

Nevertheless - a benchmark is a benchmark, nothing else. We prefer PostgreSQL for reasons other than higher performance (which it does deliver in lots of situations).

cug

--
PostgreSQL Bootcamp, Big Nerd Ranch Europe, Nov 2006
http://www.bignerdranch.com/news/2006-08-21.shtml
Re: [PERFORM] Opteron vs. Xeon "benchmark"
On 23-Sep-06, at 9:00 AM, Guido Neitzer wrote:

> I find the benchmark much more interesting for comparing PostgreSQL to MySQL than Intel to AMD. It might be as biased as other "benchmarks", but it clearly shows something that a lot of PostgreSQL users always thought: MySQL gives up on concurrency ... it just doesn't scale well.
>
> cug

Before you get too carried away with this benchmark, you should review the previous comments on this thread. Not that I don't agree, but let's put things in perspective.

1) The database fits entirely in memory, so this is really only testing CPU, not I/O, which should be taken into account IMO

2) The machines were not "equal": the AMD boxes did not have as much RAM.

Dave

> On 9/23/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>> Yep. From what I understand, Intel is 8 to 10 times the size of AMD. It's somewhat amazing that AMD even competes, and excellent for us, the consumers, that they compete well, ensuring that we get very fast computers for amazingly low prices. But Intel isn't crashing down any time soon. Perhaps they became a little lazy and made a few mistakes; AMD is forcing them to clean up. May the competition continue... :-)
>>
>> Cheers,
>> mark

--
PostgreSQL Bootcamp, Big Nerd Ranch Europe, Nov 2006
http://www.bignerdranch.com/news/2006-08-21.shtml

---(end of broadcast)---
TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [PERFORM] Opteron vs. Xeon "benchmark"
I find the benchmark much more interesting for comparing PostgreSQL to MySQL than Intel to AMD. It might be as biased as other "benchmarks", but it clearly shows something that a lot of PostgreSQL users always thought: MySQL gives up on concurrency ... it just doesn't scale well.

cug

On 9/23/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> Yep. From what I understand, Intel is 8 to 10 times the size of AMD. It's somewhat amazing that AMD even competes, and excellent for us, the consumers, that they compete well, ensuring that we get very fast computers for amazingly low prices. But Intel isn't crashing down any time soon. Perhaps they became a little lazy and made a few mistakes; AMD is forcing them to clean up. May the competition continue... :-)
>
> Cheers,
> mark

--
PostgreSQL Bootcamp, Big Nerd Ranch Europe, Nov 2006
http://www.bignerdranch.com/news/2006-08-21.shtml
Re: [PERFORM] Confusion and Questions about blocks read
Hi, Tom,

Tom Lane wrote:

> "Alex Turner" <[EMAIL PROTECTED]> writes:
>> How come the query statistics showed that 229066 blocks were read, given that all the blocks in all the tables put together only total 122968?
>
> You forgot to count the indexes. Also, the use of indexscans in the mergejoins probably causes multiple re-reads of some table blocks, depending on just what the physical ordering of the rows is.

As far as I understand, bitmap index scans improve this behaviour by ensuring that every table block is read only once.

Btw, would it be feasible to enhance normal index scans by checking all rows in the current table block against the query criteria, fetching them all, and blacklisting the block against further revisiting during the same index scan? I think that, for non-sorted cases, this could improve index scans a little, but I don't know whether it's worth the effort, given that bitmap index scans exist.

Thanks,
Markus

--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in Europe! www.ffii.org www.nosoftwarepatents.org
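For what it's worth, EXPLAIN ANALYZE shows whether the planner actually chose a bitmap scan for a given query. The table, columns, and indexes below are hypothetical, and the plan comments only indicate the typical shape (8.1 or later), not exact output:

```sql
-- Assuming separate indexes exist on columns a and b, the planner may
-- OR the two bitmaps together so each heap block is visited at most once:
EXPLAIN ANALYZE
SELECT * FROM mytable WHERE a < 100 OR b < 100;
--  Bitmap Heap Scan on mytable ...
--    ->  BitmapOr ...
--          ->  Bitmap Index Scan on mytable_a_idx ...
--          ->  Bitmap Index Scan on mytable_b_idx ...
```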
Re: [PERFORM] recommended benchmarks
If the real-world applications you'll be running on the box are Java (or use lots of prepared statements and no stored procedures)... try BenchmarkSQL from pgFoundry. It's extremely easy to set up and use. Like DBT2, it's an OLTP benchmark similar to TPC-C.

--Denis Lussier
http://www.enterprisedb.com

On 9/22/06, Bucky Jordan <[EMAIL PROTECTED]> wrote:

> On Fri, 2006-09-22 at 13:14 -0400, Charles Sprickman wrote:
>> Hi all,
>>
>> I still have a dual dual-core Opteron box with a 3Ware 9550SX-12 sitting here, and I need to start getting it ready for production. I also have to send back one processor, since we were mistakenly sent two. Before I do that, I would like to record some stats for posterity and post them to the list so that others can see how this particular hardware performs.
>>
>> It looks to be more than adequate for our needs...
>>
>> What are the standard benchmarks that people here use for comparison purposes? I know all benchmarks are flawed in some way, but I'd at least like to measure with the same tools that folks here generally use, to get a ballpark figure.
>
> Check out the OSDL stuff.
>
> http://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/
>
> Brad.

Let me know what tests you end up using and how difficult they are to set up and run - I have a Dell 2950 (2 dual-core Woodcrest) that I could probably run the same tests on. I'm looking into DBT2 (OLTP, similar to TPC-C) to start with, then probably DBT-3, since it's more OLAP-style (and more like the application I'll be dealing with). What specific hardware are you testing? (CPU, RAM, RAID setup, etc.?)

- Bucky

---(end of broadcast)---
TIP 4: Have you searched our list archives?

http://archives.postgresql.org