3x200GB suggests you want to use RAID5?
Perhaps you should just pick 2x200GB and set them to RAID1. With roughly
200GB of storage, that should still easily house your "potentially
10GB" database with ample room to allow the SSDs to balance the
writes. But you save the investment and its pr
On 12-11-2012 11:45, Eildert Groeneveld wrote:
Dear All
I am currently implementing a compressed binary storage scheme for
genotyping data. These are basically vectors of binary data which may be
megabytes in size.
Our current implementation uses the data type bit varying.
Wouldn't 'bytea'
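A minimal sketch of the two types side by side, with assumed table and
column names:
CREATE TABLE genotype (
    id      serial PRIMARY KEY,
    bits_bv bit varying,  -- current approach: a vector of bits
    bits_ba bytea         -- suggested alternative: the same bits packed into bytes
);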
On 18-11-2011 4:44 CSS wrote:
Resurrecting this long-dormant thread...
Btw, the 5500 and 5600 Xeons are normally more efficient with a multiple of 6
RAM modules, so you may want to have a look at 24GB (6x4), 36GB (6x4+6x2) or
48GB (12x4 or 6x8) RAM.
Thanks - I really had a hard time wrappi
On 14-10-2011 10:23, CSS wrote:
-I'm calling our combined databases at 133GB "small", fair
assumption? -Is there any chance that a server with dual quad core
xeons, 32GB RAM, and 2 or 4 SSDs (assume mirrored) could be slower
than the 4 old servers described above? I'm beating those on raw
cpu,
On 11-10-2011 20:05 Claudio Freire wrote:
On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital
wrote:
2) Change all memory chips to new others, instead of maintain the old (16
GB) + new (32 GB).
Of course, mixing disables double/triple/whatuple channel, and makes
your memory subsystem
Anandtech took the trouble of doing that:
http://www.anandtech.com/show/4902/intel-ssd-710-200gb-review
I think the main advantage of the 710 compared to the 320 is its much
heavier over-provisioning and better-quality MLC chips. Both the 320 and
710 use the same controller and offer similar performance
On 12-9-2011 0:44 Anthony Presley wrote:
A few weeks back, we purchased two refurbished HP DL360 G5's, and were
hoping to set them up with PG 9.0.2, running replicated. These machines
have (2) 5410 Xeons, 36GB of RAM, (6) 10k SAS drives, and are using the
HP SA P400i with 512MB of BBWC. PG is
On 11-4-2011 22:04 da...@lang.hm wrote:
In your case, try your new servers without hyperthreading. You will end
up with a 4x4 core system, which should handily outperform the 2x4 core
system you are replacing.
The limit isn't 8 cores; it's that the hyperthreaded cores don't work
well with the p
On 18-3-2011 10:11, Scott Marlowe wrote:
On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden
wrote:
On 18-3-2011 4:02 Scott Marlowe wrote:
We have several 1U boxes (mostly Dell and Sun) running and had several in
the past. And we've never had any heating problems with them. That inc
On 18-3-2011 4:02 Scott Marlowe wrote:
On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles
wrote:
Another point. My experience with 1U chassis and cooling is that they
don't move enough air across their cards to make sure they stay cool.
You'd be better off ordering a 2U chassis with 8 3.5" drive
On 2-3-2011 16:29 Robert Haas wrote:
On Mon, Feb 28, 2011 at 2:09 PM, Josh Berkus wrote:
Does anyone have the hardware to test FlashCache with PostgreSQL?
http://perspectives.mvdirona.com/2010/04/29/FacebookFlashcache.aspx
I'd be interested to hear how it performs ...
It'd be a lot more int
On 10-12-2010 18:57 Arjen van der Meijden wrote:
Have a look here: http://www.anandtech.com/show/2829/21
The sequential writes-graphs consistently put several SSDs at twice the
performance of the VelociRaptor 300GB 10k rpm disk, and that's a test
from over a year ago, current
On 10-12-2010 14:58 Andy wrote:
We use ZFS and use SSDs for both the log device and L2ARC. All
disks and SSDs are behind a 3ware with BBU in single disk mode.
Out of curiosity why do you put your log on SSD? Log is all
sequential IOs, an area in which SSD is not any faster than HDD. So
I'd thi
On 16-11-2010 11:50, Louis-David Mitterrand wrote:
I have to collect lots of prices from web sites and keep track of their
changes. What is the best option?
1) one 'price' row per price change:
create table price (
id_price primary key,
id_product integer
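A minimal completion of that table, assuming a numeric amount and a
change timestamp for the remaining columns:
create table price (
    id_price   serial primary key,
    id_product integer not null,
    price      numeric(10,2) not null,  -- assumed: the price itself
    changed_at timestamp not null default now()  -- assumed: when it changed
);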
Isn't it more fair to just flush the cache before doing each of the
queries? In real life, you'll also have disk caching... Flushing the
buffer pool is easy, just restart PostgreSQL (or perhaps there is an
admin command for it too?). Flushing the OS disk cache is obviously
OS-dependent; for Linux you can write to /proc/sys/vm/drop_caches
On 13-8-2010 1:40 Scott Carey wrote:
Agreed. There is a HUGE gap between "ooh ssd's are fast, look!" and
engineering a solution that uses them properly with all their
strengths and faults. And as 'gnuoytr' points out, there is a big
difference between an Intel SSD and say, this thing:
http://ww
On 12-8-2010 2:53 gnuo...@rcn.com wrote:
- The value of SSD in the database world is not as A Faster HDD(tm).
Never was, despite the naïve who assert otherwise. The value of SSD
is to enable BCNF datastores. Period. If you're not going to do
that, don't bother. Silicon storage will never rea
What about FreeBSD with ZFS? I have no idea which features they support
and which not, but it at least is a bit more free than Solaris and still
offers that very nice file system.
Best regards,
Arjen
On 2-4-2010 21:15 Christiaan Willemsen wrote:
Hi there,
About a year ago we setup a machine
On 18-3-2010 16:50 Scott Marlowe wrote:
It's different because it only takes pgsql 5 milliseconds to run the
query, and 40 seconds to transfer the data across to your application,
which THEN promptly throws it away. If you run it as
MySQL's client lib doesn't transfer over the whole thing. Thi
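PostgreSQL's client library buffers the whole result set on the client by
default; a cursor is the usual way to stream it instead. A sketch, with
an assumed table name:
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH FORWARD 1000 FROM big_cur;  -- repeat until no rows are returned
CLOSE big_cur;
COMMIT;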
On 22-2-2010 6:39 Greg Smith wrote:
But the point of this whole testing exercise coming back into vogue
again is that SSDs have returned this negligent behavior to the
mainstream again. See
http://opensolaris.org/jive/thread.jspa?threadID=121424 for a discussion
of this in a ZFS context just last
Hi Reydan,
PostgreSQL is unaware of the number of CPUs and cores and doesn't
really care about that. It's up to your OS to spread the work across the
system.
Be careful with what you test; normal PostgreSQL cannot spread a single
query over multiple cores. But it will of course spread its fu
On 19-1-2010 13:59 Willy-Bas Loos wrote:
Hi,
I have a query that runs for about 16 hours; it should run at least weekly.
There are also clients connecting via a website; we don't want to keep
them waiting because of long DSS queries.
We use Debian Lenny.
I've noticed that renicing the process r
On 7-1-2010 13:38 Lefteris wrote:
I decided to run the benchmark over postgres to get some more
experience and insights. Unfortunately, the query times I got from
postgres were not the expected ones:
Why were they not expected? In the given scenario, column databases
have a huge advantag
On 30-7-2009 20:46 Scott Carey wrote:
Of course compression has a HUGE effect if your I/O system is half-decent.
Max GZIP compression speed with the newest Intel CPUs is something like
50MB/sec (it is data dependent, obviously -- it is usually closer to
30MB/sec). Max gzip decompression ranges
On 13-5-2009 20:39 Scott Carey wrote:
Excellent! That is a pretty huge boost. I'm curious which aspects of this
new architecture helped the most. For Postgres, the following would seem
the most relevant:
1. Shared L3 cache per processor -- more efficient shared data structure
access.
2. Fas
We have a dual E5540 with 16GB (I think 1066MHz) memory here, but no AMD
Shanghai. We haven't done PostgreSQL benchmarks yet, but given the
previous experiences, PostgreSQL should be equally faster compared to MySQL.
Our database benchmark is actually mostly a CPU/memory benchmark.
Comparing th
On 9-4-2009 16:09 Kevin Grittner wrote:
I haven't benchmarked it, but when one of our new machines seemed a
little sluggish, I found this hadn't been set. Setting this and
rebooting Linux got us back to our normal level of performance.
Why would you reboot after changing the elevator? For 2.6-
On 6-2-2009 16:27 Bruce Momjian wrote:
The experiences I have heard is that Dell looks at server hardware in
the same way they look at their consumer gear, "If I put in a cheaper
part, how much will it cost Dell to warranty replace it". Sorry, but I
don't look at my performance or downtime in th
On 4-2-2009 22:36 Scott Marlowe wrote:
We purchased the Perc 5E, which Dell wanted $728 for last fall, with 8
SATA disks in an MD-1000, and the performance is just terrible. No
matter what we do, the best throughput on any RAID setup was about 30
MB/s write and 60 MB/s read. I can ge
On 4-2-2009 21:09 Scott Marlowe wrote:
I have little experience with the 6i. I do have experience with all
the Percs from the 3i/3c series to the 5e series. My experience has
taught me that a brand new, latest model $700 Dell RAID controller is
about as good as a $150 LSI, Areca, or Escalade/3W
0..43.65 rows=2404 width=0) (actual time=38.472..38.472
rows=2455 loops=1)"
" Index Cond: (gene_ref = 301)"
"Total runtime: 94.622 ms"
Arjen van der Meijden wrote:
First of all, there is the 'explain analyze' output, which is pretty
helpful in postgresql.
My guess is, PostgreSQL decides to do a table scan for some reason. It
might not have enough statistics for this particular table or column to
make a sound decision. What you can try is to increase the
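A sketch of that, with hypothetical table and column names:
ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 1000;
ANALYZE mytable;  -- re-gather statistics so the planner sees them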
My colleague has tested a single Mtron Mobo and a set of 4. He also
mentioned the write performance was pretty bad compared to a Western
Digital Raptor. He had a solution for that however: just plug the SSD
into a RAID controller with decent cache performance (his favorites are
the Areca contro
On 6-3-2008 16:28 Craig James wrote:
On the one hand, I understand that Postgres has its architecture, and I
understand the issue of row visibility, and so forth. On the other
hand, my database is just sitting there, nothing going on, no
connections except me, and... it takes FIFTY FIVE SECOND
On 13-2-2008 22:06 Tobias Brox wrote:
What I'm told is that the state-of-the-art SAN allows for
an "insane amount" of hard disks to be installed, much more than what
would fit into any decent database server. We've ended up buying a SAN,
the physical installation was done last week, and I will b
There are several suppliers who offer Seagate's 2.5" 15k rpm disks; I
know HP and Dell are amongst those. So I was actually referring to those,
rather than to the 10k ones.
Best regards,
Arjen
[EMAIL PROTECTED] wrote:
On Mon, 28 Jan 2008, Arjen van der Meijden wrote:
On
On 28-1-2008 20:25 Christian Nicolaisen wrote:
So, my question is: should I go for the 2.5" disk setup or 3.5" disk
setup, and does the raid setup in either case look correct?
Afaik they are about equal in speed, with the smaller ones being a bit
faster in random access and the larger ones a b
On 31-10-2007 17:45 Ketema wrote:
I understand query tuning and table design play a large role in
performance, but taking that factor away
and focusing on just hardware, what is the best hardware to get for Pg
to work at the highest level
(meaning speed at returning results)?
It really depends
On 5-10-2007 16:34 Cláudia Macedo Amorim wrote:
[13236.470] statement_type=0, statement='select
a_teste_nestle."CODCLI",
a_teste_nestle."CODFAB",
a_teste_nestle."CODFAMILIANESTLE",
a_teste_nestle."CODFILIAL",
a_teste_nestle."CODGRUPONESTLE",
a_teste_nestle."CODSUBGRUPONESTLE",
a_t
On 6-9-2007 20:29 Mark Lewis wrote:
Maybe I'm jaded by past experiences, but the only real use case I can
see to justify a SAN for a database would be something like Oracle RAC,
but I'm not aware of any PG equivalent to that.
PG Cluster II seems to be able to do that, but I don't know whether
On 6-9-2007 20:42 Scott Marlowe wrote:
On 9/6/07, Harsh Azad <[EMAIL PROTECTED]> wrote:
Hi,
How about the Dell Perc 5/i card, 512MB battery backed cache or IBM
ServeRAID-8k Adapter?
All Dell Percs have so far been based on either Adaptec or LSI
controllers, and have ranged from really bad to
On 6-9-2007 14:35 Harsh Azad wrote:
2x Quad Xeon 2.4 GHz (4-way, only 2 populated right now)
I don't understand this sentence. You seem to imply you might be able to
fit more processors in your system?
Currently the only quad-cores you can buy are for dual-processor
systems, unless you already
On 9-8-2007 23:50 Merlin Moncure wrote:
Where the extra controller especially pays off is if you have to
expand to a second tray. It's easy to add trays but installing
controllers on a production server is scary.
For connectivity's sake that's not a necessity. You can either connect
(two?) extr
Perhaps you should've read the configuration manual page more carefully. ;)
Besides, WAL archiving is turned off by default, so if you see them
being archived you actually enabled it earlier.
The "archive_command" is empty by default: "If this is an empty string
(the default), WAL archiving is disabled."
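You can check the current value from any psql session; an empty result
means archiving is effectively off:
SHOW archive_command;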
Have you also tried the COPY statement? Afaik SELECT INTO is similar to
what happens there.
Best regards,
Arjen
On 17-7-2007 21:38 Thomas Finneid wrote:
Hi
I was doing some testing on "insert" compared to "select into". I
inserted 100 000 rows (with 8 column values) into a table, which t
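A sketch of the COPY variant, with an assumed table and data file:
COPY testtable FROM '/tmp/testdata.csv' WITH CSV;  -- bulk load, bypassing per-row INSERT overhead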
There are two solutions:
You can insert all data from tabelB into tabelA using a simple
INSERT ... SELECT statement, like so:
INSERT INTO tabelA SELECT EmpId, EmpName FROM tabelB;
Or you can visually combine them without actually putting the records in
a single table. That can be done with a normal SELECT
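The second option would look something like this, leaving both tables
untouched:
SELECT EmpId, EmpName FROM tabelA
UNION ALL
SELECT EmpId, EmpName FROM tabelB;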
n try the newest 5.0 version (5.0.41?),
which eliminates several scaling issues in InnoDB, but afaik not all of
them. Besides that, it can just be pretty painful to get a certain query
fast, although we've not very often seen it fail completely in the
last few years
ecture for cache coherency.
Best regards,
Arjen van der Meijden
I assume red is PostgreSQL and green is MySQL. That reflects my own
benchmarks with those two.
But I don't fully understand what the graph displays. Does it reflect
the ability of the underlying database to support a certain number of
users per second given a certain database size? Or is the g
On 21-4-2007 1:42 Mark Kirkwood wrote:
I don't think that will work for the vector norm, i.e.:
|x - y| = sqrt(sum over j ((x[j] - y[j])^2))
I don't know if this is useful here, but I was able to rewrite that
algorithm for a set of very sparse vectors (i.e. they had very little
overlapping fac
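Presumably such a rewrite relies on the standard expansion of the squared
norm, so only the overlapping nonzero components need to be visited:
|x - y|^2 = |x|^2 - 2 * (sum over j (x[j] * y[j])) + |y|^2
with |x|^2 and |y|^2 precomputed once per vector.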
On 7-4-2007 18:24 Tilo Buschmann wrote:
Unfortunately, the query above will definitely not work correctly, if
someone searches for "a" or "the".
Those are two words you may want to consider not searching on at all.
As Tom said, it's not very likely to be fixed in PostgreSQL. But you can
always
Can't you use something like this? Or is the distinct on the t.cd_id
still causing the major slowdown here?
SELECT ... FROM cd
JOIN tracks ...
WHERE cd.id IN (SELECT DISTINCT t.cd_id FROM tracks t
WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 10)
If that is your main cul
On 5-4-2007 17:58 [EMAIL PROTECTED] wrote:
On Apr 5, 2007, at 4:09 AM, Ron wrote:
BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
Thanks - I received similar private emails with the same advice. I will
change the controller to a LSI MegaRAID SAS 8408E -- any feedback on
this one?
We ha
If the 3U case has a SAS-expander in its backplane (which it probably
has?) you should be able to connect all drives to the Adaptec
controller, depending on the casing's exact architecture etc. That's
another two advantages of SAS, you don't need a controller port for each
harddisk (we have a D
On 4-4-2007 21:17 [EMAIL PROTECTED] wrote:
FWIW, I've had horrible experiences with Areca drivers on Linux. I've
found them to be unreliable when used with dual AMD64 processors and 4+ GB
of RAM. I've tried kernels 2.6.16 up to 2.6.19... intermittent yet
inevitable ext3 corruptions. 3ware cards, on th
On 4-4-2007 0:13 [EMAIL PROTECTED] wrote:
We need to upgrade a postgres server. I'm not tied to these specific
alternatives, but I'm curious to get feedback on their general qualities.
SCSI
dual xeon 5120, 8GB ECC
8*73GB SCSI 15k drives (PERC 5/i)
(dell poweredge 2900)
This is a SAS set
On 5-3-2007 21:38 Tom Lane wrote:
Keep in mind that Arjen's test exercises some rather narrow scenarios;
IIRC its performance is mostly determined by some complicated
bitmap-indexscan cases. So that "30% slower" bit certainly doesn't
represent an across-the-board figure. As best I can tell, the
Stefan Kaltenbrunner wrote:
ouch - do I read that right that even after Tom's fixes for the
"regressions" in 8.2.0 we are still 30% slower than the -HEAD checkout
from the middle of the 8.2 development cycle?
Yes, and although I tested about 17 different cvs-checkouts, Tom and I
weren't real
And here is that latest benchmark we did, using an 8-socket dual-core
Opteron Sun Fire X4600. Unfortunately PostgreSQL seems to have some
difficulties scaling over 8 cores, but not as bad as MySQL.
http://tweakers.net/reviews/674
Best regards,
Arjen
Arjen van der Meijden wrote:
Alvaro Herrera
On 28-2-2007 0:42 Geoff Tolley wrote:
[2] How do people on this list monitor their hardware raid? Thus far we
have used Dell and the only way to easily monitor disk status is to use
their openmanage application. Do other controllers offer easier means
of monitoring individual disks in a raid co
Alvaro Herrera wrote:
Interesting -- the MySQL/Linux graph is very similar to the graphs from
the .nl magazine posted last year. I think this suggests that the
"MySQL deficiency" was rather a performance bug in Linux, not in MySQL
itself ...
The latest benchmark we did was both with Solaris an
exhaust and power
supply. It was one of the reasons we decided to use separate enclosures,
separating the processors/memory from the big disk array.
Best regards and good luck,
Arjen van der Meijden
On 18-1-2007 23:11 Tom Lane wrote:
Increase work_mem? It's not taking the hash because it thinks it won't
fit in memory ...
When I increase it to 128MB in the session (an arbitrarily selected,
relatively large value) it indeed has the other plan.
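For reference, the session-level change described is just:
SET work_mem = '128MB';
-- then re-run EXPLAIN ANALYZE on the original query; the hash plan should appear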
Best regards,
Arjen
On 18-1-2007 18:28 Jeremy Haile wrote:
I once had a query which would operate on a recordlist and
see whether there were any gaps larger than 1 between consecutive
primary keys.
Would you mind sharing the query you described? I am attempting to do
something similar now.
Well it was over a
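One way to express such a gap check, sketched against a hypothetical
table recordlist with integer primary key id, is a self-join (window
functions were not yet available in PostgreSQL at the time):
SELECT r.id AS gap_after          -- ids whose successor id+1 is missing
FROM recordlist r
LEFT JOIN recordlist n ON n.id = r.id + 1
WHERE n.id IS NULL
  AND r.id < (SELECT max(id) FROM recordlist);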
On 18-1-2007 17:20 Scott Marlowe wrote:
Besides that, MySQL rewrites the entire table for most table-altering
statements you do (including indexes).
Note that this applies to the MyISAM table type. InnoDB works quite
differently. It is more like pgsql in behaviour, and is an MVCC storage
On 18-1-2007 0:37 Adam Rich wrote:
4) Complex queries that might take advantage of the MySQL "Query Cache"
since the base data never changes
Have you ever compared MySQL's performance with complex queries to
PostgreSQL's? I once had a query which would operate on a recordlist and
see whether
On 16-12-2006 4:24 Jeff Frost wrote:
We can add more RAM and drives for testing purposes. Can someone
suggest what benchmarks with what settings would be desirable to see how
this system performs? I don't believe I've seen any Postgres benchmarks
done on a quad Xeon yet.
We've done our "sta
On 7-12-2006 12:05 Mindaugas wrote:
Now about 2-core vs 4-core Woodcrest. For the HP DL360 I see a similarly
priced dual core [EMAIL PROTECTED] and four core [EMAIL PROTECTED] According to
the article's scaling data PostgreSQL performance should be similar (1.86GHz
* 2 * 80% = ~3GHz). And quad core has
These benchmarks are all done using 64 bit linux:
http://tweakers.net/reviews/646
Best regards,
Arjen
On 7-12-2006 11:18 Mindaugas wrote:
Hello,
We're planning a new server or two for PostgreSQL and I'm wondering whether
Intel Core 2 (Woodcrest for servers?) or Opteron is faster for PostgreSQL now.
On 7-12-2006 7:01 Jim C. Nasby wrote:
Can you post them on the web somewhere so everyone can look at them?
No, it's not (only) the size that matters, it's the confidentiality I'm
not allowed to just break by myself. Well, at least not on a scale like
that. I've been mailing off-list with Tom and
Tom Lane wrote:
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
I'll run another analyze on the database to see if that makes any
difference, but after that I'm not sure what to check first to figure
out where things go wrong?
Look for changes in plans?
Yeah, there are
'-fields of pg_statistic, will it?
I'll run another analyze on the database to see if that makes any
difference, but after that I'm not sure what to check first to figure
out where things go wrong?
Best regards,
Arjen van der Meijden
Tweakers.net
ith 8 disks each and they have
been excellent. Here (on your site) are results that bear this out:
http://tweakers.net/reviews/639/9
- Luke
On 11/22/06 11:07 AM, "Arjen van der Meijden" <[EMAIL PROTECTED]>
wrote:
Jeff,
You can find some (Dutch) results here on our websit
Jeff,
You can find some (Dutch) results here on our website:
http://tweakers.net/reviews/647/5
You'll find the AMCC/3ware 9550SX-12 with up to 12 disks, the Areca 1280 and
1160 with up to 14 disks, and a Promise and an LSI SATA RAID controller with
up to 8 disks each. Btw, that Dell Perc5 (sas) is afa
On 17-11-2006 18:45 Jeff Frost wrote:
I see many of you folks singing the praises of the Areca and 3ware SATA
controllers, but I've been trying to price some systems and am having
trouble finding a vendor who ships these controllers with their
systems. Are you rolling your own white boxes or a
Alvaro Herrera wrote:
Performance analysis of strange queries is useful, but the input queries
have to be meaningful as well. Otherwise you end up optimizing bizarre
and useless cases.
I had a similar one a few weeks ago. I did some batch-processing over a
bunch of documents and discovered p
On 20-10-2006 22:33 Ben Suffolk wrote:
How about the Fujitsu Siemens Sun Clones? I have not really looked at
them but have heard the odd good thing about them.
Fujitsu doesn't build Sun clones! That really is insulting for them ;-)
They do offer SPARC hardware, but that's a bit higher up the m
On 20-10-2006 16:58 Dave Cramer wrote:
Ben,
My option in disks is either 5 x 15K rpm disks or 8 x 10K rpm disks
(all SAS), or if I pick a different server I can have 6 x 15K rpm or 8
x 10K rpm (again SAS). In each case controlled by a PERC 5/i (which I
think is an LSI Mega Raid SAS 8408E card
On 12-10-2006 21:07 Jeff Davis wrote:
On Thu, 2006-10-12 at 19:15 +0200, Csaba Nagy wrote:
To formalize the proposal a little, you could have syntax like:
CREATE HINT [FOR USER username] MATCHES regex APPLY HINT some_hint;
Where "some_hint" would be a hinting language perhaps like Jim's, except
't
rule out Intel, just because with previous processors they were the
slower player ;)
Best regards,
Arjen van der Meijden
Try the translation ;)
http://tweakers.net/reviews/646/13
On 22-9-2006 10:32 Hannes Dorbath wrote:
A colleague pointed me to this site today:
http://tweakers.net/reviews/642/13
I can't read the language, so can't get a grip on what exactly the
"benchmark" was about.
Their diagrams show
u need maximum performance is when you have
to service a lot of concurrent visitors.
But if you benchmark only with a single thread or do benchmarks that are
nowhere near a real-life environment, it may show very different
results of course.
Best regards,
Arjen van der Meijden
On 15-9-2006 17:53 Tom Lane wrote:
If that WHERE logic is actually what you need, then getting this query
to run quickly seems pretty hopeless. The database must form the full
outer join result: it cannot discard any listing0_ rows, even if they
have lastupdate outside the given range, because t
On 8-9-2006 18:18 Stefan Kaltenbrunner wrote:
interesting - so this is a mostly CPU-bound benchmark?
Out of curiosity, have you done any profiling on the databases under
test to see where they are spending their time?
Yeah, it is.
We didn't do any profiling.
We had a Sun engineer visit us t
On 8-9-2006 15:01 Dave Cramer wrote:
But then again, systems with the Woodcrest 5150 (the subtop one) and
Opteron 280 (also the subtop one) are about equal in price, so it's not
a bad comparison from a bang-for-buck point of view. The Dempsey was
added to show how both the Opteron and the newer
Dave Cramer wrote:
Hi, Arjen,
The Woodcrest is quite a bit faster than the Opterons. Actually...
With Hyperthreading *enabled* the older Dempsey processor is also
faster than the Opterons with PostgreSQL. But then again, it is the
top-model Dempsey and not a top-model Opteron, so that isn't a
l-controller so it's faster in sequential I/O.
These benchmarks were not done using postgresql, so you shouldn't read
them as absolute for all your situations ;-) But you can get a good
impression I think.
Best regards,
Arjen van der Meijden
Tweakers.net
the moment. I currently am running a load
average of about 0.5 on a dual Xeon 3.06GHz P4 setup. How much CPU
performance improvement do you think the new Woodcrest CPUs offer over these?
-Kenji
On Fri, Aug 18, 2006 at 09:41:55PM +0200, Arjen van der Meijden wrote:
Hi Kenji,
I'm not sure
are considerably more than the
Dells. Is it worth waiting a few more weeks/months for Dell to release
something newer?
-Kenji
On Wed, Aug 09, 2006 at 07:35:22AM +0200, Arjen van der Meijden wrote:
With such a budget you should easily be able to get something like:
- A 1U high-performance serve
On 16-8-2006 18:48, Peter Hardman wrote:
Using identically structured tables and the same primary key, if I run this on
Paradox/BDE it takes about 120ms, on MySQL (5.0.24, local server) about 3ms,
and on PostgreSQL (8.1.3, local server) about 1290ms. All on the same
Windows XP Pro machine wit
ion. Regarding the JBOD enclosures, are these generally just 2U or 4U
units with SCSI interface connectors? I didn't see these types of boxes
available on Dell's website, I'll look again.
-Kenji
On Wed, Aug 09, 2006 at 07:35:22AM +0200, Arjen van der Meijden wrote:
With such a budget you
uld still be able to have two top-of-the-line x86 CPUs (AMD
Opteron 285 or Intel Woodcrest 5160) and 16GB of memory (even FB-DIMM,
which is pretty expensive).
Best regards,
Arjen van der Meijden
On 8-8-2006 22:43, Kenji Morishige wrote:
I've asked for some help here a few mo
Hi Markus,
As said, our environment really was a read-mostly one. So we didn't do
many inserts/updates and thus spent no time tuning those values, leaving
them at their default settings.
Best regards,
Arjen
Markus Schaber wrote:
Hi, Arjen,
Arjen van der Meijden wrote:
It was the
On 1-8-2006 19:26, Jim C. Nasby wrote:
On Sat, Jul 29, 2006 at 08:43:49AM -0700, Joshua D. Drake wrote:
I'd love to get an English translation that we could use for PR.
Actually, we have an english version of the Socket F follow-up.
http://tweakers.net/reviews/638 which basically displays the
gh it will be in Dutch, you can still read the
graphs).
Best regards,
Arjen van der Meijden
Tweakers.net
hat, we did no extra tuning of the OS, nor did Hans for the
MySQL-optimizations (afaik, but then again, he knows best).
Best regards,
Arjen van der Meijden
Jignesh Shah wrote:
Hi Arjen,
I am curious about your Sun Studio compiler options also.
Can you send that too ?
Any other tweakings tha
On 29-7-2006 19:01, Joshua D. Drake wrote:
Well I would be curious about the postgresql.conf and how much ram
etc... it had.
It was the 8-core version with 16GB memory... but actually that's just
overkill; the active portion of the database easily fits in 8GB, and a
test on another machine wit
because Tweakers.net runs on
MySQL, but Arjen van der Meijden has ported it to PostgreSQL and has
done basic optimizations like adding indexes.
There were a few minor datatype changes (like enum('Y', 'N') to boolean,
but on the other hand also 'int unsigned' to
On 29-7-2006 17:43, Joshua D. Drake wrote:
I would love to get my hands on that postgresql version and see how much
farther it could be optimized.
You probably mean the entire installation? As said in my reply to
Jochem, I've spent a few days testing all queries to improve their
performance
On 22-6-2006 15:03, David Roussel wrote:
Surely the 'perfect' line ought to be linear? If the performance was
perfectly linear, then the 'pages generated' ought to be G times the
number of (virtual) processors, where G is the gradient of the graph. In
such a case the graph will go through the origin