On 22/06/12 09:02, Maxim Boguk wrote:
Hi all,
Maybe I am completely wrong, but I always assumed that access to an array
element in PostgreSQL should take close to constant time.
But in tests I found that access speed degrades as O(N) with array size.
Test case (performed on a large, not busy
writes:
>> Then there's this from the article:
>>
>> "The key ideas are that SQL code is translated into C++, so avoiding the
>> need to use a slow SQL interpreter, and that the data is kept in memory,
>> with disk read/writes taking place in the background."
>>
>> Besides the nonsense stateme
"It does so by removing all constraints, then it compares table contents row by
row, inserts missing rows and deletes "extra" rows in the target database."
If you do the deletes while the constraints and indexes are removed, then you
need to create the constraints and indexes before you delete the
Original message
>Date: Mon, 25 Jun 2012 12:03:10 -0500
>From: pgsql-performance-ow...@postgresql.org (on behalf of Shaun Thomas)
>Subject: Re: [PERFORM] MemSQL the "world's fastest database"?
>To: Craig James
>Cc:
>
>On 06/25/2012 11:25 AM, Craig James wrote:
>
>> Any thoughts a
On 6/25/12 10:23 AM, Kevin Grittner wrote:
> Craig James wrote:
>
>> It claims to be "the world's fastest database."
>
>> [link where they boast of 80,000 tps read-only]
>
> 80,000 tps? Didn't we hit well over 300,000 tps in read-only
> benchmarks of PostgreSQL with some of the 9.2 performa
Craig James wrote:
> It claims to be "the world's fastest database."
> [link where they boast of 80,000 tps read-only]
80,000 tps? Didn't we hit well over 300,000 tps in read-only
benchmarks of PostgreSQL with some of the 9.2 performance
enhancements?
-Kevin
On 06/25/2012 11:25 AM, Craig James wrote:
Any thoughts about this? It seems to be a new database system designed
from scratch to take advantage of the growth in RAM size (data sets that
fit in memory) and the availability of SSD drives. It claims to be "the
world's fastest database."
I person
Any thoughts about this? It seems to be a new database system designed
from scratch to take advantage of the growth in RAM size (data sets that
fit in memory) and the availability of SSD drives. It claims to be "the
world's fastest database."
http://www.i-programmer.info/news/84-database/4397-me
On 2012-06-20 16:51, Michal Szymanski wrote:
Hi,
We started to think about using an SSD drive for our telco system DB. Because much
of our data is "almost" read-only, I think SSD is a good candidate for our task.
We would like to speed up read operations.
I've read post (http://blog.2ndquadr
Hello.
This may be a wrong parameter type, like using setObject(param, value)
instead of setObject(param, value, type), especially if the value passed is a
string object. AFAIR the index may be skipped in this case. You can check by
changing the statement to "delete from xxx where xxx_pk=?::bigint". If it
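To make the suggestion above concrete, here is a hedged sketch (the table and column names `xxx`/`xxx_pk` come from the quoted statement; everything else is illustrative, not the poster's actual code):

```sql
-- If the driver binds the parameter as text, the planner may not match
-- it against the bigint index on xxx_pk. An explicit cast restores the
-- expected comparison type:
DELETE FROM xxx WHERE xxx_pk = ?::bigint;

-- To see which plan is chosen, run EXPLAIN with a literal value:
EXPLAIN DELETE FROM xxx WHERE xxx_pk = 12345;
-- An "Index Scan ... on xxx" node means the index is used; a
-- "Seq Scan on xxx" suggests the parameter type defeated the index.
```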
Hi all,
Maybe I am completely wrong, but I always assumed that access to an array
element in PostgreSQL should take close to constant time.
But in tests I found that access speed degrades as O(N) with array size.
Test case (performed on a large, not busy server with 1GB work_mem to ensure I
worki
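The actual test case is truncated above; a minimal sketch of the kind of timing comparison described might look like this (an assumed reconstruction, not the poster's script):

```sql
-- Build arrays of two very different sizes and time access to a single
-- element of each. If element access were constant-time, both SELECTs
-- would take about as long; the thread reports that cost instead grows
-- roughly linearly with array size.
\timing on

SELECT arr[1]
FROM (SELECT array_agg(g) AS arr FROM generate_series(1, 10000) g) s;

SELECT arr[1]
FROM (SELECT array_agg(g) AS arr FROM generate_series(1, 1000000) g) s;
```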
I have two tables, node and relationship. Each relationship record connects two
nodes and has application keys (unfortunately named) that can be used by the
application to look up a relationship and get from one node to the other.
My query uses a node id and a description of a relationship f
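The description above is cut off, but a hypothetical schema matching it might look like the following (all identifiers are assumptions, not the poster's actual DDL):

```sql
CREATE TABLE node (
    node_id bigint PRIMARY KEY
);

CREATE TABLE relationship (
    relationship_id bigint PRIMARY KEY,
    from_node_id    bigint NOT NULL REFERENCES node,
    to_node_id      bigint NOT NULL REFERENCES node,
    app_key         text   NOT NULL   -- the "application key" used for look-ups
);

-- For the look-up pattern described (a node id plus a description of the
-- relationship), a composite index lets the planner find the matching
-- relationship row directly instead of scanning:
CREATE INDEX relationship_lookup_idx
    ON relationship (from_node_id, app_key);
```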
Hi,
We started to think about using an SSD drive for our telco system DB. Because much
of our data is "almost" read-only, I think SSD is a good candidate for our task.
We would like to speed up read operations.
I've read post (http://blog.2ndquadrant.com/intel_ssd_now_off_the_sherr_sh/)
abou
Hi all,
I am currently playing with the nice pgbench tool.
I would like to build a benchmark using pgbench with customized scenarios,
in order to get something quite representative of a real workload.
I have designed a few tables, with a simple script to populate them, and
defined 3 scenarios re
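For reference, a custom pgbench scenario is just a SQL script with pgbench meta-commands; a minimal sketch (the file name and the use of the standard pgbench_accounts table are assumptions):

```sql
-- scenario_read.sql: pick a random account and read its balance.
-- \set draws a fresh random id for each transaction (this is the
-- pgbench 9.6+ syntax; older releases use "\setrandom aid 1 100000").
\set aid random(1, 100000)
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
```

Run it with something like `pgbench -f scenario_read.sql -c 8 -T 60 mydb`; several `-f` scripts can be supplied together so the mix approximates a real workload.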
Hi,
I have a Java application that tries to synchronize tables in two databases
(remote source to local target). It does so by removing all constraints,
then it compares table contents row by row, inserts missing rows and
deletes "extra" rows in the target database. Delete performance is
incredibl
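The message is cut off, but the usual first check for slow row-by-row deletes is whether each DELETE can use an index; a hedged sketch (table and column names are illustrative only):

```sql
-- Row-by-row deletes are only fast if the predicate is indexed.
-- Without an index on the key, every DELETE is a sequential scan:
EXPLAIN DELETE FROM target_table WHERE pk = 42;

-- Also check referencing tables: a foreign key pointing at target_table
-- forces a lookup in the referencing table for every deleted row, which
-- is very slow unless the referencing column itself is indexed:
CREATE INDEX child_fk_idx ON child_table (target_pk);
```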
On Thursday, June 21, 2012 at 04:45:41, Craig Ringer wrote:
> On 06/20/2012 11:32 PM, Shaun Thomas wrote:
> > On 06/20/2012 09:11 AM, Craig Ringer wrote:
> >> For those of us who don't know MS-SQL, can you give a quick
> >> explanation of what the INCLUDE keyword in an index definition is
> >> expected