On 16 Feb 2011, at 9:54, Alessandro Candini wrote:
>> Try the above on a single DB using 4 threads. It will very probably perform
>> better.
>> To use your example:
>> 5432 ---> 150 million records
>> 5432 ---> 150 million records
>> 5432 ---> 150 million records
>> 5432 ---> 150 million recor
On 15/02/2011 19:32, Alban Hertroys wrote:
On 15 Feb 2011, at 9:32, Alessandro Candini wrote:
Is that a single query on that one DB compared to 4 queries on 4 DBs? How does
a single DB with 4 parallel queries perform? I'd expect that to win over the 4
DBs, due to the overhead those extra DB instances are generating.
On 15 Feb 2011, at 9:32, Alessandro Candini wrote:
>> Is that a single query on that one DB compared to 4 queries on 4 DBs? How
>> does a single DB with 4 parallel queries perform? I'd expect that to win
>> over the 4 DBs, due to the overhead those extra DB instances are generating.
>
> Maybe my
* Alessandro Candini wrote:
On 14/02/2011 21:00, Allan Kamau wrote:
On Mon, Feb 14, 2011 at 10:38 AM, Alessandro Candini wrote:
No, this database is on a single machine, but a very powerful one.
Processors with 16 cores each and SSD disks.
I already use partitioning and tablespaces for every instance of my db and I
On 14 Feb 2011, at 9:38, Alessandro Candini wrote:
I performed tests with a query returning more or less 10 records and, using
my C module, I obtained the following results (cache cleared before every
test):
- single db: 9.555 sec
- split in 4: 5.496 sec
Is that a single query
On Mon, Feb 14, 2011 at 10:38 AM, Alessandro Candini wrote:
> No, this database is on a single machine, but a very powerful one.
> Processors with 16 cores each and SSD disks.
>
> I already use partitioning and tablespaces for every instance of my db and I
> gain a lot with my split configuration.
For sure my life is more complex with this configuration :-D
But the performance tests that I performed (described in the previous
thread) tell me that this is a good way...
> Otherwise you probably didn't gain anything
> by splitting your database up like that - you've just reduced the
> available resources on that single machine.
No, this database is on a single machine, but a very powerful one.
Processors with 16 cores each and SSD disks.
I already use partitioning and tablespaces for every instance of my db
and I gain a lot with my split configuration.
My db is pretty huge: 600 million records, and partitioning is
> Otherwise you probably didn't gain anything
> by splitting your database up like that - you've just reduced the
> available resources on that single machine.
>
Unless that single machine has more resources than a single PG instance can
consume to satisfy one query. (see how much faster a greenpl
On 10 Feb 2011, at 9:01, Alessandro Candini wrote:
> I have installed 4 different instances of postgresql-9.0.2 on the same
> machine, on ports 5433, 5434, 5435, 5436.
I do hope you intend to put those databases on different machines eventually,
or some such? Otherwise you probably didn't gain anything by splitting your
database up like that - you've just reduced the available resources on that
single machine.
I think this is a bad idea. Better to use cursors.
2011/2/10, Alessandro Candini:
> Here is my probably uncommon situation.
>
> I have installed 4 different instances of postgresql-9.0.2 on the same
> machine, on ports 5433, 5434, 5435, 5436.
> On these instances I have split a huge database
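The cursor-based approach suggested above fetches a huge result set incrementally instead of materializing it all at once. Here is a minimal sketch of the idea using stdlib sqlite3's `fetchmany` for illustration (with PostgreSQL you would use a server-side cursor, e.g. `DECLARE ... CURSOR` / `FETCH`, or a psycopg2 named cursor); table and helper names are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (id INTEGER, payload TEXT)")
con.executemany("INSERT INTO records VALUES (?, ?)",
                [(i, f"row-{i}") for i in range(10_000)])

def stream_records(con, batch_size=1000):
    # Pull rows in fixed-size batches rather than loading the whole result
    # set into client memory -- the point of using a cursor on a huge table.
    cur = con.execute("SELECT id, payload FROM records ORDER BY id")
    while True:
        batch = cur.fetchmany(batch_size)
        if not batch:
            break
        yield from batch

total = sum(1 for _ in stream_records(con))
print(total)  # 10000
```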
Here is my probably uncommon situation.
I have installed 4 different instances of postgresql-9.0.2 on the same
machine, on ports 5433, 5434, 5435, 5436.
On these instances I have split a huge database, dividing it per date
(from 1995 to 1998 on 5433, from 1999 to 2002 on 5434, and so on).
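One consequence of the four-instance layout described above is that every client must know which instance holds which years. A minimal sketch of that routing logic, assuming the split continues in 4-year steps (only the first two ranges and the ports are stated in the message; the later ranges and the helper names are assumptions):

```python
# Map a year to the port of the instance that holds it, per the split
# described: 1995-1998 -> 5433, 1999-2002 -> 5434, and (assumed)
# 2003-2006 -> 5435, 2007-2010 -> 5436.
RANGES = [
    ((1995, 1998), 5433),
    ((1999, 2002), 5434),
    ((2003, 2006), 5435),
    ((2007, 2010), 5436),
]

def port_for_year(year: int) -> int:
    for (lo, hi), port in RANGES:
        if lo <= year <= hi:
            return port
    raise ValueError(f"no instance holds year {year}")

def ports_for_span(first: int, last: int) -> list[int]:
    # A query spanning several years must be sent to every instance whose
    # range overlaps it, and the partial results merged by the client --
    # bookkeeping that a single partitioned database would do for you.
    return sorted({port_for_year(y) for y in range(first, last + 1)})

print(port_for_year(1997))         # 5433
print(ports_for_span(1998, 2003))  # [5433, 5434, 5435]
```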
15 matches