haven't tested a composite index
invsensor is 2,003,980 rows and 219MB
granver is 5,138,730 rows and 556MB
the machine has 32GB of memory
seq_page_cost, random_page_cost, and effective_cache_size are set to the
defaults (1, 4, and 128MB) - looks like they could be bumped up.
Got any recommendations?
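For a 32GB box, common community rules of thumb (not official recommendations) are effective_cache_size around 3/4 of RAM and a lower random_page_cost when the working set is mostly cached. A minimal sketch of that arithmetic, assuming a dedicated database server:

```python
def suggest_settings(ram_gb):
    """Rule-of-thumb starting points for a dedicated PostgreSQL box.

    These are common community heuristics, not official recommendations:
    effective_cache_size ~ 3/4 of RAM, shared_buffers ~ 1/4 of RAM
    (capped around 8GB), and a lower random_page_cost when most of the
    working set fits in cache.
    """
    return {
        "effective_cache_size": f"{int(ram_gb * 0.75)}GB",
        "shared_buffers": f"{min(int(ram_gb * 0.25), 8)}GB",
        "seq_page_cost": 1.0,     # keep the default
        "random_page_cost": 2.0,  # down from the default of 4
    }

print(suggest_settings(32))
```

The exact numbers depend on workload; the point is only that a 128MB effective_cache_size badly understates a 32GB machine.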
thanks for taking a look at this and it's never too late!!
I've tried bumping up work_mem and did not see any improvements -
All the indexes you asked about do exist; see below
Any other ideas?
CREATE INDEX invsnsr_idx1
ON invsensor
USING btree
(granule_id);
CREATE INDEX invsns
On Tue, May 10, 2011 at 7:35 PM, Craig Ringer
wrote:
> On 11/05/11 05:34, Aren Cambre wrote:
>
>> Using one thread, the app can do about 111 rows per second, and it's
>> only exercising 1.5 of 8 CPU cores while doing this. 12,000,000 rows /
>> 111 rows per second ~= 30 hours.
>
> I don't know how
On Tue, May 10, 2011 at 3:32 PM, Merlin Moncure wrote:
> why even have multiple rows? just jam it all in there! :-D
Exactly, serialize the object and stuff it into a simple key->value
table. Way more efficient than EAV.
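The serialize-and-stuff approach above can be sketched without any database at all; assuming JSON as the serialization format and a hypothetical (key, value) row shape:

```python
import json

# Hypothetical object; any JSON-serializable structure works.
reading = {"sensor_id": 42, "temp_c": 21.5, "flags": ["ok"]}

# One row per object: (key, serialized blob) instead of one row per
# attribute, which is what an EAV layout would require.
key = f"reading:{reading['sensor_id']}"
value = json.dumps(reading, sort_keys=True)
row = (key, value)

# Round-trip: the stored blob deserializes back to the original object.
assert json.loads(row[1]) == reading
```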
--
On 11/05/11 05:34, Aren Cambre wrote:
> Using one thread, the app can do about 111 rows per second, and it's
> only exercising 1.5 of 8 CPU cores while doing this. 12,000,000 rows /
> 111 rows per second ~= 30 hours.
I don't know how I missed that. You ARE maxing out one cpu core, so
you're quite
On 05/11/2011 05:34 AM, Aren Cambre wrote:
> Using one thread, the app can do about 111 rows per second, and it's
> only exercising 1.5 of 8 CPU cores while doing this. 12,000,000 rows /
> 111 rows per second ~= 30 hours.
>
> I hoped to speed things up with some parallel processing.
>
> When the a
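One common way to get that parallelism, assuming the table has a dense integer primary key, is to split it into non-overlapping id ranges and hand each range to its own worker process, each with its own connection. A minimal sketch, with the per-row work stood in by arithmetic:

```python
from concurrent.futures import ProcessPoolExecutor

def process_range(bounds):
    lo, hi = bounds
    # Stand-in for the real per-row work; the actual app would open its
    # own connection here and SELECT rows with lo <= id < hi.
    return sum(range(lo, hi))

def split(n_rows, n_workers):
    """Partition [0, n_rows) into n_workers contiguous half-open ranges."""
    step = n_rows // n_workers
    return [(i * step, (i + 1) * step if i < n_workers - 1 else n_rows)
            for i in range(n_workers)]

if __name__ == "__main__":
    ranges = split(1_000_000, 4)
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(process_range, ranges))
    assert total == sum(range(1_000_000))
```

Disjoint ranges mean the workers never contend for the same rows, so throughput scales until the database or I/O becomes the bottleneck.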
On Tue, May 10, 2011 at 12:56 PM, Pierre C wrote:
>
> While reading about NoSQL,
>
>> MongoDB lets you store and search JSON objects. In that case, you don't
>> need to have the same "columns" in each "row"
>
> The following ensued. Isn't it cute?
>
> CREATE TABLE mongo ( id SERIAL PRIMARY KEY, o
On 05/10/2011 01:28 PM, Tory M Blue wrote:
AMD Opteron(tm) Processor 4174 HE vs Intel(R) Xeon(R) CPU E5345 @ 2.33GHz
I'm wondering if there is a performance difference running postgres on
fedora on AMD vs Intel (the 2 listed above).
I have an 8-way Intel Xeon box and a 12-way AMD box and
While reading about NoSQL,
MongoDB lets you store and search JSON objects. In that case, you don't
need to have the same "columns" in each "row"
The following ensued. Isn't it cute?
CREATE TABLE mongo ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );
INSERT INTO mongo (obj) SELECT ('a=>'||n|
Dne 10.5.2011 18:22, Shaun Thomas napsal(a):
> On 05/10/2011 10:06 AM, Maciek Sakrejda wrote:
>
>>> I have an 8-core server; I wanted to ask whether a query can be divided
>>> across multiple processors or cores, and if it can, what to do in PostgreSQL
>>
>> No, at this time (and for the foreseeable fut
[ woops, accidentally replied off-list, trying again ]
On Tue, May 10, 2011 at 1:47 PM, Maria L. Wilson
wrote:
> thanks for taking a look at this and it's never too late!!
>
> I've tried bumping up work_mem and did not see any improvements -
> All the indexes you asked about do exist; see
Greg Smith wrote:
On 05/09/2011 11:13 PM, Shaun Thomas wrote:
Take a look at /proc/sys/vm/dirty_ratio and
/proc/sys/vm/dirty_background_ratio if you have an older Linux
system, or /proc/sys/vm/dirty_bytes, and
/proc/sys/vm/dirty_background_bytes with a newer one.
On older systems for instance,
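A small sketch of inspecting those writeback knobs, assuming a Linux /proc filesystem (the helper returns None elsewhere):

```python
def read_vm_setting(name):
    """Read a Linux writeback knob, e.g. 'dirty_ratio' or
    'dirty_background_ratio' (percent of RAM) on older kernels, or
    'dirty_bytes' / 'dirty_background_bytes' on newer ones.
    Returns None if the knob does not exist on this system."""
    try:
        with open(f"/proc/sys/vm/{name}") as f:
            return int(f.read())
    except (OSError, ValueError):
        return None

for knob in ("dirty_ratio", "dirty_background_ratio",
             "dirty_bytes", "dirty_background_bytes"):
    print(knob, "=", read_vm_setting(knob))
```

On a kernel that uses the *_ratio pair, the *_bytes pair reads as 0, and vice versa, which is why the sketch prints all four.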
On Tue, Apr 5, 2011 at 3:25 PM, Maria L. Wilson
wrote:
> Would really appreciate someone taking a look at the query below Thanks
> in advance!
>
>
> this is on a linux box...
> Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37
> EST 2009 x86_64 x86_64 x86_64 GNU/Linux
AMD Opteron(tm) Processor 4174 HE vs Intel(R) Xeon(R) CPU E5345 @ 2.33GHz
I'm wondering if there is a performance difference running postgres on
fedora on AMD vs Intel (the 2 listed above).
I have an 8-way Intel Xeon box and a 12-way AMD box and was thinking
about migrating to the new AMD b
On 05/10/2011 10:06 AM, Maciek Sakrejda wrote:
I have an 8-core server; I wanted to ask whether a query can be divided
across multiple processors or cores, and if it can, what to do in PostgreSQL
No, at this time (and for the foreseeable future), a single query will
run on a single core.
It can *k
> I have an 8-core server; I wanted to ask whether a query can be divided
> across multiple processors or cores, and if it can, what to do in PostgreSQL
No, at this time (and for the foreseeable future), a single query will
run on a single core.
---
Maciek Sakrejda | System Architect | Truviso
1065 E.
On 05/10/2011 03:01 AM, AI Rumman wrote:
I am using PostgreSQL 8.2.13 and I found that most of the commits and
insert or update statements are taking more than 4s in the db, and the
app's performance is slow as a result.
My db settings are as follows;
bgwriter_all_maxpages | 300 |
bgwriter_all
2011/5/10 Greg Smith :
> On 05/09/2011 11:13 PM, Shaun Thomas wrote:
>>
>> Take a look at /proc/sys/vm/dirty_ratio and
>> /proc/sys/vm/dirty_background_ratio if you have an older Linux system, or
>> /proc/sys/vm/dirty_bytes, and /proc/sys/vm/dirty_background_bytes with a
>> newer one.
>> On older s
On May 9, 2011, at 4:50 PM, Merlin Moncure wrote:
hm, if it was me, I'd write a small C program that just jumped
directly on the device around and did random writes assuming it wasn't
formatted. For sequential read, just flush caches and dd the device
to /dev/null. Probably someone will sugge
On Mon, May 9, 2011 at 9:40 PM, Aren Cambre wrote:
>> how are you reading through the table? if you are using OFFSET, you
>> owe me a steak dinner.
>>
>
> Nope. :-)
> Below is my exact code for the main thread. The C# PLINQ statement is
> highlighted. Let me know if I can help to explain this.
>
>
> Any idea how to improve the performance?
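The OFFSET quip above refers to a common trap: OFFSET-based batching rescans and discards ever more rows per batch, so the whole pass degrades quadratically. Keyset pagination (remember the last key, then WHERE id > last ORDER BY id LIMIT n) keeps every batch cheap. A sketch simulating the pattern over an in-memory list:

```python
def fetch_batch(rows, last_id, limit):
    """Simulates keyset pagination over a sorted list of (id, payload)
    rows; the SQL analogue is WHERE id > last_id ORDER BY id LIMIT n."""
    return [r for r in rows if r[0] > last_id][:limit]

rows = [(i, f"payload-{i}") for i in range(1, 101)]

last_id, seen = 0, []
while True:
    batch = fetch_batch(rows, last_id, 10)
    if not batch:
        break
    seen.extend(batch)
    last_id = batch[-1][0]  # carry the last key forward, not an offset

assert [r[0] for r in seen] == list(range(1, 101))
```

Against a real table, the id > last_id predicate turns each batch into a short index range scan instead of a skip over everything already read.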
Hmmm, I guess we'll need more info about resource usage (CPU, I/O, locks)
used when the commit happens. Run these two commands
$ iostat -x 1
$ vmstat 1
and then execute the commit. See what's causing problems. Is the drive
utilization close to 100%? Yo
Hi Florian,
Sorry for the late reply - it took almost a day to dump & reload the
data into 9.1b1.
How can I get Postgres to use the indexes when querying the master
table?
I believe that this is a new feature in PostgreSQL 9.1 ("Allow
inheritance table queries to return meaningfully-sorted r
On Mon, May 9, 2011 at 10:32 PM, Chris Hoover wrote:
> So, does anyone have any suggestions/experiences in benchmarking storage
> when the storage is smaller then 2x memory?
Try writing a small python script (or C program) to mmap a large chunk
of memory, with MAP_LOCKED, this will keep it in RAM
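A sketch of the map-and-populate idea in Python; note that MAP_LOCKED is Linux-specific and not portably exposed by the mmap module, so this only touches pages to make them resident rather than pinning them:

```python
import mmap

def touch_pages(size):
    """Map anonymous memory and write one byte per page so every page
    becomes resident. Real pinning (mlock()/MAP_LOCKED, as suggested in
    the post) is not portably exposed by Python's mmap module, so this
    only populates the mapping."""
    buf = mmap.mmap(-1, size)
    for off in range(0, size, mmap.PAGESIZE):
        buf[off] = 1
    pages = size // mmap.PAGESIZE
    buf.close()
    return pages

touch_pages(1 << 20)  # 1 MB sketch; the post means a much larger chunk
```

For genuine pinning, a small C program calling mmap(2) with MAP_LOCKED (or mlock(2) afterwards) is the more direct route.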
Yes, it has something to do with Hot Standby: if you omit some parts of the
archive, then the standby instance will not have the necessary files and
will complain like this.
I kept the FusionIO drive in my checklist while attending to this issue, as
we tried it looking for performance combined with read
I am using PostgreSQL 8.2.13 and I found that most of the commits and insert
or update statements are taking more than 4s in the db, and the app's
performance is slow as a result.
My db settings are as follows;
bgwriter_all_maxpages | 300 |
bgwriter_all_percent | 15 |
bgwriter_delay