Exactly, it's about the concurrency. I have a server with 128G ram
but it runs dozens of queries at a time for hundreds of clients a
second. The chance that something big for work_mem might jump up and
bite me is pretty good there. Even so, at 16Meg it's not really big
for that machine, and I m
The full set of conf changes that were in use during these tests are as
follows:
default_statistics_target = 100 # pgtune wizard 2010-08-17
maintenance_work_mem = 1GB # pgtune wizard 2010-08-17
constraint_exclusion = on # pgtune wizard 2010-08-17
checkpoint_completion_target = 0.9 # pgtune wizard 2010-08-17
On Wed, Aug 18, 2010 at 11:14 PM, Samuel Gendler wrote:
> Please forgive the barrage of questions. I'm just learning how to tune
> things in postgres and I've still got a bit of learning curve to get over,
> apparently. I have done a lot of reading, though, I swear.
>
> I've got two identical qu
Yeah, although with 48GB of available memory and not that much concurrency,
I'm not sure it matters that much. But point taken, I'll see about modifying
the app such that work_mem gets set on a per-query basis.
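A minimal sketch of setting work_mem for a single query, as discussed above (table and column names are hypothetical):

```sql
-- SET LOCAL applies only inside this transaction and reverts
-- automatically at COMMIT or ROLLBACK, so the server-wide
-- work_mem stays safe for the other connections:
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT * FROM big_fact_table ORDER BY event_time;  -- the expensive sort
COMMIT;
```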
On Wed, Aug 18, 2010 at 11:24 PM, Scott Marlowe wrote:
> On Wed, Aug 18, 2010 at 11:4
On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler wrote:
> Answered my own question. Cranking work_mem up to 350MB revealed that
> the in-memory sort requires more memory than the disk sort.
Note that unless you run VERY few client connections, it's usually
better to leave work_mem somewhere in t
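Besides per-query SET, the "keep the global setting modest, raise it selectively" advice can be applied per role or per database; a sketch with hypothetical role and database names:

```sql
-- Give one reporting role a larger work_mem without touching the
-- server-wide default in postgresql.conf:
ALTER ROLE reporting_user SET work_mem = '256MB';

-- Or scope the override to a single database:
ALTER DATABASE analytics SET work_mem = '128MB';
```

New sessions for that role (or in that database) pick up the override automatically, so the application code doesn't need changing.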
Please forgive the barrage of questions. I'm just learning how to tune
things in postgres and I've still got a bit of learning curve to get over,
apparently. I have done a lot of reading, though, I swear.
I've got two identical queries except for a change of one condition which
cuts the number o
Hello
>>
>> I'm not understanding why it is sorting on disk if it would fit within
>> a work_mem segment - by a fairly wide margin. Is there something else
>> I can do to get that sort to happen in memory?
>>
The planner works with estimates, so there is some probability that the
planner expected a lar
Answered my own question. Cranking work_mem up to 350MB revealed that
the in-memory sort requires more memory than the disk sort.
On Wed, Aug 18, 2010 at 10:23 PM, Samuel Gendler wrote:
> I've got this explain: http://explain.depesz.com/s/Xh9
>
> And these settings:
> default_statistics_target = 50 # pgtune wizard 2010-08-17
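The sort strategy the executor actually used shows up in EXPLAIN ANALYZE output, which is a quick way to check whether a given work_mem keeps a sort in memory (table and column names are hypothetical, sample figures illustrative):

```sql
EXPLAIN ANALYZE SELECT * FROM big_fact_table ORDER BY event_time;
-- The Sort node reports one of (values vary):
--   Sort Method: external merge  Disk: 120000kB
--   Sort Method: quicksort  Memory: 350000kB
-- An in-memory sort can need several times the on-disk footprint,
-- since tuples are kept fully expanded in memory rather than in
-- the compact external-merge format.
```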
I've got this explain: http://explain.depesz.com/s/Xh9
And these settings:
default_statistics_target = 50 # pgtune wizard 2010-08-17
maintenance_work_mem = 1GB # pgtune wizard 2010-08-17
constraint_exclusion = on # pgtune wizard 2010-08-17
checkpoint_completion_target = 0.9 # pgtune wizard 2010-08-17
On Wed, Aug 18, 2010 at 3:42 PM, Tom Lane wrote:
> Saadat Anwar writes:
> > I am having severe COPY performance issues after adding indices. What
> used
> > to take a few minutes (without indices) now takes several hours (with
> > indices). I've tried to tweak the database configuration (based on
Saadat Anwar wrote:
3. 8T RAID5 partition for database on a Dell PERC 5/E controller
(I understand that I'll never get fast inserts/updates on it based on
http://wiki.postgresql.org/wiki/SlowQueryQuestions but cannot change
to a RAID0+1 for now).
Database's filesystem is ext4 on LVM on
Saadat Anwar writes:
> I am having severe COPY performance issues after adding indices. What used
> to take a few minutes (without indices) now takes several hours (with
> indices). I've tried to tweak the database configuration (based on Postgres
> documentation and forums), but it hasn't helped
I am having severe COPY performance issues after adding indices. What used
to take a few minutes (without indices) now takes several hours (with
indices). I've tried to tweak the database configuration (based on Postgres
documentation and forums), but it hasn't helped as yet. Perhaps, I haven't
inc
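One common way to make a bulk COPY into an already-indexed table much faster is to drop the secondary indexes, load, and rebuild them once at the end; a sketch with hypothetical table, index, and file names:

```sql
-- Rebuilding an index once after the load is far cheaper than
-- maintaining it row by row during COPY:
DROP INDEX IF EXISTS measurements_ts_idx;

-- Server-side file path (hypothetical); COPY FROM a file
-- requires appropriate privileges:
COPY measurements FROM '/tmp/measurements.csv' WITH CSV;

CREATE INDEX measurements_ts_idx ON measurements (ts);
ANALYZE measurements;  -- refresh planner statistics after the load
```

The primary-key index generally has to stay in place to enforce uniqueness during the load, so this helps most when several secondary indexes exist.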
On Wed, Aug 18, 2010 at 1:25 PM, Yeb Havinga wrote:
> Samuel Gendler wrote:
>>
>> When running pgbench on a db which fits easily into RAM (10% of RAM =
>> -s 380), I see transaction counts a little less than 5K. When I go to
> 90% of RAM (-s 3420), transaction rate dropped to around 1000 (at a
I am. I was giving mean numbers
On Wed, Aug 18, 2010 at 12:56 PM, Craig James wrote:
> On 8/18/10 12:24 PM, Samuel Gendler wrote:
>>
>> With barriers off, I saw a transaction rate of about 1200. With
>> barriers on, it was closer to 1050. The test had a concurrency of 40
>> in both cases.
>
>
Samuel Gendler wrote:
When running pgbench on a db which fits easily into RAM (10% of RAM =
-s 380), I see transaction counts a little less than 5K. When I go to
90% of RAM (-s 3420), transaction rate dropped to around 1000 (at a
fairly wide range of concurrencies). At that point, I decided to
On 8/18/10 12:24 PM, Samuel Gendler wrote:
With barriers off, I saw a transaction rate of about 1200. With
barriers on, it was closer to 1050. The test had a concurrency of 40
in both cases.
I discovered there is roughly 10-20% "noise" in pgbench results after running
the exact same test over
I'm just starting the process of trying to tune new hardware, which is
2x quad core xeon, 48GB RAM, 8x300GB SAS 15K drives in RAID 1+0,
2x72GB 15K SAS drives in RAID 1 for WAL and system. It is a PERC 6/i
card with BBU. Write-back cache is enabled. The system volume is
ext3. The large data partition
On 18 August 2010 17:06, Justin Graf wrote:
> On 8/18/2010 9:15 AM, Clemens Eisserer wrote:
>> Hi,
>>
>>
>>> they are generated automatically.
>>>
>> Thanks depesz!
>> The reason why I asked was because pgAdmin doesn't display the
>> automatically created indices, which confused me.
>>
>> Thanks,
On 8/18/2010 9:15 AM, Clemens Eisserer wrote:
> Hi,
>
>
>> they are generated automatically.
>>
> Thanks depesz!
> The reason why I asked was because pgAdmin doesn't display the
> automatically created indices, which confused me.
>
> Thanks, Clemens
>
PGAdmin caches all database layout locally, the
Clemens Eisserer wrote:
> I looked at the "Indexes and ORDER BY" which doesn't mention it
That doesn't seem like an appropriate place to discuss when indexes
are created. Do you think you'd have found a mention in the
Introduction page for indexes? Since we talk about the CREATE INDEX
statement
Clemens Eisserer wrote:
> Its quite harsh to imply I didn't look for documentation.
Sorry; I didn't mean to be harsh. PostgreSQL has excellent
documentation, and we strive to make it better all the time.
Sometimes people coming from some other products aren't used to that
-- I was just trying
Hi,
> they are generated automatically.
Thanks depesz!
The reason why I asked was because pgAdmin doesn't display the
automatically created indices, which confused me.
Thanks, Clemens
PS:
> If you look at the documentation page for CREATE TABLE, you'll see
> the following … but if you use
Clemens Eisserer wrote:
> Are indices for columns marked with "PRIMARY KEY" automatically
> generated by postgresql, or do I have to do it manually?
If you look at the documentation page for CREATE TABLE, you'll see
the following:
| PostgreSQL automatically creates an index for each unique
|
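The documented behavior is easy to verify directly; a small sketch (table name is illustrative):

```sql
-- A PRIMARY KEY constraint implicitly creates a unique index,
-- named <table>_pkey by default:
CREATE TABLE accounts (id integer PRIMARY KEY, name text);

-- The implicit index is visible in the pg_indexes view:
SELECT indexname FROM pg_indexes WHERE tablename = 'accounts';
-- returns accounts_pkey
```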
On Wed, Aug 18, 2010 at 03:51:22PM +0200, Clemens Eisserer wrote:
> Hi,
>
> Are indices for columns marked with "PRIMARY KEY" automatically generated by
> postgresql, or do I have to do it manually?
> The question might seem dumb, I ask because I remember from working with
> MySQL it generates indices automatically in this case.
Hi,
Are indices for columns marked with "PRIMARY KEY" automatically generated by
postgresql, or do I have to do it manually?
The question might seem dumb, I ask because I remember from working with
MySQL it generates indices automatically in this case.
Thank you in advance, Clemens
Alexandre de Arruda Paes writes:
> I know the problem with VACUUM FULL and bloated Indexes, but I don't
> understand why the table that is not in use by anybody can't be vacuumed or
> clustered to avoid dead tuples.
There's an open transaction somewhere that VACUUM is preserving the
tuples for. T
I'm forwarding this email to the list again, because Scott and I were
unfortunately talking off-list. (Thanks, Scott.)
>So what do:
>select * from pg_stat_activity where current_query ilike '%transaction%';
>and
>select * from pg_stat_activity where now() - query_start > '1
minute'::interval;
>say?
>You sh
On Tue, Aug 17, 2010 at 7:54 PM, Aaron Burnett wrote:
> Yeah, missed a '.', it's 8.2.5
CentOS 5.5 has PostgreSQL 8.4.4 available from the main repository.
You might consider an upgrade.
If you can cite a specific device that draws more than 10% of the equivalently
performing (e.g., short stroked) array, I would be very interested. There may
be a DRAM SSD that draws more than a flash SSD, but I'd be really surprised to
find a flash SSD that draws the same as any HDD, even at gr