Tory M Blue wrote:
> Any assistance would be appreciated, don't worry about slapping me
> around I need to figure this out. Otherwise I'm buying new hardware
> where it may not be required.
What is the reporting query that takes 26 hours? You didn't seem to
include it, or any query plan information
elias ghanem wrote:
> Actually this query is inside a function and this function is called
> from a .sh file using the following syntax: psql -h $DB_HOST -p $DB_PORT
> -d $DB_NAME -U $DB_USER -c "SELECT testupdate()"
>
> (the function is called 100 times, with a VACUUM ANALYZE after each call)
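A minimal sketch of such a driver script, assuming the connection variables shown above are exported by the caller. PSQL defaults to an echo stub here so the loop can be dry-run without a live server; drop the leading "echo" to execute for real:

```shell
#!/bin/sh
# Call testupdate() 100 times, with a VACUUM ANALYZE after each call.
# PSQL defaults to an echo stub so this sketch is runnable without a
# server; remove the leading "echo" to issue the statements for real.
PSQL=${PSQL:-"echo psql -h $DB_HOST -p $DB_PORT -d $DB_NAME -U $DB_USER"}
for i in $(seq 1 100); do
    $PSQL -c "SELECT testupdate()"
    $PSQL -c "VACUUM ANALYZE"
done
```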
Pierre Frédéric Caillaud wrote:
Does postgres write something to the logfile whenever a fsync()
takes a suspiciously long amount of time ?
Not specifically. If you're logging statements that take a while, you
can see this indirectly, as commits that just take much longer than usual.
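One way to surface those slow commits indirectly is statement duration logging; a minimal postgresql.conf fragment (the threshold value here is an arbitrary example):

```
# postgresql.conf -- log any statement (including COMMIT) that runs
# longer than 250 ms; -1 disables, 0 logs every statement
log_min_duration_statement = 250
```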
tmp wrote:
Does anyone know if there exists an open source implementation of the
TPC-C benchmark for postgresql somewhere?
http://osdldbt.sourceforge.net/
http://wiki.postgresql.org/wiki/DBT-2
http://www.slideshare.net/markwkm/postgresql-portland-performance-practice-project-database-test-2-h
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Hi Jeff,
> Are you running VACUUM (without FULL) regularly? And if so, is that
> insufficient?
>
Unfortunately, we have not run vacuumlo as often as we would like, and that
has caused a lot of garbage blobs to get generated by our application.
You can always expect some degree of bloat. Can you gi
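For reference, a sketch of a vacuumlo pass over such a database. The connection parameters are placeholders, and the command defaults to an echo stub here so the sketch can be dry-run; drop the "echo" to run it for real:

```shell
#!/bin/sh
# vacuumlo removes orphaned large objects ("garbage blobs").
# VACUUMLO defaults to an echo stub; drop the "echo" to run for real.
VACUUMLO=${VACUUMLO:-"echo vacuumlo"}
# -n reports what would be removed without removing anything
$VACUUMLO -n -h "$DB_HOST" -U "$DB_USER" "$DB_NAME"
# a second pass without -n actually deletes the orphaned large objects
$VACUUMLO -h "$DB_HOST" -U "$DB_USER" "$DB_NAME"
```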
The issue we are seeing, besides the reports taking over 26 hours, is
that the query seems to be CPU bound: it consumes an entire CPU and quite
often sits at 65%-90% WAIT. This is not iowait; the disks are fine,
5000-6000 tps, 700K reads, etc., with maybe 10
On Thu, 2010-01-21 at 13:44 -0300, Alvaro Herrera wrote:
> I think Devrim publishes debuginfo packages which you need to install
> separately.
Right.
--
Devrim GÜNDÜZ, RHCE
Command Prompt - http://www.CommandPrompt.com
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://
"elias ghanem" wrote:
> here's more details as you requested
You didn't include an EXPLAIN ANALYZE of the UPDATE statement.
> -The version of postgres is 8.4 (by the way select pg_version() is
> not working but let's concentrate on the query issue)
As far as I know, there is no pg_version() function; the built-in you
probably want is version().
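A sketch of what such a submission might look like: the SET and WHERE clauses are invented placeholders, the table name is taken from the post below, and the transaction is rolled back so the timed UPDATE leaves no trace. The version check uses version(), which does exist. PSQL defaults to an echo stub so the sketch is dry-runnable; drop the "echo" to run against a real server:

```shell
#!/bin/sh
# Table in_sortie is from the thread; the SET/WHERE clauses are
# invented placeholders. PSQL defaults to an echo stub.
PSQL=${PSQL:-"echo psql -d $DB_NAME"}
$PSQL -c "SELECT version()"   # works, unlike the nonexistent pg_version()
$PSQL -c "BEGIN; EXPLAIN ANALYZE UPDATE in_sortie SET valide = 'O' WHERE date_sortie < '2010-01-01'; ROLLBACK;"
```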
Hi,
Thanks for your help, here's more details as you requested:
-The version of postgres is 8.4 (by the way select pg_version() is not
working but let's concentrate on the query issue)
Here's the full definition of the table with its indices:
-- Table: in_sortie
-- DROP TABLE in_sortie;
On Thu, Jan 21, 2010 at 9:44 AM, Alvaro Herrera
wrote:
> Scott Marlowe escribió:
>> On Thu, Jan 21, 2010 at 8:51 AM, Alvaro Herrera
>> wrote:
>> > Scott Marlowe escribió:
>> >> On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
>> >> wrote:
>> >
>> >> > 4) Is this the right PG version for our needs?
Scott Marlowe escribió:
> On Thu, Jan 21, 2010 at 8:51 AM, Alvaro Herrera
> wrote:
> > Scott Marlowe escribió:
> >> On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
> >> wrote:
> >
> >> > 4) Is this the right PG version for our needs?
> >>
> >> 8.3 is very stable. Update to the latest. 8.4 seems good, but I've
> >> had, and still am having, problems with it crashing in production.
On Thu, Jan 21, 2010 at 8:51 AM, Alvaro Herrera
wrote:
> Scott Marlowe escribió:
>> On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
>> wrote:
>
>> > 4) Is this the right PG version for our needs?
>>
>> 8.3 is very stable. Update to the latest. 8.4 seems good, but I've
>> had, and still am having, problems with it crashing in production.
Now, with ext4 moving to full barrier/fsync support, we could get to the
point where WAL in the main data FS can mimic the state where WAL is
separate, namely that WAL writes can "jump the queue" and be written
without waiting for the data pages to be flushed down to disk, but also
that you'll g
* Matthew Wakeling:
> The data needs to be written first to the WAL, in order to provide
> crash-safety. So you're actually writing 1600MB, not 800.
In addition to that, PostgreSQL 8.4.2 seems to pre-fill TOAST files (or
all relations?) with zeros when they are written first, which adds
another 400
Aidan Van Dyk wrote:
> But, I think that's one of the reasons people usually recommend
> putting WAL separate. Even if it's just another partition on the
> same (set of) disk(s), you get the benefit of not having to wait
> for all the dirty ext3 pages from your whole database FS to be
> flushed
Scott Marlowe escribió:
> On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
> wrote:
> > 4) Is this the right PG version for our needs?
>
> 8.3 is very stable. Update to the latest. 8.4 seems good, but I've
> had, and still am having, problems with it crashing in production.
Not often, maybe
"elias ghanem" wrote:
> I'm not sure this is the right place to ask my question
Yes it is. You gave a lot of good information, but we'd have a
better shot at diagnosing the issue with a bit more. Please read
the following and resubmit with as much of the requested information
as you can. No
Hi,
I'm not sure this is the right place to ask my question, so please if it is
not let me know where I can get an answer from.
I'm using PostgreSQL 8.4 on a Linux machine with 1.5 GB RAM, and I'm
issuing an update query with a WHERE clause that updates approximately
100,000 rows in a table containing
* Greg Smith [100121 09:49]:
> Aidan Van Dyk wrote:
>> Sure, if your WAL is on the same FS as your data, you're going to get
>> hit, and *especially* on ext3...
>>
>> But, I think that's one of the reasons people usually recommend putting
>> WAL separate.
>
> Separate disks can actually concentrate the problem.
Aidan Van Dyk wrote:
Sure, if your WAL is on the same FS as your data, you're going to get
hit, and *especially* on ext3...
But, I think that's one of the reasons people usually recommend putting
WAL separate.
Separate disks can actually concentrate the problem. The writes to the
data disk b
* Greg Smith:
> Note the comment from the first article saying "those delays can be 30
> seconds or more". On multiple occasions, I've measured systems with
> dozens of disks in a high-performance RAID1+0 with battery-backed
> controller that could grind to a halt for 10, 20, or more seconds in
>
* Greg Smith [100121 00:58]:
> Greg Stark wrote:
>>
>> That doesn't sound right. The kernel having 10% of memory dirty
>> doesn't mean there's a queue you have to jump at all. You don't get
>> into any queue until the kernel initiates write-out which will be
>> based on the usage counters --
On Thu, 21 Jan 2010, Greg Smith wrote:
In the attachment you'll find 2 screenshots perfmon34.png
and perfmon35.png (I hope 2x14 kb is o.k. for the mailing
list).
I don't think they made it to the list?
No, it seems that no emails with image attachments ever make it through
the list server.
On Wed, 20 Jan 2010, Greg Smith wrote:
Basically, to an extent, that's right. However, when you get 16 drives or
more into a system, then it starts being an issue.
I guess if I test a system with *only* 16 drives in it one day, maybe I'll
find out.
*Curious* What sorts of systems have you tr
Both of those refer to the *drive* cache.
greg
On 21 Jan 2010 05:58, "Greg Smith" wrote:
Greg Stark wrote:
> That doesn't sound right. The kernel having 10% of memory dirty doesn't mean...
Most safe ways ext3 knows how to initiate a write-out on something that must
go (because it's gotten a
Scott Carey wrote:
On Jan 20, 2010, at 5:32 AM, fka...@googlemail.com wrote:
In the attachment you'll find 2 screenshots perfmon34.png
and perfmon35.png (I hope 2x14 kb is o.k. for the mailing
list).
I don't think they made it to the list? I didn't see it, presumably
Scott got a d
On Jan 20, 2010, at 5:32 AM, fka...@googlemail.com wrote:
>
>> Bulk inserts into an indexed table is always significantly
>> slower than inserting unindexed and then indexing later.
>
> Agreed. However, shouldn't this be included in the disk-time
> counters? If so, it should be near 100%.
>
W