I've had to write queries to get trial balance values out of the GL
transaction table and I'm not happy with its performance. The table has
76K rows, growing about 1,000 rows per working day, so the performance is
not that great: it takes about 20 to 30 seconds to get all the records
for the
PFC wrote:
I've had to write queries to get trial balance values out of the GL
transaction table and I'm not happy with its performance. The table
has 76K rows, growing about 1,000 rows per working day, so the
performance is not that great: it takes about 20 to 30 seconds to get
all the
What is a period? Is it a month, or something more custom?
Can periods overlap?
No, periods can never overlap. If they did, you would be in
violation of many tax laws around the world. Plus, you would not know
how much money you are making or losing.
I was wondering if
On Tue, 2008-05-06 at 03:01 +0100, Justin wrote:
I've had to write queries to get trial balance values out of the GL
transaction table and I'm not happy with its performance
Go ahead and give this a try:
SELECT p.period_id, p.period_start, p.period_end, a.accnt_id,
a.accnt_number,
It worked. It had a couple of missing parts, but it worked and ran in 3.3
seconds. *Thanks for this!*
I need to review the result and balance it against my numbers, as the
accountant has already gone through and balanced some accounts by hand to
verify my results.
You might want to consider a
PFC wrote:
What is a period? Is it a month, or something more custom?
Can periods overlap?
No, periods can never overlap. If they did, you would be in
violation of many tax laws around the world. Plus, you would not
know how much money you are making or losing.
I
Justin --
You wrote:
I've had to write queries to get trial balance values out of the GL
transaction table and I'm not happy with its performance
The table has 76K rows, growing about 1,000 rows per working day, so the
performance is not that great: it takes about 20 to 30 seconds to
Yes, the cross join is intentional.
Thanks, creating the two-column index dropped processing time to 15 to 17
seconds,
and put per-period queries down to 1 second.
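The thread doesn't show the index definition, but the idea is a single composite index matching the equality column plus the range column of the per-period filter. A minimal sketch, again using SQLite for a runnable illustration and hypothetical column names (accnt_id, trans_date); in Postgres the equivalent would be `CREATE INDEX ... ON gl_trans (accnt_id, trans_date)`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE gl_trans (accnt_id INTEGER, trans_date TEXT, amount REAL)")

# Two-column index: equality column first, range column second, so the
# planner can satisfy "accnt_id = ? AND trans_date BETWEEN ? AND ?"
# with a single index range scan instead of a full table scan.
cur.execute("CREATE INDEX gl_trans_accnt_date_idx ON gl_trans (accnt_id, trans_date)")

plan = cur.execute("""
EXPLAIN QUERY PLAN
SELECT SUM(amount) FROM gl_trans
WHERE accnt_id = 10
  AND trans_date BETWEEN '2008-01-01' AND '2008-01-31'
""").fetchall()
print(plan)
```

The plan output should show a search using `gl_trans_accnt_date_idx` rather than a scan of the table.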
Scott Marlowe wrote:
You're joining these two tables, period and accnt, but I'm not seeing an
ON () clause or a WHERE clause joining them. Is the
Gregory Williamson wrote:
Justin --
You wrote:
I've had to write queries to get trial balance values out of the GL
transaction table and I'm not happy with its performance
The table has 76K rows, growing about 1,000 rows per working day, so the
performance is not that great: it takes
I have found that while the OS may flush to the controller fast with
fsync=true, the controller does as it pleases (it has a BBU, so I'm not
too worried), so you get great performance because your controller can
determine the read/write sequence outside of what is being demanded by an
fsync.
Alex Turner
-----Original Message-----
From: Luke Lonergan [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 01, 2005 9:38 AM
To: Ulrich Wisser; pgsql-performance@postgresql.org
Cc: Nicholas E. Wakefield; Barry Klawans; Daria Hutchinson
Subject: Re: [PERFORM] Need for speed 3
Ulrich,
On 9/1/05 6:25 AM, Ulrich
Ulrich wrote:
Hi again,
first I want to say ***THANK YOU*** for everyone who kindly shared
their
thoughts on my hardware problems. I really appreciate it. I started to
look for a new server and I am quite sure we'll get a serious hardware
update. As suggested by some people I would like
Hi Merlin,
schemas would be helpful.
Right now I would like to know if my approach to the problem makes
sense, or if I should rework the whole procedure of import and aggregation.
Just a thought: have you considered having apache logs write to a
process that immediately makes insert
Hi Merlin,
Just a thought: have you considered having apache logs write to a
process that immediately makes insert query(s) to postgresql?
Yes, we have considered that, but dismissed the idea very soon. We need
Apache to be as responsive as possible. It's a two-server setup with
load
Ulrich,
On 9/1/05 6:25 AM, Ulrich Wisser [EMAIL PROTECTED] wrote:
My application basically imports Apache log files into a Postgres
database. Every row in the log file gets imported in one of three (raw
data) tables. My columns are exactly as in the log file. The import is
run approx. every
On Thu, 25 Aug 2005 09:10:37 +0200
Ulrich Wisser [EMAIL PROTECTED] wrote:
Pentium 4 2.4GHz
Memory 4x DIMM DDR 1GB PC3200 400MHZ CAS3, KVR
Motherboard chipset 'I865G', two IDE channels on board
2x SEAGATE BARRACUDA 7200.7 80GB 7200RPM ATA/100
(software raid 1, system, swap, pg_xlog)
ADAPTEC
At 03:10 AM 8/25/2005, Ulrich Wisser wrote:
I realize I need to be much more specific. Here is a more detailed
description of my hardware and system design.
Pentium 4 2.4GHz
Memory 4x DIMM DDR 1GB PC3200 400MHZ CAS3, KVR
Motherboard chipset 'I865G', two IDE channels on board
First
Putting pg_xlog on the IDE drives gave about a 10% performance
improvement. Would faster disks give more performance?
What my application does:
Every five minutes a new logfile will be imported. Depending on the
source of the request it will be imported in one of three raw click
tables.
On Thu, 2005-08-25 at 11:16 -0400, Ron wrote:
# - Settings -
fsync = false # turns forced synchronization on or off
#wal_sync_method = fsync# the default varies across platforms:
# fsync, fdatasync, open_sync, or
I hope you have a
RRS (http://rrs.decibel.org) might be of use in this case.
On Tue, Aug 16, 2005 at 01:59:53PM -0400, Alex Turner wrote:
Are you calculating aggregates, and if so, how are you doing it? (I ask
the question from experience of a similar application where I found
that my aggregating PL/pgSQL
Ulrich Wisser wrote:
one of our services is click counting for online advertising. We do
this by importing Apache log files every five minutes. This results in a
lot of insert and delete statements.
...
If you are doing mostly inserting, make sure you are in a transaction,
Well, yes,
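The advice above (wrap bulk inserts in a transaction) can be sketched as follows. This is an illustration using SQLite via Python, not the poster's actual import script; the table name raw_clicks is invented. The point carries over to Postgres: one commit per batch means one WAL flush for the whole batch instead of one per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_clicks (ts TEXT, url TEXT)")

# A five-minute log batch, as parsed rows.
rows = [("2005-08-16 12:00:%02d" % (i % 60), "/ad/%d" % i)
        for i in range(10000)]

# One transaction for the whole batch: the context manager commits once
# at the end, instead of paying a commit (and its flush) per row.
with conn:
    conn.executemany("INSERT INTO raw_clicks VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM raw_clicks").fetchone()[0]
print(count)
```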
Hello,
thanks for all your suggestions.
I can see that the Linux system is 90% waiting for disc io. At that time
all my queries are *very* slow. My scsi raid controller and disc are
already the fastest available. The query plan uses indexes and vacuum
analyze is run once a day.
To avoid
Ulrich Wisser [EMAIL PROTECTED] writes:
My machine has 2GB memory, please find postgresql.conf below.
max_fsm_pages = 5 # min max_fsm_relations*16, 6 bytes each
FWIW, that index I've been groveling through in connection with your
other problem contains an astonishingly large
On Wed, 2005-08-17 at 11:15 +0200, Ulrich Wisser wrote:
Hello,
thanks for all your suggestions.
I can see that the Linux system is 90% waiting for disc io. At that time
all my queries are *very* slow. My scsi raid controller and disc are
already the fastest available.
What RAID
Ulrich,
I believe the biggest problem is disc IO. Reports for very recent data
are quite fast; these are used very often and are therefore already in the
cache. But reports can contain (and regularly do) very old data. In that
case the whole system slows down. To me this sounds like the recent data
At 05:15 AM 8/17/2005, Ulrich Wisser wrote:
Hello,
thanks for all your suggestions.
I can see that the Linux system is 90% waiting for disc io.
A clear indication that you need to improve your HD IO subsystem.
At that time all my queries are *very* slow.
To be more precise, your server
Ulrich Wisser wrote:
Hello,
one of our services is click counting for online advertising. We do
this by importing Apache log files every five minutes. This results in a
lot of insert and delete statements. At the same time our customers
should be able to do online reporting.
I need some
Hi,
How much Ram do you have ?
Could you give us your postgresql.conf ? (shared buffer parameter)
If you do lots of delete/insert operations you HAVE to vacuum analyze
your table (especially if you have indexes).
I'm not sure if vacuuming locks your table with pg 7.4.2 (it doesn't with
Ulrich Wisser wrote:
Hello,
one of our services is click counting for online advertising. We do
this by importing Apache log files every five minutes. This results in a
lot of insert and delete statements. At the same time our customers
should be able to do online reporting.
What are you
On Tue, 2005-08-16 at 17:39 +0200, Ulrich Wisser wrote:
Hello,
one of our services is click counting for online advertising. We do
this by importing Apache log files every five minutes. This results in a
lot of insert and delete statements. At the same time our customers
should be able
Are you calculating aggregates, and if so, how are you doing it? (I ask
the question from experience of a similar application where I found
that my aggregating PL/pgSQL triggers were bogging the system down, and
changed them to scheduled jobs instead.)
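The "scheduled job" approach above can be sketched as a periodic rollup: fold the raw click rows into a summary table in one transaction, then clear them, instead of maintaining the aggregate row-by-row from a trigger. This is a runnable illustration using SQLite via Python with invented names (raw_clicks, clicks_by_ad); the original poster was on Postgres with PL/pgSQL triggers.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE raw_clicks   (ad_id INTEGER, ts TEXT);
CREATE TABLE clicks_by_ad (ad_id INTEGER PRIMARY KEY, clicks INTEGER);

INSERT INTO raw_clicks VALUES (1, '12:00'), (1, '12:01'), (2, '12:02');
""")

def rollup(cur):
    # The scheduled job: add the counts accumulated since the last run
    # to the summary table (upsert), then clear the raw rows.
    cur.execute("""
        INSERT INTO clicks_by_ad (ad_id, clicks)
        SELECT ad_id, COUNT(*) FROM raw_clicks GROUP BY ad_id
        ON CONFLICT(ad_id) DO UPDATE SET clicks = clicks + excluded.clicks
    """)
    cur.execute("DELETE FROM raw_clicks")

rollup(cur)
totals = cur.execute(
    "SELECT ad_id, clicks FROM clicks_by_ad ORDER BY ad_id").fetchall()
print(totals)
```

Reporting queries then hit the small clicks_by_ad table, so their cost no longer grows with the raw insert/delete volume.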
Alex Turner
NetEconomist
On 8/16/05, Ulrich
On Tue, 16 Aug 2005, Ulrich Wisser wrote:
Still, when several users are online, the reporting gets very slow.
Queries can take more than 2 min.
Could you show an example of such a query and the output of EXPLAIN ANALYZE
on that query (preferably run when the database is slow)?
It's hard to