Jim Nasby wrote:
While working on determining a good stripe size for a database, I
realized it would be handy to know what the average request size is.
Getting this info is a simple matter of joining pg_stat_all_tables and
pg_statio_all_tables and doing some math, but there's one issue I've
fo
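The join in question might look like the sketch below (against the 8.1 statistics views; note these views count scans and blocks, so "average request size" here can only mean blocks fetched per scan — what exactly counts as a "request" is left open in the thread):

```sql
-- Rough blocks-read-per-scan, per table (illustrative only)
SELECT s.relname,
       io.heap_blks_read + io.idx_blks_read AS blocks_read,
       s.seq_scan + s.idx_scan              AS scans,
       (io.heap_blks_read + io.idx_blks_read)::numeric
         / NULLIF(s.seq_scan + s.idx_scan, 0) AS avg_blocks_per_scan
FROM pg_stat_all_tables   s
JOIN pg_statio_all_tables io USING (relid)
ORDER BY blocks_read DESC;
```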
On Thu, Apr 13, 2006 at 06:26:00PM -0700, patrick keshishian wrote:
> $ dropdb dbname
> $ createdb dbname
> $ pg_restore -vsOd dbname dbname.DUMP
That step is pointless, because the next pg_restore will create the
schema for you anyway.
> $ date > db.restore ; pg_restore -vcOd dbname \
> dbna
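The advice condenses to a one-pass restore along these lines (a sketch; flags as in the thread's pg_restore: -O skips ownership commands, -d restores directly into the named database):

```shell
# One-pass restore: pg_restore recreates the schema itself,
# so the separate schema-only (-s) pass is unnecessary
dropdb dbname
createdb dbname
pg_restore -vOd dbname dbname.DUMP
```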
On Thu, Apr 13, 2006 at 08:36:09PM -0400, Tom Lane wrote:
> Jim Nasby <[EMAIL PROTECTED]> writes:
> > While working on determining a good stripe size for a database, I
> > realized it would be handy to know what the average request size is.
> > Getting this info is a simple matter of joining pg
On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote:
> In RAID 10 would it matter that WAL is in the same RAID set?
> Would it be better:
> 4 disks in RAID10 Data
> 2 disks RAID 1 WAL
> 2 hot spares
Well, benchmark it with your app and find out, but generally speaking
unless your dat
"patrick keshishian" <[EMAIL PROTECTED]> writes:
> With these settings and running:
> pg_restore -vaOd dbname dbname.DUMP
If you had mentioned you were using random nondefault switches, we'd
have told you not to. -a in particular is a horrid idea performance-wise
--- a standard schema-plus-data re
Hi Tom, et al.,
So I changed the following settings in postgresql.conf file and
restarted PostgreSQL and then proceeded with pg_restore:
# new changes for this test-run
log_statement = true
sort_mem = 10240 # default 1024
vacuum_mem = 20480 # default 8192
# from before
checkpoint_segments = 10
lo
PFC <[EMAIL PROTECTED]> writes:
> I guess AFTER COMMIT triggers would be like a NOTIFY, but more
> powerful.
I'll let you in on a secret: NOTIFY is actually a before-commit
operation. This is good enough because it never, or hardly ever,
fails. I would argue that anything you want to do
Jim Nasby <[EMAIL PROTECTED]> writes:
> While working on determining a good stripe size for a database, I
> realized it would be handy to know what the average request size is.
> Getting this info is a simple matter of joining pg_stat_all_tables
> and pg_statio_all_tables and doing some math,
Tom Lane <[EMAIL PROTECTED]> writes:
> Christian Storm <[EMAIL PROTECTED]> writes:
> > Not sure if I follow why this is a problem. Seems like it would be
> > beneficial to have both BEFORE and AFTER COMMIT triggers.
> > With the BEFORE COMMIT trigger you would have the ability to 'un-
> > comm
An AFTER COMMIT trigger would have to be in a separate transaction.
I guess AFTER COMMIT triggers would be like a NOTIFY, but more powerful.
While NOTIFY can't transmit information to another process, this trigger
could, and the other process could then view the results of the commited
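For comparison, the NOTIFY mechanism being discussed looks like this in 8.1-era SQL (channel and table names are made up; NOTIFY then carried no payload, so the listening process re-reads the committed tables itself):

```sql
-- Session 1: the listener
LISTEN jobs_done;

-- Session 2: queue work and signal inside a transaction;
-- the notification is only delivered if the COMMIT succeeds
BEGIN;
INSERT INTO job_results VALUES (42, 'finished');  -- hypothetical table
NOTIFY jobs_done;
COMMIT;
```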
Hi Jim
The problem is fixed. The destination table that was being updated had 3
separate indexes. I combined them to a multi-column index and the effect
was amazing.
Thanks for your input
Sriram
-Original Message-
From: Jim Nasby [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 13, 2006
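The fix Sriram describes, sketched with hypothetical table and column names (the real schema isn't shown in this excerpt):

```sql
-- Replace three single-column indexes with one multi-column index
-- that matches the UPDATE's WHERE clause (all names are made up)
DROP INDEX dest_a_idx;
DROP INDEX dest_b_idx;
DROP INDEX dest_c_idx;
CREATE INDEX dest_abc_idx ON dest_table (a, b, c);
```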
On 4/12/06, Josh Berkus wrote:
> People,
>
> > Lately I find people are not so receptive to VxFS, and Sun is promoting
> > ZFS, and we don't have a reasonable near term option for Raw IO in
> > Postgres, so we need to work to find a reasonable path for Solaris users
> > IMO. The long delays in ZFS
Tom Lane writes:
> Or at least try to do multiple inserts per transaction.
Will see if the program has an option like that.
> Also, increasing checkpoint_segments and possibly wal_buffers helps a
Will try those.
> Try to get the WAL onto a separate disk
> spindle if you can. (These things don'
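The two settings Tom mentions live in postgresql.conf; the values below are illustrative (defaults in comments are the 8.1-era ones):

```
checkpoint_segments = 10   # default 3; fewer forced checkpoints during bulk load
wal_buffers = 64           # default 8 (measured in 8 kB pages)
```

And "multiple inserts per transaction" simply means batching, so the WAL is flushed once per batch instead of once per row:

```sql
BEGIN;
INSERT INTO backup_files VALUES (1, '/etc/passwd');  -- hypothetical table
INSERT INTO backup_files VALUES (2, '/etc/hosts');
COMMIT;
```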
Adding -performance back in...
> From: Steve Poe [mailto:[EMAIL PROTECTED]
> Jim,
>
> I could be way off, but doesn't from pg_statio_user_tables
> contain this
> information?
http://www.postgresql.org/docs/8.1/interactive/monitoring-stats.html#MONITORING-STATS-VIEWS
states:
"numbers of disk b
Chris writes:
> If you can, use copy instead:
> http://www.postgresql.org/docs/8.1/interactive/sql-copy.html
I am familiar with copy.
Can't use it in this scenario.
The data is coming from a program called Bacula (Backup server).
It is not static data.
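For the archive, the suggested alternative looks like this (hypothetical table and file path; as Francisco notes, it doesn't fit Bacula's dynamically generated inserts):

```sql
-- Bulk load from a flat file in one command instead of many INSERTs
COPY file_records (jobid, filename, lstat)
FROM '/tmp/file_records.dat' WITH DELIMITER '|';
```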
On Apr 13, 2006, at 12:38 PM, Tom Lane wrote:
> Christian Storm <[EMAIL PROTECTED]> writes:
>> Not sure if I follow why this is a problem. Seems like it would be
>> beneficial to have both BEFORE and AFTER COMMIT triggers.
>> With the BEFORE COMMIT trigger you would have the ability to 'un-
>> commit' (roll
While working on determining a good stripe size for a database, I
realized it would be handy to know what the average request size is.
Getting this info is a simple matter of joining pg_stat_all_tables
and pg_statio_all_tables and doing some math, but there's one issue
I've found; it appear
Bruce Momjian wrote On 04/13/06 01:39 AM,:
>
> Yes, if someone wants to give us a clear answer on which wal_sync method
> is best on all versions of Solaris, we can easily make that change.
>
We're doing tests to see how various parameters in postgresql.conf
affect performance on Solaris and wi
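For reference, the setting under test accepts these values (availability varies by platform and version; the choice should come out of benchmarks like the ones described, not this sketch):

```
# wal_sync_method: one of open_datasync, fdatasync,
# fsync, fsync_writethrough, open_sync
wal_sync_method = fdatasync   # illustrative, not a recommendation
```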
Christian Storm <[EMAIL PROTECTED]> writes:
> Not sure if I follow why this is a problem. Seems like it would be
> beneficial to have both BEFORE and AFTER COMMIT triggers.
> With the BEFORE COMMIT trigger you would have the ability to 'un-
> commit' (rollback) the transaction. With
> the AFTE
You need to run EXPLAIN ANALYZE. Also, what's random_page_cost set to? And the
output of \d chkpfw_tr_dy_dimension. The cost for that index scan looks way too
high.
And please reply-all so that the list is included.
> -Original Message-
> From: Sriram Dandapani [mailto:[EMAIL PROTECTED]
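The three pieces of information requested can be gathered in psql like so (the problem query itself isn't shown in this excerpt, so a placeholder query stands in):

```sql
SHOW random_page_cost;            -- the planner cost knob in question
\d chkpfw_tr_dy_dimension         -- psql meta-command: columns and indexes
EXPLAIN ANALYZE                   -- runs the query, reports actual rows/times
SELECT count(*) FROM chkpfw_tr_dy_dimension;   -- placeholder query
```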
Cristian Veronesi <[EMAIL PROTECTED]> writes:
> If I try to use a function that returns the current time instead, a
> sequential scan is always performed:
> ...
> Any suggestion?
1. Use something newer than 7.4 ;-)
2. Set up a dummy range constraint, ie
select ... where ora_rif > localti
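The "dummy range constraint" idea, sketched against the dmo table from the question (7.4-era workaround: the planner of that vintage couldn't estimate an open-ended comparison against a function result, but it handled a bounded range):

```sql
-- Turn the open-ended function comparison into a range
-- so the index on ora_rif becomes attractive to the planner
SELECT *
FROM dmo
WHERE ora_rif > localtimestamp
  AND ora_rif < 'infinity';
```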
Hi Markus,
Markus Schaber <[EMAIL PROTECTED]> wrote:
>Hi, Oscar,
>
>Please reply to the list and not privately, so others can learn from
>your replies, and possibly have better ideas than me.
That was my intention. I made a mistake.
>Oscar Picasso wrote:
>>> I cannot group the columns logically. Any column
Gavin Hamill <[EMAIL PROTECTED]> writes:
> If I replace the
> (allocation0_."Date" between '2006-06-09 00:00:00.00' and
> '2006-06-09 00:00:00.00')
> with
> allocation0_."Date" ='2006-04-09 00:00:00.00'
> then the query comes back in a few milliseconds (as I'd expect :)
Could we see E
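The two forms being compared, using the table and column from Gavin's query (count(*) is a placeholder for the real select list):

```sql
-- Range form generated by the ORM: when both bounds are the same
-- instant, the planner's range estimate goes badly wrong
SELECT count(*) FROM "Allocation"
WHERE "Date" BETWEEN '2006-06-09 00:00:00.00'
                 AND '2006-06-09 00:00:00.00';

-- Equality form: same rows, far better selectivity estimate,
-- and per the report it returns in milliseconds
SELECT count(*) FROM "Allocation"
WHERE "Date" = '2006-06-09 00:00:00.00';
```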
Hello, postgresql 7.4.8 on SuSE Linux here.
I have a table called DMO with a column called ORA_RIF defined as
"timestamp without time zone" ;
I created an index on this table based on this column only.
If I run a query against a text literal the index is used:
explain select * from dmo where
Tom Lane wrote:
> Gavin Hamill <[EMAIL PROTECTED]> writes:
>> would a simple "#define LWLOCK_PADDED_SIZE 128" be sufficient?
> Yeah, that's fine.
OK I tried that but noticed no real improvement... in the interim I've
installed Debian on the pSeries (using
http://debian.gonicus.de/de
Hi, Tom,
Tom Lane wrote:
>>Why are AFTER COMMIT triggers impossible?
>
> What happens if such a trigger gets an error? You can't un-commit.
Then it must be specified that those triggers are in their own
transaction, and cannot abort the transaction.
Or use the 2-phase-commit infrastructure fo
Hi, Oscar,
Please reply to the list and not privately, so others can learn from
your replies, and possibly have better ideas than me.
Oscar Picasso wrote:
> I cannot group the columns logically. Any column may or may not appear
> in a query.
That's suboptimal.
> Summarizing what I have learne
Gavin Hamill wrote:
> chris smith wrote:
>> 1.6secs isn't too bad on 4.3mill rows...
>> How many entries are there for that date range?
> 1.7 secs /is/ good - it typically takes 5 or 6 seconds, which isn't so
> good. My question is 'why does the planner choose such a bizarre range
> request when both e
chris smith wrote:
> 1.6secs isn't too bad on 4.3mill rows...
> How many entries are there for that date range?
1.7 secs /is/ good - it typically takes 5 or 6 seconds, which isn't so
good. My question is 'why does the planner choose such a bizarre range
request when both elements of the 'betwe
On 4/13/06, Gavin Hamill <[EMAIL PROTECTED]> wrote:
> laterooms=# explain analyze select allocation0_."ID" as y1_,
> allocation0_."RoomID" as y2_, allocation0_."StatusID" as y4_,
> allocation0_."Price" as y3_, allocation0_."Number" as y5_,
> allocation0_."Date" as y6_ from "Allocation" allocation0_
laterooms=# explain analyze select allocation0_."ID" as y1_,
allocation0_."RoomID" as y2_, allocation0_."StatusID" as y4_,
allocation0_."Price" as y3_, allocation0_."Number" as y5_,
allocation0_."Date" as y6_ from "Allocation" allocation0_ where
(allocation0_."Date" between '2006-06-09 00:00:00
Hi, Jim,
Jim Nasby wrote:
> Adding -performance back in
> I would like to try it.
>
> However in an other post I added that contrary to what I stated
> initially all the paramXX columns are not mandatory in the query. So
> it seems that requirement makes the problem more complex.
Hi, Jim,
Jim C. Nasby wrote:
>>>I was also thinking about using a functional index.
>>If there's a logical relation between those values that they can easily
>>combined, that may be a good alternative.
> How would that be any better than just doing a multi-column index?
10 different values
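The trade-off under discussion, sketched on a hypothetical table (the thread's real paramXX schema isn't shown in this excerpt):

```sql
-- Hypothetical table with several optional filter columns
CREATE TABLE items (id int, param01 int, param02 int, param03 int);

-- Multi-column index: only helps when the leading column(s)
-- actually appear in the query's WHERE clause
CREATE INDEX items_multi_idx ON items (param01, param02, param03);

-- Functional index: only useful if the columns can be combined into
-- one derived value that every query computes the same way
CREATE INDEX items_func_idx ON items ((param01 * 100 + param02));
```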
Jignesh K. Shah wrote:
>
> Bruce,
>
> Hard to answer that... People like me who know and love PostgreSQL and
> Solaris finds this as an opportunity to make their favorite database
> work best on their favorite operating system.
>
> Many times PostgreSQL has many things based on assumption tha