NB> shared_buffers = 1/16th of total memory
NB> effective_cache_size = 80% of the supposed kernel cache.
I think Sean(?) mentioned this one for FreeBSD (Bash code):
echo "effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))"
I've used it for my dedicated servers. Is this calculation correct?
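(A rough sketch, not from the original mail, of how those two rules of thumb could be scripted on Linux: the 1/16 and 80% figures are just the heuristics quoted above, the divisor of 8 assumes the default 8k block size, and MemTotal from /proc/meminfo stands in for "total memory".)

#!/bin/sh
# Heuristics from this thread, nothing official; assumes the default 8k blocks.
total_kb=`awk '/MemTotal/ {print $2}' /proc/meminfo`    # total RAM in kB

# shared_buffers = 1/16th of total memory, expressed in 8k pages
echo "shared_buffers = $((total_kb / 16 / 8))"

# effective_cache_size = 80% of what we guess the kernel keeps as cache,
# crudely approximated here as everything except that 1/16th
cache_kb=$((total_kb - total_kb / 16))
echo "effective_cache_size = $((cache_kb / 8 * 80 / 100))"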
On Fri, 2003-10-10 at 16:32, David Busby wrote:
> - Original Message -
> From: "Ron Johnson"
> > On Fri, 2003-10-10 at 16:04, David Busby wrote:
> > > List,
> > > I'm creating this multi company POS database.
> > > My inventory table looks like (all items are unique):
> > >
> > > id,category_id,invoice_id,x,y,z,gid,uid
On Fri, 2003-10-10 at 18:59, Sean Chittenden wrote:
> > NB> So far:
> >
> > NB> shared_buffers = 1/16th of total memory
> > NB> effective_cache_size = 80% of the supposed kernel cache.
> >
> > Please take into account the blocksize compiled into PG, too...
>
> Would anyone object to a patch that exports the blocksize via a readonly GUC?
Sean Chittenden wrote:
> > NB> So far:
> >
> > NB> shared_buffers = 1/16th of total memory
> > NB> effective_cache_size = 80% of the supposed kernel cache.
> >
> > Please take into account the blocksize compiled into PG, too...
>
> Would anyone object to a patch that exports the blocksize via a readonly GUC?
On Fri, 10 Oct 2003, David Busby wrote:
> - Original Message -
> From: "Ron Johnson"
> > On Fri, 2003-10-10 at 16:04, David Busby wrote:
> > > List,
> > > I'm creating this multi company POS database.
> > > My inventory table looks like (all items are unique):
> > >
> > > id,category_id,invoice_id,x,y,z,gid,uid
On Fri, 10 Oct 2003, Josh Berkus wrote:
> Bruce,
>
> > Yes. If you were doing multiple WAL writes before transaction fsync,
> > you would be fsyncing every write, rather than doing two writes and
> > fsync'ing them both. I wonder if larger transactions would find
> > open_sync slower?
>
> Want me to test?
> NB> So far:
>
> NB> shared_buffers = 1/16th of total memory
> NB> effective_cache_size = 80% of the supposed kernel cache.
>
> Please take into account the blocksize compiled into PG, too...
Would anyone object to a patch that exports the blocksize via a
readonly GUC? Too many tunables are pa
Vivek,
> when you compute optimal shared buffers and effective cache size,
> these are in terms of blocksize. so if I have 16k block size, you
> can't compute based on default 8k blocksize. at worst, it would have
> to be a parameter you pass to the tuning script.
Oh, yes! Thank you.
--
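(For the archives: current PostgreSQL releases do expose the compiled-in block size as a read-only parameter, so a tuning script can simply ask the server instead of assuming 8k. A minimal sketch, assuming psql can already connect to some database such as template1:)

# Ask the server for its block size rather than hard-coding 8192.
BLCKSZ=`psql -At -c 'SHOW block_size;' template1`
echo "block size is $BLCKSZ bytes"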
- Original Message -
From: "Ron Johnson"
> On Fri, 2003-10-10 at 16:04, David Busby wrote:
> > List,
> > I'm creating this multi company POS database.
> > My inventory table looks like (all items are unique):
> >
> > id,category_id,invoice_id,x,y,z,gid,uid
> >
> > I have a primary key on id, and then foreign keys on category_id and invoice_id.
On Fri, 2003-10-10 at 16:04, David Busby wrote:
> List,
> I'm creating this multi company POS database.
> My inventory table looks like (all items are unique):
>
> id,category_id,invoice_id,x,y,z,gid,uid
>
> I have a primary key on id, and then foreign keys on category_id and
> invoice_id.
Vivek Khera wrote:
> > "BM" == Bruce Momjian <[EMAIL PROTECTED]> writes:
>
> >> Sounds reasonable to me. Are there many / any scenarios where a plain
> >> fsync would be faster than open_sync?
>
> BM> Yes. If you were doing multiple WAL writes before transaction fsync,
> BM> you would be fsyncing every write, rather than doing two writes and fsync'ing them both.
> "JB" == Josh Berkus <[EMAIL PROTECTED]> writes:
JB> Vivek,
NB> shared_buffers = 1/16th of total memory
NB> effective_cache_size = 80% of the supposed kernel cache.
>>
>> Please take into account the blocksize compiled into PG, too...
JB> We can't change the blocksize in a script that only does the .conf file.
List,
I'm creating this multi company POS database.
My inventory table looks like (all items are unique):
id,category_id,invoice_id,x,y,z,gid,uid
I have a primary key on id, and then foreign keys on category_id and
invoice_id.
GID is the group ID of the company, and UID is the company's user.
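(Not from David's mail, but to make the shape concrete, the layout described above reads roughly like this in DDL. The column types are guesses, the referenced tables are reduced to stubs so the foreign keys have something to point at, and "pos" is just a scratch database.)

psql pos <<'SQL'
CREATE TABLE category (id serial PRIMARY KEY);
CREATE TABLE invoice  (id serial PRIMARY KEY);

CREATE TABLE inventory (
    id          serial PRIMARY KEY,
    category_id integer REFERENCES category (id),
    invoice_id  integer REFERENCES invoice (id),
    x           numeric,
    y           numeric,
    z           numeric,
    gid         integer,    -- company (group) id
    uid         integer     -- that company's user
);
SQL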
> "BM" == Bruce Momjian <[EMAIL PROTECTED]> writes:
>> Sounds reasonable to me. Are there many / any scenarios where a plain
>> fsync would be faster than open_sync?
BM> Yes. If you were doing multiple WAL writes before transaction fsync,
BM> you would be fsyncing every write, rather than doing two writes and fsync'ing them both.
Bruce,
> I would be interested to see if wal_sync_method = fsync is slower than
> wal_sync_method = open_sync. How often are we doing more than one write
> before a fsync anyway?
OK. I'll see if I can get to it around my other stuff I have to do this
weekend.
--
Josh Berkus
Aglio Database Solutions
San Francisco
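(For anyone who wants to run the same comparison: the test amounts to timing pgbench under each wal_sync_method. A sketch with arbitrary scale and client counts; the setting goes in postgresql.conf and the server is restarted between runs — a reload may be enough on some versions.)

# postgresql.conf:  wal_sync_method = fsync      -> restart, run, note tps
#                   wal_sync_method = open_sync  -> restart, run again
createdb bench
pgbench -i -s 10 bench           # initialize, scale factor 10
pgbench -c 10 -t 1000 bench      # 10 clients, 1000 transactions each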
Chris,
> > If I do a SQL request ... does PostgreSQL use one or more processors?
>
> Just one processor.
For one query, yes. For multiple queries, PostgreSQL will use multiple
processors, and that's what he's concerned about given his earlier posts.
--
Josh Berkus
Aglio Database Solutions
San Francisco
Vivek,
> NB> shared_buffers = 1/16th of total memory
> NB> effective_cache_size = 80% of the supposed kernel cache.
>
> Please take into account the blocksize compiled into PG, too...
We can't change the blocksize in a script that only does the .conf file. Or
are you suggesting something else?
Nick,
> Sounds good to me. I will carry on working on it but I would definitely
> need some help, or at least a list of parameters to tweak, and some
> recommended values based on data about the computer in question.
> shared_buffers = 1/16th of total memory
> effective_cache_size = 80% of the supposed kernel cache.
> "NB" == Nick Barr <[EMAIL PROTECTED]> writes:
NB> So far:
NB> shared_buffers = 1/16th of total memory
NB> effective_cache_size = 80% of the supposed kernel cache.
Please take into account the blocksize compiled into PG, too...
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> "SC" == Sean Chittenden <[EMAIL PROTECTED]> writes:
SC> patches to extract info from their OS so that initdb can make useful
SC> decisions. Or, lastly, does anyone think that this should be in a
SC> different, external program? -sc
Well, there should definitely be a way to run a "get curr
[EMAIL PROTECTED] (Hervé Piedvache) writes:
> A simple question about PostgreSQL ... I have a quad-processor Pentium Xeon
> ...
> If I do a SQL request ... does PostgreSQL use one or more processors?
Just one processor.
> And if it uses only one ... why?
> Could you explain this to me ;o)
Josh Berkus wrote:
> Bruce,
>
> > Yes. If you were doing multiple WAL writes before transaction fsync,
> > you would be fsyncing every write, rather than doing two writes and
> > fsync'ing them both. I wonder if larger transactions would find
> > open_sync slower?
>
> Want me to test? I've got an ide-based test machine here.
On Fri, 2003-10-10 at 10:41, Josh Berkus wrote:
> Herve'
> > One other small question ... Is PostgreSQL scalable?
>
> Given that we have several members of our community with 2TB databases, and
> one entity with a 32TB database, I'd say yes.
It depends on what is meant by "scalable". In
On Fri, Oct 10, 2003 at 10:32:32AM -0700, Josh Berkus wrote:
> Dror,
>
> > Ouch. I just double checked and you're right. Is this considered a bug,
> > or just an implementation issue?
>
> It's an implementation issue, which may be fixed by 7.5 but not sooner.
> Basically, the free ability of Po
On Fri, 10 Oct 2003, Josh Berkus wrote:
> Bruce,
>
> > Yes. If you were doing multiple WAL writes before transaction fsync,
> > you would be fsyncing every write, rather than doing two writes and
> > fsync'ing them both. I wonder if larger transactions would find
> > open_sync slower?
>
> Want me to test?
Josh Berkus wrote:
> Nick,
> > Having been following the thread on "go for a script! / ex: PostgreSQL vs.
> > MySQL", I thought I would throw something together in Perl.
> Cool! Would you be willing to work with me so that I can inject some of my
> knowledge of .conf tuning?
Sounds good to me.
Bruce,
> Yes. If you were doing multiple WAL writes before transaction fsync,
> you would be fsyncing every write, rather than doing two writes and
> fsync'ing them both. I wonder if larger transactions would find
> open_sync slower?
Want me to test? I've got an ide-based test machine here, a
Herve'
> A simple question about PostgreSQL ... I have a quad-processor Pentium Xeon
> ...
> If I do a SQL request ... does PostgreSQL use one or more processors?
For your configuration, yes, you want multiple processors. Postgres (or
rather, the host OS) will distribute active connections o
Herve'
> One other small question ... Is PostgreSQL scalable?
Given that we have several members of our community with 2TB databases, and
one entity with a 32TB database, I'd say yes.
> I mean ... is it possible to have two servers, with one rack of disks connected
> to the 2 servers, to get access to the same database at the same time?
Nick,
> Having been following the thread on "go for a script! / ex: PostgreSQL vs.
> MySQL", I thought I would throw something together in Perl.
Cool! Would you be willing to work with me so that I can inject some of my
knowledge of .conf tuning?
--
Josh Berkus
Aglio Database Solutions
San Francisco
Dror,
> Ouch. I just double checked and you're right. Is this considered a bug,
> or just an implementation issue?
It's an implementation issue, which may be fixed by 7.5 but not sooner.
Basically, the free ability of PostgreSQL users to define their own
aggregates limits our ability to define
johnn wrote:
> On Fri, Oct 10, 2003 at 12:42:04PM -0400, Bill Moran wrote:
> > 4) It simply isn't practical to expect a single query to
> > execute on multiple processors simultaneously.
> >
> > Do you know of any RDBMS that actually will execute a single query
> > on multiple processors?
> Yes, DB2 will do this if
scott.marlowe wrote:
> On Thu, 9 Oct 2003, Bruce Momjian wrote:
>
> > scott.marlowe wrote:
> > > I was testing to get some idea of how to speed up the speed of pgbench
> > > with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0
> > > /dev/hdx).
> > >
> > > The only parameter that seems to make a noticeable difference
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Bill Moran
> Sent: Friday, October 10, 2003 12:42 PM
> To: Hervé Piedvache
> Cc: Postgresql Performance
> Subject: Re: [PERFORM] One or more processor ?
>
>
> Hervé Piedvache wrote:
> > Hi,
> >
> > A simple question about PostgreSQL ...
On Fri, Oct 10, 2003 at 12:42:04PM -0400, Bill Moran wrote:
> 4) It simply isn't practical to expect a single query to
>execute on multiple processors simultaneously.
>
> Do you know of any RDBMS that actually will execute a single query
> on multiple processors?
Yes, DB2 will do this if conf
On Fri, 10 Oct 2003, Hervé Piedvache wrote:
> One other small question ... Is PostgreSQL scalable?
> I mean ... is it possible to have two servers, with one rack of disks connected to
> the 2 servers, to get access to the same database at the same time?
No. You need to replicate the DB
> The most efficient way for you to store data would be like this:
> main table
> id address
> 3 67.92
> 7 69.5
>
> child table
> id value_type value
> 3 uptime 0.3
> 3 memory 37
> 7 uptime 1.1
> 7 memory 15
Actual
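(To make the quoted main/child layout concrete, a minimal sketch in SQL; the types and the use of a scratch database named "test" are assumptions here, not taken from the thread.)

psql test <<'SQL'
CREATE TABLE main (
    id      integer PRIMARY KEY,
    address text
);

CREATE TABLE child (
    id         integer NOT NULL REFERENCES main (id),
    value_type text    NOT NULL,
    value      numeric,
    PRIMARY KEY (id, value_type)
);

INSERT INTO main  VALUES (3, '67.92');
INSERT INTO main  VALUES (7, '69.5');
INSERT INTO child VALUES (3, 'uptime', 0.3);
INSERT INTO child VALUES (3, 'memory', 37);
INSERT INTO child VALUES (7, 'uptime', 1.1);
INSERT INTO child VALUES (7, 'memory', 15);

-- pull all attributes for one item back out
SELECT m.address, c.value_type, c.value
FROM main m, child c
WHERE c.id = m.id AND m.id = 3;
SQL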
Hervé Piedvache wrote:
> Hi,
> A simple question about PostgreSQL ... I have a quad-processor Pentium Xeon
> ...
> If I do a SQL request ... does PostgreSQL use one or more processors?
PostgreSQL uses one processor per connection. If you have 4 simultaneous
connections, you'll use all four processors
On Fri, 10 Oct 2003, Hervé Piedvache wrote:
> A simple question about PostgreSQL ... I have a quad-processor Pentium
> Xeon ... If I do a SQL request ... does PostgreSQL use one or more
> processors?
Each connection becomes a process, and each process runs on one processor.
So,
On Fri, 10 Oct 2003, Hervé Piedvache wrote:
> If I do a SQL request ... does PostgreSQL use one or more processors?
>
Nope. Just one processor; this is because PG is process-based, not thread-based.
However, if you opened 4 connections and each issued a SQL statement, all 4
processors would be used.
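(A crude way to watch that from the shell, assuming a scratch database named test: open several connections at once, and each backend, being its own process, can land on its own CPU. The cross join of system catalogs is only there to burn cycles for a few seconds.)

for i in 1 2 3 4; do
    psql -c "SELECT count(*) FROM pg_class a, pg_class b, pg_class c;" test &
done
wait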
Hi,
One other small question ... Is PostgreSQL scalable?
I mean ... is it possible to have two servers, with one rack of disks connected to
the 2 servers, to get access to the same database at the same time?
Another point: is there any possibility to make server clusters with PostgreSQL?
If not, why
Hi,
A simple question about PostgreSQL ... I have a quad-processor Pentium Xeon
...
If I do a SQL request ... does PostgreSQL use one or more processors?
And if it uses only one ... why?
Could you explain this to me ;o)
Thanks in advance.
--
Hervé Piedvache
Elma Ingénierie Informatique
6 rue
Greg Stark writes:
> Call it a wishlist bug. The problem is it would be a hard feature to
> implement properly. And none of the people paid to work on postgres
> by various companies seem to have this on their to-do lists. So
> don't expect it in the near future.
We are using Postgres heavily, and
On Thu, 9 Oct 2003, Bruce Momjian wrote:
> scott.marlowe wrote:
> > I was testing to get some idea of how to speed up the speed of pgbench
> > with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0
> > /dev/hdx).
> >
> > The only parameter that seems to make a noticeable difference
Nope, write-cache enabled by default.
On Thu, 9 Oct 2003, Bruce Momjian wrote:
>
> How did this drive come by default? Write-cache disabled?
>
> ---
>
> scott.marlowe wrote:
> > On Thu, 2 Oct 2003, scott.marlowe wrote:
>
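(For reference, checking and flipping that setting looks like the following; substitute your actual drive for /dev/hda, and note it needs root.)

hdparm -W  /dev/hda     # show the current write-caching setting
hdparm -W0 /dev/hda     # turn write caching off
hdparm -W1 /dev/hda     # turn it back on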
David Griffiths wrote:
This is a timely thread for myself, as I'm in the middle of testing
both databases as an Oracle replacement.
As of this moment, I know more about MySQL (tuning, setup, features)
than I do about Postgres. Not because I like MySQL more, but because
1) the MySQL docs are
- Original Message -
From: "Nick Barr" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, October 10, 2003 1:35 PM
Subject: go for a script! / ex: PostgreSQL vs. MySQL
> I will also post it on my website and as I develop it further new
> versions will appear there
>
> http://www.c
Heya Guys n Gals,
Having been following the thread on "go for a script! / ex: PostgreSQL vs.
MySQL". I thought I would throw something together in Perl. My current issue
is that I only have access to a RH Linux box and so cannot make it
cross-platform on my own :-(. Anyhow please find it attached.
I took this approach with a former company in designing a dynamic
e-commerce system. This kept the addition of new products from
requiring an alteration of the schema. With an ORB manager and cache
control the performance was not significantly affected, but the automatic
extensibility and the ease
On 9 Oct 2003, Greg Stark wrote:
> I don't quite follow your #2 so I can only comment on the above idea of using
> an LVM snapshot. If you have the hardware and the LVM-fu to be able to do this
> properly I would recommend it.
>
Just to be a bit clearer in case it was my wording:
Method #2 is near
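(Very roughly, the LVM-snapshot approach Greg is endorsing looks like this; the volume group, logical volume, and mount point names are made up here, and the snapshot size just needs to cover the writes that happen while the copy runs.)

# snapshot the volume holding $PGDATA, copy it off, then drop the snapshot
lvcreate -L 1G -s -n pgsnap /dev/vg0/pgdata
mount /dev/vg0/pgsnap /mnt/pgsnap
tar czf /backup/pgdata.tar.gz -C /mnt/pgsnap .
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap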
On Fri, 10 Oct 2003, Seth Ladd wrote:
> >> Is there any way to speed this up, or is that DISTINCT going to keep
> >> hounding me?
> >>
> >> I checked the mailing list, and didn't see anything like this.
> >>
> >> Any tips or hints would be greatly appreciated. Thanks for your help!
> >> Seth
> >>
> Thanks for the tip, I'll give this a shot soon. I am curious, your
> example above does not use GROUP BY yet you have an INDEX SCAN. I am
> using a similar query, yet I get a full table scan. I wonder how they
> are different?
Please send us the results of EXPLAIN ANALYZE on the query. The EXPL
Seth Ladd wrote:
[EMAIL PROTECTED]:express=# explain select distinct region from region;
                         QUERY PLAN
------------------------------------------------------------
 Unique  (cost=0.00..4326.95 rows=9518 width=14)
   ->  Ind
Is there any way to speed this up, or is that DISTINCT going to keep
hounding me?
I checked the mailing list, and didn't see anything like this.
Any tips or hints would be greatly appreciated. Thanks for your help!
Seth
Try group by instead. I think this is an old bug; it's fixed in
7.3.2
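(For anyone finding this in the archives: the suggested rewrite is simply the following, using the table from Seth's earlier mail, and running EXPLAIN ANALYZE on both forms shows whether the plan actually changes.)

psql express <<'SQL'
EXPLAIN ANALYZE SELECT DISTINCT region FROM region;
EXPLAIN ANALYZE SELECT region FROM region GROUP BY region;
SQL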
On Thu, 9 Oct 2003, Seth Ladd wrote:
> Hello,
>
> I am running 7.3.2 RPMs on RH9, on a celeron 1.7 w/ 1gig ram.
>
> I have a table that has 6.9 million rows, 2 columns, and an index on
> each column. When I run:
>
> SELECT DISTINCT column1 FROM table
>
> It is very, very slow (10-15 min to complete).
Hello,
I am running 7.3.2 RPMs on RH9, on a celeron 1.7 w/ 1gig ram.
I have a table that has 6.9 million rows, 2 columns, and an index on
each column. When I run:
SELECT DISTINCT column1 FROM table
It is very, very slow (10-15 min to complete). An EXPLAIN shows no
indexes are being used.
I
On Thu, 9 Oct 2003, Dror Matalon wrote:
> On Thu, Oct 09, 2003 at 07:07:00PM -0400, Greg Stark wrote:
> > Dror Matalon <[EMAIL PROTECTED]> writes:
> >
> > > Actually what finally sovled the problem is repeating the
> > > dtstamp > last_viewed
> > > in the sub select
> >
> > That will at least c
David Griffiths wrote:
Have you checked these pages? They've been posted on this list numerous
times:
http://techdocs.postgresql.org
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html
Those are much more instructive
On Thu, 9 Oct 2003, Gaetano Mendola wrote:
> Andriy Tkachuk wrote:
> > On Wed, 8 Oct 2003, Tom Lane wrote:
> >
> >
> >>Andriy Tkachuk <[EMAIL PROTECTED]> writes:
> >>
>>>Secondly, calc_total() is an immutable function:
> >>>but it seems that it's not cached in one session:
> >>
>>It's not supposed to be.