Mark,
On 9/21/06 8:40 PM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> I'd advise against using this call unless it can be shown that the page
> will not be used in the future, or at least, that the page is less useful
> than all other pages currently in memory. This is what the call really means.
On Fri, Sep 22, 2006 at 02:52:09PM +1200, Guy Thornley wrote:
> > >> I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly
> > >> meant for this purpose?
> > > This is a good idea - I wasn't aware that this was possible.
> > This possibility was the reason for me to propose it. :-)
>
Guy Thornley wrote:
> > >> I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly
> > >> meant for this purpose?
> > >
> > > This is a good idea - I wasn't aware that this was possible.
> >
> > This possibility was the reason for me to propose it. :-)
>
> posix_fadvise() features in
> >> I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly
> >> meant for this purpose?
> >
> > This is a good idea - I wasn't aware that this was possible.
>
> This possibility was the reason for me to propose it. :-)
posix_fadvise() features in the TODO list already; I'm not sure
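For reference, a minimal sketch of the kind of prefetch hint being discussed; this is an assumed illustration, not code from the thread. POSIX_FADV_WILLNEED asks the kernel to start reading a byte range into its page cache and returns immediately, while POSIX_FADV_DONTNEED (the call cautioned against above) tells it the range can be evicted.

    /* sketch: ask the kernel to prefetch a range of 8 kB blocks */
    #define _XOPEN_SOURCE 600
    #include <sys/types.h>
    #include <fcntl.h>

    #define BLCKSZ 8192                     /* PostgreSQL's default block size */

    /* hypothetical helper; returns 0 on success, an errno value otherwise */
    int prefetch_blocks(int fd, off_t first_block, int nblocks)
    {
        /* POSIX_FADV_WILLNEED only hints; any read-ahead happens
         * asynchronously and the call does not block on I/O */
        return posix_fadvise(fd, first_block * BLCKSZ,
                             (off_t) nblocks * BLCKSZ,
                             POSIX_FADV_WILLNEED);
    }

Because the call returns before the reads complete, a backend could issue hints for blocks it knows it will need and overlap that I/O with other work, which is the effect being proposed.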
Bucky,
On 9/21/06 2:16 PM, "Bucky Jordan" <[EMAIL PROTECTED]> wrote:
> Does this have anything to do with postgres indexes not storing data, as
> some previous posts to this list have mentioned? (In other words, having
> the index in memory doesn't help? Or are we talking about indexes that
> are
On 21-9-2006 23:49 Jim C. Nasby wrote:
Even with fsync = off, there's still a non-trivial amount of overhead
brought on by MVCC that's missing in myisam. If you don't care about
concurrency or ACIDity, but performance is critical (the case that the
MySQL benchmark favors), then PostgreSQL probably isn't the best choice.
On Thu, Sep 21, 2006 at 11:12:45AM -0400, Tom Lane wrote:
> yoav x <[EMAIL PROTECTED]> writes:
> > I've applied the following parameters to postgres.conf:
>
> > max_connections = 500
> > shared_buffers = 3000
> > work_mem = 10
> > effective_cache_size = 30
You just told the database
Hi, Bucky,
Bucky Jordan wrote:
> Each postgres process also uses shared memory (aka the buffer cache) so
> as to not fetch data that another process has already requested,
> correct?
Yes.
Additionally, the OS caches disk blocks. Most unixoid ones like Linux use
(nearly) all unused memory for this.
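As a small illustration of that OS-level caching (an assumed example, not something from the thread): on Linux and most BSD-descended systems you can mmap() a file and ask mincore() which of its pages are currently resident in the kernel's page cache.

    /* sketch: report how much of a file is resident in the OS page cache
     * (error handling mostly omitted for brevity) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;

        int fd = open(argv[1], O_RDONLY);
        struct stat st;
        fstat(fd, &st);

        long pagesz = sysconf(_SC_PAGESIZE);
        size_t npages = (st.st_size + pagesz - 1) / pagesz;

        void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        unsigned char *vec = malloc(npages);
        mincore(map, st.st_size, vec);      /* one byte per page; bit 0 = resident */

        size_t resident = 0;
        for (size_t i = 0; i < npages; i++)
            resident += vec[i] & 1;
        printf("%zu of %zu pages in OS cache\n", resident, npages);
        return 0;
    }

Running something like this against a table's data file before and after a sequential scan makes the "unused memory becomes disk cache" behaviour directly visible.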
Markus,
First, thanks - your email was very enlightening. But it does bring up a
few additional questions, so thanks for your patience also - I've listed
them below.
> It applies per active backend. When connecting, the Postmaster forks a
> new backend process. Each backend process has its own sca
Hi, Bucky,
Bucky Jordan wrote:
>> We can implement multiple scanners (already present in MPP), or we
> could
>> implement AIO and fire off a number of simultaneous I/O requests for
>> fulfillment.
>
> So this might be a dumb question, but the above statements apply to the
> cluster (e.g. postmaster) as a whole, not per postgres process/transaction correct?
> So this might be a dumb question, but the above statements apply to the
> cluster (e.g. postmaster) as a whole, not per postgres
> process/transaction correct? So each transaction is blocked waiting for
> the main postmaster to retrieve the data in the order it was requested
> (i.e. not multiple
> > Do you think that adding some posix_fadvise() calls to the backend to
> > pre-fetch some blocks into the OS cache asynchronously could improve
> > that situation?
>
> Nope - this requires true multi-threading of the I/O, there need to be
> multiple seek operations running simultaneously. The
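A rough illustration of that point, under the assumption of a plain POSIX threads approach (not anything PostgreSQL or MPP actually does): one process doing synchronous reads has only one seek in flight at a time, while several threads each issuing their own pread() keep multiple seeks queued at the disk simultaneously.

    /* sketch: N threads each read a different block, so N seeks can overlap
     * (compile with -pthread; error handling omitted) */
    #define _XOPEN_SOURCE 600
    #include <pthread.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define BLCKSZ 8192

    struct job { int fd; off_t block; };

    static void *read_block(void *arg)
    {
        struct job *j = arg;
        char buf[BLCKSZ];
        /* each pread() is an independent seek + read that the kernel and
         * drive are free to reorder and service concurrently */
        pread(j->fd, buf, BLCKSZ, j->block * BLCKSZ);
        return NULL;
    }

    /* hypothetical helper, capped at 64 concurrent requests */
    void read_blocks_concurrently(int fd, const off_t *blocks, int n)
    {
        pthread_t tid[64];
        struct job jobs[64];
        if (n > 64)
            n = 64;
        for (int i = 0; i < n; i++) {
            jobs[i] = (struct job){ fd, blocks[i] };
            pthread_create(&tid[i], NULL, read_block, &jobs[i]);
        }
        for (int i = 0; i < n; i++)
            pthread_join(tid[i], NULL);
    }

The posix_fadvise() approach discussed earlier gets a similar overlap without extra threads, by letting the kernel's read-ahead do the concurrent seeking.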
On Thu, 2006-09-21 at 07:52 -0700, yoav x wrote:
> Hi
>
> After upgrading DBI and DBD::Pg, this benchmark still picks MySQL as the
> winner (at least on Linux
> RH3 on a Dell 1875 server with 2 hyperthreaded 3.6GHz CPUs and 4GB RAM).
> I've applied the following parameters to postgres.conf:
>
>
yoav x <[EMAIL PROTECTED]> writes:
> I've applied the following parameters to postgres.conf:
> max_connections = 500
> shared_buffers = 3000
> work_mem = 10
> effective_cache_size = 30
Please see my earlier reply --- you ignored at least
checkpoint_segments, which is critical, and perhaps others.
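For context (my addition, not from the thread): in 8.x, postgresql.conf values without units are read as kB for work_mem and as 8 kB buffers/pages for shared_buffers and effective_cache_size, so the settings quoted above describe roughly 24 MB of shared buffers, 10 kB of sort memory, and a 240 kB cache-size estimate. A hedged sketch of more conventional starting values for the 4 GB box mentioned earlier; the numbers are illustrative assumptions, not recommendations from this thread:

    shared_buffers = 50000          # ~400 MB, in 8 kB pages
    work_mem = 16384                # 16 MB per sort/hash, in kB
    effective_cache_size = 350000   # ~2.7 GB, in 8 kB pages
    checkpoint_segments = 16        # fewer checkpoints during write-heavy benchmarks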
Hi.
Do you compare apples to apples? InnoDB tables to PostgreSQL? Are all
needed indexes available? Are you sure about that? What about fsync?
Does the benchmark insert a lot of rows? Have you tested placing the
WAL on a separate disk? Is PostgreSQL logging more stuff?
Another thing: have you an
Not to offend, but since most of us are PG users, we're not all that
familiar with what the different tests in MySQL's sql-bench benchmark
do. So you won't get very far by saying "PG is slow on benchmark X, can
I make it faster?", because that doesn't include any of the information
we need in order to help you.
Hi
After upgrading DBI and DBD::Pg, this benchmark still picks MySQL as the winner
(at least on Linux
RH3 on a Dell 1875 server with 2 hyperthreaded 3.6GHz CPUs and 4GB RAM).
I've applied the following parameters to postgres.conf:
max_connections = 500
shared_buffers = 3000
work_mem = 10
effective_cache_size = 30
Hi, Luke,
Luke Lonergan wrote:
>> I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly
>> meant for this purpose?
>
> This is a good idea - I wasn't aware that this was possible.
This possibility was the reason for me to propose it. :-)
> We'll do some testing and see if it works.