Hi, Christopher,
[sorry for the delay in my answer, we were rather busy the last few weeks]
On Thu, 04 Nov 2004 21:29:04 -0500
Christopher Browne <[EMAIL PROTECTED]> wrote:
> In an attempt to throw the authorities off his trail, [EMAIL PROTECTED]
> (Markus Schaber) transmitted:
> > We should create a list of those needs, and then communicate those to the kernel/fs developers.
On Fri, 2004-11-05 at 02:47, Chris Browne wrote:
> Another thing that would be valuable would be to have some way to say:
>
> "Read this data; don't bother throwing other data out of the cache
>to stuff this in."
This is similar, although not exactly the same thing:
http://www.opengroup.or
On Thu, 2004-11-04 at 23:29, Pierre-Frédéric Caillaud wrote:
> There is also the fact that syncing after every transaction could be
> changed to syncing every N transactions (N fixed or depending on the data
> size written by the transactions) which would be more efficient than the
> current
On Fri, 2004-11-05 at 06:20, Steinar H. Gunderson wrote:
> You mean, like, open(filename, O_DIRECT)? :-)
This disables readahead (at least on Linux), which is certainly not what we
want: for the very case where we don't want to keep the data in cache
for a while (sequential scans, VACUUM), we also want
After a long battle with technology, [EMAIL PROTECTED] (Simon Riggs), an earthling,
wrote:
> On Thu, 2004-11-04 at 15:47, Chris Browne wrote:
>
>> Another thing that would be valuable would be to have some way to say:
>>
>> "Read this data; don't bother throwing other data out of the cache
>>
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Markus
Schaber) transmitted:
> We should create a list of those needs, and then communicate those
> to the kernel/fs developers. Then we (as well as other apps) can
> make use of those features where they are available, and u
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Thu, 2004-11-04 at 19:34, Tom Lane wrote:
>> But only for Postgres' own shared buffers. The kernel cache still gets
>> trashed, because we have no way to suggest to the kernel that it not
>> hang onto the data read in.
> I guess a difference in viewpoint
On Thu, 2004-11-04 at 19:34, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > On Thu, 2004-11-04 at 15:47, Chris Browne wrote:
> >> Something like a "read_uncached()" call...
> >>
> >> That would mean that a seq scan or a vacuum wouldn't force useful data
> >> out of cache.
>
> > ARC does almost exactly those two things in 8.0.
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Thu, 2004-11-04 at 15:47, Chris Browne wrote:
>> Something like a "read_uncached()" call...
>>
>> That would mean that a seq scan or a vacuum wouldn't force useful data
>> out of cache.
> ARC does almost exactly those two things in 8.0.
But only for Postgres' own shared buffers.
On Thu, Nov 04, 2004 at 10:47:31AM -0500, Chris Browne wrote:
> Another thing that would be valuable would be to have some way to say:
>
> "Read this data; don't bother throwing other data out of the cache
>to stuff this in."
>
> Something like a "read_uncached()" call...
You mean, like, open(filename, O_DIRECT)? :-)
On Thu, 2004-11-04 at 15:47, Chris Browne wrote:
> Another thing that would be valuable would be to have some way to say:
>
> "Read this data; don't bother throwing other data out of the cache
>to stuff this in."
>
> Something like a "read_uncached()" call...
>
> That would mean that a seq scan or a vacuum wouldn't force useful data out of cache.
[EMAIL PROTECTED] (Pierre-Frédéric Caillaud) writes:
>> posix_fadvise(2) may be a candidate. Read/write barriers are another one, as
>> well as syncing a bunch of data in different files with a single call
>> (so that the OS can determine the best write order). I can also imagine
>> some interaction w
posix_fadvise(2) may be a candidate. Read/write barriers are another one, as
well as syncing a bunch of data in different files with a single call
(so that the OS can determine the best write order). I can also imagine
some interaction with the FS journalling system (to avoid duplicate
efforts).
The
Hi, Leeuw,
On Thu, 21 Oct 2004 12:44:10 +0200
"Leeuw van der, Tim" <[EMAIL PROTECTED]> wrote:
> (I'm not sure if it's a good idea to create a PG-specific FS in your
> OS of choice, but it's certainly gonna be easier than getting FS code
> inside of PG)
I don't think PG really needs a specific FS
Note that most people are now moving away from raw devices for databases
in most applications. The relatively small performance gain isn't worth
the hassles.
On Thu, Oct 21, 2004 at 12:27:27PM +0200, Steinar H. Gunderson wrote:
> On Thu, Oct 21, 2004 at 08:58:01AM +0100, Matt Clark wrote:
> > I suppose I'm just idly wondering really.
As someone else noted, this doesn't belong in the filesystem (rather
the kernel's block I/O layer/buffer cache). But I agree, an API by
which we can tell the kernel what kind of I/O behavior to expect would
be good.
[snip]
The closest API to what you're describing that I'm aware of is
posix_fadvise
On Thu, Oct 21, 2004 at 10:20:55AM -0400, Tom Lane wrote:
>> ... I have no idea how much you can improve over the "best"
>> filesystems out there, but having two layers of journalling (both WAL _and_
>> FS journalling) on top of each other doesn't make all that much sense to me.
> Which is why setting
Neil Conway wrote:
> Also, I would imagine Win32 provides some means to inform the kernel
> about your expected I/O pattern, but I haven't checked. Does anyone know
> of any other relevant APIs?
See CreateFile, Parameter dwFlagsAndAttributes
http://msdn.microsoft.com/library/default.asp?url=/li
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> ... I have no idea how much you can improve over the "best"
> filesystems out there, but having two layers of journalling (both WAL _and_
> FS journalling) on top of each other doesn't make all that much sense to me.
Which is why setting the FS to
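The usual way out of the double-journalling overlap, which the truncated reply appears to be heading toward, is to let the filesystem journal metadata only and trust the WAL to protect the data. As one illustrative example (device and mount point are hypothetical), an ext3 mount for a PostgreSQL data partition might look like this in /etc/fstab:

```
/dev/sdb1  /var/lib/pgsql  ext3  noatime,data=writeback  0 2
```

With data=writeback, ext3 journals only metadata; data consistency is then guaranteed by the WAL rather than by the FS journal.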
On Thu, Oct 21, 2004 at 12:44:10PM +0200, Leeuw van der, Tim wrote:
> Hacking PG internally to handle raw devices will meet with strong
> resistance from large portions of the development team. I don't expect
> (m)any core devs of PG will be excited about rewriting the entire I/O
> architecture of
Matt Clark wrote:
I'm thinking along the lines of an FS that's aware of PG's strategies and
requirements and therefore optimised to make those activities as efficient
as possible - possibly even being aware of PG's disk layout and treating
files differently on that basis.
As someone else noted, thi
inside of PG)
>
> cheers,
>
> --Tim
>
>
>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Steinar H. Gunderson
> Sent: Thursday, October 21, 2004 12:27 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [PERFORM] Anything to be gained from a 'Postgres Filesystem'?
easier than getting FS code inside of PG)
cheers,
--Tim
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Steinar H. Gunderson
Sent: Thursday, October 21, 2004 12:27 PM
To: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Anything to be gained from a 'Postgres Filesystem'?
On Thu, Oct 21, 2004 at 08:58:01AM +0100, Matt Clark wrote:
> I suppose I'm just idly wondering really. Clearly it's against PG
> philosophy to build an FS or direct IO management into PG, but now it's so
> relatively easy to plug filesystems into the main open-source OSes, it
> struck me that the
> Looking at that list, I got the feeling that you'd want to
> push that PG-awareness down into the block-io layer as well,
> then, so as to be able to optimise for (perhaps) conflicting
> goals depending on what the app does; for the IO system to be
> able to read the apps mind it needs to hav
Reiser4?
On Thu, 21 Oct 2004 08:58:01 +0100, Matt Clark <[EMAIL PROTECTED]> wrote:
I suppose I'm just idly wondering really. Clearly it's against PG
philosophy to build an FS or direct IO management into PG, but now it's so
relatively easy to plug filesystems into the main open-source OSes,
Behalf Of Matt Clark
Sent: Thursday, October 21, 2004 9:58 AM
To: [EMAIL PROTECTED]
Subject: [PERFORM] Anything to be gained from a 'Postgres Filesystem'?
I suppose I'm just idly wondering really. Clearly it's against PG
philosophy to build an FS or direct IO management into PG, b
I suppose I'm just idly wondering really. Clearly it's against PG
philosophy to build an FS or direct IO management into PG, but now it's so
relatively easy to plug filesystems into the main open-source OSes, it
struck me that there might be some useful changes to, say, XFS or ext3, that
could be
28 matches