OTOH DB2 and SQLServer take block-level
read locks, so they can do this too, but at a major loss of
concurrency
and threat of deadlock.
Note that in the usual committed-read isolation they do not need to
read-lock a row! E.g. Informix only verifies that it could lock the
row
(that
Hello Tom,
I tried both patches on a different machine (but had to take the patches from
cvs diff, cut'n paste from the mail-program did not work). Up until now, they
work like a charm, correct results and fast. I will try on the other machine
that failed yesterday in the afternoon, maybe it
On Tue, Nov 29, 2005 at 02:53:36PM +1100, Gavin Sherry wrote:
The second idea is using POSIX async IO at key points within the system
to better parallelise CPU and IO work. The areas where I think we could
use async IO are: during sequential scans, use async IO to do
pre-fetching of blocks; inside
DB2:
Uncommitted Read (UR) mode (dirty read) isn't the default, or
the recommended lock level for most apps. I was considering
Cursor Stability mode (or higher), which is the default.
Sorry, they call it read committed but actually do cursor stability,
which does keep one lock on the last
In article [EMAIL PROTECTED],
Jim C. Nasby [EMAIL PROTECTED] writes:
On Sun, Nov 27, 2005 at 07:44:55PM +, Simon Riggs wrote:
not have any unique indexes or row triggers. It should be possible to
take advantage of this automatically when those requirements are met,
without any new
First, we need a new term for a thread of execution, that could be a
thread or could be a process, I don't care. When discussing anything
that is to run in parallel, the first thing that pops out of someone's
mouth is "Don't you mean (thread/process)?" But that's an
implementation detail and should
On Tuesday, 29 November 2005 10:05, Mario Weilguni wrote:
Hello Tom,
I tried both patches on a different machine (but had to take the patches
from cvs diff, cut'n paste from the mail-program did not work). Up until
now, they work like a charm, correct results and fast. I will try on the
threw up some threads to read ahead, then my user thread and my read
ahead threads would thrash on trying to lock the buffer slots. So, I
had the read ahead threads start at some distance into the table, and
work toward the beginning. The user thread would do its own I/O until
Ah. The
Harald Fuchs [EMAIL PROTECTED] writes:
In article [EMAIL PROTECTED],
Jim C. Nasby [EMAIL PROTECTED] writes:
Does that mean that this fast copy would end up not re-using space on
pages that have space available? ISTM that's something users would want
to be able to over-ride. In fact, it seems
On Tue, Nov 29, 2005 at 09:45:30AM -0500, Pollard, Mike wrote:
Anyway, what I did was the following. When doing a sequential scan, we
were starting at the beginning of the table and scanning forward. If I
threw up some threads to read ahead, then my user thread and my read
ahead threads
No, I only go x number of pages ahead of the user scan (where x is
currently user defined, but it should be significantly smaller than your
number of data buffers). I have found that reading about 16Mb ahead
gives optimal performance, and on modern machines isn't all that much
memory. Once the
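[Editor's note: the bounded read-ahead window described above can be sketched in Python. This is a toy model added for illustration, not code from the thread; the page size and window are made-up values, and the bounded queue stands in for the real buffer-slot handoff that an aio_read()/pread() based implementation would use.]

```python
# Toy model of a sequential scan with a read-ahead helper that stays
# at most WINDOW pages ahead of the consuming scan.
import threading
import queue

PAGE_SIZE = 4   # rows per "page" (illustrative value)
WINDOW = 4      # read-ahead distance in pages (illustrative value)

def sequential_scan(table, window=WINDOW):
    """Yield pages of `table` while a helper thread prefetches ahead."""
    pages = [table[i:i + PAGE_SIZE] for i in range(0, len(table), PAGE_SIZE)]
    ready = queue.Queue(maxsize=window)   # bounds the read-ahead distance

    def prefetcher():
        for page in pages:          # stand-in for issuing the real read
            ready.put(page)         # blocks once `window` pages are ahead
        ready.put(None)             # end-of-table marker

    threading.Thread(target=prefetcher, daemon=True).start()
    while (page := ready.get()) is not None:
        yield page

rows = list(range(20))
scanned = [row for page in sequential_scan(rows) for row in page]
```

Because the queue is bounded, the prefetcher can never run arbitrarily far ahead of the consumer, which is the point Mike makes about keeping the window much smaller than the number of data buffers.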
Unfortunately I can't really test it at its full potential because it
uses glibc's default POSIX AIO, which is *lame*: no more than one
outstanding request per fd, which for PostgreSQL is crappy. There was
I had the impression from the kernel aio mailing list a while back that
post-some
On Tue, Nov 29, 2005 at 03:14:38PM +1100, Gavin Sherry wrote:
On Mon, 28 Nov 2005, David Boreham wrote:
Gavin Sherry wrote:
MySQL, Oracle and others implement read-ahead threads to simulate async IO
I always believed that Oracle used async file I/O. Not that I've seen their
The paper I
On Tue, Nov 29, 2005 at 08:42:18AM -0700, David Boreham wrote:
Unfortunately I can't really test it at its full potential because it
uses glibc's default POSIX AIO, which is *lame*: no more than one
outstanding request per fd, which for PostgreSQL is crappy. There was
I had the impression
By default when you use aio you get the version in libc (-lrt IIRC),
which has the issue I mentioned, probably because it's optimised for
the lots-of-network-connections type of program where multiple
outstanding requests on a single fd are not meaningful. You can
however link in some
On Tue, 2005-11-29 at 09:45 -0500, Pollard, Mike wrote:
Anyway, what I did was the following. When doing a sequential scan, we
were starting at the beginning of the table and scanning forward. If I
threw up some threads to read ahead, then my user thread and my read
ahead threads would
On Tue, Nov 29, 2005 at 10:28:57AM -0700, David Boreham wrote:
Actually, after reading up on the current state of things, I'm not sure you
can even get POSIX aio on top of kernel aio in Linux. There are also a
few limitations in the 2.6 aio implementation that might prove troublesome:
for
Qingqing Zhou [EMAIL PROTECTED] wrote
I wrote a program to simulate the sequential scan in PostgreSQL
with/without ice-broker.
We need more tests.
If anybody has test results, I'd love to see them ...
Thanks,
Qingqing
I just found this book online today:
http://producingoss.com/
It has the most detailed explanation I have ever read online about how
to manage an open source project. I have ordered a printed copy to read
in the next few months, but all the content is online.
If people find sections
Simon Riggs wrote:
As a further enhancement, I would also return to the NOLOGGING option
for COPY. Previously we had said that COPY LOCK was the way to go -
taking a full table lock to prevent concurrent inserts to a block from a
COPY that didn't write WAL and another backend which wanted to
Bruce Momjian pgman@candle.pha.pa.us writes:
One idea for default behavior would be to use EXCLUSIVE when the table
is zero size. I think that would do pg_dump and most of the user cases,
and of course users could override the default by using a keyword. We
could emit a NOTICE if an
Tom Lane wrote:
Bruce Momjian pgman@candle.pha.pa.us writes:
One idea for default behavior would be to use EXCLUSIVE when the table
is zero size. I think that would do pg_dump and most of the user cases,
and of course users could override the default by using a keyword. We
could emit a
Bruce Momjian pgman@candle.pha.pa.us writes:
Tom Lane wrote:
Something that would probably be reasonable, and require *no* weird new
syntax, is to shortcut in a COPY into a table created in the current
transaction. I believe we still keep a flag in the relcache indicating
whether that's the
Tom Lane wrote:
Bruce Momjian pgman@candle.pha.pa.us writes:
Tom Lane wrote:
Something that would probably be reasonable, and require *no* weird new
syntax, is to shortcut in a COPY into a table created in the current
transaction. I believe we still keep a flag in the relcache indicating
On Tue, 2005-11-29 at 14:17 -0500, Bruce Momjian wrote:
Simon Riggs wrote:
As a further enhancement, I would also return to the NOLOGGING option
for COPY. Previously we had said that COPY LOCK was the way to go -
taking a full table lock to prevent concurrent inserts to a block from a
On Tue, 2005-11-29 at 15:15 -0500, Bruce Momjian wrote:
Tom Lane wrote:
Bruce Momjian pgman@candle.pha.pa.us writes:
Tom Lane wrote:
Something that would probably be reasonable, and require *no* weird new
syntax, is to shortcut in a COPY into a table created in the current
On Tue, 2005-11-29 at 14:56 -0500, Tom Lane wrote:
Bruce Momjian pgman@candle.pha.pa.us writes:
Tom Lane wrote:
Something that would probably be reasonable, and require *no* weird new
syntax, is to shortcut in a COPY into a table created in the current
transaction. I believe we still
Simon Riggs [EMAIL PROTECTED] writes:
Tom Lane wrote:
Log, yes, unless it's a temp table. The point is we could avoid taking
buffer content locks. Come to think of it, we could implement that
trivially in the heapam.c routines; it would then apply to any table
update whether generated by
On Mon, 2005-11-28 at 09:26 -0500, Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
I don't see why we couldn't have an additional index access method entry
point to insert multiple rows on one call.
I think Simon was mainly on about the idea of inserting multiple *heap*
entries with
On Tue, 29 Nov 2005, Andrew Piskorski wrote:
On Tue, Nov 29, 2005 at 03:14:38PM +1100, Gavin Sherry wrote:
On Mon, 28 Nov 2005, David Boreham wrote:
Gavin Sherry wrote:
MySQL, Oracle and others implement read-ahead threads to simulate async
IO
I always believed that Oracle
On Tue, 2005-11-29 at 15:58 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Tom Lane wrote:
Log, yes, unless it's a temp table. The point is we could avoid taking
buffer content locks. Come to think of it, we could implement that
trivially in the heapam.c routines; it would
On Tue, 29 Nov 2005, David Boreham wrote:
By default when you use aio you get the version in libc (-lrt IIRC),
which has the issue I mentioned, probably because it's optimised for
the lots-of-network-connections type of program where multiple
outstanding requests on a single fd are not
Simon Riggs [EMAIL PROTECTED] writes:
On Tue, 2005-11-29 at 15:58 -0500, Tom Lane wrote:
anyway, I had second thoughts about this while
eating lunch. A buffer for a new table can reasonably be assumed not to
be accessed by any other backend, but we can *not* make that assumption
for the
On Mon, 2005-10-17 at 12:49 +0100, Simon Riggs wrote:
On Fri, 2005-10-14 at 19:09 -0400, Tom Lane wrote:
I wrote:
I'm thinking that IN should be
converted to a ScalarArrayOpExpr, ie
x = ANY (ARRAY[val1,val2,val3,val4,...])
Actually, there is one little thing in the way of
On Wed, 2005-11-30 at 08:30 +1100, Gavin Sherry wrote:
On Tue, 29 Nov 2005, David Boreham wrote:
By default when you use aio you get the version in libc (-lrt IIRC),
which has the issue I mentioned, probably because it's optimised for
the lots-of-network-connections type of program
Simon Riggs [EMAIL PROTECTED] writes:
Do you think we'll be able to generate a single ScalarArrayOpExpr from a
small subselect and pass it through as an indexable expression?
If you don't mind spelling it with the ARRAY(sub-select) syntax, which
I think is a Postgres-ism (though it's possible
On Tue, 2005-11-29 at 09:45 -0500, Pollard, Mike wrote:
I've implemented this on another database product
You're scaring me. Is the information you describe in the public domain
or is it intellectual property of any particular company? Are you sure?
We just recovered from one patent scare.
Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Do you think we'll be able to generate a single ScalarArrayOpExpr from a
small subselect and pass it through as an indexable expression?
If you don't mind spelling it with the ARRAY(sub-select) syntax, which
I think is a Postgres-ism
On Tue, 2005-11-29 at 17:21 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Do you think we'll be able to generate a single ScalarArrayOpExpr from a
small subselect and pass it through as an indexable expression?
If you don't mind spelling it with the ARRAY(sub-select) syntax,
Please let me back up and ask a more simplistic question. I understand
the idea of allowing COPY to insert rows with less locking, but I am
wondering about the NOLOGGING idea. On commit, we must guarantee that
all the rows are in the table, so what advantage is there to a NOLOGGING
option?
Jeff,
On 11/29/05 9:35 AM, Jeffrey W. Baker [EMAIL PROTECTED] wrote:
On Tue, 2005-11-29 at 09:45 -0500, Pollard, Mike wrote:
Anyway, what I did was the following. When doing a sequential scan, we
were starting at the beginning of the table and scanning forward. If I
threw up some
Folks,
We're thinking of throwing a little conference next July. However, we
need to know, *this week*, an estimate of how many people would come to
the conference.
So, please fill out the survey at:
http://thepostgresqlfoundation.org/survey/
... as soon as you can. Thanks!
--
--Josh
Hi guys,
How would I go about implementing MySQL's BIN() function easily in PL/SQL.
mysql> SELECT BIN(12);
        -> '1100'
Basically it converts a bigint to a string containing 1's and 0's.
I've tried messing about with bit() types, but those types lack casts to
text, etc. And they are left
here's a plperl version :-) :
create or replace function bin(bigint) returns text language plperl as $$
my $arg = $_[0] + 0;
my $res = "";
while ($arg)
{
    $res = ($arg % 2) . $res;
    $arg >>= 1;
}
return $res;
$$;
cheers
andrew
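[Editor's note: the divide-by-two loop in Andrew's plperl function can be mirrored in Python for quick checking. This sketch is my addition, not from the thread; like the plperl version it returns an empty string for 0.]

```python
def bin_str(n):
    """Build a binary string by repeated divide-by-two, mirroring the plperl bin()."""
    res = ""
    while n:
        res = str(n % 2) + res  # prepend the low bit
        n >>= 1                 # shift the argument right
    return res                  # note: returns "" for n == 0, like the plperl

# The loop agrees with Python's built-in binary formatting:
assert bin_str(12) == format(12, "b") == "1100"
```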
Christopher Kings-Lynne wrote:
Hi guys,
How would I
On Tue, Nov 29, 2005 at 09:46:13PM -0500, Andrew Dunstan wrote:
create or replace function bin(bigint) returns text language plperl as $$
my $arg = $_[0] + 0;
my $res = "";
while ($arg)
{
    $res = ($arg % 2) . $res;
    $arg >>= 1;
}
return $res;
$$;
Any reason not to use
create or replace function bin(bigint) returns text language plperl as $$
my $arg = $_[0] + 0;
my $res = "";
while ($arg)
{
    $res = ($arg % 2) . $res;
    $arg >>= 1;
}
return $res;
$$;
Any reason not to use sprintf("%b", $_[0])?
All very well and good, but it has to be PL/SQL preferably or
On Tue, Nov 29, 2005 at 07:57:58PM -0700, Michael Fuhr wrote:
Any reason not to use sprintf("%b", $_[0])?
Or something like this in SQL or PL/pgSQL:
test=> SELECT ltrim(textin(bit_out(12::bit(64))), '0');
 ltrim
-------
 1100
(1 row)
--
Michael Fuhr
Or something like this in SQL or PL/pgSQL:
test=> SELECT ltrim(textin(bit_out(12::bit(64))), '0');
 ltrim
-------
 1100
(1 row)
Sweet. Good old I/O functions.
Chris
---(end of broadcast)---
TIP 6: explain analyze is your friend
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
test=> SELECT ltrim(textin(bit_out(12::bit(64))), '0');
 ltrim
-------
 1100
(1 row)
Sweet. Good old I/O functions.
Who needs the I/O functions? Just cast int to bit(n).
regards, tom lane
Tom Lane wrote:
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
test=> SELECT ltrim(textin(bit_out(12::bit(64))), '0');
 ltrim
-------
 1100
(1 row)
Sweet. Good old I/O functions.
Who needs the I/O functions? Just cast int to bit(n).
Then how do you remove all leading zeros,
Michael Fuhr wrote:
On Tue, Nov 29, 2005 at 09:46:13PM -0500, Andrew Dunstan wrote:
create or replace function bin(bigint) returns text language plperl as $$
my $arg = $_[0] + 0;
my $res = "";
while ($arg)
{
    $res = ($arg % 2) . $res;
    $arg >>= 1;
}
return $res;
$$;
Any reason not
Hi,
I have to add a function in varlena.c named
btcasecmp(PG_FUNCTION_ARGS),
so what other information do I have to add to the source?
I know a little bit about it.
I have added that function in
src/backend/utils/fmgrtab.c and assigned a new OID, but when I installed,
that function definition is not appearing
On Wed, Nov 30, 2005 at 10:40:42AM +0530, sandeep satpal wrote:
Hi,
I have to add a function in varlena.c named
btcasecmp(PG_FUNCTION_ARGS),
so what other information do I have to add to the source?
I know a little bit about it.
I have added that function in
src/backend/utils/fmgrtab.c and assigned a
On Tue, Nov 29, 2005 at 10:53:38PM +, Simon Riggs wrote:
On Tue, 2005-11-29 at 17:21 -0500, Tom Lane wrote:
regression=# explain select * from tenk1 where unique1 = any (array(select
f1 from int4_tbl));
snip
So we could teach the planner to transform:
IN (subselect)
into
= ANY (ARRAY(subselect))
Simon Riggs [EMAIL PROTECTED] writes:
IMHO the only way to do joins that access partitions is to do the
constraint exclusion at run time, but I can see thats a longer
conversation than I can start right now.
My experience in Oracle was that you can see three different types of
partition
On Tue, 2005-11-29 at 18:51 -0500, Bruce Momjian wrote:
Please let me back up and ask a more simplistic question. I understand
the idea of allowing COPY to insert rows with less locking, but I am
wondering about the NOLOGGING idea. On commit, we must guarantee that
all the rows are in the
On Wednesday, 30 November 2005 10:15 +0800, Christopher
Kings-Lynne wrote:
Hi guys,
How would I go about implementing MySQL's BIN() function easily in PL/SQL.
mysql> SELECT BIN(12);
        -> '1100'
Basically it converts a bigint to a string containing 1's and 0's.
I've tried messing
Martijn van Oosterhout kleptog@svana.org writes:
Do these constructs have the same semantics w.r.t. NULL?
I think so, though it'd be good to read the spec closely.
Currently arrays can't have nulls
That's so last week ;-)
regards, tom lane
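[Editor's note: the NULL-semantics question above can be made concrete with a toy Python model of SQL's three-valued logic. This is my illustration, not code from the thread: with a NULL in the list, `x IN (...)` yields UNKNOWN for a non-match rather than FALSE, and a rewrite to `= ANY (array)` would need to preserve that.]

```python
# Toy model of SQL three-valued logic for `x IN (v1, v2, ...)`.
NULL = object()  # stand-in for SQL NULL / UNKNOWN

def sql_in(x, values):
    """Return True, False, or NULL following SQL's IN semantics."""
    if x is NULL:
        return NULL                 # NULL IN (...) is never TRUE or FALSE
    saw_null = False
    for v in values:
        if v is NULL:
            saw_null = True         # an unknown value might have matched
        elif v == x:
            return True             # definite match
    return NULL if saw_null else False  # non-match is UNKNOWN if a NULL was present
```

For example, `sql_in(2, [1, NULL])` yields NULL rather than False, which is exactly the case a planner rewrite must not change.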
On Wed, Nov 30, 2005 at 07:42:36AM +0100, Tino Wildenhain wrote:
In python, I usually go like this:
In Ruby (and therefore in PL/Ruby) you could do this:
10.to_s(2)
=> "1010"
10.to_s(16)
=> "a"
--
Michael Fuhr
Simon Riggs [EMAIL PROTECTED] writes:
[...context omitted...]
We would need to flush all the blocks in the table out of cache at
commit time, for that table only.
This seems strikingly close to the Old Concept of temp tables, which
we got rid of for good-and-sufficient reasons. You might want
On Wednesday, 30 November 2005 00:03 -0700, Michael Fuhr wrote:
On Wed, Nov 30, 2005 at 07:42:36AM +0100, Tino Wildenhain wrote:
In python, I usually go like this:
In Ruby (and therefore in PL/Ruby) you could do this:
10.to_s(2)
=> "1010"
10.to_s(16)
=> "a"
is there a