Florian,
if you set the transaction isolation level SERIALIZABLE in MySQL/InnoDB,
then InnoDB uses next-key locking in every SELECT, and transactions really
are serializable in the mathematical sense. I think the same holds for DB2
and MS SQL Server.
PostgreSQL and Oracle use a loophole of SQL-19
Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
> gmake -C parser parse.h
> gmake[3]: Entering directory
> `/usr/home/chriskl/pgsql-server/src/backend/parser'
> bison -y -d gram.y
> gram.y:1820.51-52: $$ of `OptLocation' has no declared type
> gram.y:1821.99-100: $$ of `OptLocation' has no dec
Doh! False alarm guys - was caused by erroneous local modifications :(
Chris
On Sun, 21 Sep 2003, Christopher Kings-Lynne wrote:
> gmake -C parser parse.h
> gmake[3]: Entering directory
> `/usr/home/chriskl/pgsql-server/src/backend/parser'
> bison -y -d gram.y
> gram.y:1820.51-52: $$ of `OptLo
> Wouldn't it be useful, though, to implement a "KILL" or "CANCEL" SQL
> command that takes a backend ID as its argument (and, of course, does
> the appropriate checks of whether you're a superuser or the owner of
> the backend) and sends the appropriate signal to the target backend?
>
> That would
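As context for this proposal: later PostgreSQL releases grew exactly this facility as built-in functions rather than a new SQL command. A sketch of the modern equivalents (function names are from current releases, not 7.x; the pid 12345 is illustrative):

```sql
SELECT pid, usename, query FROM pg_stat_activity;  -- locate the backend
SELECT pg_cancel_backend(12345);     -- sends SIGINT: cancels the query
SELECT pg_terminate_backend(12345);  -- sends SIGTERM: exits the backend
-- Permission is denied unless you are a superuser or own the target
-- backend, matching the check proposed in this thread.
```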
gmake -C parser parse.h
gmake[3]: Entering directory
`/usr/home/chriskl/pgsql-server/src/backend/parser'
bison -y -d gram.y
gram.y:1820.51-52: $$ of `OptLocation' has no declared type
gram.y:1821.99-100: $$ of `OptLocation' has no declared type
Chris
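For anyone hitting the same bison diagnostic: it means a nonterminal's `$$` is used in an action but the nonterminal was never bound to a `%union` member. The usual fix is a `%type` declaration in the first section of gram.y (the member name below is only an example, and `OptLocation` itself came from Chris's local changes):

```yacc
/* Declarations section of gram.y: bind the nonterminal to a %union
   member so bison knows what type $$ has in its actions.  <ival> is
   an illustrative member name, not necessarily the right one. */
%type <ival> OptLocation
```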
Tom Lane wrote:
> "Paulo Scardine" <[EMAIL PROTECTED]> writes:
> >> I trust when you say "kill", you really mean "send SIGINT" ...
>
> > I'm sending a SIGTERM. Would SIGINT be more appropriate?
>
> Yes --- that would actually cancel the query, not cause the backend to
> shut down.
Ahh...this is
Christopher Browne <[EMAIL PROTECTED]> writes:
> It's worth elaborating on the answers here
Agreed.
> This also begs two other questions.
> 1. What, _exactly_, is the aggregate select getting?
> The assumption made in Florian's article is that
> "SELECT COUNT(*) from items"
> i
Quoth [EMAIL PROTECTED] (Tom Lane):
> Florian Weimer <[EMAIL PROTECTED]> writes:
>> Is this a bug, or is SQLxx serializability defined in different terms?
>
> Strictly speaking, we do not guarantee serializability because we do not
> do predicate locking. See for example
> http://archives.postgres
Florian Weimer <[EMAIL PROTECTED]> writes:
> Is this a bug, or is SQLxx serializability defined in different terms?
Strictly speaking, we do not guarantee serializability because we do not
do predicate locking. See for example
http://archives.postgresql.org/pgsql-general/2003-01/msg01581.php
AFA
Tom Lane wrote:
> Your idea of reducing id_provider to id_class using a separate query
> seems like a good one to me --- that will allow the planner to generate
> different plans depending on which id_class value is involved.
However, it is not a natural way to approach the problem;
am I wrong?
Gaetano
Hunter Hillegas <[EMAIL PROTECTED]> writes:
> I cannot build the latest release on OS X Jaguar.
> Running GCC 3.3 from Apple:
It seems "-traditional-cpp" has become nontraditional in 3.3. Or
possibly Apple changed their system header files in a way that broke
that preprocessor. What's certain is
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> Postgresql supports Serializable transactions, which are 100% ACID
> compliant.
How can I activate it? 8-)
Yes, I know about SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, please
read on.
Given the two tables:
CREATE TABLE items (item INTEGER);
CR
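Florian's example is cut off above, but the shape of the anomaly he goes on to describe is the classic phantom/write-skew case. A minimal sketch of that kind of schedule (schema and values are illustrative, not his exact script):

```sql
-- Two concurrent sessions, both SERIALIZABLE.
-- Session A:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT COUNT(*) FROM items;     -- sees 0 rows
-- Session B concurrently runs the same three statements, also sees 0.
INSERT INTO items VALUES (1);   -- A acts on its count of 0
COMMIT;
-- Session B:
INSERT INTO items VALUES (2);   -- B also acts on a count of 0
COMMIT;                         -- both commits succeed
-- No serial order of A and B lets both transactions observe an empty
-- table, so the outcome is not serializable: without predicate locking,
-- neither SELECT conflicts with the other session's INSERT.
-- (PostgreSQL 9.1+ SERIALIZABLE, with SSI, would abort one of them.)
```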
Manfred Spraul <[EMAIL PROTECTED]> writes:
> MAX_ALIGNOF affects the on-disk format, correct?
Right, it could affect placement of fields on-disk. I was thinking we
could change it as an easy test, but maybe not ...
If you set up the shared buffers at an appropriate offset, that should
get most o
Tom Lane wrote:
> Manfred Spraul <[EMAIL PROTECTED]> writes:
>> ... Initially I tried to increase MAX_ALIGNOF to 16, but
>> the result didn't work:
> You would need to do a full recompile and initdb to alter MAX_ALIGNOF.
I think I did that, but it still failed. 7.4cvs works, I'll ignore it.
MAX_AL
Manfred Spraul <[EMAIL PROTECTED]> writes:
> ... Initially I tried to increase MAX_ALIGNOF to 16, but
> the result didn't work:
You would need to do a full recompile and initdb to alter MAX_ALIGNOF.
However, if you are wanting to raise it past about 8, that's probably
not the way to go anyway; it
Tom Lane wrote:
> Oh, pgbench ;-). Are you aware that you need a "scale factor" (-s)
> larger than the number of clients to avoid unreasonable levels of
> contention in pgbench?
No. What about adding a few reasonable examples to README? I've switched
to "pgbench -c 10 -s 11 -t 1000 test". Is that ok?
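A sketch of what that invocation implies (the scale factor is fixed when the test database is initialized with -i; repeating -s at run time only tells older pgbench versions what scale to assume):

```shell
# Initialize the pgbench tables at scale factor 11: one branches row
# per scale unit, so 11 branch rows for 10 clients keeps the clients
# from all contending for the same branch row.
pgbench -i -s 11 test
# Run 10 clients, 1000 transactions each:
pgbench -c 10 -s 11 -t 1000 test
```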
[EMAIL PROTECTED] writes:
>Is there a compile option or any other tweak to restrict postgresql
> data files size (the files that are usually under /usr/local/pgsql/data
> directory) to a certain limit, for example no file should be bigger
> than 10Mb. Thanks in advance
Check out RELSEG_SIZE i
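RELSEG_SIZE is counted in BLCKSZ-sized blocks (8 KB by default), and in the 7.x tree it lives in a configuration header. The value below is what a 10 MB cap would work out to, as an illustrative sketch; changing it requires a full recompile and a fresh initdb.

```c
/* src/include/pg_config.h (after configure), 7.x era -- illustrative:
 * segment size in blocks; 1280 blocks * 8 KB/block = 10 MB per file.
 * The stock value is 131072 blocks = 1 GB per segment. */
#define RELSEG_SIZE 1280
```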
Hello:
Looks right to me. Either you have some small typo in your code, or the
backend does ... let me know which ...
Note that "insufficient data left in message" could arise from
misformatting of an individual array element with respect to its
individual value size, not only from a mistake at t
Hi,
I am a Ph.D. candidate at the Institute of Software, Chinese Academy of
Sciences, and my major is computer software and theory. My research
interests are mainly focused on database management and other system
software. I want to be a development partner of your project. Would you
like to introduce me
Hi,
Is there a compile option or any other tweak to restrict postgresql
data files size (the files that are usually under /usr/local/pgsql/data
directory) to a certain limit, for example no file should be bigger
than 10Mb. Thanks in advance
Wally
On Sat, 2003-09-20 at 06:14, Jeff wrote:
> Well, that depends. First, turn on stats collecting and run "VACUUM
> ANALYZE". That will collect some data about your data which helps the
> planner make a good choice.
The statistics collector and the statistics collected by ANALYZE have
nothing to do w
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> What about the wrong row expected ?
After I looked more closely, I realized that the planner hasn't any hope
of getting a really correct answer on that. You've got
WHERE ... ud.id_class = cd.id_class AND
cd.id_provider
Manfred Spraul <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> I'd be more interested in asking why you're seeing long series
>> of semops in the first place.
> I couldn't figure out what exactly causes the long series of semops. I
> tried to track it down (enable LOCK_DEBUG):
> - postgres 7.3.3
Jeff <[EMAIL PROTECTED]> wrote:
> Then run "VACUUM ANALYZE" every once in a while (depending on how
> fast your data changes), like every night for instance.
Consider VACUUM and ANALYZE somewhat separately.
You need to ANALYZE any time the distribution of the data changes.
You need to VACUUM any
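Spelled out as commands, using the items table from earlier in the thread as a stand-in:

```sql
ANALYZE items;        -- refresh planner statistics; cheap, run whenever
                      -- the data distribution has shifted
VACUUM items;         -- reclaim space from dead tuples left by
                      -- UPDATE/DELETE; driven by write volume instead
VACUUM ANALYZE items; -- the combined form does both in one pass
```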
Joshua D. Drake wrote:
> I need a hug.
*HUG*
Tom Lane wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>> The select takes long:
>> Postgres7.3.3: average 4000 ms
>> Postgres7.4b2: average 2600 ms
>> you can experiment yourself with the dump that I gave you
> Hm. I tried to duplicate your results. I'm getting about 5400 msec
> versus 4200 msec, whic
Jinqiang Han wrote:
> hello, all.
>
> I have a table about 2 million rows. when I run "select * from table1" in
> psql, it will take me about 10 minutes to get the result. I wonder if
> postgresql can immediately return result like db2.
If you're executing in psql, it's probably trying to load t
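One way around loading the whole result client-side is to fetch it through a cursor, so rows stream back in batches (table1 as in the question):

```sql
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM table1;
FETCH 100 FROM c;   -- first batch returns almost immediately
FETCH 100 FROM c;   -- subsequent batches on demand
CLOSE c;
COMMIT;
```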
Tom Lane wrote:
> AFAIK, semops are not done unless we actually have to yield the
> processor, so saving a syscall or two in that path doesn't sound like a
> big win. I'd be more interested in asking why you're seeing long series
> of semops in the first place.
Virtually all semops yield the processor
For now I simply renamed the function. We can look for the reason later.
I'd prefer to get ecpg release ready first. :-)
Michael
--
Michael Meskes
Email: Michael at Fam-Meskes dot De
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Lin
On Saturday 20 September 2003 10:38, Jinqiang Han wrote:
> hello, all.
This isn't really a hackers question - perhaps try the "general", "sql", etc.
lists in future. This list is for questions about the source code of PG.
> I have a table about 2 million rows. when I run "select * from table1" in
>
hello, all.
I have a table with about 2 million rows. When I run "select * from table1" in psql, it
takes about 10 minutes to get the result. I wonder if PostgreSQL can
return results immediately like DB2.
After that I created an index on a column named id. The time executing "select * from
tab