Not to mention palloc, another extremely fundamental and non-reentrant
subsystem.
Possibly we could work on making all that stuff re-entrant, but it would
be a huge amount of work for a distant and uncertain payoff.
Right. I think it makes more sense to try to get parallelism working
first.
On Wed, 21 Sep 2011 18:13:07 +0200, Tom Lane wrote:
Heikki Linnakangas writes:
On 21.09.2011 18:46, Tom Lane wrote:
The idea that I was toying with was to allow the regular SQL-callable
comparison function to somehow return a function pointer to the
alternate comparison function,
You coul
Can the problem generally be stated as "tuples seeing multiple
updates in the same transaction"?
I think that every time PostgreSQL is used with an ORM, there is
a certain amount of multiple updates taking place. I have actually
been reworking the client side to get around multiple updates, since t
The real problem here is that we're sending records to the slave which
might cease to exist on the master if it unexpectedly reboots. I
believe that what we need to do is make sure that the master only
sends WAL it has already fsync'd
How about this:
- pg records somewhere the xlog position
The Linux kernel also uses it when it's available, see e.g.
http://tomoyo.sourceforge.jp/cgi-bin/lxr/source/arch/x86/crypto/crc32c-intel.c
If you guys are interested I have a Core i7 here, could run a little
benchmark.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
On Sunday 30 May 2010 18:29:31 Greg Stark wrote:
On Sun, May 30, 2010 at 4:54 AM, Tom Lane wrote:
> I read through that thread and couldn't find much discussion of
> alternative CRC implementations --- we spent all our time on arguing
> about whether we needed 64-bit CRC or not.
SSE4.2 has a h
On Tue, 30 Mar 2010 13:01:54 +0200, Peter Eisentraut
wrote:
On tis, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
on fast systems pg_dump is completely CPU bottlenecked
Might be useful to profile why that is. I don't think pg_dump has
historically been developed with CPU efficien
So, if the PHP devs don't have time to learn to do things right, then we
have to find time to learn to do things wrong? Seems like a nonsense
argument to me.
The best reply I ever got from the phpBB guys, on I don't remember which
question, was:
"WE DO IT THIS WAY BECAUSE WE WANT TO SUPPORT MYSQL 3.x
Oh, this is what I believe MySQL calls "loose index scans". I'm
Exactly:
http://dev.mysql.com/doc/refman/5.0/en/loose-index-scan.html
actually looking into this as we speak,
Great! Will it support the famous "top-n by category"?
but there seems to be a
non-trivial amount of work to b
As far as I can tell, we already do index skip scans:
This feature is great but I was thinking about something else, like SELECT
DISTINCT, which currently does a seq scan, even if x is indexed.
Here is an example. In both cases it could use the index to skip all
non-interesting rows, pulli
My opinion is that PostgreSQL should accept any MySQL syntax and return
warnings. I believe that we should accept even InnoDB syntax and turn it
immediately into PostgreSQL tables. This would allow people with no
interest in SQL to migrate from MySQL to PostgreSQL without any harm.
A solution
What about catching the error in the application and INSERT'ing into the
current preprepare.relation table? The aim would be to do that in dev or
in pre-prod environments, then copy the table content in production.
Yep, but it's a bit awkward and time-consuming, and not quite suited to
ORM-g
On Thu, 18 Feb 2010 16:09:42 +0100, Dimitri Fontaine
wrote:
"Pierre C" writes:
The problem with prepared statements is that they're a chore to use in
web apps, especially PHP, since after grabbing a connection from the
pool, you don't know if it has prepared plans in it or
On Tue, 16 Feb 2010 15:22:00 +0100, Greg Stark wrote:
There's a second problem though. We don't actually know how long any
given query is going to take to plan or execute. We could just
remember how long it took to plan and execute last time or how long it
took to plan last time and the average