I have not kept up with PostgreSQL changes and have just been using it. A
co-worker recently told me that you need the word CONCURRENTLY in CREATE
INDEX to avoid table locking. I called BS on this because to my knowledge
PostgreSQL does not lock tables. I referenced this page in the
documentation:
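For reference, a minimal sketch of the two forms (table and column names
are hypothetical). A plain CREATE INDEX blocks writes, though not reads,
for the duration of the build, which is the locking being referred to:

    -- Plain build: takes a lock that blocks INSERT/UPDATE/DELETE on the
    -- table until the index is finished.
    CREATE INDEX orders_customer_idx ON orders (customer_id);

    -- Concurrent build (8.2+): writes proceed, at the cost of a slower,
    -- two-pass build that cannot run inside a transaction block.
    CREATE INDEX CONCURRENTLY orders_customer_idx2 ON orders (customer_id);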
I am using PostgreSQL's SSL support and the conventions for the key and
certificates don't make sense from the client perspective, especially
under Windows.
I am proposing a few simple changes:
Adding two API functions:
void PQsetSSLUserCertFileName(char *filename)
{
    /* sketch only: the original posting is truncated here; presumably
     * the name is stored for use when the SSL connection is set up */
    user_crt_filename = filename;
}
I have been looking at this thread for a bit and want to interject an idea.
A couple of years ago, I offered a patch to the GUC system that added a
number of abilities; two that were left out were:
(1) Specify a configuration file on the command line.
(2) Allow the inclusion of a configuration file from
Mark Woodward wrote:
I have been looking at this thread for a bit and want to interject an
idea.
A couple of years ago, I offered a patch to the GUC system that added a
number of abilities; two that were left out were:
(1) Specify a configuration file on the command line.
(2) Allow the inclusion
Shouldn't this work?
select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;
ERROR: column y.ycis_id must appear in the GROUP BY clause or be used
in an aggregate function
If I am asking for a specific column value, should I, technically
speaking, need to group by that column?
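For what it's worth, the standard workarounds are simple; a sketch using
the table from the thread:

    -- Either group by the column the WHERE clause pins to one value:
    select ycis_id, min(tindex), avg(tindex)
        from y where ycis_id = 15 group by ycis_id;
    -- ...or select the constant directly and skip the grouping:
    select 15 as ycis_id, min(tindex), avg(tindex)
        from y where ycis_id = 15;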
Stephen Frost wrote:
select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;
But back to the query: the issue is that the ycis_id value is
included with the return values requested (a single row value alongside
aggregate values, without being grouped) - if ycis_id is not unique you
Hi, Mark,
Mark Woodward wrote:
Stephen Frost wrote:
select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;
But back to the query: the issue is that the ycis_id value is
included with the return values requested (a single row value alongside
aggregate values, without being grouped
Hi, Mark,
Mark Woodward wrote:
Shouldn't this work?
select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;
ERROR: column y.ycis_id must appear in the GROUP BY clause or be used
in an aggregate function
If I am asking for a specific column value, should I, technically
Mark Woodward wrote:
Stephen Frost wrote:
select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;
But back to the query: the issue is that the ycis_id value is
included with the return values requested (a single row value alongside
aggregate values, without being grouped
Mark Woodward wrote:
select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;
I still assert that there will always be only one row from this query.
This is an aggregate query, so all the rows with ycis_id = 15 will be
aggregated. Since ycis_id is the identifying part
On Tue, Oct 17, 2006 at 02:41:25PM -0400, Mark Woodward wrote:
The output column ycis_id is unambiguously a single value with regards to
the query. Shouldn't PostgreSQL know this? AFAIR, I've used this
exact type of query before, either on PostgreSQL or another system, maybe
Oracle
Mark Woodward wrote:
Shouldn't this work?
select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;
ERROR: column y.ycis_id must appear in the GROUP BY clause or be
used in an aggregate function
This would require a great deal of special-casing, in particular knowledge
On Oct 17, 2006, at 15:19, Peter Eisentraut wrote:
Mark Woodward wrote:
Shouldn't this work?
select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;
ERROR: column y.ycis_id must appear in the GROUP BY clause or be
used in an aggregate function
This would require a great
Clinging to sanity, [EMAIL PROTECTED] (Mark Woodward) mumbled into
her beard:
What is the point of writing a proposal if there is a threat that it
will be rejected if one of the people who would do the rejecting
doesn't at least outline what would be acceptable?
If your proposal is merely let's
On 10/10/06, Mark Woodward [EMAIL PROTECTED] wrote:
I think the idea of virtual indexes is pretty interesting, but
ultimately a lesser solution to a more fundamental issue, and that would
be hands-on control over the planner. Estimating the effect of an index
on a query prior to creating
Mark Woodward [EMAIL PROTECTED] writes:
The analyzer, at least the last time I checked, does not recognize these
relationships.
The analyzer is imperfect, but arguing from any particular imperfection
is weak because someone will just come back and say we should work on that
problem
Mark Woodward [EMAIL PROTECTED] writes:
I would say that a simpler planner with better hints
will always be capable of creating a better query plan.
This is demonstrably false: all you need is an out-of-date hint, and
you can have a worse plan.
That doesn't make it false, it makes it higher
Mark,
First off, I'm going to request that you (and other people) stop hijacking
Simon's thread on hypothetical indexes. Hijacking threads is an
effective way to get your ideas rejected out of hand, just because the
people whose thread you hijacked are angry with you.
So please observe
Since you're the one who wants hints, that's kind of up to you to
define.
Write a specification and make a proposal.
What is the point of writing a proposal if there is a threat that it
will be rejected if one of the people who would do the rejecting doesn't
at least outline what would be
Simon Riggs [EMAIL PROTECTED] writes:
- RECOMMEND command
Similar in usage to an EXPLAIN, the RECOMMEND command would return a
list of indexes that need to be added to get the cheapest plan for a
particular query (no explain plan result though).
Both of these seem to assume that EXPLAIN
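Purely as a hypothetical sketch of the proposed usage (this syntax is
invented here for illustration and exists in no PostgreSQL release):

    RECOMMEND select client, item from ratings where client = 42;
    -- imagined output: the index definitions that would give the cheapest
    -- plan, e.g. CREATE INDEX ratings_client_idx ON ratings (client)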
Mark Woodward [EMAIL PROTECTED] writes:
Whenever someone actually writes a pg_upgrade, we'll institute a policy
to restrict changes it can't handle.
IMHO, *before* any such tool *can* be written, a set of rules must be
enacted regulating catalog changes.
That one is easy
On Mon, Oct 09, 2006 at 11:50:10AM -0400, Mark Woodward wrote:
That one is easy: there are no rules. We already know how to deal with
catalog restructurings --- you do the equivalent of a pg_dump -s and
reload. Any proposed pg_upgrade that can't cope with this will be
rejected out
Mark,
No one could expect that this could happen by 8.2, or the release after
that, but as a direction for the project, the directors of the
PostgreSQL project must realize that dump/restore is becoming like
the old locking-vacuum problem. It is a *serious* issue for PostgreSQL
Mark Woodward [EMAIL PROTECTED] writes:
Not to cause any arguments, but this is sort of a standard discussion that
gets brought up periodically, and I was wondering if there has been any
softening of the attitudes against an in-place upgrade, or movement to
not having to dump and restore
I am using the netflix database:
        Table "public.ratings"
 Column |   Type   | Modifiers
--------+----------+-----------
 item   | integer  |
 client | integer  |
 day    | smallint |
 rating | smallint |
The query was executed as:
psql -p 5435 -U pgsql -t -A -c select client, item, rating, day
Mark Woodward [EMAIL PROTECTED] writes:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question: it looks like the kernel killed psql, and not the postmaster.
Not too surprising.
Question: is this a bug in psql?
It's
On Thu, Oct 05, 2006 at 11:56:43AM -0400, Mark Woodward wrote:
The query was executed as:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question: it looks like the kernel killed psql, and not the postmaster.
The
postgresql
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like this. For the moment I'd
suggest using COPY instead.
That's sort of what I was afraid of. I am trying to get 100 million
records into a text file in a specific order.
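A sketch of the COPY route for that export (output path hypothetical;
COPY from a SELECT needs the 8.2 feature mentioned elsewhere in this
thread, and writing a server-side file requires superuser):

    COPY (select client, item, rating, day from ratings order by client)
        TO '/tmp/netflix.txt';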
Tom Lane wrote:
Mark Woodward [EMAIL PROTECTED] writes:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like
On Thu, 2006-10-05 at 14:53 -0400, Luke Lonergan wrote:
Is that in the release notes?
Yes: "Allow COPY to dump a SELECT query" (Zoltan Boszormenyi, Karel Zak)
I remember this discussion, it is cool when great features get added.
Not to cause any arguments, but this is sort of a standard discussion that
gets brought up periodically, and I was wondering if there has been any
softening of the attitudes against an in-place upgrade, or movement to
not having to dump and restore for upgrades.
I am aware that this is a difficult
Mark Woodward wrote:
I am currently building a project that will have a huge number of
records, 1/2 TB of data. I can't see how I would ever be able to
upgrade PostgreSQL on this system.
Slony will help you upgrade (and downgrade, for that matter) with no
downtime at all, pretty much
Indeed. The main issue for me is that the dumping and replication
setups require at least 2x the space of one db. That's 2x the
hardware which equals 2x $$$. If there were some tool which modified
the storage while postgres is down, that would save lots of people
lots of money.
It's time and
I signed up for the Netflix Prize (www.netflixprize.com) and downloaded
their data and have imported it into PostgreSQL. Here is how I created the
table:
        Table "public.ratings"
 Column |  Type   | Modifiers
--------+---------+-----------
 item   | integer |
 client | integer |
 rating | integer
I signed up for the Netflix Prize (www.netflixprize.com)
and downloaded their data and have imported it into PostgreSQL.
Here is how I created the table:
I signed up as well, but have the table as follows:
CREATE TABLE rating (
movie SMALLINT NOT NULL,
person INTEGER NOT NULL,
Mark Woodward [EMAIL PROTECTED] writes:
The one thing I notice is that it is REAL slow.
How fast is your disk? Counting on my fingers, I estimate you are
scanning the table at about 47MB/sec, which might or might not be
disk-limited...
I'm using 8.1.4. The rdate field looks something like
Greg Sabino Mullane [EMAIL PROTECTED] writes:
CREATE TABLE rating (
movie SMALLINT NOT NULL,
person INTEGER NOT NULL,
rating SMALLINT NOT NULL,
viewed DATE NOT NULL
);
You would probably be better off putting the two smallints first, followed
by the integer and date.
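The reasoning, spelled out as a sketch (byte counts assume the usual
4-byte alignment of integer and date):

    -- smallint, integer, smallint, date:
    --   2 + 2 padding + 4 + 2 + 2 padding + 4 = 16 bytes of column data
    -- smallint, smallint, integer, date:
    --   2 + 2 + 4 + 4 = 12 bytes, no padding
    CREATE TABLE rating (
        movie  SMALLINT NOT NULL,
        rating SMALLINT NOT NULL,
        person INTEGER  NOT NULL,
        viewed DATE     NOT NULL
    );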
Mark Woodward [EMAIL PROTECTED] writes:
The rating, however, is one char, 1~9. Would making it a char(1) buy
anything?
No, that would actually hurt because of the length word for the char
field. Even if you used the "char" type, which really is only one byte,
you wouldn't win anything because
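This is easy to check directly; a sketch using pg_column_size():

    select pg_column_size('7'::char(1)) as char1,     -- length word + 1 byte
           pg_column_size(7::smallint)  as int2,      -- 2 bytes
           pg_column_size('7'::"char")  as one_byte;  -- internal type, 1 byte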
I have a system by which I store complex data in PostgreSQL as an XML
string. I have a simple function that can return a single value.
I would like to return sets and sets of rows from the data. This is not a
huge problem, as I've written a few of these functions. The question I'd
like to put out
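A minimal sketch of the set-returning shape being described (all names
hypothetical, and the actual XML parsing elided):

    CREATE TYPE pair AS (key text, value text);
    CREATE FUNCTION doc_pairs(doc text) RETURNS SETOF pair AS $$
        -- real code would parse the XML string in "doc"; placeholder
        -- rows keep the sketch runnable:
        SELECT 'user'::text, 'mark'::text
        UNION ALL
        SELECT 'theme'::text, 'dark'::text;
    $$ LANGUAGE sql;
    SELECT * FROM doc_pairs('<session>...</session>');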
On Tue, Jul 04, 2006 at 11:59:27AM +0200, Zdenek Kotala wrote:
Mark,
I don't know exactly how it will work in Postgres, but my expectations
are:
Mark Woodward wrote:
Is there a difference in PostgreSQL performance between these two
different strategies:
if (!exec("update foo set bar
Is there a difference in PostgreSQL performance between these two
different strategies:
if (!exec("update foo set bar='blahblah' where name = 'xx'"))
    exec("insert into foo(name, bar) values('xx','blahblah')");
or
exec("delete from foo where name = 'xx'");
exec("insert into foo(name, bar)
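As an aside from well after this thread: since PostgreSQL 9.5 the same
upsert can be written in one statement, assuming a unique constraint on
name:

    INSERT INTO foo (name, bar) VALUES ('xx', 'blahblah')
        ON CONFLICT (name) DO UPDATE SET bar = EXCLUDED.bar;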
On Mon, 2006-06-26 at 09:10, Mark Woodward wrote:
On Fri, 2006-06-23 at 17:27, Bruce Momjian wrote:
Jonah H. Harris wrote:
On 6/23/06, Tom Lane [EMAIL PROTECTED] wrote:
What I see in this discussion is a huge amount of the "grass must
On Fri, Jun 23, 2006 at 06:37:01AM -0400, Mark Woodward wrote:
While we all know session data is, at best, ephemeral, people still want
some sort of persistence; thus, you need a database. For mcache I have a
couple of plugins that have a wide range of options, from read/write at
startup
I would set the SO_SNDBUF to 32768.
Hi,
I see a performance issue on win32. This problem is caused by the
issue described at the following URL:
http://support.microsoft.com/kb/823764/EN-US/
On win32, the default SO_SNDBUF value is 8192 bytes, and libpq's buffer is
8192 too.
pqcomm.c:117
#define
We have definitely seen weird timing issues sometimes when both client
and server were on Windows, but have been unable to pin down exactly what
caused it. From Yoshiyuki's other mail it looks like this could possibly be
it, since he did experience a speedup in the range we've been looking for in
those
On Fri, 2006-06-23 at 17:27, Bruce Momjian wrote:
Jonah H. Harris wrote:
On 6/23/06, Tom Lane [EMAIL PROTECTED] wrote:
What I see in this discussion is a huge amount of the "grass must be
greener on the other side" syndrome, and hardly any recognition that
every
Heikki Linnakangas wrote:
On Mon, 26 Jun 2006, Jan Wieck wrote:
On 6/25/2006 10:12 PM, Bruce Momjian wrote:
When you are using update chaining, you can't mark that index row as
dead because it actually points to more than one row on the page,
some are non-visible, some are
On 6/24/06, Mark Woodward [EMAIL PROTECTED] wrote:
I originally suggested a methodology for preserving MVCC and everyone is
confusing it with update-in-place; this is not what I intended.
Actually, you should've presented your idea as performing MVCC the way
Firebird does... the idea
On 6/24/2006 9:23 AM, Mark Woodward wrote:
On Sat, 24 Jun 2006, Mark Woodward wrote:
I'm probably mistaken, but aren't there already forward references in
tuples to later versions? If so, I'm only suggesting reversing the
order and referencing the latest version.
I thought I understood
On 6/23/2006 3:10 PM, Mark Woodward wrote:
This is NOT an in-place update. The whole MVCC strategy of keeping old
versions around doesn't change. The only thing that does change is one
level of indirection. Rather than keep references to all versions of all
rows in indexes, keep only
On Sat, 24 Jun 2006, Mark Woodward wrote:
I'm probably mistaken, but aren't there already forward references in
tuples to later versions? If so, I'm only suggesting reversing the order
and referencing the latest version.
I thought I understood your idea, but now you lost me again. I thought
On 6/24/06, Mark Woodward [EMAIL PROTECTED] wrote:
Currently it looks like this:
ver001->ver002->ver003->...->verN
That's what t_ctid does now, right? Well, that's sort of stupid. Why not
have it do this:
ver001->verN->...->ver003->ver002
Heh, because that's crazy. The first time you insert
On 6/24/06, Mark Woodward [EMAIL PROTECTED] wrote:
In the scenario, as previously outlined:
ver001->verN->...->ver003->ver002--+
  ^--------------------------------+
So you want to always keep an old version around?
Prior to vacuum, it will be there anyway, and after vacuum, the new
version
On 6/24/06, Mark Woodward [EMAIL PROTECTED] wrote:
On 6/24/06, Mark Woodward [EMAIL PROTECTED] wrote:
In the scenario, as previously outlined:
ver001->verN->...->ver003->ver002--+
  ^--------------------------------+
So you want to always keep an old version around?
Prior to vacuum
I originally suggested a methodology for preserving MVCC and everyone is
confusing it with update-in-place; this is not what I intended.
How about a form of vacuum that targets a particular row? Is this
possible? Would it have to be by transaction?
The example is a very active web site; the flow is this:
query for session information
process HTTP request
update session information
This happens for EVERY HTTP request. Chances are that you won't have
concurrent requests for the same row, but you may have well over 100 HTTP
server
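Concretely, the per-request traffic looks something like this (table and
columns hypothetical):

    select data from sessions where session_id = 'abc123';
    -- ...process the HTTP request...
    update sessions set data = '...', last_hit = now()
        where session_id = 'abc123';
    -- every such UPDATE leaves a dead row version behind until vacuum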
I suppose you have a table memberships (user_id, group_id) or something
like it; it should have as few columns as possible; then try regularly
clustering on group_id (maybe once a week) so that all the records for a
particular group are close together. Getting the members of a group to
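A sketch of that maintenance (index name hypothetical; on 8.2 and earlier
the syntax is CLUSTER indexname ON tablename):

    CREATE INDEX memberships_group_idx ON memberships (group_id);
    CLUSTER memberships_group_idx ON memberships;
    -- modern equivalent: CLUSTER memberships USING memberships_group_idx;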
Let me ask a question: you have this hundred million row table. OK, how
much of that table is read/write? Would it be possible to divide the
table into two (or more) tables where one is basically static, only
infrequent inserts and deletes, and the other is highly updated?
Well, all of it is
On Thu, 2006-06-22 at 12:41, Mark Woodward wrote:
Depending on exact details and optimisations done, this can be either
slower or faster than postgresql's way, but they still need to do
something to get transactional visibility rules implemented.
I think they have
Mark Woodward wrote:
In case the number of actively modified rows is only in the tens or
low hundreds of thousands of rows (i.e. the modified set fits in
memory), the continuous vacuum process shows up as just another backend,
not really taking an order of magnitude more resources
Bottom line: there's still lots of low-hanging fruit. Why are people
feeling that we need to abandon or massively complicate our basic
architecture to make progress?
regards, tom lane
I, for one, see a particularly nasty unscalable behavior in the
implementation of
On 6/23/06, Mark Woodward [EMAIL PROTECTED] wrote:
I, for one, see a particularly nasty unscalable behavior in the
implementation of MVCC with regards to updates.
I think this is fairly commonly accepted. The overhead required to
perform an UPDATE in PostgreSQL is pretty heavy. Actually
Tom Lane wrote:
If you're doing heavy updates of a big table then it's likely to end up
visiting most of the table anyway, no? There is talk of keeping a map
of dirty pages, but I think it'd be a win for infrequently-updated
tables, not ones that need constant vacuuming.
I think a lot of
the *last* serious issue.
Debate all you want: vacuum mitigates the problem to varying degrees, but
fixing the problem will be a huge win. If the update behavior gets fixed,
I can't think of a single issue with PostgreSQL that would be a show
stopper.
Rick
On Jun 22, 2006, at 7:59 AM, Mark Woodward wrote:
Clinging to sanity, [EMAIL PROTECTED] (Mark Woodward) mumbled into
her beard:
We all know that PostgreSQL suffers performance problems when rows are
updated frequently prior to a vacuum. The most serious example can be
seen by using PostgreSQL as a session handler for a busy web site. You may
After a long battle with technology, [EMAIL PROTECTED] (Mark
Woodward), an earthling, wrote:
Clinging to sanity, [EMAIL PROTECTED] (Mark Woodward) mumbled into
her beard:
[snip]
1. The index points to all the versions, until they get vacuumed out.
It can't point to all versions, it points
On Thu, 2006-06-22 at 09:59, Mark Woodward wrote:
After a long battle with technology, [EMAIL PROTECTED] (Mark
Woodward), an earthling, wrote:
Clinging to sanity, [EMAIL PROTECTED] (Mark Woodward) mumbled
into
It pointed to *ALL* the versions.
Hmm, OK
On Thu, 2006-06-22 at 10:20, Jonah H. Harris wrote:
On 6/22/06, Alvaro Herrera [EMAIL PROTECTED] wrote:
Hmm, OK, then the problem is more serious than I suspected.
This means that every index on a row has to be updated on every
transaction that modifies that row. Is
Christopher Browne [EMAIL PROTECTED] writes:
After a long battle with technology, [EMAIL PROTECTED] (Mark
Woodward), an earthling, wrote:
Not true. Oracle does not seem to exhibit this problem.
Oracle suffers a problem in this regard that PostgreSQL doesn't; in
Oracle, rollbacks are quite
You mean systems that are designed so exactly that they can't take a 10%
performance change?
No, that's not really the point; performance degrades over time - in one
minute it degraded 10%.
The update to session ratio has a HUGE impact on PostgreSQL. If you have a
thousand active
As you can see, in about a minute at high load, this very simple table
lost about 10% of its performance, and I've seen worse based on update
frequency. Before you say this is an obscure problem, I can tell you it
isn't. I have worked with more than a few projects that had to switch away
What you seem not to grasp at this point is a large web farm, about 10 or
more servers running PHP, Java, ASP, or even Perl. The database is usually
the most convenient and, aside from the particular issue we are talking
about, best suited.
The answer is sticky sessions: each user
We all know that PostgreSQL suffers performance problems when rows are
updated frequently prior to a vacuum. The most serious example can be seen
by using PostgreSQL as a session handler for a busy web site. You may have
thousands or millions of active sessions, each being updated per page hit.
On 6/16/06, Mark Woodward [EMAIL PROTECTED] wrote:
Chris Campbell [EMAIL PROTECTED] writes:
I heard an interesting feature request today: preventing the
execution of a DELETE or UPDATE query that does not have a WHERE
clause.
These syntaxes are required by the SQL spec. Furthermore
Chris Campbell [EMAIL PROTECTED] writes:
I heard an interesting feature request today: preventing the
execution of a DELETE or UPDATE query that does not have a WHERE clause.
These syntaxes are required by the SQL spec. Furthermore, it's easy
to imagine far-more-probable cases in which the
On Wed, Jun 07, 2006 at 07:07:55PM -0400, Mark Woodward wrote:
I guess what I am saying is that PostgreSQL isn't smooth; between
checkpoints and vacuum, it is nearly impossible to make a product that
performs consistently under high load.
Have you tuned the bgwriter and all the vacuum_cost
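For concreteness, the settings being referred to (values illustrative
only; these normally live in postgresql.conf):

    SET vacuum_cost_delay = 10;   -- milliseconds; throttles vacuum I/O
    SET vacuum_cost_limit = 200;  -- cost credits spent per delay round
    SHOW bgwriter_delay;          -- bgwriter pacing, a server-level setting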
On Wed, Jun 07, 2006 at 11:47:45AM -0400, Tom Lane wrote:
Zdenek Kotala [EMAIL PROTECTED] writes:
Koichi Suzuki wrote:
I've once proposed a patch for 64bit transaction ID, but this causes
some overhead to each tuple (XMIN and XMAX).
Did you check performance on 32-bit or 64-bit systems
OK, here's my problem: I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
We want to keep about a year's worth of data at any specific time.
We have triggers that fire if something interesting
Hello, I would like to know where in the source code of Postgres the
code for the aggregate functions min, max, and avg is located.
I wish to develop more statistical aggregate functions, and I prefer to
use C rather than writing them in PL/R.
There is a library in contrib called intagg. I wrote it
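For the built-ins, the transition and final functions mostly live under
src/backend/utils/adt/ (e.g. numeric.c), with the wiring recorded in the
pg_aggregate catalog. A sketch of hooking your own C functions into an
aggregate (library and function names hypothetical):

    CREATE FUNCTION my_accum(numeric[], numeric) RETURNS numeric[]
        AS 'my_stats', 'my_accum' LANGUAGE C STRICT;
    CREATE FUNCTION my_final(numeric[]) RETURNS numeric
        AS 'my_stats', 'my_final' LANGUAGE C STRICT;
    CREATE AGGREGATE my_stat (numeric) (
        sfunc     = my_accum,
        stype     = numeric[],
        finalfunc = my_final,
        initcond  = '{0,0}'
    );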
Mark Woodward wrote:
OK, here's my problem: I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
Maybe if you grouped multiple operations into bigger transactions, the I/O
savings could
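That batching, sketched (table and columns hypothetical); one commit per
batch instead of one per frame amortizes the per-transaction WAL flush:

    BEGIN;
    INSERT INTO frame_events (camera, frame, seen_at) VALUES (1, 101, now());
    INSERT INTO frame_events (camera, frame, seen_at) VALUES (1, 102, now());
    -- ...the rest of this batch's frames...
    COMMIT;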
Jim C. Nasby [EMAIL PROTECTED] writes:
On Mon, Jun 05, 2006 at 11:27:30AM -0400, Tom Lane wrote:
I'm reading this as just another uninformed complaint about libpq's
habit of buffering the whole query result. It's possible that there's
a memory leak in the -A path specifically, but nothing
Mark Woodward wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
On Mon, Jun 05, 2006 at 11:27:30AM -0400, Tom Lane wrote:
I'm reading this as just another uninformed complaint about libpq's
habit of buffering the whole query result. It's possible that
there's
a memory leak in the -A path
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table; what if it could operate
with a query, as:
COPY (select * from mytable where foo='bar') as
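This is essentially the form PostgreSQL 8.2 ended up shipping (target
path hypothetical):

    COPY (select * from mytable where foo = 'bar') TO '/tmp/mytable.dat';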
Mark Woodward wrote:
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table; what if it could operate
with a query, as:
COPY (select * from
Mark Woodward wrote:
Mark Woodward wrote:
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table; what if it could operate
with a query
Mark Woodward wrote:
...
create table as select ...; followed by a copy of that table
if it really is faster than just the usual select fetch?
Why create table?
Just to simulate and time the proposal.
SELECT ... already works over the network, and if COPY from a
select (which would
Mark Woodward wrote:
...
pg_dump -t mytable | psql -h target -c "COPY mytable FROM STDIN"
With a more selective copy, you can use pretty much this mechanism to
limit a copy to a subset of the records in a table.
Ok, but why not just implement this in pg_dump or psql?
Why bother
Allow COPY to output from views
Another idea would be to allow actual SELECT statements in a COPY.
Personally I strongly favor the second option as being more flexible
than the first.
I second that - allowing arbitrary SELECT statements as a COPY source
seems much more powerful and
After re-reading what I just wrote to Andreas about how compression of
COPY data would be better done outside the backend than inside, it
struck me that we are missing a feature that's fairly common in Unix
programs. Perhaps COPY ought to have the ability to pipe its output
to a shell
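PostgreSQL did eventually grow exactly this, as COPY ... TO PROGRAM in
9.3 (superuser-only; path hypothetical):

    COPY mytable TO PROGRAM 'gzip > /var/tmp/mytable.gz';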
On Thu, May 25, 2006 at 08:41:17PM -0300, Rodrigo Hjort wrote:
I think more exactly, the planner can't possibly know how to plan an
indexscan with a leading '%', because it has nowhere to start.
The fact is that an index scan is performed for a LIKE expression only on
a string not preceded by '%',
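A sketch of the distinction (an index on name is assumed; in non-C
locales a text_pattern_ops index is needed for this to apply):

    -- Left-anchored: the planner can descend the btree to 'abc' and scan:
    select * from t where name like 'abc%';
    -- Leading wildcard: no starting point in the btree, so a seqscan:
    select * from t where name like '%abc';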
Dhanaraj M wrote:
I have the following doubts.
1. Does postgres create an index on every primary key? Usually, queries
are performed against a table on the primary key, so an index on it
will be very useful.
Yes, a unique index is used to enforce the primary key.
Well, here is an
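To see the first point concretely: declaring a primary key creates the
unique index implicitly:

    create table t (id integer primary key);
    -- psql's \d t then lists: "t_pkey" PRIMARY KEY, btree (id)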
, but haven't found one
I really like.
Chris
Mark Woodward wrote:
I have a side project that needs to intelligently know if two strings
are contextually similar. Think about how CDDB information is collected
and sorted. It isn't perfect, but there should be enough information to be
usable.
Think
On Sat, May 20, 2006 at 02:29:01PM +0200, Dawid Kuroczko wrote:
On 5/20/06, Lukas Smith [EMAIL PROTECTED] wrote:
The improvements to the installer are great, but there simply needs to
be a packaged solution that adds more of the things people are very
likely to use. From my understanding
What I was hoping someone had was a function that could find the substring
runs in something less than a strlen1*strlen2 number of operations and a
numerically sane way of representing the similarity or difference.
Actually, it is more like strlen1*strlen2*N, where N is the number of valid
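For what it's worth, contrib already ships some of this: levenshtein()
in contrib/fuzzystrmatch is the classic strlen1*strlen2 dynamic program,
and similarity() in contrib/pg_trgm scores trigram overlap on a 0..1
scale (both require installing the module):

    select levenshtein('pink floyd - dark side',
                       'dark side of the moon - pink floyd');
    select similarity('pink floyd - dark side',
                      'pink floyd dark side of the moon');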
My question is whether psql using libreadline.so has to be GPL, meaning
the psql source has to be included in a binary distribution.
If I understand what I have been told by lawyers, here's what using a GPL,
and NOT LGPL, library means:
According to RMS, the definition of a derivative work is
On Fri, May 19, 2006 at 07:04:47PM -0400, Bruce Momjian wrote:
libreadline is not a problem because you can distribute postgresql
compiled with readline and comply with all licences involved
simultaneously. It doesn't work with openssl because the licence
requires things that are
Andrew Dunstan [EMAIL PROTECTED] writes:
Mark Woodward wrote:
Again, there is so much code for MySQL, a MySQL emulation layer, MEL for
short, could allow plug-and-play compatibility for open source, and closed
source, applications that otherwise would force a PostgreSQL user to hold
his
Actually, I think it's a lot more accurate to compare PostgreSQL and
MySQL as FreeBSD vs Linux from about 5 years ago. Back then FreeBSD was
clearly superior from a technology standpoint, and clearly playing
second-fiddle when it came to users. And now, Linux is actually
technically superior
Mark Woodward wrote:
I have a side project that needs to intelligently know if two strings
are contextually similar. Think about how CDDB information is collected
and sorted. It isn't perfect, but there should be enough information to be
usable.
Think about this:
pink floyd - dark side
I have a side project that needs to intelligently know if two strings
are contextually similar.
The examples you gave seem heavy on word order and whitespace
consideration, before applying any algorithms. Here's a quick perl version
that