Re: [GENERAL] PostgreSQL slammed by PHP creator

2006-09-17 Thread Tim Allen
As a comment on his points though, isn't it better to use the right tool 
for the job rather than try to force one tool to do everything?  I've 
never used a hammer to put in a screw. 


You haven't?

All right-minded people know that PostgreSQL is manifestly superior
and that people who want to use MySQL are scurvy dogs.


Ron Johnson, Jr.


I fear you've jumped the gun, sir - Talk Like A Pirate Day isn't until
tomorrow :-).


Tim

--
-----------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] On DNS for postgresql.org

2006-09-06 Thread Tim Allen

Steve Atkins wrote:


On Sep 6, 2006, at 5:29 PM, Joshua D. Drake wrote:

When you commit to providing services to this community, it is
absolutely the business of that community how the infrastructure is
managed.


It is the business of the community that the services provided are  
adequate and stable, certainly. That's become rather obvious recently.


Irrelevant details of the server configuration that do not directly  
affect those services aren't really something to gossip about on a  
public mailing list, though.


The two are quite different things.


Andrew was apparently suggesting that the configuration issue he 
mentioned is not irrelevant, and may be the actual cause of the 
problems. Since he works for a domain registrar, I'm prepared to assume, 
at least as a working hypothesis, that he knows what he's talking about. 
At the least, I suggest it's wise to consider his opinion rather than 
tell him it's not his business.


Tim

--
-------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] Two billion records ok?

2006-09-04 Thread Tim Allen

Nick Bower wrote:

Thanks - but what do you call big?


How many stars do you think there are? :-)


My application is satellite data btw so the reference could be useful.

On Tue, 5 Sep 2006 01:40 pm, Oleg Bartunov wrote:


Nick,

if you need very fast spatial queries (spherical) you may use our
Q3C module for PostgreSQL (q3c.sf.net). We use it for providing access
to very big astronomical catalogs.

Oleg


Tim

--
---
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] Postgresql mentioned on Newsforge MySQL article

2006-08-30 Thread Tim Allen

Martijn van Oosterhout wrote:

On Wed, Aug 30, 2006 at 01:54:56PM -0400, Robert Treat wrote:

Seems they are fast where it counts... namely getting a team of developers on 
top of the contest so they come out shining.  Too bad people in this 
community don't see the value of paying people to do advocacy related 
development. 



I don't remember this ever being mentioned on the lists? I never heard
of it. Maybe if it had been mentioned here it would've got more
attention?

It's true there isn't a group here looking for competitions to enter...

Have a nice day,


Speaking for myself, I'm _delighted_ that the core postgres developers
spend their time improving postgres rather than on random beauty
contests.


Tim

--
-------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] How do i store arbitrary questions and answers in SQL?

2006-08-23 Thread Tim Allen

lifeisgood wrote:

The problem : to store, in SQL/RDBMS, an arbitrary set of questions and
their answers, where the questions are text (obviously) but the answers
can be dates, text, integers, money etc.
think of it as a big questionnaire where at design time we have no idea
what the questions will be.

My usual solution to this problem is to store everything in varchar and
flag the type, converting
as I extract data.  It is not a desirable solution.

i.e.
CREATE TABLE Qu (ID INT, Question VARCHAR(64))

CREATE TABLE Answer(ID INT, questionID INT, answer_text VARCHAR(255),
datatype INT)


Use text, not varchar - varchar is just a waste of time and space. This 
approach isn't all that bad, since if you're using libpq or similar to 
read and write the values (without using a binary cursor), you're using 
a text representation anyway. Just use the same text representation of 
your data that the db interface is going to use.



Are there any other solutions out there?


The other solution is to partition on data type: make one table per
data type, with the value column using the correct native type. Your
code that reads and writes values then needs to be smart enough to use
the correct table depending on the data type.
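
Roughly, something along these lines (all the names here are invented
for the example, adjust to taste):

CREATE TABLE question (id INT PRIMARY KEY, question TEXT);

CREATE TABLE answer_int  (question_id INT REFERENCES question(id), answer INT);
CREATE TABLE answer_text (question_id INT REFERENCES question(id), answer TEXT);
CREATE TABLE answer_date (question_id INT REFERENCES question(id), answer DATE);

A datatype flag on the question (or a lookup in your code) tells you
which answer table to read or write for a given question.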


We've used both approaches, and they both work fine for us. The
text-for-everything approach has the advantage of simpler interface
code, whereas partitioning on data type gives you better query planning
and better performance when you have a lot of data (i.e. millions of
rows and up).



I can think of several ideas but they all fall short in some way

0. (current one)  I have to cast any search term to string first but
after that the SQL performs as one expects.


No, you don't have to "cast" things to text - quite the reverse; if you 
are querying on the contents of your value (answer) column and your 
criterion depends on the correct type (eg find questions with integer 
answers greater than 42) then you have to cast the text to integer in 
the query.
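
For example, with the tables from your message (assuming, purely for
the sake of the example, that datatype = 1 is your flag for integer
answers):

SELECT q.question, a.answer_text
  FROM qu q JOIN answer a ON a.questionid = q.id
 WHERE a.datatype = 1
   AND a.answer_text::integer > 42;

(One caveat: the planner makes no promise about which condition it
tests first, so if a non-numeric row slips past the datatype flag the
cast will error out; a CASE expression or a subquery gets around that.)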



1. Store the serialised object in binary form. (How does one search
this? Totally dependent on choice of middleware language.)


I'd avoid this one - for the reason you've mentioned, among others. Not 
sure what the middleware language has to do with it, though - if your 
choice of middleware makes things harder then it's the wrong choice. If 
middleware doesn't make things easier, then what use is it?



2. Store different types in different columns:
table answer (questionID, ans_text VARCHAR, ans_money MONEY, ans_int INT)
But this makes searching through SQL even harder than casting, as in
each query I must know what answer to expect.


Definitely avoid this one. Lots of wasted space, as well as extra 
software complexity, with little payoff.



3. Different answer tables each with answer types - same problem as 2
but even harder.


This is the other option I mentioned above. It's not hard at all.


4. I suspect pl/python might be useful, but I cannot see how at the
moment...


I can't see it either :). Decide what you want to do first, _then_ work 
out how to implement it.


Tim

--
---
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] TSearch vs. Homebrew

2006-06-27 Thread Tim Allen

Oleg Bartunov wrote:

On Tue, 27 Jun 2006, Hannes Dorbath wrote:


http://www.symfony-project.com/askeet/21

How does this dead simple approach compare to TSearch performance / 
scaling wise?


Sorry, I was a bit off-topic. Lucene scales like any inverted-index
based engine. In 8.2 tsearch2 also has inverted index support, but we
follow the relational approach and can't provide the whole set of
optimisations that file-based engines can.


If you read further down the article, you see that the fellow is not
actually using Lucene, but instead setting up his own text indexing,
i.e. identifying words, stemming, making a table which records which
words appear in which record, etc. Basically he seems to have
re-implemented tsearch2 in a mixture of PHP and MySQL. I can't imagine
how well (or badly...) that must perform for a large amount of data.
The comments at the end are amusing; one fellow, quite touching in his
naivety, wonders how much effort it would be to turn the framework as
described into an open source competitor for Google.


My best guess as an answer to the original question is that this 
approach would not scale very well at all, and certainly not as well as 
tsearch2 (even though tsearch2 doesn't scale quite as well as one might 
hope either). And for that matter, it's not all that simple - it seems 
to be of a similar order of complexity to tsearch2. However, my
performance estimate is not based on any actual experience, so I could
be wrong.


Tim

--
-------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] Return the primary key of a newly inserted row?

2006-06-21 Thread Tim Allen

John Tregea wrote:

Greeting again,

I am writing records to PostgreSQL from an IDE called Revolution. At the
time I perform the INSERT command I need to retrieve the value of the 
serial_id column from the newly created row.


Is it possible to have a specified column value returned after the 
INSERT (rather than the number of rows affected) ?


That would save me doing a SELECT statement after every INSERT.

Please excuse the terminology if it is not SQL'esque, but I hope you 
know what I am getting at.


Thanks in advance

John Tregea


It's not supported now; however, it has been discussed several times,
and there is a TODO entry for it at


http://www.postgresql.org/docs/faqs.TODO.html

using syntax along the lines of INSERT ... RETURNING ...

Search for the word "returning" in the todo list and you'll find the 
entry. Your options include waiting for someone to make it happen (no 
telling how long that will be), or helping to make it happen (for which 
we would all thank you :-) ). In the meantime you'll have to work around 
it, as you suggested.
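
The workaround most people use is to read the sequence behind the
serial column in the same session, immediately after the INSERT.
Roughly (assuming your table is called foo and the serial column is
serial_id, so the sequence gets the default name foo_serial_id_seq):

INSERT INTO foo (some_column) VALUES ('some value');
SELECT currval('foo_serial_id_seq');  -- the id just assigned in this session

currval() is local to your session, so concurrent inserts by other
clients won't confuse it. Once the TODO item is done, you would write
something like INSERT ... RETURNING serial_id instead and skip the
second round trip.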


Tim

--
-------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] new FAQ entry

2006-06-08 Thread Tim Allen

Tino Wildenhain wrote:

Tim Allen schrieb:


[snip]

The way to tell PostgreSQL what encoding you want to use is by use of 
the client_encoding GUC variable, eg


set client_encoding to 'LATIN1';



If you can't educate your client application to set this option on connect,
you can set this per user:

ALTER USER clientappuser SET client_encoding to 'what your app uses';


Good point. I guess it's worth mentioning this and some of the other 
ways you can set the encoding. Though there are actually quite a few 
different ways - that might make the entry overly long. Opinions?
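
For the record, the ones that come to mind (someuser and somedb are
placeholders; check the docs for exact syntax):

set client_encoding to 'LATIN1';                       -- per session
ALTER USER someuser SET client_encoding = 'LATIN1';    -- per user, as above
ALTER DATABASE somedb SET client_encoding = 'LATIN1';  -- per database
-- plus the PGCLIENTENCODING environment variable for libpq clients,
-- \encoding within psql, and client_encoding in postgresql.conf.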


Tim

--
-----------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



new FAQ entry (was:Re: [GENERAL] UTF8 problem)

2006-06-08 Thread Tim Allen

Matthew T. O'Connor wrote:
Well, to answer my own question, I hacked the source code of DBMail and
had it set the client encoding to LATIN1 immediately after database
connect; this seems to have fixed the problem.


Sorry for the noise,

Matt


I've seen this sort of problem asked about in the mailing lists often 
enough to think it merits a FAQ entry, so how about this text:



Q. Why do I have problems inserting text into my database, with error 
messages like


ERROR:  invalid byte sequence for encoding "UTF8": 0xe1202c ?

A. Almost certainly that byte sequence really is invalid for that
encoding. You are probably seeing the error because you are providing
text in some other encoding. You and the database need to agree on what
encoding you're using. PostgreSQL is fairly good at working with you,
converting to and from whatever encoding you want to use, but you need
to tell it what that encoding is, and then stick to it consistently.


If you don't set the client encoding, then PostgreSQL will use the
default encoding for the database, which in modern times is often UTF8
(aka UNICODE) and is set at database creation time. However, many
client apps still use other encodings (e.g. Latin1, aka ISO-8859-1), so
you need to either educate the client app to use UTF8, or get it to
inform PostgreSQL what other encoding to use.


The way to tell PostgreSQL what encoding you want to use is by use of 
the client_encoding GUC variable, eg


set client_encoding to 'LATIN1';

One reason you may be seeing this problem now, after upgrading your 
version of PostgreSQL, is that recent versions have tighter validation 
of encoded text. Previously you may not have been conscious of what 
encoding you were actually using, especially if you're a speaker of a 
Western European language, and may have gotten away with writing 
incorrectly-encoded text without the database complaining. Now is the 
time to start getting it right.


One thing to be wary of is the "SQL_ASCII" encoding. It is commonly,
and incorrectly, believed to be either some variant of latin1 or pure
7-bit ASCII. It is neither; it is a completely unchecked encoding that
means whatever you want it to mean. That makes it a poor encoding to
use in practice, as it is prone to letting a mixture of different
encodings accumulate in the same set of data, which will cause you
headaches when you try to convert the whole lot to some consistent
encoding in the future.


See section 21.2 of the documentation for more complete information.


Tim

--
---
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] What is the point of create or replace view command

2006-06-04 Thread Tim Allen

Chris Velevitch wrote:

Yes, please, what is the point of CREATE OR REPLACE FUNCTION when it
doesn't let you change the function's parameter list or result type?


Being able to change the _logic_ of a function is certainly useful. 
Well, I find it so, at least - YMMV.
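
A trivial made-up example (order_lines and its columns are invented):
you can swap the body out from under everything that calls the
function, without dropping it or touching permissions:

CREATE OR REPLACE FUNCTION order_total(integer) RETURNS numeric AS $$
    SELECT sum(price * quantity) FROM order_lines WHERE order_id = $1;
$$ LANGUAGE sql;

-- later: same name, same argument types, same result type, new logic
CREATE OR REPLACE FUNCTION order_total(integer) RETURNS numeric AS $$
    SELECT sum(price * quantity * (1 - discount))
      FROM order_lines WHERE order_id = $1;
$$ LANGUAGE sql;

Change the argument list or the return type, though, and you're back to
DROP FUNCTION followed by CREATE FUNCTION (or you end up with a second
overload), which I gather is what Chris is complaining about.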


Tim

--
-------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] 8.1 on gentoo

2006-05-25 Thread Tim Allen

Holger Hoffstaette wrote:

On Thu, 25 May 2006 08:55:51 +0200, Sim Zacks wrote:


Does anybody know when postgresql 8.1 will be considered stable on gentoo
for x86?


No, maybe ask in gentoo-users or -dev? Anyway just because it's not marked
"stable" does not mean it isn't. It's very unfortunate that distributions
have adopted these terms because more often than not they have absolutely
nothing to do with the *runtime* stability of the software.


Presumably the stable status is more to do with all the packages that 
depend on postgres, rather than postgres itself. I guess the reason it 
hasn't been marked stable yet is just lack of round tuits for someone to 
test all the other packages.



Another question would be does anybody know of any instability of
postgresql 8.1 unstable for gentoo x86?


~x86 works just fine. How could it not? It's just the same as compiling
yourself - which is considered to be acceptable, if not good, practice on
all other platforms and distributions too. If a particular build would be
totally broken/untested/known to eat your soul it would be hardmasked.

-h


Tim

--
-------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] another seemingly simple encoding question

2006-05-21 Thread Tim Allen
kmh496 wrote:
> this is a forward of my problem from April.
> I have this time gone all the way and re-inited a DB from scratch,
> created a new database, documented the import procedure, set the locale
> to match but I am still having problems.  
> For example, look at this match count~
> mod=# select count(*) from korean_english;
>  count
> -------
>  205323
> (1 row)
> 
> mod=# 
> mod=# select count(*) from korean_english where word='안녕';
>  count
> -------
>  40332
> (1 row)

You seem to be implying there is something wrong with the above results,
but you haven't given us enough information to have any idea why that's
a problem. AFAICT, it's perfectly plausible that 40332 out of the 205323
rows in that table have that particular value of the word column. If
that's not correct, you need to tell us how, otherwise no-one can help you.

One clue is that you appear to have your mail client set to use EUC-KR
encoding, not UTF-8. Perhaps whatever client you're using to put data
into your database is using that encoding too?
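
If you want to see what is actually stored, something like this from
psql might help (just my guess at useful diagnostics):

show client_encoding;
show server_encoding;
select word, length(word), octet_length(word) from korean_english limit 5;
-- for correctly stored Korean text, octet_length() should exceed length()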

Tim

-- 
---
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] Contributing code

2006-05-17 Thread Tim Allen

Don Y wrote:


So, I'll deploy them and get feedback on which features I
may need to add (some of the data types are probably a bit
too exotic for most users).  I figure I can always contribute
them just before a release...


Just before a release would actually be a bad time to contribute the 
code, if you want to get it accepted into PostgreSQL, as the people who 
would be competent to review and potentially accept it all tend to be 
very busy just before a release. There have been disappointed people in 
the past for that very reason. The earlier you publish your code, the 
better.


Tim

--
-------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] Announce: GPL Framework centered on Postgres

2006-05-16 Thread Tim Allen

Kenneth Downs wrote:

GPL is to spread it as far and wide as possible as fast as possible.


LGPL?

My concern would be, I can't use this toolkit for a closed source 
application if it is GPL.


That may be your intent (which I actually don't have a business 
problem with), I was just curious as to your decision.


If it turns out that nobody can release a closed source app, I will 
definitely reconsider and look again at LGPL, but I am not convinced you 
cannot do so.


If you seek to provide a closed source app that is built upon Andromeda, 
you are required to provide the source code to Andromeda itself.  
However, your app is not a derivative work in the strict sense because 
your code is not mixed in with mine in any sense.  You never modify a 
file, and your files and mine are actually in separate directories.


I greatly appreciate your asking the question though because I'd like to 
make sure that people feel safe with the project.  My goal is to provide 
the freedoms typically associated with the "plain old GPL", and 
certainly not to restrict the creation of closed apps.  I just don't 
want anybody closing *my* app.


Then it sounds like LGPL is exactly what you want. That forbids people 
closing your code, but allows linking it to closed apps. Cf. Tom's
comments: it's quite difficult for anyone to release code that depends
on GPL'd code without incurring the terms of the GPL for their code (and 
that is clearly the way the FSF want it to be).


But as Joshua was implying, a common business model is to release some 
code under GPL, which means it can be used only for GPL'd apps, and then 
also be willing to sell other sorts of licences for it to be used with 
commercial apps. If that's the sort of business model you have in mind, 
then GPL is probably what you want.


Tim

--
---
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] [ANNOUNCE] == PostgreSQL Weekly News - December 05 2005 ==

2005-12-05 Thread Tim Allen

David Fetter wrote:

== PostgreSQL Weekly News - December 05 2005 ==


Chris Campbell of Big Nerd Ranch shows how to lower a query's priority.
http://weblog.bignerdranch.com/?p=11



I wouldn't normally bother nitpicking news items, but since this is
presumably being held up as expert advice and an example of something
particularly clever to do with your postgres database, we should
suggest that Chris browse the pg mailing list archives. If he does, he
will find numerous discussions that should be sufficient to persuade
him that lowering the priority of a postgres process is not an
effective way to lower the priority of a query relative to other
queries. You're pretty much guaranteed to get priority inversion.


Pardon me for being grumpy, getting the kids to go to bed has been 
harder than usual tonight :-).


Tim




Re: [GENERAL] chosing a database name

2005-07-13 Thread Tim Allen

Karsten Hilbert wrote:


My main concern, however, was whether the *approach* is
sound, eg using a separate database name per release or IOW
version. One way would be to use the database name "gnumed"
regardless of release, another way would be to use
"gnumedX_Y" for release X.Y. I wonder whether the latter
approach has any drawbacks people might think of regarding
release management etc.


Others have given you some of the advice I would have given. One more 
suggestion - does your database fit in just one "schema" in the gnumed 
database? If so, then an easy option would be to have all versions of 
your software use the same database, but each one has its own unique 
schema within that database. This gives you the same ease of reversion 
to a previous version, and gives you the added advantage that it's much 
easier to write the scripts that update from one version to another, 
since they can just operate within the same database, eg selecting data 
from one schema and inserting it into another. Your users would then 
drop old schemas on whatever basis they feel comfortable with.
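
In concrete terms it might look like this (schema and table names are
invented for the example):

CREATE SCHEMA gnumed_1_4;
-- ... create release 1.4's tables inside it ...

-- each 1.4 client starts its session with
SET search_path TO gnumed_1_4;

-- and a 1.4 -> 1.5 upgrade script can copy data straight across, e.g.
INSERT INTO gnumed_1_5.patient SELECT * FROM gnumed_1_4.patient;
-- (assuming the patient table has the same columns in both releases)

Dropping an obsolete release is then just DROP SCHEMA gnumed_1_4 CASCADE;.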


What we do in practice is upgrade the database in situ when we upgrade 
the software. We use a schema version number to automatically determine 
what scripts to run - similar, I think, to what Rick Levine was
describing. However, we have no easy way to revert to an old version if 
required - so your plan will be better than ours in that regard.


Tim

--
-------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] International Characters

2005-05-24 Thread Tim Allen

Jamie Deppeler wrote:

Hi,

We are having an issue importing international characters into 
postgresql 8.


example Renée

The error message we get is ERROR:  invalid byte sequence for encoding 
"UNICODE": 0xe92044


Any help would be gratefully received.

Jamie


Judging by the encoding of your email message, you're in the habit of 
using iso-8859-1 (also known as latin1), rather than UTF-8. The error 
message indicates that the database thinks you've asked it to use UTF-8, 
and the latin1 characters you're supplying it are correctly identified 
as not being legal UTF-8 characters.


Solutions are to either use UTF-8 as your encoding, or to tell the 
database what encoding you really want to use, eg by


set client_encoding to latin1;

to set the encoding that a particular session is going to use.

Tim

--
-----------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/



Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-08 Thread Tim Allen
Tom Lane wrote:

Tope Akinniyi <[EMAIL PROTECTED]> writes:

I am not holding anybody responsible, but I think we need to do a
massive re-orientation of the community not to carry the
Linux-Windows game too far.

This is a troll, isn't it?

regards, tom lane


Perhaps it's a 419 :-). But if so I can't see the catch yet - must be
very subtle.

--
-----------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/
  http://www4.tpg.com.au/users/rita_tim/


Re: [GENERAL] Version Control Software for Database Objects

2005-01-13 Thread Tim Allen
Mark Dexter wrote:
We maintain multiple versions of our application's database and we are 
looking for version control software to help us automate this.  
Specifically, we would like to have a program that automatically tracks 
all changes to the database (tables, views, functions, etc.) and assists 
with updating customers' databases from one version to the next.

Does anyone know of such a program that works with PostgreSQL?  Thanks 
for your help.
Assuming you mean version control of the _schema_, rather than the data 
therein, then no, I don't know of any such program available. However, 
it's not too hard to do it yourself, which is what we've done. Create a 
table which has only one row, containing a schema version number, and 
build some constant into your application which has the same number - 
the application should check the database schema number on startup and 
complain loudly if the two don't match. Every time you release a new 
version of the application that has a schema change, then both increment 
the schema number and write a script that will perform the schema update 
from version n to version n + 1. If you name your scripts in a 
consistent way, it's not hard to then write a script that compares the 
current schema version at a site with that expected by the new 
application version, and incrementally runs each update script.
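
The machinery involved is pretty small. As a sketch (the names are just
examples):

CREATE TABLE schema_version (version INTEGER NOT NULL);
INSERT INTO schema_version VALUES (1);

-- each release that changes the schema ships a script named for the
-- step it performs, e.g. upgrade_0001_to_0002.sql, whose final statement is
UPDATE schema_version SET version = 2;

The application checks SELECT version FROM schema_version at startup
against its compiled-in expected number, and the upgrade driver keeps
applying upgrade_N_to_N+1 scripts in order until the stored number
catches up.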

Tim
--
-------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/
  http://www4.tpg.com.au/users/rita_tim/


Re: [GENERAL] UTF-8 and LIKE vs =

2004-08-23 Thread Tim Allen
Tom Lane wrote:
David Wheeler <[EMAIL PROTECTED]> writes:
bric=# reindex index udx_keyword__name;
REINDEX
bric=# select * from keyword where name = '=BA=CF=C7=D1=C0=C7';
  id  |  name  | screen_name | sort_name | active
------+--------+-------------+-----------+--------
 1218 | =B1=B9=B9=E6=BA=F1 | =B1=B9=B9=E6=BA=F1 | =B1=B9=B9=E6=BA=F1 |      1
(1 row)

Hmm.  I tried putting your string into a UNICODE database and I got
ERROR:  invalid byte sequence for encoding "UNICODE": 0xc7
So there's something funny happening here.  What is your client_encoding
setting?
			regards, tom lane
One possible clue: your original post in this thread was using encoding 
euc-kr, not unicode (utf-8). If your mailer was set to use that 
encoding, perhaps your other client software is/was also?

Just a guess...
Tim


[GENERAL] More on float infinities

2000-08-13 Thread Tim Allen

I've found the problem with inserting pseudo-infinite values into database
tables, and can now insert the infinity-ish values OK. The problem was
actually that pg_dump was losing precision when it dumped out the old
values. The numbers were being rounded up at the 13th or so decimal place,
causing an overflow.

So my question is now different: why does pg_dump lose precision, and can
I somehow persuade it not to?

And, btw, it'd be nice to have some assurance that support for these
extreme case values really is supported officially, and isn't a bug that
will be "fixed" in some future version :-).

Tim

--
-----------
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/
  http://www4.tpg.com.au/users/rita_tim/




[GENERAL] +/- Inf for float8's

2000-08-13 Thread Tim Allen

I'm just trying out PG7.0.2, with a view to upgrading from 6.5.3, and I've
found one quirk a little troublesome. Not sure whether I'll get any
sympathy, but I shall ask anyway :).

We find it convenient to be able to store +/- infinity for float8 values
in some database tables. With Postgres 6.5.3, we were able to get away
with this by using the values -1.79769313486232e+308 for -Inf and
1.79769313486232e+308 for Inf. This is probably not very portable, but
anyway, it worked fine for us, on both x86 Linux and SGI IRIX. One thing,
though, to get these numbers past the interface we had to put them in
quotes. It seemed as though there was one level of parsing that didn't
like these particular numbers, and one level of parsing that coped OK, and
using quotes got it past the first level.
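
For what it's worth, this is literally all we were doing (the table and
column names are invented for the example):

CREATE TABLE samples (reading float8);
-- 6.5.3 accepted these only when quoted, i.e. fed through the string parser:
INSERT INTO samples VALUES ('1.79769313486232e+308');   -- our stand-in for +Inf
INSERT INTO samples VALUES ('-1.79769313486232e+308');  -- and for -Inf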

Now, however (unfortunately for us), this inconsistency in the interface
has been "fixed", and now we can't get this past the interface, either
quoted or not. Fixing inconsistencies is, of course, in general, a good
thing, which is why I'm not confident of getting much sympathy :).

So, any suggestions as to how we can store +/- infinity as a valid float8
value in a database table?

I notice, btw, that 'NaN' is accepted as a valid float8. Is there any
particular reason why something similar, e.g. '-Inf' and 'Inf', doesn't
also exist? I've just discovered that there is a special value
'Infinity', which seems to be recognised, except you can't insert it
into a table because it reports an overflow error. Getting warm, it
seems, but not there yet. And there doesn't seem to be a negative
equivalent.

Tim

--
---
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/
  http://www4.tpg.com.au/users/rita_tim/