Re: [GENERAL] duplicate key errors when restoring 8.4.0 database dump into 9.1.2

2011-12-30 Thread Culley Harrelson
This is just the first of many duplicate key errors that cause primary key
creation statements to fail on other tables.  I grepped for the key but it
is hard to tell where the problem is with 888 matches.

I will try pg_dump --inserts.  It is a 17G file with copy statements so...
this should be interesting.  And take a long time.
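Before waiting on a full --inserts dump, the duplicates can also be located in the restored table itself; a sketch against the ht_user table and user_id column from the error message (swap in each failing table as needed):

```sql
-- List key values that were restored more than once.
SELECT user_id, count(*) AS copies
FROM ht_user
GROUP BY user_id
HAVING count(*) > 1
ORDER BY copies DESC;
```

Whatever comes back is a much shorter list of keys to grep for in the dump file.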



On Fri, Dec 30, 2011 at 8:20 AM, Bèrto ëd Sèra wrote:

> Hi!
>
>
>> Are you sure there is just one record?
>
>
> What happens if you grep the backup file for "653009"?
>
> If you do have more than one such record, the quickest way out is to
> manually clean it.
>
> Bèrto
>
> --
> ==
> If Pac-Man had affected us as kids, we'd all be running around in a
> darkened room munching pills and listening to repetitive music.
>


Re: [GENERAL] duplicate key errors when restoring 8.4.0 database dump into 9.1.2

2011-12-30 Thread Culley Harrelson
There is not any data in the new database.  I have dropped the database,
created the database and then piped in the backup every time.


On Fri, Dec 30, 2011 at 8:06 AM, Adrian Klaver wrote:

> On Friday, December 30, 2011 7:49:31 am Culley Harrelson wrote:
> > They are just your standard sql errors seen in the output of psql mydb <
> > backup.sql
> >
> >
> > ALTER TABLE
> > ERROR:  could not create unique index "ht_user_pkey"
> > DETAIL:  Key (user_id)=(653009) is duplicated.
> >
> > There is a unique index on user_id in the 8.4.0 system and, of course,
> > only one record for 653009.
> >
>
> http://www.postgresql.org/docs/9.1/interactive/app-pgdump.html
>
> When doing the pg_dump of the 8.4 database you might want to use the  -c
> option
>
> "
> -c
> --clean
>
>    Output commands to clean (drop) database objects prior to outputting the
> commands for creating them. (Restore might generate some harmless errors.)
>
>    This option is only meaningful for the plain-text format. For the
> archive formats, you can specify the option when you call pg_restore.
> "
>
> My suspicion is that there is already data in the tables of the 9.1 server
> from
> previous restore attempts.
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>
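With the plain-text dump used in this thread, a restore cycle along these lines (mydb is a placeholder database name) rules out leftovers from earlier attempts:

```shell
# Dump from the 8.4 server; -c emits DROP statements before each CREATE.
pg_dump -c mydb > backup.sql

# On the 9.1 server: start from an empty database, then restore.
dropdb mydb
createdb mydb
psql mydb < backup.sql
```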


Re: [GENERAL] duplicate key errors when restoring 8.4.0 database dump into 9.1.2

2011-12-30 Thread Culley Harrelson
They are just your standard sql errors seen in the output of psql mydb <
backup.sql


ALTER TABLE
ERROR:  could not create unique index "ht_user_pkey"
DETAIL:  Key (user_id)=(653009) is duplicated.

There is a unique index on user_id in the 8.4.0 system and, of course,
only one record for 653009.




On Fri, Dec 30, 2011 at 6:51 AM, Adrian Klaver wrote:

> On Friday, December 30, 2011 6:32:56 am Culley Harrelson wrote:
> > Hello I am trying to migrate a database from 8.4.0 to 9.1.2 on a test
> > server before updating the production server.  When piping the dump file
> > created with pg_dump in psql I am getting duplicate key errors and the
> > primary keys on several large tables do not get created.  I have read all
> > the migration notes and do not see anything specific other than a pg_dump
> > restore is required.  Any clues for me?
>
> Was there data already in the 9.1 database?
> Post some of the error messages.
>
> >
> > Thanks,
> >
> > culley
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


[GENERAL] duplicate key errors when restoring 8.4.0 database dump into 9.1.2

2011-12-30 Thread Culley Harrelson
Hello I am trying to migrate a database from 8.4.0 to 9.1.2 on a test
server before updating the production server.  When piping the dump file
created with pg_dump in psql I am getting duplicate key errors and the
primary keys on several large tables do not get created.  I have read all
the migration notes and do not see anything specific other than a pg_dump
restore is required.  Any clues for me?

Thanks,

culley


Re: [GENERAL] design help for performance

2011-12-21 Thread Culley Harrelson
Thank you so much everyone!  Introducing table C was indeed my next step
but I was unsure if I was going to be just moving the locking problems from
A to C.  Locking on C is preferable to locking on A but it doesn't really
solve the problem.  It sounds like I should expect less locking on C
because it doesn't relate to B.  Thanks again, I am going to give it a
try.

I am not going to take it to the delta solution for now.
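One way to keep a count-only table C current is a pair of row-level triggers on B; a sketch with placeholder names (table_b, table_c, table_a_id), not the real schema:

```sql
-- Maintain table_c.table_b_rowcount from inserts/deletes on table_b.
CREATE OR REPLACE FUNCTION maintain_b_rowcount() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE table_c SET table_b_rowcount = table_b_rowcount + 1
         WHERE table_a_id = NEW.table_a_id;
    ELSE  -- DELETE
        UPDATE table_c SET table_b_rowcount = table_b_rowcount - 1
         WHERE table_a_id = OLD.table_a_id;
    END IF;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
' LANGUAGE plpgsql;

CREATE TRIGGER table_b_rowcount_trg
    AFTER INSERT OR DELETE ON table_b
    FOR EACH ROW EXECUTE PROCEDURE maintain_b_rowcount();
```

Updates on C still serialize concurrent writers touching the same parent row, which is exactly the locking trade-off being weighed here.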



On Wed, Dec 21, 2011 at 1:46 AM, Marc Mamin  wrote:

>
> > -Original Message-
> > From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> > ow...@postgresql.org] On Behalf Of Alban Hertroys
> > Sent: Mittwoch, 21. Dezember 2011 08:53
> > To: Culley Harrelson
> > Cc: pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] design help for performance
> >
> > On 21 Dec 2011, at 24:56, Culley Harrelson wrote:
> >
> > > Several years ago I added table_b_rowcount to table A in order to
> > minimize queries on table B.  And now, as the application has grown, I
> > am starting to have locking problems on table A.  Any change to table
> > B requires that table_b_rowcount be updated on table A...  The
> > application has outgrown this solution.
> >
> >
> > When you update rowcount_b in table A, that locks the row in A of
> > course, but there's more going on. Because a new version of that row
> > gets created, the references from B to A also need updating to that new
> > version (creating new versions of rows in B as well). I think that
> > causes a little bit more locking than originally anticipated - it may
> > even be the cause of your locking problem.
> >
> > Instead, if you'd create a new table C that only holds the rowcount_b
> > and a reference to A (in a 1:1 relationship), most of those problems go
> > away. It does add an extra foreign key reference to table A though,
> > which means it will weigh down updates and deletes there some more.
> >
> > CREATE TABLE C (
> >   table_a_id int PRIMARY KEY
> >REFERENCES table_a (id) ON UPDATE CASCADE ON DELETE
> > CASCADE,
> >   table_b_rowcount int NOT NULL DEFAULT 0
> > );
> >
> > Yes, those cascades are on purpose - the data in C is useless without
> > the accompanying record in A. Also, the PK makes sure it stays a 1:1
> > relationship.
> >
> > Alban Hertroys
>
> Hello,
>
> it may help to combine Alban solution with yours but at the cost of a
> higher complexity:
>
> In table C, instead use a column table_b_delta_rowcount (+1/-1,
> smallint) and only use INSERTs to maintain it, no UPDATEs (hence with a
> non-unique index on id).
>
> Then regularly flush table C's content to table A, so that C only holds
> recent changes.
> Your query should then query both tables:
>
> SELECT A.table_b_rowcount + coalesce(sum(C.table_b_delta_rowcount), 0)
> FROM A LEFT OUTER JOIN C ON (A.id = C.table_a_id)
> WHERE A.id = xxx
> GROUP BY A.table_b_rowcount
>
> Marc Mamin
>
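The periodic flush Marc describes might look roughly like this, in one transaction so readers always see a consistent total (table and column names are guesses at the real schema):

```sql
BEGIN;
-- Fold the accumulated deltas into the base counts on table A...
UPDATE table_a AS A
   SET table_b_rowcount = A.table_b_rowcount + d.delta_sum
  FROM (SELECT table_a_id, sum(table_b_delta_rowcount) AS delta_sum
          FROM table_c
         GROUP BY table_a_id) AS d
 WHERE A.id = d.table_a_id;
-- ...then discard the deltas that were just applied.
DELETE FROM table_c;
COMMIT;
```

Under concurrent inserts into C this needs more care (e.g. deleting only the rows that were actually summed), so treat it as a starting point rather than a finished job.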


Re: [GENERAL] design help for performance

2011-12-20 Thread Culley Harrelson
Thanks David.  That was my original solution and it began to bog down the
website so I resorted to denormalization 3 years ago...  This is an
extremely high-volume website.


On Tue, Dec 20, 2011 at 4:27 PM, David Johnston  wrote:

> Continued top-posting to remain consistent...
>
> It isn't that the application has outgrown the solution but rather the
> solution was never correct in the first place.  You attempted premature
> optimization and are getting burned because of it.  The reference solution
> is simply:
>
> SELECT a.*, COUNT(*) AS b_count
> FROM a
> JOIN b USING (a_id)
> GROUP BY a.* (expanded * as needed)
>
> Make sure table b has an index on the a_id column.
>
> This is the reference because you never want to introduce computed fields
> that keep track of other tables WITHOUT some kind of proof that the
> maintenance nightmare/overhead you are incurring is more than offset by the
> savings during usage.
>
> Any further optimization requires two things:
>
> Knowledge of the usage patterns of the affected data
>
> Testing to prove that the alternative solutions out-perform the reference
> solution
>
> Since you already have an existing query you should implement the
> reference solution above and then test and see whether it performs better
> or worse than your current solution.  If it indeed performs better then
> move to it; and if it is still not good enough then you need to provide
> more information about what kinds of queries are hitting A and B as well as
> Insert/Delete patterns on Table B.
>
> David J.
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] *On Behalf Of *Misa Simic
> *Sent:* Tuesday, December 20, 2011 7:13 PM
> *To:* Culley Harrelson; pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] design help for performance
>
>
> Hi Culley,
>
> Have you tried to create fk together with index on fk column on table B?
>
> What are results? Would be good if you could send the query and explain
> analyze...
>
> Sent from my Windows Phone
> --
>
> *From: *Culley Harrelson
> *Sent: *21 December 2011 00:57
> *To: *pgsql-general@postgresql.org
> *Subject: *[GENERAL] design help for performance
>
> I am bumping into some performance issues and am seeking help.
>
> I have two tables A and B in a one (A) to many (B) relationship.  There
> are 1.4 million records in table A and 44 million records in table B.  In
> my web application any request for a record from table A is also going to
> need a count of associated records in table B.  Several years ago I added
> table_b_rowcount to table A in order to minimize queries on table B.  And
> now, as the application has grown, I am starting to having locking problems
> on table A.  Any change to table B requires the that table_b_rowcount be
> updated on table A...  The application has outgrown this solution.
>
> So... is there a common solution to this problem?
>
> culley
>


[GENERAL] design help for performance

2011-12-20 Thread Culley Harrelson
I am bumping into some performance issues and am seeking help.

I have two tables A and B in a one (A) to many (B) relationship.  There are
1.4 million records in table A and 44 million records in table B.  In my
web application any request for a record from table A is also going to need
a count of associated records in table B.  Several years ago I added
table_b_rowcount to table A in order to minimize queries on table B.  And
now, as the application has grown, I am starting to having locking problems
on table A.  Any change to table B requires the that table_b_rowcount be
updated on table A...  The application has outgrown this solution.

So... is there a common solution to this problem?

culley


Re: [GENERAL] does "select count(*) from mytable" always do a seq scan?

2005-01-07 Thread Culley Harrelson
On Fri, 07 Jan 2005 16:17:16 +0100, Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> 
> How do you think an index would help if you do an unconditional
> count(*)?

I really don't know.  I don't know the inner workings of
database internals but I would guess that there would be some
optimized way of counting the nodes in an index tree that would be
faster than sequentially going through a table.  I suppose there is
no free lunch.

One row, two rows, three rows, four rows, five rows 

culley
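The reason an index cannot help here is that index entries under PostgreSQL's MVCC carry no visibility information, so every row still has to be checked against the heap. When an estimate is good enough, the planner's own statistics are free to read:

```sql
-- Approximate row count from the statistics kept by VACUUM/ANALYZE;
-- it lags reality between analyzes, so treat it as an estimate only.
SELECT reltuples FROM pg_class WHERE relname = 'mytable';
```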

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] does "select count(*) from mytable" always do a seq scan?

2005-01-07 Thread Culley Harrelson
Hi,

I am using Postgresql 7.4.  I have a table with 1.5 million rows.  It
has a primary key. VACUUM FULL ANALYZE is run every night.  There are
2000-5000 inserts on this table every day but very few updates and
deletes.  When I select count(*) from this table it is using a
sequence scan.  Is this just life or is there some way to get this to
do an index scan?

culley



[GENERAL] will this bug be fixed in 7.5

2004-07-09 Thread culley harrelson
Hi, 

We just experienced this bug:

http://tinyurl.com/28gkh

on 7.4.1 (freebsd and os x).  Is there any chance this is on the list
for 7.5?

culley



[GENERAL] fmgr_info: function 7390843: cache lookup failed

2003-12-01 Thread culley harrelson
I am getting this error:

fmgr_info: function 7390843: cache lookup failed

when trying to insert some data into a table using the tsearch2 contrib 
module (this is postgresql 7.3).   I have the following trigger defined:

BEGIN
NEW.search_vector := 
setweight(to_tsvector(coalesce(NEW.display_text, '')), 'A') || 
to_tsvector(coalesce(NEW.content_desc, ''));
RETURN NEW;
END;

Which I suspect is causing the problem.  Where can I look up that 
function number to verify this?  Any suggestions?

culley
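The OID can be resolved against the system catalog; if this returns no row, the function the trigger points at has been dropped (a common aftermath of dropping and recreating the tsearch2 functions, e.g. via dump/reload), and recreating the trigger should clear the error:

```sql
-- Resolve the OID from the error message to a function name,
-- if the function still exists.
SELECT oid, proname FROM pg_proc WHERE oid = 7390843;
```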





Re: [GENERAL] convert string function and built-in conversions

2003-10-19 Thread culley harrelson
It is one of the extended characters in iso-8859-1.  This data was taken
from a text field in a SQL_ASCII database.  Basically what I am trying to
do is migrate data from a SQL_ASCII database to a UNICODE database by
running all the data through an external script that does something like:

select convert(my_field using ascii_to_utf_8) from my_table;

then inserts the selected text into an identical table in the unicode
database.  All the data goes across, but extended characters such as ñ
are getting munged.  The docs indicate that ascii_to_utf_8 is for
SQL_ASCII -> UNICODE...  Are you saying that ñ isn't really an ASCII
character even though it is valid in a SQL_ASCII database?  I have found
that all extended characters of the various LATIN encodings will work
just fine in my SQL_ASCII database.  

This project is a big can of worms...  Every 6 months I open the can,
stir the worms around a bit, wrinkle my nose then promptly close the can
again and stuff it away for another 6 months. :)  Wish I could figure it
out.
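An alternative to converting each value by hand is to declare what the bytes really are and let the server transcode on the way in; a sketch, assuming the SQL_ASCII data is actually LATIN1 (which the ñ suggests), with placeholder table/column names:

```sql
-- In a session connected to the new UNICODE database:
SET client_encoding TO 'LATIN1';
-- Inserted text now has its bytes converted LATIN1 -> UTF-8 by the server.
INSERT INTO my_table (my_field) VALUES ('lydia eugenia treviño');
RESET client_encoding;
```

If the old database mixes encodings, no single client_encoding will be right for every row, which is usually what makes this particular can of worms keep reopening.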



On Sun, 19 Oct 2003 00:31:43 -0700 (PDT), "Stephan Szabo"
<[EMAIL PROTECTED]> said:
> On Sun, 19 Oct 2003, culley harrelson wrote:
> 
> > It seems to me that these values should be the same:
> >
> > select 'lydia eugenia treviño', convert('lydia eugenia treviño' using
> > ascii_to_utf_8);
> >
> > but they seem to be different.  What am I missing?
> 
> I don't think the marked n is a valid ascii character (it might be
> extended ascii, but that's different and not really standard afaik).
> You're probably getting the character associated with the lower 7 bits.



[GENERAL] convert string function and built-in conversions

2003-10-18 Thread culley harrelson
It seems to me that these values should be the same:

select 'lydia eugenia treviño', convert('lydia eugenia treviño' using
ascii_to_utf_8);

but they seem to be different.  What am I missing?

culley



Re: [GENERAL] 7.3.3 behaving differently on OS X 10.2.6 and FreeBSD

2003-08-14 Thread culley harrelson
DeJuan Jackson wrote:

> I have a suspicion that the version might be different.  I have the same
> symptom here on two different RH 7.3 boxes, one running 7.3.2 and the
> other running 7.3.3.
> It would appear 7.3.2 is more strict about the naming of the GROUP BY
> fields.

They really are the same versions.  For the OS X machine I installed 
from source downloaded from the postgresql ftp site.  FreeBSD was 
installed from the port but my ports tree is up to date.

On freebsd:

501 $ pg_ctl --version
pg_ctl (PostgreSQL) 7.3.3
On OS X:

516 $ pg_ctl --version
pg_ctl (PostgreSQL) 7.3.3




Re: [GENERAL] 7.3.3 behaving differently on OS X 10.2.6 and FreeBSD 4.8-STABLE

2003-08-10 Thread culley harrelson

> That's a guess that doesn't really explain why it'd work under one OS
> and not under another. Are the two versions of Postgres configured
> the same?

I suppose they could be configured differently.  I don't know how to
investigate this.

It isn't really a problem for me-- just strange.

culley





[GENERAL] 7.3.3 behaving differently on OS X 10.2.6 and FreeBSD 4.8-STABLE

2003-08-09 Thread culley harrelson
I don't know if this is a postgresql bug or a problem with my 
architecture but I thought I would post here about a strange bug I just 
came across in my application.

I use OS X 10.2.6 as my development machine and FreeBSD 4.8 for my 
production machines.  All systems are running postgresql 7.3.3. I just 
published some code to production and when testing the production 
results it blew up with a sql parsing error.  The following sql worked 
fine on my OS X development machine:

select u.user_id, u.first_name, u.last_name, u.email_address, w.w9, 
pm.description as payment_method, count(s.user_id) as documents, 
sum(s.payment_amount) as amt_sum from ht_user u inner join writer w on 
u.user_id = w.user_id inner join payment_method pm on 
w.payment_method_id = pm.payment_method_id left join submission s on 
u.user_id = s.user_id group by u.user_id, u.first_name, u.last_name, 
u.email_address, w.w9, pm.description order by lower(last_name) asc

But on my production machine postgresql complained about the order by 
clause-- it wanted the table alias to be on last_name.

culley
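For what it's worth, the stricter parser should be satisfied by qualifying the column rather than relying on the bare alias, i.e. ending the query with:

```sql
order by lower(u.last_name) asc
```

which ought to parse the same way on both machines.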





Re: [GENERAL] [pgsql-advocacy] interesting PHP/MySQL thread

2003-06-24 Thread culley harrelson
Dennis Gearon wrote:
> so, if Postgres were to have a manual like PHP's OLD manual (more next),
> that would be a worthwhile contribution?
>
> The new manuals seem to be drifting to using only GOOGLE listings.
> MUCH less information on one page, not nearly as good search results as
> the old one. I don't know why they are switching.
>
> If Google is going to do web searches for technical sites, it needs to
> change the format.

I think they are having performance problems and they are using Google
to shift the load...





Re: [GENERAL] explicit joins vs implicit joins

2003-06-19 Thread culley harrelson

> Did you actually read the cited links?

I read the 7.3 docs, then fired off an email and forgot about the 7.4 docs.
duh!  sorry.





Re: [GENERAL] PG mailing list problems (was Re: Support issues)

2001-10-15 Thread Culley Harrelson

I have the exact opposite problem-- I keep turning off direct mailing for
pgsql-general (with successful confirmation) and it keeps sending me mail
anyway! haha

culley

--- David Link <[EMAIL PROTECTED]> wrote:
> Jim Caley wrote:
> > 
> > Has anyone heard any more about this problem?  I also haven't gotten
> anything
> > from pgsql-general since Oct. 1.  The other lists seem to be working
> fine for
> > me.
> 
> I am still only getting a few of the pgsql-general postings.  I know
> this by checking the archives.
> 


__
Do You Yahoo!?
Make a great connection at Yahoo! Personals.
http://personals.yahoo.com




[GENERAL] Encoding

2001-09-15 Thread Culley Harrelson

For anyone who was watching me flail with JDBC and UNICODE...

I finally gave up on unicode-- I switched the database encoding to 
MULE_INTERNAL and JDBC is handling everything I can throw at it (we'll just 
have to wait and see if my users manage to enter some unsupported 
characters).  Thanks all for your help!

Culley 


_
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com





Re: Fwd: Re: [GENERAL] unicode in 7.1

2001-09-11 Thread Culley Harrelson

Ack!  I guess I am hitting this problem

I had my database rebuilt to use UNICODE encoding.  Data now appears 
correctly in psql but not when filtered through JDBC.  Unfortunately I'm
using the open source DbConnectionBroker connection pooling object and I
have to dig into this to apply the fix.  It is surprising to me that JDBC
has a problem with a database using UNICODE encoding?!?  I obviously don't
understand the internals of this stuff...

culley


At 10:06 AM 9/11/01 -0700, you wrote:
>Culley,
>
>With out more details of your setup, I can't give you a complete 
>answer.  But check out the info at:
>
>http://lab.applinet.nl/postgresql-jdbc/#CharacterEncoding
>
>for a brief discussion of what I believe is your problem.  There has also 
>been a number of discussions on this on the pgsql-jdbc mail list. You 
>might also want to check the mail archives.
>
>thanks,
>--Barry
>
>
>Culley Harrelson wrote:
>>The data was corrupted in the process of the upgrade.
>>Is there some way to tell what the configuration options were when it was 
>>installed?  I am assuming by API you mean how am I accessing Postgres?  JDBC.
>>Culley
>>
>>>X-Apparently-To: [EMAIL PROTECTED] via web9605; 10 Sep 2001 
>>>18:19:26 -0700 (PDT)
>>>X-Track: 1: 40
>>>To: [EMAIL PROTECTED]
>>>Cc: [EMAIL PROTECTED]
>>>Subject: Re: [GENERAL] unicode in 7.1
>>>X-Mailer: Mew version 1.94.2 on Emacs 20.7 / Mule 4.1
>>>  =?iso-2022-jp?B?KBskQjAqGyhCKQ==?=
>>>Date: Tue, 11 Sep 2001 10:19:00 +0900
>>>From: Tatsuo Ishii <[EMAIL PROTECTED]>
>>>X-Dispatcher: imput version 2228(IM140)
>>>Lines: 9
>>>
>>> > my isp recently upgraded form postgreSQL 7.0 to 7.1.  It went pretty well
>>> > but I just discovered that non-english characters are now in the database
>>> > as a question mark-- inserting non-english characters produces a ? as
>>> > well.  Any idea what has gone wrong and what we need to do to fix this?
>>>
>>>Hard to tell without knowing what the configuration option was and
>>>what kind of API you are using...
>>>--
>>>Tatsuo Ishii
>>
>>_
>>Do You Yahoo!?
>>Get your free @yahoo.com address at http://mail.yahoo.com
>>


_
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com





Fwd: Re: [GENERAL] unicode in 7.1

2001-09-10 Thread Culley Harrelson

The data was corrupted in the process of the upgrade.

Is there some way to tell what the configuration options were when it was 
installed?  I am assuming by API you mean how am I accessing Postgres?  JDBC.

Culley


>X-Apparently-To: [EMAIL PROTECTED] via web9605; 10 Sep 2001 
>18:19:26 -0700 (PDT)
>X-Track: 1: 40
>To: [EMAIL PROTECTED]
>Cc: [EMAIL PROTECTED]
>Subject: Re: [GENERAL] unicode in 7.1
>X-Mailer: Mew version 1.94.2 on Emacs 20.7 / Mule 4.1
>  =?iso-2022-jp?B?KBskQjAqGyhCKQ==?=
>Date: Tue, 11 Sep 2001 10:19:00 +0900
>From: Tatsuo Ishii <[EMAIL PROTECTED]>
>X-Dispatcher: imput version 2228(IM140)
>Lines: 9
>
> > my isp recently upgraded form postgreSQL 7.0 to 7.1.  It went pretty well
> > but I just discovered that non-english characters are now in the database
> > as a question mark-- inserting non-english characters produces a ? as
> > well.  Any idea what has gone wrong and what we need to do to fix this?
>
>Hard to tell without knowing what the configuration option was and
>what kind of API you are using...
>--
>Tatsuo Ishii


_
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com





[GENERAL] unicode in 7.1

2001-09-10 Thread Culley Harrelson

Hello,

my ISP recently upgraded from PostgreSQL 7.0 to 7.1.  It went pretty well
but I just discovered that non-English characters are now in the database
as a question mark-- inserting non-English characters produces a ? as
well.  Any idea what has gone wrong and what we need to do to fix this?

culley

__
Do You Yahoo!?
Get email alerts & NEW webcam video instant messaging with Yahoo! Messenger
http://im.yahoo.com




[GENERAL] jdbc connection pool settings

2001-02-12 Thread Culley Harrelson

I'm in the process of implementing connection pooling
and the setup I'm using (http://www.javaexchange.com -
really slick!) has settings for min # connections and
max # connection.  Any suggestions on where I should
set these values?  min=2, max=6? My site will be
outside the firewall, open to the public for all to
trash.

Culley

__
Do You Yahoo!?
Get personalized email addresses from Yahoo! Mail - only $35 
a year!  http://personal.mail.yahoo.com/



[GENERAL] multi-column fti?

2001-02-09 Thread Culley Harrelson

Can you create a full text index on more than one
column in a table?

Culley


__
Do You Yahoo!?
Get personalized email addresses from Yahoo! Mail - only $35 
a year!  http://personal.mail.yahoo.com/



[GENERAL] timestamp goober

2001-02-08 Thread Culley Harrelson

columns with default timestamp('now') seem to be
defaulting to the time I started postgresql!

What am I doing wrong here?  Is it an os problem? 
Need these columns to capture the current date and
time.

Culley
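This looks like the well-known 'now' default pitfall: in older servers a default written as timestamp('now') can be reduced to a constant when the table is created. Writing the default as a function call makes it evaluate per row; a minimal sketch with placeholder names:

```sql
CREATE TABLE example_log (
    id         integer,
    -- now() runs at INSERT time, not at CREATE TABLE time
    created_at timestamp DEFAULT now()
);
```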

__
Do You Yahoo!?
Get personalized email addresses from Yahoo! Mail - only $35 
a year!  http://personal.mail.yahoo.com/



Re: [GENERAL] selecting a random record

2001-02-06 Thread Culley Harrelson

Can this be done in the framework of plpgsql?  I know
I can do it in the front end (java) but it would be
nice to aviod having to grab the rowcount first.  I
haven't seen a random function in the documentation. 
I could install another language but boy am I lazy :)
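One pure-SQL option that avoids fetching the rowcount first, at the cost of reading and sorting the whole table (so it only suits modest table sizes):

```sql
-- Shuffle the rows and keep one; random() is built in, so no extra
-- procedural language is needed.
SELECT * FROM mytable ORDER BY random() LIMIT 1;
```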


--- Mark Lane <[EMAIL PROTECTED]> wrote:
> On Tuesday 06 February 2001 13:11, you wrote:
> > Any suggestions on how to select a random record
> from
> > any given table?
> >
> > Culley
> >
> Set the range of the table index to a random number
> generator.
> 
> if the index ranges from 100 to 10000 then you would multiply your random
> number by 9900 and then add 100 to generate a random
> index number. Just remember
> to round the number to an integer.
> 
> Mark


__
Do You Yahoo!?
Yahoo! Auctions - Buy the things you want at great prices.
http://auctions.yahoo.com/



[GENERAL] selecting a random record

2001-02-06 Thread Culley Harrelson

Any suggestions on how to select a random record from
any given table?

Culley

__
Do You Yahoo!?
Yahoo! Auctions - Buy the things you want at great prices.
http://auctions.yahoo.com/



[GENERAL] full text searching

2001-02-05 Thread Culley Harrelson

Hi,

OK full text searching.  Will the full text index
catch changes in verb tense?  i.e. will a search for
woman catch women?

I'm researching before I dive in to this later in the
week so please excuse this incompletely informed
question:  Will I need to rebuild postgresql with the
full-text index module included?  Unfortunately I'm
away from my linux machine-- would someone be willing
to email me the README?

Thanks in advance,

Culley

__
Get personalized email addresses from Yahoo! Mail - only $35 
a year!  http://personal.mail.yahoo.com/



[GENERAL] case insensitive unique index

2001-01-30 Thread Culley Harrelson

is there a way to make a unique index case insensitive
for text data?  I can change case in my front end code but I
thought I'd ask :)
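A unique index on an expression can enforce this inside the database, with no front-end changes; a sketch with illustrative table/column names:

```sql
-- 'Foo' and 'FOO' now collide, since uniqueness is checked on the
-- lowercased value.
CREATE UNIQUE INDEX users_username_lower_idx ON users (lower(username));
```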





[GENERAL] JDBC connection failure

2001-01-27 Thread Culley Harrelson

Hi,

I'm pulling my hair out trying to establish a jdbc
connection to postgresql.  The error I receive is:

unable to load class postgresql.Driver

I can however import org.postgresql.util.* without
generating an error through tomcat when I put the
postgresql.jar directly in the /usr/local/tomcat/lib
folder.

Also does it matter what the name of the jar file is? 
All the documentation refers to postgresql.jar and the
current download is jdbc7.0-1.2.jar.  I'm assuming it
does not :)

Culley

__
Do You Yahoo!?
Yahoo! Auctions - Buy the things you want at great prices. 
http://auctions.yahoo.com/



[GENERAL] Re: couple of general questions

2001-01-19 Thread Culley Harrelson





> Best, depending on the scenario. In cases where you are using a fixed
> number of characters, there's no need for the overhead of a varchar. For
> instance if you are storing state abbreviations, they will ALWAYS be 2
> characters. The database can look up those fixed fields faster. But if you
> are storing full state names, it would be a waste to have all those padding
> spaces so that you could fit Mississippi with Maine. All that being said,
> it's my understanding that there will be no benefit to using the CHAR type
> over the VARCHAR type in 7.1 due to architectural changes.


Is there any difference between varchar and text other
than varchar places a
cap on the number of characters?

Culley



__
Do You Yahoo!?
Get email at your own domain with Yahoo! Mail. 
http://personal.mail.yahoo.com/