Re: [GENERAL] PostgreSQL Top 10 Wishlist

2006-01-17 Thread Joost Kraaijeveld
On Tue, 2006-01-17 at 09:52 -0800, David Fetter wrote:
> On Tue, Jan 17, 2006 at 10:28:03AM -0600, Tony Caduto wrote:
> > As long as we are talking wish lists...
> > 
> > What I would like to see is some way to change the ordering of the
> > fields without having to drop and recreate the table.
> 
> Why are you asking us to optimize the 'SELECT *' case which almost
> never belongs in production code in the 1st place?
Because a lot of the tools I use to manage a database during
*development* (e.g. pgAdmin) show the columns in a different order than
the order of the attributes in my Java/C++ code. The "logical" order of the
columns/attributes can change during development. 


-- 
Groeten,

Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
e-mail: [EMAIL PROTECTED]
web: www.askesis.nl 



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] SELECT Rules or stored procedure

2006-01-17 Thread Assad Jarrahian
I have a query that joins two tables, A and B, and does something with
the result.
What I need is that every time one of the tables (A) has its rows
selected, I update a count (which is a column in A) for each selected
row.

I am not sure what is the best way to do this.
1) I could create a dummy view and put a RULE on that (I am not sure how
to get access to the rows that have been selected).
2) I could create a stored procedure: get all the rows to be returned,
loop through the rows updating the column, and then return them.

Q1) Which way is better?
Q2) How does one get access to the rows just selected in the CREATE
RULE computation?
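Since a rule cannot see which rows a SELECT actually returned, a minimal sketch of option 2 in PL/pgSQL might look like this (the table and column names a, b, a_id, id, and hit_count are invented for illustration):

```sql
-- Hypothetical sketch: bump the per-row counter and return the rows.
CREATE OR REPLACE FUNCTION select_and_count(p_key integer)
RETURNS SETOF a AS $$
DECLARE
    r a%ROWTYPE;
BEGIN
    FOR r IN
        SELECT a.* FROM a JOIN b ON b.a_id = a.id WHERE a.id = p_key
    LOOP
        -- record that this row was selected
        UPDATE a SET hit_count = hit_count + 1 WHERE id = r.id;
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- Callers then query the function instead of the tables directly:
-- SELECT * FROM select_and_count(42);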

thanks.
-assad



Re: [GENERAL] prepared statement results don't clear?

2006-01-17 Thread Michael Fuhr
On Tue, Jan 17, 2006 at 03:37:14PM -0500, David Rysdam wrote:
> I have a Tcl function that does this:
> 
> 1) create prepared statement for binary insertion via pg_exec (and 
> releases the result handle)
> 2) run statement with binary data via pg_exec_prepared (and releases the 
> result handle)
> 3) deallocate statement via pg_exec (and releases the result handle)
> 
> When I try to run this function a couple hundred times, I get "hard limit 
> on result handles reached" after 128 successes.  It seems something is 
> not being released.

Can you post a simple test case?  I can provoke "hard limit on
result handles reached" by not clearing results, but so far I haven't
seen that error if I do clear all results.

What versions of PostgreSQL, Tcl, and pgtcl are you using, and on
what platform?  I tested with PostgreSQL 8.1.2, Tcl 8.4.11, and
pgtcl 1.5 on FreeBSD 6.0.

-- 
Michael Fuhr



Re: [GENERAL] mac os x compile failure

2006-01-17 Thread Michael Glaesemann


On Jan 18, 2006, at 4:56 , Tom Lane wrote:

> I'll try to see that this gets fixed for PG 8.1.3, but in the short run
> you might be best off to update your OS X installation, or revert to
> PG 8.0.* which doesn't try to use FULLFSYNC at all.


Thanks for working on this, Tom. I've got a Mac OS X 10.2  
installation at a client and saw the same error when I was attempting  
to upgrade from 7.4 to 8.1 but didn't have time to explore the cause.  
If Neil doesn't report back sooner, I'll see if I can't build against  
head this weekend.


Michael Glaesemann
grzm myrealbox com






Re: [GENERAL] PostgreSQL Top 10 Wishlist

2006-01-17 Thread Chris Browne
[EMAIL PROTECTED] ("Jim C. Nasby") writes:
> On Tue, Jan 17, 2006 at 11:35:05AM -0500, Chris Browne wrote:
>>Note: I spent most of yesterday dealing with this very issue,
>>writing up a total of 31 eye-destroying regular expressions to
>>generate a pl/tcl function to parse cases that I had handy...  I
>>daresay that even with so many regexps, I was _still_ left with a
>>reasonably material number that would not be parsed...
>
> I hope you can post that somewhere for others to use... surely it could
> save a lot of people some time...
>
> Or maybe not; I suspect most people just punt on phone numbers, or force
> them to a very strict format.

The trouble is, the rules wind up being all too context-sensitive.

The problems I run into with telnos coming from one source differ from
the problems with telnos coming from another.

I suppose perhaps I should "collect the whole set" and see if I can
have some common heuristics that will cope well with all of them...

Alas, it's pretty gory, and there's never time :-(.
-- 
output = reverse("gro.gultn" "@" "enworbbc")
http://cbbrowne.com/info/wp.html
Oh,  boy, virtual memory!  Now I'm  gonna make  myself a  really *big*
RAMdisk!



Re: [GENERAL] PostgreSQL Top 10 Wishlist

2006-01-17 Thread Jim C. Nasby
On Tue, Jan 17, 2006 at 11:35:05AM -0500, Chris Browne wrote:
>Note: I spent most of yesterday dealing with this very issue,
>writing up a total of 31 eye-destroying regular expressions to
>generate a pl/tcl function to parse cases that I had handy...  I
>daresay that even with so many regexps, I was _still_ left with a
>reasonably material number that would not be parsed...

I hope you can post that somewhere for others to use... surely it could
save a lot of people some time...

Or maybe not; I suspect most people just punt on phone numbers, or force
them to a very strict format.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] Text field performance

2006-01-17 Thread Neil Conway
On Tue, 2006-01-17 at 15:01 -0800, Glen Parker wrote:
> We're still on 7.4 (sorry, probably should have mentioned that).  Does 
> that documentation apply to the 7.4 series as well?

AFAIK, there haven't been any major changes to TOAST since 7.4, so most
of that documentation should be applicable.

> Maybe an easier question is, can we expect a TOAST performance increase 
> when upgrading to 8.1?

A lot of performance improvements have been made since since 7.4, but I
don't believe any of them have affected TOAST in particular.

-Neil





Re: [GENERAL] Text field performance

2006-01-17 Thread Glen Parker
We're still on 7.4 (sorry, probably should have mentioned that).  Does 
that documentation apply to the 7.4 series as well?


Maybe an easier question is, can we expect a TOAST performance increase 
when upgrading to 8.1?


Thx again...

-Glen

Tom Lane wrote:

Glen Parker <[EMAIL PROTECTED]> writes:

We are having some performance issues we think may be related to large 
text field values, and many records.  So, a couple questions...



See
http://www.postgresql.org/docs/8.1/static/storage-toast.html

regards, tom lane







[GENERAL] Modify the password of the service account?

2006-01-17 Thread Joeseph Blowseph

Hi,

I'd like to be able to modify the auto-generated password of the service 
account created during installation. I'm running XP Home which limits access 
to some of the snap-ins available on XP Pro. I've not been able to work out 
how to make the change. Any suggestions would be very welcome.


JB






Re: [GENERAL] Text field performance

2006-01-17 Thread Tom Lane
Glen Parker <[EMAIL PROTECTED]> writes:
> We are having some performance issues we think may be related to large 
> text field values, and many records.  So, a couple questions...

See
http://www.postgresql.org/docs/8.1/static/storage-toast.html

regards, tom lane



[GENERAL] Text field performance

2006-01-17 Thread Glen Parker
We are having some performance issues we think may be related to large 
text field values, and many records.  So, a couple questions...


1) Are text values always stored in the companion text file, or only if 
they exceed a certain size?  What size?


2) When the query executer reads tuples during a query, does it always 
read the text companion file too?  For example, the executer finds an 
index match and then goes after the heap tuple to continue field 
matching.  Now, assuming the query does not contain any conditions 
involving a text field, does the executer read the text field in even if 
the tuple does not match and won't be included in the result set?


3) Is there any equivalent to the CLUSTER functionality for text 
companion files?  Is there any such thing planned?  What's the best way 
to reduce fragmentation in a text companion file?
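On question 1, a quick hedged illustration (table and column names invented): text values are compressed and moved out of line into the TOAST table only once the row exceeds a threshold (roughly 2 kB); small values stay in the main heap. The per-column strategy can be inspected and changed:

```sql
-- Find the TOAST table backing a given table:
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class WHERE relname = 'mytable';

-- Change a column's storage strategy, e.g. store out of line
-- but skip compression:
ALTER TABLE mytable ALTER COLUMN body SET STORAGE EXTERNAL;
```

PLAIN, MAIN, EXTERNAL, and EXTENDED control compression and out-of-line storage; EXTENDED is the default for text.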


TIA

-Glen



[GENERAL] Windows re-installation problem

2006-01-17 Thread Rob Brenart
I uninstalled PostgreSQL from windows for the sake of re-installing (for 
the sake of documenting an install for our product on a clean machine), 
and now during re-install on the Service Configuration screen I get 
"Invalid username specified: Logon failure: unknown user name or bad 
password."


Fine, that makes sense; I was using a new password for the postgres user... so 
I went to Computer Management and deleted the postgres user, then 
rebooted and came back to the install... but the message is still there. Yes, I 
can rename the user from postgres to postgres2 or some such and things 
work fine, but that isn't acceptable for me right now.


Anyone know what else may need to be purged from the system to allow a 
re-install with the same name/new password? And what do the installer 
developers think about possibly building that into the installer?




[GENERAL] pg_dump throws no buffer space available error

2006-01-17 Thread Vishal Dixit
I can only reproduce this error on Windows 2000; on all other
operating systems it works fine. Sometimes I see a "No buffer
space available" error, other times I see a "Socket not open"
error.

In the database, besides other tables we have one table that
contains bytea type columns, we see the above error when the
copy command is executed for this table.

I put some print statements into socket.c; it looks like the
pgwin32_recv() call times out waiting for data. Since the execution
of the query at the server for the table containing bytea columns
takes a long time, there is nothing in the server-side buffer to send
to the recv side, which just times out and throws the "socket not
open" error. I do not understand the inner workings of the recv()
and send() data transfer. Can someone explain it? Also, has anyone
else seen this problem?

Thanks. 






Re: [GENERAL] Rule problem: return value of insert

2006-01-17 Thread Tom Lane
Stephen Friedrich <[EMAIL PROTECTED]> writes:
> Or do you mean that the last rule that
> gets evaluated will determine the result, even if its WHERE clause evaluates
> to false?

Right.  In effect, the last one in alphabetical order will always
determine the result, whether it's the one that does the useful insert
or not.

> Thanks, I'll try that. Yet I have only a few dozen cdrs at any time, so
> I doubt it will make a big difference?

Perhaps not, but it's worth checking.

regards, tom lane



[GENERAL] prepared statement results don't clear?

2006-01-17 Thread David Rysdam

I have a Tcl function that does this:

1) create prepared statement for binary insertion via pg_exec (and 
releases the result handle)
2) run statement with binary data via pg_exec_prepared (and releases the 
result handle)

3) deallocate statement via pg_exec (and releases the result handle)

When I try to run this function a couple hundred times, I get "hard limit 
on result handles reached" after 128 successes.  It seems something is 
not being released.  To make absolutely sure it's not me that's leaving 
something out there, I output a line each time I either create or 
destroy a result handle and they add up perfectly.  Furthermore, all the 
pg_execs go through another function that has been well-exercised, so I 
don't think the problem is there.


The only thing I can think of is that a prepared statement (or the Tcl 
specific pg_exec_prepared) has, like, a "double" result handle (one for 
the statement itself and one for the exec'ing thereof).  Kind of a 
half-assed theory, but necessity is the mother of such invention.  Does 
anyone else have any better ideas for locating the result handle leak?





Re: [GENERAL] Distance calculation

2006-01-17 Thread Tino Wildenhain
[EMAIL PROTECTED] schrieb:
> Hi
> 
> I have a latitude and longitude for a city and a latitude and longitude
> for each hotel in a hotels table.
> 
> I have to retrieve 20 hotels within 25 miles of that city. The
> below is the query I am using to check the distance. But the query is
> slow because of the distance calculation on the fly and the order by distance.
> 
> Can anybody help me improve performance by refining the lat and
> long data?


Actually I was using the cube and earth datatypes from the contrib
directory.

Instead of saving latitude/longitude I stored an earth value (basically
a 0-dimensional cube, so you have 3 coordinates measured from the
center of the earth in meters; you can also calculate in archaic miles
by changing the constant for the earth radius - see the files in
contrib).

You can create an index on the earth-column (coordinates).

Next I was using cube_enlarge(earth_coordinates,radius,3) to get a
cube which covers nearest cities using the index:

cube_enlarge(start.coordinates,radius,3) @ cities.coordinates

now since you have ruled out a lot of points far away, you can fine-scan
using earth_distance:

AND earth_distance(start.coordinates,cities.coordinates) < radius

this is really fast since only a couple of cities are outside
the circle (actually sphere) but inside the cube.
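A hedged sketch of the above, assuming the contrib cube and earthdistance modules are installed; the table and column names (and the 40 km radius) are invented:

```sql
-- An earth column holds a point as a 0-dimensional cube; index it with GiST.
CREATE TABLE cities (name text, coordinates earth);
CREATE INDEX cities_coord_idx ON cities USING gist (coordinates);

-- ll_to_earth(latitude, longitude) converts degrees to the earth type:
INSERT INTO cities VALUES ('Nijmegen', ll_to_earth(51.84, 5.86));

-- Rough index scan with the enlarged cube, then the exact distance filter:
SELECT c.name
FROM cities c, cities s
WHERE s.name = 'Nijmegen'
  AND cube_enlarge(s.coordinates, 40000, 3) @ c.coordinates
  AND earth_distance(s.coordinates, c.coordinates) < 40000;  -- metres
```

The cube containment test uses the index and does the coarse elimination; earth_distance then discards the few points inside the cube but outside the sphere.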

HTH
Tino



Re: [GENERAL] Rule problem: return value of insert

2006-01-17 Thread Stephen Friedrich

Thanks for the quick answer - even though it does not solve my problem  :-(

Tom Lane wrote:
> Not when you're using a pile of conditional rules like that.  The last
> one to fire determines the result, so you'll only see a nonzero count
> when inserting into the last subtable.

Hm, it fails even if I have only a single inherited table.
Also only one rule will 'fire'. Or do you mean that the last rule that
gets evaluated will determine the result, even if its WHERE clause evaluates
to false?

> I think you'd be better off to forget the rule approach and instead
> put a trigger on the parent table that stores the values into the
> appropriate subtable and then returns NULL.  Unfortunately that won't
> fix the rowcount problem either (because suppressed inserts won't be
> counted), but it will certainly outperform a large collection of rules.

Thanks, I'll try that. Yet I have only a few dozen cdrs at any time, so
I doubt it will make a big difference?




Re: [GENERAL] Rule problem: return value of insert

2006-01-17 Thread Tom Lane
Stephen Friedrich <[EMAIL PROTECTED]> writes:
> In fact I have found some related mails in the archive, but I really do not
> understand why '0' is returned.
> The documentation (34.5. Rules and Command Status) says if I replace an INSERT
> with another INSERT the value returned should be from the new INSERT 
> statement?

Not when you're using a pile of conditional rules like that.  The last
one to fire determines the result, so you'll only see a nonzero count
when inserting into the last subtable.

I think you'd be better off to forget the rule approach and instead
put a trigger on the parent table that stores the values into the
appropriate subtable and then returns NULL.  Unfortunately that won't
fix the rowcount problem either (because suppressed inserts won't be
counted), but it will certainly outperform a large collection of rules.
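A hedged, untested sketch of that trigger, using the calls/cdr_id names from the original post (the caller column is invented; in 8.1 the row values have to be spliced into the dynamic SQL by hand, e.g. with quote_literal(), since EXECUTE ... USING does not exist yet):

```sql
CREATE OR REPLACE FUNCTION calls_insert_redirect() RETURNS trigger AS $$
BEGIN
    -- Route the row into the matching calls_<cdr_id> child table.
    EXECUTE 'INSERT INTO calls_' || NEW.cdr_id
         || ' VALUES (' || NEW.cdr_id
         || ', ' || quote_literal(NEW.caller)   -- other columns likewise
         || ')';
    RETURN NULL;  -- suppress the insert into the empty parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER calls_redirect BEFORE INSERT ON calls
    FOR EACH ROW EXECUTE PROCEDURE calls_insert_redirect();
```

One trigger replaces the whole collection of per-partition rules, though as noted it still reports a row count of 0 for the suppressed insert.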

regards, tom lane



Re: [GENERAL] mac os x compile failure

2006-01-17 Thread Tom Lane
Neil Brandt <[EMAIL PROTECTED]> writes:
> fd.c: In function `pg_fsync_writethrough':
> fd.c:271: `F_FULLFSYNC' undeclared (first use in this function)
> fd.c:271: (Each undeclared identifier is reported only once
> fd.c:271: for each function it appears in.)

Hmm.  This is our bug: the code mistakenly supposes that every version
of OS X has that symbol, whereas evidently it was introduced in 10.3.

I'll try to see that this gets fixed for PG 8.1.3, but in the short run
you might be best off to update your OS X installation, or revert to
PG 8.0.* which doesn't try to use FULLFSYNC at all.

If you'd really like to stay on OS X 10.2, please consider joining the
buildfarm
http://www.pgbuildfarm.org/index.html
so that any other such problems will be caught promptly.  We have
buildfarm members running 10.3 and 10.4, but nobody covering 10.2.

regards, tom lane



Re: [GENERAL] mac os x compile failure

2006-01-17 Thread Joshua D. Drake

Neil Brandt wrote:

I'm trying to compile postgres 8.1.2 on OS X 10.2.8.  Unfortunately I'm not 
much of a
compile-your-own guy, and I've hit a compile error that's beyond my 
improvisation abilities, and
gets no google hits either.
  

Make your life easy. Use Darwin ports:

http://darwinports.opendarwin.org/




--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/




[GENERAL] mac os x compile failure

2006-01-17 Thread Neil Brandt
I'm trying to compile postgres 8.1.2 on OS X 10.2.8.  Unfortunately I'm not 
much of a
compile-your-own guy, and I've hit a compile error that's beyond my 
improvisation abilities, and
gets no google hits either.

I realize this is also an OS X expert thing; I'm seeking help on one of the 
apple.com lists as well.

I installed the December 2002 xcode package of developer stuff from apple, 
which seemed to be the
last one that was for OS X 10.2.  

Initially I only installed the parts called BSDSDK and Developer Tools 
Software.   

The postgres configure utility ran through fine.

I ran make.

Several minutes into it, I got a missing header file for osx installed version 
constants.

So I then installed the xcode part called Mac OS X SDK.

I ran make again. 

Now, further in, I get the following output and error (note that the "make -C file 
SUBSYS.o" is output, not my initial command, which was just "make" in the directory at the top of the 
postgres source).


make -C file SUBSYS.o
gcc -no-cpp-precomp -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Winline -fno-strict-aliasing
-I../../../../src/include -I/sw/include  -c -o fd.o fd.c
fd.c: In function `pg_fsync_writethrough':
fd.c:271: `F_FULLFSYNC' undeclared (first use in this function)
fd.c:271: (Each undeclared identifier is reported only once
fd.c:271: for each function it appears in.)
make[4]: *** [fd.o] Error 1
make[3]: *** [file-recursive] Error 2
make[2]: *** [storage-recursive] Error 2
make[1]: *** [all] Error 2
make: *** [all] Error 2


Searching, I couldn't find anything useful on the F_FULLFSYNC constant, and no 
reports of a similar failure.

Any ideas?  (Help!)

Thanks,

Neil


[GENERAL] Rule problem: return value of insert

2006-01-17 Thread Stephen Friedrich

Sorry for the somewhat long explanation; this is probably an advanced problem.
Any help would be much appreciated all the more!

I am trying to implement table partitioning (postgres 8.1.2).
Basically I am dealing with Call Detail Records (cdrs) (telecom invoices)
where each one contains possibly millions of Calls.
Since most queries deal with calls from exactly one cdr, partitioning seems very
natural.

I managed to create a trigger that creates a separate, inherited calls_
table for each cdr. Also it creates a rule that redirects inserts into that 
table.
This is working very nicely and performance is drastically better.

However it broke our Hibernate applications, because Hibernate checks the result
value of inserts. Because the rule rewrites the insert queries, the result now
always seems to be 0, which makes Hibernate throw an exception.

I just removed that check from Hibernate and recompiled Hibernate to make it
work.
However, that's not really a desirable solution (to say the least; I will never
manage to get a hacked Hibernate version into production).

Is there any way to make the rewritten query return the correct inserted row 
count?

In fact I have found some related mails in the archive, but I really do not
understand why '0' is returned.
The documentation (34.5. Rules and Command Status) says if I replace an INSERT
with another INSERT the value returned should be from the new INSERT statement?

Here is my sql code:

CREATE OR REPLACE FUNCTION createCallsSubTable() RETURNS TRIGGER AS $cs_subs$
DECLARE
createStatement text;
ruleStatement text;
BEGIN
createStatement := 'create table calls_'
|| NEW.id
|| '( check(cdr_id = '
|| NEW.id
|| '::int8)) inherits(calls);';
EXECUTE createStatement;

ruleStatement := 'CREATE OR REPLACE RULE calls_insert_'
|| NEW.id
|| ' AS '
|| 'ON INSERT TO calls '
|| 'WHERE cdr_id = '
|| NEW.id
|| ' DO INSTEAD INSERT INTO calls_'
|| NEW.id
|| ' VALUES (new.*)';
EXECUTE ruleStatement;

RETURN NULL;
END;
$cs_subs$ LANGUAGE plpgsql;

CREATE TRIGGER cs_subs AFTER INSERT ON cdrs
   FOR EACH ROW EXECUTE PROCEDURE createCallsSubTable();





Re: [GENERAL] PostgreSQL Top 10 Wishlist

2006-01-17 Thread Jaime Casanova
> > > 8.
> > > The ability to use procedural-language extensions everywhere, not just
> > > in functions.
> >
> > Like where? Give an example.
>
> // PHP
> rows = pg_query('IF ... THEN ... ENDIF;');
>

// PHP
rows = pg_query('SELECT CASE WHEN ... ELSE ... END;');

--
Atentamente,
Jaime Casanova
(DBA: DataBase Aniquilator ;)



Re: [GENERAL] Huge number of disk writes after migration to 8.1

2006-01-17 Thread Marcin
Qingqing Zhou wrote:
> A similar problem was reported before:
> 
> http://archives.postgresql.org/pgsql-admin/2005-12/msg00266.php
> 
> But we conclude that's not related to pgstats. See if that's related to
> your situation.

Unfortunately, I don't think so. 8.0.3 ran just fine, and I don't
see any pgsql_tmp files. The problems started after migration to 8.1.2.
I posted to pgsql-bugs, following Tom's suggestion.
BTW, I made some benchmarks on test machine (which is 32bit 2xPIII
1.4GHz) with pgbench and I was quite amazed:
pgbench  -S -c 50 -t 1 -v pgbench
with stats collector disabled resulted in:
tps = 3178.346439 (including connections establishing)
tps = 3183.360731 (excluding connections establishing)

with the stats collector enabled, but stats_command_string disabled:
tps = 3143.376908 (including connections establishing)
tps = 3147.564695 (excluding connections establishing)

and with stats_command_string enabled:
tps = 2446.136610 (including connections establishing)
tps = 2448.785260 (excluding connections establishing)

However, I didn't notice any suspicious write activity.

Thanks for suggestion,
-- 
Marcin



[GENERAL] Change owner of all database objects

2006-01-17 Thread Andrus
Currently all my database objects are owned by superuser postgres .

I need to change owner to a nonprivileged role for all objects in a 
database.

I'm using Postgres 8.1 server and client on Windows XP, with only the PL/pgSQL 
language installed in the server.

I'm thinking about the following possibilities:

1. Create a loop over all database objects and issue change owner statements
2. Update system tables.
3. Make database dump and restore
4. make text dump and run some global find/replace. I'm not sure which 
utility can handle it in windows

Is there any sample how to do it in Windows ?
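For possibility 1, one low-tech sketch from psql (the role name is a placeholder; this covers only plain tables, so sequences, views, and functions would need similar queries against pg_class and pg_proc):

```sql
-- Generate the ALTER statements with a query, save them to a file,
-- then run the file.  Works the same on Windows as elsewhere.
\t on
\o change_owner.sql
SELECT 'ALTER TABLE ' || schemaname || '.' || tablename
       || ' OWNER TO newrole;'
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
\o
\i change_owner.sql
```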

Andrus. 





Re: [GENERAL] Distance calculation

2006-01-17 Thread Michael Fuhr
On Tue, Jan 17, 2006 at 10:00:22AM -0800, [EMAIL PROTECTED] wrote:
> I have a latitude and longitude for a city and a latitude and longitude 
> for each hotel in a hotels table.
> 
> I have to retrieve 20 hotels within 25 miles of that city. The below 
> is the query I am using to check the distance. But the query is slow 
> because of the distance calculation on the fly and the order by distance.

You appear to be using PostGIS but the query you posted doesn't use
any of PostGIS's indexable operators.  Have you read the "Using
PostGIS" documentation, in particular the parts that discuss creating
and using indexes on geometry columns?

http://postgis.refractions.net/docs/ch04.html

Your data looks like it has separate lat/lon columns when it should
have a geometry column with a GiST index.

If you're using PostGIS then you might want to subscribe to the
postgis-users mailing list:

http://postgis.refractions.net/mailman/listinfo/postgis-users

-- 
Michael Fuhr



Re: [GENERAL] Distance calculation

2006-01-17 Thread John Sidney-Woollett
I'm no expert on this, but can't you limit the points by checking only 
hotels whose latitude is +/- 25 miles north/south of your city, AND whose 
longitude is also +/- 25 miles east/west of your city? It's crude but will 
probably eliminate lots of points you should never be checking...


If you could index an approx lat and long for each hotel you could use 
two indexes (or one combined) on these fields to filter the results 
BEFORE applying the DISTANCE function.


You'll need to compute your approx upper and lower bound lat and long 
for filtering.


I'm not sure if this approach works, but it may help?
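As a rough sketch of that prefilter (the column names and the city's coordinates are invented placeholders):

```sql
-- One degree of latitude is about 69 miles; a degree of longitude
-- shrinks by cos(latitude), so widen the box accordingly.
SELECT h.*
FROM hotels h
WHERE h.lat  BETWEEN 35.0 - 25.0/69.0
                 AND 35.0 + 25.0/69.0
  AND h.long BETWEEN -106.6 - 25.0/(69.0 * cos(radians(35.0)))
                 AND -106.6 + 25.0/(69.0 * cos(radians(35.0)));
-- ...then apply the exact DISTANCE() test and the ORDER BY only to
-- this much smaller candidate set.
```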

John

[EMAIL PROTECTED] wrote:

Hi

I have a latitude and longitude for a city and a latitude and longitude 
for each hotel in a hotels table.


I have to retrieve 20 hotels within 25 miles of that city. The 
below is the query I am using to check the distance. But the query is 
slow because of the distance calculation on the fly and the order by distance.


Can anybody help me improve performance by refining the lat and 
long data?

v_point is the city lat and long.(point(citylong citylat))

 SELECT pr.property_id
  , pr.property_type As property_type_id
   , pr.property_name
,round (DISTANCE( v_point:: geometry,
  POINTFromText('Point(' ||pr.long ||' ' || 
pr.lat||')')::geometry) *69.055) as CityToHotelDistance

   FROM property.property pr
INNER JOIN place p ON (pr.place_id = p.place_id)
   INNER JOIN placedetail pd ON (p.place_id = pd.place_id)
   LEFT OUTER JOIN property.vw_property_price vwp ON 
(vwp.property_id = pr.property_id)

   WHERE DISTANCE( v_point :: geometry,
  POINTFromText('Point(' ||pr.long ||' ' || 
pr.lat||')')::geometry) < .4  AND pr.place_id != p_place_id

  AND (pr.status_type_id is null OR pr.status_type_id = 0)
  ORDER BY DISTANCE( v_point :: geometry,
  POINTFromText('Point(' ||pr.long ||' ' || 
pr.lat||')')::geometry)

  offset 0 LIMIT 20;




Re: [GENERAL] Huge number of disk writes after migration to 8.1

2006-01-17 Thread Qingqing Zhou
On Tue, 17 Jan 2006, Marcin wrote:

>
> Playing with postgresql.conf I changed "stats_command_string" to off,
> reloaded config, and restarted connection from applications (the 
> postgresql
> wasn't restarted). The write rate immediately drops down, and is now
> at 8000-9000 blocks per second (which is still 4-5 times more than in 
> 8.0.3).
>

A similar problem was reported before:

http://archives.postgresql.org/pgsql-admin/2005-12/msg00266.php

But we conclude that's not related to pgstats. See if that's related to
your situation.

Regards,
Qingqing





[GENERAL] Distance calculation

2006-01-17 Thread sunithab

Hi

I have a latitude and longitude for a city and a latitude and longitude for 
each hotel in a hotels table.


I have to retrieve 20 hotels within 25 miles of that city. The below is 
the query I am using to check the distance. But the query is slow because of 
the distance calculation on the fly and the order by distance.


Can anybody help me improve performance by refining the lat and long 
data?

v_point is the city lat and long.(point(citylong citylat))

 SELECT pr.property_id
  , pr.property_type As property_type_id
   , pr.property_name
,round (DISTANCE( v_point:: geometry,
  POINTFromText('Point(' ||pr.long ||' ' || 
pr.lat||')')::geometry) *69.055) as CityToHotelDistance

   FROM property.property pr
INNER JOIN place p ON (pr.place_id = p.place_id)
   INNER JOIN placedetail pd ON (p.place_id = pd.place_id)
   LEFT OUTER JOIN property.vw_property_price vwp ON (vwp.property_id = 
pr.property_id)

   WHERE DISTANCE( v_point :: geometry,
  POINTFromText('Point(' ||pr.long ||' ' || 
pr.lat||')')::geometry) < .4  AND pr.place_id != p_place_id

  AND (pr.status_type_id is null OR pr.status_type_id = 0)
  ORDER BY DISTANCE( v_point :: geometry,
  POINTFromText('Point(' ||pr.long ||' ' || 
pr.lat||')')::geometry)

  offset 0 LIMIT 20;




Re: [GENERAL] PostgreSQL Top 10 Wishlist

2006-01-17 Thread David Fetter
On Tue, Jan 17, 2006 at 10:28:03AM -0600, Tony Caduto wrote:
> As long as we are talking wish lists...
> 
> What I would like to see is some way to change the ordering of the
> fields without having to drop and recreate the table.

Why are you asking us to optimize the 'SELECT *' case which almost
never belongs in production code in the 1st place?

> unless...is it possible to safely modify the attnum field of
> pg_attribute?  I know we should never directly modify system
> tables but...

You're right.  No 'but' involved :)

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778

Remember to vote!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Is there a way to list runaway queries and kill them?

2006-01-17 Thread Michael Fuhr
On Tue, Jan 17, 2006 at 09:38:37AM +, frank church wrote:
> Is there a way to list runaway or long runninng queries and kill them?

With the proper statistics collector settings you can see queries
with the pg_stat_activity view.  You can automatically kill
long-lasting queries by setting the statement_timeout parameter
(but beware of setting it globally -- you probably don't want to
time out maintenance operations like VACUUM).  In 8.0 and later you
can cancel a query in another backend with pg_cancel_backend().

http://www.postgresql.org/docs/8.1/interactive/monitoring-stats.html
http://www.postgresql.org/docs/8.1/interactive/runtime-config-client.html
http://www.postgresql.org/docs/8.1/interactive/functions-admin.html

(These links are for 8.1; use the documentation for whatever version
you're running.)
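A sketch of the statements described above as they would look on an 8.x server; the five-minute threshold and helper name are illustrative, while `procpid` and `current_query` are the pre-9.2 column names of the pg_stat_activity view:

```python
# List long-running statements via the statistics collector
# (requires stats_command_string = on so current_query is populated).
LONG_RUNNING_SQL = """
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
WHERE query_start < now() - interval '5 minutes';
"""

def cancel_statement(pid: int) -> str:
    """Build the call that cancels the running query in backend `pid`
    (pg_cancel_backend is available in 8.0 and later)."""
    return f"SELECT pg_cancel_backend({pid});"

# A per-session safety net instead of a global statement_timeout,
# so maintenance commands like VACUUM are not affected:
SESSION_TIMEOUT_SQL = "SET statement_timeout = 300000;"  # milliseconds
```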

-- 
Michael Fuhr

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL Top 10 Wishlist

2006-01-17 Thread Chris Browne
[EMAIL PROTECTED] ("Jim C. Nasby") writes:
> On Mon, Jan 16, 2006 at 12:13:15PM -0500, Chris Browne wrote:
>> What you seem to be after, here, would confine your telno formatting
>> to telephone numbers for Canada and the United States, and would break
>> any time people have a need to express telephone numbers outside those
>> two countries.
>> 
>> It would be quite interesting to add an E.164 type, as it could
>> represent phone numbers considerably more compactly than is the case
>> for plain strings.  The 20 digits permissible across 1. and 2. could
>> be encoded in... 68 bits.
>
> And it would be trivial to provide functions to map that into the
> customary format for various countries. In fact, since there's 4 bits
> left over, it might be possible to encode the formatting used for the
> number in the storage itself.
>
> BTW, you sure about 68 bits? That doesn't seem to allow for a full 20
> digit number; or are there restrictions on the max value for one of the
> fields?

Hmm.

 [3]> (let ((ttl 1)) (loop
   for i from 1 to 68
   do (setf ttl (* ttl 2))
   do (format t "n=~D  2^n=~D~%" i ttl)))
n=1  2^n=2
n=2  2^n=4
n=3  2^n=8
n=4  2^n=16
n=5  2^n=32
n=6  2^n=64
n=7  2^n=128
n=8  2^n=256
n=9  2^n=512
... boring bits elided :-) ...
n=60  2^n=1152921504606846976
n=61  2^n=2305843009213693952
n=62  2^n=4611686018427387904
n=63  2^n=9223372036854775808
n=64  2^n=18446744073709551616
n=65  2^n=36893488147419103232
n=66  2^n=73786976294838206464
n=67  2^n=147573952589676412928
n=68  2^n=295147905179352825856
NIL

Actually, we pass 10^20 at 2^67, so I went 1 bit too far.  We could,
in principle, get 20 digits in 67 bits.  Not that this is necessarily
the wisest move.

That's a bit over 8 bytes, which is certainly more space-efficient
than 20 bytes.  It's certainly an open question whether that
efficiency is actually worth anything.

My sense is that it would be better to use something equivalent to BCD
(which is what we do with NUMERIC), packing two digits per byte, and
have two "segments" to the telno, a country code segment, and a "local
number" segment.  Using BCD, this would occupy 10 bytes, which, by
magical happenstance, is the same size as a US/Canada phone number.
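The arithmetic above and the BCD idea can be sanity-checked with a short sketch; the pack/unpack helpers are illustrative, not an actual PostgreSQL type, and a real encoding would also need an explicit length field so genuine leading zeros survive:

```python
# 20 decimal digits fit in 67 bits, as computed above:
assert 10**20 < 2**67

def bcd_pack(digits: str) -> bytes:
    """Pack a decimal digit string two digits per byte (BCD-style).
    An odd-length string is padded with a leading 0 nibble."""
    if not digits.isdigit():
        raise ValueError("decimal digits only")
    if len(digits) % 2:
        digits = "0" + digits
    return bytes(int(digits[i]) << 4 | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

def bcd_unpack(packed: bytes) -> str:
    """Inverse of bcd_pack; drops the padding nibble (and, with it, any
    genuine leading zeros -- hence the need for a length field)."""
    out = "".join(f"{b >> 4}{b & 0xF}" for b in packed)
    return out.lstrip("0") or "0"

# A full 20-digit number occupies exactly 10 bytes, as claimed above.
packed = bcd_pack("1" * 20)
```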

The real questions are how to deal with:

a) Parsing incoming data, since formats people use vary so inanely

   Note: I spent most of yesterday dealing with this very issue,
   writing up a total of 31 eye-destroying regular expressions to
   generate a pl/tcl function to parse cases that I had handy...  I
   daresay that even with so many regexps, I was _still_ left with a
   reasonably material number that would not be parsed...

b) What sorts of extra operators would be interesting to add in.

   Extracting country code is an obvious thing.  Applying formatting
   rules based on country code is another.
-- 
let name="cbbrowne" and tld="ntlug.org" in name ^ "@" ^ tld;;
http://www.ntlug.org/~cbbrowne/linux.html
Rules of the Evil Overlord #35. "I  will not grow a goatee. In the old
days they made  you look diabolic. Now they just make  you look like a
disaffected member of Generation X." 

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] LIMIT + OFFSET

2006-01-17 Thread A. Kretschmer
On 17.01.2006 at 16:43:36 +0100, MG wrote:
> Hello,
> 
> I want to show an overview where you can place 16 data sets.
> I use the sql-statement
> SELECT F1,F2 FROM testtable limit 16 offset 0
> 
> To show different pages I vary the value for offset.
> 
> But if I want to stay on a specific data set, I have the problem of
> finding the right value for the offset.
> How can I realize this?

You should add an 'ORDER BY F1, F2' to get a defined order for the
result set.
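With a defined ORDER BY in place, the "stay on a specific data set" problem is usually solved by keyset (seek) pagination: remember the sort key of the last row shown and continue after it, instead of computing an OFFSET. A sketch over an in-memory list, assuming F1 is a unique sort key (names follow the example above):

```python
def page_after(rows, last_key=None, page_size=16):
    """Keyset pagination: return the next page of rows ordered by F1.

    SQL equivalent (assuming F1 is unique):
        SELECT F1, F2 FROM testtable
        WHERE F1 > %(last_key)s
        ORDER BY F1
        LIMIT 16;
    """
    ordered = sorted(rows, key=lambda r: r["F1"])
    if last_key is not None:
        ordered = [r for r in ordered if r["F1"] > last_key]
    return ordered[:page_size]

rows = [{"F1": i, "F2": f"item{i}"} for i in range(1, 41)]
first = page_after(rows)
second = page_after(rows, last_key=first[-1]["F1"])
```

Unlike OFFSET, the page boundary stays anchored to a concrete row, so it does not drift when rows are inserted or deleted ahead of the current position.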


HTH, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47212,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] LIMIT + OFFSET

2006-01-17 Thread MG



Hello,
 
I want to show an overview where you can place 16 data sets.
I use the sql-statement
SELECT F1,F2 FROM testtable limit 16 offset 0
 
To show different pages I vary the value for offset.
 
But if I want to stay on a specific data set, I have the problem of
finding the right value for the offset.
How can I realize this?
 
Thanks for your help
 
Michaela


Re: [GENERAL] Strange error while executing query from front end:

2006-01-17 Thread Tom Lane
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Mon, Jan 16, 2006 at 09:27:30AM +0530, Mavinakuli, Prasanna (STSD) wrote:
>> ERROR:  xlog flush request 0/108EA5F8 is not satisfied --- flushed only
>> to 0/1813C60

> If I had to guess, I'd say you're having a hard disk failure of some
> kind.  In particular, this tells you that a flush of the write ahead
> log isn't completing.  That's a Bad Thing.

Actually it looks like corrupt data to me: some page's LSN field
contains a value that is past the actual end of WAL, and therefore
isn't a correct copy of the location of the last WAL record affecting
that page, as it's supposed to be.

Could have resulted from a hardware error, but it's hard to tell.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Huge number of disk writes after migration to 8.1

2006-01-17 Thread Tom Lane
Marcin <[EMAIL PROTECTED]> writes:
> It seems that some changes to stats collector introduced in 8.1 are
> now eating my CPU power, and probably also the (I)/O bandwidth. :(

Yeah, something wrong there :-(.  What did you say your platform was
exactly?  Would you strace the collector process, and send maybe ten K
or so of trace output to pgsql-bugs?  It's probably not appropriate for
pgsql-general. 

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] A tale of two similar databases

2006-01-17 Thread James Robinson

Have you vacuum'd and/or analyzed D2?

http://www.postgresql.org/docs/8.1/static/maintenance.html
http://www.postgresql.org/docs/8.1/static/sql-analyze.html


James Robinson
Socialserve.com


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] About Full-text searching under postgresql

2006-01-17 Thread Larry Rosenman
Emi Lu wrote:
> Hi,
> 
> Could someone suggest some links/online docs about how PostgreSQL
> supports full-text searching, please?
> 
> Thanks a lot,
> Emi
 For information about tsearch2 (the contrib/tsearch2 module), see
 http://www.devx.com/opensource/Article/21674
 or http://www.devx.com/opensource/Article/21674/0/page/3
 or http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/

Those are good places to start.

LER
-- 
Larry Rosenman  
Database Support Engineer

PERVASIVE SOFTWARE, INC.
12365B RIATA TRACE PKWY
3015
AUSTIN TX  78727-6531 

Tel: 512.231.6173
Fax: 512.231.6597
Email: [EMAIL PROTECTED]
Web: www.pervasive.com 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] About Full-text searching under postgresql

2006-01-17 Thread Emi Lu

Hi,

Could someone suggest some links/online docs about how PostgreSQL
supports full-text searching, please?


Thanks a lot,
Emi


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] linking temporary tables from MS Access?

2006-01-17 Thread Tony Caduto

Zlatko Matić wrote:

Is it possible to use temporary tables from MS Access, as linked tables?
I was not able to link, but maybe someone succeeded?


It should work if you can do it all in the context of a single
connection. If it does not, just roll your own temp table: create one
on the fly, fill it with what you want, and then drop it when you don't
need it anymore.


--
Tony Caduto
AM Software Design
Home of PG Lightning Admin for Postgresql
http://www.amsoftwaredesign.com

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] full text search

2006-01-17 Thread Oleg Bartunov

You may try tsearchd
 http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch2

Oleg
On Tue, 17 Jan 2006, Martin Krallinger wrote:


Hi all,

I am not a Postgres expert, and thus would like to ask you a question related
to full-text search (and indexing). I actually would like to
search a table of over 10 GB of free text. I am not quite sure what would be
the best (fastest) way to do it. I was trying out tsearch2, but it seemed to
be very slow.

any suggestions?


thanks,


Martin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

 http://archives.postgresql.org



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] full text search

2006-01-17 Thread Martin Krallinger

Hi all,

I am not a Postgres expert, and thus would like to ask you a question
related to full-text search (and indexing). I actually would like to
search a table of over 10 GB of free text. I am not quite sure what
would be the best (fastest) way to do it. I was trying out tsearch2,
but it seemed to be very slow.

any suggestions?


thanks,


Martin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Strange error while executing query from front end:

2006-01-17 Thread Andrew Sullivan
On Mon, Jan 16, 2006 at 09:27:30AM +0530, Mavinakuli, Prasanna (STSD) wrote:
> 
> Hi,
> 
> I am getting following strange errors while executing queries.
> 
> 1)
> ERROR:  xlog flush request 0/108EA5F8 is not satisfied --- flushed only
> to 0/1813C60

If I had to guess, I'd say you're having a hard disk failure of some
kind.  In particular, this tells you that a flush of the write ahead
log isn't completing.  That's a Bad Thing.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
This work was visionary and imaginative, and goes to show that visionary
and imaginative work need not end up well. 
--Dennis Ritchie

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] tablespace and pg_dump

2006-01-17 Thread Michelle Konzack
Hello,

I have around 5000 $USERs on my server, and each has its own database,
which is automatically created when I add a new $USER with adduser.

The server has three 2-channel RAID controllers, and the storage is
split into 6 RAID-5 sets (each 13+2 HDDs of 76 GByte).

The $USERs are spread across the different RAID-5 sets, and each $HOME
contains a ~/pgsql/ directory holding that $USER's tablespace.

Now my question:  :-)

What happens to the tablespaces if I do a "pg_dumpall" and restore it
on another machine that has, for example, only ONE disk?

Greetings
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSM LinuxMichi
0033/3/8845235667100 Strasbourg/France   IRC #Debian (irc.icq.com)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Data loading from a flat file...

2006-01-17 Thread Michelle Konzack
Am 2006-01-05 23:04:16, schrieb Angshu Kar:

> Also, my data file is showing some ^M chars like
> 
> B1^M   C1^M   E1
> B2^M   C2^M   E2

If those ^M characters are coming from your data file, then you should
run it through

dos2unix

and the problem is gone.
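If dos2unix is not installed, the same conversion is trivial to do by hand; a minimal sketch (the file name in the commented usage is illustrative):

```python
def dos2unix(data: bytes) -> bytes:
    """Convert DOS/Windows line endings (CRLF) to Unix (LF).
    Also drops stray lone CRs, which is what the ^M characters are."""
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

# Typical use on a file (path is hypothetical):
# with open("data.txt", "rb") as f:
#     cleaned = dos2unix(f.read())
# with open("data.txt", "wb") as f:
#     f.write(cleaned)
```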

Greetings
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSM LinuxMichi
0033/3/8845235667100 Strasbourg/France   IRC #Debian (irc.icq.com)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] "REFERENCES" and UNIQUE

2006-01-17 Thread Michelle Konzack
Am 2006-01-04 12:08:30, schrieb Stephan Szabo:
> 
> On Wed, 4 Jan 2006, Michelle Konzack wrote:

> > | CREATE TABLE countries (
> > | serno   int NOT NULL UNIQUE,
> > | isocode varchar(2)  NOT NULL UNIQUE,
> > | EN  textNOT NULL,
> > | DE  textNOT NULL,
> > | FR  textNOT NULL
> > | );

> > | CREATE TABLE cities (
> > | serno   int NOT NULL UNIQUE,
> > | country varchar(2)  NOT NULL REFERENCES countries (isocode),
> > | EN  varchar(30) NOT NULL,
> > | DE  varchar(30) NOT NULL,
> > | FR  varchar(30) NOT NULL
> > | );

> Which in this case hints that there's a problem with the schema.
> 
> What is the final effect you're looking for?  The above does let you
> lookup the DE or FR strings for a city of a member (what if two cities had
> the same EN name but different DE or FR names, how would you decide which
> one it was). If you only wanted to make sure that the city name was in a
> list, then it wouldn't matter if you made it unique.

Hmmm, I have never seen this.  I have checked my table and I have more
than 30,000 cities and no duplicates.

What do you think I should do?

At first I was thinking of additionally using geographical coordinates,
but because I have none I left them out.  Maybe I should add them even
though I do not know them currently?

> The easiest is to write triggers that check for a matching row on insert
> or update to members and make sure that you aren't removing the last match
> on update or deletion of cities. You need to be a little careful to get
> the locking right for concurrent actions on cities and members.

Hmmm...

I think, I will change this part.

If I enter new members and I add the city, I will do the following:

1)  enter the ISO country code
2)  click a link to select the first letter of the city,
    which triggers a query on the available cities
3)  now a)  select the city from the table
   or   b)  enter a new city name
4)  while submitting the new/changed member data, I use only
    the "serno" to determine which city I have chosen if
    several cities of the same name exist

This means I should change the TABLE to

CREATE TABLE cities (
serno   int NOT NULL UNIQUE,
country varchar(2)  NOT NULL REFERENCES countries (isocode),
geo ???,
EN  varchar(30) NOT NULL,
DE  varchar(30),
FR  varchar(30)
);

Is there a data type for geographical data like GIS or something similar?

Please note, that I am using PostgreSQL 7.4.5

Greetings
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSM LinuxMichi
0033/3/8845235667100 Strasbourg/France   IRC #Debian (irc.icq.com)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Huge number of disk writes after migration to 8.1

2006-01-17 Thread Marcin
On Mon, Jan 16, 2006 at 05:09:18PM -0500, Qingqing Zhou wrote:
> AFAICS the difference maybe related to this between 8.0 and 8.1 is the 
> introduction of autovacuum. But seems you didn't turn it on. Do you observed 
> only writes increased or both read/write? If the latter, that might be an 
> unoptimized query plan invloves seqscan ... Can you find out what's the 
> writes on?

Autovacuum is turned off.
The reads remain at a very, very low level. According to the monitored
values from /sys/block/md?/stat (confirmed by iostat output), the read
rate stays at 70-80 blocks per second, while the write rate keeps at 2 blocks.

However, I found something interesting.

Playing with postgresql.conf I changed "stats_command_string" to off,
reloaded the config, and restarted the connections from the applications
(PostgreSQL itself wasn't restarted). The write rate immediately dropped,
and is now at 8000-9000 blocks per second (which is still 4-5 times more
than in 8.0.3).

Looking at the CPU usage in top, I found strange stats for postgres processes
(ps aux dump):
 11:25 /usr/lib/postgresql/bin/postmaster
  0:25 postgres: writer process
  4:21 postgres: stats buffer process
201:46 postgres: stats collector process
(PostgreSQL is running for two and a half day now).


I found some ps -aux dumps from last week, (with 8.0.3 and
stats_command_string enabled) and CPU usage was completely different:
 31:07 /usr/lib/postgresql/bin/postmaster
175:10 postgres: writer process
 30:55 postgres: stats buffer process
 58:43 postgres: stats collector process
(PostgreSQL was running for 12 days).

It seems that some changes to the stats collector introduced in 8.1 are
now eating my CPU power, and probably also the I/O bandwidth. :(

Thanks for help,
-- 
Marcin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] A tale of two similar databases

2006-01-17 Thread Harry Jackson
On 17 Jan 2006 01:22:20 -0800, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> Hi All,
>
> I am using PostgreSQL 7.3.2 on a server running Red Hat Linux 9.0.
>
> I have two databases ( identical schema and similar data ).
>
> One database, D1 contains the actual data of a Production Application.
> The other D2 contains dummy data which is used during development and
> testing of the application.
>
> D2 actually contains a recent snapshot of D1 and hence contain almost
> the same data.
>
> The strange thing is that D1 is extremely fast whereas D2 is relatively
> slow for any given query.
> (Thank God, it isnt the other way round :) )
>
> Eg,
> An extensive Statistics query returns in a matter of seconds on D1 but
> takes close to a minute on D2
>
> I would like to know if anybody can answer why it is so.

I am making the assumption that you have checked your query plan on
both databases to make sure that they are the same, i.e.

on DB1
explain "big query";

on DB2
explain "big query";

If these are not almost identical then you need to investigate the
reasons for the difference, e.g. bad stats on the dev database, missing
indexes, etc. Have you vacuum analyzed D2?

One other possible reason is that D1 is mostly in cache and D2 isn't.
If you run the query twice on D2 immediately after each other, is the
second run much faster? If so, what you might be seeing is that D1,
being in constant use, is forcing the D2 data out of the cache and back
onto disk.

--
Harry
http://www.hjackson.org
http://www.uklug.co.uk

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] A tale of two similar databases

2006-01-17 Thread Michael Glaesemann


On Jan 17, 2006, at 18:22 , [EMAIL PROTECTED] wrote:


I am using PostgreSQL 7.3.2 on a server running Red Hat Linux 9.0.


I'd highly recommend upgrading. The current release is 8.1.2. If you  
can't upgrade to 8.1, at least upgrade to the latest point release of  
7.3, which is 7.3.13. There are a number of security and critical bug  
fixes.


The strange thing is that D1 is extremely fast whereas D2 is  
relatively

slow for any given query.
(Thank God, it isnt the other way round :) )

Eg,
An extensive Statistics query returns in a matter of seconds on D1 but
takes close to a minute on D2


How often do you run ANALYZE? I suspect your statistics are probably  
off. Try running EXPLAIN ANALYZE on the queries and compare the  
results. If you can provide more information such as relevant table  
schema, the queries, and their EXPLAIN ANALYZE output, other list  
members may be able to help you. You may also want to ask on the  
pgsql-performance list.


Michael Glaesemann
grzm myrealbox com




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Stored procedures..PLEASE HELP

2006-01-17 Thread Alagu Madhu
Hello,

  I am Alagu Madhu, working as a developer (PostgreSQL). Please send
your table design.






"Minal A. Aryamane" wrote:
> hello all,
> I desperately need help in developing a stored procedure using postgresql 8.0.
> I am using pgadmin for developing the stored procs. Can anyone please help
> me; the docs are really not helpful.
>
> I am trying to pass values through a jsp page wherein I am accessing this
> stored procedure by the following select statement
>
> SELECT sp_build_course('ETLCRC0001:ETLCRC0010:', 'CRCSUB0001:CRCSUB0001:', 
> 'ABC', 'minal' )
>
> the function is as below:
>
> CREATE OR REPLACE FUNCTION sp_build_course("varchar", "varchar", "varchar", 
> "varchar")
>   RETURNS text AS
> '
> Declare
> COURSEIDS VARCHAR(1000) ;
> TOPICIDS VARCHAR(1000);
> COURSENAME VARCHAR(150);
> USERNAME VARCHAR(20);
>
> COURSEIDS1 VARCHAR(1000);
> TOPICIDS1 VARCHAR(1000);
> COURSEID1 VARCHAR(20);
> TOPICID1 VARCHAR(20);
> NEWCOURSEID VARCHAR(20);
> NEWTOPICID VARCHAR(20);
> COLUMN1 VARCHAR(50);
> COLUMN2 VARCHAR(50);
> COLUMN3 VARCHAR(50);
> COLUMN4 VARCHAR(50);
> QCODE1 VARCHAR(10);
> QDATE1 VARCHAR(10);
> QSNO INTEGER;
> WEIGHTAGE INTEGER;
> QSTYPE VARCHAR(50);
> QUESTION VARCHAR(1000);
> MARKS VARCHAR(50);
> QFLAG BOOL;
> PFLAG BIT;
> PAPER_TYPE VARCHAR(20);
> CINDEX INTEGER;
> TINDEX INTEGER;
> INDEX INTEGER;
> QINDEX INTEGER;
> QUESTIONID INTEGER;
> NEWQUESTIONID INTEGER;
> LESSON_FLOW_TITLE VARCHAR(50);
> TREE_NODE_TYPE VARCHAR(50);
> VIDEO_SIZE INTEGER;
> ROOT_NODE_ID INTEGER;
> SLIDE_ORDER INTEGER;
> row1 record;
> row2 record;
> row3 record;
> COURSENAME1 varchar(150);
> USERNAME1 varchar(50);
>
> BEGIN
> SET COURSEIDS1:=COURSEIDS;
> SET TOPICIDS1:=TOPICIDS;
> SET COURSENAME1:=COURSENAME;
> SET USERNAME1:=USERNAME;
>
> --PRINT \'INSIDE THE STORED PROCEDURE\'
>
> COLUMN1 := DATE_PART(\'Day\',current_date);
> COLUMN2 := DATE_PART(\'Month\',current_date);
> COLUMN3 := DATE_PART(\'Year\',current_date);
>
> IF LENGTH(COLUMN1) = 1 THEN
>  COLUMN1 := \'0\' || COLUMN1;
> END IF;
> IF LENGTH(COLUMN2) = 1 THEN
>  COLUMN2 := \'0\' || COLUMN2;
> END IF;
>
> QDATE1 := COLUMN1 || COLUMN2 || COLUMN3;
>
> SELECT trim(to_char(max(to_number(QCODE,\'99\'))+1,\'09\')) INTO 
> QCODE1 FROM QUESTION WHERE QDATE = QDATE1;
>
>  WHILE  (LENGTH(QCODE1) < 6) LOOP
>   QCODE1 := \'0\' || QCODE1;
>  END LOOP;
>
> SELECT MAX(SUBSTR(COURSEID,7,LENGTH(COURSEID))) INTO  NEWCOURSEID FROM 
> COURSEMASTER;
>
>  WHILE  (LENGTH(NEWCOURSEID) < 4) LOOP
>   NEWCOURSEID := \'0\' || NEWCOURSEID;
>  END LOOP;
>   NEWCOURSEID := \'ETLCRC\' || NEWCOURSEID;
>   INSERT INTO COURSEMASTER(COURSEID, COURSENAME) VALUES ( 
> NEWCOURSEID,COURSENAME1);
>   CINDEX := POSITION(\':\' IN COURSEIDS1);
>   TINDEX := POSITION(\':\' IN TOPICIDS1);
>   INDEX := 1;
>  WHILE (CINDEX >  0) LOOP
>   COURSEID1 := SUBSTR(COURSEIDS1,1,(CINDEX-1));
>   TOPICID := SUBSTR(TOPICIDS1,1,(TINDEX-1));
>
>  -- TRANSACTION STARTS HERE
>   NEWTOPICID := INDEX;
>WHILE  (LENGTH(NEWTOPICID) < 4) LOOP
> NEWTOPICID := \'0\' || NEWTOPICID;
>END LOOP;
>NEWTOPICID := \'CRCSUB\' || NEWTOPICID;
>INSERT INTO MYTABLE6 VALUES(COURSEID1, TOPICID, NEWCOURSEID, NEWTOPICID);
>INSERT INTO TOPICS SELECT NEWCOURSEID, NEWTOPICID, TOPICNAME  FROM TOPICS 
> WHERE COURSEID=COURSEID AND TOPICID=TOPICID;
>SELECT NULLIF(0,COUNT(*)) INTO COLUMN3 FROM LESSON_FLOW WHERE 
> COURSEID=COURSEID AND TOPICID=TOPICID;
>COLUMN1 := COURSEID1 || \'_\' || TOPICID || \'_\';
>COLUMN2 := NEWCOURSEID || \'_\' || NEWTOPICID || \'_\';
>IF (COLUMN3 >0) THEN
> SELECT MAX(ATTACHMENT_ID) + 1 INTO COLUMN3 FROM COURSE_LESSON;
> INSERT INTO COURSE_LESSON(COURSEID1, TOPICID, RECORD_STATUS, 
> ATTACHMENT_ID) VALUES (NEWCOURSEID, NEWTOPICID,0, CONVERT(INT, COLUMN3));
> INSERT INTO ATTACHMENT SELECT COLUMN3, COLUMN2 + 
> SUBSTR(URL,23,LENGTH(URL)), SEQUENCE FROM ATTACHMENT WHERE SUBSTR(URL,0,23) = 
> COLUMN1;
> SELECT MAX(LF_ID) + 1 INTO COLUMN4 FROM LESSON_FLOW;
>
> FOR row1 IN SELECT  LESSON_FLOW_TITLE, TREE_NODE_TYPE, VIDEO_SIZE, 
> ROOT_NODE_ID, SLIDE_ORDER FROM LESSON_FLOW WHERE COURSEID=COURSEID AND 
> TOPICID = TOPICID
> LOOP
>  LESSON_FLOW_TITLE:=row1.LESSON_FLOW_TITLE;
>  TREE_NODE_TYPE:=row1.TREE_NODE_TYPE;
>  VIDEO_SIZE:=row1.VIDEO_SIZE;
>  ROOT_NODE_ID:=row1.ROOT_NODE_ID;
>  SLIDE_ORDER:=row1.SLIDE_ORDER;
>
>  INSERT INTO LESSON_FLOW  VALUES (COLUMN4, NEWCOURSEID, NEWTOPICID, 
> LESSON_FLOW_TITLE, TREE_NODE_TYPE, VIDEO_SIZE, COLUMN3, ROOT_NODE_ID, 
> SLIDE_ORDER);
>  COLUMN4 := COLUMN4 + 1;
>
> END LOOP;
> INSERT INTO SELECTEDSESSION SELECT NEWCOURSEID, NEWTOPICID, COLUMN2 + 
> SUBSTR(URL,23,LENGTH(URL)) FROM SELECTEDSESSION WHERE COURSEID= COURSEID1 AND 
> TOPICID= TOPICID;
>END IF;
>QINDEX := 1;
>
>   FOR row2 IN SELECT QUESTION_ID, QSNO, WEIGHTAGE, QSTYPE, QUESTION, MARKS, 
> FLAG,  PAPER_TYPE FROM QUESTION WHERE COURSEID=COURSEID AND TOPICID = TOPICID
>   LOOP
>QUESTIONID:=row2.QUESTION_ID;
>   

[GENERAL] A tale of two similar databases

2006-01-17 Thread kishore . sainath
Hi All,

I am using PostgreSQL 7.3.2 on a server running Red Hat Linux 9.0.

I have two databases ( identical schema and similar data ).

One database, D1 contains the actual data of a Production Application.
The other D2 contains dummy data which is used during development and
testing of the application.

D2 actually contains a recent snapshot of D1 and hence contains almost
the same data.

The strange thing is that D1 is extremely fast whereas D2 is relatively
slow for any given query.
(Thank God it isn't the other way round :) )

E.g., an extensive statistics query returns in a matter of seconds on
D1 but takes close to a minute on D2.

I would like to know if anybody can explain why this is so.

Thanks in advance
- Kishore


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] general questions about joins in queries

2006-01-17 Thread Viktor Lacina
Hi,

It's the same; try "EXPLAIN" on both if you are not sure.

Viktor

On Monday, 16 January 2006 at 18:01, Zlatko Matić wrote:
> Hello.
> Is it better to use A) or B) ?
>
> A)
>
> SELECT
>  "public"."departments".*,
>  "public"."plants".*,
>  "public"."batches_microbs".*,
>  "public"."results_microbs".*
> FROM
>  "public"."departments",
>  "public"."plants",
>  "public"."batches_microbs",
>  "public"."results_microbs"
> WHERE
>  "plants"."department" = "departments"."department" AND
>  "batches_microbs"."plant" = "plants"."plant" AND
>  "results_microbs"."batch" = "batches_microbs"."batch"
>
> B)
>
> SELECT
>  "public"."departments".*,
>  "public"."plants".*,
>  "public"."batches_microbs".*,
>  "public"."results_microbs".*
> FROM
> public.departments
>   INNER JOIN public.plants ON (public.departments.department =
> public.plants.department) INNER JOIN public.batches_microbs ON
> (public.plants.plant = public.batches_microbs.plant) INNER JOIN
> public.results_microbs ON (public.batches_microbs.batch =
> public.results_microbs.batch)
>
>
> Thanks,
>
> Zlatko

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] general questions about joins in queries

2006-01-17 Thread Zlatko Matić



Hello.
Is it better to use A) or B) ?
 
A)
 
SELECT
 "public"."departments".*,
 "public"."plants".*,
 "public"."batches_microbs".*,
 "public"."results_microbs".*
FROM
 "public"."departments",
 "public"."plants",
 "public"."batches_microbs",
 "public"."results_microbs"
WHERE
 "plants"."department" = "departments"."department" AND
 "batches_microbs"."plant" = "plants"."plant" AND
 "results_microbs"."batch" = "batches_microbs"."batch"
 
B)
 
SELECT
 "public"."departments".*,
 "public"."plants".*,
 "public"."batches_microbs".*,
 "public"."results_microbs".*
FROM
 public.departments
 INNER JOIN public.plants ON (public.departments.department = public.plants.department)
 INNER JOIN public.batches_microbs ON (public.plants.plant = public.batches_microbs.plant)
 INNER JOIN public.results_microbs ON (public.batches_microbs.batch = public.results_microbs.batch)
 
Thanks,
 
Zlatko


[GENERAL] linking temporary tables from MS Access?

2006-01-17 Thread Zlatko Matić



Is it possible to use temporary tables from MS Access, as linked tables?
I was not able to link, but maybe someone succeeded?


[GENERAL] Is there a way to list runaway queries and kill them?

2006-01-17 Thread frank church


Is there a way to list runaway or long running queries and kill them?

Frank


This message was sent using IMP, the Internet Messaging Program.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Create/Erase 5000 Tables in PostGRE SQL in execution

2006-01-17 Thread Sergey Moiseev
Christopher Browne wrote:
>> Orlando Giovanny Solarte Delgado wrote:
>>> It is a web system and each user can perform about 50 queries per
>>> session. I can have around 100 users simultaneously, so I can have
>>> 5000 queries in flight at once. Each query is joined to a spatial
>>> component in PostGIS, so I need to store each query in PostgreSQL to
>>> be able to use the full capacity of PostGIS. The question is whether
>>> building a table in PostgreSQL at execution time for each query,
>>> using it, and then erasing it is an efficient approach. Is it
>>> possible to have 5000 tables in PostgreSQL? What is the performance
>>> impact?

>> Use TEMP tables.

> Hmm.  To what degree do temp tables leave dead tuples lying around in
> pg_class, pg_attribute, and such?
> I expect that each one of these connections will leave a bunch of dead
> tuples lying around in the system tables.  The system tables will need
> more vacuuming than if the data was placed in some set of
> more-persistent tables...
> None of this seems forcibly bad; you just need to be sure that you
> vacuum the right things :-).

Since there is pg_autovacuum you don't need to think about it.
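
For what it's worth, a minimal sketch of the TEMP-table approach
(table and column names invented for illustration; the geometry
column assumes PostGIS is installed):

BEGIN;

-- Visible only to this session; ON COMMIT DROP removes it automatically
-- at transaction end, so nothing is left behind for the next user.
CREATE TEMP TABLE session_results (
    query_id integer,
    geom     geometry
) ON COMMIT DROP;

INSERT INTO session_results
    SELECT 1, the_geom FROM some_postgis_table WHERE ...;

-- ... run the PostGIS analysis against session_results ...

COMMIT;  -- the temp table disappears here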

-- 
Wbr, Sergey Moiseev

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Stored procedures..PLEASE HELP

2006-01-17 Thread Richard Huxton

Minal A. Aryamane wrote:

hello all,
I desperately need help in developing a stored procedure using PostgreSQL 8.0. I am using pgAdmin for developing the stored procs. Can anyone please help me? The docs are really not helpful.


I am trying to pass values through a JSP page wherein I am accessing this
stored procedure via the following SELECT statement:

SELECT sp_build_course('ETLCRC0001:ETLCRC0010:', 'CRCSUB0001:CRCSUB0001:', 
'ABC', 'minal' )

the function is as below:

CREATE OR REPLACE FUNCTION sp_build_course("varchar", "varchar", "varchar", 
"varchar")
  RETURNS text AS

[snip]

SET COURSEIDS1:=COURSEIDS;
SET TOPICIDS1:=TOPICIDS;
SET COURSENAME1:=COURSENAME;
SET USERNAME1:=USERNAME;


Umm - where do you get "SET" from? It's not in any of the manuals.


--PRINT \'INSIDE THE STORED PROCEDURE\'


Or for that matter "PRINT"
[snip]


when I run it I get the following error:
ERROR:  syntax error at or near "$1" at character 6
CONTEXT:  PL/pgSQL function "sp_build_course" line 45 at SQL statement.


From psql I get the following...

richardh=> \i fn_syntax_err.sql
CREATE FUNCTION
richardh=>
richardh=> SELECT sp_build_course('a','b','c','d');
ERROR:  syntax error at or near "$1" at character 6
QUERY:  SET  $1 := $2
CONTEXT:  PL/pgSQL function "sp_build_course" line 45 at SQL statement
LINE 1: SET  $1 := $2
 ^

Looks to me like it's the SET it doesn't like.
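
For comparison, a minimal PL/pgSQL sketch of how those assignments would
look: plain := with no SET keyword, and RAISE NOTICE in place of PRINT.
Function and variable names here are just for illustration; as of 8.0 the
parameters can be given names instead of $1..$4:

CREATE OR REPLACE FUNCTION sp_demo(courseids varchar, topicids varchar)
RETURNS text AS '
DECLARE
    courseids1 varchar(1000);
BEGIN
    courseids1 := courseids;              -- plain assignment, no SET
    RAISE NOTICE ''inside the function''; -- PL/pgSQL has RAISE, not PRINT
    RETURN courseids1;
END;
' LANGUAGE plpgsql;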
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Stored procedures..PLEASE HELP

2006-01-17 Thread Minal A. Aryamane



hello all,
I desperately need help in developing a stored procedure using
PostgreSQL 8.0. I am using pgAdmin for developing the stored procs.
Can anyone please help me? The docs are really not helpful.

I am trying to pass values through a JSP page wherein I am accessing
this stored procedure via the following SELECT statement:

SELECT sp_build_course('ETLCRC0001:ETLCRC0010:',
'CRCSUB0001:CRCSUB0001:', 'ABC', 'minal' )

the function is as below:
 
CREATE OR REPLACE FUNCTION sp_build_course("varchar", "varchar", "varchar", "varchar")
  RETURNS text AS '
Declare
COURSEIDS VARCHAR(1000);
TOPICIDS VARCHAR(1000);
COURSENAME VARCHAR(150);
USERNAME VARCHAR(20);

COURSEIDS1 VARCHAR(1000);
TOPICIDS1 VARCHAR(1000);
COURSEID1 VARCHAR(20);
TOPICID1 VARCHAR(20);
NEWCOURSEID VARCHAR(20);
NEWTOPICID VARCHAR(20);
COLUMN1 VARCHAR(50);
COLUMN2 VARCHAR(50);
COLUMN3 VARCHAR(50);
COLUMN4 VARCHAR(50);
QCODE1 VARCHAR(10);
QDATE1 VARCHAR(10);
QSNO INTEGER;
WEIGHTAGE INTEGER;
QSTYPE VARCHAR(50);
QUESTION VARCHAR(1000);
MARKS VARCHAR(50);
QFLAG BOOL;
PFLAG BIT;
PAPER_TYPE VARCHAR(20);
CINDEX INTEGER;
TINDEX INTEGER;
INDEX INTEGER;
QINDEX INTEGER;
QUESTIONID INTEGER;
NEWQUESTIONID INTEGER;
LESSON_FLOW_TITLE VARCHAR(50);
TREE_NODE_TYPE VARCHAR(50);
VIDEO_SIZE INTEGER;
ROOT_NODE_ID INTEGER;
SLIDE_ORDER INTEGER;
row1 record;
row2 record;
row3 record;
COURSENAME1 varchar(150);
USERNAME1 varchar(50);

BEGIN
SET COURSEIDS1 := COURSEIDS;
SET TOPICIDS1 := TOPICIDS;
SET COURSENAME1 := COURSENAME;
SET USERNAME1 := USERNAME;

--PRINT \'INSIDE THE STORED PROCEDURE\'

COLUMN1 := DATE_PART(\'Day\', current_date);
COLUMN2 := DATE_PART(\'Month\', current_date);
COLUMN3 := DATE_PART(\'Year\', current_date);

IF LENGTH(COLUMN1) = 1 THEN
  COLUMN1 := \'0\' || COLUMN1;
END IF;
IF LENGTH(COLUMN2) = 1 THEN
  COLUMN2 := \'0\' || COLUMN2;
END IF;

QDATE1 := COLUMN1 || COLUMN2 || COLUMN3;

SELECT trim(to_char(max(to_number(QCODE, \'99\')) + 1, \'09\')) INTO QCODE1
  FROM QUESTION WHERE QDATE = QDATE1;

WHILE (LENGTH(QCODE1) < 6) LOOP
  QCODE1 := \'0\' || QCODE1;
END LOOP;

SELECT MAX(SUBSTR(COURSEID, 7, LENGTH(COURSEID))) INTO NEWCOURSEID
  FROM COURSEMASTER;

WHILE (LENGTH(NEWCOURSEID) < 4) LOOP
  NEWCOURSEID := \'0\' || NEWCOURSEID;
END LOOP;
NEWCOURSEID := \'ETLCRC\' || NEWCOURSEID;
INSERT INTO COURSEMASTER(COURSEID, COURSENAME) VALUES (NEWCOURSEID, COURSENAME1);
CINDEX := POSITION(\':\' IN COURSEIDS1);
TINDEX := POSITION(\':\' IN TOPICIDS1);
INDEX := 1;
WHILE (CINDEX > 0) LOOP
  COURSEID1 := SUBSTR(COURSEIDS1, 1, (CINDEX-1));
  TOPICID := SUBSTR(TOPICIDS1, 1, (TINDEX-1));

  -- TRANSACTION STARTS HERE
  NEWTOPICID := INDEX;
  WHILE (LENGTH(NEWTOPICID) < 4) LOOP
    NEWTOPICID := \'0\' || NEWTOPICID;
  END LOOP;
  NEWTOPICID := \'CRCSUB\' || NEWTOPICID;
  INSERT INTO MYTABLE6 VALUES (COURSEID1, TOPICID, NEWCOURSEID, NEWTOPICID);
  INSERT INTO TOPICS SELECT NEWCOURSEID, NEWTOPICID, TOPICNAME FROM TOPICS
    WHERE COURSEID=COURSEID AND TOPICID=TOPICID;
  SELECT NULLIF(0, COUNT(*)) INTO COLUMN3 FROM LESSON_FLOW
    WHERE COURSEID=COURSEID AND TOPICID=TOPICID;
  COLUMN1 := COURSEID1 || \'_\' || TOPICID || \'_\';
  COLUMN2 := NEWCOURSEID || \'_\' || NEWTOPICID || \'_\';
  IF (COLUMN3 > 0) THEN
    SELECT MAX(ATTACHMENT_ID) + 1 INTO COLUMN3 FROM COURSE_LESSON;
    INSERT INTO COURSE_LESSON(COURSEID1, TOPICID, RECORD_STATUS, ATTACHMENT_ID)
      VALUES (NEWCOURSEID, NEWTOPICID, 0, CONVERT(INT, COLUMN3));
    INSERT INTO ATTACHMENT SELECT COLUMN3, COLUMN2 + SUBSTR(URL, 23, LENGTH(URL)), SEQUENCE
      FROM ATTACHMENT WHERE SUBSTR(URL, 0, 23) = COLUMN1;
    SELECT MAX(LF_ID) + 1 INTO COLUMN4 FROM LESSON_FLOW;

    FOR row1 IN SELECT LESSON_FLOW_TITLE, TREE_NODE_TYPE, VIDEO_SIZE, ROOT_NODE_ID, SLIDE_ORDER
      FROM LESSON_FLOW WHERE COURSEID=COURSEID AND TOPICID = TOPICID
    LOOP
      LESSON_FLOW_TITLE := row1.LESSON_FLOW_TITLE;
      TREE_NODE_TYPE := row1.TREE_NODE_TYPE;
      VIDEO_SIZE := row1.VIDEO_SIZE;
      ROOT_NODE_ID := row1.ROOT_NODE_ID;
      SLIDE_ORDER := row1.SLIDE_ORDER;

      INSERT INTO LESSON_FLOW VALUES (COLUMN4, NEWCOURSEID, NEWTOPICID,
        LESSON_FLOW_TITLE, TREE_NODE_TYPE, VIDEO_SIZE, COLUMN3,
        ROOT_NODE_ID, SLIDE_ORDER);
      COLUMN4 := COLUMN4 + 1;
    END LOOP;
    INSERT INTO SELECTEDSESSION SELECT NEWCOURSEID, NEWTOPICID,
        COLUMN2 + SUBSTR(URL, 23, LENGTH(URL))
      FROM SELECTEDSESSION WHERE COURSEID = COURSEID1 AND TOPICID = TOPICID;
  END IF;
  QINDEX := 1;
  FOR row2 IN SELECT QUESTION_ID, QSNO, WEIGHTAGE, QSTYPE, QUESTION, MARKS, FLAG, PAPER_TYPE
    FROM QUESTION WHERE COURSEID=COURSEID AND TOPICID = TOPICID
  LOOP
    QUESTIONID := row2.QUESTION_ID;
    QSNO := row2.QSNO;
    WEIGHTAGE := row2.WEIGHTAGE;
    QSTYPE := row2.QSTYPE;
    QUESTION := row2.QUESTION;
    MARKS := row2.MARKS;
    QFLAG := row2.FLAG;
    PAPER_TYPE := row2.PAPER_TYPE;
    INSERT INTO QUESTION (COURSEID1, TOPICID, QSNO, WEIGHTAGE, QSTYPE,
        QUESTION, MARKS, QDATE, QCODE, FLAG, USERNAME, PAPER_TYPE)
      VALUES (NEWCOURSEID, NEWTOPICID, QINDEX, WEIGHTAGE, QSTYPE, QUESTION,
        MARKS, QDATE1, QCODE, QFLAG, USERNAME1, PAPER_TYPE);
    NEW