[GENERAL] column data size

2006-04-11 Thread Matthew Peter
Wondering where to query for the physical size of a column (returned in bytes) in a row? I looked into information_schema, but I think that's more for getting triggers, views, etc.
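A minimal sketch, assuming PostgreSQL 8.1 or later, where pg_column_size()
is available (table and column names here are placeholders):

  SELECT pg_column_size(some_column) AS bytes
    FROM some_table
   WHERE id = 42;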

Re: [GENERAL] hard shut down of system

2006-04-11 Thread Jonel Rienton
I would first check the log files, then look for a stale pid file.

On 4/12/06, surabhi.ahuja <[EMAIL PROTECTED]> wrote:
>
> if postmaster is running and i do
>
> kill -9 -1
>
> i.e. i did an abrupt shutdown. now when i brought back the system,
> i am not able to start postmaster.
>
> actually i want to simulate the scenario in which the user will do
> 'hard shut down / power off' while the system is in a working state,
> and i want to ensure that postmaster is able to recover from this
> kind of error when the system is brought up.
>
> how should i handle the above situation?
>


--
Jonel Rienton
mailto:[EMAIL PROTECTED]
powered by: google



[GENERAL] hard shut down of system

2006-04-11 Thread surabhi.ahuja

if postmaster is running and i do

kill -9 -1

i.e. i did an abrupt shutdown. now when i brought back the system,
i am not able to start postmaster.
actually i want to simulate the scenario in which the user will do
'hard shut down / power off' while the system is in a working state,
and i want to ensure that postmaster is able to recover from this kind
of error when the system is brought up.
how should i handle the above situation?

Re: [GENERAL] How to import a CSV file (originally from Excel) into a Postgresql 8.0.7 database

2006-04-11 Thread Richard Broersma Jr
> For generation in Excel, you can just use File->Save, and select CSV as
> the format. You can use either CSV or tab delimited, really.

I am not sure if it matters with PostgreSQL, but some other programs require
the (MS-DOS) CSV format rather than the default CSV choice.

Regards,

Richard



Re: [GENERAL] How to import a CSV file (originally from Excel)

2006-04-11 Thread Craig White
On Tue, 2006-04-11 at 23:13 +0200, Magnus Hagander wrote:
> > Hello,
> > 
> > I am trying to import an Excel file into my database (8.0.7). I 
> > think I am supposed to create a CSV file out of my Excel 
> > file and somehow feed the database with it. My problem is 
> > that I don't really know how to do it... :( Can anyone give 
> > me a hint about how to do that?
> > One last "detail": the Excel file contains roughly 45,000 
> > rows and 15 columns. So I need a fast and efficient method.
> 
> Hi!
> 
> Is your problem with how to generate the CSV file from Excel, or with
> how to import it into PostgreSQL?
> 
> For generation in Excel, you can just use File->Save, and select CSV as
> the format. You can use either CSV or tab delimited, really.
> 
> Then to get it into PostgreSQL, use the \copy command in psql (I'm
> assuming your client is on Windows, since you're using Excel. \copy will
> run the process from the client and load it into your server
> regardless of platform). You can specify which delimiter to use there,
> etc. From the example below, I'd guess you want something along the lines
> of:
> 
> \copy "Flight Traffic" from yourfile.csv delimiter as ',' csv quote as
> '"'
> 
> (might need some adaptation, of course)
> 
> 
> Loading 45,000 lines is trivial for COPY; it shouldn't take noticeable
> time at all.

Along these lines - can I do something similar (from a CSV file) but
'update' 2 or 3 columns?

Craig
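For what it's worth, a sketch of one common approach - \copy the CSV into
a staging table first, then update from it (staging, mytable and the
column names are hypothetical):

  CREATE TEMP TABLE staging (id int, col1 text, col2 text);
  \copy staging from yourfile.csv delimiter as ',' csv
  UPDATE mytable
     SET col1 = s.col1, col2 = s.col2
    FROM staging s
   WHERE mytable.id = s.id;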




Re: [GENERAL] how to prevent generating same clipids

2006-04-11 Thread Martijn van Oosterhout
On Mon, Apr 10, 2006 at 10:43:16PM -0700, [EMAIL PROTECTED] wrote:
> Hi,
> We cannot change the field type now, because our application is
> already running and thousands of records have already been entered. We
> are getting the same number for clipid when 2 users are entering data
> at the same time, so how do we solve the problem?

You don't need to change the field type, you just need to use a
sequence, i.e. nextval/currval. Wherever you're doing max()+1, do a
nextval() instead (after reading the documentation, of course).

Have a nice day,

-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] How to import a CSV file (originally from Excel) into a Postgresql 8.0.7 database

2006-04-11 Thread Magnus Hagander
> Hello,
> 
> I am trying to import an Excel file into my database (8.0.7). I 
> think I am supposed to create a CSV file out of my Excel 
> file and somehow feed the database with it. My problem is 
> that I don't really know how to do it... :( Can anyone give 
> me a hint about how to do that?
> One last "detail": the Excel file contains roughly 45,000 
> rows and 15 columns. So I need a fast and efficient method.

Hi!

Is your problem with how to generate the CSV file from Excel, or with
how to import it into PostgreSQL?

For generation in Excel, you can just use File->Save, and select CSV as
the format. You can use either CSV or tab delimited, really.

Then to get it into PostgreSQL, use the \copy command in psql (I'm
assuming your client is on Windows, since you're using Excel. \copy will
run the process from the client and load it into your server
regardless of platform). You can specify which delimiter to use there,
etc. From the example below, I'd guess you want something along the lines
of:

\copy "Flight Traffic" from yourfile.csv delimiter as ',' csv quote as
'"'

(might need some adaptation, of course)


Loading 45,000 lines is trivial for COPY; it shouldn't take noticeable
time at all.


If you need to load things regularly, you can use the excel ODBC driver
and write a small script to transfer it over to get rid of the manual
steps.

//Magnus



Re: [GENERAL] how to prevent generating same clipids

2006-04-11 Thread Ragnar
On mán, 2006-04-10 at 22:43 -0700, [EMAIL PROTECTED] wrote:
> Hi,
> We cannot change the field type now, because our application is
> already running and thousands of records have already been entered. We
> are getting the same number for clipid when 2 users are entering data
> at the same time, so how do we solve the problem?

create a sequence:

  CREATE SEQUENCE clipid_seq 
     START WITH somehighenoughnumber;

now use that when you allocate ids,
i.e. instead of:
  SELECT 1 + max(clipid) FROM yourtable
do:
  SELECT nextval('clipid_seq');

gnari





Re: [GENERAL] execution plan : Oracle vs PostgreSQL

2006-04-11 Thread Simon Riggs
On Tue, 2006-01-24 at 11:33 +0100, FERREIRA, William (VALTECH) wrote:

> the times are always the same, except with the centrino for which it takes 1 
> min.

Oracle or PostgreSQL? i.e. is the Centrino faster or slower?

> -Postgresql plan :

Please post the EXPLAIN ANALYZE.

Thanks,

--
  Simon Riggs   
  EnterpriseDB  http://www.enterprisedb.com/




Re: [GENERAL] how to prevent generating same clipids

2006-04-11 Thread Scott Marlowe
On Tue, 2006-04-11 at 00:43, [EMAIL PROTECTED] wrote:
> Hi,
> We cannot change the field type now, because our application is
> already running and thousands of records have already been entered. We
> are getting the same number for clipid when 2 users are entering data
> at the same time, so how do we solve the problem?

Addendum:

Note that you can also set the default for the clipid column to be the
nextval('sequencename') at the same time, then if anyone DOES insert a
row without getting the nextval() first, it'll still give them the
proper value.

Note that a "serial" type is actually just an int with a default and a
dependency.

Also, you really should have a primary key or unique index with not null
attribute on the clipid column as well, just to be safe.  PK is about
the same as unique & not null, just that foreign references aren't
automatic, but I get the feeling foreign keys aren't playing a role in
your schema just now.
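A sketch of that setup, assuming clipid_seq has already been created and
set past max(clipid) as described elsewhere in this thread (table and
constraint names are hypothetical):

  ALTER TABLE yourtable
    ALTER COLUMN clipid SET DEFAULT nextval('clipid_seq');
  -- and, to be safe, enforce uniqueness:
  ALTER TABLE yourtable ADD CONSTRAINT clipid_unique UNIQUE (clipid);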



Re: [GENERAL] how to prevent generating same clipids

2006-04-11 Thread Scott Marlowe
On Tue, 2006-04-11 at 00:43, [EMAIL PROTECTED] wrote:
> Hi,
> We cannot change the field type now, because our application is
> already running and thousands of records have already been entered. We
> are getting the same number for clipid when 2 users are entering data
> at the same time, so how do we solve the problem?

Create a sequence.
Set its next id to something like "select max(clipid)+100" and start
using the sequence to get the next clipid from.

The +100 just needs to be enough headroom that max(clipid) won't have
overtaken the sequence before you deploy the code change.

You should be able to deploy the change to your app and the sequence in
the database at about the same time, or during a maintenance window I'd
assume.

Read up on nextval() and currval() before you set this in motion. 
They're really quite an elegant solution to such issues.
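A sketch of that (table and sequence names hypothetical; pick the offset
to suit your insert rate):

  CREATE SEQUENCE clipid_seq;
  SELECT setval('clipid_seq', (SELECT max(clipid) + 100 FROM yourtable));
  -- from now on, allocate ids with:
  SELECT nextval('clipid_seq');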



Re: [GENERAL] SQL ASCII encoding

2006-04-11 Thread Richard Jones
On Wed, Apr 05, 2006 at 10:15:18PM +0200, Harald Fuchs wrote:
> In article <[EMAIL PROTECTED]>,
> Martijn van Oosterhout  writes:
> 
> > As a British user, latin9 will cover most of your needs, unless
> > of course someone wants to enter their name in Chinese :)
> 
> Since British users don't use French OE ligatures or Euro currency
> signs, even latin1 would do.

However as a British PostgreSQL user, I would really like to encourage
the O.P. to use UNICODE for _every_ database.

My question: Is it possible to upgrade a database from ASCII to
UNICODE without dumping and restoring?

Rich.

-- 
Richard Jones, CTO Merjis Ltd.
Merjis - web marketing and technology - http://merjis.com
Team Notepad - intranets and extranets for business - http://team-notepad.com



Re: [GENERAL] installing and using autodoc

2006-04-11 Thread Kaloyan Iliev

Hi,
I think your question is for the NOVICE list.
However:

1. Unzip:
   gunzip postgresql_autodoc-1.25.tar.gz
2. Extract from the tar:
   tar -xvf postgresql_autodoc-1.25.tar
3. Enter the directory:
   cd postgresql_autodoc
3.1. READ the README file or the INSTALL file!
I presume then you should:
4. Run configure:
   ./configure
5. Run make:
   make
5.1. Run su and become root
6. Run:
   make install


This is the usual procedure, but for best results always read the README 
file or the INSTALL file first.



Regards,

  Kaloyan Iliev

[EMAIL PROTECTED] wrote:


Hi,

I'm interested in using postgresql_autodoc to document my postgres databases
on Linux, but I'm unable to find any information on how to install the
program or use it.  Can someone point me in the right direction?

I've downloaded postgresql_autodoc-1.25.tar.gz from
http://www.rbt.ca/autodoc/releasenotes.html, unzipped it, and have no idea
what to do.  Does it need to be compiled?  If so, what command do I use?  If
not, what do I do to configure the program?

Thanks in advance,
Courtenay




[GENERAL] How to import a CSV file (originally from Excel) into a Postgresql 8.0.7 database

2006-04-11 Thread Daniel Tourde
Hello,

I am trying to import an Excel file into my database (8.0.7). I think I am 
supposed to create a CSV file out of my Excel file and somehow feed the 
database with it. My problem is that I don't really know how to do it... :(
Can anyone give me a hint about how to do that?
One last "detail": the Excel file contains roughly 45,000 rows and 15 
columns. So I need a fast and efficient method.

Here is my database:

--
-- PostgreSQL database dump
--

SET client_encoding = 'UNICODE';
SET check_function_bodies = false;
SET client_min_messages = warning;

SET search_path = public, pg_catalog;

--
-- Data for Name: Flight Traffic; Type: TABLE DATA; Schema: public; Owner: ted
--

COPY "Flight Traffic" ("AR", "ICAOAIRP", "MOTORTYP", "VIKT", "ICAOTYP", 
"FROMTO", "ANTAL", "CLASS", "SUBCLASS", "BULLERKLASS", "MILJOKLASS", 
"ANTAL_PAX", "ARRDEP", "INTDOM", "KUNDNAMN") FROM stdin;
2004	ESDF	-	0	RALL	ESME	1	P	F	3	0	0	A	D	FORSBERG
\.


--
-- PostgreSQL database dump complete
--


And here is the .CSV i try to import:

2004,"ESDF","-",0,"RALL","ESME",1,"P","F",3,0,0,"A","D","FORSBERG, N MARTIN"



Daniel
-- 
**
Daniel TOURDE          E-mail : [EMAIL PROTECTED]
Tel : +46 (0)8-55 50 32 12
Fax : +46 (0)8-55 50 30 68
   Cellular :  +46 (0)70-849 93 40
FOI, Swedish Defence Research Agency; Systemteknik
Department of Aviation Environmental Research
SE-164 90 Stockholm, Sweden
**



Re: [GENERAL] how to prevent generating same clipids

2006-04-11 Thread v . suryaprabha
Hi,
We cannot change the field type now, because our application is
already running and thousands of records have already been entered. We
are getting the same number for clipid when 2 users are entering data
at the same time, so how do we solve the problem?




Re: [GENERAL] Is difference between PostgreSQL and mySQL licences

2006-04-11 Thread Scott Ribe
> This is because many applications are
> written for in-house consumption, simple web sites, etc. and those
> applications are not GPL.

In-house use cannot violate the GPL. The GPL requires that source be made
available without further restrictions, but only to those to whom the app is
distributed. The GPL does not impose any requirement for distribution of the
app. It seems to me that MySQL AB tries to game people's misunderstanding of
this, in order to lay claim to license fees for in-house projects.

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice





Re: [GENERAL] sound index

2006-04-11 Thread Scott Ribe
>> also, i'd be happy to hear opinions from people who have experience
>> of using such things as soundex.
> 
> 
> Soundex is grossly outdated. It was designed for manual use by 19th century
> census takers, and I'm always surprised to see it still used. Metaphone
> (google search gets good results) does a much better job of matching names,
> and double metaphone does even better although having each word mapped to
> possibly 2 equivalents might complicate your logic depending on your
> queries.

I remember now that over the years I found a few places where Metaphone
needed improvement. Double Metaphone seemed to incorporate all my revisions,
so the best approach would be to start with it, and if your system can't
accommodate the notion of multiple equivalents, then just use the primary.
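For illustration, assuming contrib/fuzzystrmatch (which ships double
metaphone in recent releases) and a hypothetical people table:

  SELECT dmetaphone('Schmidt'), dmetaphone_alt('Schmidt');

  -- match on either the primary or the alternate code:
  SELECT * FROM people
   WHERE dmetaphone(last_name) = dmetaphone('Smith')
      OR dmetaphone_alt(last_name) = dmetaphone_alt('Smith');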

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice





Re: [GENERAL] About checking all dead lock tables

2006-04-11 Thread Florian G. Pflug

Michael Fuhr wrote:
> On Thu, Apr 06, 2006 at 03:36:33PM -0400, Emi Lu wrote:
>> How to check whether a table is locked?
>
> You can monitor locks by querying the pg_locks view.
>
> http://www.postgresql.org/docs/8.0/interactive/monitoring-locks.html
> http://www.postgresql.org/docs/8.0/interactive/view-pg-locks.html
>
> Only a few operations lock entire tables; for more information see
> the Concurrency Control chapter in the documentation.
>
> http://www.postgresql.org/docs/8.0/interactive/mvcc.html
>
>> Is there a way that I can find all deadlocks under postgresql 8.0.1?
>
> Are you sure you mean "deadlocks"?  Deadlock occurs when transaction
> A holds a lock that transaction B wants and transaction B holds a
> lock that transaction A wants; PostgreSQL detects this condition
> and aborts one of the transactions, allowing the other to continue.
> By default this happens after one second (see the deadlock_timeout
> setting).
>
> Perhaps you want to see pending or blocked lock attempts, i.e.,
> locks that have been requested but are not yet acquired because
> another transaction holds a conflicting lock.  If that's the case
> then look for rows in pg_locks where "granted" is false.


Note, however, that (badly programmed) clients can cause deadlocks which
postgres doesn't detect. This happens, for example, if you open more than
one connection to the same database from the same thread of a client app.
If a transaction on one connection waits for a transaction on another
connection to commit or rollback, you have effectively deadlocked, because
now your thread is blocked, and the "other" transaction doesn't get a
chance to commit or rollback anymore. But since postgres doesn't know
about this interdependency between your two connections, it won't report
a deadlock.
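For the pending-locks check mentioned above, a minimal sketch against the
8.0 pg_locks view:

  SELECT pid, relation, transaction, mode
    FROM pg_locks
   WHERE NOT granted;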

greetings, Florian Pflug




Re: [GENERAL] sound index

2006-04-11 Thread Scott Ribe
> also, i'd be happy to hear opinions from people who have experience
> of using such things as soundex.


Soundex is grossly outdated. It was designed for manual use by 19th century
census takers, and I'm always surprised to see it still used. Metaphone
(google search gets good results) does a much better job of matching names,
and double metaphone does even better although having each word mapped to
possibly 2 equivalents might complicate your logic depending on your
queries.

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice





Re: [GENERAL] Create database bug in 8.1.3 ?

2006-04-11 Thread Florian G. Pflug

Milen Kulev wrote:

Hi Listers,
I have the following problem (OS = RHEL U2, PG version = 8.1.3) when I try to
create a database:

postgres=# CREATE DATABASE world3 TEMPLATE=template1 ENCODING='UTF8'
TABLESPACE=tbs1 ;
CREATE DATABASE

Then I back up the database (a small toy DB) with pg_dumpall:

 pg_dumpall > alldb.bak

less alldb.bak:


CREATE DATABASE world3 WITH TEMPLATE = template0 OWNER = pg ENCODING = 'UTF8'
TABLESPACE = tbs1;


Why is pg_dumpall stating that my template db is template0 and not template1?
Is there any way to check which db (template0 or template1) was actually used
as the template when creating the world3 db? I am having the same problem
with all my databases.

Why exactly is this a problem? Everything that you might have manually
added to template1 is now part of world3 anyway, and therefore included in
the dump of world3. When recreating world3 from the dump, starting from a
completely empty database (template0) makes sense to me - otherwise you'd
get lots of "... already exists" errors, because pg_dump would try to
restore some objects which already exist in the newly created
world3-version, because they were (again!) inherited from template1.

greetings, Florian Pflug



Re: [GENERAL] Restoring a PITR backup

2006-04-11 Thread Tom Lane
"Just Someone" <[EMAIL PROTECTED]> writes:
> The documentation says that a recovery.conf file is needed for the
> restore. My tests indicate that I can just restore the backup, copy
> over the latest WAL files and launch postgres. Is that OK? Can I just
> do it this way?

If by "latest" you mean "all the WAL files since the backup was made",
it'd probably work.
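For comparison, the documented route is a recovery.conf in the data
directory with a restore_command, along the lines of the example in the
PITR docs (the path is a placeholder):

  restore_command = 'cp /mnt/server/archivedir/%f %p'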

regards, tom lane



[GENERAL] Restoring a PITR backup

2006-04-11 Thread Just Someone
I have a process for PITR backups running nicely. I'm pretty amazed by
the smoothness of it all!

Now I'm looking at the retrieval part, and I have something I'm
looking for clarification on.

The documentation says that a recovery.conf file is needed for the
restore. My tests indicate that I can just restore the backup, copy
over the latest WAL files and launch postgres. Is that OK? Can I just
do it this way?

I could create the recovery file and have it copy the files, but as
the second machine I am restoring into can always have the DB
reloaded, I wonder if it isn't easier just to copy the files. I
actually rsync my backup directory and my WAL archive directories
to the second machine.

Bye,

Guy.

--
Family management on rails: http://www.famundo.com - coming soon!
My development related blog: http://devblog.famundo.com



Re: [GENERAL] Asking advice on speeding up a big table

2006-04-11 Thread Alban Hertroys

[EMAIL PROTECTED] wrote:
> On Tue, Apr 11, 2006 at 09:52:40AM +0200, hubert depesz lubaczewski wrote:
>> On 4/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>>> What I was hoping for was some general insight from the EXPLAIN
>>> ANALYZE, that maybe extra or different indices would help, or if there
>>> is some better method for finding one row from 100 million.  I realize
>>> I am asking a vague question which probably can't be solved as
>>> presented.
>>
>> hmm .. perhaps you can try to denormalize the table, and then use
>> multicolumn indices?
>
> That's an idea ... I had thought that since my two referenced tables
> are small in comparison to the third table, that wouldn't be of any
> use, but I will give it a try.  Thanks ...


You could also experiment with clustering your large tables on some 
index, or using one or more partial indexes on relevant partitions of 
your data set. The application of such measures really depends on how 
your data behaves, so it's hard to predict whether it's going to help 
you or not.


We have seen great benefits of using the 'right' index opposed to just 
any index. An explain analyze showing an index scan is a good start, but 
your design may yet improve.


For example, we have a table with translations of strings in different 
languages. Part of that table's contents deals with names of 
geographical locations (cities mostly). An index on lowercase location 
name translations, and only on the translation records relevant for 
locations, sped up our queries immensely (from over 300ms to less than 
1ms - uncached). That was a pretty amazing performance improvement to be 
sure :)
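A sketch of that kind of index (table, column and value names here are
hypothetical; it combines an expression index with a partial-index
predicate):

  CREATE INDEX translations_loc_lower_idx
      ON translations (lower(translation))
   WHERE entity_type = 'location';

  -- queries must use the same expression and predicate to hit it:
  SELECT * FROM translations
   WHERE entity_type = 'location'
     AND lower(translation) = lower('Enschede');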


So it is possible.

--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] Asking advice on speeding up a big table

2006-04-11 Thread felix
On Tue, Apr 11, 2006 at 09:52:40AM +0200, hubert depesz lubaczewski wrote:
> On 4/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> > What I was hoping for was some general insight from the EXPLAIN
> > ANALYZE, that maybe extra or different indices would help, or if there
> > is some better method for finding one row from 100 million.  I realize
> > I am asking a vague question which probably can't be solved as
> > presented.
> >
> 
> hmm .. perhaps you can try to denormalize the table, and then use
> multicolumn indices?

That's an idea ... I had thought that since my two referenced tables
are small in comparison to the third table, that wouldn't be of any
use, but I will give it a try.  Thanks ...

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman & rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o



Re: [GENERAL] Search by distance

2006-04-11 Thread John D. Burger

Oscar Picasso wrote:

I guess I also need a database of the cities coordinates. Where could 
I find one?


You can download US data here:

  http://geonames.usgs.gov/domestic/download_data.htm

These are tables of geo entities with a number of columns, including 
lat/long.  NGA (formerly NIMA) has similar data for non-US entities:


  http://earth-info.nga.mil/gns/html/cntry_files.html

Note that these data sets include =lots= of entities, not just cities - 
we've combined them into a DB with about 6 million items, and only 
about 2.5 million are population centers (by our scheme).  Each of the 
data sets has feature type columns that you can filter on, though.


- John Burger
  MITRE




Re: Fw: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread Merlin Moncure
On 4/11/06, MG <[EMAIL PROTECTED]> wrote:
> In one extreme case the application makes 100 000 INSERTs, in the other
> extreme case 100 000 UPDATEs. Normally the UPDATEs are more than INSERTs.
> Here I notice that the first case (100 000 INSERTs) takes about 30 minutes,
> but the 100 000 UPDATEs about 1 hour.
>
> I can't prepare the file to use a COPY, because the application
> manipulates the records from the file in a complex way.
> I've also tried vacuum to get more performance, but had no success.

You should probably look at rewriting your procedure as a pl/pgsql
function.  Have your app bulk load into a work table, then loop over it,
doing your SQL processing on the fly.  It will be fast and is a good
exercise.
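A rough sketch of that shape (table and column names are hypothetical;
dollar quoting needs 8.0 or later):

  CREATE OR REPLACE FUNCTION merge_from_work() RETURNS void AS $$
  DECLARE
      r RECORD;
  BEGIN
      FOR r IN SELECT * FROM work_table LOOP
          -- try the update first; fall back to an insert
          UPDATE test SET val = r.val WHERE id = r.id;
          IF NOT FOUND THEN
              INSERT INTO test (id, val) VALUES (r.id, r.val);
          END IF;
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;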

Merlin



Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-11 Thread Dave Page
 

> -Original Message-
> From: Martijn van Oosterhout [mailto:[EMAIL PROTECTED] 
> Sent: 11 April 2006 14:02
> To: Dave Page
> Cc: Alban Hertroys; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]; [EMAIL PROTECTED]; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Debian package for freeradius_postgresql module
> 

> The GNU people write an SSL library and you claim that 
> people are being forced to use it. Perhaps you forgot about 
> the Mozilla NSS library which also implements SSL, available 
> under the MPL, GPL or LGPL? Hardly a vendor lock-in.
 
I was merely pointing out the similarities, and I do realise there are
alternatives in this case. I have nothing against the GPL itself for
those that want to use it. Personally, after having to deal with what I
can only describe as rude and arrogant fanatics from gnu.org, I would
never use it again myself though.

> See also the pgAdmin list for what they did:
> http://archives.postgresql.org/pgadmin-hackers/2004-09/msg00357.php

I can only assume you didn't read that too closely, or didn't notice who
you were replying to :-)

Regards, Dave.



Re: [GENERAL] Last Commited Transaction

2006-04-11 Thread Ricardo Manuel Pereira Vilaça

Michael Fuhr wrote:
> On Tue, Apr 11, 2006 at 12:54:12PM +0100, Ricardo Manuel Pereira Vilaça wrote:
>> How can I determine information about the last committed transaction
>> in postgresql?
>
> What information are you looking for?  What are you trying to do?

When a server restarts, possibly after recovery, how can I know the id
of the last committed transaction?
I need that information for synchronization with other servers.
Can I add some application-specific information to postgresql's write
ahead log?

Thanks,

Ricardo Vilaça





Re: [GENERAL] sound index

2006-04-11 Thread Martijn van Oosterhout
On Tue, Apr 11, 2006 at 05:28:12AM -0700, Nikolay Samokhvalov wrote:
> hello.
> 
> does anybody know any solutions to the problem of searching
> words/phrases, which are close to each other by sounding? e.g. soundex
> index or smth.

Check out contrib/fuzzystrmatch. It has a number of such algorithms.
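For example, once the contrib module is installed (the exact function set
depends on your version; these three are the basics):

  SELECT soundex('Anne'), soundex('Ann');
  SELECT metaphone('Thompson', 10);
  SELECT levenshtein('kitten', 'sitting');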

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-11 Thread Martijn van Oosterhout
On Tue, Apr 11, 2006 at 12:02:33PM +0100, Dave Page wrote:
> > The only proper fix for this licensing issue IMHO is to fix 
> > GPL, not to kludge in some GPL compliant library. The issue 
> > at hand obviously is licensing related, the software is not 
> > the problem. And the cause of the licensing problem is 
> > apparently a restriction in GPL. Fix that, problem solved.
> 
> I was having similar thoughts but restrained from airing them for fear
> of starting a flamewar. I do find it somewhat ironic that some people
> seem to be being forced into using GNU software to resolve these issues
> that almost scream 'vendor lockin' and similar phrases normally aimed at
> Microsoft et al. by GNU/FSF proponents!

The GNU people write an SSL library and you claim that people are
being forced to use it. Perhaps you forgot about the Mozilla NSS
library which also implements SSL, available under the MPL, GPL or
LGPL? Hardly a vendor lock-in.

The licence on OpenSSL clearly says:

 * 3. All advertising materials mentioning features or use of this
 *software must display the following acknowledgment:
 *"This product includes software developed by the OpenSSL Project
 *for use in the OpenSSL Toolkit. (http://www.openssl.org/)"

Which quite clearly indicates that any time anyone mentions the SSL
feature of PostgreSQL in "advertising material", they must include that
line of text. I imagine at least the following pages on the website
might need to be adjusted:

http://www.postgresql.org/about/history
http://www.postgresql.org/about/advantages

One can debate whether they are "advertising materials", or the
enforceability of such a licence, but its intent is crystal clear. We
should probably place something on the website warning commercial
distributors of this restriction. This is the kind of crap the GPL is
against, which is why it's incompatible - to raise awareness with
people.

See also the pgAdmin list for what they did:
http://archives.postgresql.org/pgadmin-hackers/2004-09/msg00357.php

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




[GENERAL] sound index

2006-04-11 Thread Nikolay Samokhvalov
hello.

does anybody know any solutions to the problem of searching for
words/phrases which are close to each other in sound? e.g. a soundex
index or something similar.

problem I have: a tag suggestion mechanism, similar to Google Suggest,
which is intended to suggest names of people (search field "person's
name" in a web form). it would be great if it did its work smarter than
a simple LIKE.

also, i'd be happy to hear opinions from people who have experience
of using such things as soundex.

--
Best regards,
Nikolay



Re: [GENERAL] Last Commited Transaction

2006-04-11 Thread Michael Fuhr
On Tue, Apr 11, 2006 at 12:54:12PM +0100, Ricardo Manuel Pereira Vilaça wrote:
> How can I determine information about the last committed transaction in 
> postgresql?

What information are you looking for?  What are you trying to do?

-- 
Michael Fuhr



Re: Fw: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread Luckys
The possible solution would be to first insert the file into a temporary
table via COPY, then do a full outer join of your current table with the
temporary one on the primary key column(s), say id: update the columns
where the outer join side is not null, and insert all the rows where the
inner table's id is null. So you'll have something like
select a.id, a.col, b.id, b.col from live a full outer join temp b ...,
with a similar condition in your where clause. Eventually you'll end up
writing 2 SQL statements, one for the insert and one for the update.
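One way to express the two statements (a sketch using UPDATE ... FROM and
NOT EXISTS rather than a literal full outer join; live, temp, id and col
are hypothetical names):

  UPDATE live
     SET col = b.col
    FROM temp b
   WHERE live.id = b.id;

  INSERT INTO live (id, col)
  SELECT b.id, b.col
    FROM temp b
   WHERE NOT EXISTS (SELECT 1 FROM live a WHERE a.id = b.id);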

 
cool,
Luckys. 
On 4/11/06, MG <[EMAIL PROTECTED]> wrote:
> Hello,
> I have the following scenario:
> The application reads one record from a file, which contains 100 000
> records. The application checks on different conditions whether this
> record is already saved in the table 'test'. If the record exists, the
> application manipulates it and updates it with UPDATE; this UPDATE
> always affects 50 columns and PostgreSQL uses an index. If the record
> doesn't exist, the application manipulates it and makes an INSERT.
> Then it reads the next record from the file, and so on ...
>
> In one extreme case the application makes 100 000 INSERTs, in the other
> extreme case 100 000 UPDATEs. Normally the UPDATEs are more than INSERTs.
> Here I notice that the first case (100 000 INSERTs) takes about 30
> minutes, but the 100 000 UPDATEs about 1 hour.
>
> I can't prepare the file to use a COPY, because the application
> manipulates the records from the file in a complex way.
> I've also tried vacuum to get more performance, but had no success.
>
> Michaela


[GENERAL] Last Commited Transaction

2006-04-11 Thread Ricardo Manuel Pereira Vilaça

Hi,

How can I determine information about the last committed transaction in 
postgresql?



Thanks,

Ricardo Vilaça



Fw: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread MG

Hello,
I have the following scenario:
The application read one record from a file, which contains 100 000 records.
The application checks on different conditions if this record is allready
save in the table 'test'.
If this record exists then the application manipulates record and updates
the record with UPDATE in PostgreSQL. This UPDATE affects allways 50
columns and PostgreSQL uses an index.
If this record doesn`t exit the application manipulate the record and makes
an INSERT.
Then it reads the next record from the file and so on ...

In on extreme case the application  makes 100 000  INSERTs, in the other
extreme case 100 000 UPDATES. Normally the UPDATES are more that INSERTs.
Here I notice that the first case (100 000  INSERTs)  takes about 30 minues,
but the 100 000 UPDATES about 1 hour.

I can't  prepare the file to  use an COPY, because the application
manipulates the records from the file in a complex way.
I've also tried with vaccum to get more performance, but hat no success.

Michaela


- Original Message - 
From: "A. Kretschmer" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, April 11, 2006 11:48 AM
Subject: Re: [GENERAL] Performance UPDATE/INSERT



am  11.04.2006, um 11:32:55 +0200 mailte MG folgendes:
> I can't use COPY for INSERTs, because I have to manipulate each record
> individually.
>
> But the problem is the UPDATEs.

test=# \timing
Timing is on.
test=# update mira set y = 123;
UPDATE 15
Time: 1874.894 ms

150 000 records, a simple table with 2 int columns.

Have you run vacuum? Which version? Can you tell us the explain for the
update?

> Greetings
> Michaela
>
> - Original Message - From: "A. Kretschmer"

Please, no silly TOFU.

HTH, Andreas
--
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
===Schollglas Unternehmensgruppe===











Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-11 Thread Dave Page
 

> -Original Message-
> From: Alban Hertroys [mailto:[EMAIL PROTECTED] 
> Sent: 11 April 2006 10:45
> To: Dave Page
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Debian package for freeradius_postgresql module
> 
>
> The only proper fix for this licensing issue IMHO is to fix 
> GPL, not to kludge in some GPL compliant library. The issue 
> at hand obviously is licensing related, the software is not 
> the problem. And the cause of the licensing problem is 
> apparently a restriction in GPL. Fix that, problem solved.

I was having similar thoughts but refrained from airing them for fear
of starting a flamewar. I do find it somewhat ironic that some people
seem to be being forced into using GNU software to resolve these issues -
which almost screams 'vendor lock-in' and similar phrases normally aimed
at Microsoft et al. by GNU/FSF proponents!

Regards, Dave



Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread A. Kretschmer
am  11.04.2006, um 12:15:41 +0200 mailte Alban Hertroys folgendes:
> MG wrote:
> >I have to improve the performance of the UPDATES and I was wondering why 
> >the UPDATEs need twice as long as the INSERTs.
> 
> IIRC, an UPDATE is a DELETE/INSERT combo. That's because the original 
> record still needs to be visible to other existing transactions (because of 
> MVCC). You can't change its data but need to create a new record instead 
> and mark the original as obsolete.

Correct, and that's why we need VACUUM to reclaim the space on disk.


HTH, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 



Re: [GENERAL] trigger firing order

2006-04-11 Thread Alban Hertroys

Hugo wrote:
> Hi, how can I tell in which order different triggers on the same table
> are fired? Let me explain: I have three triggers for table A, all of
> them intended for "before insert" on the table. In ASA I can tell the
> DB in which order I want the triggers to fire; is there an equivalent
> for postgres?


They're fired in alphabetical order. I tend to prefix my trigger names 
with numbers for that reason ;)
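For instance (a sketch; the trigger and function names are hypothetical):

  CREATE TRIGGER t_010_validate BEFORE INSERT ON a
      FOR EACH ROW EXECUTE PROCEDURE validate_row();
  CREATE TRIGGER t_020_audit BEFORE INSERT ON a
      FOR EACH ROW EXECUTE PROCEDURE audit_row();

t_010_validate fires before t_020_audit because the names sort that way.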


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread Alban Hertroys

MG wrote:
> I have to improve the performance of the UPDATEs and I was wondering why
> the UPDATEs need twice as long as the INSERTs.


IIRC, an UPDATE is a DELETE/INSERT combo. That's because the original 
record still needs to be visible to other existing transactions (because 
of MVCC). You can't change its data but need to create a new record 
instead and mark the original as obsolete.


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] pg_restore 7.4.7 locks itself out

2006-04-11 Thread Alban Hertroys

Andrew - Supernews wrote:
> On 2006-04-10, Alban Hertroys <[EMAIL PROTECTED]> wrote:
> Notice the "INSERT" there. For a restore, you'd expect it to be "COPY",
> _unless_ you used the -d option to pg_dump (this is a common mistake to
> make, given that all the other utilities use -d to specify the database
> name).


That explains a lot, thanks. Looking at my command history, it does 
indeed include "-d ".


I was wondering why it was waiting on an insert, would never have 
guessed that my dump was made that way... It also explains why I was 
seeing locks this way.


This must be a very common mistake, isn't there some way to prevent this 
from happening in the future?



> Restoring an inserts dump is _SLOW_ to the point of being glacial, because


Yeah, I know. Luckily that restore turned out to have finished this 
morning, so everything is alright. I don't need this dump for restoring 
anything serious, luckily.


Regards,
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread Thomas Pundt
Hi,

On Tuesday 11 April 2006 11:40, MG wrote:
| I can't use COPY or put it in a transaction, because the application which
| writes in PostgreSQL decides if it's an INSERT or an UPDATE, which depends
| on different conditions, and also manipulates the records.

I don't see a reason that stops you from using a transaction here...

Ciao,
Thomas

-- 
Thomas Pundt <[EMAIL PROTECTED]>  http://rp-online.de/ 



Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread A. Kretschmer
am  11.04.2006, um 11:32:55 +0200 mailte MG folgendes:
> I can't use COPY for INSERTs, because I have to manipulate each record 
> individually.
> 
> But the problem is the UPDATEs.

test=# \timing
Timing is on.
test=# update mira set y = 123;
UPDATE 15
Time: 1874.894 ms

150 000 Records, a simple table with 2 int-columns.

Have you run vacuum? Which version? Can you tell us the explain for the
update?


> 
> Greetings
> Michaela
> 
> - Original Message - From: "A. Kretschmer" 

Please, no silly TOFU.


HTH, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 



Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-11 Thread Alban Hertroys

Dave Page wrote:
> The note on the fsf directory (http://directory.fsf.org/gnutls.html) is a 
> little off-putting:
> 
> "The program is currently in development and at an alpha stage."
> 
> Not to mention that from what I can see in a brief Google the Windows
> support is somewhat rudimentary.
> 
> Regards, Dave


Not to mention that the API consists of functions prefixed by "GNUTLS_" 
(or something similar). GnuTLS is something I always try to avoid 
installing; there's a very good alternative called openssl :P


The only proper fix for this licensing issue IMHO is to fix GPL, not to 
kludge in some GPL compliant library. The issue at hand obviously is 
licensing related, the software is not the problem. And the cause of the 
licensing problem is apparently a restriction in GPL. Fix that, problem 
solved.


Regards,
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread Luckys
a single complex insert/update SQL would always be faster compared to
having a cursor and manipulating the data.
Again, it depends on what the logic is, and where you actually have the
data.

cool.
L.

On 4/11/06, MG <[EMAIL PROTECTED]> wrote:
> I can't use COPY for INSERTs, because I have to manipulate each record
> individually.
>
> But the problem is the UPDATEs.
>
> Greetings
> Michaela

Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread MG
I can't use COPY or put it in a transaction, because the application which 
writes in PostgreSQL decides if it's an INSERT or an UPDATE, which depends 
on different conditions, and also manipulates the records.
In one extreme case it can be only INSERTs, in the other extreme case all 
can be UPDATEs. Normally the UPDATEs are more than INSERTs.


I have to improve the performance of the UPDATEs, and I was wondering why 
the UPDATEs need twice as long as the INSERTs.


Greetings
Michaela

- Original Message - 
From: "Thomas Pundt" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, April 11, 2006 11:30 AM
Subject: Re: [GENERAL] Performance UPDATE/INSERT



Hi,

On Tuesday 11 April 2006 11:08, MG wrote:
| I have about 100 000 records, which need about 30 minutes to write them
| with single INSERTs into PostgreSQL. If I go through these 100 000 records
| and make an UPDATE on each record, it takes 1 hour.
|
| Can anyone tell me about their experience of the performance of INSERT and
| UPDATE?

how about using COPY for the import? Or at least putting the INSERTs into
a transaction using BEGIN and COMMIT? I would do the same with the UPDATEs:
put it into a transaction frame.

Ciao,
Thomas

--
Thomas Pundt <[EMAIL PROTECTED]>  http://rp-online.de/









Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread MG
I can't use COPY for INSERTs, because I have to manipulate each record 
individually.


But the problem is the UPDATEs.

Greetings
Michaela

- Original Message - 
From: "A. Kretschmer" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, April 11, 2006 11:20 AM
Subject: Re: [GENERAL] Performance UPDATE/INSERT



am  11.04.2006, um 11:08:57 +0200 mailte MG folgendes:
> Hello,
> 
> I have about 100 000 records, which need about 30 minutes to write them 
> with single INSERTs into PostgreSQL.

You should better use the COPY command for bulk inserts. This is
faster.


HTH, Andreas
--
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
===Schollglas Unternehmensgruppe===









Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread Thomas Pundt
Hi,

On Tuesday 11 April 2006 11:08, MG wrote:
| I have about 100 000 records, which need about 30 minutes to write them
| with single INSERTs into PostgreSQL. If I go through these 100 000 records
| and make an UPDATE on each record, it takes 1 hour.
|
| Can anyone tell me about their experience of the performance of INSERT and
| UPDATE?

how about using COPY for the import? Or at least putting the INSERTs into
a transaction using BEGIN and COMMIT? I would do the same with the UPDATEs:
put it into a transaction frame.
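i.e. something along these lines (table and column names hypothetical):

  BEGIN;
  INSERT INTO test (id, val) VALUES (1, 'a');
  UPDATE test SET val = 'b' WHERE id = 2;
  -- ... thousands more statements ...
  COMMIT;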

Ciao,
Thomas

-- 
Thomas Pundt <[EMAIL PROTECTED]>  http://rp-online.de/ 



Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-11 Thread Martijn van Oosterhout
On Mon, Apr 10, 2006 at 02:58:50PM -0700, Tyler MacDonald wrote:
> Dave Page  wrote:
> > >   GnuTLS is LGPL, which isn't quite as liberal as postgresql's
> > > license, but should still be ubiqutous enough to be worthwhile.
> > 
> > The note on the fsf directory (http://directory.fsf.org/gnutls.html) is a 
> > little off-putting:
> > 
> > "The program is currently in development and at an alpha stage."
> > 
> > Not to mention that from what I can see in a brief Google the Windows
> > support is somewhat rudimentary.
> 
>   I don't think we should drop openssl support... just include gnutls
> support so that OS vendors that want to be able to link their libpq against
> GPL software (like debian) have that choice available.

Well, for a program in alpha stage it's working quite well. Just
examining the Debian archives shows GnuTLS used by a few hundred
packages, including apparently everything in Gnome (directly or
indirectly).

It would be worth looking into, to lay this case to rest, finally...

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread A. Kretschmer
am  11.04.2006, um 11:08:57 +0200 mailte MG folgendes:
> Hello,
> 
> I have about 100 000 records, which need about 30 minutes to write them with 
> single INSERTs into PostgreSQL.

You should better use the COPY command for bulk inserts. This is
faster.


HTH, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 



[GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread MG



Hello,
 
I have about 100 000 records, which need about 30 minutes to write 
with single INSERTs into PostgreSQL.
If I go through these 100 000 records and make an UPDATE on each record, it 
takes 1 hour.
 
Can anyone tell me about their experience of the performance of INSERT and 
UPDATE?
 
Greetings
Michaela


Re: [GENERAL] installing and using autodoc

2006-04-11 Thread Richard Huxton

[EMAIL PROTECTED] wrote:
> Hi,
>
> I'm interested in using postgresql_autodoc to document my postgres databases
> on Linux, but I'm unable to find any information on how to install the
> program or use it.  Can someone point me in the right direction?
>
> I've downloaded postgresql_autodoc-1.25.tar.gz from
> http://www.rbt.ca/autodoc/releasenotes.html, unzipped it and have no idea
> what to do.  Does it need to be compiled?  If so, what command do I use?  If
> not, what do I do to configure the program?


You might have a packaged version available - worth searching for it 
perhaps.


Failing that, try the following:

cd /path/to/postgresql_autodoc
./configure
make
su -
make install

That will probably install under /usr/local/postgresql_autodoc/...

The ./configure stage will tell you if you need any other 
modules/libraries etc. If you can find a package it should take care of 
this for you.


HTH
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Asking advice on speeding up a big table

2006-04-11 Thread hubert depesz lubaczewski
On 4/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> It is, but it is only 32 msec because the query has already run and
> cached the useful bits.  And since I have random values, as soon as I
> look up some new values, they are cached and no longer new.

according to my experience i would vote for a too slow filesystem

> What I was hoping for was some general insight from the EXPLAIN
> ANALYZE, that maybe extra or different indices would help, or if there
> is some better method for finding one row from 100 million.  I realize
> I am asking a vague question which probably can't be solved as
> presented.

hmm .. perhaps you can try to denormalize the table, and then use
multicolumn indices?

depesz


Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-11 Thread Dave Page
 

> -Original Message-
> From: Tyler MacDonald [mailto:[EMAIL PROTECTED] 
> Sent: 10 April 2006 22:59
> To: Dave Page
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]; [EMAIL PROTECTED]; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Debian package for freeradius_postgresql module
> 
> 
>   I don't think we should drop openssl support... just 
> include gnutls support so that OS vendors that want to be 
> able to link their libpq against GPL software (like debian) 
> have that choice available.

No, I'd be incredibly surprised (and vocal about it) if that were
seriously proposed - I'm just pointing out some downsides to GnuTLS.

Regards, Dave.
