Re: [GENERAL] Dynamic table with variable number of columns

2006-07-13 Thread nkunkov


Hi Thomas,
No, I actually need the product names (prod1, prod2) to become column headings, which effectively means transposing the table.
Thanks.
NK
- Original Message -
From: Thomas Burdairon <[EMAIL PROTECTED]>
Date: Wednesday, July 12, 2006 10:53 am
Subject: Re: [GENERAL] Dynamic table with variable number of columns

> if i understand well you need to have a history for your products.
> i would have a table B with
> date       products    price
> 1/1/2006   prod1       1.0
> 1/1/2006   prod2       3.0
>
> or replace prod_name by product_id, ...
>
> Thomas
>
> On Jul 12, 2006, at 16:08, [EMAIL PROTECTED] wrote:
> >
> > Bruno Wolff III wrote:
> >> On Tue, Jul 11, 2006 at 06:05:18 -0700,
> >>   [EMAIL PROTECTED] wrote:
> >>> Hello,
> >>> I'm a pgsql novice and here is what I'm trying to do:
> >>> 1. I need to create a dynamic table with the column names fetched
> >>> from the database using a select statement from some other table.
> >>> Is it possible?  Could you point me to a simple example on how to do it?
> >>> 2. I would like to compare the list of column names which are values
> >>> fetched from some table with the column names of the existing table.
> >>> If one of the names doesn't exist as a column name of my table, I'd
> >>> like to dynamically alter the table and add a column with the name just
> >>> fetched from the DB.
> >>> Your help is greatly appreciated.
> >>> Thanks
> >>> NK
> >>
> >> Information on the column names of tables in the database is available
> >> from the information schema and the catalog tables. You can find more
> >> about this in the documentation:
> >> http://www.postgresql.org/docs/8.1/static/information-schema.html
> >> http://www.postgresql.org/docs/8.1/static/catalogs.html
> >>
> >> You might get better help by describing the actual problem you are
> >> trying to solve rather than asking for help with a particular approach
> >> to solving that problem. The approach you are trying seems to be
> >> seriously broken and it would probably be a good idea to consider
> >> other approaches.
> >
> > Thank you for the suggestions.
> > I will try to describe the problem better.
> > I have two problems to solve.  The first one is that I have to transpose
> > a table.
> > I have table A that looks like this:
> > date       product  price  description
> > 1/1/2006   prod1    1.00   some product
> > 1/1/2006   prod2    3.00   other product
> >
> > I need to transpose this table to create table B:
> > date       prod1    prod2
> > 1/1/2006   1.00     3.00
> >
> > I think I can use an EXECUTE statement and build the table dynamically
> > by using the result of the select statement for column names. Would that
> > be the right approach?  Are there good examples somewhere on how to
> > implement this?
> >
> > My second problem is that, after creating the above transposed table, I
> > will be inserting more rows to it from table A and I might have more
> > products too.  That means I will have to compare the value of product
> > from table A with the column names of table B and alter the table
> > accordingly.  To compare column names with the value of product in
> > table A I think I can use the pg_attribute catalog.  Would that be the
> > right way to go?
> >
> > Thanks for your help.
> > NK


Re: [GENERAL] Dynamic table with variable number of columns

2006-07-13 Thread nkunkov
Hi,
Thanks again.
One more question: will the crosstab function work if I don't know the
number/names of the columns beforehand, or do I need to supply the column
headings?

Thanks again.
NK
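
For what it's worth, crosstab() from contrib/tablefunc returns SETOF record,
so the output columns do have to be listed explicitly in the query; a minimal
sketch (table name, column names and types here are purely illustrative):

SELECT *
FROM crosstab(
    'SELECT date, product, price FROM table_a ORDER BY 1, 2'
) AS ct(date date, prod1 numeric, prod2 numeric);

If a new product appears, that column definition list has to change too,
either by hand or by generating the SQL dynamically before executing it.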

Bruno Wolff III wrote:
> On Wed, Jul 12, 2006 at 07:08:15 -0700,
>   [EMAIL PROTECTED] wrote:
> >
> > Thank you for the suggestions.
> > I will try to describe the problem better.
> > I have two problems to solve.  First one is that I have to transpose a
> > table.
> > I have table A that looks like this:
> > date  product  price description
> > 1/1/2006   prod1  1.00  some product
> > 1/1/2006   prod2  3.00  other product
> >
> > I need to transpose this table to create table B
> > date       prod1    prod2
> > 1/1/2006   1.00  3.00
> >
> > I think I can use EXECUTE statement and build the table dynamically by
> > using the result of the select statement for column names. Would that
> > be the right approach?  Are there good examples somewhere on how to
> > implement this?
>
> The crosstabs contrib module can transpose tables for you.
>
> > My second problem, is that after creating the above transposed table, I
> > will be inserting more rows to it from table A and i might have more
> > products too.  That means I will have to compare the value of product
> > from table A with the column names of table B and alter the table
> > accordingly.  To compare column names with the value of product in
> > table A I think I can use pg_attribute function.  Would that be a right
> > way to go?
>
> I don't think that will work very well. I expect that adding data to the
> original tables and retransposing when you need reports would be a better
> way to go.
>
> Changing table definitions on the fly is going to be very costly and will
> break concurrent access.
>
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Delete Problem

2006-07-13 Thread Jamie Deppeler



db=# \d job
   Table "job"
  Column|Type 
|   Modifiers

-+-+
primary | integer | not null default 
nextval('job_primary_seq'::regclass)

jobnumber   | text|
jobname | text|
jobdetails  | text|
Indexes:
   "job_pkey" PRIMARY KEY, btree ("primary")

Michael Fuhr wrote:

On Thu, Jul 13, 2006 at 04:28:50PM +1000, Jamie Deppeler wrote:
  

From psql

db=# select "primary" from job where "primary" = 370;
 primary
---------
     370
(1 row)

db=# delete from job where "primary" = 370;
DELETE 0



You said that you had deleted all triggers -- have you verified
that none remain?  Does the table have any rules?  What does
"\d job" show?

  


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Delete Problem

2006-07-13 Thread Michael Fuhr
On Thu, Jul 13, 2006 at 04:28:50PM +1000, Jamie Deppeler wrote:
> From psql
> 
> db=# select "primary" from job where "primary" = 370;
>  primary
> ---------
>      370
> (1 row)
> 
> db=# delete from job where "primary" = 370;
> DELETE 0

You said that you had deleted all triggers -- have you verified
that none remain?  Does the table have any rules?  What does
"\d job" show?

-- 
Michael Fuhr

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Delete Problem

2006-07-13 Thread Jamie Deppeler

Not sure if it disk space as i have around 10gig free

surabhi.ahuja wrote:

even i have seen this problem
 
i am using postgres 8.0.0
 
i open psql 
 
and there i try to do

delete from 
 
it seems that psql gets stuck.
 
even after 5 mins or something, no deletion happens.
 
generally this happens when the disk is nearing to full
 
cant this be avoided, why does postgres hang.
 
thanks,

regards
Surabhi
 
 



From: [EMAIL PROTECTED] on behalf of Michael Fuhr
Sent: Thu 7/13/2006 11:48 AM
To: Jamie Deppeler
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Delete Problem



[Please copy the mailing list on replies so others can participate
in and learn from the discussion.]

On Thu, Jul 13, 2006 at 03:32:43PM +1000, Jamie Deppeler wrote:
> Michael Fuhr wrote:
> >On Thu, Jul 13, 2006 at 01:45:36PM +1000, Jamie Deppeler wrote:
> >>Just today i have noticed i have one certain table that i cannot delete
> >>any records from
> >
> >What happens when you try to delete?  Do you get an error?  Does
> >the delete succeed but report zero rows deleted?  Something else?
> >What does "EXPLAIN ANALYZE DELETE ..." show?  Is the delete done
> >in a transaction that doesn't commit?
>
> Delete is performed without any errors or warnings but when i refresh
> the found set it is still in the table

What do you mean by "refresh the found set"?  What client interface
are you using?  Might the deleting transaction not be committing?
What happens if you do the delete using psql?  What version of
PostgreSQL are you running?

--
Michael Fuhr

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match







---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Performance problem with query

2006-07-13 Thread Christian Rengstl
Good morning list,

the following query takes about 15 to 20 minutes for around 2 million lines in 
the file myfile.txt, but with 8 million lines it takes around 5 hours and i 
just don't understand why there is such a huge discrepancy in performance.

COPY public.temp_table FROM 'myfile.txt' WITH DELIMITER '\t';

INSERT INTO public.master(pid,smid, val1, val2, chr)
SELECT pid, smid, val1, val12, CAST(split_part((SELECT chr from 
public.temp_table LIMIT 1), '_', 2) as int2)
   FROM public.temp_table; 

INSERT INTO public.values(smid, pos, chr)
SELECT DISTINCT smid, pos, CAST(split_part((SELECT chr from 
public.temp_table LIMIT 1), '_', 2) as int2) 
FROM public.temp_table; 

I came up with this query, because i wanted to use the COPY command to load 
huge files into the db, but i don't want to load all the columns contained in 
the file in only one table but copy some of them into one table and some in a 
second table. As i found out with logging, the data is loaded into temp_table 
within 15 minutes, but to transfer it from the temp_table to public.master took 
around 5 hours and from temp_table to public.values took again only something 
like 10 minutes. Can it be that the cast takes up so much more time than when 
reading and transferring 2 million lines?

Thanks for any advice!

--
Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944-7230


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Delete Problem

2006-07-13 Thread surabhi.ahuja

even i have seen this problem

i am using postgres 8.0.0

i open psql

and there i try to do

delete from 

it seems that psql gets stuck.

even after 5 mins or something, no deletion happens.

generally this happens when the disk is nearing to full

cant this be avoided, why does postgres hang.

thanks,
regards
Surabhi


From: [EMAIL PROTECTED] on behalf of Michael Fuhr
Sent: Thu 7/13/2006 11:48 AM
To: Jamie Deppeler
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Delete Problem

[Please copy the mailing list on replies so others can participate
in and learn from the discussion.]

On Thu, Jul 13, 2006 at 03:32:43PM +1000, Jamie Deppeler wrote:
> Michael Fuhr wrote:
> >On Thu, Jul 13, 2006 at 01:45:36PM +1000, Jamie Deppeler wrote:
> >>Just today i have noticed i have one certain table that i cannot delete
> >>any records from
> >
> >What happens when you try to delete?  Do you get an error?  Does
> >the delete succeed but report zero rows deleted?  Something else?
> >What does "EXPLAIN ANALYZE DELETE ..." show?  Is the delete done
> >in a transaction that doesn't commit?
>
> Delete is performed without any errors or warnings but when i refresh
> the found set it is still in the table

What do you mean by "refresh the found set"?  What client interface
are you using?  Might the deleting transaction not be committing?
What happens if you do the delete using psql?  What version of
PostgreSQL are you running?

--
Michael Fuhr




Re: [GENERAL] Delete Problem

2006-07-13 Thread Michael Fuhr
On Thu, Jul 13, 2006 at 04:58:26PM +1000, Jamie Deppeler wrote:
> db=# \d job
>         Table "job"
>    Column   |  Type   |                       Modifiers
> ------------+---------+--------------------------------------------------------
>  primary    | integer | not null default nextval('job_primary_seq'::regclass)
>  jobnumber  | text    |
>  jobname    | text    |
>  jobdetails | text    |
> Indexes:
>    "job_pkey" PRIMARY KEY, btree ("primary")

You said you were running 8.0 but "nextval('job_primary_seq'::regclass)"
suggests 8.1.  What does "SELECT version()" show?

What is the output of the following?

EXPLAIN ANALYZE SELECT "primary" FROM job WHERE "primary" = 370;
EXPLAIN ANALYZE DELETE FROM job WHERE "primary" = 370;

If these commands use index or bitmap index scans, do you get
different results if you execute the following commands and then
try the delete again?

SET enable_indexscan TO off;
SET enable_bitmapscan TO off;

-- 
Michael Fuhr

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Delete Problem

2006-07-13 Thread Jamie Deppeler


PostgreSQL 8.1.3  on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.0.3 
20060104 (prerelease) (Debian 4.0.2-6)



Michael Fuhr wrote:

On Thu, Jul 13, 2006 at 04:58:26PM +1000, Jamie Deppeler wrote:
  

db=# \d job
   Table "job"
  Column|Type 
|   Modifiers

-+-+
primary | integer | not null default 
nextval('job_primary_seq'::regclass)

jobnumber   | text|
jobname | text|
jobdetails  | text|
Indexes:
   "job_pkey" PRIMARY KEY, btree ("primary")



You said you were running 8.0 but "nextval('job_primary_seq'::regclass)"
suggests 8.1.  What does "SELECT version()" show?

What is the output of the following?

EXPLAIN ANALYZE SELECT "primary" FROM job WHERE "primary" = 370;
EXPLAIN ANALYZE DELETE FROM job WHERE "primary" = 370;

If these commands use index or bitmap index scans, do you get
different results if you execute the following commands and then
try the delete again?

SET enable_indexscan TO off;
SET enable_bitmapscan TO off;

  



--

Jamie Deppeler
Database Administrator

once:technologies pty ltd

Do It Once!

46 Roseneath Street
North Geelong
Victoria 3215 Australia

Ph: +61 3 5278 6699
Fax: +61 3 5278 6166
Email: [EMAIL PROTECTED]
Web: http://www.oncefabrik.com 


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Delete Problem

2006-07-13 Thread Michael Fuhr
On Thu, Jul 13, 2006 at 05:26:31PM +1000, Jamie Deppeler wrote:
> PostgreSQL 8.1.3  on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.0.3 
> 20060104 (prerelease) (Debian 4.0.2-6)

What about the other outputs I mentioned?

-- 
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] doesn't recognize "!=-" (not equal to a negative value)

2006-07-13 Thread hubert depesz lubaczewski
On 7/11/06, Paul Tilles <[EMAIL PROTECTED]> wrote:
> Yes.  That works.  I think that the parser should work properly either way.

it works properly. just the proper way of functioning is not the one you
would like to have. you can simply add this operator:

CREATE FUNCTION not_equals_minus(int8, int8) RETURNS bool AS $BODY$
    SELECT $1 <> -$2;
$BODY$ LANGUAGE SQL IMMUTABLE STRICT;

CREATE OPERATOR !=- (
    leftarg = int8,
    rightarg = int8,
    procedure = not_equals_minus,
    commutator = !=-
);

and then:

> select 1!=-2;
 ?column?
----------
 t
(1 row)

depesz


Re: [GENERAL] db clustering?

2006-07-13 Thread Bill Moran
Peter Murray & Uta Wussing <[EMAIL PROTECTED]> wrote:

> I have some postgres cluster questions:
>  
>  I would like to have the ability to dynamically re-size a cluster of 
> Postgres DB instances.
>  
>  
>  0. Can I create a Postgres cluster?
>   - more than one instance of Postgres running on more than 1 cpu acting as a 
> single instance of postgres to process work. i.e. 1 pg/cpu
>  
>  1. start postgres on another box
>  2. have it added to the cluster and have it start processing work for it.
>  3. have the instance stop processing work and remove the instance from the 
> cluster 

Have a look at pgCluster and/or pgPool.

-- 
Bill Moran

ZOE: Preacher, don't the Bible have some pretty specific things to say about
 killing?
BOOK: Quite specific. It is, however, somewhat fuzzier on the subject of
  kneecaps.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] stored function - array parameter - how many element in array ?

2006-07-13 Thread Özgür Tuğrul
hello,
 
the question is very clear .. when we write a stored function, we can use an array parameter as a variable .. but, in the function, how do i know how many elements are in that array?
 
i want to perform some dml operations about each one like (delete, update or delete)
 
can anyone show me the example or tell me the function name i should use ..
 
regards
--
There is no e-mail anymore .. There is Gmail :)


Re: [GENERAL] stored function - array parameter - how many element in array ?

2006-07-13 Thread Merlin Moncure

On 7/13/06, Özgür Tuğrul <[EMAIL PROTECTED]> wrote:


hello,

the question is very clear .. when we write stored function, we can use
array parameter as a variable  .. but, in the function, how do i know how
many element in that array ?

i want to perform some dml operations about each one like (delete, update or
delete)

can anyone show me the example or tell me the function name i should use ..




have you looked at array_upper/array_dims?  you can see an example of
array_upper here:
http://people.planetpostgresql.org/merlin/index.php?/archives/4-fun-with-arrays.html

(shameless plug) :)

merlin
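
For reference, a minimal PL/pgSQL sketch (function and table names are invented
for illustration; assumes plpgsql is installed and 8.0+ for named parameters)
that walks an integer[] argument with array_lower/array_upper:

CREATE OR REPLACE FUNCTION delete_ids(ids integer[]) RETURNS integer AS $$
DECLARE
    i integer;
    n integer := 0;
BEGIN
    -- COALESCE handles an empty array, whose bounds are NULL
    FOR i IN COALESCE(array_lower(ids, 1), 1) .. COALESCE(array_upper(ids, 1), 0) LOOP
        DELETE FROM my_table WHERE id = ids[i];
        n := n + 1;
    END LOOP;
    RETURN n;
END;
$$ LANGUAGE plpgsql;

-- e.g. SELECT delete_ids(ARRAY[1, 2, 3]);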

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] getting function argument names from psql?

2006-07-13 Thread David Fetter
On Wed, Jul 12, 2006 at 11:37:04PM -0400, Tom Lane wrote:
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > On 7/12/06, Timothy Perrigo <[EMAIL PROTECTED]> wrote:
> >> Is there a way to get the names of the arguments to a function
> >> from psql?  /df and /df+ return the parameter types, but not
> >> their names.
> 
> > select oid::regprocedure from pg_proc where proname like 'your
> > name here';
> 
> Nope, because regprocedureout doesn't include argument names (nor
> modes).  I think the best way ATM is to look directly at
> pg_proc.proargnames :-(

Back when, I submitted a psql patch to get the input names along with
their types.  Shall I dig up that code this weekend?

Cheers,
D
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778    AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Performance problem with query

2006-07-13 Thread Merlin Moncure

On 7/13/06, Christian Rengstl
<[EMAIL PROTECTED]> wrote:

Good morning list,

the following query takes about 15 to 20 minutes for around 2 million lines in 
the file myfile.txt, but with 8 million lines it takes around 5 hours and i 
just don't understand why there is such a huge discrepancy in performance.



on the surface it doesn't make sense, can you post an explain analyze?

merlin
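
For reference, a sketch of gathering that for the slow step: EXPLAIN ANALYZE
really executes the statement, so wrapping it in a transaction that is rolled
back keeps the test repeatable (the INSERT below is the one posted above):

BEGIN;
EXPLAIN ANALYZE
INSERT INTO public.master(pid, smid, val1, val2, chr)
SELECT pid, smid, val1, val12,
       CAST(split_part((SELECT chr FROM public.temp_table LIMIT 1), '_', 2) AS int2)
FROM public.temp_table;
ROLLBACK;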

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] getting function argument names from psql?

2006-07-13 Thread Merlin Moncure

On 7/13/06, David Fetter <[EMAIL PROTECTED]> wrote:

On Wed, Jul 12, 2006 at 11:37:04PM -0400, Tom Lane wrote:
> > select oid::regprocedure from pg_proc where proname like 'your
> > name here';
>
> Nope, because regprocedureout doesn't include argument names (nor
> modes).  I think the best way ATM is to look directly at
> pg_proc.proargnames :-(


whoop! misread it :)


Back when, I submitted a psql patch to get the input names along with
their types.  Shall I dig up that code this weekend?


that would be great, although is there any concern about backwards
compatibility of regprocedure cast? would it be safer to make a new
cast?  also, does your patch account for in/out?

merlin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] getting function argument names from psql?

2006-07-13 Thread David Fetter
On Thu, Jul 13, 2006 at 09:22:56AM -0400, Merlin Moncure wrote:
> On 7/13/06, David Fetter <[EMAIL PROTECTED]> wrote:
> >On Wed, Jul 12, 2006 at 11:37:04PM -0400, Tom Lane wrote:

> >Back when, I submitted a psql patch to get the input names along
> >with their types.  Shall I dig up that code this weekend?
> 
> that would be great, although is there any concern about backwards
> compatibility of regprocedure cast?

I'm not really concerned with any backwards compatibility for this new
feature, as psql has never attempted it for backslash commands.

> would it be safer to make a new cast?  also, does your patch account
> for in/out?

If it didn't, it will. :)

Cheers,
D
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778    AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Physical block structure in PostgreSQL

2006-07-13 Thread Spendius
Hi,
I found no description of the way the records of a table are physically
stored in files with PostgreSQL. I said "block structure" in my subject,
but actually is there any such thing as a block?
(I read the pages at
http://www.postgresql.org/docs/8.1/interactive/storage.html
and saw things regarding files and "pages" that are "usually 8k" big,
etc., but saw no further info about "blocks" - they speak of "items"
there: what are those?)

Thanks.
Spendius


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] VPD

2006-07-13 Thread jose

Hello,

Oracle has a feature called "Virtual Private Databases" (VPD), that
enables you to configure permissions at row level. Is there an
equivalent feature in PostgreSQL?


Thank you,

jss

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] How to insert .xls files into

2006-07-13 Thread Adrian Klaver
Dates don't transfer correctly. When I try it the date moves two days ahead. 
On Wednesday 12 July 2006 08:43 am, Guy Fraser wrote:
> On Fri, 2006-07-07 at 22:41 +, [EMAIL PROTECTED] wrote:
> > > [mailto:[EMAIL PROTECTED] On Behalf Of Adrian Klaver
> > >
> > > I guess the solution depends on what is a 'large amount of data'. The
> > > most time consuming part is going to be converting the single data
> > > elements at the top of each sheet into multiple elements. I would
> > > create columns for the data in the sheet. At the same time I would
> > > order the columns to match the database schema. Then it would a matter
> > > of cut and paste to fill the columns with the data. The event id's
> > > could be renumbered using Excel's series generator to create a non
> > > repeating set of id's. If the amount of data was very large it might
> > > pay to create some macros to do the work. Once the data was filled in
> > > you would have a couple of choices. One, as mentioned by Ron would be
> > > to use OpenOffice v2 to dump the data into the database. The other
> > > would be to save the data as CSV and use the psql \copy command to
> > > move the data into the table.
> > > On Friday 07 July 2006 09:40 am, Parang Saraf wrote:
> >
> > Everything You described is familiar to me, except the OpenOffice v2
> >  dump - could You explain this more in details pls? I tried to do it
> >  many times, without success.
> >
> > Thank You
> >
> > Tomas
>
> Does the "OpenOffice v2 dump" convert the date correctly when
> exporting into PostgreSQL?
>
> The date in .xls when using excel is exported to CSV as a number
> which is tricky to convert to a date. This is what I use :
>
> date_pli('epoch'::date, date_num::integer - 25569) AS date_fmt
>
> The number "25569" is a fudge factor, that can be different between
> dumps, but stays consistent through the dump. I usually adjust it
> and compare the result to the value shown in excel until I get a
> match.
>
>
>
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly

-- 
Adrian Klaver   
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Trying to connect to an Oracle instance...

2006-07-13 Thread Spendius
Hi,
I've been trying to perform a connection to an Oracle DB for a while,
to no avail. Here is what I get at my psql prompt:
postdb=# Select dblink_connect('login','hostaddr= port=1521 \
dbname=orasid user=mylogin password=mypwd');
ERROR:  could not establish connection
DETAIL:  server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

I installed postgresql 8.1 on a Win XP PC a few days ago without any
problem, and I get no error at all when I run any other SQL instructions
or PostgreSQL functions. I ran the
%POSTGRES_HOME%\share\contrib\dblink.sql
file as the same user (which is the 'postgres' Windows user I created
-as I have admin privs on my PC- and with which I execute the following
every time I want to use postgresql:
DOS>runas /env /user:\postgres d:\prgs\postgresql\81\bin\postmaster
DOS>runas /env /user:\postgres "d:\prgs\postgresql\81\bin\psql postdb"
in a DOS box).

, orasid, mylogin and mypwd are valid parameters necessary to connect
myself to an Oracle 'orasid' DB that resides on a Sun/Solaris server;
they work in a SQL*Plus session, for example (I use the IP address
rather than the name of this server because if I don't I get the error
"could not translate host name "host name" to address: Unknown host",
yet I can ping it of course). Obviously I need to configure something
for the network inside PostgreSQL directories, don't I?

Hopefully I gave all the details required to give me some hints... Thanks
for any suggestion...
Spendius


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] getting function argument names from psql?

2006-07-13 Thread David Fetter
On Thu, Jul 13, 2006 at 03:39:34PM +0200, Peter Eisentraut wrote:
> Am Donnerstag, 13. Juli 2006 15:16 schrieb David Fetter:
> > Back when, I submitted a psql patch to get the input names along
> > with their types.  Shall I dig up that code this weekend?
> 
> Seems reasonable, as long as it doesn't make the output an extra
> three screens wide.

I can't guarantee that people haven't given their input parameters
names like supercalifragilisticexpialidocious_input_01.  I could try
something that would wrap in such cases, but that's a lot more code.
Should I try something that wraps?

Cheers,
D
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778    AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] VPD

2006-07-13 Thread Richard Broersma Jr
> Oracle has a feature called "Virtual Private Databases" (VPD), that 
> enables you to configure permissions at row level. Is there an 
> equivalent feature in PostgreSQL?

Not inherent to Postgresql itself, but there is a complementary package that 
will do this.
http://veil.projects.postgresql.org/curdocs/index.html

Regards,

Richard Broersma Jr.
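
Short of a dedicated package like Veil, the usual low-tech approximation is a
view that filters on current_user, with access granted on the view rather than
the base table. A rough sketch (table and column names are made up):

CREATE TABLE documents (
    id    serial PRIMARY KEY,
    owner name NOT NULL DEFAULT current_user,
    body  text
);

REVOKE ALL ON documents FROM PUBLIC;

CREATE VIEW my_documents AS
    SELECT id, owner, body
    FROM documents
    WHERE owner = current_user;

GRANT SELECT ON my_documents TO PUBLIC;

Unlike Oracle's VPD this is not enforced transparently on the base table, so
every access path has to go through such views.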

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] getting function argument names from psql?

2006-07-13 Thread Peter Eisentraut
Am Donnerstag, 13. Juli 2006 15:16 schrieb David Fetter:
> Back when, I submitted a psql patch to get the input names along with
> their types.  Shall I dig up that code this weekend?

Seems reasonable, as long as it doesn't make the output an extra three screens 
wide.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Trying to connect to an Oracle instance...

2006-07-13 Thread Tony Caduto

Spendius wrote:

Hi,
I've been trying to perform a connection to an Oracle DB for a while,
to no avail. Here is what I get at my psql prompt:
postdb=# Select dblink_connect('login','hostaddr= port=1521 \
  
If you are trying to connect to a Oracle DB, don't you need to be using 
DBI Link instead of DBlink?

I thought DBLink was just for Postgresql databases?

Later,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Trying to connect to an Oracle instance...

2006-07-13 Thread Michael Fuhr
On Thu, Jul 13, 2006 at 05:46:55AM -0700, Spendius wrote:
> I've been trying to perform a connection to an Oracle DB for a while,
> to no avail. Here is what I get at my psql prompt:
> postdb=# Select dblink_connect('login','hostaddr= port=1521 \
> dbname=orasid user=mylogin password=mypwd');
> ERROR:  could not establish connection
> DETAIL:  server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.

The contrib/dblink module is for connecting to other PostgreSQL
databases, not to Oracle or any other DBMS.  To connect to a non-
PostgreSQL database you could write a function in a server-side
language like PL/Perl and make the connection with DBI.  See also
DBI-Link:

http://pgfoundry.org/projects/dbi-link/

If the backend crashed then dblink might have a bug.  Can anybody
repeat this crash and get a stack trace?  I tried connecting to
MySQL and got a clean failure with the message "ERROR:  could not
establish connection".

-- 
Michael Fuhr

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Delete Problem

2006-07-13 Thread Scott Marlowe
On Thu, 2006-07-13 at 01:41, surabhi.ahuja wrote:
> even i have seen this problem
>  
> i am using postgres 8.0.0
>  
> i open psql 
>  
> and there i try to do
> delete from 
>  
> it seems that psql gets stuck.
>  
> even after 5 mins or something, no deletion happens.
>  
> generally this happens when the disk is nearing to full
>  
> cant this be avoided, why does postgres hang.
>  

5 minutes does not a hang make.  :)  It's likely that postgresql hasn't
hung, but is just taking a really long time to complete your request. 
Note that the other user got a response back, it just said zero rows
deleted.  Which points to index corruption.  Which points to broken
hardware.  Hardware which should be tested thoroughly to identify any
problem areas, like bad RAM, CPU, hard drive, cabling, power supply, or
misconfigured memory timings et. al.

Your problem is likely a bit different.  When the machine just sits
there for 5 or more minutes, the CPUs or hard drives are likely still
busy.

What do you postgresql logs say, if anything?

Some other points:

1:  upgrade your postgresql version.  The developers work hard for your
benefit.  8.0.8 or so is out.  There's LOTS of bug fixes between 8.0.0
and 8.0.8, and if you're running into a bug on 8.0.0 you're wasting your
time trying to figure it out if the fix is already in.

2:  Are you running analyze and vacuum regularly?  If your disk is often
nearing full but your actual data set isn't that large, that points to a
lack of vacuuming.  check your fsm settings as well.
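
For example, something along these lines (illustrative commands; run them as a
superuser or the table owner):

VACUUM VERBOSE ANALYZE;
-- on 8.x a database-wide VACUUM VERBOSE ends with a summary of free space
-- map usage, which shows whether max_fsm_pages is large enough

SELECT relname, relpages, reltuples
FROM pg_class
WHERE relkind = 'r'
ORDER BY relpages DESC
LIMIT 10;   -- rough look at which tables are biggest (figures are as of the last VACUUM/ANALYZE)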

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] db clustering?

2006-07-13 Thread Scott Marlowe
On Wed, 2006-07-12 at 15:22, Peter Murray & Uta Wussing wrote:
> I have some postgres cluster questions:
> 
> I would like to have the ability to dynamically re-size a cluster of
> Postgres DB instances.
> 
> 
> 0. Can I create a Postgres cluster?
> - more than one instance of Postgres running on more than 1 cpu acting
> as a single instance of postgres to process work. i.e. 1 pg/cpu
> 
> 1. start postgres on another box
> 2. have it added to the cluster and have it start processing work for
> it.
> 3. have the instance stop processing work and remove the instance from
> the cluster 

There is no such software currently existent for postgresql that I know
of.  There are pieces of software that, with the watchful eye of a
competent dba can do most of what you ask, but there's no "push a button
take a banana" interface / implementation.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] SQL Standards Compliance With Case

2006-07-13 Thread Scott Marlowe
On Wed, 2006-07-12 at 22:26, Tom Lane wrote:
> Rich Shepard <[EMAIL PROTECTED]> writes:
> >I'm trying to assist the XRMS developers port their application to
> > postgres (8.1.x on), and it's almost there. One (perhaps the only) stumbling
> > block is case for table and column (relation and attribute) names.
> > Apparently MySQL allows for mixed case, while postgres wants only lower
> > case. One of the development team asked me to enquire when postgres would be
> > fully compliant with the SQL standard in this regard.
> 
> It's probably worth pointing out here that the MySQL behavior they seem
> to be expecting is considerably further from the spec than Postgres's
> behavior.  If I'm reading between the lines correctly, they are
> expecting foo and Foo (both written without double-quotes) to be
> distinct identifiers.  But these are the same identifier per spec,
> because the spec *requires* case-folding of unquoted identifiers.

It's even worse than that.  MySQL uses filenames to identify tables,
like PostgreSQL did back in the days when dinosaurs roamed the plains
and the British Police force drove sexy cars (ok, not that far back, but
anyway)...

This means that when installed in unix, table Foo and table foo are
unique and pretty flowers, but when installed on Windows, they are the
same name...

My recommendation is to either stick to one case, all the time, or to
quote, all the time.  I prefer to just stick to one case all the time.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] cant connect to the database, even after doing start

2006-07-13 Thread Scott Marlowe
On Thu, 2006-07-13 at 01:20, surabhi.ahuja wrote:
> Hi all,
> i have a script to stop and start postmaster
>  
> However i have noticed this many a times now. I stop postmaster using
> that script
> and then start using a script.
>  
> However if i try to do psql , it gives me an error saying that
> the postmaster is not up.
>  
> Why is it that the postmaster went down on its own, even though i had
> done a start after stop.

How are you stopping the database?  Let me guess that it's "pg_ctl -m
immediate stop".

pg_ctl --help tells us:

Shutdown modes are:
  smart       quit after all clients have disconnected
  fast        quit directly, with proper shutdown
  immediate   quit without complete shutdown; will lead to recovery on
              restart

Basically, -m immediate does a kill -9 on all the postgresql processes. 
It's an inch away from pulling the plug, except that lying hardware
still gets to flush its caches.

So, if you're stopping pgsql that way, then when it starts up, it does
so in recovery mode, and it can't allow connections until recovery is
finished.

If you're stopping it some other way though, then things might be going
wrong in some other way.
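
For comparison, a clean stop/start cycle looks something like this (the data
directory and log file paths are placeholders):

pg_ctl stop -D /path/to/data -m fast
pg_ctl start -D /path/to/data -l /path/to/logfile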

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] stored function - array parameter - how many element in array ?

2006-07-13 Thread Claire McLister
Have you considered using a set instead? We had a similar need and  
were using an array as a parameter. That turned out to be taking too  
long. Recently we have changed it to a set and it seems to work  
faster, although I must admit I haven't timed it yet. In the first  
case you call it by "select deleteEntries(1, '{1, 2, 3}');" and in
the second, "select deleteEntries(1, '(1, 2, 3)');"


The first one was defined as:

CREATE OR REPLACE FUNCTION deleteEntries(int, integer[])
RETURNS Void
AS
'
DECLARE
  G ALIAS FOR $1;
  Entries ALIAS FOR $2;
  ThisEntryId Integer;
BEGIN
  IF array_lower(Entries, 1) is NULL THEN
RETURN ''True'';
  END IF;
  FOR I IN array_lower(Entries, 1)..array_upper(Entries, 1) LOOP
   ThisEntryId := Entries[i];
   DELETE FROM Details WHERE id = ThisEntryId;
  END LOOP;
RETURN;
END;
'
LANGUAGE plpgsql
  STABLE
RETURNS NULL ON NULL INPUT;


The new one is defined as:

CREATE OR REPLACE FUNCTION deleteEntries(int, varchar)
RETURNS Void
AS
'
DECLARE
  G ALIAS FOR $1;
  Entries ALIAS FOR $2;
BEGIN
   EXECUTE ''DELETE FROM Details WHERE id IN '' || Entries;
   RETURN;
END;
'
LANGUAGE plpgsql
  STABLE
 RETURNS NULL ON NULL INPUT;

On Jul 13, 2006, at 5:38 AM, Özgür Tuğrul wrote:


hello,

the question is very clear .. when we write stored function, we can  
use array parameter as a variable  .. but, in the function, how do  
i know how many element in that array ?


i want to perform some dml operations about each one like (delete,  
update or delete)


can anyone show me the example or tell me the function name i  
should use ..


regards
-- 
-

There is no e-mail anymore .. There is Gmail :)


 --
 Claire McLister    [EMAIL PROTECTED]
 1684 Nightingale Avenue Suite 201
 Sunnyvale, CA 94087    408-733-2737 (fax)

 http://www.zeemaps.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Trying to connect to an Oracle instance...

2006-07-13 Thread A.M.
On Thu, July 13, 2006 11:03 am, Tony Caduto wrote:
> Spendius wrote:
>
>> Hi,
>> I've been trying to perform a connection to an Oracle DB for a while,
>> to no avail. Here is what I get at my psql prompt: postdb=# Select
>> dblink_connect('login','hostaddr= port=1521 \
>>
> If you are trying to connect to a Oracle DB, don't you need to be using
> DBI Link instead of DBlink?
> I thought DBLink was just for Postgresql databases?

That's true. But why should dblink crash? That's a bug.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] How to insert .xls files into

2006-07-13 Thread Guy Fraser
On Thu, 2006-13-07 at 06:52 -0700, Adrian Klaver wrote:
> Dates don't transfer correctly. When I try it the date moves two days ahead. 

Thanks for the heads up, I will continue to avoid using 
Open Office for Spreadsheet and DB activities. I 
prefer Gnumeric for Spreadsheet activities, and am very 
comfortable using psql for db activities.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Database connectivity using ECPG

2006-07-13 Thread Jasbinder Bali
Hi,

I'm using ECPG to connect to my postgres database. Doing it for the very
first time in my life. The name of my database is postgres and uses a
trusted connection. So, i don't think it needs any authentication
parameters like a user name and password.

I have the following line of code in my .pgc file for the database
connectivity:

EXEC SQL CONNECT TO postgres;

After compiling my .c file, i get the following error:

In function `main': ecpg_test.c:(.text+0x5d): undefined reference to `ECPGconnect'

Can any1 tell me wot is this all about???

EXEC SQL CONNECT TO postgres; gets converted to
{ ECPGconnect(__LINE__, 0, "postgres" , NULL, NULL , NULL, 0); }
after ECPG does its internal conversion from .pgc to .c

Regards,
~Jas


Re: [GENERAL] Database connectivity using ECPG

2006-07-13 Thread Merlin Moncure

On 7/13/06, Jasbinder Bali <[EMAIL PROTECTED]> wrote:

Hi,

I'm using ECPG to connect to my postgres database. Doing it for the very
first time in my life.

 In function `main':ecpg_test.c:(.text+0x5d): undefined
reference to `ECPGconnect'


it looks like you are not linking to the ecpg library.  you need to
build it and link to it i think.  i don't know off the top of my head
if you can get ecpg library without building from source.

merlin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Database connectivity using ECPG

2006-07-13 Thread John Purser
On Thu, 13 Jul 2006 13:22:49 -0400
"Jasbinder Bali" <[EMAIL PROTECTED]> wrote:

> Hi,
> 
> I'm using ECPG to connect to my postgres database. Doing it for the
> very first time in my life.
> 
> The name of my database is postgres and uses a trusted connection.
> So, i don't think it needs any authentication parameteres like a
> user name and password.
> 
> I have the follwoing line of code in my .pgc file for the database
> connectivity:-
> 
> EXEC SQL CONNECT TO postgres;
> 
> after compiling my .c file, i get the following error
> 
> In function `main':ecpg_test.c:(.text+0x5d): undefined reference to
> `ECPGconnect'
> 
> Can any1 tell me wot is this all about???
> 
> EXEC SQL CONNECT TO postgres; gets converted to
> { ECPGconnect(__LINE__, 0, "postgres" , NULL,NULL , NULL, 0); }
> after ECPG does its internal conversion from .pgc to .c
> 
> Regards,
> ~Jas
> 

It has been my impression that PostgreSQL REALLY likes to know who it's
talking to.  I'd include a user name with my connection string at the
least.

John Purser

-- 
Learn to pause -- or nothing worthwhile can catch up to you.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Database connectivity using ECPG

2006-07-13 Thread Shoaib Mir
Hi Jasbinder,

You will have to build the libecpg lib from source first and then you can
build your C source (got from .pgc file using ecpg) using something like this:

cc -I  -c sample.c
cc -o sample sample.o -L  -lecpg

Thanks,
Shoaib Mir
EnterpriseDB
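
For reference, if ecpg and libecpg are already installed, a typical build
sequence looks something like this (file names are placeholders; pg_config
just supplies the PostgreSQL include and library paths):

ecpg ecpg_test.pgc                                             # produces ecpg_test.c
cc -I"$(pg_config --includedir)" -c ecpg_test.c
cc -o ecpg_test ecpg_test.o -L"$(pg_config --libdir)" -lecpg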



[GENERAL] Is there a way to run tables in RAM?

2006-07-13 Thread Roy Souther




I would like to know if there is anyway to move a section of some tables into RAM to work on them.

I have a large table, about 700MB or so and growing. I also have a bizarre collection of queries that run hundreds of queries on a small section of this table. These queries only look at about 100 or so records at a time and they run hundreds of queries on the data looking for patterns. This causes the program to run very slowly because of hard drive access time. Sometimes it needs to write changes back to the records it is working with.

Is there anyway that I can move a few hundred records of the table into RAM and work on them there? It would be much faster.

Is there anyway to create a temporary table that will only exist in RAM and not be written to the hard drive? Or do temporary tables already do that?

If I can use a temporary table this way is there any simple way to merge the changes back into the original table?







Royce Souther
www.SiliconTao.com
Let Open Source help your business move beyond.

For security this message is digitally authenticated by GnuPG.















Re: [GENERAL] Trying to connect to an Oracle instance...

2006-07-13 Thread Joe Conway

A.M. wrote:

On Thu, July 13, 2006 11:03 am, Tony Caduto wrote:


Spendius wrote:

I've been trying to perform a connection to an Oracle DB for a while,
to no avail. Here is what I get at my psql prompt: postdb=# Select
dblink_connect('login','hostaddr= port=1521 \


If you are trying to connect to a Oracle DB, don't you need to be using
DBI Link instead of DBlink?
I thought DBLink was just for Postgresql databases?


That's true. But why should dblink crash? That's a bug.



Umm, where does he say anything about a crash? I see this from the 
original email:


> Here is what I get at my psql prompt:
> postdb=# Select dblink_connect('login','hostaddr= port=1521 \
> dbname=orasid user=mylogin password=mypwd');
> ERROR:  could not establish connection
> DETAIL:  server closed the connection unexpectedly
>This probably means the server terminated abnormally
>before or while processing the request.

That's a properly handled error. dblink is merely echoing the error 
message propagated up by libpq. This doesn't look to me as though the 
session *running* dblink crashed. Apparently libpq believes that the 
*other end* of the connection was terminated unexpectedly, but 
considering it was an Oracle instance that it was communicating with, 
I'm not sure what it could/should do better. In any case, the Oracle 
side of this connection is what "closed unexpectedly".


Joe

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Is there a way to run tables in RAM?

2006-07-13 Thread Martijn van Oosterhout
On Thu, Jul 13, 2006 at 11:44:43AM -0600, Roy Souther wrote:
> I would like to know if there is anyway to move a section of some tables
> into RAM to work on them.
> 
> I have large table, about 700MB or so and growing. I also have a bizarre
> collection of queries that run hundreds of queries on a small section of
> this table. These queries only look at about 100 or so records at a time
> and they run hundreds of queries on the data looking for patterns. This
> causes the program to run very slowly because of hard drive access time.
> Some times it needs to write changes back to the records it is working
> with.

Increase your shared buffers so it can hold all the data at once. Even
then I'm surprised, because any excess data should have been buffered
by the OS. Have you looked at what the traffic actually is that is
being generated?
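
For example (values are illustrative; changing shared_buffers requires a
server restart):

SHOW shared_buffers;
-- pre-8.2 this is a count of 8 kB buffers, not a byte size; raise it in
-- postgresql.conf, e.g.:
--   shared_buffers = 65536       # 65536 x 8 kB = 512 MB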

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




[GENERAL] SPI based table locks.

2006-07-13 Thread Chris Bowlby

Hi All,

I've been working on a small module that I will be pluging into my 
local PostreSQL 8.x database and am in need of doing some table locking. 
At this time, I've used various other examples to no avail and was 
wondering what the proper method for aquiring a table lock within the 
module would be?


For example I am using an SPI based module:

static void mytest(void) {
 LOCKMETHODID    localLockTableId = INVALID_LOCKMETHOD;
 LOCKTAG localtag;

 memset(&localtag, 0, sizeof(localtag));
 localtag.relId = XactLockTableId;
 localtag.dbId = 1;
 localtag.objId.xid = InvalidTransactionId;

 if (!LockAcquire(LocalLockTableId, &localtag, 
GetCurrentTransactionId(), Sharelock, false)) {

  elog(ERROR, "mytest: did not acquire table lock");
 }

 

 if(!LockRelease(LocalLockTableId, &localtag, 
GetCurrentTransactionId(), Sharelock)) {

  elog(ERROR, "mytest: could not release lock");
 }
}

I know there is something I am missing and would appreciate any help. I 
believe I need to initialize the LocalLockTableId, but I have not been 
able to find any examples of that being done. Could someone look this 
over and point me in the right direction?
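
For comparison, the more common way to take a table-level lock from backend/SPI
code on 8.x is heap_open()/heap_close() with the relation's OID; a rough sketch
(the function name and exact header set here are assumptions, adjust to your tree):

#include "postgres.h"
#include "access/heapam.h"
#include "storage/lmgr.h"
#include "storage/lock.h"

/* Sketch only: lock a table by OID for the duration of this call.
 * heap_open() blocks until the requested lock is granted; passing
 * NoLock to heap_close() instead would keep the lock until commit. */
static void
lock_and_work(Oid relid)
{
    Relation rel = heap_open(relid, ShareLock);

    /* ... do the work that needs the table locked ... */

    heap_close(rel, ShareLock);
}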





---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Is there a way to run tables in RAM?

2006-07-13 Thread Joshua D. Drake


Is there anyway to create a temporary table that will only exist in RAM 
and not be written to the hard drive? Or do temporary tables already do 
that?


That is what temp tables do... but you should only use them for 
throwaway data :)


Joshua D. Drake




If I can use a temporary table this way is there any simple way to merge 
the changes back into the original table?


Royce Souther
www.SiliconTao.com
Let Open Source help your business move beyond.

For security this message is digitally authenticated by GnuPG.








--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] I need help creating a query

2006-07-13 Thread Sergio Duran
Hello,

I need a little help creating a query. I have two tables, worker and
position; for simplicity's sake worker only has its ID and its name, and
position has the ID of the worker, the name of his position, a date, and
his salary.

worker:   worker_id, name
position: position_id, worker_id, position, startdate, salary

If I perform a query joining both tables, I can obtain all the workers and
the positions they've had.

SELECT name, startdate, position, salary FROM worker JOIN position USING(worker_id);

worker1 | 2001-01-01 | boss       | 99
worker2 | 2001-01-01 | cleaning   | 100
worker2 | 2006-04-01 | programmer | 2
worker2 | 2006-07-04 | management | 25000

So far so good. Now I need to obtain all the workers only with the position
they had on a given date. If I wanted to know the positions on '2006-05-01'
it would return:

worker1 | 2001-01-01 | boss       | 99
worker2 | 2006-04-01 | programmer | 2

Maybe I only need some insight here, maybe it should be done with 2 queries,
maybe it should be done with functions, I don't know.

Thanks


Re: [GENERAL] I need help creating a query

2006-07-13 Thread Dann Corbit








The query date supplied should be applied against start date and then take
the minimum record from that set.


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Sergio Duran
Sent: Thursday, July 13, 2006 12:20 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] I need help creating a query

Hello,

I need a little help creating a query, I have two tables, worker and position,
for simplicity sake worker only has its ID and its name, position has the ID of
the worker, the name of his position, a date, and his salary.

worker:   worker_id, name
position: position_id, worker_id, position, startdate, salary

If I perfom a query joining both tables, I can obtain all the workers and the
positions the've had.

SELECT name, startdate, position, salary FROM worker JOIN position USING(worker_id);
worker1 | 2001-01-01 | boss       | 99
worker2 | 2001-01-01 | cleaning   | 100
worker2 | 2006-04-01 | programmer | 2
worker2 | 2006-07-04 | management | 25000

so far so good, now I need to obtain all the workers only with the position
they had on a given date.
if I wanted to know the positions on '2006-05-01' it would return
worker1 | 2001-01-01 | boss       | 99
worker2 | 2006-04-01 | programmer | 2

Maybe I only need some insight here, maybe it should be done with 2 queries,
maybe it should be done with functions, I don't know.

Thanks
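
One way to express "the latest position that started on or before the given
date", sketched with PostgreSQL's DISTINCT ON (assuming the worker/position
layout shown above):

SELECT DISTINCT ON (w.worker_id)
       w.name, p.startdate, p.position, p.salary
FROM worker w
JOIN position p USING (worker_id)
WHERE p.startdate <= DATE '2006-05-01'
ORDER BY w.worker_id, p.startdate DESC;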










[GENERAL] Clustering and backup with large objects

2006-07-13 Thread Marco Bizzarri

Hi all.

I'm working on a document management application (PAFlow). The
application is Zope based and uses PostgreSQL as its (main) storage
system.

PostgreSQL must contain both profile data for documents and the
documents themselves. Documents are stored as large objects in
PostgreSQL.

Up to now, we've done backups using the pg_dump, and that was fine.
However, a number of installations have databases which have backups
which are increasingly large. Therefore, making a complete backup (and
a restore) is more and more time consuming.

PostgreSQL, at the moment, is 7.4.x We will move to newer version, but
I think we will not be able to migrate all customers to 8.1.x soon.

I've read the chapter on backups and large backups. Is there any
strategy for doing large backups, aside from those mentioned in the
documentation?

I would also like to ask possible solutions for clustering under
PostgreSQL. My use case scenario would be the following:

1) application makes comparably few writes wrt reads (1 to 10);
2) application is multithreaded, and any thread can do read and write;
3) database contains large objects (as mentioned before);
4) clustering is done for improving performance, rather than availability.

Thanks for your attention.

Regards
Marco
--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] I need help creating a query

2006-07-13 Thread Richard Broersma Jr
> 
> worker:   worker_id, name
> position: position_id, worker_id, position, startdate, salary
> 
> If I perfom a query joining both tables, I can obtain all the workers and
> the positions the've had.
> 
> SELECT name, startdate, position,  salary FROM worker JOIN position
> USING(worker_id);
> worker1 | 2001-01-01 | boss | 99
> worker2 | 2001-01-01 | cleaning| 100
> worker2 | 2006-04-01 | programmer   | 2
> worker2 | 2006-07-04 | management | 25000
> 
> so far so good, now I need to obtain all the workers only with the position
> they had on a given date.
> if I wanted to know the positions on '2006-05-01' it would return
> worker1 | 2001-01-01 | boss | 99
> worker2 | 2006-04-01 | programmer   | 2
> 

This is just a quick guess. I am not sure if the logic is correct but it could 
be a starting
point.

select P2.worker_id, P2.pdate, P1.position, P1.salary
from position as P1
  join
  (select worker_id, max(startdate) as pdate
   from position
   where startdate <= '2006-05-01'
   group by worker_id) as P2
on (P1.worker_id = P2.worker_id) and (P1.startdate = P2.pdate)
;

Regards,

Richard Broersma Jr.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Is there a way to run tables in RAM?

2006-07-13 Thread Merlin Moncure

On 7/13/06, Roy Souther <[EMAIL PROTECTED]> wrote:



 I would like to know if there is anyway to move a section of some tables into 
RAM to work on them.


the operating system should cache the table if there is memory
available. while disk cache is somewhat slower than raw memory read,
now you have memory for lots of other  things, like sorting.

merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] How to insert .xls files into

2006-07-13 Thread lanczos
> [mailto:[EMAIL PROTECTED] On Behalf Of Adrian Klaver
> 
> Dates don't transfer correctly. When I try it the date moves 
> two days ahead. 

I just checked it, on my computer it works fine, no date shift.

Regards

Tomas

> 
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] How come index isn't being used when query by function return value?

2006-07-13 Thread Joseph Shraibman

db:db=>explain select * from elog where id = eds('2006-01-01');
QUERY PLAN
---
 Seq Scan on elog  (cost=0.00..1894975.10 rows=1 width=204)
   Filter: (id = eds('2006-01-01 00:00:00'::timestamp without time zone))
(2 rows)

db:db=>explain select * from elog, (select eds('2006-01-01') as trg) aa 
where id = trg;

  QUERY PLAN
--
 Nested Loop  (cost=0.00..4.36 rows=1 width=208)
   ->  Subquery Scan aa  (cost=0.00..0.02 rows=1 width=4)
         ->  Result  (cost=0.00..0.01 rows=1 width=0)
   ->  Index Scan using elog_pkey on elog  (cost=0.00..4.33 rows=1 width=204)
         Index Cond: (elog.id = "outer".trg)
(5 rows)

Time: 0.978 ms
db:db=>select version();
 version
--
 PostgreSQL 8.0.8 on x86_64-unknown-linux-gnu, compiled by GCC gcc 
(GCC) 3.4.4 20050721 (Red Hat 3.4.4-2)

(1 row)


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] How come index isn't being used when query by function return value?

2006-07-13 Thread Merlin Moncure

On 7/13/06, Joseph Shraibman  wrote:

db:db=>explain select * from elog where id = eds('2006-01-01');
 QUERY PLAN
---
  Seq Scan on elog  (cost=0.00..1894975.10 rows=1 width=204)
Filter: (id = eds('2006-01-01 00:00:00'::timestamp without time zone))
(2 rows)


Is eds an immutable function?

Merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Is there a way to run tables in RAM?

2006-07-13 Thread Tim Hart

I can’t say for certain from the detail
you’ve given, but partial indexes may be an acceptable solution to your
problem.

http://www.postgresql.org/docs/8.1/interactive/indexes-partial.html
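
For illustration, a minimal sketch with hypothetical table and column names
(the idea is to index only the slice of rows your pattern queries touch):

CREATE INDEX bigtable_batch42_idx ON bigtable (record_id)
    WHERE batch_id = 42;

Queries whose WHERE clause implies batch_id = 42 can then use this much
smaller index.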
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Roy Souther
Sent: Thursday, July 13, 2006 12:45 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Is there a way to run tables in RAM?

I would like to know if there is any way to move a section of some tables
into RAM to work on them.

I have a large table, about 700MB or so and growing. I also have a bizarre
collection of queries that run hundreds of queries on a small section of this
table. These queries only look at about 100 or so records at a time, and they
run hundreds of queries on that data looking for patterns. This causes the
program to run very slowly because of hard drive access time. Sometimes it
needs to write changes back to the records it is working with.

Is there any way I can move a few hundred records of the table into RAM and
work on them there? It would be much faster.

Is there any way to create a temporary table that will only exist in RAM and
not be written to the hard drive? Or do temporary tables already do that?

If I can use a temporary table this way, is there any simple way to merge the
changes back into the original table?

Royce Souther
www.SiliconTao.com
Let Open Source help your business move beyond.

For security this message is digitally authenticated by GnuPG.


Re: [GENERAL] How come index isn't being used when query by function return value?

2006-07-13 Thread Joseph Shraibman
It is STABLE, which I finally figured out.  I had to find section 31.6 
of the docs, which is nowhere near the part about writing functions.
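
For anyone following along, a minimal sketch of how volatility is declared
and checked (the function body below is purely hypothetical; only the
STABLE keyword matters here):

CREATE OR REPLACE FUNCTION eds(timestamp) RETURNS integer AS $$
    SELECT 1;   -- hypothetical placeholder body
$$ LANGUAGE sql STABLE;

-- recorded volatility: i = immutable, s = stable, v = volatile
SELECT provolatile FROM pg_proc WHERE proname = 'eds';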


Merlin Moncure wrote:

On 7/13/06, Joseph Shraibman  wrote:

db:db=>explain select * from elog where id = eds('2006-01-01');
 QUERY PLAN
---
  Seq Scan on elog  (cost=0.00..1894975.10 rows=1 width=204)
    Filter: (id = eds('2006-01-01 00:00:00'::timestamp without time zone))
(2 rows)


is eds immutable function?

Merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Is there a way to run tables in RAM?

2006-07-13 Thread Karen Hill

Roy Souther wrote:
> I would like to know if there is any way to move a section of some tables
> into RAM to work on them.
>
> I have a large table, about 700MB or so and growing. I also have a bizarre
> collection of queries that run hundreds of queries on a small section of
> this table. These queries only look at about 100 or so records at a time,
> and they run hundreds of queries on that data looking for patterns. This
> causes the program to run very slowly because of hard drive access time.
> Sometimes it needs to write changes back to the records it is working
> with.
>
> Is there any way I can move a few hundred records of the table into
> RAM and work on them there? It would be much faster.
>
> Is there any way to create a temporary table that will only exist in RAM
> and not be written to the hard drive? Or do temporary tables already do
> that?
>

If you are using Linux, you could create a ramdisk (e.g. tmpfs) and put a
PostgreSQL tablespace on it (tablespaces require 8.0 or later).
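
A minimal sketch, assuming a tmpfs ramdisk is already mounted at /mnt/pgram
and that you are on 8.0 or later (anything stored there is lost on a reboot
or crash):

CREATE TABLESPACE ram_space LOCATION '/mnt/pgram';
CREATE TABLE scratch_records (id integer, payload text) TABLESPACE ram_space;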

regards,


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Is there a way to run tables in RAM?

2006-07-13 Thread Ron Johnson

Roy Souther wrote:
> I would like to know if there is any way to move a section of some tables
> into RAM to work on them.
> 
> I have a large table, about 700MB or so and growing. I also have a bizarre
> collection of queries that run hundreds of queries on a small section of
> this table. These queries only look at about 100 or so records at a time,
> and they run hundreds of queries on that data looking for patterns. This
> causes the program to run very slowly because of hard drive access time.
> Sometimes it needs to write changes back to the records it is working
> with.
> 
> Is there any way I can move a few hundred records of the table into
> RAM and work on them there? It would be much faster.
> 
> Is there any way to create a temporary table that will only exist in RAM
> and not be written to the hard drive? Or do temporary tables already do
> that?
> 
> If I can use a temporary table this way, is there any simple way to merge
> the changes back into the original table?

What if you COPY the data out, sort(1) it by the most common fields in your
WHERE clauses, and COPY it back in?

That way you get locality of data, so the desired rows are more likely to be
in the OS block cache.
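
Roughly, a sketch of that idea (table, column, and file names are
hypothetical; COPY to and from a server-side file requires superuser, and in
practice you would wrap the reload in a transaction):

COPY bigtable TO '/tmp/bigtable.txt';
-- sort the file on the most frequently filtered column with sort(1), e.g.
--   sort -k2,2 /tmp/bigtable.txt > /tmp/bigtable.sorted
TRUNCATE bigtable;
COPY bigtable FROM '/tmp/bigtable.sorted';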

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Timestamp vs timestamptz

2006-07-13 Thread Antimon
Hi,
I'm working on a web project with pgsql. I used MySQL before and stored
epoch values in the database, so I'm not familiar with these data types.

What I want to ask is: if I don't need to display timestamps in different
time zones, should I use timestamptz anyway? I'm going to need timestamp
columns on some tables for internal calculations and things like delayed
actions: adding a row with a timestamp 10 minutes in the future, checking
every minute, fetching the rows whose time has elapsed, and processing
them. They are never displayed to users.
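
For concreteness, a minimal sketch of that pattern (table and column names
are made up):

CREATE TABLE delayed_actions (
    id      serial PRIMARY KEY,
    action  text NOT NULL,
    run_at  timestamptz NOT NULL DEFAULT now() + interval '10 minutes'
);

-- fetch the actions that are due, independent of the server's TimeZone setting
SELECT id, action FROM delayed_actions WHERE run_at <= now();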

Would there be any advantage to using the timestamptz type for this? My
thought was that it would only matter if I moved the website to another
server in a different time zone; then all my timestamps and delayed actions
would still be accurate relative to the system clock.

Should I just use the timestamp type, or timestamptz?

Thanks.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Function Help

2006-07-13 Thread Jamie Deppeler

Hi,

I am having a problem with a delete function. The function involves three
tables (tablea, tableb, tablec): tablec stores the actual values, while
tablea and tableb store summaries. When an entry in tablec is deleted, I
update the values in tablea and tableb. The problem is that when I do a
cascade delete on tablea, the delete trigger on tablec generates an error
saying it cannot find the record in tablea.
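
For discussion, a speculative sketch of that kind of setup (all names,
types, and the trigger logic here are hypothetical guesses, not the actual
code):

CREATE TABLE tablea (id integer PRIMARY KEY, total numeric DEFAULT 0);
CREATE TABLE tablec (
    id    serial PRIMARY KEY,
    a_id  integer REFERENCES tablea(id) ON DELETE CASCADE,
    value numeric
);

CREATE FUNCTION tablec_del_trig() RETURNS trigger AS $$
BEGIN
    UPDATE tablea SET total = total - OLD.value WHERE id = OLD.a_id;
    IF NOT FOUND THEN
        -- during a cascaded delete from tablea, the parent row is already
        -- gone by the time this trigger runs, so this branch is reached
        RAISE EXCEPTION 'record % not found in tablea', OLD.a_id;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tablec_del AFTER DELETE ON tablec
    FOR EACH ROW EXECUTE PROCEDURE tablec_del_trig();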


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Timestamp vs timestamptz

2006-07-13 Thread David Fetter
On Thu, Jul 13, 2006 at 04:35:20PM -0700, Antimon wrote:
> Hi,
> I'm working on a web project with pgsql, i did use mysql before and
> stored epoch in database so i'm not familiar with these datatypes.
> 
> What i wanna ask is, if i don't need to display timestamps in
> different timezones,

Not this week, but who knows about next week?

> shall i use timestamptz anyway?

Yes.  Timestamptz is the one. :)

Cheers,
D
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] duplicated values on primary key field on reindex

2006-07-13 Thread Tom Lane
"Weerts, Jan" <[EMAIL PROTECTED]> writes:
> Scott Marlowe wrote:
>> Can you get set of fields in that row to uniquely identify it by?
>> 
>> If so, see if you can update that column to something else and
>> continue 

> The only way would be to update by primarykey. But since the
> select on the primarykey field shows this "strange" ordering,
> I wonder, what effect an update would have.

CTID always works:

SELECT ctid, otherstuff FROM table WHERE ... ;

eyeball otherstuff to determine row you wish to hack

UPDATE table SET ... WHERE ctid = '...';

Note: the act of UPDATE changes the row's ctid, don't be surprised.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Need help with quote escaping in exim for postgresql

2006-07-13 Thread Tom Lane
[ Coming late to the thread... ]

Steve Atkins <[EMAIL PROTECTED]> writes:
> Fortunately all this stuff is MUA-side, not MTA-side, so exim
> should ignore it. SQL_ASCII all the way.

I concur.  The recent encoding fixes are for the situation where the
database server believes a multibyte encoding is in use, but the client
code is ignorant of that encoding and either (a) sends invalidly encoded
data or (b) does escaping that mangles multibyte characters.

If your client-side code is encoding agnostic, then using SQL_ASCII
(which is also effectively encoding agnostic) for both client_encoding
and server_encoding will work nicely.

A possibly safer choice is to use LATIN1 (or another single-byte
encoding) instead; this will avoid problems if someone connects to the
database with a client_encoding other than SQL_ASCII and expects data
to be delivered to him in that encoding.

I would *not* recommend using UTF8 if you want to store arbitrary data
without worrying about encoding issues.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Timestamp vs timestamptz

2006-07-13 Thread Agent M
But watch out! This is mentioned in the docs, but it bit me when I used
timestamp with time zone:

timestamp with time zone does not record the time zone you inserted it
with; it simply stores the value in UTC and converts it to whatever time
zone you like on demand. If the time zone itself is important data, you
will need an extra column for it. For example, flight arrival information
should probably include the time zone of the destination.
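
A quick way to see this (SET TimeZone only changes the display conversion;
the stored value is unchanged):

SET TimeZone = 'Europe/Rome';
SELECT '2006-07-13 12:00:00+00'::timestamptz;  -- 2006-07-13 14:00:00+02
SET TimeZone = 'UTC';
SELECT '2006-07-13 12:00:00+00'::timestamptz;  -- 2006-07-13 12:00:00+00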


-M

On Jul 13, 2006, at 8:04 PM, David Fetter wrote:


On Thu, Jul 13, 2006 at 04:35:20PM -0700, Antimon wrote:

Hi,
I'm working on a web project with pgsql, i did use mysql before and
stored epoch in database so i'm not familiar with these datatypes.

What i wanna ask is, if i don't need to display timestamps in
different timezones,


Not this week, but who knows about next week?


shall i use timestamptz anyway?


Yes.  Timestamptz is the one. :)


AgentM
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Timestamp vs timestamptz

2006-07-13 Thread Michael Glaesemann


On Jul 13, 2006, at 21:49 , Agent M wrote:

For example, flight arrival information should probably include the  
timezone of the destination.


Of course, for this example you'd also want to know *where* the  
flight is arriving, which would also let you know the appropriate  
time zone in an indirect way. Hopefully we'll be able to store time  
zone information with the timestamp sometime. It's been discussed  
before, but I don't believe a concrete proposal has been put forward  
yet.


Michael Glaesemann
grzm seespotcode net




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] stored function - array parameter - how many element in array ?

2006-07-13 Thread Volkan YAZICI
On Jul 13 08:28, Claire McLister wrote:
> Have you considered using a set instead? We had a similar need and  
> were using an array as a parameter. That turned out to be taking too  
> long. Recently we have changed it to a set and it seems to work  
> faster, although I must admit I haven't timed it yet.

Did you experience the same results when using the indexable (integer)
array type supplied by the intarray contrib module? It also provides lots
of useful functions and operators covering most of the functionality you'll
ever need with arrays, and those routines are written in C with
well-optimized algorithms.
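
For example, a small sketch (this assumes the intarray contrib module is
installed; the table is hypothetical):

CREATE TABLE docs (id serial PRIMARY KEY, tags int[]);
CREATE INDEX docs_tags_idx ON docs USING gist (tags gist__int_ops);

-- && is the intarray "overlap" operator; the GiST index above can serve it
SELECT id FROM docs WHERE tags && '{3,7}'::int[];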


Regards.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings