[GENERAL] Formatting TimeStamp

2005-09-08 Thread Alex

Hi,

I am using the timestamp type in various tables but don't like to see the
microseconds.
Is there a way to format the output of timestamp fields globally (in
postgresql.conf)?

Preferably I would only like to see YYYY-MM-DD HH:MM:SS.

Or can a format be passed to a SELECT *? Hmmm, probably not.

Thanks for any advice

A





[GENERAL] Formatting TimeStamp

2005-09-08 Thread Alex

Joshua,
I know that I can format it explicitly, but instead of using a SELECT *
FROM ... I then have to name the columns, which is not very convenient. Right
now I have created views for each table just to re-format the output of
the one timestamp field.


MySQL, for example, has in addition to the TIMESTAMP type a DATETIME
type that returns 'YYYY-MM-DD HH:MM:SS'. So I was wondering if there
is anything similar in Postgres, or whether I can set the precision or
format of the timestamp in postgresql.conf, or define a new type.


Alex

Joshua D. Drake wrote:


Alex wrote:


Hi,

I am using the timestamp type in various tables but don't like to see the
microseconds.
Is there a way to format the output of timestamp fields globally (in
postgresql.conf)?

Preferably I would only like to see YYYY-MM-DD HH:MM:SS.

Or can a format be passed to a SELECT *? Hmmm, probably not.



http://www.postgresql.org/docs/8.0/static/functions-datetime.html

And yes you can do it in a SELECT.

Sincerely,

Joshua D. Drake
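
For example, a sketch of the two usual approaches (assuming a table named
orders with a timestamp column created_at; to_char() formats in the query,
and a timestamp(0) column drops the fractional seconds for good):

  -- Format in the query itself:
  SELECT order_id,
         to_char(created_at, 'YYYY-MM-DD HH24:MI:SS') AS created_at
  FROM   orders;

  -- Or declare the column with zero fractional-second precision:
  CREATE TABLE orders (
      order_id   integer,
      created_at timestamp(0)   -- rounds away the microseconds
  );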





Thanks for any advice

A













[GENERAL] Function to test for Valid Date

2005-09-11 Thread Alex

Hi,
is there a way to use Postgres to check whether a provided date is valid,
returning true or false?


Thanks
Alex
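
One possible approach (a minimal sketch; the function name and signature
are illustrative, and it relies on the PL/pgSQL exception handling
available from 8.0 on): cast the text to date and trap the error.

  CREATE OR REPLACE FUNCTION is_valid_date(txt text) RETURNS boolean AS $$
  BEGIN
      PERFORM txt::date;   -- raises an error for an invalid date
      RETURN true;
  EXCEPTION WHEN others THEN
      RETURN false;
  END;
  $$ LANGUAGE plpgsql;

  SELECT is_valid_date('2005-09-11');  -- true
  SELECT is_valid_date('2005-02-30');  -- false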




[GENERAL] Question on Insert / Update

2005-11-09 Thread Alex

Hi,
have just a general question...

I have a table of 10M records, unique key on 5 fields.
I need to update/insert 200k records in one go.

I could do a select to check for existence and then either insert or update.
Or simply insert, check the error code, and update if required.

The 2nd seems to be the logical choice, but will it actually be faster,
and moreover, is that the right way to do it?


Thanks
Alex




Re: [GENERAL] Question on Insert / Update

2005-11-09 Thread Alex

Will give that a try, thanks.
I was actually interested in whether the 2nd approach is common practice, or
if there are some reasons not to do it that way.


Alex

Sean Davis wrote:


On 11/9/05 9:45 AM, "Alex" <[EMAIL PROTECTED]> wrote:

 


Hi,
have just a general question...

I have a table of 10M records, unique key on 5 fields.
I need to update/insert 200k records in one go.

I could do a select to check for existence and then either insert or update.
Or simply insert, check the error code, and update if required.

The 2nd seems to be the logical choice, but will it actually be faster,
and moreover, is that the right way to do it?



Probably the fastest and most robust way to go about this if you have the
records in the form of a tab-delimited file is to COPY or \copy (in psql)
them into a separate loader table and then use SQL to manipulate the records
(check for duplicates, etc) for final insertion into the table.

Sean
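
A sketch of that loader-table approach (all table and column names here
are illustrative; the 5-column unique key is assumed to be k1..k5):

  -- Stage the batch, then reconcile it against the target in two passes.
  CREATE TEMP TABLE staging (LIKE target);
  \copy staging FROM 'batch.tsv'

  -- Pass 1: update rows whose key already exists.
  UPDATE target
  SET    val = s.val
  FROM   staging s
  WHERE  target.k1 = s.k1 AND target.k2 = s.k2 AND target.k3 = s.k3
    AND  target.k4 = s.k4 AND target.k5 = s.k5;

  -- Pass 2: insert rows whose key does not exist yet.
  INSERT INTO target (k1, k2, k3, k4, k5, val)
  SELECT k1, k2, k3, k4, k5, val
  FROM   staging s
  WHERE  NOT EXISTS (
      SELECT 1 FROM target t
      WHERE  t.k1 = s.k1 AND t.k2 = s.k2 AND t.k3 = s.k3
        AND  t.k4 = s.k4 AND t.k5 = s.k5);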




[GENERAL] TRUNCATE Question

2005-11-09 Thread Alex

Hi,
could anyone tell me whether it is necessary to run a VACUUM after
truncating a table, or is that done automatically?


Thanks
Alex



[GENERAL] Help with COPY Error: invalid UTF-8 byte ....

2005-11-28 Thread Alex

Hi,
I am having a problem with a COPY command failing with: ERROR: invalid UTF-8
byte sequence detected...


The problem actually is that the entire copy job terminates instead of
just ignoring the record in question.


Is there a way to have only the faulty records ignored, without terminating
the entire copy job?


Alex



[GENERAL] Delete Question

2005-12-06 Thread Alex

Hi,

I have a table where I store changes made to an order. The table looks like:
ProdID, ChangeDate, Change1, Change2, ... etc.
Some ProdIDs have multiple records.

Is there an easy way to delete all records of a ProdID except the most
recent (ChangeDate is a timestamp) one? Preferably in one SQL statement?


Thanks for any advice

Alex



Re: [GENERAL] Delete Question

2005-12-06 Thread Alex

Thanks,
this one works beautifully.

Alex

PS: thanks also for the other suggestions; have a look at this one.


hubert depesz lubaczewski wrote:

On 12/7/05, Alex <[EMAIL PROTECTED]> wrote:


I have a table where I store changes made to an order. The table looks like:
ProdID, ChangeDate, Change1, Change2, ... etc.
Some ProdIDs have multiple records.
Is there an easy way to delete all records of a ProdID except the most
recent (ChangeDate is a timestamp) one? Preferably in one SQL statement?


delete from table_name where exists (select * from table_name x where 
x.prodid = table_name.prodid and x.changedate > table_name.changedate);


this should work.

depesz





[GENERAL] Some Advice needed with historical data

2005-04-14 Thread Alex
Hi,
I need to implement a solution where I need to save changes to a record
and was wondering what's the best way to implement it.

I have 2 tables: A, Subscription, which holds the original information, and
table B, Changes, which should log the changes. (Something like this:)

Table A Subscription
-
item_id (key)
customer_id
product_id
attrib_1
attrib_2

attrib_5
start_date
Table B Changes
---
item_id (fkey)
attrib_1
attrib_2

attrib_5
change_date (timestamp)
Here are a few conditions that I must meet:
- Table B can log 0, 1 or more changes per item_id
- I need to create a view that gives me the original
  record of Table A and the current values of Table B.
I can think of 2 solutions:
A) I create a stored procedure to return the data when selecting an item_id
B) I add a new key (pointer) to table A which points to the latest
   update in table B; if the key is NULL, no change has been logged yet.
Is there another way to implement that? A) seems to be easier, but how
about the performance? Can A) be done with a simple view too?

Any suggestion is appreciated
Thanks
Alex
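
For what it's worth, option A) can also be expressed as a plain view (a
sketch using the column lists above; DISTINCT ON keeps only the newest
Changes row per item_id, and COALESCE falls back to the original value
where no change was logged):

  CREATE VIEW subscription_current AS
  SELECT s.item_id,
         s.customer_id,
         s.product_id,
         COALESCE(c.attrib_1, s.attrib_1) AS attrib_1,
         COALESCE(c.attrib_2, s.attrib_2) AS attrib_2,
         COALESCE(c.attrib_5, s.attrib_5) AS attrib_5,
         s.start_date,
         c.change_date
  FROM   subscription s
  LEFT JOIN (SELECT DISTINCT ON (item_id) *
             FROM   changes
             ORDER  BY item_id, change_date DESC) c
         ON c.item_id = s.item_id;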











[GENERAL] SQL Question

2005-04-15 Thread Alex
Hi,
I have a table:
ProdId | LastUpdate
-------+-----------
100    | 2005-04-01
100    | 2005-03-01
100    | 2005-02-01
200    | 2005-04-01
200    | 2005-03-01
200    | 2005-02-01
- How can I select only the newest record for each ProdId?
100    | 2005-04-01
200    | 2005-04-01
- How can I retrieve the last 2 dates per ProdId in one record?
100    | 2005-04-01 | 2005-03-01
200    | 2005-04-01 | 2005-03-01
Thanks
Alex



Re: [GENERAL] SQL Question

2005-04-17 Thread Alex
Julian Scarfe wrote:
From: "Alex" <[EMAIL PROTECTED]>
- How can I select only the newest record for each ProdId?
100| 2005-04-01
200| 2005-04-01
DISTINCT ON was made for this and on the similar tables I have 
performs rather more efficiently than using a subquery.

select distinct on (ProdId) ProdId , LastUpdate
from produpdate
order by ProdId , LastUpdate desc;
- How can I retrieve the last 2 dates per ProdId in one record?
100| 2005-04-01 | 2005-03-01
200| 2005-04-01 | 2005-03-01

To get the previous one, my first thought is something like:
select distinct on (ProdId) ProdId , LastUpdate
from produpdate p1
where LastUpdate <> (
   select max(LastUpdate ) from produpdate p2 where p2.ProdId = p1.ProdId
)
order by ProdId , LastUpdate desc ;
but there may be a much more efficient way of getting the nth result 
in general.

Julian Scarfe
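
As an illustration of the "nth result" idea, a sketch that returns the
latest and previous date in one row (using the produpdate table above;
the correlated subquery picks the newest date strictly before the latest):

  SELECT DISTINCT ON (ProdId)
         ProdId,
         LastUpdate AS latest,
         (SELECT max(p2.LastUpdate)
          FROM   produpdate p2
          WHERE  p2.ProdId = p1.ProdId
            AND  p2.LastUpdate < p1.LastUpdate) AS previous
  FROM   produpdate p1
  ORDER  BY ProdId, LastUpdate DESC;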

Thanks for the help, will give it a try.
Alex


[GENERAL] Help with a plperl function

2005-04-18 Thread Alex
Hi,
I am having a problem with a plperl function. The function below always
returns an error saying "elements of Perl result array must be
reference to hash".

Can anyone point out what I am doing wrong here?
Basically I try to return data from a select in a specific record format.
(I know that the below example does not require a function; I just want to
get a feel for it.)

Thanks in advance for any tips.
Alex
DROP FUNCTION get_item ();
DROP TYPE my_items;
CREATE TYPE my_items AS (
 item_id INTEGER, 
 description TEXT,
 order_date  DATE);

CREATE FUNCTION get_item () RETURNS SETOF my_items AS $$
 my $res = [];
 my $rv = spi_exec_query('SELECT * FROM items LIMIT 10');
 my $nrows = $rv->{processed};
   foreach my $rn (0 .. $nrows - 1) {
   my $row = $rv->{rows}[$rn];
   my @record;
   $record[0] = $row->{item_id};
   $record[1] = $row->{item_descr};
   $record[2] = $row->{start_date};
   push @$res, @record;
   }
   return $res;
$$ LANGUAGE plperl;
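
For reference, the error means each element of the returned array must be
a hash reference keyed by the attribute names of the composite type, not a
flat list pushed with "push @$res, @record". A corrected sketch of the
same function:

  CREATE OR REPLACE FUNCTION get_item () RETURNS SETOF my_items AS $$
      my $res = [];
      my $rv = spi_exec_query('SELECT * FROM items LIMIT 10');
      foreach my $rn (0 .. $rv->{processed} - 1) {
          my $row = $rv->{rows}[$rn];
          # one hash reference per row, keys matching my_items
          push @$res, {
              item_id     => $row->{item_id},
              description => $row->{item_descr},
              order_date  => $row->{start_date},
          };
      }
      return $res;
  $$ LANGUAGE plperl;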


[GENERAL] Help with plperl

2005-08-21 Thread Alex

Hi,
could anyone help me with a problem I am having creating a plperl function?

The return always fails with the following error:
composite-returning Perl function must return reference to hash

Thanks
Alex

CREATE TYPE sometype AS
  ( jid        INTEGER,
    start_time TEXT,
    descr      TEXT
  );

CREATE OR REPLACE FUNCTION testq() RETURNS sometype AS $$

 my $results = [];
 my $query = "SELECT jid, start_time FROM schedule LIMIT 10";

 $rv = spi_exec_query($query);
 elog(INFO, "QUERY: $query");
 elog(INFO, "ROWS: $rv->{processed}");

 for ( $i = 0; $i < $rv->{processed}; $i++ ) {
   my $row = $rv->{rows}[$i];
   push @$results, { jid=> $row->{jid},
 start_time => $row->{start_time},
 descr => 'Test' };
 }
 return $results;

$$ LANGUAGE plperl;
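
The likely fix (a sketch): a function declared to return a bare composite
must return a single hash reference, so to return several rows it should
be declared SETOF, after which a reference to an array of hash references
is accepted:

  CREATE OR REPLACE FUNCTION testq() RETURNS SETOF sometype AS $$
      my $results = [];
      my $rv = spi_exec_query("SELECT jid, start_time FROM schedule LIMIT 10");
      for my $i (0 .. $rv->{processed} - 1) {
          my $row = $rv->{rows}[$i];
          push @$results, { jid        => $row->{jid},
                            start_time => $row->{start_time},
                            descr      => 'Test' };
      }
      return $results;   # one hash reference per row
  $$ LANGUAGE plperl;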





[GENERAL] DBD::PgSPI 0.02

2004-12-05 Thread alex
Hello,

A short note that I've uploaded DBD::PgSPI version 0.02 to CPAN.

There are no new features - but the code now expects (and works with) 
reasonably decent versions of perl (5.8.x) and pgsql (8.x). 

No warranty is given; this code compiles and 'scratches my itch'. If it
happens to scratch yours, the more the merrier.

-alex




Re: [GENERAL] [HACKERS] DBD::PgSPI 0.02

2004-12-05 Thread alex
On Sun, 5 Dec 2004, Michael Fuhr wrote:

> Using PostgreSQL 8.0.0rc1 and Perl 5.8.6 on FreeBSD 4.10-STABLE and
> Solaris 9, I had to make a couple of changes to get DBD::PgSPI to
> build:
> 
> 1. Add -I$POSTGRES_HOME/include/server to Makefile.PL.  Otherwise
> the build fails with:
You should point POSTGRES_HOME to "src" directory of your pgsql tree.

> 
>   In file included from PgSPI.xs:14:
>   PgSPI.h:16:22: postgres.h: No such file or directory
>   PgSPI.h:17:21: funcapi.h: No such file or directory
>   PgSPI.h:18:26: executor/spi.h: No such file or directory
> 
> 2. Remove or comment out #include "ppport.h" from PgSPI.h.  Neither of
> my systems have this file and the module builds without it.
Strange that 'make tardist' didn't include it. I'm not sure whether it's even
required for backward compatibility (to perl 5.4, for example),
or whether I even care about backward compatibility. I'll remove
it in the next release, I suppose.

-alex





[GENERAL] DB Slowing Down

2004-12-16 Thread Alex
Hi,
we have a database, not too big: about 50 tables, 20M records. On a daily
basis we update/insert/delete between 500k and 1M records in total. We
run a full vacuum every night. The db grows by probably 200k records
each day, and at the end of the month it gets cleaned up.
We run 7.4.1 and 7.4.6.

Now, after a few weeks the db starts to slow down; after about 2 months the
same operations take 2-3 times as long as originally. The only way to
speed it up again is to drop the entire database, run an initdb, and
create and restore the database.

Is that expected behavior?
Any help or suggestions are appreciated.
Thanks
Alex


Re: [GENERAL] What HW / OS is recommended

2004-12-16 Thread Alex
Hmm...
I read that RAID5 is suggested over RAID1. HW vendors told us that as well.
PHP :-) is not an option, and I don't believe Perl is a bottleneck either.

Michael Ben-Nes wrote:
I think, and please correct me, that Postgres loves RAM: the more the
better.

Anyway, RAID5 is awful at writing; go with RAID1 (mirroring).
I use Debian Sarge and I'm very happy.
Perl is very slow, maybe you can use PHP?
Alex wrote:
Hi,
we are planning to upgrade our servers, but deciding on the right
configuration seems to be quite difficult.

As for the system: about 50 tables, 20M records, growing by about
500k-1M per month.
The system mostly loads data from files (perl batch jobs) and
generates client files. Jobs generally don't run at the same time, but
timely loading/delivery is very important. We also run tomcat with no
more than 5-10 concurrent users connecting (mostly browsing data).

We are currently looking at Dell / HP,
but the questions are:
- how many processors (2 or 4)?
- do we gain anything with 4 CPUs if we probably never have more than a few
  users connected?
- what processors are recommended: Opteron / Xeon / Itanium?
- how much memory? 2GB? 4GB?
- disks: I guess we go with RAID5, 15k SCSI
- what OS? SuSE / RHEL3 / Fedora / ...
- disk controller?
Currently we run it on a Dell Blade, dual P3 1.4GHz with 1GB memory.
Adding 1GB of memory did not actually bring much performance gain.
Does anyone have some first-hand experience? Can anybody point me to
some resources? Or recommend certain systems?
What kind of performance gain can be expected going from a P3 to a
higher-end processor?
Thanks for any suggestions

Alex





Re: [GENERAL] What HW / OS is recommended

2004-12-16 Thread Alex
The comment about the HW vendor was regarding the RAID configuration, not
the software.

Geoffrey wrote:
Alex wrote:
Hmm...
I read that RAID5 is suggested over RAID1. HW vendors told us that as well.
PHP :-) is not an option, and I don't believe Perl is a bottleneck
either.

Why would your HW vendor be stipulating the software you use?



Re: [GENERAL] What HW / OS is recommended

2004-12-16 Thread Alex
We use perl for the heavy batch jobs; the web interface is written using
JSP / applets.
If we were to change these, then it would be Java or C. But all the heavy
stuff is handled by stored procedures, so I don't see a real need for a
change.

I actually am more interested to hear whether there are any recommended
systems or setups,
also with regard to 2/4 CPUs, 32/64 bit, etc.



Scott Marlowe wrote:
On Thu, 2004-12-16 at 06:39, Michael Ben-Nes wrote:
 

I think, and please correct me, that Postgres loves RAM: the more the better.
Anyway, RAID5 is awful at writing; go with RAID1 (mirroring).
   

With battery backed cache and a large array, RAID 5 is quite fast, even
with writes.  Plus with a lot of drives in a mostly read environment,
it's quite likely that each read will hit a different drive so that many
parallel requests can be handled quite well.  The general rule I use is
6 or fewer drives will do better in RAID 1+0, 7 or more will tend to do
better with RAID 5.
 

Perl is very slow, maybe you can use PHP?
   

While mod_perl and its relations have never been fast running under
apache in comparison to PHP, it's no slouch, paying mostly in startup
time, not run time.  For complex apps, the startup time difference
becomes noise compared to the run time, so it's no big advantage to
PHP.  I really like PHP by the way.  But Perl is pretty nice too. 

Run the Unix OS you're most comfortable with, knowing that PostgreSQL
gets lots of testing on the free unixes more so than on the commercial
ones.  Give it a machine with plenty of RAM and a fast I/O subsystem,
and two CPUs, and you'll get good performance.  If your needs exceed the
performance of one of these machines, you're probably better off going
to a pgpool / slony cluster than trying to build a bigger machine.
 




[GENERAL] Help needed with QueryPlan

2004-12-13 Thread Alex
Hi,
I have a query that runs pretty slow and tried to use explain to see
where the problem is.
Both tables have 2.1 and 2.8 million records.
In the Explain output I see that a Seq Scan is used on nval_price_hist
what I dont quite understand.
Could some one help me creating the correct indices?
SELECT ..
FROM price_hist AS P, nval_price_hist AS N
WHERE P.price_id = N.price_id AND P.sec_code = 512084 AND P.eval_date =
'2004-10-15' ;
Hash Join  (cost=210.16..61370.19 rows=53 width=602)
  Hash Cond: ("outer".price_id = "inner".price_id)
  ->  Seq Scan on nval_price_hist n  (cost=0.00..50575.00 rows=2116900
width=65)
  ->  Hash  (cost=209.99..209.99 rows=69 width=537)
->  Index Scan using price_hist_idx1 on price_hist p
(cost=0.00..209.99 rows=69 width=537)
  Index Cond: ((eval_date = '2004-10-15'::date) AND
(sec_code = 512084))
Indices on nval_price_hist
   "nval_price_hist_pkey" primary key, btree (price_id, logic_id)
   "nval_price_hist_idx1" btree (sec_code, exch_code, sec_type,
nval_price_date, logic_id)
   "nval_price_hist_idx2" btree (nval_price_date, logic_id)
   "nval_price_hist_idx3" btree (price_id)
   "nval_price_hist_idx4" btree (sec_code, nval_price_date)
Thanks
Alex



[GENERAL] What HW / OS is recommended

2004-12-16 Thread Alex
Hi,
we are planning to upgrade our servers, but deciding on the right
configuration seems to be quite difficult.

As for the system: about 50 tables, 20M records, growing by about
500k-1M per month.
The system mostly loads data from files (perl batch jobs) and
generates client files. Jobs generally don't run at the same time, but
timely loading/delivery is very important. We also run tomcat with no
more than 5-10 concurrent users connecting (mostly browsing data).

We are currently looking at Dell / HP,
but the questions are:
- how many processors (2 or 4)?
- do we gain anything with 4 CPUs if we probably never have more than a few
  users connected?
- what processors are recommended: Opteron / Xeon / Itanium?
- how much memory? 2GB? 4GB?
- disks: I guess we go with RAID5, 15k SCSI
- what OS? SuSE / RHEL3 / Fedora / ...
- disk controller?
Currently we run it on a Dell Blade, dual P3 1.4GHz with 1GB memory.
Adding 1GB of memory did not actually bring much performance gain.
Does anyone have some first-hand experience? Can anybody point me to
some resources? Or recommend certain systems?
What kind of performance gain can be expected going from a P3 to a
higher-end processor?

Thanks for any suggestions
Alex


Re: [GENERAL] Help needed with QueryPlan

2004-12-16 Thread Alex
Richard,
thanks for the reply. I actually did what you suggested, but still the
same. Now I set ENABLE_SEQSCAN=false in the perl script, which works, but
I don't think that's the way it should be done.

Alex
Richard Huxton wrote:
Alex wrote:
Hi,
I have a query that runs pretty slowly, and I tried to use EXPLAIN to see
where the problem is.
The tables have 2.1 and 2.8 million records respectively.
In the EXPLAIN output I see that a Seq Scan is used on nval_price_hist,
which I don't quite understand.
Could someone help me create the correct indices?

Hmm - it knows there are 2 million rows (2116900) in nval_price_hist 
and yet it isn't using the index on price_id even though it's 
expecting a small number of rows (53) to come out of the join.

SELECT ..
FROM price_hist AS P, nval_price_hist AS N
WHERE P.price_id = N.price_id AND P.sec_code = 512084 AND P.eval_date =
'2004-10-15' ;
Hash Join  (cost=210.16..61370.19 rows=53 width=602)
  Hash Cond: ("outer".price_id = "inner".price_id)
  ->  Seq Scan on nval_price_hist n  (cost=0.00..50575.00 rows=2116900
...
1. VACUUM FULL ANALYSE the two tables.
2. Re-run the query with EXPLAIN ANALYSE instead of just EXPLAIN
3. SET ENABLE_SEQSCAN=false; then re-run step 2
That will ensure the statistics are up-to-date, show the actual costs 
as well as the expected costs and finally let us compare the index 
against a sequential scan.
--
  Richard Huxton
  Archonet Ltd





Re: [GENERAL] DB Slowing Down

2004-12-16 Thread Alex
It's a gradual process. For example, we have 3 reference tables that get
updated every day; they have between 3.5 and 7M records. All we do is
simple inserts and deletes. The number of records is different each day, so
it's a bit difficult to say.
Another table is a price database. This is where we actually see the
biggest difference. The 2 tables have about 2M records each; each day we
add about 60-80k records and update about the same number. At the end of
the day about 80% of these will be removed. So the tables grow by 500k
records a month, but at the end of the month again we remove about
300-400k of these records. What we see is that this load of 60-80k
records at the beginning of a month, and after re-creating the db, takes
in the area of 8 min or so. After 2-3 months the whole process will take up
to 20-25 minutes, even though the two tables have only grown by max. 500k.
Even the cleanup at the end of the month does not help a lot: it speeds
things up, but nowhere close to what it would be after recreating the db.

Alex
Richard Huxton wrote:
Alex wrote:
Hi,
we have a database, not too big: about 50 tables, 20m records. On a
daily basis we update/insert/delete between 500k and 1M records in
total. We run a full vacuum every night. The db grows by probably
200k records each day, and at the end of the month it gets cleaned up.
We run 7.4.1 and 7.4.6.

Now, after a few weeks the db starts to slow down; after about 2
months the same operations take 2-3 times as long as originally. The
only way to speed it up again is to drop the entire database, run an
initdb, and create and restore the database.

Information, Alex, information.
Can you give a specific example of where a query gets slower?
Is this a gradual effect or sudden?
Do you know if your indexes are growing unexpectedly? (shouldn't be, 
but worth keeping an eye on)
Are you happy PG is tuned




Re: [GENERAL] DB Slowing Down

2004-12-16 Thread Alex
Thanks for the suggestions.
I run a vacuum analyze every night. I will look into all the options you
suggested.

Thanks
Alex
Richard Huxton wrote:
Alex wrote:
It's a gradual process. For example, we have 3 reference tables that
get updated every day; they have between 3.5 and 7M records. All we
do is simple inserts and deletes. The number of records is different
each day, so it's a bit difficult to say.
Another table is a price database. This is where we actually see the
biggest difference. The 2 tables have about 2M records each; each day
we add about 60-80k records and update about the same number. At the
end of the day about 80% of these will be removed. So the tables grow
by 500k records a month, but at the end of the month again we remove
about 300-400k of these records. What we see is that this load of
60-80k records at the beginning of a month, and after re-creating the
db, takes in the area of 8 min or so. After 2-3 months the whole process
will take up to 20-25 minutes, even though the two tables have only
grown by max. 500k. Even the cleanup at the end of the month does not
help a lot: it speeds things up, but nowhere close to what it would be
after recreating the db.

Well, if you're not getting index bloat (and I don't think you should
be in 7.4.x), then that suggests you're doing a VACUUM, not VACUUM FULL,
and don't have enough free-space map allocated. Try adding VERBOSE
to your VACUUMs and see what figures come out, and check the manuals
for the fsm... settings.

Alternatively, it could be that the physical order of records is
better when you dump/restore. It might be worth looking into what CLUSTER
might do for you if run overnight.

Basically, it sounds like one of 3 things:
1. The indexes are continually growing. (REINDEX)
2. Increasing numbers of "gaps" in the tables due to updates/deletes.
(VACUUM)
3. The dump/restore puts the data in a useful physical ordering which
gets shuffled as you update. (CLUSTER)
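
(A sketch of the three corresponding maintenance commands; the table and
index names are illustrative:)

  VACUUM VERBOSE ANALYZE price_hist;       -- reports dead-row and FSM figures
  REINDEX TABLE price_hist;                -- rebuilds bloated indexes
  CLUSTER price_hist_idx1 ON price_hist;   -- rewrites the table in index order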

--
  Richard Huxton
  Archonet Ltd




[GENERAL] Unique Index

2005-01-19 Thread Alex
Hi,
I have a unique index on a table over multiple columns. If one of
the records has a null value in one of the indexed columns, I can insert
the same record multiple times.

Is this a problem within postgres, or expected behavior?
Example: a unique btree index on (colA, colB, colC)
would still allow me to insert the same row twice when colB is NULL, e.g.
(1, NULL, 2)
(1, NULL, 2)
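
(A minimal sketch of the behavior; names and values are illustrative:)

  CREATE TABLE t (colA int, colB int, colC int);
  CREATE UNIQUE INDEX t_uniq ON t (colA, colB, colC);

  INSERT INTO t VALUES (1, NULL, 2);  -- succeeds
  INSERT INTO t VALUES (1, NULL, 2);  -- also succeeds: NULL never equals NULL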
Thanks for any help on that.
A



Re: [GENERAL] Unique Index

2005-01-19 Thread Alex
I actually just wanted to know if there is a way around this problem.
Obviously it is implemented that way for whatever reason.

I still think some arguments given in some of the replies, while
probably correct, are beside the point.

I use a unique index that may contain null values. On an insert or
update I cannot rely on the exception thrown, but actually have to
write a select statement to check whether the same row exists, which I
believe defeats ONE purpose of having unique indices. Whether NULL is
associated with "unknown value", "divided by zero"... or however one
wants to interpret it is not the issue here; in my view NULLs in the same
column have the same value, or at least should be treated the same. (If I
wanted to differentiate the state, I would use a code instead of NULL, as a
NULL does not give any indication of its meaning, so we could safely
assume they are treated as equal.)

Maybe there could be an option in the creation of the index to indicate
how to treat NULL values.
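
(One common workaround, for what it's worth, is a unique expression index
that maps NULL to a sentinel; a sketch, assuming the sentinel value -1
never occurs in real data:)

  CREATE UNIQUE INDEX t_uniq ON t (colA, COALESCE(colB, -1), colC);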

How do other DBMS handle this?
A


Tom Lane wrote:
"Dann Corbit" <[EMAIL PROTECTED]> writes:

Or (perhaps better yet, violating trichotomy) ...
If Some_column has a null numeric value, then ALL of the following are
FALSE for that case:

Some_column < 0
Some_column > 0
Some_column = 0
Some_column <> 0 // This is the one that many find surprising
Some_column <= 0
Some_column >= 0

It's worse than that: the above do *not* yield FALSE, they yield NULL.
Which does act like FALSE in a simple WHERE clause, but there are other
cases (like CHECK clauses) where it doesn't.  "x NOT IN (SELECT ...)"
is a case that newbies routinely get bitten by.

Even at that, I think that being able to insert more than one null value
into a unique index should be considered as a bug (or diagnosed as an
error).

Direct your complaints to the ISO SQL standards committee.

regards, tom lane


[GENERAL] PLPERL Function very Slow

2006-08-30 Thread Alex

Hi,
I am having a problem with a plperl function I am trying to write
(using 8.1.4).

The function does a select (ca. 30,000 rows) using

  spi_query($query); while (my $row = spi_fetchrow($handle))

and within the while loop inserts the record into a table using
spi_exec_query($query);

The initial select is pretty fast and the first inserts are very fast, but
after a few thousand inserts the inserts start to slow down until it
crawls. Writing the same in a normal perl script takes less than 90
seconds, while the function takes 10 minutes.

My guess is that it is all memory related, and I was wondering if there is
a memory leak (as I read in some mails) or if there is a better way to
do what I want.

Also, is there a way to commit transactions within a stored procedure /
function? I noticed that if I do a delete and insert within the same
function, the deletes are not committed until the function returns.

Thanks for any advice.

Alex







[GENERAL] SELECT Question

2006-03-03 Thread Alex

Hi,
I want to calculate the price difference and change% between 2 price
records. Is there an easy way to do that within one query, rather than
writing a function?


Example:

company, price_date, price, change, change_perc
compA, 20060203, 100,,
compA, 20060202, 100,,
compA, 20060201, 100,,

For one, I would like to calculate for one date the changes (price and
percentage) relative to the previous day and update the record.


I also want to be able to run that through the entire history to
re-calculate the changes, i.e. not only update one record but update all
of them.


Currently I am doing it with a function, but I wonder if there is an
easier and faster way.
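
(A sketch of a set-based alternative, assuming the table is named prices
and prices are never zero; the correlated subquery finds each row's
previous date, and one statement recalculates the whole history:)

  UPDATE prices
  SET    change      = prices.price - prev.price,
         change_perc = 100.0 * (prices.price - prev.price) / prev.price
  FROM   prices prev
  WHERE  prev.company    = prices.company
    AND  prev.price_date = (SELECT max(p2.price_date)
                            FROM   prices p2
                            WHERE  p2.company    = prices.company
                              AND  p2.price_date < prices.price_date);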


Any suggestion or help would be appreciated.

Thanks in advance
Alex




[GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

Hi,
I am experiencing some strange behavior when executing a not too complicated
pl/perl function. It does the following:
1. Selects about 20 records from Table A
   - loops through the list and deletes in total about 50k records in Table B
2. For each record from Table A it then selects records from Table C
   - loops through these records, about 50K in total
   - for each, runs a query over 3 tables of 10-20M records
   - inserts a record into Table B, about 50K in total
3. Returns some stats on the whole operation (100 records).

I am using PL/Perl for this and everything runs pretty well.
In a single execution, i.e. if I only process 1 record, the whole process is
done within a few milliseconds.
The system has 16GB of memory and fast disks.
Now here is what I noticed:
a) if I run it in the morning, processing starts very slow, but after a few
thousand records it will speed up until I actually get about 100 records
processed per millisecond.
b) it sometimes takes about 5-10k records till I really get up to speed,
meaning the first few hundred can take up to 1-2 minutes.
c) if I run the same job a few hrs later (we run it twice a day), it
generally runs much faster, even though we added more data to one of the
big tables it selects from.
d) this however starts again the next day (not much data has been changed
between the 2nd run of the day and the first one of the next day, but yet
it will start crawling again).
e) sometimes the 2nd run of the day can also be slow, and even though the
data in the system does not change by a large margin, run times of the jobs
vary by a large amount, from 17 to 50 minutes.
Here are my questions:
A) I am running autovacuum. Is it possible that this for some reason slows
down the system?
B) Are the query planner stats reset overnight, i.e. on date change? This
behavior is consistent: every morning the processing is slow, in the
afternoon it is generally much faster.
C) Does pl/perl have some memory issues?
D) If I run the above job processing 50k records, but stop the process
after 10K, then run it again, it will run fast for the first 10K and then
slow down.
My assumption is that the query planner keeps the stats for a particular
record based on a timestamp (used in a select) and every day it will forget
about that. Is there a way to let the query planner keep stats for a
function permanently?
Thanks for any advice.
Alex















  

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

I actually looked at that too, but there is no swapping going on. The system
also has 16GB of memory allocated to postgres, and during the processing
there is no other process really active.

> To: ainto...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PL/Perl Performance Problems 
> Date: Fri, 18 Dec 2009 15:25:16 -0500
> From: t...@sss.pgh.pa.us
> 
> Alex -  writes:
> > Now here is what I noticed.
> > a) if I run it in the morning, processing starts very slow, but after a few 
> > thousand records it will speed up until I actually get about 100 records 
> > processed per millisecond.
> > b) it sometime takes about 5-10k records till i really get up to speed. 
> > meaning the first few hundreds can take up to 1-2 minutes.
> > c) if i run the same job a few hrs later (we run it twice a day) it 
> > generally runs much faster. Even though we added more data to one of the 
> > big tables it selects from.
> > d) this however starts again the next day. ( not much data has been changed 
> > between the 2nd run of the day and the first one of the next one, but yet 
> > it will start crawling again.
> 
> What this sounds like is it's fast when most of the data has been
> swapped in to kernel disk cache, and slow when the data actually has to
> be read from disk.  Probably the reason it's slow in the morning is
> there are some unrelated tasks that run overnight and cause memory to
> get filled with all their data instead of the PG tables.  You could
> check this theory by watching with vmstat or similar tool to see how
> much actual disk I/O is happening in the slow and fast cases.
> 
>   regards, tom lane
> 

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

Thanks, I will check that out.

> To: ainto...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PL/Perl Performance Problems 
> Date: Fri, 18 Dec 2009 15:36:15 -0500
> From: t...@sss.pgh.pa.us
> 
> Alex -  writes:
> > I actually looked at that too, but there is no swapping going on.
> 
> If you were only watching for swapping, that wouldn't catch what I'm
> talking about.  Replacing cached disk buffers with other disk data
> doesn't count as swapping in any system I've used.
> 
>   regards, tom lane
  

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -
[vmstat output, flattened in the archive; the recoverable pattern: the run
starts with roughly 17-21% CPU I/O wait and low user CPU, and as the job
speeds up the I/O wait drops to roughly 5-11% while user CPU rises to
14-21%.]




> To: ainto...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PL/Perl Performance Problems 
> Date: Fri, 18 Dec 2009 15:36:15 -0500
> From: t...@sss.pgh.pa.us
> 
> Alex -  writes:
> > I actually looked at that too, but there is no swapping going on.
> 
> If you were only watching for swapping, that wouldn't catch what I'm
> talking about.  Replacing cached disk buffers with other disk data
> doesn't count as swapping in any system I've used.
> 
>   regards, tom lane
> 

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

I run autovacuum.
I run a VACUUM FULL ANALYZE every Sat.
I re-index the tables every Sat.

> Date: Fri, 18 Dec 2009 21:20:23 -0300
> From: alvhe...@commandprompt.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> 
> Alex - wrote:
> > 
> > Tom, I am logging these stats now, but I am having a similar issue. Both
> > jobs, in the morning and afternoon, insert about 400k records (200k each).
> > In the morning that job takes 450 sec, in the afternoon only 150. No
> > select, simply parsing a file and inserting the records.
> > These two tables don't grow, as at night a cleanup job removes about the
> > same amount of historical data (older than about 40 days).
> > It's kind of the same behavior and I don't really see where the swapping
> > fits in.
> 
> Are you running vacuum of some sort on this table regularly?
> 
> -- 
> Alvaro Herrerahttp://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
> 

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

Hmm... how can that be? This is happening every day, so it's not a one-off,
nor does it happen once in the morning and then again in the afternoon.
There is also no other task running on the system; it's dedicated to
postgres.
Could the autovacuum cause problems? Starting to invoke analyze at the
beginning of the day, but then keeping silent till the day timestamp
breaks?
The thing is that I have 4 servers set up in a similar way, and all have
exactly the same problem.

> Date: Fri, 18 Dec 2009 23:00:16 -0700
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> From: scott.marl...@gmail.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> 
> Note that you seem to have a lot more IO wait in the first run than in
> the second, which means that the task is hitting the disks more in the
> first run than in the second one.  Once IO wait starts to climb,
> performance starts to dive, generally.
  

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

I have the standard ones and no others.

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

ls -l /etc/cron*
-rw-r--r-- 1 root root    0 Apr  6  2009 /etc/cron.deny
-rw-r--r-- 1 root root  255 Jan  6  2007 /etc/crontab

/etc/cron.d:
-rw-r--r-- 1 root root  192 Mar 12  2009 sysstat

/etc/cron.daily:
-rwxr-xr-x 1 root root  379 Mar 28  2007 0anacron
lrwxrwxrwx 1 root root   39 Apr  6  2009 0logwatch -> /usr/share/logwatch/scripts/logwatch.pl
-rwxr-xr-x 1 root root  118 Jan 21  2009 cups
-rwxr-xr-x 1 root root  180 Mar 12  2009 logrotate
-rwxr-xr-x 1 root root  418 Jan  6  2007 makewhatis.cron
-rwxr-xr-x 1 root root  137 Mar 12  2009 mlocate.cron
-rwxr-xr-x 1 root root 2181 Jun 21  2006 prelink
-rwxr-xr-x 1 root root  296 Jan 21  2009 rpm
-rwxr-xr-x 1 root root  328 Feb 26  2009 tmpwatch

/etc/cron.hourly:
-rwxr-xr-x 1 root root   71 Jan 10  2007 mcelog.cron

/etc/cron.monthly:
-rwxr-xr-x 1 root root  381 Mar 28  2007 0anacron

/etc/cron.weekly:
-rwxr-xr-x 1 root root  380 Mar 28  2007 0anacron
-rwxr-xr-x 1 root root  414 Jan  6  2007 makewhatis.cron
> Date: Fri, 18 Dec 2009 23:45:07 -0700
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> From: scott.marl...@gmail.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> 
> On Fri, Dec 18, 2009 at 11:37 PM, Alex -  wrote:
> > Hmm...
> > how can that be. This is happening every day, so its not a one off or
> > happens once in the morning then in the afternoon. There is also no other
> > task running on the system, its dedicated to postgres.
> > Could the Autovacuum cause problems? Starting to invoke Analyze at the
> > beginning of the day but the keep silent till the day timestamp breaks ?
> > The think is that I have 4 servers setup in a similar way and all have
> > exactly the same problem.
> 
> What cron jobs are on that machine that run at night?  Note that on
> many OSes, maintenance crons are scheduled in a dir something like
> /etc/cron.daily etc...  On my laptop they all run at midnight.  I'm
> wondering if they're blowing out your cache so that you just don't
> have the same performance the first time you hit a particular dataset
> after they've run.  Just a guess.  You could try disabling them for a
> day and see what happens.
  

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

On a 2nd thought... where does the cache come into play when I only do
inserts and no selects?
Alex

> Date: Fri, 18 Dec 2009 23:45:07 -0700
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> From: scott.marl...@gmail.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> 
> On Fri, Dec 18, 2009 at 11:37 PM, Alex -  wrote:
> > Hmm...
> > how can that be. This is happening every day, so its not a one off or
> > happens once in the morning then in the afternoon. There is also no other
> > task running on the system, its dedicated to postgres.
> > Could the Autovacuum cause problems? Starting to invoke Analyze at the
> > beginning of the day but the keep silent till the day timestamp breaks ?
> > The think is that I have 4 servers setup in a similar way and all have
> > exactly the same problem.
> 
> What cron jobs are on that machine that run at night?  Note that on
> many OSes, maintenance crons are scheduled in a dir something like
> /etc/cron.daily etc...  On my laptop they all run at midnight.  I'm
> wondering if they're blowing out your cache so that you just don't
> have the same performance the first time you hit a particular dataset
> after they've run.  Just a guess.  You could try disabling them for a
> day and see what happens.
> 

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

Yes I do, but this is the pl/perl function called by a batch job I run.
Before the pl/perl function is called I insert 2x200k records into 2 tables
(200k per table).
First I thought that it might be a problem with the perl function, but then
I noticed that it even started earlier, with the simple inserts.
After the inserts the job will call the function, and there I have the same
issues: it runs slow in the morning and fast in the afternoon, and it will
pick up speed after 5-10k records.
Thanks for your help

> Date: Sat, 19 Dec 2009 00:10:36 -0700
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> From: scott.marl...@gmail.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> 
> According to your original post, you do selects in step 1 and 2...  Or
> is this a different job and I've lost the thread (happens to me plenty
> :) )
> 
> 1. Selects about 20 Records from Table A
>- loops though the list and deletes in total about 50k records in Table B
> 2. For each record form Table A it then selects Records from Table C
>- loops through these records about 50K in total
>- for each runs a query 3 Tables, 10-20M records
>- inserts a record in Table B .. about 50K
> 3. Returns some stats on the whole operation (100 records).
> 
> On Sat, Dec 19, 2009 at 12:07 AM, Alex -  wrote:
> > On a 2nd thought... where does the cach come into play when i only do
> > inserts and no selects.
> > Alex
> >
> >> Date: Fri, 18 Dec 2009 23:45:07 -0700
> >> Subject: Re: [GENERAL] PL/Perl Performance Problems
> >> From: scott.marl...@gmail.com
> >> To: ainto...@hotmail.com
> >> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> >>
> >> On Fri, Dec 18, 2009 at 11:37 PM, Alex -  wrote:
> >> > Hmm...
> >> > how can that be. This is happening every day, so its not a one off or
> >> > happens once in the morning then in the afternoon. There is also no
> >> > other
> >> > task running on the system, its dedicated to postgres.
> >> > Could the Autovacuum cause problems? Starting to invoke Analyze at the
> >> > beginning of the day but the keep silent till the day timestamp breaks ?
> >> > The think is that I have 4 servers setup in a similar way and all have
> >> > exactly the same problem.
> >>
> >> What cron jobs are on that machine that run at night? Note that on
> >> many OSes, maintenance crons are scheduled in a dir something like
> >> /etc/cron.daily etc... On my laptop they all run at midnight. I'm
> >> wondering if they're blowing out your cache so that you just don't
> >> have the same performance the first time you hit a particular dataset
> >> after they've run. Just a guess. You could try disabling them for a
> >> day and see what happens.
> >>
> 
> 
> 
> -- 
> When fascism comes to America, it will be intolerance sold as diversity.
  

Re: [GENERAL] PL/Perl Performance Problems

2009-12-20 Thread Alex -

Tom, Scott, Alvaro,
thanks for the hints on this issue. It looks as if one of the EOD
maintenance jobs, which does a few extensive queries, pushes data out of
memory, leading to this behavior.
Is there a way to permanently cache some tables in memory?
Thanks
Alex

> Date: Sat, 19 Dec 2009 00:10:36 -0700
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> From: scott.marl...@gmail.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> 
> According to your original post, you do selects in step 1 and 2...  Or
> is this a different job and I've lost the thread (happens to me plenty
> :) )
> 
> 1. Selects about 20 Records from Table A
>- loops though the list and deletes in total about 50k records in Table B
> 2. For each record form Table A it then selects Records from Table C
>- loops through these records about 50K in total
>- for each runs a query 3 Tables, 10-20M records
>- inserts a record in Table B .. about 50K
> 3. Returns some stats on the whole operation (100 records).
> 
> On Sat, Dec 19, 2009 at 12:07 AM, Alex -  wrote:
> > On a 2nd thought... where does the cach come into play when i only do
> > inserts and no selects.
> > Alex
> >
> >> Date: Fri, 18 Dec 2009 23:45:07 -0700
> >> Subject: Re: [GENERAL] PL/Perl Performance Problems
> >> From: scott.marl...@gmail.com
> >> To: ainto...@hotmail.com
> >> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> >>
> >> On Fri, Dec 18, 2009 at 11:37 PM, Alex -  wrote:
> >> > Hmm...
> >> > how can that be. This is happening every day, so its not a one off or
> >> > happens once in the morning then in the afternoon. There is also no
> >> > other
> >> > task running on the system, its dedicated to postgres.
> >> > Could the Autovacuum cause problems? Starting to invoke Analyze at the
> >> > beginning of the day but the keep silent till the day timestamp breaks ?
> >> > The think is that I have 4 servers setup in a similar way and all have
> >> > exactly the same problem.
> >>
> >> What cron jobs are on that machine that run at night? Note that on
> >> many OSes, maintenance crons are scheduled in a dir something like
> >> /etc/cron.daily etc... On my laptop they all run at midnight. I'm
> >> wondering if they're blowing out your cache so that you just don't
> >> have the same performance the first time you hit a particular dataset
> >> after they've run. Just a guess. You could try disabling them for a
> >> day and see what happens.
> >>
> 
> 
> 
> -- 
> When fascism comes to America, it will be intolerance sold as diversity.
> 

Re: [GENERAL] PL/Perl Performance Problems

2009-12-21 Thread Alex -

Thanks, I have already started doing that, i.e. running the dummy task to
get the tables loaded again, and it worked fine today.
Regards
Alex
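
(A sketch of such a dummy priming pass; the table names are illustrative.
A cheap whole-table aggregate forces each hot table back through the
disk cache:)

  SELECT count(*) FROM price_hist;       -- sequentially reads the whole table
  SELECT count(*) FROM nval_price_hist;  -- ditto for the other hot table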

> To: ainto...@hotmail.com
> CC: scott.marl...@gmail.com; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PL/Perl Performance Problems 
> Date: Sun, 20 Dec 2009 22:39:00 -0500
> From: t...@sss.pgh.pa.us
> 
> Alex -  writes:
> > Tom, Scott, Alvaro,thanks for the hints on this issue. It looks as if one 
> > of the EOD maintenance jobs which does a few extensive queries does push 
> > data out of memory leading to this behavior. 
> > Is there a way to permanently cash some tables into memory?
> 
> Not as such, and if there were it probably wouldn't be an overall
> performance win anyway, because you'd hurt your maintenance tasks.
> What you might consider doing is, at the end of the EOD sequence,
> run some dummy queries that scan the tables you use normally, causing
> them to get swapped back in so the cache is already primed when people
> come to work in the morning.
> 
>   regards, tom lane
> 

[GENERAL] Get Comments on Tables / Functions

2009-12-27 Thread Alex -

Hi,
I am adding comments to tables and functions with version information and
would like to extract that information through a query.
Is there an easy way to do that? A simple list: table_name, comment.

Thanks
Alex
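
(One way, a sketch: obj_description() reads the stored comments out of the
system catalogs; the schema filter is an assumption:)

  SELECT c.relname                          AS table_name,
         obj_description(c.oid, 'pg_class') AS comment
  FROM   pg_class     c
  JOIN   pg_namespace n ON n.oid = c.relnamespace
  WHERE  c.relkind = 'r'          -- ordinary tables
    AND  n.nspname = 'public';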

[GENERAL] DB Link Question

2009-12-28 Thread Alex -

Hi,
is there a way to check with dblink whether a record exists or does not
exist in a remote database?
Something like...

SELECT item_id FROM table_a A WHERE NOT EXISTS
    (SELECT 1 FROM table_b B WHERE A.item_id = B.item_id)

where table_a is on a local machine and table_b is on a remote machine.
Thanks for any help on that.
Alex
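
(A sketch using contrib/dblink; the connection string and names are
illustrative. The remote keys are pulled over and anti-joined locally:)

  SELECT a.item_id
  FROM   table_a a
  WHERE  NOT EXISTS (
      SELECT 1
      FROM   dblink('host=remotehost dbname=mydb',
                    'SELECT item_id FROM table_b')
             AS b(item_id integer)   -- dblink needs a column definition list
      WHERE  b.item_id = a.item_id);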

[GENERAL] pl/perl setof function

2010-01-13 Thread Alex -

Hi,
I have a PL/Perl RETURNS SETOF function which processes a few 10k records.
The processing takes quite some time, and in order to display progress
I use a return_next after every few thousand records.
However, the function returns all messages only when it completes and
exits, which is not really what I want. I also tried elog NOTICE, but then
I only get the messages back in psql.
I am calling the function from a perl script and would like to catch these
progress messages.
Is there any way to solve my problem?

Thanks
Alex

[GENERAL] Slow Query / Check Point Segments

2010-01-21 Thread Alex -

Hi,
I am experiencing slow queries when I run some functions. I noticed the
following entries in my server log.
From this, can anyone tell me if I need to change some config parameters?

The system has 18GB of memory.

shared_buffers = 4GB                # min 128kB
temp_buffers = 32MB                 # min 800kB
max_prepared_transactions = 100     # zero disables the feature
work_mem = 256MB                    # min 64kB
maintenance_work_mem = 1GB          # min 1MB
wal_buffers = 1024kB                # min 32kB
checkpoint_segments = 32            # in logfile segments, min 1, 16MB each
checkpoint_timeout = 30min          # range 30s-1h


2010-01-22 12:18:11 JST LOG:  checkpoint complete: wrote 52037 buffers (9.9%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=67.441 s, sync=0.000 s, total=67.453 s
2010-01-22 12:21:48 JST LOG:  checkpoint complete: wrote 83874 buffers (16.0%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=138.040 s, sync=0.000 s, total=138.063 s
2010-01-22 12:23:32 JST LOG:  checkpoint complete: wrote 82856 buffers (15.8%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=18.740 s, sync=0.000 s, total=18.783 s
2010-01-22 12:24:26 JST LOG:  checkpoint complete: wrote 75145 buffers (14.3%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=12.129 s, sync=0.000 s, total=12.132 s
2010-01-22 12:25:30 JST LOG:  checkpoint complete: wrote 82108 buffers (15.7%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=10.619 s, sync=0.000 s, total=10.621 s
2010-01-22 12:28:03 JST LOG:  checkpoint complete: wrote 87349 buffers (16.7%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=82.190 s, sync=0.000 s, total=82.192 s
2010-01-22 12:30:02 JST LOG:  checkpoint complete: wrote 80797 buffers (15.4%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=78.198 s, sync=0.000 s, total=78.201 s
2010-01-22 12:32:03 JST LOG:  checkpoint complete: wrote 81365 buffers (15.5%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=75.968 s, sync=0.000 s, total=75.971 s
2010-01-22 12:33:32 JST LOG:  checkpoint complete: wrote 98258 buffers (18.7%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=52.156 s, sync=0.000 s, total=52.159 s
2010-01-22 12:34:51 JST LOG:  checkpoint complete: wrote 80089 buffers (15.3%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=53.385 s, sync=0.000 s, total=53.387 s
2010-01-22 12:36:08 JST LOG:  checkpoint complete: wrote 80819 buffers (15.4%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=51.476 s, sync=0.000 s, total=51.478 s
2010-01-22 13:01:54 JST LOG:  checkpoint complete: wrote 4892 buffers (0.9%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=494.868 s, sync=0.000 s, total=494.982 s

Thanks for any advice
Alex

Re: [GENERAL] Slow Query / Check Point Segments

2010-01-22 Thread Alex -

Greg, thanks for the suggestions, will try and test.

You wrote: "That just suggests that the system was mostly idle during that
period."
That's actually not the case; it was just the time the function finished.

Alex

Date: Fri, 22 Jan 2010 01:21:59 -0500
From: g...@2ndquadrant.com
To: ainto...@hotmail.com
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Slow Query / Check Point Segments






  


Alex - wrote:

> checkpoint_segments = 32              # in logfile segments, min 1, 16MB each
> checkpoint_timeout = 30min            # range 30s-1h

These parameters are not so interesting on their own.  The important
thing to check is how often checkpoints are happening, and how much
work each of them does.  Here's a useful section from your logs to
comment on:

> 2010-01-22 12:21:48 JST LOG:  checkpoint complete: wrote 83874 buffers (16.0%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=138.040 s, sync=0.000 s, total=138.063 s
> 2010-01-22 12:23:32 JST LOG:  checkpoint complete: wrote 82856 buffers (15.8%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=18.740 s, sync=0.000 s, total=18.783 s
> 2010-01-22 12:24:26 JST LOG:  checkpoint complete: wrote 75145 buffers (14.3%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=12.129 s, sync=0.000 s, total=12.132 s
> 2010-01-22 12:25:30 JST LOG:  checkpoint complete: wrote 82108 buffers (15.7%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=10.619 s, sync=0.000 s, total=10.621 s
> 2010-01-22 12:28:03 JST LOG:  checkpoint complete: wrote 87349 buffers (16.7%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=82.190 s, sync=0.000 s, total=82.192 s

Here you're getting a checkpoint every minute or three, and each of
them is writing out ~80,000 buffers = 625MB.  That is crazy, and no
wonder your queries are slow--the system is spending all of its time
doing constant, extremely expensive checkpoints.

You should re-tune this system until the checkpoints show up no more
often than every 5 minutes, and preferably closer to 10.  In your case,
that means greatly increasing checkpoint_segments.  And you might as
well decrease checkpoint_timeout, because right now the timeout is
rarely ever being reached before the system runs out of working
segments and executes a checkpoint for that reason.

I would suggest changing the parameters to these:

checkpoint_segments = 96              # in logfile segments, min 1, 16MB each
checkpoint_timeout = 10min            # range 30s-1h

And then see how long the interval between checkpoints becomes under
load afterwards.  That should put it in the just-over-5-minutes range.
I'd expect that the number of buffers will only increase a bit, so
instead of dumping out 625MB every minute or three you should see maybe
800MB every 5 minutes--big improvement.

Also:  just after making the change, save a copy of:

select now(),* from pg_stat_bgwriter;

And then run that same query again a few hours after the change (and
maybe the next day too).  Subtract the two sets of values to see how
much they changed, and then you'll have some useful info to compute the
checkpoint interval without even looking at the logs.  There's a lot of
theory and comments about this area at
http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm

> 2010-01-22 12:36:08 JST LOG:  checkpoint complete: wrote 80819 buffers (15.4%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=51.476 s, sync=0.000 s, total=51.478 s
> 2010-01-22 13:01:54 JST LOG:  checkpoint complete: wrote 4892 buffers (0.9%); 0 transaction log file(s) added, 0 removed, 32 recycled; write=494.868 s, sync=0.000 s, total=494.982 s

See that last one?  That's a checkpoint that's being driven by time
expiring (30 minutes) rather than by running out of segments.  That just
suggests that the system was mostly idle during that period.  Long
write times are perfectly fine here; the checkpoints are spread out
over a long time if possible in order to reduce average I/O.  You're
just not seeing that the rest of the time because checkpoints are
happening so often.

-- 
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com    www.2ndQuadrant.com

[GENERAL] pg_catalog.pg_stat_activity and current_query

2011-03-01 Thread Alex
Good morning,

I'm trying to make a Postgres profiler reading
pg_catalog.pg_stat_activity.
But when I read that view... I always find my own query.

Example:

ResultSet rs = st.executeQuery("SELECT query_start,current_query FROM
pg_catalog.pg_stat_activity where xact_start is not null");
while(rs.next()) {
  String sQuery = rs.getString("current_query");
  qStart = rs.getTimestamp("query_start");
  if(!qStart.equals(qStart_last)){
display.append(sQuery+'\n');
qStart_last=(Timestamp)qStart.clone();
  }
}

Always prints "SELECT query_start,current_query FROM
pg_catalog.pg_stat_activity where xact_start is not null" :)

I'd like to discard my query... any ideas?

Thanks
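
The usual trick is to filter out the backend running the query itself;
a sketch (procpid is the column name in 9.0/9.1):

SELECT query_start, current_query
  FROM pg_catalog.pg_stat_activity
 WHERE xact_start IS NOT NULL
   AND procpid <> pg_backend_pid();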



[GENERAL] postgresql install problem

2011-03-21 Thread Alex
It seems that the PostgreSQL server doesn't get installed or made
available for me to run in Admin Tools > Services. PostgreSQL is not
listed as a service available to start or stop!

Why do I not get the service installed and running?



[GENERAL] Install issues

2011-03-22 Thread Alex
Using Windows 7 64-bit. Tried to install 8.4 and 9.0, and it fails right
near the end when it tries to create or read the conf file. If I transfer
my Postgres 8.4 files over, the upgrade takes, but the Postgres service
doesn't exist so no communication occurs.

Is there any way to just install the Postgres service? Or another solution?



[GENERAL] Recursive select / updates

2011-05-11 Thread Alex -


Hi,
I have a problem where I need to calculate totals in a table that are
based on previous values and calculations. I am currently doing that in a
function, which works fine, but I was wondering if there is a more elegant
or efficient way to do this.
Here is an example table, ordered by row number. Each total is based on
the previous rows. Note that the difference between consecutive a values
does not always need to be 1.

 row |  a |   b   |  total
-----+----+-------+--------
   1 |  3 |    11 |      0
   2 |  5 |    34 |     22
   3 |  6 |    64 |     67
   4 |  7 |   525 |    176
   5 |  8 |  9994 |    810
   6 |  9 | 26216 |  11438
   7 | 10 |  3363 |  48282
   8 | 11 |  1147 |  88489
   9 | 12 |  1037 | 129843
  10 | 13 |   200 | 172234

Total row 2: (a2-a1)*b1, i.e. (5-3)*11 = 22
Total row 3: (a3-a2)*b2 + (a3-a1)*b1, i.e. (6-5)*34 + (6-3)*11 = 34+33 = 67
Total row 4: (a4-a3)*b3 + (a4-a2)*b2 + (a4-a1)*b1, i.e. (7-6)*64 + (7-5)*34
+ (7-3)*11 = 64+68+44 = 176

Has anyone a clever solution for that problem?
Thanks for any suggestions.
Alex
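
Since total_n = a_n * sum(b_i) - sum(a_i * b_i) over the preceding rows,
one set-based approach is a window query; a minimal sketch, assuming a
hypothetical table t(row_no, a, b) and PostgreSQL 8.4+ for window
functions:

SELECT row_no, a, b,
       a * COALESCE(SUM(b)     OVER w, 0)
         - COALESCE(SUM(a * b) OVER w, 0) AS total
  FROM t
WINDOW w AS (ORDER BY row_no
             ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING);

For row 4 this gives 7*(11+34+64) - (3*11 + 5*34 + 6*64) = 763 - 587 = 176,
matching the table above.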

[GENERAL] Query to return every 1st Sat of a month between two dates

2011-05-11 Thread Alex -

Hi,
is there an easy way to return the date of every first Saturday of a month
in a date range, i.e. 2011-2013?
Any help would be appreciated.
Thanks
Alex
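
One approach is to generate every day in the range and keep the Saturdays
that fall in the first seven days of a month; a sketch, assuming 8.4+ for
the timestamp variant of generate_series:

SELECT d::date AS first_saturday
  FROM generate_series('2011-01-01'::timestamp,
                       '2013-12-31', '1 day') AS g(d)
 WHERE extract(dow FROM d) = 6      -- Saturday
   AND extract(day FROM d) <= 7;    -- within the first week of the month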

Re: [GENERAL] Full text index not being used

2009-02-01 Thread Alex
So this seems to be because the result size is too big.  I still don't
know why it is looping through every record and printing a warning,
but adding a LIMIT makes the queries complete in a reasonable time
(although not all that fast).

However I need to sort and also have many other facets that may or may
not be included in the query.  Adding a sort makes it load every
record again and take forever.

I tried to create an index including all of the fields I query on to
see if that would work, but I get an error the the index row is too
large:

=> create index master_index on source_listings(geo_lat, geo_lon,
price, bedrooms, region, city, listing_type, to_tsvector('english',
full_listing), post_time);
NOTICE:  word is too long to be indexed
DETAIL:  Words longer than 2047 characters are ignored.
NOTICE:  word is too long to be indexed
DETAIL:  Words longer than 2047 characters are ignored.
NOTICE:  word is too long to be indexed
DETAIL:  Words longer than 2047 characters are ignored.
NOTICE:  word is too long to be indexed
DETAIL:  Words longer than 2047 characters are ignored.
ERROR:  index row requires 13356 bytes, maximum size is 8191

Any ideas about how to resolve this?
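
One possible way around the index-row size limit is to keep the full-text
search out of the btree entirely and give it its own GIN index, leaving
the scalar facets in a separate ordinary index; a sketch, not a tested
recipe for this schema:

CREATE INDEX source_listings_fts
    ON source_listings
 USING gin (to_tsvector('english', full_listing));

CREATE INDEX source_listings_facets
    ON source_listings (region, city, listing_type, post_time);

The planner can then combine the two (e.g. via a bitmap AND) instead of
needing one giant composite index row per listing.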



[GENERAL] Postgres 9.0 Hiding CONTEXT string in Logs

2010-12-17 Thread Alex -

Hi,
I occasionally output information from my PL/Perl functions to show
progress or debug info like this:

elog(NOTICE, "Table some_table analyzed");

which generated the

NOTICE:  Table some_table analyzed

however, since upgrading to version 9.0 I also get this annoying string:

CONTEXT:  PL/Perl function "func_some_fun"

Is there any way to suppress this, as I don't really need it?

Thanks
Alex

Re: [GENERAL] Postgres 9.0 Hiding CONTEXT string in Logs

2010-12-20 Thread Alex -

No reply... I guess it's not possible then to switch that off.

Alex
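
For what it's worth, the CONTEXT line can at least be hidden on both
sides; a sketch:

\set VERBOSITY terse            -- in psql; drops DETAIL/HINT/CONTEXT lines

log_error_verbosity = terse     # in postgresql.conf, for the server log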

[GENERAL] Alter Column from inet to inet[]

2012-03-14 Thread Alex -

Hi,
I need to change a column type from inet to inet[], but the ALTER command
always gives me the following error:

ERROR:  column "access_ip" cannot be cast to type inet[]

ALTER TABLE users ALTER COLUMN access_ip SET DATA TYPE inet[] USING
  access_ip::inet[];

Tried various things but no luck. Couldn't find anything in the manuals
either.
Can anyone point me in the right direction or tell me what's wrong with my
statement?
Thanks for any help.
Alex
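
There is no cast from inet to inet[]; wrapping the old value in an ARRAY
constructor in the USING clause should work instead. A sketch (the CASE
keeps NULLs as NULL rather than turning them into {NULL} arrays):

ALTER TABLE users
  ALTER COLUMN access_ip SET DATA TYPE inet[]
  USING CASE WHEN access_ip IS NULL THEN NULL
             ELSE ARRAY[access_ip] END;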

[GENERAL] PG_RESTORE/DUMP Question

2003-10-29 Thread Alex
Hi,

I have a test system that is setup the same as a production system and 
would like to frequently copy the database over.
pg_dump takes a few hours and even sometimes hangs.

Are there any reasons not to simply just copy the entire data directory 
over to the test system? I could not find any postings on the net 
suggesting otherwise. Is there anything to pay attention to?

Thanks for any advice
Alex




[GENERAL] SELECT question

2003-11-04 Thread Alex
Hi,

I have a bit string, 7 bits, every bit representing a day of the week,
e.g. 1110011.
Is there an easy way to translate/format that string in a query?
I want to give the string back with a '-' for every 0 and the first char
of the Day for every '1'.
example 1100111 = SM--TFS.

thanks for any suggestions
Alex
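
On modern versions (string_agg needs 9.0+), a per-character sketch, shown
here against the literal '1100111'; replace the literal with the column
(cast bit strings to text first):

SELECT string_agg(CASE WHEN substr('1100111', i, 1) = '1'
                       THEN substr('SMTWTFS', i, 1)
                       ELSE '-' END, '' ORDER BY i) AS week_mask
  FROM generate_series(1, 7) AS i;
-- returns SM--TFS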








[GENERAL] Changes in Rel. 7.4

2003-11-05 Thread Alex
Hi,
is there a document available describing the changes / additions in 7.4 ?
Thanks
Alex




[GENERAL] Question on Select

2003-11-09 Thread Alex
Hi,
I have two tables,
 Table A: item_id
 Table B: item_id,  ref_code
and I want to list all ref_codes in table B that are not referenced by
Table A.
Table A has about 3 million records / table B 200.

What is the best way to do that ?

Thanks
Alex
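
A NOT EXISTS anti-join is the usual shape for this; a sketch using the
column names above:

SELECT B.ref_code
  FROM table_b B
 WHERE NOT EXISTS (SELECT 1
                     FROM table_a A
                    WHERE A.item_id = B.item_id);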




[GENERAL] Concatenation in SELECT

2003-11-21 Thread Alex
Hi,
is it possible to concatenate two columns (of the same or different data
types) into one,
like First Name + Last Name, or ZIP + City, etc.?

Thanks
Alex
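
The || operator does this, with a cast where a column isn't already text;
a sketch with hypothetical column names:

SELECT first_name || ' ' || last_name AS full_name,
       zip::text  || ' ' || city      AS zip_city
  FROM addresses;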




Re: [GENERAL] SELECT Question

2003-11-21 Thread Alex
All,
thanks for the many suggestions
Alex
Manfred Koizar wrote:

On Thu, 20 Nov 2003 16:52:37 +0900, Alex <[EMAIL PROTECTED]> wrote:
 

Is there an easy way to write a select statement that returns me the
frist free number or any within the range of 200?
For example if 1-30, and 32-50 are occupied then i would like to fill in
the new entry with id 31.
 

Fortunately this is not the performance mailing list :-)

First free number:
SELECT max(t1.id) + 1
 FROM t AS t1 INNER JOIN t AS t2
  ON (t1.id < 200 AND t1.id < t2.id AND t2.id <= 200)
GROUP BY t2.id
HAVING max(t1.id) + 1 < t2.id
ORDER BY t2.id
LIMIT 1;
Make sure that there is always a row with id=0 and a row with id=200.

Any free number:
SELECT id - 1
 FROM t
WHERE 1 < id AND id <= 200
  AND NOT EXISTS (SELECT * FROM t AS t2 WHERE t2.id = t.id - 1)
LIMIT 1;
Always having a row with id=200 helps avoid unwanted corner cases.

One more:
SELECT coalesce(max(id), 0) + 1
 FROM t
WHERE id <= 200
  AND id = (SELECT count(*) FROM t AS t2 WHERE t2.id <= t.id);
This should work without any dummy rows.  And it will not work, if id
is not unique or there is any row with id < 1.
Servus
Manfred
 





[GENERAL] PG_DUMP question

2003-12-01 Thread Alex
Hi,
is there a way to pass a password to the pg_dump command to avoid being
prompted for it?
I want to execute the dump from a Perl or shell script and don't want to
open access for the user in the pg_hba.conf.

I tried
PGUSER=userid PGPASSWD=passwd pg_dump -Fc -f dump.db mydb;
Thanks
Alex
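
For what it's worth, the environment variable is spelled PGPASSWORD, not
PGPASSWD, so the same command should work as:

PGUSER=userid PGPASSWORD=passwd pg_dump -Fc -f dump.db mydb

(On later releases a ~/.pgpass file is the cleaner way to avoid putting
the password in the environment.)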




[GENERAL] Slow Performance with 7.4.1

2004-01-05 Thread Alex
Hi,
I just installed pg 7.4.1 on a new system P4, 3.2Gh HT, 1GB Mem, RH9 and 
discovered that some of the processes have become extremely slow. Mainly 
I am talking about a SELECT COUNT(*) or VACUUM.

Before running the same database on a P4. 2Ghz 412MB mem, I got the job 
done in some cases half the time.

The weird thing however is, that normal inserts seem to be much faster 
on the new system and I am wondering if it is Postgres or HW related.

Does anyone have any advice?

Thanks
Alex




Re: [GENERAL] Slow Performance with 7.4.1

2004-01-05 Thread Alex
The old system has a 80GB, 5200rpm ATA , the new system a 120GB, 7200rpm 
ATA. The new system should at least not be slower. Also, on the new 
system I have a separate partition for the database.

If I run a VACUUM or a SELECT COUNT(*) on a larger table (2-6 million
records) the system responds very slowly; the screen almost freezes.

Thanks
Alex
Doug McNaught wrote:

Alex <[EMAIL PROTECTED]> writes:

 

Hi,
I just installed pg 7.4.1 on a new system P4, 3.2Gh HT, 1GB Mem, RH9
and discovered that some of the processes have become extremely
slow. Mainly I am talking about a SELECT COUNT(*) or VACUUM.
Before running the same database on a P4. 2Ghz 412MB mem, I got the
job done in some cases half the time.
The weird thing however is, that normal inserts seem to be much faster
on the new system and I am wondering if it is Postgres or HW related.
Does anyone has an advise ?
   

What kind of disk hardware is on the two systems?  

-Doug

 





Re: [GENERAL] Slow Performance with 7.4.1

2004-01-05 Thread Alex
Yup, DMA is not enabled, probably that is the problem. Tried to enable 
it but get  an error HDIO_SET_DMA failed: Operation not permitted.

Thanks for the hint.
Alex
Doug McNaught wrote:

Alex <[EMAIL PROTECTED]> writes:

 

The old system has a 80GB, 5200rpm ATA , the new system a 120GB,
7200rpm ATA. The new system should at least not be slower. Also, on
the new system I have a separate partition for the database.
If I run a Vacuum or a SELECT COUNT(*) on a larger table (2-6 million
records) the systems responds very slow, the screen almost freezes.
   

Sounds like you may not have DMA enabled on the newer machine, or
there is some other kernel problem that slows the disk down.  You
might try running Bonnie or a similar disk benchmark to make sure you
are getting the right disk performance.
-Doug

 





Re: [GENERAL] SQL Exception Relation xxx does not exist

2004-01-20 Thread Alex
Kris,
thanks for the reply. I dont actually use temp tables in the function 
(not that I know of) but I did truncated and reloaded a few tables incl. 
recreating indices the previous day, however the line no. indicated in 
the serverlog does not point to these sql calls.

I am now re-starting tomcat every night which is not a bad thing anyway 
but still am wondering what the real reason could be.

Alex

Kris Jurka wrote:

On Tue, 20 Jan 2004, Alex wrote:

 

Hi,
I am getting the following error when running an update from a JAVA
program using a Tomcat Connection Pool.
SQLException: Error Relation 215106760 does not exist

In the server log I see additional info Error occured while executing
PL/pgSQL function funcName
line 105 at select into variables
   

This error is likely the result of using a temp table without EXECUTE or
having one of your tables dropped (recreating it won't help).  plpgsql
caches query plans, but doesn't track the plans dependencies, so if any of
the underlying objects change you can get this error.
 

2. Could this be a problem with the connection pool of tomcat or the
postgres JDBC driver ?
   

The plans are cached once per backend, the connection pool keeps that same
backend open forever which means you can never safely change your
schema without restarting the pool.
 

3. how do i find out the name of the actual object referred by 215106760
in the error message?
   

SELECT relname FROM pg_class WHERE oid = 215106760;

Kris Jurka

 





[GENERAL] HELP with a Query

2004-02-21 Thread Alex
Hi,
I need some help with a query.
I use a reference table to look up some codes. Everything works well if the
reference table contains a reference, but the query fails if no reference
exists (Z.ref_code=123456).
What I need is to either return 0 or NULL into Z.prim_exch if no reference
can be found. What's the best way to do that? I tried a subquery but also
did not succeed.

SELECT A.sec_code,A.exch_code,A.price_date,Z.prim_exch,
  CASE
WHEN A.exch_code = Z.prim_exch THEN 1
ELSE 0
  END AS is_prim
FROM price_hist AS A, references AS Z
WHERE   A.price_date <= '20040219' AND A.sec_code=123456 AND Z.ref_code=123456 LIMIT 
10;
Thanks for any help.
Alex
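
A LEFT JOIN returns NULL for prim_exch when no reference row exists, and
the CASE then falls through to 0; a sketch (note that "references" is a
reserved word and may need quoting):

SELECT A.sec_code, A.exch_code, A.price_date, Z.prim_exch,
       CASE WHEN A.exch_code = Z.prim_exch THEN 1 ELSE 0 END AS is_prim
  FROM price_hist AS A
  LEFT JOIN "references" AS Z ON Z.ref_code = A.sec_code
 WHERE A.price_date <= '20040219'
   AND A.sec_code = 123456
LIMIT 10;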




[GENERAL] Running multiple databases PG vs MySQL

2004-03-27 Thread Alex
Hi,
what is the recommended way to run multiple databases under postgres.
In MySQL it is rather simple to give different users or websites their 
own database with all the access rights.

Any suggestion or links to documents are highly appreciated.

Alex









Re: [GENERAL] PG vs MySQL

2004-03-28 Thread Alex
Frank,
pls. apologize. Maybe my description was not so clear. What I was 
referring to was the fact that under MySQL you have more freedom to give 
individual users of a shared server rights to create and manage their 
databases In addition all databases are kept in separate directories 
unlike postgres.

As a server administrator, this makes life simpler and you don't need to
worry about other users messing around outside their designated
environment.

I guess one reason users compare MySQL with Postgres is that they see 
the benefits of postgres and intend to switch or at least look into it, 
but at the same time they dont want to compromise when it comes to ease 
of use and administration.
MySQL is still the default database offered by any web hosting company 
and if Postgres wants to become the designated db engine for these 
services or become the worlds no.1 open source db then i think lots of 
things need to be done. Take for example the admin interface (MySQL 
Administrator) for MySQL which is done very professionally or the ease 
of setting up Replication. Postgres still is quite far behind there and 
for normal users that know MySQL best the transition is probably a too 
big step and risk.

But then again, it might not be the aim of postgres to become that no1 
open source db and part of every web hosting environment. Instead rather 
to be an alternative for the serious databases for corporate use.

Might actually be quite interesting to start a discussion on this topic
here :-)

Cheers
Alex
Frank Finner wrote:

On Sun, 28 Mar 2004 14:24:15 +0900 Alex <[EMAIL PROTECTED]> sat down, thought long and 
then
wrote:
 

Hi,
what is the recommended way to run multiple databases under postgres.
In MySQL it is rather simple to give different users or websites their 
own database with all the access rights.

Any suggestion or links to documents are highly appreciated.

Alex

   

If you call "createdb -?" within a shell you will get the following:

---
createdb creates a PostgreSQL database.
Usage:
 createdb [OPTION]... [DBNAME] [DESCRIPTION]
Options:
 -D, --location=PATH   alternative place to store the database
 -E, --encoding=ENCODING   encoding for the database
 -O, --owner=OWNER database user to own the new database
 -T, --template=TEMPLATE   template database to copy
 -e, --echoshow the commands being sent to the server
 -q, --quiet   don't write any messages
 --helpshow this help, then exit
 --version output version information, then exit
Connection options:
 -h, --host=HOSTNAME   database server host or socket directory
 -p, --port=PORT   database server port
 -U, --username=USERNAME   user name to connect as
 -W, --passwordprompt for password
By default, a database with the same name as the current user is created.

Report bugs to <[EMAIL PROTECTED]>.
---
So, to create a database from the shell, you have to call
"createdb -O <owner> <dbname>". You should do this as a valid database
superuser who may add databases!

Another way is to connect to template1 as the future owner:
"psql template1 <owner>"
and create the database with "CREATE DATABASE ...". Every valid user may
connect to template1. You don't need to use psql; this works for example
also with PgAdmin, and it's even simpler because you don't need to
remember the syntax, just click around.
Of course, the <owner> must be enabled to create databases, therefore it
must have been created either by calling "createuser -d <owner> ..." or
with an appropriate SQL command "CREATE USER <owner> ... CREATEDB" by a
database superuser, or again with a tool like PgAdmin.
BTW: Why do so many people make comparisons with MySQL syntax these days?
"MySQL can do this, in MySQL I can do this that way" and so on. Next time
I'd like to read something like "In DB2 I can simply add a database by
whatever." or "With MS-SQL-Server you just have to do the following 32
steps to create a backup." :-)
Who cares about how something works in MySQL? They are NOT the providers
of a standard everybody has to use!
Regards, Frank.



 





[GENERAL] timestamp changed on 7.1.1 ?

2001-05-15 Thread Alex

Hello there!

I've just installed PostgreSQL 7.1.1 and noticed that timestamp has
changed in this release.

for example:

CREATE TABLE "foo" ( test timestamp );
if you "\d foo" you will notice the column test is not a timestamp, it's a
"timestamp with time zone"

In all the docs I've read (7.2 and 7.1) it mentions 'timestamp' AND
'timestamp [ with time zone ]' as two different Data Types, and it used to
be like that in previous releases. But somehow, somewhere in 7.1.1 this is
no longer true; timestamp seems to be forced to timestamp with time zone.

anyway, how can I make a timestamp data type be output without the time
zone?
I just want the common "YYYY-MM-DD HH:MM:SS" format, not "YYYY-MM-DD
HH:MM:SS-timezone_offset".

I expect to get this result *WITHOUT* rewritting existing queries.

Any opinions would be great.

Thanks in advance

Alex
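
For what it's worth, a sketch of declaring the column explicitly (this
spelling is accepted on later releases; I'm not certain 7.1 itself accepts
it, and 7.3 changed the bare "timestamp" default back to without time
zone):

CREATE TABLE "foo" (test timestamp without time zone);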





[GENERAL] Timestamp Format

2001-05-17 Thread Alex

how can I make a timestamp data type be outputted without the time zone?

I expect:
"YYYY-MM-DD HH:MM:SS"
instead of the postgresql output:
"YYYY-MM-DD HH:MM:SS-timezone"

I know I can do that by using some functions on the select query, but my
idea is to get this result without altering the queries, like changing an
enviroment variable or a setting in the configuration file or whatever like
that, but NOT in the query.

from what I've read so far, SET DATESTYLE won't help at all; it just
changes the way the date and the timezone are printed, but it won't output
a strict 'YYYY-MM-DD HH:MM:SS'

can postgresql do this at all (once again, without using functions or
altering the query)

Thanks in advance, Alex





[GENERAL] plpgsql FOUND Variable

2003-08-28 Thread Alex
Hi,
when I run a SELECT INTO and the result is 0 rows, I still get a positive
result when using IF FOUND THEN.

Is this a problem, or do we need to use ROW_COUNT instead?

Alex





[GENERAL] plpgsql , dynamic queries

2003-08-28 Thread Alex
Hi,
I am having problems forming a dynamic query that can be used in
EXECUTE statements.

Problem:
I have database fields that are either INT or VARCHAR. Both can be NULL.
If I use variables that hold the db fields' values... how do I need to
quote them to pass them to the query string?

example:

_var1 := NULL;
_var2 := ''NULL'';
or
queryStr := ''WHERE name NOTNULL'';
queryStr := ''WHERE name ='' || _var2;
_var3 := ''somename'';
queryStr := ''SET name='' || _var3;

I tried combinations but ended up using 4 or 6 single quotes. Is there an
easy way to form such query strings?

Any help would be appreciated.

Thanks
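
The quote_literal() and quote_ident() helpers keep the quoting manageable
inside EXECUTE; a minimal sketch (old-style single-quoted function source,
hence the doubled quotes; mytable and _id are hypothetical):

queryStr := ''UPDATE mytable SET name = '' || quote_literal(_var3)
         || '' WHERE id = '' || _id;
EXECUTE queryStr;

Note that quote_literal(NULL) yields NULL, so a NULL value still has to be
handled as a separate case.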







Re: [GENERAL] plpgsql FOUND Variable

2003-08-28 Thread Alex
Tom,
I am using 7.3.4
The problem

_tmpRec RECORD;
_name   VARCHAR;

SELECT INTO _tmpRec * FROM address WHERE name = _name AND status = ''1''
  AND last_name NOTNULL;
IF FOUND THEN
  RETURN ''found'';
ELSE ...

The above query does not produce any results; executed in psql the result
is 0 rows. However, I do get the return string "found" back.

Alex

Tom Lane wrote:

Alex <[EMAIL PROTECTED]> writes:
 

when I run a SELECT INTO and the result is 0 or zero row, then I still 
get a positive result if
using IF FOUND THEN
   

I recall Neil Conway fixed some problems with FOUND a version or two
back.  If you are not on 7.3.*, update.  If you are, let's see the
details.
			regards, tom lane

 





[GENERAL] Question Join/Subselect

2003-08-28 Thread Alex
Hi,

I have some problems with creating a query that will replace values in 
one table from another one.

Table 1:
userName : refCode1 : refCode2
--
alex :  12  : 24
Table 2:
refCode :  ActualCode
-
12  AA
24  BB
Result Desired
userName : refCode1 : refCode2
--
alex :  AA  : BB
I need to crete a view that returns me Table1 but replaces refCode1, 
refCode2 with the ActualCode.

I did not have any success with Joins or Subselects so far.

Thanks for any advise

Alex
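
Joining the lookup table once per column is the standard shape for this;
a sketch for the view, with hypothetical table names table1/table2:

CREATE VIEW table1_resolved AS
SELECT t.userName,
       r1.ActualCode AS refCode1,
       r2.ActualCode AS refCode2
  FROM table1 t
  JOIN table2 r1 ON r1.refCode = t.refCode1
  JOIN table2 r2 ON r2.refCode = t.refCode2;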





[GENERAL] Date Time Function / Age

2003-08-28 Thread Alex
Hi,
is there an easy way to get the Difference between 2 dates in Days... as 
an integer?

age('date1','date2') results in  x years y days or something similar. 
but I would like to
have the result in number of days as an integer.

Thanks for your help

Alex
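
Subtracting one date from another yields an integer number of days
directly; a sketch:

SELECT '2003-09-30'::date - '2003-08-28'::date;   -- 33

-- for timestamps, take the day field of the resulting interval:
SELECT extract(day FROM now() - '2003-08-28 00:00'::timestamp);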





Re: [GENERAL] Arrays and Indices / Foreign Keys

2003-08-29 Thread Alex
Joe, that is good news.
When will 7.4 be available?
Also,
what I actually wanted is to check, when let's say ARRAY[1,2,3] is
inserted, that 1, 2 and 3 individually (not the entire array) are present
in the reference table.
Will that be available too?

Alex

Joe Conway wrote:

Alex wrote:

a) index array fields


It is possible in 7.4 beta, but not before.

b) to create a foreign key constraint from the array


Same answer. In 7.4:

# CREATE TABLE t1 (id int[] PRIMARY KEY);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
"t1_pkey" for table "t1"
CREATE TABLE
# INSERT INTO t1 VALUES(ARRAY[1,2,3]);
INSERT 2028925 1
# CREATE TABLE t2 (id int, fk int[] REFERENCES t1(id));
NOTICE:  CREATE TABLE will create implicit trigger(s) for FOREIGN KEY 
check(s)
CREATE TABLE
# INSERT INTO t2 VALUES(1, ARRAY[1,2,3]);
INSERT 2028935 1
# INSERT INTO t2 VALUES(2, ARRAY[3,4,5]);
ERROR:  insert or update on "t2" violates foreign key constraint "$1"
DETAIL:  Key (fk)=({3,4,5}) is not present in "t1".

Joe






[GENERAL] Fast Table Replication / Mirroring

2003-08-29 Thread Alex
Hi,
I saw some replication projects but think this might be overkill for my
intentions.

I have two databases, Primary / Secondary, where the Secondary is
read-only for users. I need to mirror tables when changes occur on a
table. That may happen a few times a day.
Tables are in the area of a few 100k records.

A) Is there an easy way to copy the data from Primary to Secondary?
Can this be triggered by updates on the primary table?
B) Is there a fast way to check if table A on the Primary and Secondary
servers are in sync?
Not the number of records but the contents.

Any advice would be appreciated.

Alex
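
For B), one low-tech sketch on modern versions (string_agg needs 9.0+;
assumes a primary key id to fix the row order) is to compare a content
checksum computed on each side:

SELECT md5(string_agg(t::text, '|' ORDER BY id)) AS table_checksum
  FROM mytable t;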





Re: [GENERAL] Quetions on Joins

2003-09-01 Thread Alex
Ron,
the idea is to provide a table where users can define filters. But this
table may well be empty.

Alex

Ron Johnson wrote:

On Sun, 2003-08-31 at 11:56, Alex wrote:
 

Hi,

I have a query where I want to filter out records from table_a if a 
field in table_a matches in table table_b. Basically table_b defines the 
filter.

If table_b however is empty i dont get any results

SELECT A.value_one FROM table_a AS A, table_b AS B WHERE  A.value_two <> 
B.value_two;
or
SELECT A.value_one FROM table_a AS A, table_b AS B WHERE  A.value_two <> 
B.value_two AND B.value_two NOTNULL;

Only work if the there is a value in table_b.
Could anyone tell me if there is a way to do that ?
   

If the filter is empty, how can you filter anything?

 

PS: I'd like to thank here persons who reply rather than sending the 
message per mail.
   

What does that mean?

 





[GENERAL] Prevent from Deletion

2003-09-01 Thread Alex
Hi,
I have tables that have default records that must not be deleted or 
modified.
Is there an easy way to do this, like setting a trigger on the primary
key value?

Alex
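
A BEFORE trigger can veto the change; a sketch, assuming the protected
default rows are marked by id < 1000 (use whatever marks them in your
schema):

CREATE OR REPLACE FUNCTION protect_defaults() RETURNS trigger AS '
BEGIN
    IF OLD.id < 1000 THEN
        RAISE EXCEPTION ''default rows may not be modified or deleted'';
    END IF;
    IF TG_OP = ''DELETE'' THEN
        RETURN OLD;
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER protect_defaults_trg
    BEFORE UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE protect_defaults();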





Re: [GENERAL] SELECT Question

2003-09-01 Thread Alex
Jeffrey,
second solution is a beauty... thanks a lot.
Alex

Jeffrey Melloy wrote:

If I'm understanding you correctly, you can do something like:

select cola,
colb,
exists
 (select 'x'
  from tableb
  where colc = colb)
from tablea
Since that has a subselect, you may get better performance with 
something like this:
select   cola,
 colb,
case when colc is null
 then 'f' else 't' end as exists
from table1 left join table2 on colb = colc;

jmelloy=# create table table1(cola serial, colb char);
NOTICE:  CREATE TABLE will create implicit sequence 'table1_cola_seq' 
for SERIAL column 'table1.cola'
CREATE TABLE
jmelloy=# create table table2 (colc char);
CREATE TABLE
jmelloy=# insert into table1 (colb) values ('A');
INSERT 1551538 1
jmelloy=# insert into table1 (colb) values ('B');
INSERT 1551539 1
jmelloy=# insert into table1 (colb) values ('a');
INSERT 1551540 1
jmelloy=# insert into table2 values ('B');
INSERT 1551541 1
jmelloy=# select cola, colb, exists (select 'x' from table2 where colc 
= colb) from table1;
 cola | colb | ?column?
--+--+--
1 | A| f
2 | B| t
3 | a| f
(3 rows)
jmelloy=# select cola, colb, case when colc is null then 'f' else 't' 
end as exists from table1 left join table2 on colb = colc;
 cola | colb | exists
--+--+
1 | A| f
2 | B| t
3 | a| f
(3 rows)
On Sunday, August 31, 2003, at 12:03  PM, Alex wrote:

Hi,

I need to form a query where i can add some columns based on the result.

Table A
ColA, ColB
--
1  A
2  B
3  A
Table B
ColC

A
If A exists if would like the result back as
1  A   OK
2  B   NG
3  A   OK
Is it possible to replace the value in the query ?

Thanks
Alex













[GENERAL] Cannot Delete

2003-09-18 Thread Alex
Hi,
I have a table with 2.5 million records which I try to delete. I have
several constraints on it too.
I tried to delete the records using DELETE but it does not seem to work;
the delete runs forever, for hours...
I cannot TRUNCATE it as it complains about foreign keys.

What is the problem?

Thanks
Alex
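
The classic cause is foreign keys pointing at this table from child
tables whose referencing columns aren't indexed, so every deleted row
forces a sequential scan of each child table; a sketch of the usual fix,
with hypothetical names:

CREATE INDEX child_table_fk_idx ON child_table (parent_id);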




[GENERAL] Arrrr... date formatting.

2005-09-19 Thread Alex Turner
Shiver me timbers, whats up with to_char()?!

Avast!  Is there no way to get a month without it being padded to
nine characters?  It makes using a database to actually format a
date for output kinda impossible?

I'll have to walk the plank if I can't get an answer, the crew will
mutiny and leave me on an island with a one-shot pistol and the sight
of me sails disappearing over the horizon!

Alex Turner
NetEconomist
P.S. For those who didn't know - it's national talk like a pirate day.


Re: [GENERAL] Arrrr... date formatting.

2005-09-19 Thread Alex Turner
50 pieces of eight to that man there!!

You are exactly right, the FM prefix is exactly what I'm seeking!
I missed that table right below the main formatting table that describes
the prefixes.

May you find much buried treasure,

Alex Turner
NetEconomist

On 9/19/05, Dianne Yumul <[EMAIL PROTECTED]> wrote:
> Maybe you want the FM prefix, i.e. to_char(current_timestamp, 'FMMonth').
> See the docs for more info:
> http://www.postgresql.org/docs/8.0/interactive/functions-formatting.html#FUNCTIONS-FORMATTING-EXAMPLES-TABLE
>
> On Sep 19, 2005, at 8:20 AM, Alex Turner wrote:
> > Shiver me timbers, whats up with to_char()?!
> > [...]


Re: [GENERAL] character varying == text?

2005-09-19 Thread Alex Turner
I thought a char field was supposed to return a padded string, and varchar
was supposed to return a non-padded string?

I just checked though:

create table test (
    stuff char(10)
);

insert into test values ('foo');

select stuff || 'lemon' from test;

This returns 'foolemon', not 'foo       lemon' as I would have expected.

Alex Turner
NetEconomist

On 9/15/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> CSN <[EMAIL PROTECTED]> writes:
> > Just something I was curious about - is there any
> > difference at all between "character varying" (in the
> > SQL spec) without a length specified and "text" (not
> > in the SQL spec)?
>
> The SQL standard doesn't allow "character varying" without a length spec.
> But yeah, in Postgres they're essentially the same thing.
>
>               regards, tom lane


Re: [GENERAL] Arrrr... date formatting.

2005-09-19 Thread Alex Turner
Add to that they just re-released Sid Meier's Pirates, and I'm a hopeless
case, mateys.

Alex

On 9/19/05, IanL PostgreSQL Lists <[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
> > May you find much buried treasure,
>
> Somebody PLEASE burn his Privates of the Caribbean DVD!

Re: [GENERAL] Slow search.. quite clueless

2005-09-20 Thread Alex Turner
I"m by no means an expert on this, and perhaps someone with more
knowledge can help, but it looks to me like the planner estimate and
the actual cost are significantly different which to me means that an
analyze is required, or/and increase the stats on these tables would be
usefull.  Also I'm wondering if you can avoid the dereference oid
lookup by created the index as keyword,product_id instead of just
keyword.

Alex Turner
NetEconomistOn 9/20/05, Oleg Bartunov <oleg@sai.msu.su> wrote:
contrib/tsearch2 ( http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/ )might works for you. It might because performance depends oncardinality of your keywords.
OlegOn Tue, 20 Sep 2005, Yonatan Ben-Nes wrote:> Hi all,>> Im building a site where the users can search for products with up to 4> diffrent keywords which all MUST match to each product which found as a
> result to the search.>> I got 2 tables (which are relevant to the issue :)), one is the product table> (5 million rows) and the other is the keyword table which hold the keywords> of each product (60 million rows).
>> The scheme of the tables is as follows:>>  Table
"public.product">  
Column  
|
Type  |  Modifiers> +---+->
product_id
| text  | not
null>
product_name  
| text  | not
null> retail_price   | numeric(10,2) | not null> etc...> Indexes:>"product_product_id_key" UNIQUE, btree (product_id)>> Table "public.keyword
">   Column| Type  | Modifiers> -+---+---> product_id  | text  | not null> keyword | text  | not null> Indexes:>"keyword_keyword" btree (keyword)
>> The best query which I succeded to do till now is adding the keyword table> for each keyword searched for example if someone search for "belt" & "black"> & "pants" it will create the following query:
>> poweraise.com=# EXPLAIN ANALYZE SELECT> product_id,product_name,product_image_url,short_product_description,long_product_description,discount,discount_type,sale_price,retail_price> FROM product INNER JOIN keyword t1 USING(product_id) INNER JOIN keyword t2
> USING(product_id) INNER JOIN keyword t3 USING(product_id) WHERE> t1.keyword='belt' AND t2.keyword='black' AND t3.keyword='pants' LIMIT 13;>>   QUERY PLAN> ---
> Limit  (cost=37734.15..39957.20 rows=13 width=578) (actual> time=969.798..1520.354 rows=6 loops=1)>   ->  Hash Join  (cost=37734.15..3754162.82 rows=21733 width=578) (actual> time=969.794..1520.337
 rows=6 loops=1)> Hash Cond: ("outer".product_id = "inner".product_id)>
->  Nested Loop  (cost=18867.07..2858707.34
rows=55309 width=612)> (actual time=82.266..1474.018 rows=156 loops=1)>  
->  Hash Join  (cost=18867.07..2581181.09
rows=55309 width=34)> (actual time=82.170..1462.104 rows=156 loops=1)>
Hash Cond: ("outer".product_id = "inner".product_id)>
->  Index Scan using keyword_keyword on keyword t2> (cost=0.00..331244.43 rows=140771 width=17) (actual time=0.033..1307.167> rows=109007 loops=1)>  
Index Cond: (keyword = 'black'::text)>
->  Hash  (cost=18851.23..18851.23 rows=6337
width=17)> (actual time=16.145..16.145 rows=0 loops=1)>  
->  Index Scan using keyword_keyword on keyword t1> (cost=0.00..18851.23 rows=6337 width=17) (actual time=0.067..11.050 rows=3294> loops=1)>
Index Cond: (keyword = 'belt'::text)>  
->  Index Scan using product_product_id_key on product> (cost=0.00..5.01 rows=1 width=578) (actual time=0.058..0.060 rows=1> loops=156)>
Index Cond: (product.product_id = "outer".product_id)>
->  Hash  (cost=18851.23..18851.23 rows=6337
width=17) (actual> time=42.863..42.863 rows=0 loops=1)>  
->  Index Scan using keyword_keyword on keyword t3> (cost=0.00..18851.23 rows=6337 width=17) (actual time=0.073..36.120 rows=3932> loops=1)>
Index Cond: (keyword = 'pants'::text)> Total runtime: 1521.441 ms> (17 rows)>> Sometimes the query work fast even for 3 keywords but that doesnt help me if> at other times it take ages
>> Now to find a result for 1 keyword its really 

Re: [GENERAL] PostgreSQL Gotchas

2005-10-06 Thread Alex Turner
Compared to MySQL ditching referential integrity because of a typo, I
would consider these 'gotchas' extremely minor, hence the reason I use
Postgresql and not MySQL.  Postgresql does what you expect from an
RDBMS system out of the box in 99.99% of cases.  I don't have to
toggle things on specially, or install additional modules, or check
every time I create a foreign key that the system will actually enforce
it.

In short, Postgresql makes less work for a DBA than MySQL when you
actually care about your data.  It's cleaner, faster and easier to
deal with in all cases of real world database applications.

I'm not even gonna talk about the query planner in MySQL trying to deal
with an eight-way join; let's just say that if you plan on using
anything even near third normal form, MySQL will dog.  Oh - yeah - and
the fact that it doesn't scale...

If those 'gotchas' are all one has against Postgresql... I don't know
why people are still even using Oracle or DB2 ;) MySQL you say? I
thought we were talking about RDBMSes here.

MySQL is to Linux what Jet is to Windows IMHO. Oh wait - Jet has foreign
keys by default...

Alex

On 10/6/05, Aly S.P Dharshi <[EMAIL PROTECTED]> wrote:
> http://sql-info.de/postgresql/postgres-gotchas.html
>
> Any comments from folks on the list?
>
> Cheers,
>
> Aly.
>
> --
> Aly S.P Dharshi
> [EMAIL PROTECTED]
>
>  "A good speech is like a good dress
>   that's short enough to be interesting
>   and long enough to cover the subject"


Re: [GENERAL] PostgreSQL Gotchas

2005-10-06 Thread Alex Turner
Sorry.

Alex

On 10/6/05, Gavin M. Roy <[EMAIL PROTECTED]> wrote:
> This sure sounds like a flamewar bait email?
>
> On Oct 6, 2005, at 11:07 AM, Aly S.P Dharshi wrote:
> > http://sql-info.de/postgresql/postgres-gotchas.html
> >
> > Any comments from folks on the list?
>
> Gavin M. Roy
> 800 Pound Gorilla
> [EMAIL PROTECTED]


Re: [GENERAL] PostgreSQL 8.1 vs. MySQL 5.0?

2005-10-06 Thread Alex Turner
Support for Windows 98 was in fact extended to June 2006:
http://support.microsoft.com/gp/lifean1

Alex

On 10/6/05, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> On Thu, 2005-10-06 at 21:40 +0300, Andrus wrote:
> > > Just so I know (and am armed ;) ), are there any new
> > > comparable features in MySQL 5.0 that aren't in
> > > PostgreSQL up to the forthcoming 8.1? AFAIK, PG just
> > > lacks updatable views (which are on the TODO).
> >
> > PostgreSQL does not run on Windows 98.
> >
> > There are a LOT of customers running Windows 98.
> >
> > So I must switch to Firebird, am I right?
>
> Over MySQL, yes. However, since not even Microsoft supports Windows 98
> anymore, it is better to update them.
>
> Sincerely,
>
> Joshua D. Drake
>
> --
> Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
> PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
> Managed Services, Shared and Dedicated Hosting
> Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/

Re: [GENERAL] Cluster/redundancy question

2005-10-11 Thread Alex Stapleton
Don't forget that MySQL replication also has a habit of silently  
failing on you and in my experience needs continuous monitoring to  
make sure it actually keeps reasonably up to date (e.g. not days of  
data behind on the slaves.)


That was a while ago though, maybe they fixed it?
British Telecom use Slony as well, so I guess it can't be *that* bad?

On 11 Oct 2005, at 17:38, Scott Marlowe wrote:


On Mon, 2005-10-10 at 15:16, Travis Brady wrote:


All,

Forgive me if this has been answered before, but I've searched the
archives and the net extensively and have come up mostly empty so  
far.


I'm working at convincing my firm to implement a postgresql database
cluster.
Specifically, we'd like to get a few machines running to be more
available and to protect against any kind of failure.

Right now people are leaning toward MySQL and Emic, but I've been
evangelizing for postgres for the standard reasons.
How does one go about setting this kind of thing up?
I've looked at Bizgres and Pervasive but I'm not sure if that's
necessarily what we want.



If you're looking at the replication systems FIRST, then you've  
missed a

few steps.  Look at what the databases, sans replication, can bring to
the table, and what they lack.

If you MUST have a database that checks input against check  
constraints,

then MySQL is out.  It simply doesn't support them.

If you need a database that converts the number 234987234987234987234
into 2147483647 when you insert it into an int field and doesn't throw
an error or warning, then MySQL should be your choice.

How important is good transactional performance?  MySQL has serious
limitations for certain storage engines and you need to compare those
things before looking at replication.

Don't get me wrong, if replication is one of the things you need, then
consider it, but if you're putting bad data into your database, what
good is replicating it gonna do ya?







Re: [GENERAL] PostgreSQL missing in SuSE 10?

2005-10-11 Thread Alex Turner
I know this is a bit off-topic, but this is part of the problem I'm
encountering.  I can't find a definitive document on SuSE licensing and
what is 'eval' and what I have to pay for... can I still download the
full version for free? Or is that just the eval?
It's kind of confusing.  Does anyone have information or a link to
documentation that clarifies that stuff?

Alex Turner
NetEconomist

On 10/11/05, Gregory Youngblood <[EMAIL PROTECTED]> wrote:
> On Mon, 2005-10-10 at 11:04 -0700, Steve Crawford wrote:
> > > Gregory Youngblood <[EMAIL PROTECTED]> writes:
> > > > I've been using SuSE and PostgreSQL for a fairly long time.
> > > > Recently (last 12 months), I've noticed that the 9.x (9.2 and
> > > > 9.3 specifically) versions of SuSE do not include PostgreSQL on
> > > > the CD install -- only on the DVD. At first (9.2), I thought it
> > > > was just a glitch that didn't get fixed in 9.3. Now, it seems to
> > > > have been deliberate.
> >
> > The real problem is that SuSE is not upfront about the difference
> > between the CD and the DVD versions of their product. Someone (me)
> > picking up the box would assume that the versions are the same but
> > packaged on two different media, but they (um, me) would be wrong.
>
> You are absolutely correct! It used to be that the DVD and the CD
> versions were identical, or pretty close to it. I never had to use the
> DVD, and in fact, SuSE DVDs have been notoriously wishy-washy, unable
> to complete the installation process on about 20% of the DVD readers
> I've used (no other problems for those drives with other disks),
> requiring the CDs to be used for an install.
>
> And to move it back on topic -- now, it appears PostgreSQL is not on
> the DVD either. For the record, this is the Eval SuSE 10.0 DVD.


Re: [GENERAL] PostgreSQL 8.1 vs. MySQL 5.0?

2005-10-13 Thread Alex Turner
Actually, to me it seems like Postgres is a perfect partner for MS
Access.  Throw out Jet, and use Pgsql.  It's infinitely better than Jet,
so operating in a Win98 environment seems reasonable in this scenario.

I swear you could build a business just building MS Access apps on
Postgresql databases so that they can actually _scale_ when a business
grows.

Alex

On 10/13/05, Chris Travers <[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
> > Support for windows 98 was infact extended to June 2006:
> > http://support.microsoft.com/gp/lifean1
>
> Right.
>
> And it was extended again last year, as it was supposed to end this
> last June, and last June, etc.  We will see if it is not extended
> again.  But if you are running a production database on Windows 98 you
> have bigger problems than support from Microsoft.
>
> Best Wishes,
> Chris Travers
> Metatron Technology Consulting


Re: [GENERAL] PostgreSQL 8.1 vs. MySQL 5.0?

2005-10-13 Thread Alex Turner
> > Instance Manager:  Uniquely MySQL.  It allows things like starting and
> > stopping the database remotely.
>
> I cannot think of a reason ever to need this when we have OpenSSH.

I'm just curious, but how does this work for a Windows box?

> > Federated Storage Engine:  Allows MySQL to access tables in other
> > servers like they are here.  No real direct equivalent in PostgreSQL,
> > but dblink provides similar functionality.
>
> DBI-Link also has a wider range of functionality and can access tables
> on MySQL, Oracle, DB2, etc. servers.

If I had just one wish for postgresql it would be to support
cross-database queries like Oracle.  This is a HUGE pain in the
ass, and DBI-Link syntax is clunky as hell.

I would switch to Oracle tomorrow if I had the budget, just because of
this feature.  I have data across four and five databases that are
related, and I need to build cross database views, and do data munging
_easily_. DBI-Link is far from easy, and I suspect that its
performance is far from stellar, but I've not actually benched it.
For me this needs to be a core database feature.  I have certain
legal problems that are also an issue where I have to keep data that is
related in separate databases, and my clients _want_ me to cross join
it for select purposes, but I'm legally required to keep it in a
separate database.

Maybe it's just difference shock - Postgresql <> Oracle, so I'm scared ;),
but I don't like dblink very much ;)

Alex Turner
NetEconomist


Re: [GENERAL] PostgreSQL 8.1 vs. MySQL 5.0?

2005-10-13 Thread Alex Turner
I could, but it would breach the terms of our contract.  Our contract
with the data providers clearly specifies separate databases ;), so I'm
kind of tied down by the legalese.

I have certainly considered just putting them in schemas, but I talked to
legal and they didn't really like that idea ;).

Alex

On 10/13/05, Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> On Thursday, 13.10.2005, at 13:00 -0400, Alex Turner wrote:
> ...
> > If I had just one wish for postgresql it would be to support
> > cross-database queries like Oracle.  This is a HUGE pain in the ass,
> > and DBI-Link syntax is clunky as hell.
> > [...]
>
> Why not put them in separate schemas and tell the customers these
> are separate databases? From outside it looks exactly like it.
> You can constrain the users to the different schemas and still
> join between the tables at will. See schema search_path and
> stuff for sticking users to a schema.
>
> HTH
> Tino


Re: [GENERAL] PostgreSQL 8.1 vs. MySQL 5.0?

2005-10-13 Thread Alex Turner
Of course, but _legally_ we would be complying with the contract ;)

Alex

On 10/13/05, Scott Marlowe <[EMAIL PROTECTED]> wrote:
If separate databases are required by contract, and Oracle lets you
treat multiple databases like one big one, wouldn't using Oracle breach
your contract then?  In this case, PostgreSQL's schemas and Oracle's
separate databases are functionally identical, nomenclature aside.

On Thu, 2005-10-13 at 13:58, Alex Turner wrote:
> I could, but it would breach the terms of our contract.  Our contract
> with the data providers clearly specifies separate databases ;), so
> I'm kind of tied down by the legalese.
>
> I have certainly considered just putting them in schemas, but I talked
> to legal and they didn't really like that idea ;).
>
> Alex
>
> On 10/13/05, Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> On Thursday, 2005-10-13 at 13:00 -0400, Alex Turner wrote:
> ...
> > If I had just one wish for postgresql it would be to support
> > cross-database queries like Oracle.  This is a HUGE pain in the ass,
> > and DBI-Link syntax is clunky as hell.
> >
> > I would switch to Oracle tomorrow if I had the budget just because of
> > this feature.  I have data across four and five databases that are
> > related, and I need to build cross database views, and do data munging
> > _easily_. DBI-Link is far from easy, and I suspect that its
> > performance is far from stellar, but I've not actually benched it.  For
> > me this needs to be a core database feature.  I have certain legal
> > problems that are also an issue where I have to keep data that is
> > related in separate databases, and my clients _want_ me to cross join
> > it for select purposes, but I'm legally required to keep it in a
> > separate database.
>
> Why not put them in separate schemas and tell the customers these
> are separate databases? From outside it looks exactly like it.
> You can constrain the users to the different schemas and still
> join between the tables at will. See schema-searchpath and stuff
> for sticking users to a schema.
>
> HTH
> Tino


Re: [GENERAL] PostgreSQL 8.1 vs. MySQL 5.0?

2005-10-13 Thread Alex Turner
heh... anything's possible ;)  I guess we are okay for now then, seeing
that we are using PostgreSQL with no dblink ;)

Alex

On 10/13/05, Scott Marlowe <[EMAIL PROTECTED]> wrote:
I wouldn't be so sure of that.  It might be that in order to be
considered to be complying with the contract you have to set up Oracle
in such a way as to disable any database-to-database access / joining.
Seems to me the second you can run a query that hits both databases you
might well be in breach of contract, depending on the terminology used.

On Thu, 2005-10-13 at 14:44, Alex Turner wrote:
> Of course, but _legally_ we would be complying with the contract ;)
>
> Alex
>
> On 10/13/05, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> If separate databases are required by contract, and Oracle lets you
> treat multiple databases like one big one, wouldn't using Oracle
> breach your contract then?  In this case, PostgreSQL's schemas and
> Oracle's separate databases are functionally identical, nomenclature
> aside.
>
> On Thu, 2005-10-13 at 13:58, Alex Turner wrote:
> > I could, but it would breach the terms of our contract.  Our
> > contract with the data providers clearly specifies separate
> > databases ;), so I'm kind of tied down by the legalese.
> >
> > I have certainly considered just putting them in schemas, but I
> > talked to legal and they didn't really like that idea ;).
> >
> > Alex
> >
> > On 10/13/05, Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> > On Thursday, 2005-10-13 at 13:00 -0400, Alex Turner wrote:
> > ...
> > > If I had just one wish for postgresql it would be to support
> > > cross-database queries like Oracle.  This is a HUGE pain in
> > > the ass, and DBI-Link syntax is clunky as hell.
> > >
> > > I would switch to Oracle tomorrow if I had the budget just
> > > because of this feature.  I have data across four and five
> > > databases that are related, and I need to build cross database
> > > views, and do data munging _easily_. DBI-Link is far from easy,
> > > and I suspect that its performance is far from stellar, but
> > > I've not actually benched it.  For me this needs to be a core
> > > database feature.  I have certain legal problems that are also
> > > an issue where I have to keep data that is related in separate
> > > databases, and my clients _want_ me to cross join it for select
> > > purposes, but I'm legally required to keep it in a separate
> > > database.
> >
> > Why not put them in separate schemas and tell the customers these
> > are separate databases? From outside it looks exactly like it.
> > You can constrain the users to the different schemas and still
> > join between the tables at will. See schema-searchpath and stuff
> > for sticking users to a schema.
> >
> > HTH
> > Tino


Re: [GENERAL] On "multi-master"

2005-10-14 Thread Alex Turner
 
> multi-master.  It provides a certain amount of scaling, but nothing
> I've seen or heard suggests that the license cost couldn't just as
> easily and effectively be thrown at larger hardware for better
> scaling.  The really big reason to use RAC is five-nines situations:
> you're trying to make sure that even unlikely failures of your
> machines never cause the database to stop working (for suitably
> lawyer-understood values of "stop".  RAC remastering is not a
> zero-cost, nor even invisible, operation.  But from an application
> perspective, it can be made to look like "database is slow" as
> opposed to "database crashed").

So this is basically a multimaster synchronous replication solution
utilizing a shared disk architecture.  I generally agree with your
assessment that the license costs could be better spent on redundant
hardware and more scalable hardware.  Also if the shared disk fails, you
may lose everything after your last backup.

Of course that's highly unlikely, because in Oracle you have _two_
complete copies of your active database from your last backup with
archive redo logs, so in reality you would have to lose your _entire_
disk cluster; and if you have things organised by the book, you would
have archive redo on a separate controller, and preferably on a
separate array, for that very reason.

Oracle thought this out pretty well ;)
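
For the curious, that multiplexing of archived redo is just a couple of
init parameters on the Oracle side.  A rough sketch, assuming 9i-style
archive destinations and an spfile; the paths are hypothetical:

  -- Keep archived redo in two locations on separate controllers/arrays:
  ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u02/arch' SCOPE=BOTH;
  ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u03/arch' SCOPE=BOTH;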
 


Re: [GENERAL] On "multi-master"

2005-10-18 Thread Alex Stapleton


On 18 Oct 2005, at 15:59, Andrew Sullivan wrote:

On Sat, Oct 15, 2005 at 05:58:20PM -0700, Chris Travers wrote:

  And in the market we're talking
about "cheaper" is not the main consideration for "better than".  I
think other arguments are useful -- access to source (and therefore
auditability) is an obvious one -- but one needs to establish a
well-known set of practices around these things if one wishes to be
taken seriously for this kind of application.


The current market thinks like that, but I suspect that a lot of
small to medium sized companies which don't want to get sucked into
the Oracle consultancy / £16,000-per-CPU licensing vacuum are
currently prepared to simply choose a less good solution that just
happens to kinda get the job done.  The current market for these
solutions is made up of high-paying customers with very expensive
data, precisely because nobody has released a cheaper alternative.
There is no serious variation in price in the market, and hence the
client base doesn't change, because there isn't any real innovation.


Release a cheaper / free alternative and people will use it because  
they will have almost no reason not to. This means that cheaper and  
as good as does have a place in the market even if it's not a  
conventional solution. It just needs evidence and evangelism. The  
current market should not be the principal target.


I do agree that there being a single solution under PostgreSQL to
this problem is the best path, though; it is attractive to everyone
for there to be one way to do it, not just for the current users of
similar systems.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] On "multi-master"

2005-10-18 Thread Alex Stapleton


On 18 Oct 2005, at 16:57, Andrew Sullivan wrote:


On Tue, Oct 18, 2005 at 04:37:23PM +0100, Alex Stapleton wrote:


Release a cheaper / free alternative and people will use it because
they will have almost no reason not to. This means that cheaper and
as good as does have a place in the market even if it's not a
conventional solution. It just needs evidence and evangelism. The
current market should not be the principal target.



I agree with this; but I'm always concerned about something that's
_almost_ as good as the competition, but not quite there, being
pointed at as being "as good as" the competition.  That way lies an
invitation to the point-and-laugh responses that MySQL's so-called
cluster system has garnered: it's too dangerous to use for many
systems where the data is important enough, because the failure mode
is "near-complete catastrophe".  See another thread, where I talk
about the strategy some companies may be using of lumping PostgreSQL
in with other products, and then attacking the other product.
Irrelevance may be fallacious, but it makes for depressingly
successful marketing.



If MySQL advertised its bad points as well as its good points it
would not be nearly as dangerous, or as generally laughable as a
database. For example MySQL Cluster (NDB) is actually not that bad in
a lot of cases, and can probably be pretty useful in a lot of
situations. Unfortunately it seems to get mistaken for (possibly even
marketed as?) an actual database, rather than just a particularly
smart caching system.


A known quantity is often infinitely better than a homegrown or
unknown solution (which most homegrown ones really are), even if it
is not as good as RAC.


I agree that any solution which has a failure mode of nearly complete  
catastrophe is pretty much useless, but it's only dangerous (and an  
industry joke) if marketed using FUD in the same way MySQL do. I  
don't see any reason to worry about PostgreSQL getting bad (false)  
press, there are plenty of big names using PG, and plenty of  
commercial interest in it. It's not like PG should be aiming to  
render Oracle and so on redundant, it's the wrong way of thinking  
about it imo.
 


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-21 Thread Alex Turner

It appears that casting to a char() causes trailing spaces to be stripped (ignored) from the string:
mls=# select length('123   '::char(8));
 length
--------
      3
(1 row)


mls=# select length('123   '::char(8)::varchar(8));
 length
--------
      3
(1 row)


but:
mls=# select length('123   '::varchar(8));
 length
--------
      6
(1 row)

I'm not sure about anyone else, but I would personally consider that a
bug.  Should the length function return the declared length of a
fixed-length string, or the length of significant characters?  What
does the SQL standard say on this one?  I googled it a bit, but didn't
come up with much.
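
If the goal is to see the padded width rather than the trimmed character
count, octet_length() should report the blank-padded size.  A small
sketch; worth verifying on your own version, since this thread shows the
behaviour varies across releases:

  select length('123   '::char(8));           -- 3: trailing blanks ignored
  select octet_length('123   '::char(8));     -- 8: the blank-padded width
  -- comparison on char also ignores trailing blanks, per the SQL standard:
  select '123'::char(8) = '123   '::char(8);  -- t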

Alex Turner
NetEconomist
On 10/20/05, Marc G. Fournier <[EMAIL PROTECTED]> wrote:
On Thu, 20 Oct 2005, Doug Quale wrote:

> "Guy Rouillier" <[EMAIL PROTECTED]> writes:
>
>> Doug Quale wrote:
>>>
>>> # select 'a'::char(8) = 'a '::char(8);
>>>  ?column?
>>> ----------
>>>  t
>>> (1 row)
>>>
>>> Trailing blanks aren't significant in fixed-length strings, so the
>>> question is whether Postgresql treats comparison of varchars right.
>>
>> This result is being misinterpreted.
>>
>> select length('a'::char(8)) ==> 1
>> select length('a '::char(8)) ==> 1
>>
>> So it isn't that the two different strings are comparing equal.  The
>> process of casting them to char(8) is trimming the blanks, so by the
>> time they become fixed length strings, they are indeed equal.
>
> Huh??? What version of PG are you using?  On 7.4.9,
>
> test=# select length('a'::char(8));
>  length
> --------
>       8
> (1 row)
>
> test=# select length('a '::char(8));
>  length
> --------
>       8
> (1 row)
>
> The truncation you describe would simply be wrong.

ams=# select length('a '::char(8));
 length
--------
      1
(1 row)

ams=# select version();
                               version
----------------------------------------------------------------------
 PostgreSQL 8.0.2 on i386-portbld-freebsd4.11, compiled by GCC 2.95.4
(1 row)

ams=#

Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]           Yahoo!: yscrappy              ICQ: 7615664


---(end of broadcast)---
TIP 6: explain analyze is your friend

