[SQL] Serialized Transaction Locking Issues

2003-02-04 Thread justin
Hello,

I'm currently in the midst of working on a serializable transaction 
which drops indexes on several tables, does a bulk copy, and rebuilds 
the indexes. Based on what I've read it seemed as though I'd be able to 
concurrently execute read only queries against these tables, returning 
results based on the table snapshot from before the serialized 
transaction began. However, this doesn't work. A simple read-only select 
query waits until the serialized transaction is finished before 
returning results.

Based on the user documentation, specifically 
(http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=explicit-locking.html#LOCKING-TABLES), 
it would seem that the only lock in PostgreSQL that could block a 
simple select would be an ACCESS EXCLUSIVE lock, which is acquired 
only by DROP TABLE, ALTER TABLE, and VACUUM FULL, none of which I'm 
using. Yet I've observed this exact blocking behavior with DROP INDEX.
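For what it's worth, the blocking can be observed directly. A sketch (not from the original post; it assumes the pg_locks view of 7.3 and superuser access) run from a third session while the transaction above holds its locks:

```sql
-- Inspect locks held on trans_test from a separate session
-- while the DROP INDEX transaction is still open.
SELECT l.mode, l.granted, l.pid
FROM pg_locks l
WHERE l.relation = 'trans_test'::regclass;
-- An AccessExclusiveLock row here would explain the blocked SELECT:
-- DROP INDEX takes ACCESS EXCLUSIVE on the index's parent table,
-- even though the documentation page cited above doesn't list it.
```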

Please excuse my futile attempt to outline two concurrent transactions 
here:

testdb=# \d trans_test
     Table "public.trans_test"
 Column |  Type   | Modifiers
--------+---------+-----------
 val    | integer |
Indexes: idx_trans_test btree (val)

testdb=# SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL 
SERIALIZABLE;
SET

[TRANSACTION 1] BEGIN;
BEGIN
[TRANSACTION 1] SELECT * FROM trans_test;
 val
-----
   1
   2

[TRANSACTION 2] SELECT * FROM trans_test;
 val
-----
   1
   2

[TRANSACTION 1] DROP INDEX idx_trans_test;
DROP INDEX

[TRANSACTION 2] SELECT * FROM trans_test;
... (Waiting)

[TRANSACTION 1] COMMIT;
COMMIT

(TRANSACTION 2 returns after TRANSACTION 1 COMMIT)
 val
-----
   1
   2

So is this a glitch, or am I missing some nuance of serializable 
transactions? In either case, I'm eager to figure out what's actually 
going on.

Thanks,

-justin




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] Create custom aggregate function and custom sfunc

2009-07-02 Thread justin

Jasmin Dizdarevic wrote:

 hi,
 
I have to create an aggregate function which evaluates a maximum text 
value, but with some conditions I have to take care of.

Is there a way to access the value set of each group?
e.g.:
 
customer ; seg

111 ; L1
111 ; L2
111 ; L1
222 ; L3
222 ; L3
222 ; L2
 
the result should look like this:
 
111: L1 - because L1 is higher than L2 and ratio of L1 : L2 is 2 : 1

222: L3 - because L3 is higher than L2 and ratio of L3 : L2 is 2 : 1
 
I hope you know what I mean.
 
ty
 
You don't have to create an aggregate function.  I had a similar problem 
where part numbers had to be ordered based on their contents; the first 
thing to do is split the value apart, then set the ordering you want.  
This gives you an idea of what you can do and what the results look like. 

If the data in the table is laid out as you describe, with 111 and 
L1 in different fields, it's very easy and you don't need an 
aggregate function:


Select '111', 'L1',
   regexp_replace('L1', '[0-9 ]+', ''),
   regexp_replace('L1', '[a-zA-Z ]+', '')::integer
union
Select '111', 'L3',
   regexp_replace('L3', '[0-9 ]+', ''),
   regexp_replace('L3', '[a-zA-Z ]+', '')::integer
union
Select '111', 'L2',
   regexp_replace('L2', '[0-9 ]+', ''),
   regexp_replace('L2', '[a-zA-Z ]+', '')::integer

order by 3, 4


If the data is '111;L1' in a single field, it's still very easy. For 
example:


Select split_part('111;L1', ';', 1),
   split_part('111;L1', ';', 2),
   regexp_replace(split_part('111;L1', ';', 2), '[0-9 ]+', ''),
   regexp_replace(split_part('111;L1', ';', 2), '[a-zA-Z ]+', '')::integer
union
Select split_part('111;L3', ';', 1),
   split_part('111;L3', ';', 2),
   regexp_replace(split_part('111;L3', ';', 2), '[0-9 ]+', ''),
   regexp_replace(split_part('111;L3', ';', 2), '[a-zA-Z ]+', '')::integer
union
Select split_part('111;L2', ';', 1),
   split_part('111;L2', ';', 2),
   regexp_replace(split_part('111;L2', ';', 2), '[0-9 ]+', ''),
   regexp_replace(split_part('111;L2', ';', 2), '[a-zA-Z ]+', '')::integer

order by 3, 4 desc
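That said, if the goal really is, per customer, the seg value that occurs most often (falling back to the "higher" seg on ties), a plain GROUP BY gets there without a custom aggregate. A sketch, assuming a table customer_segs(customer, seg) holding the sample data; DISTINCT ON is PostgreSQL-specific:

```sql
-- Keeps the first row per customer after sorting, i.e. the most
-- frequent seg, with ties broken by the larger seg value.
SELECT DISTINCT ON (customer)
       customer, seg
FROM customer_segs
GROUP BY customer, seg
ORDER BY customer, count(*) DESC, seg DESC;
```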




Re: [SQL] simple (?) join

2009-09-24 Thread justin






David W Noon wrote:

  On Thu, 24 Sep 2009 16:16:36 +0100, Gary Stainburn wrote about [SQL]
simple (?) join:

  
  
create table orders (
o_id serial primary key
...
);

create table orders_log (
ol_id serial primary key,
o_id int4 not null references orders(o_id),
ol_timestamp timestamp,
ol_user,
);

How can I select all from orders and the last (latest) entry from the 
orders_log?

  
  
SELECT * FROM orders
WHERE o_id IN (SELECT o_id FROM orders_log
   WHERE ol_timestamp = (SELECT MAX(ol_timestamp) FROM orders_log));

No joins required.
  


I don't think that is what he is requesting. As I read it, he also wants
the timestamp included in the result set.

A nested query: 

Select 
    orders.*, 
    (SELECT MAX(ol_timestamp) FROM orders_log where orders_log.o_id =
orders.o_id) 
From orders

Still another option is using a join 

Select 
    orders.*, ol_timestamp
    From orders 
    left join (SELECT MAX(ol_timestamp), o_id FROM orders_log group by
o_id) as JoinQuery on JoinQuery.o_id = orders.o_id 

The second one should be faster
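If the whole latest orders_log row is wanted rather than just its timestamp, a PostgreSQL-specific variant of the join (a sketch, not from the thread, using the same tables) would be:

```sql
-- DISTINCT ON keeps one row per o_id: the one with the greatest
-- ol_timestamp, thanks to the ORDER BY inside the subquery.
Select orders.*, lastlog.ol_timestamp, lastlog.ol_user
    From orders
    left join (SELECT DISTINCT ON (o_id) o_id, ol_timestamp, ol_user
               FROM orders_log
               ORDER BY o_id, ol_timestamp DESC) as lastlog
        on lastlog.o_id = orders.o_id
```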







Re: [SQL] simple (?) join

2009-09-26 Thread justin






Oliveiros C. wrote:

  Hello, Justin, Gary.
   
  Justin, your (the second one) query is not much different from mine.

No, it's not.

  You anticipated the possibility of having orders without any matching
  entry on orders_log with your left join, something that I haven't.
  Gary, will you have records on your orders table that don't reference
  any record on your orders_log table? If so, Justin's query is the
  right one you should use.
   
  You return the full record from orders and an additional column from
  orders_log, the ol_timestamp column. I understood that Gary wanted
  the full record from orders_log, not just the timestamp column. That
  part is done by my subquery.

  I think Gary could clarify what he wants exactly. Gary? :)

Yes, Gary, please do if these don't answer your question.

   
  Also, Justin, your query design seems right to me, but maybe you
  should add this (the part in comment) to your subquery:
  SELECT MAX(ol_timestamp) /* as ol_timestamp */ , o_id FROM
  orders_log group by o_id
  because MAX(ol_timestamp) will receive the name max, not ol_timestamp,
  and the parser will probably complain that column ol_timestamp does
  not exist. Ain't I right?

Yes, you are correct; I forgot to add the AS.




Re: [SQL] simple (?) join

2009-09-26 Thread justin






David W Noon wrote:

  
A nested query

Select
   orders.*,
   (SELECT MAX(ol_timestamp) FROM orders_log where orders_log.o_id =
orders.o_id)

  
From orders

That won't give the desired results. I don't think the SQL parser will
even accept it.
  

Yes, this is a valid SQL statement; I use similar ones a great deal.
The problem with nested queries is that they can only return one value
per matching row. Performance can be another problem: the subquery has
to run for every record in the orders table, and it runs before filter
conditions are applied.

  
  
  
Still another option is using a join

Select
   orders.*, ol_timestamp
   From orders
   left join (SELECT MAX(ol_timestamp), o_id FROM orders_log group by
o_id) as JoinQuery on JoinQuery.o_id = orders.o_id

  
  
That won't give the desired results either. If you change "left" to
"inner" you will be closer though.

Both of your queries will retrieve the entire orders table with a
timestamp of some sort from the orders_log table.
  


I read the question as: Gary wanted the entire orders table with the
greatest timestamp from the logs table.

Quoting Gary:
"How can I select all from orders and the last (latest) entry from the 
orders_log?"








Re: [SQL] simple (?) join

2009-09-26 Thread justin




David W Noon wrote:

  On Sat, 26 Sep 2009 14:54:24 -0400, justin wrote about Re: [SQL] simple
(?) join:

[snip]
  
  
Quoting Gary
"How can I select all from orders and the last (latest) entry from the 
orders_log?"

  
  
In that case, a simple Cartesian product will do:

SELECT o.*, maxi.ts
FROM orders AS o,
(SELECT MAX(ol_timestamp) AS ts FROM orders_log) AS maxi;

Since the cardinality of the subquery "maxi" is 1, it will give a result
set with cardinality of the complete orders table.

I don't understand why anybody would want to do that. [De gustibus ... ]
  


Guessing here:

The answer would be to return the last time someone either viewed or
edited the order.

This is a very common audit requirement: tracking who, what, when, and
why something happened.




Re: [SQL] join help

2009-04-08 Thread Justin




Kashmir wrote:

  The only difference is: the first table stores data per 'f_rrd_id'
every 5 min, and the second table every single minute. I want to run a
query that would return, for the same 'f_rrd_id', all values from both
tables sorted by f_timestamp; of course a set would only have values
from the 5m table if the timestamp was present there too (every 5th
set only).

Being a sql-lamer, I used some query-builder help to build my query
(which served me quite well in the past for all my 'complicated'
sqls), and was suggested for f_rrd_id=444 to use something like:
SELECT
td_fetch1m_by_rrd_id.f_timestamp,
td_fetch_by_rrd_id.f_ds,
td_fetch_by_rrd_id.f_ds,
td_fetch1m_by_rrd_id.f_ds,
td_fetch1m_by_rrd_id.f_us
FROM td_fetch_by_rrd_id
RIGHT JOIN td_fetch1m_by_rrd_id ON td_fetch_by_rrd_id.f_timestamp=td_fetch1m_by_rrd_id.f_timestamp
WHERE td_fetch1m_by_rrd_id.f_rrd_id=444
ORDER BY td_fetch1m_by_rrd_id.f_timestamp;

This works quite fine and as expected in the source env (some GUI SQL
tool), but when I take it into psql, I get totally messed up results;
the values just don't make any sense...
  


The SQL is joining on a timestamp? Joining on the timestamp alone, I would expect oddball results, because several unique f_rrd_id values could have the same timestamp, especially if it's a heavy-write table.

"every 5th set only": what does this mean? What makes something the 5th set?

I normally avoid table aliasing, but I'm having a real tough time reading these names, so from here on out 1Minute = td_fetch1m_by_rrd_id and 5Minute = td_fetch_by_rrd_id.

You want to join what's in the 1Minute table to what's in the 5Minute table only if it is in the 5Minute table, and only return rows from the 1Minute table where the timestamp is in the 5Minute table. If my understanding is correct, this will work, minus any typos. To create a join condition we need a composite identity to join on, so what I did is cast f_rrd_id and f_timestamp to text and concatenate them to create a unique condition to join on.

Also, there is a typo above: f_ds is listed twice; I believe that is a mistake.

SELECT
	OneM.f_timestamp,
	FiveM.f_ds,
	FiveM.f_us,
	OneM.f_ds,
	OneM.f_us
FROM td_fetch1m_by_rrd_id OneM
left join (select f_rrd_id, f_ds, f_us, f_timestamp
		from td_fetch_by_rrd_id) FiveM
	ON (OneM.f_rrd_id::text || OneM.f_timestamp::text) = (FiveM.f_rrd_id::text || FiveM.f_timestamp::text)
ORDER BY OneM.f_timestamp;
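An alternative sketch (my own, not from the thread): joining on the two columns directly avoids the casts, the concatenation, and the false matches that concatenated text can produce in pathological cases, and it keeps the original query's filter on f_rrd_id = 444:

```sql
SELECT OneM.f_timestamp,
       FiveM.f_ds, FiveM.f_us,
       OneM.f_ds, OneM.f_us
FROM td_fetch1m_by_rrd_id OneM
LEFT JOIN td_fetch_by_rrd_id FiveM
       ON FiveM.f_rrd_id = OneM.f_rrd_id
      AND FiveM.f_timestamp = OneM.f_timestamp
WHERE OneM.f_rrd_id = 444
ORDER BY OneM.f_timestamp;
```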





[SQL] Bad (null) varchar() external representation

2001-01-09 Thread Justin Clift

Hi all,

I'm getting the following problem when trying to do a simple insert
statement...

"Bad (null) varchar() external representation"

WHY?

I'm running PostgreSQL 7.0.3 on Linux Mandrake 7.2 (using a specially
compiled version, not an RPM).

Here's the table :

foobar=# \d staff_details
      Table "staff_details"
   Attribute    |     Type     | Modifier
----------------+--------------+----------
 userid | varchar(24)  | not null
 password   | char(13) |
 name   | varchar(96)  |
 role   | smallint |
 dob| date |
 phone_one  | varchar(14)  |
 phone_two  | varchar(14)  |
 phone_three| varchar(14)  |
 address| varchar(280) |
 status | smallint |
 managers_notes | varchar(600) |
Index: staff_details_pkey
Constraints: (length(userid) < 25)
 (length("password") < 14)
 (length(name) < 97)
 (length(phone_one) < 17)
 (length(phone_two) < 17)
 (length(phone_three) < 17)
 (length(address) < 281)
 (length(managers_notes) < 601)

foobar=# insert into staff_details values ('A', NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL);
ERROR:  Bad (null) varchar() external representation
foobar=# insert into staff_details (userid, password, name, role, dob,
phone_one) values ('', 'foobarbaz1234', 'Joshua', 1,
'1970-07-01', '(03) 9867 5432');
ERROR:  Bad (null) varchar() external representation
foobar=# insert into staff_details values ('',
encrypt('foo'), 'Joshua', 1, '1970-07-01', '(03) 9867 5432', '(041) 309
2819', NULL, '1 Blankety-Blank Way\nBazzville', NULL, NULL);
ERROR:  Bad (null) varchar() external representation


etc...

I've tried everything I can think of, also exported and reloaded the
database, etc.  This is a new table with nothing in it.

This is driving me nuts.  :-(

+ Justin Clift
Database Administrator



[SQL] Possible bug? WAS :Bad (null) varchar() external representation.

2001-01-09 Thread Justin Clift

Hi,

I haven't seen a mention of a maximum number of constraints or similar
applying to a table.  If so, then could someone please point me to it...

The reason I mention this is because I've found what seems to be causing
this problem I'm experiencing with Postgres 7.03 :

CREATE TABLE "staff_details" (
"userid" character varying(24) NOT NULL,
"password" character(13) NOT NULL,
"name" character varying(96) NOT NULL,
"role" int2 NOT NULL,
"dob" date NOT NULL,
"phone_one" character varying(14) NOT NULL,
"phone_two" character varying(14),
"phone_three" character varying(14),
"address" character varying(280),
"status" int2,
"managers_notes" character varying(600),
CONSTRAINT "staff_details_uesrid" CHECK ((length(userid) < 25)),
CONSTRAINT "staff_details_password" CHECK ((length("password") <
14)),
CONSTRAINT "staff_details_name" CHECK ((length(name) < 97)),
CONSTRAINT "staff_details_dob" CHECK
(date_ge(date(("timestamp"('2001-01-08'::date) - '18 years
00:00'::"interval")), dob)),
CONSTRAINT "staff_details_phone_one" CHECK ((length(phone_one) <
17)),
CONSTRAINT "staff_details_phone_two" CHECK ((length(phone_two) <
17)),
CONSTRAINT "staff_details_phone_three" CHECK
((length(phone_three) < 17)),
CONSTRAINT "staff_details_address" CHECK ((length(address) <
281)),
CONSTRAINT "staff_details_managers_notes" CHECK
((length(managers_notes) < 601)),
PRIMARY KEY ("userid")
);

When I attempt to insert data into this table, I get the following error
:

foobar=# insert into staff_details values ('',
encrypt('foo'), 'Joshua', 1, '1970-07-01', '(03) 9867 5432', '(041) 309
2819', NULL, '1 Blankety-Blank Way\nBazzville', NULL, NULL);

Yet if I remove BOTH the "staff_details_phone_three" &
"staff_details_managers_notes" constraints it works  :

foobar=# insert into staff_details values ('',
encrypt('foo'), 'Joshua', 1, '1970-07-01', '(03) 9867 5432', '(041) 309
2819', NULL, '1 Blankety-Blank Way\nBazzville', NULL, NULL);
INSERT 27605472 1 

Removing EITHER of these constraints alone doesn't work, and neither
does removing any of the other constraints on this table.  Just these
two TOGETHER.  AND they're not even defined one after another, which
might have pointed to some formatting error.

Does anyone have an idea why this is occurring?

Regards and best wishes,

+ Justin Clift
Database Administrator


Justin Clift wrote:
> 
> Hi all,
> 
> I'm getting the following problem when trying to do a simple insert
> statement...
> 
> "Bad (null) varchar() external representation"
> 
> WHY?
> 
> I'm running PostgreSQL 7.0.3 on Linux Mandrake 7.2 (using a specially
> compiled version, not an RPM).
> 
> Here's the table :
> 
> foobar=# \d staff_details
>       Table "staff_details"
>    Attribute    |     Type     | Modifier
> ----------------+--------------+----------
>  userid | varchar(24)  | not null
>  password   | char(13) |
>  name   | varchar(96)  |
>  role   | smallint |
>  dob| date |
>  phone_one  | varchar(14)  |
>  phone_two  | varchar(14)  |
>  phone_three| varchar(14)  |
>  address| varchar(280) |
>  status | smallint |
>  managers_notes | varchar(600) |
> Index: staff_details_pkey
> Constraints: (length(userid) < 25)
>  (length("password") < 14)
>  (length(name) < 97)
>  (length(phone_one) < 17)
>  (length(phone_two) < 17)
>  (length(phone_three) < 17)
>  (length(address) < 281)
>  (length(managers_notes) < 601)
> 
> foobar=# insert into staff_details values ('A', NULL, NULL, NULL, NULL,
> NULL, NULL, NULL, NULL, NULL, NULL);
> ERROR:  Bad (null) varchar() external representation
> foobar=# insert into staff_details (userid, password, name, role, dob,
> phone_one) values ('', 'foobarbaz1234', 'Joshua', 1,
> '1970-07-01', '(03) 9867 5432');
> ERROR:  Bad (null) varchar() external representation
> foobar=# insert into staff_details values ('',
> encrypt('foo'), 'Joshua', 1, '1970-07-01', '(03) 9867 5432', '(041) 309
> 2819', NULL, '1 Blankety-Blank Way\nBazzville', NULL, NULL);
> ERROR:  Bad (null) varchar() external representation
> 
> etc...
> 
> I've tried everything I can think of, also exported and reloaded the
> database, etc.  This is a new table with nothing in it.
> 
> This is driving me nuts.  :-(
> 
> + Justin Clift
> Database Administrator



Re: [SQL] Possible bug? WAS :Bad (null) varchar() external representation.

2001-01-11 Thread Justin Clift

Hi Tom,

I think you are right.  It does seem to barf on NULLS and
length(varchar), regardless.

Thanks for your assistance.

BTW - How do things normally get added to the FAQ?  I would like to add
something about length(varchar) and NULLS not working in PostgreSQL
7.0.x

Regards and best wishes,

Justin Clift
Database Administrator

Tom Lane wrote:
> 
> Justin Clift <[EMAIL PROTECTED]> writes:
> > I haven't seen a mention of a maximum number of constraints of similar
> > applying to a table.  If so, then could someone please point me to it...
> 
> There is no such limit that I know of.
> 
> > Yet if I remove BOTH the "staff_details_phone_three" &
> > "staff_details_managers_notes" constraints it works  :
> 
> Are you absolutely certain that that's how it went?  I think the most
> likely story is just that you were hitting the length(varchar)-barfs-
> on-NULL bug, and got confused about which combinations you'd tried.
> 
> regards, tom lane



Re: [SQL] Possible bug? WAS :Bad (null) varchar() external representation.

2001-01-11 Thread Justin Clift

Hi Tom and Stephan,

Thanks for your help guys.

I'm using varchar constraint definitions now that are "CHECK ((foobar
ISNULL) OR (length(foobar) < 17))"  The short-circuiting of OR's in
7.0.3 allow this to work without error, thereby avoiding the "Bad (null)
varchar() external representation" error that I was getting before due
to inserting NULL's in length(varchar) constraint checked fields.

I've also extended the varchar columns to be the same size as the length
checking I'm doing as Tom suggested, to ensure the constraints do work.

Tom has also suggested using COALESCE instead, so I'll check this out
too.
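Sketched on a hypothetical table (names assumed, not from the thread), the two workarounds look like:

```sql
CREATE TABLE length_check_demo (
    -- short-circuiting OR: length() is never evaluated for NULLs
    phone_one varchar(17) CHECK (phone_one ISNULL OR length(phone_one) < 17),
    -- the COALESCE form Tom suggested: NULL is treated as the empty string
    phone_two varchar(17) CHECK (length(COALESCE(phone_two, '')) < 17)
);
```

Note the columns are declared varchar(17) rather than varchar(14), per Tom's advice, so the checks fire before any implicit truncation can hide oversize input.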

Regards and best wishes,

Justin Clift
Database Administrator


Tom Lane wrote:
> 
> > The reason I'm using constraints in the table is to allow the database
> > to recognise when oversize data is being fed to it and generate an
> > error, instead of silently accepting the data and truncating it.
> 
> OK, but have you actually stopped to check whether the combination gives
> the results you expect?  I believe the data will be coerced to the
> destination column type --- including any implicit truncation or padding
> --- before the constraint expressions are checked.  (I further believe
> that that's the right order of events.)
> 
> You might need to make the declared column widths one larger than what
> the constraints check for, if you want to raise an error for this.
> 
> > I'm just about to try the same constraints with the ISNULL first, incase
> > the OR's in postgreSQL are short-circuited like you mention they might
> > be.
> 
> I'd suggest
> 
> CHECK (length(COALESCE(column,'')) < n)
> 
> as a workaround for 7.0.*, if you don't want to hack up the source
> code as I mentioned.
> 
> regards, tom lane



[SQL] Re: HELP: Scarey pl/pgsql problem

2001-01-31 Thread Justin Clift

Hi all,

I must apologise as it turns out the 'culprit' wasn't really pl/pgsql.

The test box I was testing on is Mandrake Linux 7.2, which comes with
PostgreSQL 7.0.2.  Everything else has version 7.0.3 installed on it,
and I naively assumed that 7.0.3 was installed on the test box.

After installing the Postgres 7.0.3 rpms from the PostgreSQL site,
pl/pgsql is working consistently again.  It looks like the rpms for
PostgreSQL supplied with Mandrake Linux 7.2 are broken, I guess they
didn't run the supplied tests before packaging.  :-(

Regards and best wishes,

Justin Clift
Database Administrator


Justin Clift wrote:
> 
> Hi all,
> 
> I'm having trouble with what MAY BE a bug in PL/PGSQL for PG 7.0.3 on
> Linux (Mandrake Linux 7.2).
> 
> It appears pl/pgsql is munging values.  When I pass it a 'time' value,
> the value is altered without my code touching it.  This is evidenced by
> the stripped down function below :
> 
> CREATE FUNCTION which_block(time)
> RETURNS time
> AS 'DECLARE
> 
> /* Given a time, this function works out the name of the correct field
> in the reservations table for it
>  * Written by : Justin Clift
>  * Date   : 1st February 2001
>  * Version: 1.00
>  */
> hours   char(3);
> minutes char(2);
> result  char(5);
> tempres char(5);
> curnow  datetime;
> 
> BEGIN
> 
> RETURN $1;
> END;'
> LANGUAGE 'plpgsql';
> 
> foobar=# select which_block(time '12:40:00');
>  which_block
> -
>  12:39:00
> (1 row)
> 
> foobar=#
> 
> Having passed it the time value of '12:40:00', I am immediately
> returning that value and it is no longer '12:40:00'.
> 
> Being over 1 month into using PostgreSQL 7.0.3 for a particular project,
> this is scaring me as I'm now doubting the reliability of things.
> 
> Regards and best wishes,
> 
> Justin Clift
> Database Administrator



[SQL] HELP: Scarey pl/pgsql problem

2001-02-05 Thread Justin Clift

Hi all,

I'm having trouble with what MAY BE a bug in PL/PGSQL for PG 7.0.3 on
Linux (Mandrake Linux 7.2).

It appears pl/pgsql is munging values.  When I pass it a 'time' value,
the value is altered without my code touching it.  This is evidenced by
the stripped down function below :

CREATE FUNCTION which_block(time)
RETURNS time
AS 'DECLARE

/* Given a time, this function works out the name of the correct field
in the reservations table for it
 * Written by : Justin Clift
 * Date   : 1st February 2001
 * Version: 1.00
 */
hours   char(3);
minutes char(2);
result  char(5);
tempres char(5);
curnow  datetime;

BEGIN

RETURN $1;
END;'
LANGUAGE 'plpgsql';


foobar=# select which_block(time '12:40:00');
 which_block
-
 12:39:00
(1 row)
 
foobar=# 

Having passed it the time value of '12:40:00', I am immediately
returning that value and it is no longer '12:40:00'.

Being over 1 month into using PostgreSQL 7.0.3 for a particular project,
this is scaring me as I'm now doubting the reliability of things.

Regards and best wishes,

Justin Clift
Database Administrator



[SQL] Suggestions on finetuning this search?

2001-02-16 Thread Justin Long

I would welcome any suggestions for fine-tuning this search to run faster.


Here is the SQL. Basically what we're allowing people to do is to specify
words to search our article index.

THE TABLES:
knowledge = the knowledge base of articles
kb_categories = the category that each article is assigned to
kbwords = an index of every word in the knowledge base (kbid, wordid)
wordindex = an index of every word in the knowledge base (wordid, word)

AMPLIFICATION:
kbwords = a list of all the words that appear in a specific article (might
return a list of 5 articles where the word 'monk' appears)
wordindex = a unique list of all the words that appear in all the articles
(would only return a single entry for the word 'monk')

THE SQL:
select * from knowledge k, kb_categories c , kbwords w0 , kbwords w1 WHERE
k.catid=c.catid AND ((k.kbid=w0.kbid and w0.wordid=(SELECT wordid from
wordindex where word='BUDDIST')) AND  (k.kbid=w1.kbid and w1.wordid=(SELECT
wordid from wordindex where word='MONK'))) ORDER BY k.regionid , k.ctryid ,
k.catid , k.title ;

WHAT "EXPLAIN" RETURNS:
Sort  (cost=2796577.40..2796577.40 rows=2878549 width=332)
  InitPlan
->  Index Scan using wordindex_word on wordindex  (cost=0.00..247.71
rows=376 width=4)
->  Index Scan using wordindex_word on wordindex  (cost=0.00..247.71
rows=376 width=4)
  ->  Merge Join  (cost=21187.45..21993.59 rows=2878549 width=332)
->  Merge Join  (cost=11060.50..11140.94 rows=52582 width=324)
  ->  Sort  (cost=933.56..933.56 rows=960 width=316)
->  Hash Join  (cost=1.43..885.97 rows=960 width=316)
  ->  Seq Scan on knowledge k  (cost=0.00..559.25
rows=2825 width=284)
  ->  Hash  (cost=1.34..1.34 rows=34 width=32)
->  Seq Scan on kb_categories c
(cost=0.00..1.34 rows=34 width=32)
  ->  Sort  (cost=10126.95..10126.95 rows=5474 width=8)
->  Seq Scan on kbwords w0  (cost=0.00..9787.02
rows=5474 width=8)
->  Sort  (cost=10126.95..10126.95 rows=5474 width=8)
  ->  Seq Scan on kbwords w1  (cost=0.00..9787.02 rows=5474
width=8)

Please e-mail suggestions to [EMAIL PROTECTED] Thanks!

To see the code in action, visit
http://www.strategicnetwork.org/index.asp?loc=kb
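One rewrite worth trying (a sketch under the schema described above, not a tested fix): fold the two scalar subselects into the main FROM list, so the planner can start from the two small wordindex lookups instead of materializing and sorting kbwords twice. Implicit-join syntax is used to stay compatible with old PostgreSQL releases:

```sql
SELECT k.*, c.category
FROM knowledge k, kb_categories c,
     kbwords w0, wordindex i0,
     kbwords w1, wordindex i1
WHERE k.catid = c.catid
  AND i0.word = 'BUDDIST' AND w0.wordid = i0.wordid AND w0.kbid = k.kbid
  AND i1.word = 'MONK'    AND w1.wordid = i1.wordid AND w1.kbid = k.kbid
ORDER BY k.regionid, k.ctryid, k.catid, k.title;
```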




[SQL] Re: Help needed -> ERROR: record arow has no field description

2001-02-28 Thread Justin Clift

Hi all,

Solved my own problem.  I was just misreading the error message.  It was
actually TELLING me the problem (I was referencing a table with no field
called "description" in the select, but trying to use it in the loop).

Sorry for disturbing people.

As an aside, this has motivated me to add a section on error messages
into techdocs.postgresql.org (very messy at the moment, I'll fix it
tonight).

Regards and best wishes,

+ Justin Clift
Database Administrator

Justin Clift wrote:
> 
> Hi all,
> 
> I'm getting this error, which to me makes no sense.  Running PostgreSQL
> 7.0.3 on Mandrake 7.2 (compiled from source, not the rpms).
> 
> The code is in a pl/pgsql function I am writing and I can't see why it's
> complaining.
> 
> This is the appropriate part of the code :
> 
> 
> 
> arow record;
> 
> 
> 
> BEGIN
> 
> 
> 
> FOR arow IN select transaction_payments.cashback from
> transaction_payments, payment_types where
>  transaction_payments.payment_type = payment_types.idnum LOOP
> textbuf := text(arow.cashback);
> textbuf := textcat(textbuf, ''  '');
> insert into receipts (receipt_num, data) values (trans_num,
> textbuf);
> END LOOP;
> 
> 
> 
> From the PostgreSQL log file (debug is set to 2), I am getting :
> 
> query: SELECT  transaction_payments.cashback from transaction_payments,
> payment_types where transaction_payments.payment_type =
> payment_types.idnum
> ERROR:  record arow has no field description
> DEBUG:  Last error occured while executing PL/pgSQL function
> create_receiptp3
> DEBUG:  line 105 at assignment
> AbortCurrentTransaction
> 
> The "arow" record type variable is used quite a lot in previous FOR
> loops in this function.  This is the first FOR loop in the function that
> uses more than one table though.  I suspect this may have something to
> do with it.
> 
> As far as I can tell, this SQL statement is valid.  Does anyone have any
> ideas how to get this to work.  I have tried several variations, and the
> mailing lists don't even have a reference for this error message.
> 
> Regards and best wishes,
> 
> Justin Clift
> Database Administrator



[SQL] Help needed -> ERROR: record arow has no field description

2001-03-02 Thread Justin Clift

Hi all,

I'm getting this error, which to me makes no sense.  Running PostgreSQL
7.0.3 on Mandrake 7.2 (compiled from source, not the rpms).

The code is in a pl/pgsql function I am writing and I can't see why it's
complaining.

This is the appropriate part of the code :



arow record;



BEGIN



FOR arow IN select transaction_payments.cashback from
transaction_payments, payment_types where
 transaction_payments.payment_type = payment_types.idnum LOOP
textbuf := text(arow.cashback);
textbuf := textcat(textbuf, ''  '');
insert into receipts (receipt_num, data) values (trans_num,
textbuf);
END LOOP;



From the PostgreSQL log file (debug is set to 2), I am getting : 

query: SELECT  transaction_payments.cashback from transaction_payments,
payment_types where transaction_payments.payment_type =
payment_types.idnum
ERROR:  record arow has no field description
DEBUG:  Last error occured while executing PL/pgSQL function
create_receiptp3
DEBUG:  line 105 at assignment
AbortCurrentTransaction

The "arow" record type variable is used quite a lot in previous FOR
loops in this function.  This is the first FOR loop in the function that
uses more than one table though.  I suspect this may have something to
do with it.

As far as I can tell, this SQL statement is valid.  Does anyone have any
ideas how to get this to work?  I have tried several variations, and the
mailing list archives don't even have a reference for this error message.

Regards and best wishes,

Justin Clift
Database Administrator




[SQL] Optimizing Query

2001-03-05 Thread Justin Long

Any suggestions welcome!

Here is my query:

select k.*, c.category from knowledge k, kb_categories c , kbwords w0 ,
kbwords w1 WHERE k.catid=c.catid AND ((k.kbid=w0.kbid and
w0.wordid=42743) AND (k.kbid=w1.kbid and w1.wordid=85369)) ORDER BY
k.kbid DESC LIMIT 25;

Now for the details:
knowledge k = 3,150 records
kbwords = a 2-column table (kbid, wordid), 825,748 records

Each word in the knowledge base is stored in a table called wordindex,
which has 50,000 records or so. The system first explodes the query
string and pulls the word numbers from this table, which is where we get
42743 and 85369 above ("ASIA" and "CHILDREN" respectively). The idea is
then to pull all the articles in the knowledge base which contain both
of these words.

Here is the EXPLAIN for the query:

NOTICE:  QUERY PLAN:

Nested Loop  (cost=0.00..527690060.67 rows=2878549 width=308)
  ->  Nested Loop  (cost=0.00..9472443.40 rows=52582 width=304)
        ->  Nested Loop  (cost=0.00..6278.63 rows=960 width=300)
              ->  Index Scan Backward using knowledge_kbid_key on knowledge k  (cost=0.00..1292.51 rows=2825 width=284)
              ->  Seq Scan on kb_categories c  (cost=0.00..1.34 rows=34 width=16)
        ->  Seq Scan on kbwords w0  (cost=0.00..9787.02 rows=5474 width=4)
  ->  Seq Scan on kbwords w1  (cost=0.00..9787.02 rows=5474 width=4)

This takes quite a while to return results... prohibitively long. There
are indexes on k.catid, c.catid, k.kbid, w0.kbid, w0.wordid. Any
suggestions for further optimization would be very welcome. We get about
3,000 searches on our database daily...
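One thing worth checking (an assumption on my part; the post doesn't say which composite indexes exist): the plan above seq-scans kbwords twice, which a combined index on (wordid, kbid) would usually avoid, since each lookup filters on wordid first and then joins on kbid. A sketch:

```sql
-- Hypothetical index name; adjust to local naming conventions.
CREATE INDEX kbwords_wordid_kbid ON kbwords (wordid, kbid);
-- Refresh planner statistics so the new index is considered.
VACUUM ANALYZE kbwords;
```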

Blessings,
Justin Long



________________________________________________________________
Justin Long                         Network for Strategic Missions
[EMAIL PROTECTED]                   1732 South Park Court
http://www.strategicnetwork.org     Chesapeake, VA 23320, USA
Reality Check e-zine: [EMAIL PROTECTED]

Law: Never retreat. Never surrender. Never cut a deal with a dragon.
Corollary: No armor? Unclean life? Then do not mess in the affairs
of dragons, for you are crunchy and taste good with ketchup.





Re: [SQL] Optimizing Query

2001-03-05 Thread Justin Long

Does that mean that if you have 3 indexes on a table and you cluster one,
it deletes the other 2?

At 04:45 PM 3/5/2001 -0500, you wrote:
Yes.

> On Mon, Mar 05, 2001 at 04:07:57PM -0500, Bruce Momjian allegedly wrote:
> > Have you tried VACUUM ANALYZE and CLUSTER?
>
> I assume CLUSTER still drops all indexes except the one you're clustering
> on?
>
> Mathijs
> --
> It's not that perl programmers are idiots, it's that the language
> rewards idiotic behavior in a way that no other language or tool has
> ever done.
>                                         Erik Naggum


--
  Bruce Momjian                        |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                    |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026





Re: [SQL] Optimizing Query

2001-03-05 Thread Justin Long

Yes, it drops indexes, much to my chagrin, as I just realized ...
including SERIALs...

Justin

At 04:45 PM 3/5/2001 -0500, you wrote:
Yes.






Re: [SQL] Optimizing Query

2001-03-05 Thread Justin Long

Ok, now I have another question... it doesn't seem to be accessing the
index.

explain select k.kbid,k.title from knowledge k , kbwords w0 , kbwords w1
WHERE ((k.kbid=w0.kbid and w0.wordid=42743) AND (k.kbid=w1.kbid and
w1.wordid=85369))

NOTICE:  QUERY PLAN:

Merge Join  (cost=32339.30..35496.97 rows=19262538 width=24)
  ->  Merge Join  (cost=16530.24..16668.77 rows=233274 width=20)
        ->  Sort  (cost=15809.06..15809.06 rows=8257 width=4)
              ->  Seq Scan on kbwords w1  (cost=0.00..15271.85 rows=8257 width=4)
        ->  Sort  (cost=721.18..721.18 rows=2825 width=16)
              ->  Seq Scan on knowledge k  (cost=0.00..559.25 rows=2825 width=16)
  ->  Sort  (cost=15809.06..15809.06 rows=8257 width=4)
        ->  Seq Scan on kbwords w0  (cost=0.00..15271.85 rows=8257 width=4)

Note the sequential scans... there is a wordindex where
w0.wordid=42743... why isn't it doing an indexscan? wouldn't that be more
efficient?

Justin


At 04:45 PM 3/5/2001 -0500, you wrote:
Yes.







Re: [SQL] Optimizing Query

2001-03-06 Thread Justin Long

Wow. I can't believe the difference. It didn't take too long. I'll set up
a script in my etc/cron.weekly to run it... would there be any benefit to
doing a vacuum analyze nightly?

Justin Long

At 11:10 PM 3/5/2001 -0500, you wrote:
Justin Long <[EMAIL PROTECTED]> writes:
> Ok, now I have another question... it doesn't seem to be accessing
the index.

> explain select k.kbid,k.title from knowledge k , kbwords w0 , kbwords w1
> WHERE ((k.kbid=w0.kbid and w0.wordid=42743) AND (k.kbid=w1.kbid and
> w1.wordid=85369))

> NOTICE:  QUERY PLAN:

> Merge Join  (cost=32339.30..35496.97 rows=19262538 width=24)
>   ->  Merge Join  (cost=16530.24..16668.77 rows=233274 width=20)
>         ->  Sort  (cost=15809.06..15809.06 rows=8257 width=4)
>               ->  Seq Scan on kbwords w1  (cost=0.00..15271.85 rows=8257 width=4)
>         ->  Sort  (cost=721.18..721.18 rows=2825 width=16)
>               ->  Seq Scan on knowledge k  (cost=0.00..559.25 rows=2825 width=16)
>   ->  Sort  (cost=15809.06..15809.06 rows=8257 width=4)
>         ->  Seq Scan on kbwords w0  (cost=0.00..15271.85 rows=8257 width=4)

> Note the sequential scans... there is a wordindex where w0.wordid=42743...
> why isn't it doing an indexscan? wouldn't that be more efficient?

It probably thinks not, because the estimated number of hits (8257) is
so high.  That estimate is currently driven by the frequency of the most
common value in the column (mainly because that's the only stat we have
:-().  I am guessing that you have a few very common words, which are
skewing the stats for kbwords and causing it not to pick an indexscan.

Does your setup have a notion of "stop words" that shouldn't be indexed,
like "a", "an", "the", etc?  Perhaps you need to add such a feature, or
throw in a few more stopwords if you already have 'em.

regards,
tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



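Tom Lane's stop-word suggestion in the reply above can be sketched generically. A minimal Python illustration (the stop-word list and sample article text are hypothetical, not from the original knowledge base):

```python
# Inverted word index that skips common "stop words", so a handful of very
# frequent words cannot skew the per-word row counts.
STOP_WORDS = {"a", "an", "the", "and", "of", "to", "in"}

def index_article(kbid, text, index):
    """Record each non-stop word of an article in the inverted index."""
    for word in text.lower().split():
        if word not in STOP_WORDS:
            index.setdefault(word, set()).add(kbid)
    return index

index = {}
index_article(1, "the history of missions in the network", index)
index_article(2, "missions and strategy", index)

# Articles containing both search words: the intersection of the two sets.
hits = index["missions"] & index.get("strategy", set())
```

With the stop words filtered out, a word like "the" never enters the index at all, so it cannot dominate the statistics.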



Re: [SQL] quotes in pl/pgsql

2001-03-07 Thread Justin Clift

Hi Najm,

Is this what you mean?

CREATE FUNCTION foobar(int4) returns int4
  as 'DECLARE

  textbuf   varchar(120);

BEGIN

textbuf := ''Example Text '';

insert into sometable (something) values (textbuf);

RETURN 0;
  END;'
LANGUAGE 'plpgsql';
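As an aside for readers on newer versions: PostgreSQL 8.0 and later also accept dollar quoting, which avoids doubling the quotes entirely. A sketch of the same function in that style (sometable is assumed to exist, as in the example above):

CREATE FUNCTION foobar(int4) RETURNS int4 AS $$
DECLARE
    textbuf varchar(120);
BEGIN
    textbuf := 'Example Text ';
    INSERT INTO sometable (something) VALUES (textbuf);
    RETURN 0;
END;
$$ LANGUAGE plpgsql;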


Najm Hashmi wrote:
> 
> Hi all, I just want to know how to put quotes around a string. Is there a
> function to do so?
> If not how can I escape  a single quote.
> Thank you in advance.
> 
>   
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]




Re: [SQL] Function Vanished

2001-03-26 Thread Justin Clift

Hi, would it work to do a pg_dump -d or a pg_dumpall, then look through
the dumped sql file?

I do that to retrieve PL/pgSQL functions from the database when I've
accidentally wiped or modified the source (not often, but it happens).
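The look-through-the-dump approach can be sketched in a few lines. A minimal Python illustration (the dump text here is a made-up fragment, not real pg_dump output):

```python
import re

# A made-up fragment standing in for pg_dump output; only the shape matters.
dump = """\
CREATE FUNCTION foobar(int4) RETURNS int4
    AS 'BEGIN RETURN $1 + 1; END;'
    LANGUAGE 'plpgsql';

CREATE TABLE sometable (something text);
"""

def find_functions(dump_sql):
    """Return every CREATE FUNCTION statement found in a text dump."""
    pattern = re.compile(r"CREATE FUNCTION.*?LANGUAGE\s+'\w+';", re.S)
    return pattern.findall(dump_sql)

functions = find_functions(dump)
```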

Regards and best wishes,

Justin Clift

Tom Lane wrote:
> 
> "Josh Berkus" <[EMAIL PROTECTED]> writes:
> >   I'm using 7.1 Beta 3, which has been pretty stable up until now.  This
> > morning, I went to export a function I spent 5 hours debugging on on
> > Friday to text so that I could have a copy of the final version.  To my
> > horror, the function was GONE from the system catalog (pg_proc).
> 
> Ick.  Were you maybe working on it inside a transaction that you forgot
> to commit?
> 
> > I have
> > not run VACUUM on the database anytime recently ... is there any
> > possibility that my function is still present as a "deleted" row?
> 
> Should still be there in the table, if you haven't vacuumed.  Getting
> it out again is another story though.  If it was a small enough function,
> good ol' "strings" would do to extract the function body, which is
> probably all that you really need.  But if it's more than a couple K
> then the text will be compressed and difficult to spot or extract.
> 
> regards, tom lane
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://www.postgresql.org/search.mpl




Re: [SQL] Range of Serial values

2001-04-16 Thread Justin Clift

Hi,

I believe you could also create the sequence, then update it with
setval('seqname', 1000);

Regards and best wishes,

Justin Clift

"Poul L. Christiansen" wrote:
> 
> Yes, there is.
> 
> When you create a serial column a sequence is created, which does the
> counting.
> 
> You can create a serial column, drop the associated sequence and create
> a new one with the command:
> "CREATE SEQUENCE seqname START 1000".
> 
> See also "\h CREATE SEQUENCE".
> 
> HTH,
> Poul L. Christansen
> 
> cbell wrote:
> >
> > Hello everyone,
> >
> > when creating a serial column, is there a way to specify which number it
> > will start counting from?  For example, if you wanted all Serial ID's to
> > start at 1000 and count up from there, with no numbers below that.
> >
> > Thanks,
> > Chris.
> >
> > ---(end of broadcast)---
> > TIP 5: Have you checked our extensive FAQ?
> >
> > http://www.postgresql.org/users-lounge/docs/faq.html
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

-- 
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
 - Indira Gandhi




Re: [SQL] any proper benchmark scripts?

2001-04-19 Thread Justin Clift

Hi Clayton,

Was it opening a new connection to the database every time, or did it
open one connection each time and pump multiple queries through it?

It would be a good thing to develop your script and benchmark this both
ways.  It could become a useful tool for sizing both of these databases.

Out of curiosity do you have access to other databases such as
Interbase, Oracle, DB2, Informix, and so forth?  The more it can connect
to, the better people will be able to understand each one's strengths
and weaknesses, in terms of Perl's DB access.
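The connection-reuse question above can be benchmarked generically. A minimal Python sketch using the stdlib's sqlite3 as a stand-in for the databases in the thread — the timings will differ wildly from PostgreSQL or MySQL, so this only shows the methodology, not real numbers:

```python
import os
import sqlite3
import tempfile
import time

def bench(n, reuse):
    """Time n SELECTs, either reusing one connection or reconnecting each time."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    setup = sqlite3.connect(path)
    setup.execute("CREATE TABLE t (x INTEGER)")
    setup.execute("INSERT INTO t VALUES (1)")
    setup.commit()
    setup.close()

    start = time.perf_counter()
    if reuse:
        conn = sqlite3.connect(path)
        for _ in range(n):
            conn.execute("SELECT x FROM t").fetchone()
        conn.close()
    else:
        for _ in range(n):
            conn = sqlite3.connect(path)
            conn.execute("SELECT x FROM t").fetchone()
            conn.close()
    elapsed = time.perf_counter() - start
    os.unlink(path)
    return elapsed

reused = bench(200, reuse=True)
reconnected = bench(200, reuse=False)
```

Running both variants against the same data isolates the per-connection setup cost, which is exactly the difference between a reconnect-per-query CGI model and a persistent mod_perl handle.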

Regards and best wishes,

Justin Clift

Clayton Cottingham aka drfrog wrote:
> 
> hello
> 
> on the modperl list a good thread was happening called
> 'fast db access'
> find attached scripts used to do this
> 
> here are my results:
> 
> [drfrog]$ perl fast_db.pl
> postgres
> 16 wallclock secs ( 0.05 usr +  0.00 sys =  0.05 CPU) @ 400.00/s (n=20)
> mysql
>  3 wallclock secs ( 0.07 usr +  0.00 sys =  0.07 CPU) @ 285.71/s (n=20)
> postgres
> 17 wallclock secs ( 0.06 usr +  0.00 sys =  0.06 CPU) @ 333.33/s (n=20)
> mysql
>  3 wallclock secs ( 0.01 usr +  0.01 sys =  0.02 CPU) @ 1000.00/s (n=20)
> 
> correct me if im wrong but if fast_db.pl is
> working right
> first set is insert
> second set is select
> 
> comp stats
> 
> running stock rpms from mandrake 7.2 for both
> postgresql and mysql
>  3.23.23-beta of mysql and
> 7.02 of postgresql
> 
> [drfrog@nomad desktop]$ uname -a
> Linux nomad.localdomain 2.2.18 #2 Tue Apr 17 22:55:04 PDT 2001 i686 unknown
> 
> [drfrog]$ cat /proc/meminfo
> total:used:free:  shared: buffers:  cached:
> Mem:  257511424 170409984 87101440 24219648 96067584 44507136
> Swap: 2549432320 254943232
> MemTotal:251476 kB
> MemFree:  85060 kB
> MemShared:23652 kB
> Buffers:  93816 kB
> Cached:   43464 kB
> SwapTotal:   248968 kB
> SwapFree:248968 kB
> [drfrog]$ cat /proc/cpuinfo
> processor   : 0
> vendor_id   : AuthenticAMD
> cpu family  : 6
> model   : 3
> model name  : AMD Duron(tm) Processor
> stepping: 1
> cpu MHz : 697.535
> cache size  : 64 KB
> fdiv_bug: no
> hlt_bug : no
> sep_bug : no
> f00f_bug: no
> coma_bug: no
> fpu : yes
> fpu_exception   : yes
> cpuid level : 1
> wp  : yes
> flags   : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat
> pse36 psn mmxext mmx fxsr 3dnowext 3dnow
> bogomips: 1392.64
> 
> i will recomp both the newest postgresql and  mysql
> 
> not using any optimizing techs at all i'll post the
> 
> config scripts i use
> 
> --
> back in the day
> we didn't have no
> old school
> -dr. frog
> http://www.hyperbomb.com
> it sells itself
> 
>   
>  Name: fast_db.pl
>fast_db.plType: Perl Program (application/x-perl)
>  Encoding: base64
> 
> Name: benchmark.sql
>benchmark.sqlType: application/x-unknown-content-type-sql_auto_file
> Encoding: base64
> 
>   
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
> http://www.postgresql.org/users-lounge/docs/faq.html




Re: [SQL] error message...

2001-05-15 Thread Justin Clift

Hi Sven,

There is a startup option, "-N", which sets how many clients are allowed
to be connected simultaneously.  The default is 32 in the source code; not
sure if SuSE has changed it.

You can also alter the default maximum before compiling it, but I get
the feeling you've installed pre-built packages.

Where does SuSE put its startup scripts?  In /etc/rc.d/init.d?  If so,
there should be something there relating to PostgreSQL, and it's this
you should modify.

If it starts PostgreSQL with "pg_ctl" then you'll need to pass the
option -o '-N <number>'.  If it starts it with "postmaster", then use
-N <number> directly, without the -o.

The man pages for pg_ctl and postmaster should be of some benefit also.

Regards and best wishes,

Justin Clift

"S.E.Franke" wrote:
> 
> Hi I have Postgres 7.0.3/6 on a Suse Professional 7.1 (kernel 2.4.0)
> machine.
> 
> The database is used very often and I see in the logfile the error
> message: Sorry, too many clients already
> 
> Can I set the number of 'active' clients? And where can I set this? And
> How?
> 
> Thanx in advance!
> 
> Sven Franke
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])




Re: [SQL] execute client application from PL/pgSql

2001-05-15 Thread Justin Clift

Hi,

It's exact URL is :

http://www.greatbridge.org/project/phppgadmin/projdisplay.php

For PostgreSQL related stuff like this, you can look them up at :

http://techdocs.postgresql.org/oresources.html

I still have to add a lot of the GreatBridge.org projects to it, as
they're expanding quite nicely.

:-)

Regards and best wishes,

Justin Clift

Jeff MacDonald wrote:
> 
> hi,
> 
> phpPGAdmin is a web based php driven postgresql
> admin tool. not sure of the exact url, try
> google :)
> 
> it has a pg_dump option in it.
> 
> jeff
> 
> On Sat, 12 May 2001, datactrl wrote:
> 
> > Date: Sat, 12 May 2001 10:23:39 +1000
> > From: datactrl <[EMAIL PROTECTED]>
> > To: [EMAIL PROTECTED]
> > Subject: Re: [SQL] execute client application from PL/pgSql
> >
> > Thank You Jeff,
> > What is phpPgAdmin and where can get it?
> >
> > Jack
> >
> > - Original Message -
> > From: "Jeff MacDonald" <[EMAIL PROTECTED]>
> > To: "Jack" <[EMAIL PROTECTED]>
> > Cc: <[EMAIL PROTECTED]>
> > Sent: Saturday, May 12, 2001 2:28 AM
> > Subject: Re: [SQL] execute client application from PL/pgSql
> >
> >
> > > you could hack the pg_dump bit out of phpPgAdmin
> > > i think the license permits it.
> > >
> > > just my 2 cents.
> > >
> > > jeff
> > >
> > > On Wed, 9 May 2001, Jack wrote:
> >
> >
> >
> > ---(end of broadcast)---
> > TIP 4: Don't 'kill -9' the postmaster
> >
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to [EMAIL PROTECTED] so that your
> message can get through to the mailing list cleanly




Re: [SQL] system time

2001-05-15 Thread Justin Clift

select now();

???

+ Justin

Seema Noor wrote:
> 
> is there any function from which i can get system time?
> 
> 
> Do You Yahoo!?
> Get your free @yahoo.co.uk address at http://mail.yahoo.co.uk
> or your free @yahoo.ie address at http://mail.yahoo.ie
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]




Re: [SQL] has anyone tried running in MAC OS X

2001-05-18 Thread Justin Clift

Hi,

PostgreSQL 7.1.x does work on MacOS X; there's even a MacOS X
installation guide at :

http://techdocs.postgresql.org/installguides.php#macosx

Hope that's helpful.  It's mainly focused on Apache + PHP + PostgreSQL
7.1 on MacOS X, although you should be able to make use of it.

If you need further assistance, feel free to ask.

:-)

Regards and best wishes,

Justin Clift


Tom Lane wrote:
> 
> "postgresql" <[EMAIL PROTECTED]> writes:
> > I guess the subject line says it... has anyone tried running
> > PostgreSQL in MAC OS X.
> 
> Some folks at Apple did.
> 
> Postgres 7.1 is alleged to build out-of-the-box on OSX.
> (But I haven't tried it myself.)  Don't bother trying with
> earlier releases.
> 
> regards, tom lane
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://www.postgresql.org/search.mpl




[SQL] Re: [GENERAL] Re: Mirroring the database?

2001-08-14 Thread Justin Clift

There's also PostgreSQL Replicator (which I haven't gotten around to
trying) :

http://pgreplicator.sourceforge.net

:-)

Regards and best wishes,

Justin Clift


Allan Engelhardt wrote:
> 
> Ehhh, use dual-ported RAID disks?  (Well, tri-port in your case, but maybe A and B
> machines are enough.  You still have a (small) single point of failure, but you
> probably have that anyhow (network switch?).  You'll need some way of bringing
> PostgreSQL up on B when A dies, but that could be a simple script.  Reconnect would
> of course be manual from the point of the client.)
> 
> Or buy Oracle.
> 
> Replication is at the top of the TODO list:
> http://www.uk.postgresql.org/docs/todo.html
> 
> Allan.
> 
> PS:
> Maybe SQLB does some of what you want?  http://sourceforge.net/projects/sqlb/  [The
> documentation is a little opaque... I'm not quite sure what it does, exactly.  Anybody
> want to comment?]
> 
> Raymond Chui wrote:
> 
> > Does the latest PostgreSQL 7.1.2 support database mirroring?
> >
> > I have machine A, B and C, they all have the same database and tables.
> > Machine A is the operational machine, machine B and C are backup.
> >
> > If users do INSERT, UPDATE and DELETE in machine A, I want have the
> > same SQL statements in machine B and C.
> >
> > 1. I can do pg_dump or COPY every hour.
> > 2. I can use PerlDBI or JDBC open multiple connections for each SQL
> > statement.
> > 3. But I prefer if the system auto mirroring the database, then I can do
> > nothing.
> > All I need is set the auto mirroring configure.
> >
> > Please tell me how to do in 3. above. Thank you in advance!
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster




[SQL] select for update

2002-11-26 Thread Justin Georgeson
I'm pretty new to databases in general, and would like to find a spiffy 
way to do something. I want to use two columns from one table to 
populate three columns in two other tables. Assuming t1 is the table I 
want to take the values from, here is the structure of what I want to 
insert into t2 and t3.

t2.id = t1.id
t2.groupname = t1.username
t2.owner = t1.username

t3.id = 
t3.groupid = t1.id
t3.username = t1.username
t3.writeperms = 31
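A hedged sketch of one way to do this with INSERT ... SELECT. The source of t3.id is left blank in the mail above, so the sequence name t3_id_seq here is an assumption:

INSERT INTO t2 (id, groupname, owner)
SELECT id, username, username FROM t1;

INSERT INTO t3 (id, groupid, username, writeperms)
SELECT nextval('t3_id_seq'), id, username, 31 FROM t1;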

PS - I'm not subscribed to the list, so please CC my email with responses.

--
; Justin Georgeson
; http://www.lopht.net
; mailto:[EMAIL PROTECTED]
; "Free the mallocs, delete the news"




Re: [SQL] join on next row

2006-06-21 Thread Justin Lintz
what about putting a limit of 1 on your select statement, so you will just
get the first event greater than the initial date for the employee?

On 6/18/06, Sim Zacks <[EMAIL PROTECTED]> wrote:

I am having brain freeze right now and was hoping someone could help me
with a (fairly) simple query. I need to join on the next row in a similar
table with specific criteria.

I have a table with events per employee. I need to have a query that gives,
per employee, each event and the event after it if it happened on the same
day.

The Events table structure is:
EventID
Employee
EventDate
EventTime
EventType

I want my query resultset to be
Employee, EventDate(1), EventTime(1), EventType(1), EventTime(2), EventType(2)
where Event(2) is the first event of the employee that took place after
the other event.

Example
EventID  Employee  EventDate   EventTime  EventType
1        John      6/15/2006   7:00       A
2        Frank     6/15/2006   7:15       B
3        Frank     6/15/2006   7:17       C
4        John      6/15/2006   7:20       C
5        Frank     6/15/2006   7:25       D
6        John      6/16/2006   7:00       A
7        John      6/16/2006   8:30       R

Expected Results
John, 6/15/2006, 7:00, A, 7:20, C
Frank, 6/15/2006, 7:15, B, 7:17, C
Frank, 6/15/2006, 7:17, C, 7:25, D
John, 6/16/2006, 7:00, A, 8:30, R

To get this result set it would have to be an inner join on employee and
date where the second event time is greater than the first. But I don't
want all of the records with a greater time, just the first event after.

Thank You
Sim

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
   http://www.postgresql.org/docs/faq

--
- Justin
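The LIMIT 1 suggestion amounts to picking, for each event, the earliest later event of the same employee on the same day. A minimal Python sketch of that pairing logic, using Sim's sample data:

```python
# Sim's sample events: (EventID, Employee, EventDate, EventTime, EventType).
events = [
    (1, "John",  "6/15/2006", "7:00", "A"),
    (2, "Frank", "6/15/2006", "7:15", "B"),
    (3, "Frank", "6/15/2006", "7:17", "C"),
    (4, "John",  "6/15/2006", "7:20", "C"),
    (5, "Frank", "6/15/2006", "7:25", "D"),
    (6, "John",  "6/16/2006", "7:00", "A"),
    (7, "John",  "6/16/2006", "8:30", "R"),
]

def minutes(t):
    """Convert 'H:MM' to minutes past midnight for comparison."""
    h, m = t.split(":")
    return int(h) * 60 + int(m)

def pair_events(events):
    """For each event, attach the first later event of the same employee/day."""
    rows = []
    for _eid, emp, day, tm, typ in events:
        later = [e for e in events
                 if e[1] == emp and e[2] == day and minutes(e[3]) > minutes(tm)]
        if later:  # the LIMIT 1 step: keep only the earliest later event
            nxt = min(later, key=lambda e: minutes(e[3]))
            rows.append((emp, day, tm, typ, nxt[3], nxt[4]))
    return rows
```

In SQL the same "earliest later event" step is the correlated subquery with ORDER BY EventTime LIMIT 1.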


Re: [SQL] plpgsql loop question

2010-02-10 Thread Justin Graf
On 2/10/2010 11:29 AM, Andrea Visinoni wrote:
> hi,
> i have a table called "zones": idzone, zone_name
> and several tables called zonename_records (same structure), where 
> zonename is one of the zone_name in the "zones" table.
> What i want to do is a function that unions all of these tables
> dynamically based on the "zones" table; this is what i've done so far:
>
> CREATE OR REPLACE FUNCTION get_all_records()
>   RETURNS SETOF record AS
> $BODY$DECLARE
> zones record;
> recs record;
> BEGIN
> for zones in select lower(zone_name) as n from zones loop
> for recs in select * from quote_ident(zones.n || '_records') loop
> return next recs;
> end loop;
> end loop;
> RETURN;
> END;
> $BODY$
>   LANGUAGE 'plpgsql' VOLATILE
>   COST 100;
>
> but i get this error!
>
> ERROR:  wrong record type supplied in RETURN NEXT
> CONTEXT:  PL/pgSQL function "get_all_records" line 9 at RETURN NEXT
>
> Andrea
>

Pg will not automatically build the output columns from the generic type
record.  The function needs to describe what the output is going to look like.

The function can inherit the layout from a table, or you can create a new
composite type; otherwise PG does not know what the data looks like when it
builds the result set.
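One hedged way to make such a function work on PostgreSQL 8.4 or later: declare the output columns up front, and run the dynamic table name through EXECUTE (a plain SELECT cannot treat quote_ident() as a table). The column list (id int, payload text) is an assumption; the real zonename_records layout is not shown in the original post:

CREATE OR REPLACE FUNCTION get_all_records()
RETURNS TABLE (id int, payload text) AS $BODY$
DECLARE
    z record;
BEGIN
    FOR z IN SELECT lower(zone_name) AS n FROM zones LOOP
        RETURN QUERY EXECUTE
            'SELECT id, payload FROM ' || quote_ident(z.n || '_records');
    END LOOP;
END;
$BODY$ LANGUAGE plpgsql;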


All legitimate Magwerks Corporation quotations are sent in a .PDF file 
attachment with a unique ID number generated by our proprietary quotation 
system. Quotations received via any other form of communication will not be 
honored.

CONFIDENTIALITY NOTICE: This e-mail, including attachments, may contain legally 
privileged, confidential or other information proprietary to Magwerks 
Corporation and is intended solely for the use of the individual to whom it 
addresses. If the reader of this e-mail is not the intended recipient or 
authorized agent, the reader is hereby notified that any unauthorized viewing, 
dissemination, distribution or copying of this e-mail is strictly prohibited. 
If you have received this e-mail in error, please notify the sender by replying 
to this message and destroy all occurrences of this e-mail immediately.
Thank you.


-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] 'image' table with relationships to different objects

2010-02-10 Thread Justin Graf
On 2/9/2010 6:59 AM, Richard Huxton wrote:
> On 09/02/10 07:49, Louis-David Mitterrand wrote:
>> Hello,
>>
>> In my database I have different object types (person, location, event,
>> etc.) all of which can have several images attached.
>>
>> What is the best way to manage a single 'image' table with relationships
>> to (potentially) many different object types while keeping referrential
>> integrity (foreign keys)?
>
> The "clean" way to do this would be with a number of joining tables:
>
> images(img_id, file_name, title ...)
> persons   (psn_id, first_name, last_name, ...)
> locations (loc_id, loc_name, lat, lon, ...)
> events(evt_id, evt_name, starts_on, ends_on, ...)
>
> person_images   (psn_id, img_id)
> location_images (loc_id, img_id)
> event_images(evt_id, img_id)


Another way, which reduces the number of tables, simplifies the selects,
and allows linking to an unlimited number of objects, is something like this:
Create table images (img_id serial , img_data bytea );

Create table image_related( img_rel_id serial, img_rel_img_id int, 
img_link_key int, img_link_from char(10) );

Create table persons   (psn_id serial, first_name text, last_name text) ;
create table locations (loc_id serial, loc_name text) ;
create table events(evt_id serial, evt_name text, starts_on 
timestamp, ends_on timestamp);

Insert into images values (default, null), (default, null), (default, null);

Insert into persons values ( default, 'me me', 'yes itsme');
Insert into locations values (default,  'I home');
Insert into events values (default, 'friends party', now(),  now() );
insert into image_related values (default, 1, 1, 'persons'), (default 
,2, 1, 'events'), (default ,3, 1, 'locations'), (default , 2, 1, 'persons');


Select img_data, first_name  from persons, images, image_related
 where img_id = img_rel_img_id
 and img_link_key = psn_id
 and img_link_from  = 'persons'


then create a rule on img_related before insert and update to make sure 
the parent records exist  for integrity checks.









Re: [SQL] MSSQL to PostgreSQL - Issue trying to generate unique ID using actual date

2010-03-18 Thread Justin Graf
On 3/17/2010 9:52 AM, Ignacio Balcarce wrote:
>
> Hi all,
>
> I am facing a problem trying to convert from MSSQL procedure to 
> PostgreSQL function.
>
> CREATE PROCEDURE dbo.THUBAN_SP_GENERATEID
>
>  @NEWID VARCHAR(20) OUTPUT
>
> AS
>
> SET @NEWID = (
>
> SELECT REPLACE(SUBSTRING(CONVERT(CHAR(10),GETDATE(),20 ),1,10),'-','')
>
> + CAST(REPLICATE(0,8-LEN (ISNULL(CAST(SUBSTRING(MAX(SEQ_ID),9,8) AS
>
> INTEGER),0) + 1)) AS VARCHAR)
>
> + CAST(ISNULL(CAST(SUBSTRING(MAX(SEQ_ID),9,8) AS INTEGER),0) + 1 AS
>
> VARCHAR)
>
> FROM THUBAN_SEQ
>
> WHERE SUBSTRING(SEQ_ID,1,8)=
>
> REPLACE(SUBSTRING(CONVERT(CHAR(10),GETDATE(),20 ),1,10),'-','')
>
> )
>
> INSERT INTO THUBAN_SEQ VALUES (@NEWID)
>
> SELECT @NEWID AS ITEM_ID;
>
> GO
>


I'm surprised this works in MSSQL

CREATE SEQUENCE THUBAN_SEQ
   INCREMENT 1
   MINVALUE 1
   MAXVALUE 9223372036854775807
   START 1
   CACHE 1;


Now for the function to generate the ID with the date leading

CREATE OR REPLACE FUNCTION THUBAN_SP_GENERATEID()
RETURNS VARCHAR

AS $$

BEGIN

--now we get the next value from thuban_seq and add the date to the front.

return to_char(current_timestamp, 'YYYYMMDD')::varchar ||
    nextval('THUBAN_SEQ')::varchar;

END;

$$ LANGUAGE plpgsql;

If this is not what you're after, you need to give more information about
what you want to accomplish
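The shape of the generated ID (date prefix plus a zero-padded counter, as in the MSSQL procedure) can be sketched outside the database. A minimal Python illustration:

```python
from datetime import date

def generate_id(seq, today=None):
    """ID shaped like the MSSQL procedure's: YYYYMMDD plus an 8-digit counter."""
    today = today or date.today()
    return today.strftime("%Y%m%d") + str(seq).zfill(8)

generate_id(1, date(2010, 3, 18))  # "2010031800000001"
```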







Re: [SQL] MSSQL to PostgreSQL - Issue trying to generate unique ID using actual date

2010-03-18 Thread Justin Graf
On 3/18/2010 12:53 PM, Ignacio Balcarce wrote:
>
> Justin,
>
> Thanks in advance for your email. I forgot to tell you that every day the
> IDs must start from 0. So… the sequence id would look like: YYYYMMDD 0001,
> YYYYMMDD 0002, etc.
>
> Is there any way to make this sequence start from 0 every day?
>
> Thanks & Regards,
>
> Ignacio
>
> -
>
>
>
--we need to create a table so we can keep track of the sequence number and 
--when to reset the count

create table sequ_id ( id_number int, sequ_name char(25), date_lastrun date );

--insert a record ;
insert into sequ_id  values (1, 'thuban_seq', current_date);


  Now for the function to generate the ID with the date leading

  CREATE OR REPLACE FUNCTION THUBAN_SP_GENERATEID()
  RETURNS VARCHAR

  AS $$

  BEGIN
--now update the sequ_id table so we know the value we get makes sense,
Update sequ_id set id_number = 1 where sequ_name= 'thuban_seq' and 
date_lastrun <> current_date;

  --now we get the next value from the thuban_seq and add the date to 
the  front.

  return to_char( current_date, 'MMDD')::varchar || ' ' ||
    (Select lpad( id_number::text, 7, '0' ) from sequ_id where
     sequ_name = 'thuban_seq' and date_lastrun = current_date);

Update sequ_id set id_number = (id_number + 1) where sequ_name = 
'thuban_seq';


  END;
$$ LANGUAGE plpgsql;

this will do what you want.

Note: I have NOT tested this, but it should get you closer than the 
god-awful code from before.






Re: [SQL] MSSQL to PostgreSQL - Issue trying to generate unique ID using actual date

2010-03-18 Thread Justin Graf
OOPS did not mean to click send

On 3/18/2010 12:53 PM, Ignacio Balcarce wrote:
>
> Justin,
>
> Thanks in advance for your email. I forgot to tell than everyday IDs 
> must start from 0. So… sequence id would look like: MMDD 0001, 
> MMDD 0002, etc.
>
> Is there any way to make this sequence start from 0 every day?
>
> Thanks & Regards,
>
> Ignacio
>
> -
>
>
>
--we need to create a table so we keep track sequence number and when to 
reset the count

create table sequ_id ( id_number int, sequ_name char(25), date_lastrun 
date);

--insert a record ;
insert into sequ_id  values (1, 'thuban_seq', current_date);


--- Now for the function to generate the ID with the date leading

  CREATE OR REPLACE FUNCTION THUBAN_SP_GENERATEID()
  RETURNS VARCHAR
  AS $$
  declare creturn varchar;

  BEGIN
    --update the sequ_id table first so the value we read makes sense:
    --reset the counter (and its date) on the first call of a new day
    Update sequ_id set id_number = 1, date_lastrun = current_date
      where sequ_name = 'thuban_seq' and date_lastrun <> current_date;

    --now build the ID: read the current value from the table, pad it
    --with zeros in front, and add the date to the front
    creturn := to_char( current_date, 'MMDD')::varchar || ' ' ||
      (Select lpad( id_number::text, 7, '0' ) from sequ_id
        where sequ_name = 'thuban_seq');

    --update the sequence table
    Update sequ_id set id_number = id_number + 1
      where sequ_name = 'thuban_seq';

    --return the value
    return creturn;
  END;
$$ LANGUAGE plpgsql;

this will do what you want.

Note: I have NOT tested this, but it should get you closer than the 
god-awful code from before.
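For what it's worth, the reset-then-read-then-increment above can also be 
collapsed into a single statement. This is an untested sketch of the same 
idea using UPDATE ... RETURNING, which also closes the race window between 
the SELECT and the follow-up UPDATE when two sessions call the function at 
once (the row lock taken by the UPDATE serializes them). The 'MMDD' format 
string just follows the examples in this thread:

```sql
CREATE OR REPLACE FUNCTION thuban_sp_generateid()
RETURNS varchar AS $$
DECLARE
    n integer;
BEGIN
    -- reset to 1 on the first call of a new day, otherwise increment,
    -- and hand the new value back in the same statement
    UPDATE sequ_id
       SET id_number    = CASE WHEN date_lastrun <> current_date
                               THEN 1 ELSE id_number + 1 END,
           date_lastrun = current_date
     WHERE sequ_name = 'thuban_seq'
     RETURNING id_number INTO n;

    RETURN to_char(current_date, 'MMDD') || ' ' || lpad(n::text, 7, '0');
END;
$$ LANGUAGE plpgsql;
```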






Re: [SQL] Inserting Multiple Random Rows

2010-04-28 Thread Justin Graf
That won't work because ORDER BY does not recompute Random(): the subquery 
is evaluated once, gets a number, and stops.

You need to generate a series of random numbers and then select each record 
one at a time out of the cities table.  You will have to write a plpgsql 
function to do this, as any join will cause the result to be ordered.

Here is example of pl-sql procedure that inserts records randomly into a 
table from a another table.  This is an  excerpt from the function.  
There is more logic that limits the random result set size and  rules 
not to repeat a given number.

--First create a cursor over the source records
 OPEN _questions SCROLL for  (Select * from questions
 where quest_expire > now()::date
 and trim( both ' ' from quest_type)  = _classexams.exam_quest_type
 and trim( both ' ' from quest_level) = _classexams.exam_level
 order by quest_id );

--limit the number range created by random() so it does not exceed the
--record count returned by the cursor
select count(quest_id) into _rcount from educate.questions
 where quest_expire > now()::date
 and trim( both ' ' from quest_type)  = _classexams.exam_quest_type
 and trim( both ' ' from quest_level) = _classexams.exam_level ;

--Generate a random list of numbers
for _randlist IN (Select num from (
 select round(random()*1000)::int as num from 
generate_series(1,10)) rand
 where num <= _rcount and num > 0 ) LOOP

FETCH ABSOLUTE _randlist.num from _questions into _quest ;

--Next, insert into the destination table

end loop;

On 4/28/2010 12:22 PM, Gary Chambers wrote:
> All,
>
> I have a table of user addresses that I'm trying to randomly populate
> with data from a cities table.  Using the following query:
>
> INSERT INTO useraddrs(userid, addrdesc, city, stprov)
> SELECT u.userid, 'Home', c.cityname, c.stateabbr
> FROM users u, cities c
> WHERE u.userid NOT IN (SELECT userid FROM useraddrs)
>  AND cid=(SELECT cid FROM cities ORDER BY RANDOM() LIMIT 1);
>
> I am able to achieve most of what I am trying to accomplish, but once
> the random number is selected, it doesn't change.  What am I missing?
> Thank you in advance.
>
> -- Gary Chambers
>
> /* Nothing fancy and nothing Microsoft! */
>
>




-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Inserting Multiple Random Rows

2010-04-28 Thread Justin Graf
On 4/28/2010 1:48 PM, Gary Chambers wrote:
> pen?
>
> The clouds parting, choirs of angels singing, and fireworks
> celebrating the veil of my obtuseness being lifted, and my grasp and
> command of SQL to be complete and infinite.  None of which appears
> will ever happen...
>
> -- Gary Chambers
>
> /* Nothing fancy and nothing Microsoft! */
>
>
You know what *might* work:

Insert into useraddrs(userid, addrdesc, city, stprov)
select u.userid, 'Home', c.cityname, c.stateabbr FROM users u, cities c
WHERE u.userid NOT IN (SELECT userid FROM useraddrs)
and c.cid IN (select round(random()*1000)::int from generate_series(1,10))

since you don't care if a record repeats.
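If repeats are not acceptable and one genuinely random city per user is 
wanted, a LATERAL join (PostgreSQL 9.3+, so not an option on older servers) 
re-runs the ORDER BY random() subquery for every user row. A sketch, 
assuming cities carries cityname/stateabbr as in the thread:

```sql
INSERT INTO useraddrs (userid, addrdesc, city, stprov)
SELECT u.userid, 'Home', c.cityname, c.stateabbr
FROM users u
CROSS JOIN LATERAL (
    -- referencing u.userid inside the subquery forces it (and its
    -- random() sort) to be re-evaluated once per user row
    SELECT cityname, stateabbr
    FROM cities
    WHERE u.userid IS NOT NULL
    ORDER BY random()
    LIMIT 1
) c
WHERE u.userid NOT IN (SELECT userid FROM useraddrs);
```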




Re: [SPAM]-D] [SQL] How to find broken UTF-8 characters ?

2010-04-29 Thread Justin Graf
On 4/28/2010 10:34 PM, Andreas wrote:
> Hi,
>
> while writing the reply below I found it sounds like beeing OT but 
> it's actually not.
> I just need a way to check if a collumn contains values that CAN NOT 
> be converted from Utf8 to Latin1.
> I tried:
> Select convert_to (my_column::text, 'LATIN1') from my_table;
>
> It raises an error that says translated:
> ERROR:  character 0xe28093 in encoding »UTF8« has no equivalent in 
> »LATIN1«
>
> I'd like to select all those records.
> When I know which record has faulty content I can correct it.
>
> If this is really OT on the SQL list then please tell me where to ask.

That's easy enough: you need to write an UPDATE statement using a regular 
expression to replace all characters that are not legal Latin1/ASCII.
http://www.postgresql.org/docs/8.4/interactive/functions-string.html

The command is regexp_replace('MyBadString', 
'SearchForAllNonLatin1Chars', 'ReplaceWithBlankString').

I'm pretty sure this is the regular expression to find all non-Latin1 
chars: [^\x00-\xFF]

To test it, try: Select regexp_replace( MyColumn, '[^\x00-\xFF]', ' ') 
from screweduptable

If the regular expression does not work, I'm a dry well when it comes to 
regular expressions.  Did I say I hate regular expressions?  They are damn 
near impossible to write.

Once you get the expression right and working, the Update is 
straightforward:
Update mytable set mybadcolumn = regexp_replace( mybadcolumn, 
'[^\x00-\xFF]', ' ')
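To answer the original question directly (selecting the offending rows 
rather than blanking them out), the same character class can drive a WHERE 
clause: Latin1 covers exactly code points \x00-\xFF, so anything outside 
that range is what convert_to(..., 'LATIN1') chokes on. Table and column 
names here are placeholders:

```sql
-- list every row whose text cannot be represented in Latin1
SELECT *
FROM my_table
WHERE my_column ~ '[^\x00-\xFF]';
```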

>> Select covert('MyUtf8', 'UTF8', 'LATIN')
>> or
>> Select covert_to('MyUtf8',  'LATIN')
>
> I found them before but didn't understand their output.
> e.g.
> Select convert('1aäßx', 'utf8', 'LATIN1') ;
> Result = "1a\344\337x"
> so it translated  ä = 344  and  ß = 337. The other 3 are just as they 
> were before.
> How can this be valid in a single byte charset like Latin1?
> Especially as ä, ß are E4 and DF.
> Why do they come out as escaped codes when they are in Latin1 aswell 
> as 1, a and x?

Someone with more knowledge of how convert() works is going to have to 
explain why they have been escaped.  PgAdmin may have escaped them, but 
those characters are valid Latin1 characters.

http://en.wikipedia.org/wiki/%C3%84
http://en.wikipedia.org/wiki/%C3%9F
ß = Latin sharp s (often mistaken for a Greek beta)

It seems Access and Excel are putting extra bits of data into the field.  
In the past I had to change inserts/updates from Access so it would send 
data in a specific character encoding.  I had problems where Access was 
using a Windows encoding, the ODBC driver converted it to Latin1, and the 
MSSQL database stored it as UTF8.  It was no fun cleaning it up.

>
>> What ever pg client library used to move Excel data to PG my have 
>> incorrectly converted some of the data or moved formatting 
>> information into the database.  I have seen Access and Excel do 
>> mightily odd things when connecting to DB's  I don't know about 
>> current versions but 2000 and 2003 Excels did really stupid things 
>> when trying to write to DB's including MSSQL.
>
> Cute ... we use Access 2000 and 2003   :(

I've been screwed more by Excel and Access flakiness, and by Access 
programmers thinking they are DBAs.







Re: [SQL] Column Specific Update Trigger Routine

2010-05-06 Thread Justin Graf
On 5/6/2010 4:12 PM, Plugge, Joe R. wrote:
>
> I am trying to create a update trigger on a table that basically will 
> only fire when a specific column is updated.  I am using version 8.4.3.
>
> My plan of attack was to always fire on any row update, and pass in 
> the OLD and NEW column that I want to check.
>
> CREATE TRIGGER check_lockout
>
> AFTER UPDATE ON acct_table
>
> FOR EACH ROW
>
> EXECUTE PROCEDURE 
> acct_unlock(OLD.userid,OLD.ownerid,OLD.password,NEW.password);
>
> This fails with :
>
> [postg...@linux1505 ~]$ cat check_lockout_trig.sql | psql testdb
>
> ERROR:  syntax error at or near "OLD"
>
> LINE 4: EXECUTE PROCEDURE 
> acct_unlock(OLD.userid,OLD.ownerid,OLD.password,NEW.password);
>
> What am I doing wrong? Or is there a better way to go about this?
>
You don't call the trigger procedure with OLD and NEW as parameters;

NEW and OLD are automatically available inside the function acct_unlock().

CREATE TRIGGER check_lockout

 AFTER UPDATE ON acct_table

 FOR EACH ROW

 EXECUTE PROCEDURE acct_unlock();


Next the trigger function would look something like this

create or replace function acct_unlock()
returns trigger as
$$
begin
    if (OLD.password <> NEW.password) then
        -- do something
    end if;
    return NEW;
end;
$$ language plpgsql;


Keep in mind that acct_unlock must be declared RETURNS trigger.

Then return either the NEW or the OLD record:
OLD if not changing the record, or NEW if the updated values are to be 
stored in the table.
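As an aside (not usable on the poster's 8.4.3, but worth knowing): 
PostgreSQL 9.0 added column-specific triggers and a WHEN clause, which 
moves the comparison out of the function entirely. A sketch:

```sql
CREATE TRIGGER check_lockout
    AFTER UPDATE OF password ON acct_table
    FOR EACH ROW
    WHEN (OLD.password IS DISTINCT FROM NEW.password)
    EXECUTE PROCEDURE acct_unlock();
```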






Re: [SQL] Celko take on EAV

2010-05-07 Thread Justin Graf
On 5/7/2010 12:33 PM, Richard Broersma wrote:
> 
>
> I'm rereading my Joe Celko's SQL Programming Style and I noticed an
> interesting comment regarding the EAV model (of course he discourages
> its use):
>
> "There are better tools for collecting free-from data."
>
> What tools was he referring to?
>

http://en.wikipedia.org/wiki/Entity-attribute-value_model
has notes on other models.



Re: [SQL] best paging strategies for large datasets?

2010-05-12 Thread Justin Graf
On 5/12/2010 1:41 AM, Louis-David Mitterrand wrote:
> Hi,
>
> I have a large dataset (page 1 at http://www.cruisefish.net/stat.md) and
> am in the process of developping a pager to let users leaf through it
> (30K rows).
>

That's not that big of a record set.

> Ideally I'd like to know when requesting any 'page' of data where I am
> within the dataset: how many pages are available each way, etc.
>
> Of course that can be done by doing a count(*) query before requesting a
> limit/offset subset. But the main query is already quite slow, so I'd
> like to minimize them.
>

What do you mean by quite slow??

On a 30K record table, count() and query speed should not be a problem.

> But I am intrigued by window functions, especially the row_number() and
> ntile(int) ones.
>
> Adding "row_number() over (order by)" to my query will
> return the total number of rows in the first row, letting my deduce the
> number of pages remaining, etc. row_number() apparently adds very little
> cost to the main query.
>

That will get a sequential number, but you still don't know how many 
records are in the table; limit and offset block that value.
I don't see how this helps.

  Limit and Offset with Total Record count tell us where we are in the 
record set and which page we are on.

RecordCount/Limit = Number of pages
CurrentPage = (offset%RecordCount)/Limit

To complicate things further, what if the site allows the user to change 
the number of records displayed per page?  The pager logic needs to figure 
out how many records to return per page, and what the next and previous 
iterations are.  Without the total record count I don't see how that is 
even possible.

I have written pagers in ASP and PHP




Re: [SQL] best paging strategies for large datasets?

2010-05-12 Thread Justin Graf
oops typos
On 5/12/2010 1:41 AM, Louis-David Mitterrand wrote:
> Hi,
>
> I have a large dataset (page 1 at http://www.cruisefish.net/stat.md) and
> am in the process of developping a pager to let users leaf through it
> (30K rows).
>

That's not that big of a record set.

> Ideally I'd like to know when requesting any 'page' of data where I am
> within the dataset: how many pages are available each way, etc.
>
> Of course that can be done by doing a count(*) query before requesting a
> limit/offset subset. But the main query is already quite slow, so I'd
> like to minimize them.
>

What do you mean by quite slow??

On a 30K record table count() and query speed should not be a problem..

> But I am intrigued by window functions, especially the row_number() and
> ntile(int) ones.
>
> Adding "row_number() over (order by)" to my query will
> return the total number of rows in the first row, letting my deduce the
> number of pages remaining, etc. row_number() apparently adds very little
> cost to the main query.
>

That will get a sequential number, but you still don't know how many 
records are in the table; limit and offset block that value.
I don't see how this helps.

  Limit and Offset with Total Record count tell us where we are in the 
record set and which page we are on.

RecordCount/Limit = Number of pages
CurrentPage = (offset%RecordCount)/Limit

To complicate things further, what if the site allows the user to change 
the number of records displayed per page?  The pager logic needs to figure 
out how many records to return per page, and what the next and previous 
iterations are.  Without the total record count I don't see how that is 
even possible.

I have written pagers in ASP and PHP
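One way to get the total without a separate count(*) query, since the 
poster is already looking at window functions (8.4+): count(*) OVER () 
attaches the full result-set count to every returned row, and it is 
computed before LIMIT/OFFSET are applied. A sketch with illustrative table 
and column names:

```sql
SELECT t.*,
       count(*) OVER () AS total_rows   -- whole-result count, not page size
FROM stats t
ORDER BY t.recorded_at
LIMIT 30 OFFSET 60;
```

Note this still pays the cost of producing the full result set, so it does 
not speed the slow query up; it only saves the second round trip.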





Re: [SQL] best paging strategies for large datasets?

2010-05-13 Thread Justin Graf
On 5/13/2010 4:41 AM, silly sad wrote:
>
>>> First u count(*) the rows and select a requested page
>>> returning to a client the count result bundled "with a page of rows"
>>>
>>> (1) client renders the acquired rows
>>> (2)__memorize__ what part of the data he just got
>>> (3) and stores the count result to calculate "the pager div"
>>>
>>> all the subsequent clicks on "the pager div" should not immediately
>>> generate requests and decides if the request is needed.
>>
>> Yes, rendering the results throught ajax is a good idea, but one has to
>> be careful not to expose one's LIMIT and OFFSET to the client, but only
>> the "page" number. Or else the client could query the whole data set. A
>> lot of "professional" web site have that hole.
>>
>
> this is not a hole, it is only a matter of aesthetic
>
Silly Sad is right: this is not a hole but a matter of aesthetics.  To keep 
the code simple and limit the amount of state that has to be tracked in the 
client session on the server, I pass the limit and offset to the client, 
normally in a URL/link.  This also avoids the problem of the user's session 
expiring and the information being lost, forcing the user to restart.  Very 
annoying.

So the urls look something like this   
www.mywebsit.com/index.php?module=getthedata&limit=10&offset=30&orderby=5

On the server, cast these three values to integer to block SQL injection.

I really don't care if the user sends a command to get all 10,000 records.  
If you block that, all you have done is slow down data harvesting and eat 
up even more resources, as the client/user will send ever more GETs to 
harvest the data.  Nothing has been accomplished.

To keep the casual onlooker from screwing with the URL, encode it in 
base64.  That keeps honest people honest, but a hacker will pull it apart 
in a second.



Re: [SQL] best paging strategies for large datasets?

2010-05-13 Thread Justin Graf
On 5/13/2010 3:43 AM, Louis-David Mitterrand wrote:
**snip***
>
>> What do you mean by quite slow??
>>  
> Like several seconds. I have to cache the results.
>

Well then I suggest posting the queries to pgsql-performance or here and 
letting us take a look at them.
Don't forget to include the explain/analyze output and the number of 
records in each table.
>> On a 30K record table count() and query speed should not be a problem..
>>  
> This query is a large multi-join of times series data, not a single
> table. And it's not (prematurely :) optimized.
>
> I'm planning a materialized view for it.
>

here read this
http://www.pgcon.org/2008/schedule/events/69.en.html

The question that begs to be asked: how big are the tables the query is 
accessing??

If it's not hundreds of thousands to millions of records, or running on a 
stone-age server, my guess is the query can be improved.


> **snip**
>> RecordCount/Limit = Number of pages
>> CurrentPage = (offset%RecordCount)/Limit
>>  
> These simple formulas will be handy.
>

Don't forget to use floor on these in whatever language you're 
programming in.
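A worked instance of the arithmetic, with hypothetical numbers (105 
records, 10 per page, offset 30). One caveat: the page count needs a ceil 
rather than a floor whenever the last page is partial:

```sql
SELECT ceil(105 / 10.0)::int AS number_of_pages,  -- 11; the last page holds 5 rows
       (30 / 10) + 1         AS current_page;     -- page 4 (integer division)
```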






Re: [SQL] user function and bind

2010-05-19 Thread Justin Graf
On 5/19/2010 9:56 AM, David Harel wrote:
> Hi,
> I need an example how to write user function with columns binding and 
> how to use it on PHP
> -- 
> Thanks.
>


I'm not sure I understand your question.  You want a function that returns 
a record type, correct?










Re: [SQL] how to construct sql

2010-06-02 Thread Justin Graf
On 6/2/2010 12:31 PM, Wes James wrote:
> On Wed, Jun 2, 2010 at 10:55 AM, Oliveiros
>   wrote:
>
>> Hi,
>> Have you already tried this out?
>>
>> select MAX(page_count_count) - MIN(page_count_count)  from page_count group
>> by page_count_pdate.
>>
>>
>> Best,
>> Oliveiros
>>  
> Oliveiros,
>
> Thx that mostly works.  I just tried it and on the days there is only
> 1 entry it is 0 since max is the same as min so max - min is 0.  Is
> there a way to take in to account the 1 entry days?
>
> Again thx - I appreciate your help :)
>
> -wes
>
>
Put in a case:

select
case when MAX(page_count_count) - MIN(page_count_count) > 0 then
MAX(page_count_count) - MIN(page_count_count)
else
MAX(page_count_count)
end
from page_count
group by page_count_pdate;






Re: [SQL] how to construct sql

2010-06-02 Thread Justin Graf
On 6/2/2010 2:52 PM, Wes James wrote:
>
>   **snip***
> Thx it is closer (with an end in the case):
>
> select
> case when MAX(page_count_count) - MIN(page_count_count)>  0 then
> MAX(page_count_count) - MIN(page_count_count)
> else
> MAX(page_count_count)
> end as day_max
> from page_count
> group by page_count_pdate order by page_count_pdate;
>
> the else puts out the total count on that day.  I would need
> max(page_count_count) - max(page_count_count_of_previous_day)
>
> thx,
>
> -wes
>

A windowing query makes sense in this case, though I'm not very good at 
those.
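For the record, the windowing version might look like this (8.4+, 
untested): lag() pulls the previous day's aggregated max, and coalesce 
falls back to 0 so the first day reports its own max, matching the else 
branch of the earlier CASE query.

```sql
SELECT page_count_pdate,
       max(page_count_count)
         - coalesce(lag(max(page_count_count))
                      OVER (ORDER BY page_count_pdate), 0) AS day_max
FROM page_count
GROUP BY page_count_pdate
ORDER BY page_count_pdate;
```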





Re: [SQL]

2010-07-06 Thread Justin Graf
I wrote an article covering this on the wiki

http://wiki.postgresql.org/wiki/BinaryFilesInDB

I need to update it for 9.0, as bytea now allows HEX-format strings

http://developer.postgresql.org/pgdocs/postgres/datatype-binary.html
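For reference, the new hex input format looks like the sketch below (this assumes standard_conforming_strings is on; with it off, the backslash in the literal would have to be doubled):

```sql
-- 9.0+ hex-format input for bytea: a leading \x followed by hex digits.
SELECT '\xDEADBEEF'::bytea;
```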










Re: [SQL]

2010-07-07 Thread Justin Graf
On 7/7/2010 12:00 AM, silly sad wrote:
> On 07/06/10 21:52, Justin Graf wrote:
>
>> I wrote an article covering this on the wiki
>>
>> http://wiki.postgresql.org/wiki/BinaryFilesInDB
>>  
> there are some "red flags" in communication
> (particularly reading papers)
> one of them is "binary data" which ITSELF IS NONSENCE.
>

WHAT???

You do understand that if you don't like it you can spend time fixing it.




Re: [SQL] strangest thing happened

2010-07-07 Thread Justin Graf
Are you using PG's sequence/auto-increment?

If so: once PG fires off nextval() for a sequence, that number is 
considered used and gone, even if the transaction that called nextval() 
is rolled back.

Depending on how the app is written, nextval() might be called but the 
user allowed to cancel the invoice creation before the insert into the 
table completes, eating up invoice numbers.

To reset a sequence's number, call:
SELECT setval('sequence_name', value_to_set_to);

Most people ignore this kind of annoyance when sequence numbers jump.  
But if it happens all the time, eating up Z sequence numbers every X 
hours, then one needs to dig into the logs and figure out what is 
calling nextval()

Search the logs to see what is calling nextval('My_Sequence')

You may need to turn up logging to find it.
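For the second question, a minimal sketch of resetting one serial column's sequence to the table's current maximum; 'invoice' and 'invoice_id' are placeholder names, so repeat per table (or generate the calls from the catalogs):

```sql
-- Hedged sketch: align one serial column's sequence with the table data.
-- pg_get_serial_sequence() looks up the sequence owned by the column.
SELECT setval(pg_get_serial_sequence('invoice', 'invoice_id'),
              COALESCE(MAX(invoice_id), 1))
FROM invoice;
```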


On 7/7/2010 2:59 PM, John wrote:
> I am the only developer, DBA etc.. for a small project.  Today (yesterday was
> everything was perfect) many of the sequence numbers fell behind what is the
> actual PK value.   For example the invoice PK sequence current value = 1056
> but the table PK was 1071.  Nobody (other than myself) knows how to
> edit/access the postgres server.  So
>
> 1. Does anyone know how this could have happened?? Other than human
> interaction.
>
> 2. Does anyone have a script to reset the sequences to match the tables?
>
> Thanks in advance,
>
> Johnf
>
>





Re: [SQL] strangest thing happened

2010-07-07 Thread Justin Graf
On 7/7/2010 3:42 PM, Ross J. Reedstrom wrote:
>
> Justin, you're missing that John reported that the sequences are
> _behind_ the table. This only happens for me if I've been doing
> bulk data loads. Then I use:
>
> select setval(sequence_name,max(serial_id_column)) from table_with_serial_id;
>
> You do need to trackdown how this might have happened, though. Any
> clever code doing it's own 'serial' incrementing?
>
> Ross
>


Yes, I did misread his statement, oops =-O

The highest PK value in the table is 1071 but the next sequence value is 
1056.  That's interesting and could be a big problem.

Quoting Johnf:


I'm thinking/guessing it had something to do with vacuum or the backup.
The backup is a windows product "exec" and I'm using a special plug-in 
from exec for the Linux backup.  But I still can't see this actually 
happening.

--

BackupExec, hmmm. Are you doing a file-level backup (backing up the 
PGDATA folder), or are you doing pg_dump?

I don't think it's a backup issue unless you have done a restore, which 
would suggest there are more problems elsewhere.


Are there invoices that use up numbers 1056 to 1071 in that table?

Does the app allow resetting sequences in an admin interface?  Many 
apps have such features, and someone could have accidentally reset the 
value.

I would look at the log files for the inserts into that table as a 
means of tracking down the cause.  If there are no log files, or they 
don't have enough detail, crank up the logging level and wait for it to 
happen again.







Re: [SQL] strangest thing happened

2010-07-08 Thread Justin Graf
On 7/7/2010 5:41 PM, John wrote:
> On Wednesday 07 July 2010 03:14:40 pm Justin Graf wrote:
>
>> I would be looking at the log files for the Inserts into that table as a
>> means to track down what is the cause.  If there are no log files or
>> don't have enough detail, crank up the logging level and wait for it to
>> happen again???
>>  
>
> That is scary - let it happen again  I'm not keeping enough info in the
> log.  I actually turned off most of the info the log files are gathering
> because the system has been running for 6-7 months without an issue.  I just
> got a call around noon telling me something was going wrong.  That's when I
> discovered the sequences were the wrong values.  I'm sure there has to be
> some sort of real explanation - but I don't know what it is.
>
> Johnf
>
>

Sometimes we just don't have a choice but to let things break again to 
figure out the cause.

I had an odd case where parent records could become deleted every once 
in a while, for what appeared to be no rhyme or reason.
The app had around 1000 PL/pgSQL functions, not counting triggers, so 
figuring out the cause by just staring at the code was not practical.
The log was already set to record:
    log_statement = 'all'
    log_line_prefix = 'User %u, DB %d, Client %r, PID %p, Time %m, SID %c, LineCount %l, TID %x '
    log_destination = 'csvlog'

The logs rotated out every 30 days, so I had a good sample of the 
commands sent to PG to figure out what went wrong.

As the statements are logged, all I had to do was search for
    DELETE FROM cohead WHERE cohead_id = 

This gave me the transaction ID and the session ID to start 
backtracking, to see whether a trigger or function issued the delete.  
Once I had figured out that it was a PL/pgSQL procedure, I needed to 
figure out where in the application this seriously ill-thought-out 
DELETE Sales Order function was called.  I threw a RAISE EXCEPTION into 
the PL/pgSQL and waited for the data entry people to come screaming.
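The tripwire can be sketched like this; the function and parameter names are illustrative, not the real app's, and the original function body would of course be saved somewhere first:

```sql
-- Hedged sketch: overwrite the suspect function so any caller fails loudly.
CREATE OR REPLACE FUNCTION delete_sales_order(p_cohead_id integer)
RETURNS void AS $tripwire$
BEGIN
    -- Every call now aborts with an error the client application surfaces,
    -- revealing which code path reaches this function.
    RAISE EXCEPTION 'delete_sales_order(%) was called -- stop and find the DBA',
        p_cohead_id;
END;
$tripwire$ LANGUAGE plpgsql;
```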

Around three weeks later a data entry girl came and found me, asking me 
to explain why her computer had ordered her to come find me ASAP and 
why nothing else mattered but to stop and find me.

With that error, I now had the call stack from the app to figure out 
the series of events that allowed a Sales Order to be deleted.

After all was said and done there was a total of 50 records deleted out 
of 60,000 (not even 0.1%); not a big deal, unless you are the customer 
whose order was deleted.

My experience has taught me never to turn off logging, because we never 
know when we may need it.







Re: [SQL] how to escape _ in select

2010-07-28 Thread Justin Graf
On 7/28/2010 12:35 PM, Wes James wrote:
> I'm trying to do this:
>
> select * from table where field::text ilike '%\_%';
>
> but it doesn't work.
>
> How do you escape the _ and $ chars?
>
> The docs say to use \, but that isn't working.
>
> ( http://www.postgresql.org/docs/8.3/static/functions-matching.html )
>
> The text between '%...%' can be longer, I'm just trying to figure out
> how to escape some things.  I've found that ' works with '' and \
> works with \\
>

Instead of escaping, how about looking at dollar quoting?

http://www.postgresql.org/docs/8.4/static/sql-syntax-lexical.html

4.1.2.4. Dollar-Quoted String Constants

While the standard syntax for specifying string constants is usually 
convenient, it can be difficult to understand when the desired string 
contains many single quotes or backslashes, since each of those must be 
doubled. To allow more readable queries in such situations, PostgreSQL 
provides another way, called "dollar quoting", to write string 
constants. A dollar-quoted string constant consists of a dollar sign 
($), an optional "tag" of zero or more characters, another dollar sign, 
an arbitrary sequence of characters that makes up the string content, a 
dollar sign, the same tag that began this dollar quote, and a dollar 
sign. For example, here are two different ways to specify the string 
"Dianne's horse" using dollar quoting:

$$Dianne's horse$$
$SomeTag$Dianne's horse$SomeTag$

Notice that inside the dollar-quoted string, single quotes can be used 
without needing to be escaped. Indeed, no characters inside a 
dollar-quoted string are ever escaped: the string content is always 
written literally. Backslashes are not special, and neither are dollar 
signs, unless they are part of a sequence matching the opening tag.

It is possible to nest dollar-quoted string constants by choosing 
different tags at each nesting level. This is most commonly used in 
writing function definitions. For example:

$function$
BEGIN
 RETURN ($1 ~ $q$[\t\r\n\v\\]$q$);
END;
$function$

Here, the sequence $q$[\t\r\n\v\\]$q$ represents a dollar-quoted literal 
string [\t\r\n\v\\], which will be recognized when the function body is 
executed by PostgreSQL. But since the sequence does not match the outer 
dollar quoting delimiter $function$, it is just some more characters 
within the constant so far as the outer string is concerned.

The tag, if any, of a dollar-quoted string follows the same rules as an 
unquoted identifier, except that it cannot contain a dollar sign. Tags 
are case sensitive, so $tag$String content$tag$ is correct, but 
$TAG$String content$tag$ is not.

A dollar-quoted string that follows a keyword or identifier must be 
separated from it by whitespace; otherwise the dollar quoting delimiter 
would be taken as part of the preceding identifier.

Dollar quoting is not part of the SQL standard, but it is often a more 
convenient way to write complicated string literals than the 
standard-compliant single quote syntax. It is particularly useful when 
representing string constants inside other constants, as is often needed 
in procedural function definitions. With single-quote syntax, each 
backslash in the above example would have to be written as four 
backslashes, which would be reduced to two backslashes in parsing the 
original string constant, and then to one when the inner string constant 
is re-parsed during function execution.
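That said, dollar quoting only changes how the string literal is written; to make LIKE/ILIKE treat the underscore literally, the pattern itself needs an escape. Two hedged sketches (table and column names are placeholders; the first assumes standard_conforming_strings is off, the 8.x default, so the backslash must be doubled in the literal):

```sql
-- Backslash escape: one backslash must survive into the LIKE pattern.
SELECT * FROM some_table WHERE field::text ILIKE '%\\_%';

-- Or pick a different escape character and avoid backslashes entirely.
SELECT * FROM some_table WHERE field::text ILIKE '%!_%' ESCAPE '!';
```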






Re: [SQL] Domains, casts, and MS Access

2010-08-04 Thread Justin Graf
On 8/4/2010 1:56 PM, Richard Broersma wrote:
> On Wed, Aug 4, 2010 at 11:51 AM, Peter Koczan  wrote:
>
>
>> This is one of my first forays into ODBC, so I didn't know that was a
>> possibility. Is there any place where these are documented? Searching
>> for ODBC options yields info on connection options, but none on
>> behavior that I could find.
>>  
> I know that there are a couple of options that affect the
> representation of Booleans in the odbc driver.  I'm not sure it will
> do what you need though.
>
> However, here is the official documentation: (hopefully it helpful)
>
> http://psqlodbc.projects.postgresql.org/
>
>
>
Yes, there is an option to change bools to char:

* Data Type Options: affects how some data types are mapped:
  o Text as LongVarChar: PostgreSQL TEXT type is mapped to
    SQLLongVarchar, otherwise SQLVarchar.
  o Unknowns as LongVarChar: unknown types (arrays, etc.) are
    mapped to SQLLongVarChar, otherwise SQLVarchar.
  o Bools as Char: bools are mapped to SQL_CHAR, otherwise to
    SQL_BIT.

My memory is fuzzy, but there are some additional settings in Access 
that allow data type mapping...


