Re: [SQL] Finding Max Value in a Row

2012-05-11 Thread Viktor Bojović
You can convert from type to type using ::varchar, ::char(size) or
::integer,
so you can use a sequence, but you will have to convert its result to a
suitable type (that cast can also be put in the default value of the users_id attribute).
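
A rough sketch of both variants, assuming the users / users_id names from the quoted mail and PostgreSQL 8.3 or later:

CREATE SEQUENCE users_users_id_seq;
SELECT setval('users_users_id_seq', 100301);  -- continue after the highest existing value

-- keep the CHAR column and cast the sequence's result in the default ...
ALTER TABLE users
  ALTER COLUMN users_id SET DEFAULT nextval('users_users_id_seq')::varchar;

-- ... or convert the column itself and keep a plain integer default:
ALTER TABLE users
  ALTER COLUMN users_id TYPE integer USING trim(users_id)::integer,
  ALTER COLUMN users_id SET DEFAULT nextval('users_users_id_seq');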

On Fri, May 11, 2012 at 9:30 PM, Carlos Mennens carlos.menn...@gmail.com wrote:

 Thanks for all the help thus far everyone! I sadly didn't
 create/design the table and would love to create a SEQUENCE on that
 particular field but not sure how unless I DROP the table and create
 from scratch.

 Currently the data TYPE on the primary key field (users_id) is CHAR
 and I have no idea why...it should be NUMERIC or SERIAL but it's not
 so my question is if I want to ALTER the column and create a sequence,
 would I simply do:

 ALTER TABLE users
 ALTER COLUMN users_id TYPE serial
 ;

 Obviously if any of the data stored in users_id is actual CHAR text, I'm
 guessing the database would reject that request to change the type as the
 existing data wouldn't match. However, while the data type is CHAR, the field
 values are all numeric from 100010 - 100301, so I'm hoping that
 would work for SERIAL which is just INTEGER, right?

 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] Is there a way to remove every character but numbers?

2012-02-05 Thread Viktor Bojović
On Sun, Feb 5, 2012 at 2:47 PM, Andreas maps...@gmx.net wrote:

 Hi,

 I need to clean up a text column so that it just contains numbers or the
 + sign if it is the first character of the string.

 Well, it is needed to normalise a phone number column.
 So it'd be great if the + could be transformed into 00 in the same step.

 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql


try regexp_replace
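
A minimal sketch (contacts.phone is a placeholder table/column name, not given in the original mail):

UPDATE contacts
SET phone = regexp_replace(
              regexp_replace(phone, '^[+]', '00'),  -- turn a leading + into 00 first
              '[^0-9]', '', 'g');                   -- then drop everything that is not a digit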


-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] Subselects to Joins? Or: how to design phone calls database

2011-12-10 Thread Viktor Bojović
Hi Mario,
OVER (PARTITION BY ...) will help. If that syntax is uncomfortable you can use
multiple joins, using maximum length or max(prefix::int), but that will slow
down the process. A SELECT with OVER (PARTITION BY ...) is the fastest solution, I think.



On Sat, Dec 10, 2011 at 11:24 PM, Mario Splivalo
mario.spliv...@megafon.hr wrote:

 I have a table called 'calls' which holds 'call detail records'. Let's
 assume the table looks like this:

 CREATE TABLE calls (
call_id serial,
phone_number text
 );

 And I have a table with country call prefixes, that looks like this:

 CREATE TABLE prefixes (
prefix text,
country text
 );

 And now some test data:

 INSERT INTO prefixes VALUES ('1', 'USA');
 INSERT INTO prefixes VALUES ('44', 'UK');
 INSERT INTO prefixes VALUES ('385', 'Croatia');
 INSERT INTO prefixes VALUES ('387', 'Bosnia');
 INSERT INTO prefixes VALUES ('64', 'New Zeland');
 INSERT INTO prefixes VALUES ('642', 'New Zeland Mobile');
 INSERT INTO calls VALUES (1, '11952134451');
 INSERT INTO calls VALUES (2, '448789921342');
 INSERT INTO calls VALUES (3, '385914242232');
 INSERT INTO calls VALUES (4, '385914242232');
 INSERT INTO calls VALUES (5, '645122231241');
 INSERT INTO calls VALUES (6, '444122523421');
 INSERT INTO calls VALUES (7, '64212125452');
 INSERT INTO calls VALUES (8, '1837371211');
 INSERT INTO calls VALUES (9, '11952134451');
 INSERT INTO calls VALUES (10, '448789921342');
 INSERT INTO calls VALUES (11, '385914242232');
 INSERT INTO calls VALUES (12, '385914242232');
 INSERT INTO calls VALUES (13, '645122231241');
 INSERT INTO calls VALUES (14, '4441232523421');
 INSERT INTO calls VALUES (15, '64112125452');
 INSERT INTO calls VALUES (16, '1837371211');


 Now, if I want to have a 'join' between those two tables, here is what I
 am doing right now:

 SELECT
call_id,
phone_number,
(SELECT
country
FROM
prefixes
WHERE
calls.phone_number LIKE prefix || '%'
ORDER BY
length(prefix) DESC LIMIT 1
) AS country
 FROM calls;


 Is there a way I could use join here? I can do something like:

 SELECT ... FROM calls JOIN prefixes ON calls.phone_number LIKE prefix ||
 '%'

 but I'd get duplicate rows there (for instance, for New Zeland calls,
 from my test data).

 Or should I add a 'prefix' field to the calls table, and then do an inner
 join with prefixes table?

Mario

 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


[SQL] function based index problem

2011-08-31 Thread Viktor Bojović
Hi,
on table entry (17M records) there is one index:

CREATE INDEX ndxlen
  ON uniprot_frekvencije.entry
  USING btree
  (length(sequence::text));

When using >= in a search which returns only two records, the query runs much
(a hundred times) slower. I don't know why it doesn't use an index scan, so I just
wanted to ask how I can modify the query to use that index. Explain plans
are pasted below.

bioinf= explain select * from entry where length(sequence)=36805;
 QUERY PLAN

 Bitmap Heap Scan on entry  (cost=1523.54..294886.26 rows=81226 width=1382)
   Recheck Cond: (length((sequence)::text) = 36805)
   ->  Bitmap Index Scan on ndxlen  (cost=0.00..1503.23 rows=81226 width=0)
 Index Cond: (length((sequence)::text) = 36805)
(4 rows)

bioinf= explain select * from entry where length(sequence)>=36805;
 QUERY PLAN

 Seq Scan on entry  (cost=0.00..5400995.21 rows=5415049 width=1382)
   Filter: (length((sequence)::text) >= 36805)
(2 rows)

Thanx in advance
-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] function based index problem

2011-08-31 Thread Viktor Bojović
On Thu, Sep 1, 2011 at 12:09 AM, David Johnston pol...@yahoo.com wrote:


 *From:* pgsql-sql-ow...@postgresql.org [mailto:
 pgsql-sql-ow...@postgresql.org] *On Behalf Of *Viktor Bojovic
 *Sent:* Wednesday, August 31, 2011 5:27 PM
 *To:* pgsql-sql@postgresql.org; pgsql-ad...@postgresql.org
 *Subject:* [SQL] function based index problem


 Hi,
 on table entry (17M records) there is one index:

 CREATE INDEX ndxlen
   ON uniprot_frekvencije.entry
   USING btree
   (length(sequence::text));

 When using >= in a search which returns only two records, the query runs much
 (a hundred times) slower. I don't know why it doesn't use an index scan, so I just
 wanted to ask how I can modify the query to use that index. Explain plans
 are pasted below.

 bioinf= explain select * from entry where length(sequence)=36805;
  QUERY PLAN


 
  Bitmap Heap Scan on entry  (cost=1523.54..294886.26 rows=81226 width=1382)
Recheck Cond: (length((sequence)::text) = 36805)
 ->  Bitmap Index Scan on ndxlen  (cost=0.00..1503.23 rows=81226 width=0)
  Index Cond: (length((sequence)::text) = 36805)
 (4 rows)

 bioinf= explain select * from entry where length(sequence)>=36805;
  QUERY PLAN
 
  Seq Scan on entry  (cost=0.00..5400995.21 rows=5415049 width=1382)
   Filter: (length((sequence)::text) >= 36805)
 (2 rows)

 Thanx in advance
 --
 ---
 Viktor Bojović
 ---
 Wherever I go, Murphy goes with me


 Some observations/suggestions:


 Please do not Cross-Post

 You have not provided your PostgreSQL version


 You state the “>=” query only returns 2 rows but the plan expects to return
 5.4 MILLION – with that many results Sequential Scan is going to be faster
 than an Index

 Either you have not run “ANALYZE” or you have more data than you think
 matching your criteria.  Try “EXPLAIN ANALYZE” to actually run the query and
 see what you get.


 It is likely that a simple ANALYZE on the table will solve your problem
 (ALWAYS RUN ANALYZE BEFORE POSTING QUESTIONS LIKE THIS); in the unlikely
 event it does not please post the “EXPLAIN ANALYZE” results so we can see
 exactly how many records each query returned.


 David J.



It works now after analyze entry was executed. thanx a lot.
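
For anyone finding this in the archives, the fix boils down to running:

ANALYZE entry;
EXPLAIN ANALYZE SELECT * FROM entry WHERE length(sequence) >= 36805;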


-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] which is better: using OR clauses or UNION?

2011-08-16 Thread Viktor Bojović
)
  ->  Bitmap Index Scan on
 in_dba_data_base_identifier  (cost=0.00..32.64 rows=964 width=0) (actual
 time=0.036..0.036 rows=0 loops=1)
Index Cond: (lower(identifier) ~=~
 'bugs001884677'::character varying)
->  Bitmap Heap Scan on dba_data_base a
  (cost=32.88..3786.62 rows=964 width=62) (actual time=0.010..0.010 rows=0
 loops=1)
  Filter: (lower(identifier) ~~
 'bugs001884678'::text)
  ->  Bitmap Index Scan on
 in_dba_data_base_identifier  (cost=0.00..32.64 rows=964 width=0) (actual
 time=0.008..0.008 rows=0 loops=1)
Index Cond: (lower(identifier) ~=~
 'bugs001884678'::character varying)
->  Bitmap Heap Scan on dba_data_base a
  (cost=32.88..3786.62 rows=964 width=62) (actual time=0.010..0.010 rows=0
 loops=1)
  Filter: (lower(identifier) ~~
 'bugs001884679'::text)
  ->  Bitmap Index Scan on
 in_dba_data_base_identifier  (cost=0.00..32.64 rows=964 width=0) (actual
 time=0.008..0.008 rows=0 loops=1)
Index Cond: (lower(identifier) ~=~
 'bugs001884679'::character varying)
->  Bitmap Heap Scan on dba_data_base a
  (cost=32.88..3786.62 rows=964 width=62) (actual time=0.255..1.676 rows=318
 loops=1)
  Filter: (lower(identifier) ~~ 'sptigr4-2210
 (6f24)'::text)
  ->  Bitmap Index Scan on
 in_dba_data_base_identifier  (cost=0.00..32.64 rows=964 width=0) (actual
 time=0.178..0.178 rows=318 loops=1)
Index Cond: (lower(identifier) ~=~
 'sptigr4-2210 (6f24)'::character varying)
 Total runtime: 4.174 ms

 Also which should scale better if I add more strings to match? would there
 be any better design patterns for this problem?

 Thanks for any help

 Adam

 select version();
   version
 
 PostgreSQL 8.2.12 on i686-pc-linux-gnu, compiled by GCC 2.95.4



 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] overload

2011-07-11 Thread Viktor Bojović
;
   
   
   
--
---
Viktor Bojović
---
Wherever I go, Murphy goes with me
  
 
 
 
  --
  ---
  Viktor Bojović
  ---
  Wherever I go, Murphy goes with me




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] overload

2011-07-11 Thread Viktor Bojović
I have found a cursor example in plperl, and now it works fine. The code is
below.

CREATE OR REPLACE FUNCTION uniprot_frekvencije.pattern_counter2(patLength
integer)
  RETURNS character varying AS
$BODY$
my $sth = spi_query("select sequence from entry");
my $patLen = $_[0];
my $patt = '';
my $row;

my %patterns=();
while (defined ($row = spi_fetchrow($sth))) {
my $seq = $row->{sequence};
for (my $x = 0;$x<=length($seq) - $patLen;$x++){
$patt=substr($seq,$x,$patLen);
if (! defined $patterns{$patt}) {
$patterns{$patt}=1;
}else{
$patterns{$patt}++;
}
}
}
foreach $patt (keys %patterns){
my $sql="insert into patterns values('".$patt."',".$patterns{$patt}.")";
spi_exec_query($sql);
}
return '';
$BODY$
  LANGUAGE plperl VOLATILE
  COST 100;
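
It is then called like any scalar function (assuming the two-column patterns table used in the insert above already exists), e.g.:

SELECT uniprot_frekvencije.pattern_counter2(3);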

On Mon, Jul 11, 2011 at 4:42 PM, Viktor Bojović viktor.bojo...@gmail.com wrote:

 Thanx Wayne and Pavel,

 I will try to make this work with cursors. Theoretically there can be more
 than 4 billion records in the result, but only 58mil were returned, which took 3 days.
 Possibly I will have to create a temporary table and store results from
 %patterns into it after each 500k records, and group them at the end.

 I didn't manage to find example where plpgsql uses hash arrays or where
 plperl uses cursors.
 Any of these methods links/examples would be helpful to me.

 Thanx in advance


 On Fri, Jul 8, 2011 at 9:49 PM, lists-pg...@useunix.net wrote:

 Hi Viktor,

 I'm not sure what your requirements are in terms of performance and
 stability of the your result set. See Pavel's response. A cursor issues
 a single query and renders a single result set. The result set is
 static, the cursor just gives you finer control/performance when
 retrieving rows from the set. Using a transaction will also render better
 performance when %patterns contains a large number of keys/values,
 insert all of them in one transaction, the same one you opened for the
 cursor.

 Your method issues many queries and will take longer for each successive
 query.  And the number of queries will increase as table size increases.
 It could also return duplicate rows and/or missed rows due to other
 transactions completing between your select query.

 If you can tolerate the above issues then so be it, if not you really
 should look at cursors.

 Also there might be a bug in your code if you delete entries from
 'entry'. You're depending on $rowCountAll to remain static, which is not the
 case if you ever delete entries. You can fix this by skipping the
 select count(1) step and just breaking your loop when fewer than
 $windowSize entries are returned from the "select sequence ..." query.

 Wayne


 On Fri, Jul 08, 2011 at 08:55:36PM +0200, Viktor Bojović wrote:
  Thanx Wayne,
  at the end i did it that way and it works.
  The code is below.
 
  CREATE FUNCTION pattern_counter1(patLength integer) RETURNS character
  varying
  LANGUAGE plperl
  AS $_X$
  my $rvCnt = spi_exec_query("select count(1) as cnt from entry");
  #my $rowCountAll = $rvCnt->{processed};
  my $row = $rvCnt->{rows}[0];
  my $rowCountAll = $row->{cnt};
  my $windowSize = 50;
  my %patterns=();
  for (my $p=0;$p<$rowCountAll;$p+=$windowSize){
  my $sql="select sequence from entry limit $windowSize offset $p";
 
  my $rv = spi_exec_query($sql);
  my $rowCount = $rv->{processed};
  my $patLen = $_[0];
  my $patt = '';
 
  foreach my $rn (0 .. $rowCount -1){
  my $row = $rv->{rows}[$rn];
  my $seq = $row->{sequence};
  for (my $x = 1;$x<=length($seq) - $patLen;$x++){
  $patt=substr($seq,$x,$patLen);
  if (! defined $patterns{$patt}) {
  $patterns{$patt}=1;
  }else{
  $patterns{$patt}++;
  }
  }
  }
  }
 
  foreach $patt (keys %patterns){
  my $sql="insert into patterns values('".$patt."',".$patterns{$patt}.")";
  spi_exec_query($sql);
  }
  return $tmp;
  $_X$;
 
 
  On Fri, Jul 8, 2011 at 8:50 PM, lists-pg...@useunix.net wrote:
 
   I have the same situation with large tables.  Take a look at using a
   cursor to fetch several thousand rows at a time.  I presume what's
   happening is that perl is attempting to create a massive list/array in
   memory.  If you use a cursor the list should only contain X number of
   rows where X is the number specified at each fetch execution.  You'll
   need to define the cursor inside a transaction block.
  
   - begin transaction
   - define the cursor
   - fetch rows from cursor
   - while row count from previous step > 0, execute previous step
   - terminate transaction
  
   Or you could use plpgsql instead of plperl, FOR loops over result sets
 in
   plpgsql implicitly use cursors... it's just a little less code.
  
   Hope that helps,
   Wayne
  
   On Tue, Jul 05, 2011 at 10:29:03PM +0200, Viktor Bojović wrote:
Hi,
while reading 20GB table through PL/PERL function , it constantly
 grows
   in
RAM.
I wanted to ask you which is the best way to read table inside that
function without such memory consumption.
Thanks

Re: [SQL] overload

2011-07-08 Thread Viktor Bojović
Thanx Wayne,
at the end i did it that way and it works.
The code is below.

CREATE FUNCTION pattern_counter1(patLength integer) RETURNS character
varying
LANGUAGE plperl
AS $_X$
my $rvCnt = spi_exec_query("select count(1) as cnt from entry");
#my $rowCountAll = $rvCnt->{processed};
my $row = $rvCnt->{rows}[0];
my $rowCountAll = $row->{cnt};
my $windowSize = 50;
my %patterns=();
for (my $p=0;$p<$rowCountAll;$p+=$windowSize){
my $sql="select sequence from entry limit $windowSize offset $p";

my $rv = spi_exec_query($sql);
my $rowCount = $rv->{processed};
my $patLen = $_[0];
my $patt = '';

foreach my $rn (0 .. $rowCount -1){
my $row = $rv->{rows}[$rn];
my $seq = $row->{sequence};
for (my $x = 1;$x<=length($seq) - $patLen;$x++){
$patt=substr($seq,$x,$patLen);
if (! defined $patterns{$patt}) {
$patterns{$patt}=1;
}else{
$patterns{$patt}++;
}
}
}
}

foreach $patt (keys %patterns){
my $sql="insert into patterns values('".$patt."',".$patterns{$patt}.")";
spi_exec_query($sql);
}
return $tmp;
$_X$;


On Fri, Jul 8, 2011 at 8:50 PM, lists-pg...@useunix.net wrote:

 I have the same situation with large tables.  Take a look at using a
 cursor to fetch several thousand rows at a time.  I presume what's
 happening is that perl is attempting to create a massive list/array in
 memory.  If you use a cursor the list should only contain X number of
 rows where X is the number specified at each fetch execution.  You'll
 need to define the cursor inside a transaction block.

 - begin transaction
 - define the cursor
 - fetch rows from cursor
 - while row count from previous step > 0, execute previous step
 - terminate transaction

 Or you could use plpgsql instead of plperl, FOR loops over result sets in
 plpgsql implicitly use cursors... it's just a little less code.

 Hope that helps,
 Wayne

 On Tue, Jul 05, 2011 at 10:29:03PM +0200, Viktor Bojović wrote:
  Hi,
  while reading 20GB table through PL/PERL function , it constantly grows
 in
  RAM.
  I wanted to ask you which is the best way to read table inside that
  function without such memory consumption.
  Thanks in advance
 
  Code is here:
 
  CREATE  FUNCTION pattern_counter(patLength integer)
RETURNS varchar AS
  $BODY$
  my $rv = spi_exec_query("select sequence from entry");
  my $rowCount = $rv->{processed};
  my $patLen = $_[0];
  my $patt = '';
  my %patterns=();
  foreach my $rn (0 .. $rowCount -1){
  my $row = $rv->{rows}[$rn];
  my $seq = $row->{sequence};
  for (my $x = 1;$x<=length($seq) - $patLen;$x++){
  $patt=substr($seq,$x,$patLen);
  if (! defined $patterns{$patt}) {
  $patterns{$patt}=1;
  }else{
  $patterns{$patt}++;
  }
  }
  }
  foreach $patt (keys %patterns){
  my $sql="insert into patterns
 values('".$patt."',".$patterns{$patt}.")";
  spi_exec_query($sql);
  }
  return '';
  $BODY$
LANGUAGE plperl VOLATILE
COST 100;
 
 
 
  --
  ---
  Viktor Bojović
  ---
  Wherever I go, Murphy goes with me




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


[SQL] overload

2011-07-05 Thread Viktor Bojović
Hi,
while reading a 20GB table through a PL/Perl function, it constantly grows in
RAM.
I wanted to ask you which is the best way to read a table inside such a
function without that memory consumption.
Thanks in advance

Code is here:

CREATE  FUNCTION pattern_counter(patLength integer)
  RETURNS varchar AS
$BODY$
my $rv = spi_exec_query("select sequence from entry");
my $rowCount = $rv->{processed};
my $patLen = $_[0];
my $patt = '';
my %patterns=();
foreach my $rn (0 .. $rowCount -1){
my $row = $rv->{rows}[$rn];
my $seq = $row->{sequence};
for (my $x = 1;$x<=length($seq) - $patLen;$x++){
$patt=substr($seq,$x,$patLen);
if (! defined $patterns{$patt}) {
$patterns{$patt}=1;
}else{
$patterns{$patt}++;
}
}
}
foreach $patt (keys %patterns){
my $sql="insert into patterns values('".$patt."',".$patterns{$patt}.")";
spi_exec_query($sql);
}
return '';
$BODY$
  LANGUAGE plperl VOLATILE
  COST 100;



-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] Sqlite to postgres database conversion.swedish chararacter issue

2011-06-16 Thread Viktor Bojović
I have never used ogr2ogr or done such a thing, but you might try to export from
sqlite by dumping or selecting data to CSV format (.separator command) and
then make conversions using the tr command in the shell for conversion of unwanted
characters, or use iconv for conversion to another encoding.
After the conversion is done you can insert it.

On Tue, Jun 14, 2011 at 7:37 PM, sara vanan saravananlatit...@gmail.com wrote:

 Hi,
 I am new to PostgreSQL. I need to convert an sqlite database to a postgres
 database. I have nearly 15 tables in my sqlite database, of which 3 tables have
 Swedish characters.

 I have used ogr2ogr to convert the sqlite database to postgres. I was able to
 convert all the tables except the table which had the Swedish characters. I
 have used utf-8 encoding.

 I am searching for a sql insert which I can use to convert the table which
 has Swedish characters.

 any help regarding this would be appriciated.

 Thanks,
 Sara





-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] PLpgSQL variables persistance between several functions calls

2011-05-03 Thread Viktor Bojović
The best way is to put all calls into one function and store the values in
variables.

If that is not suitable you can try an approach (which I'm not sure anyone
uses), which is to store the values in sequences if the value type is integer;
for other formats you will have to do conversions.
I'm not sure whether sequence values are stored in the database or in memory,
but that way your results will be visible to all sessions.
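
A minimal sketch of the sequence trick (integer values only; the names are made up):

CREATE SEQUENCE shared_value;
SELECT setval('shared_value', 42);       -- "write" the value
SELECT last_value FROM shared_value;     -- "read" it back, from any session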


On Mon, May 2, 2011 at 11:43 PM, Charles N. Charotti ccharo...@yahoo.com wrote:

 Hello everybody !

 I want to know if I could share PLpgSQL variables between different
 functions and within different calls just using memory (not tables or other
 methods).

 Is it really possible?

 Thanks in advance,

 Chuck




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


[SQL] function timeout

2011-05-03 Thread Viktor Bojović
I'm writing a plsh function which will execute some shell commands and return
the result as varchar to the database.
The problem is that some commands will possibly take too long or will
never stop, so I wanted to ask
whether a function can somehow be auto-killed if it doesn't finish within a time defined
for that function.


-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] is there a refactor

2011-04-05 Thread Viktor Bojović
Hi John,

everything you need is stored in these tables:
http://www.postgresql.org/docs/7.4/static/catalog-pg-attribute.html
http://www.postgresql.org/docs/7.4/static/catalog-pg-class.html
http://www.postgresql.org/docs/7.4/static/catalog-pg-namespace.html
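
A sketch that joins those catalogs to answer exactly that question ('my_field' is just a placeholder column name):

SELECT n.nspname AS schema, c.relname AS table_name
FROM pg_attribute a
JOIN pg_class c     ON c.oid = a.attrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE a.attname = 'my_field'
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND c.relkind = 'r';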


On Tue, Apr 5, 2011 at 4:27 PM, John Fabiani jo...@jfcomputer.com wrote:

 Hi,
 I would like to have a simple way to retrieve information for a field name.
 By that I mean have some SQL select that will return all the tables a field
 name exists within a database.  I did not find anything with Google but of
 course google depends on the search string.

 Thanks in advance,
 Johnf

 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] pg_attributte, data types

2011-03-24 Thread Viktor Bojović
On Thu, Mar 24, 2011 at 12:56 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Viktor Bojović viktor.bojo...@gmail.com writes:
  While listing data types I have noticed that some types have an underscore
  prefix, but I don't know why that is, so I'm asking if someone can tell me.
  These are type examples:

  _char
  _float4
  _int2
  _oid
  _regtype
  _text
  _varchar

 Those are array types.  The normal convention is that foo[] is named
 _foo under the surface.

regards, tom lane
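
For example, the pairing is easy to confirm from the catalogs:

select typname from pg_type where oid = 'integer[]'::regtype;
-- returns _int4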


thanx

-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


[SQL] pg_attributte, data types

2011-03-23 Thread Viktor Bojović
While listing data types I have noticed that some types have an underscore
prefix, but I don't know why that is, so I'm asking if someone can tell me.
These are type examples:

_char
_float4
_int2
_oid
_regtype
_text
_varchar


select distinct typname
from
(
SELECT attname AS name,
attnum ,
typname,
atttypmod - 4 as length,
attnotnull AS notnull,
atthasdef AS def,
c.oid AS tbloid,
d.adsrc AS defval,
ds.description,
attndims AS dimnum,
attstattarget,
attislocal,
attinhcount

FROM pg_attribute a
INNER JOIN pg_class c ON a.attrelid = c.oid
INNER JOIN pg_type t ON a.atttypid = t.oid
LEFT OUTER JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = a.attnum
LEFT OUTER JOIN pg_description ds ON ds.objoid = c.oid AND ds.objsubid =
a.attnum
INNER JOIN pg_namespace n ON t.typnamespace = n.oid
WHERE attnum > 0 AND
attisdropped <> 't'
ORDER BY a.attnum
) as foo


-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] How to workaround DROP CONSTRAINT [ IF EXISTS ] in Postgres version 8.1?

2011-02-04 Thread Viktor Bojović
Use a function which searches through these tables:

-pg_constraint
-pg_trigger
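
A sketch of such a function, usable on 8.1 (the table and constraint names in the call are hypothetical):

CREATE OR REPLACE FUNCTION drop_constraint_if_exists(tbl text, con text)
RETURNS void AS $$
BEGIN
    IF EXISTS (SELECT 1
               FROM pg_constraint c
               JOIN pg_class r ON r.oid = c.conrelid
               WHERE c.conname = con AND r.relname = tbl) THEN
        EXECUTE 'ALTER TABLE ' || tbl || ' DROP CONSTRAINT ' || con;
    END IF;
END;
$$ LANGUAGE plpgsql;

SELECT drop_constraint_if_exists('mytable', 'mytable_fk');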

On Wed, Feb 2, 2011 at 11:21 PM, creationw
mr.creation.regis...@gmail.com wrote:


 Hello,


 I found that DROP CONSTRAINT [ IF EXISTS ] is available in Postgres 9.1
 http://developer.postgresql.org/pgdocs/postgres/sql-altertable.html


 But it is not available in Postgres 8.1
 http://www.postgresql.org/docs/8.1/interactive/sql-altertable.html


 Is there a way to achieve this on Postgres 8.1?

 Thanks a lot
 --
 View this message in context:
 http://postgresql.1045698.n5.nabble.com/How-to-workaround-DROP-CONSTRAINT-IF-EXISTS-in-Postgres-version-8-1-tp3368570p3368570.html
 Sent from the PostgreSQL - sql mailing list archive at Nabble.com.

 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] aggregation of setof

2011-01-31 Thread Viktor Bojović
Maybe custom types can be helpful; I use them when I have too many variables
or arrays.
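
As an illustration of the sub-select route Andreas mentions at the end of the quoted mail (array_agg is built in from 8.4, so no custom array_accum is needed):

SELECT array_agg(m[1])
FROM (SELECT regexp_matches('foobarbequebazilbarfbonk',
                            '(b[^b]+)(b[^b]+)', 'g') AS m) AS s;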

On Mon, Jan 31, 2011 at 9:52 AM, Andreas Gaab a.g...@scanlab.de wrote:

 Functions apparently cannot take setof arguments.



 Postgres 8.4:



 CREATE OR REPLACE FUNCTION testtable(IN setof anyarray)

   RETURNS anyarray AS

 $BODY$

 SELECT $1 LIMIT 1;

 $BODY$

   LANGUAGE 'sql' STABLE;



 ->

 ERROR:  functions cannot accept set arguments







 *From:* Viktor Bojović [mailto:viktor.bojo...@gmail.com]
 *Sent:* Saturday, 29 January 2011 09:28
 *To:* Andreas Gaab
 *Subject:* Re: [SQL] aggregation of setof



 I have never used that type, but maybe you can try this:

 -create a function which returns text[] and takes setof text as an argument (if
 possible)

 -reach every text[] in the set of text[] using an array index

 -return values using RETURN NEXT for each text in the text[] which is in the set
 of text[]



 On Fri, Jan 28, 2011 at 12:42 PM, Andreas Gaab a.g...@scanlab.de wrote:

 Hi all,



 I would like to write a query, which aggregates the results of
 regexp_matches(). The problem is that regexp_matches returns setof text[]
 as documented even if I discard the global flag (
 http://www.postgresql.org/docs/8.4/static/functions-matching.html#FUNCTIONS-POSIX-REGEXP).
  Thus resulting in an error when I try to aggregate the result:



 “

 SELECT array_accum(

 regexp_matches('foobarbequebazilbarfbonk', '(b[^b]+)(b[^b]+)')

 )

 ---

 ERROR:  set-valued function called in context that cannot accept a set

 ** Error **

 ERROR: set-valued function called in context that cannot accept a set

 SQL Status:0A000

 “



 Can I convert a 'setof text[]' to a 'text[]'?



 Alternatively I could use a sub-select, but I am curious if there are other
 solutions around.



 Regards,

 Andreas



 ___



 SCANLAB AG

 Dr. Andreas Simon Gaab

 Entwicklung • R  D



 Siemensstr. 2a • 82178 Puchheim • Germany

 Tel. +49 (89) 800 746-513 • Fax +49 (89) 800 746-199

 mailto:a.g...@scanlab.de a.g...@scanlab.de • www.scanlab.de



 Amtsgericht München: HRB 124707 • USt-IdNr.: DE 129 456 351

 Vorstand: Georg Hofner (Sprecher), Christian Huttenloher, Norbert Petschik

 Aufsichtsrat (Vorsitz): Dr. Hans J. Langer

 ___






 --
 ---
 Viktor Bojović
 ---
 Wherever I go, Murphy goes with me




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] Automating PostgreSql table partition using triggers

2011-01-27 Thread Viktor Bojović
When creating the dynamic query, try to store it completely as a string, because you
will not be able to use a variable table name directly (I haven't been doing that for a long
time, so possibly I am wrong).
To EXECUTE the command you have to pass the query as a parameter.
It would look roughly like this:

DECLARE sql varchar;
Field2 := NEW.testing_name;
Field1 := NEW.testing_id;
sql := 'insert into tableName'||id||' (testing_id,testing_name) values
('||Field1||','||quote_literal(Field2)||')';
EXECUTE sql;
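
A fuller sketch of the same idea, written against the tbltesting layout from Amitabh's mail below (only a sketch; NULLs in testing_name and quoting of the table name are not handled):

CREATE OR REPLACE FUNCTION partition_insert_trigger()
RETURNS trigger AS $$
DECLARE
    sql text;
BEGIN
    sql := 'INSERT INTO tbltesting' || NEW.testing_id
        || ' (testing_id, testing_name) VALUES ('
        || NEW.testing_id || ', '
        || quote_literal(NEW.testing_name) || ')';
    EXECUTE sql;
    RETURN NULL;   -- the row has already been routed to the child table
END;
$$ LANGUAGE plpgsql;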

On Thu, Jan 27, 2011 at 6:50 PM, Amitabh Kant amitabhk...@gmail.com wrote:

 Hi

 I am trying to write a function which is being called from a trigger used
 for partitioning a large table. The partitioning is to happen based on an
 integer field (testing_id). A simplified structure of what I am trying to do
 is written below.

 Create Table tbltesting(
  testing_id int not null,
  testing_name character varying(255));

 Create table tbltesting1(check(testing_id = 1)) inherits(tbltesting);
 Create table tbltesting2(check(testing_id = 2)) inherits(tbltesting);

 CREATE OR REPLACE FUNCTION partition_insert_trigger()
 RETURNS TRIGGER AS $$
 DECLARE id integer ;
 BEGIN
 id := NEW.testing_id;

 INSERT INTO tbltesting'||id||' VALUES (NEW.*);   //Problem line, not
 sure what syntax to use here

 RETURN NULL;
 END;
 $$
 LANGUAGE plpgsql;


 CREATE TRIGGER partition_trigger
 BEFORE INSERT ON tbltesting
 FOR EACH ROW EXECUTE PROCEDURE partition_insert_trigger();

 Creating tables or child tables is not a problem and the trigger works fine
 if the function has static definitions. What I am trying to achieve is use
 the new testing_id to create a table name for use in the insert statement.
 If I am able to use the variable in the table name, I would not have to
 re-declare the function each time with modified conditions for each separate
 testing_id.


 With regards

 Amitabh




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] failing to connect to postgresql with php

2011-01-17 Thread Viktor Bojović
On Mon, Jan 17, 2011 at 9:42 AM, victor katemana developervick...@gmail.com
 wrote:

 I have installed the postgres 8.4 server on ubuntu 10.04, clearly following all
 procedures. What is disappointing me is that I can't establish a connection to the
 database server when I am configuring postgres with php. Whenever I use a
 postgres database function I don't even receive an error message; the server
 just remains quiet. Moreover, when I execute the command

 #psql template1 on my terminal I receive this error message

 psql: could not connect to server: No such file or directory
 Is the server running locally and accepting
 connections on Unix domain socket /var/run/postgresql/.s.PGSQL.5432?
  can someone help me out please


If it is not started, start it using:
/etc/init.d/postgresql-8.3 start



-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] data import

2010-12-28 Thread Viktor Bojović
Thanx Chris.
Also i want to thank Achilleas and Pavel for help

On Tue, Dec 21, 2010 at 5:18 PM, Chris Ruprecht ch...@ruprecht.org wrote:

 If your file is TAB delimited, you can simply do:

 \copy tablename from 'some file'

 To get details in psql, do:
 psql mydatabase
 and at the prompt:
 \h copy



 On Dec 21, 2010, at 06:34 , Viktor Bojović wrote:

  Hi,
 
  can anyone recommend me a windows and linux free tools for importing data
 into postgre.
  Source files are CSV or excel.
 
  Thanx in advance
  --
  ---
  Viktor Bojović
  ---
  Wherever I go, Murphy goes with me


 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


[SQL] data import

2010-12-21 Thread Viktor Bojović
Hi,

Can anyone recommend free Windows and Linux tools for importing data
into postgres?
Source files are CSV or Excel.

Thanx in advance
-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] How to convert string to integer

2010-12-15 Thread Viktor Bojović
On Wed, Dec 15, 2010 at 11:23 AM, venkat ven.tammin...@gmail.com wrote:

 Dear All,

   How do I convert a string to an int?

 select SUM(pan_1) from customers1 where name='101'

 When I run the above query I am getting: function sum(character varying)
 does not exist.

 Please anyone can guide me..

 Thanks



select SUM(pan_1::integer) from customers1 where name='101'

You can also use the conversion functions described here:
http://www.postgresql.org/docs/8.2/static/functions-formatting.html

-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


[SQL] constraint with check

2010-12-13 Thread Viktor Bojović
Hi,
I'm trying to create a foreign key constraint which checks not only the
existence of the key in the foreign table, but also whether the node field has
value=true,
but I don't know how to do that, so I'm asking for help.

alter table wg.nc
add CONSTRAINT nc_fk FOREIGN KEY (Id1)
  REFERENCES wg.entities(Id),
check (node=true)

thanx in advance

-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] large xml database

2010-10-31 Thread Viktor Bojović
On Sun, Oct 31, 2010 at 7:08 AM, Lutz Steinborn l.steinb...@4c-ag.de wrote:

 On Sat, 30 Oct 2010 23:49:29 +0200
 Viktor Bojović viktor.bojo...@gmail.com wrote:

 
  many tries have failed because 8GB of ram and 10gb of swap were not
 enough.
  also sometimes i get that more than 2^32 operations were performed, and
  functions stopped to work.
 
 we have a similar problem and we use the Amara xml Toolkit for python. To
 avoid
 the big memory consumption use pushbind. A 30G bme catalog file takes a
 maximum
 up to 20min to import. It might be faster because we are preparing complex
 objects with an orm. So the time consumption depends how complex the
 catalog is.
 If you use amara only to perform a conversion from xml to csv the final
 import
 can be done much faster.

 regards

 --
 Lutz

 http://www.4c-gmbh.de


Thanx Lutz, I will try to use that Amara and also I will try to parse it
with SAX.
I have tried twig and some other parsers but they consumed too much RAM.



-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] large xml database

2010-10-31 Thread Viktor Bojović
On Sun, Oct 31, 2010 at 2:26 AM, James Cloos cl...@jhcloos.com wrote:

  VB == Viktor Bojović viktor.bojo...@gmail.com writes:

 VB i have very big XML documment which is larger than 50GB and want to
 VB import it into databse, and transform it to relational schema.

 Were I doing such a conversion, I'd use perl to convert the xml into
 something which COPY can grok.  Any other language, script or compiled,
 would work just as well.  The goal is to avoid having to slurp the whole
 xml structure into memory.

 -JimC
 --
 James Cloos cl...@jhcloos.com OpenPGP: 1024D/ED7DAEA6


The insertion into the database is not a very big problem.
I insert it as XML docs, or as varchar lines, or as XML docs in varchar
format. Usually I use a transaction and commit after each block of 1000 inserts and
it goes very fast, so the insertion is over after a few hours.
But the problem occurs when I want to transform it inside the database from
XML (varchar or XML format) into tables by parsing.
That processing takes too much time in the database no matter whether it is stored as
varchar lines, varchar nodes or the XML data type.

-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] large xml database

2010-10-31 Thread Viktor Bojović
On Sun, Oct 31, 2010 at 9:42 PM, Rob Sargent robjsarg...@gmail.com wrote:




 Viktor Bojović wrote:



 On Sun, Oct 31, 2010 at 2:26 AM, James Cloos cl...@jhcloos.com wrote:

 VB == Viktor Bojović viktor.bojo...@gmail.com writes:

VB i have very big XML documment which is larger than 50GB and
want to
VB import it into databse, and transform it to relational schema.

Were I doing such a conversion, I'd use perl to convert the xml into
something which COPY can grok. Any other language, script or compiled,
would work just as well. The goal is to avoid having to slurp the
whole
xml structure into memory.

-JimC
--
 James Cloos cl...@jhcloos.com

OpenPGP: 1024D/ED7DAEA6


 The insertion into dabase is not very big problem.
 I insert it as XML docs, or as varchar lines or as XML docs in varchar
 format. Usually i use transaction and commit after block of 1000 inserts and
 it goes very fast. so insertion is over after few hours.
 But the problem occurs when i want to transform it inside database from
 XML(varchar or XML format) into tables by parsing.
 That processing takes too much time in database no matter if it is stored
 as varchar lines, varchar nodes or XML data type.

 --
 ---
 Viktor Bojović

 ---
 Wherever I go, Murphy goes with me


 Are you saying you first load the xml into the database, then parse that
 xml into instances of objects (rows in tables)?


Yes. That way takes less RAM than using Twig or Simple XML, so I tried using
the postgres xml functions or regexes.



-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


11.xml.gz
Description: GNU Zip compressed data

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] large xml database

2010-10-31 Thread Viktor Bojović
On Sun, Oct 31, 2010 at 10:26 PM, Rob Sargent robjsarg...@gmail.com wrote:



 Viktor Bojović wrote:



 On Sun, Oct 31, 2010 at 9:42 PM, Rob Sargent robjsarg...@gmail.com wrote:




    Viktor Bojović wrote:



On Sun, Oct 31, 2010 at 2:26 AM, James Cloos cl...@jhcloos.com wrote:

VB == Viktor Bojović viktor.bojo...@gmail.com writes:

   VB i have very big XML documment which is larger than 50GB and
   want to
   VB import it into databse, and transform it to relational
schema.

   Were I doing such a conversion, I'd use perl to convert the
xml into
   something which COPY can grok. Any other language, script
or compiled,
   would work just as well. The goal is to avoid having to
slurp the
   whole
   xml structure into memory.

   -JimC
   --
   James Cloos cl...@jhcloos.com


   OpenPGP: 1024D/ED7DAEA6


The insertion into dabase is not very big problem.
I insert it as XML docs, or as varchar lines or as XML docs in
varchar format. Usually i use transaction and commit after
block of 1000 inserts and it goes very fast. so insertion is
over after few hours.
But the problem occurs when i want to transform it inside
database from XML(varchar or XML format) into tables by parsing.
That processing takes too much time in database no matter if
it is stored as varchar lines, varchar nodes or XML data type.

-- ---
Viktor Bojović

---
Wherever I go, Murphy goes with me


Are you saying you first load the xml into the database, then
parse that xml into instance of objects (rows in tables)?


 Yes. That way takes less RAM than using Twig or Simple XML, so I tried
 using the postgres xml functions or regexes.



 --
 ---
 Viktor Bojović
 ---
 Wherever I go, Murphy goes with me

 Is the entire load a set of <entry> elements as your example contains?
  This I believe would parse nicely into a tidy but non-trivial schema
 directly without the middle-man of having xml in db (unless of course you
 prefer xpath to sql ;) )

 The single most significant caveat I would have for you is "Beware:
 Biologists involved." Inconsistency (at least overloaded concepts) is almost
 assured :).  EMBL too is suspect imho, but I've been out of that arena for a
 while.


Unfortunately some elements are always missing, so I had to create a script
which scanned the whole documents of swissprot and trembl, and stored the result into
a file to use as a template to build a code generator once I find the best
parser for this purpose. To parse all elements in one day I would need a
parser which is capable of parsing at least 128 entry blocks per second @
2.4GHz. You are right about inconsistency; I constantly have problems with
PDB files.

btw.
you have mentioned "This I believe would parse nicely into a tidy but
non-trivial schema directly", does it mean that postgres has support for
restoring the database schema from xml files?

-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me

entry[]->sequence[]
entry[]->feature[]
entry[]->reference[]
entry[]->feature[]->location[]->position[]->status
entry[]->dbReference[]->property[]
entry[]->reference[]->citation[]->last
entry[]->comment[]->text[]->status
entry[]->geneLocation[]->type
entry[]->comment[]->experiments[]
entry[]->comment[]->conflict[]->sequence[]
entry[]->comment[]->subcellularLocation[]->orientation[]->status
entry[]->protein[]->domain[]->alternativeName[]->fullName[]
entry[]->evidence[]->category
entry[]->feature[]->location[]->begin[]->status
entry[]->reference[]->citation[]->volume
entry[]->feature[]->evidence
entry[]->dbReference[]->type
entry[]->reference[]->citation[]->authorList[]->consortium[]
entry[]->version
entry[]->comment[]->location[]->sequence
entry[]->sequence[]->version
entry[]->proteinExistence[]
entry[]->reference[]->scope[]
entry[]->reference[]->source[]->plasmid[]
entry[]->reference[]->citation[]->dbReference[]
entry[]->comment[]->locationType
entry[]->protein[]->domain[]
entry[]->reference[]->citation[]->publisher
entry[]->gene[]->name[]
entry[]->protein[]->domain[]->alternativeName[]->ref
entry[]->comment[]->conflict[]
entry[]->evidence[]
entry[]->sequence[]->modified
entry[]->comment[]->conflict[]->sequence[]->id
entry[]->keyword[]->id
entry[]->comment[]->redoxPotential[]->evidence
entry[]->comment[]->link[]
entry[]->feature[]->location[]->position[]
entry[]->reference

[SQL] large xml database

2010-10-30 Thread Viktor Bojović
Hi,
I have a very big XML document which is larger than 50GB and want to import
it into the database, and transform it to a relational schema.
When splitting this document into smaller independent xml documents I get
~11.1mil XML documents.
I have spent lots of time trying to find the fastest way to transform all this
data, but every time I give up because it takes too much time. Sometimes it would take more
than a month if not stopped.
I have tried to insert each line as varchar into the database and parse it using
plperl regexes.
I have also tried to store every document as XML and parse it, but it is
also too slow.
I have tried to store every document as varchar but it is also slow when
using regexes to get the data.

Many tries have failed because 8GB of RAM and 10GB of swap were not enough.
Also sometimes I get that more than 2^32 operations were performed, and
functions stop working.

I just wanted to ask if someone knows how to speed this up.

thanx in advance

-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] insert into table from list or array ?

2010-10-18 Thread Viktor Bojović
I think that you can try to
create temporary table tmptable(id integer, numbers integer[]);
and
insert into tmptable values (1, array[2, 3, 5, 7, 11, 13, 17, 19, 23, 31, 37,
...]);

On Mon, Oct 18, 2010 at 8:00 PM, Andreas maps...@gmx.net wrote:

  Hi,

 is it possible to insert into a table from list or an array ?

 Suppose there is a set of numbers that might be IDs of tables within the
 DB.
 To work with them I'd need a temporary table that just holds a single
 column with those numbers.
 Something like

 create temporary table tmptable as select id from
 ( 2, 3, 5, 7, 11, 13, 17, 19, 23, 31,
  37, ... );

 would be great.

 I get those numbers as textfile with 10 numbers per line and devided by
 comma+space as in the sample above, though the comma+space is negotiable if
 this were an issue.

 :)

 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] get attribute from XML

2010-10-12 Thread Viktor Bojović
Dear Markus,
it works now. Thanx.

On Tue, Oct 12, 2010 at 2:22 AM, Markus Schatten markus.schat...@foi.hr wrote:

 Dear Viktor,

 'modified' is an attribute node, so you need to use the @ sign to
 acquire its value:

 mschatte=# select xpath( '//entry/@modified', '<entry version="21"
 modified="2009-07-07" dataset="S" created="2009-06-16" />' );
xpath
 --
  {2009-07-07}
 (1 row)

 HTH

 M.S.

 On Tue, Oct 12, 2010 at 1:03 AM, Viktor Bojović
 viktor.bojo...@gmail.com wrote:
  Hi,
  while parsing this type of XML:
  <entry version="21" modified="2009-07-07" dataset="S"
 created="2009-06-16">
  <accession>Q91G55</accession>
 
  i tried this query to get the modified attribute
  select xpath('//entry/modified/text()',xml) from xml_sprot limit 10;
  but it doesn't work.
  So im asking if anyone can help me with that.
  Thanx in advance
  --
  ---
  Viktor Bojović
  ---
  Wherever I go, Murphy goes with me
 

 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


[SQL] get attribute from XML

2010-10-11 Thread Viktor Bojović
Hi,
while parsing this type of XML:
<entry version="21" modified="2009-07-07" dataset="S" created="2009-06-16">
<accession>Q91G55</accession>

i tried this query to get the modified attribute
select xpath('//entry/modified/text()',xml) from xml_sprot limit 10;
but it doesn't work.

So im asking if anyone can help me with that.

Thanx in advance
-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


[SQL] naming arguments in aggregate function

2010-09-05 Thread Viktor Bojović
Hi,
I am trying to name the arguments of an aggregate function, but I don't know how,
so I'm asking if someone can help me solve this.
I'm working with version 8.3.11 of postgres, and the aggregate function is
this:
CREATE AGGREGATE grafika.agg_pov_sphere (numeric, numeric, numeric,
numeric, varchar) (
SFUNC = grafika._tran_pov_sphere,
STYPE = varchar,
INITCOND = ''
);

When I change the header to CREATE AGGREGATE grafika.agg_pov_sphere
(x numeric, y numeric, z numeric, rad numeric, pigment varchar) (
I get a syntax error.

Thanx in advance
-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


[SQL] aggregate function

2010-08-03 Thread Viktor Bojović
I am trying to make an aggregate function out of an existing function which looks like
this:
CREATE OR REPLACE FUNCTION grafika.pov_sphere (x numeric, y numeric, z
numeric, rad numeric, pigment varchar) RETURNS varchar AS
$body$
DECLARE
_pov varchar;
BEGIN
_pov:='sphere {'||x||','||y||','||z||','||rad||' '||pigment ||'}';
return _pov;
END;
$body$
LANGUAGE 'plpgsql'
IMMUTABLE
RETURNS NULL ON NULL INPUT
SECURITY INVOKER
COST 100;


Aggregate should concatenate results of pov_sphere using this function
below.
CREATE OR REPLACE FUNCTION public.concat (varchar, varchar) RETURNS
varchar AS
$body$
DECLARE
t varchar;
BEGIN
IF character_length($1) > 0 THEN
t = $1 || $2;
ELSE
t = $2;
END IF;
RETURN t;
END;
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
COST 100;


I tried to write this part below, but something is wrong (ERROR: function
grafika.pov_sphere(character varying, numeric, numeric, numeric, numeric,
character varying) does not exist) so I wanted to ask if someone knows how
to solve this problem.

CREATE AGGREGATE grafika.agg_pov_sphere (NUMERIC, NUMERIC, NUMERIC,
NUMERIC, VARCHAR) (
SFUNC = grafika.pov_sphere,
STYPE = varchar,
FINALFUNC = public.grp_concat
);

Thanx in advance.
-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] strangest thing happened

2010-07-08 Thread Viktor Bojović
On Wed, Jul 7, 2010 at 9:59 PM, John jo...@jfcomputer.com wrote:

 I am the only developer, DBA etc. for a small project.  Today (yesterday
 everything was perfect) many of the sequence numbers fell behind what is
 the
 actual PK value.   For example the invoice PK sequence current value = 1056
 but the table PK was 1071.  Nobody (other than myself) knows how to
 edit/access the postgres server.  So

 1. Does anyone know how this could have happened?? Other than human
 interaction.

 2. Does anyone have a script to reset the sequences to match the tables?

 Thanks in advance,

 Johnf

 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql



John,

If the insert is performed on a triggered table, and that trigger returns NULL
(doesn't insert), then the sequence will increment but no data will be
inserted. If needed I will send you some code examples where it happened to
me many times.
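
A minimal illustration of that effect (hypothetical names):

CREATE TABLE invoice (id serial PRIMARY KEY, note text);

CREATE FUNCTION swallow_insert() RETURNS trigger AS $$
BEGIN
    RETURN NULL;   -- the row is silently dropped, but the serial default was already evaluated
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER swallow BEFORE INSERT ON invoice
FOR EACH ROW EXECUTE PROCEDURE swallow_insert();

INSERT INTO invoice (note) VALUES ('lost row');   -- inserts nothing
SELECT currval('invoice_id_seq');                 -- but the sequence has still advanced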

Sincerely
-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [SQL] How to Get Column Names from the Table

2010-07-07 Thread Viktor Bojović
\d tableName

On Wed, Jul 7, 2010 at 11:08 AM, venkat ven.tammin...@gmail.com wrote:

 Dear All,

How to get Column Names from Table in PostgreSQL.

 Thanks and Regards,

 Venkat




-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me