Re: [SQL] need some magic with generate_series()

2013-01-22 Thread Filip Rembiałkowski
or even

select m from generate_series( '20121101'::date, '20130101'::date, '1 month'::interval) m;
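For the full question below (one row per project and month, with zero-filled counts), a sketch that combines generate_series() with a LEFT JOIN; table and column names are taken from the post, everything else is untested guesswork:

select p.project,
       to_char(m, 'YYYYMM') as yyyymm,
       count(e.project_id) as event_count
from projects p
join generate_series(date_trunc('month', (select min(startdate) from projects)),
                     date_trunc('month', current_date),
                     '1 month'::interval) as m
  on m >= date_trunc('month', p.startdate)
left join events e
       on e.project_id = p.id
      and date_trunc('month', e.createdate) = m
group by p.id, p.project, m
order by p.project, m;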



On Tue, Jan 22, 2013 at 3:49 PM, jan zimmek jan.zim...@web.de wrote:
 hi andreas,

 this might give you an idea of how to generate a series of dates (or other datatypes):

 select g, (current_date + (g||' month')::interval)::date from generate_series(1,12) g;

 regards
 jan

 On 22.01.2013 at 22:41, Andreas maps...@gmx.net wrote:

 Hi
 I need a series of month numbers like 201212, 201301 (format YYYYMM) to join other
 sources against it.

 I've got a table that describes projects:
 projects ( id INT, project TEXT, startdate DATE )

 and some others that log events
 events( project_id INT, createdate DATE, ...)

 To show some statistics I have to count events and present them as a view with
 the project name and the month as YYYYMM, starting with the startdate of the
 project.

 My problem is that there probably aren't any events in some months, but I still
 need those rows in the output.
 So somehow I need to have a select that generates:

 project 7,201211
 project 7,201212
 project 7,201301

 It'd be utterly cool to get this for every project in the projects table 
 with one select.

 Is there hope?









Re: [SQL] How to convert SQL stored procedure to PostgreSQL function

2012-02-28 Thread Filip Rembiałkowski
On Tue, Feb 28, 2012 at 9:50 AM, Rehan Saleem pk_re...@yahoo.com wrote:
 hi,
 how can I convert this SQL stored procedure to a PostgreSQL function? I shall
 be very thankful to you, as I am new to PostgreSQL and I don't know how to
 handle this kind of stored procedure in PostgreSQL.


Most people handle this with user-defined functions (UDFs) written in
the PL/pgSQL procedural language.

Try to read The Friendly Manual
http://www.postgresql.org/docs/current/static/sql-createfunction.html
http://www.postgresql.org/docs/current/static/plpgsql.html

Don't worry - all Transact-SQL constructs have their equivalent.

Just start rewriting your function and begin asking specific questions
here... People will help.

I would begin with

create or replace function sp_GetUserByID( in_UserId varchar(50), ...)
returns varchar(50)
language plpgsql as $$
declare
...
begin
 ...
 return somevariable;
end;
$$;
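A slightly more complete (but still made-up) variant, just to show where the declarations, the query and the RETURN go; the users table and its columns here are pure placeholders, not taken from the original procedure:

create or replace function sp_GetUserByID(in_UserId varchar(50))
returns varchar(50)
language plpgsql as $$
declare
    somevariable varchar(50);
begin
    -- placeholder body: the real Transact-SQL logic goes here
    select u.username
      into somevariable
      from users u
     where u.user_id = in_UserId;

    return somevariable;
end;
$$;

-- called like any other function:
select sp_GetUserByID('42');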

HTH,
Filip



Re: [SQL] How to split up phone numbers?

2012-02-20 Thread Filip Rembiałkowski

At 2012-02-20 15:50, Andreas wrote:

Hi,
is there a way to split up phone numbers?
I know that's a tricky topic and it depends on the national phone 
number format.
I'm especially interested in a solution for Germany, Switzerland and 
Austria.


I've got all sorts of things in a phone number column that hardly make 
sense, like:

+49432156780
0049 4321 5678 0
04321/5678-0
and so on...
Those 3 samples are actually the same number in different notations.

The aim would be to get a normalized number split up into 4 separate columns:
nr_nation
nr_city
nr_main
nr_individual

so I end up with
49   4321   5678   0 for central
49   4321   5678   42   for Mr. Smith

Is this doable?



yes, sure - with regular expressions and/or a bunch of IF/THEN/ELSE 
blocks in PL/PgSQL.


see regexp_split_to_array, and other regex functions in the manual:

http://www.postgresql.org/docs/current/static/functions-matching.html#FUNCTIONS-POSIX-REGEXP
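A very rough sketch of the regex route, only for the three notations shown above (assumed: German numbers, country code 49); real normalization would need an area-code table, since German city prefixes vary in length:

-- strip everything except digits and '+', then normalize +49 / 0049 / leading 0 to 49:
select regexp_replace(
           regexp_replace('0049 4321 5678 0', '[^0-9+]', '', 'g'),
           E'^(\\+|00)49|^0', '49'
       ) as normalized;
-- yields 49432156780; '+49432156780' and '04321/5678-0' come out the same.
-- splitting that further into nr_city / nr_main / nr_individual needs prefix data.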




Re: [SQL] Wrong query plan when using a left outer join

2012-01-17 Thread Filip Rembiałkowski
On Tue, Jan 17, 2012 at 7:54 AM, Feike Steenbergen
feikesteenber...@gmail.com wrote:
 I have the following setup:

 A table called hand:


                                     Table stage.hand_meta
     Column     |           Type           |                          Modifiers
 ---------------+--------------------------+--------------------------------------------------------------
  hand_id       | integer                  | not null default nextval('hand_meta_hand_id_seq'::regclass)
  hand_no       | bigint                   | not null
  site_id       | smallint                 | not null
  game_id       | smallint                 | not null
  time          | timestamp with time zone | not null
  tournament_id | bigint                   |
 Indexes:
     hand_meta_pkey PRIMARY KEY, btree (hand_id) CLUSTER
     hand_meta_hand_no_site_unq UNIQUE, btree (hand_no, site_id)
     hand_meta_time_idx btree (time)
     hand_meta_tournament_id_idx btree (tournament_id)
 Referenced by:
     TABLE handhistory_plain CONSTRAINT handhistory_plain_hand_id_fkey FOREIGN KEY (hand_id) REFERENCES hand_meta(hand_id)
     TABLE handhistory_staged CONSTRAINT staged_hand_hand_id_fkey FOREIGN KEY (hand_id) REFERENCES hand_meta(hand_id)

 Getting the max hand_id (primary key) results in using an index:


 feiketracker=> explain analyze select max(hand_id) from stage.hand;

                                                               QUERY PLAN
 ----------------------------------------------------------------------------------------------------------------------------------
  Result  (cost=0.03..0.04 rows=1 width=0) (actual time=0.379..0.383 rows=1 loops=1)
    InitPlan 1 (returns $0)
      ->  Limit  (cost=0.00..0.03 rows=1 width=4) (actual time=0.337..0.340 rows=1 loops=1)
            ->  Index Scan Backward using hand_meta_pkey on hand_meta  (cost=0.00..82667.12 rows=2479440 width=4) (actual time=0.319..0.319 rows=1 loops=1)
                  Index Cond: (hand_id IS NOT NULL)
  Total runtime: 0.823 ms
 (6 rows)


 Now, if I create a view which left outer joins another table and
 select max hand_id, it uses a seq scan, which I think it shouldn't use,
 as it only needs to query hand_meta and then use the index:


 feiketracker=> create view seqscan_example as
     (select * from hand_meta left join handhistory_plain using(hand_id));
 CREATE VIEW
 Time: 72.736 ms

 feiketracker=> explain analyze select max(hand_id) from seqscan_example;
                                                          QUERY PLAN
 ------------------------------------------------------------------------------------------------------------------------
  Aggregate  (cost=49261.00..49261.01 rows=1 width=4) (actual time=34672.052..34672.054 rows=1 loops=1)
    ->  Seq Scan on hand_meta  (cost=0.00..43062.40 rows=2479440 width=4) (actual time=0.180..16725.109 rows=2479440 loops=1)
  Total runtime: 34672.874 ms
 (3 rows)


 feiketracker=> select version();
                                                               version
 ------------------------------------------------------------------------------------------------------------------------
  PostgreSQL 9.0.6 on armv5tejl-unknown-linux-gnueabi, compiled by GCC gcc (GCC) 3.4.4 (release) (CodeSourcery ARM 2005q3-2), 32-bit
 (1 row)


 I cannot think of a reason to use a seqscan. The left join means all
 rows from hand_meta should be used, and hand_id is the
 primary key, so selecting max(hand_id) from the table or the view
 should result in the same execution plan, or am I thinking wrong?


It's not always so simple for the planner to eliminate a left join...
imagine that the relation on the right side of the join has side effects.

So Postgres will not cut off the right side of the join, but it
will still try to choose the best execution plan, and a seq scan may simply be
faster here; the breaking point is somewhere near 50% selectivity.

When handhistory_plain gets much bigger, the plan will change.

Try to experiment with SET enable_seqscan TO false; and see what happens.


BTW, add a foreign key and an index on handhistory_plain.hand_id (unless
you have them already); see the sketch below.
BTW2, if you really don't care about handhistory you can just use the
original query with no join.
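For the record, a sketch of both suggestions; the index name is made up, and the enable_seqscan change is only for comparing plans:

-- index on the referencing column (the FK itself already appears in the \d output above):
create index handhistory_plain_hand_id_idx on handhistory_plain (hand_id);

-- compare the plans with the seq scan discouraged, then put the setting back:
set enable_seqscan to off;
explain analyze select max(hand_id) from seqscan_example;
reset enable_seqscan;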


Filip



Re: [SQL] Call a external app on postgreSQL start-up (windows)

2011-12-27 Thread Filip Rembiałkowski
2011/12/27 Maurício Cruz c...@sygecom.com.br:
 Hi all,

 I need to execute some routines on Windows every time that PostgreSQL is
 started on the server machine,

before it's started or just after it's started?


 I was thinking of creating a PL/pgSQL function to call this executable, or something
 like that...
There are no triggers for events other than INSERT/UPDATE/DELETE yet.

I would rather create an external script that:
- reads lastruntime from a savefile (a simple one-line text file will do)
- connects to Postgres and fetches SELECT now(), pg_postmaster_start_time();
- if pg_postmaster_start_time is newer than the savefile timestamp,
saves now() to the savefile and runs the job (a SQL sketch of the check is below).

(I looked at PgAgent, but from the docs I see it does not yet have
"once per server startup" logic.)
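The server-side half of that check is a single query; the literal timestamp here just stands for whatever the script saved on its previous run:

select now() as this_run,
       pg_postmaster_start_time() as server_started,
       pg_postmaster_start_time() > '2011-12-27 08:00:00+00'::timestamptz
           as restarted_since_last_run;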



 Has anyone done something like this? Or have any ideas?



 Thanks everyone!






Re: [SQL] Call a external app on postgreSQL start-up (windows)

2011-12-27 Thread Filip Rembiałkowski
2011/12/27 Maurício Cruz c...@sygecom.com.br:

 I need to execute it after the PostgreSQL service has started.

OK, so the method I described should be good for you.
The script could be put in Task Scheduler to run every 15 minutes, or at the minimal
delay that is acceptable.
It can be written in the Windows batch language, or Perl, or Python, or whatever...



please CC the list when replying ( use reply to all )



Re: [SQL] Natural sort order

2011-12-17 Thread Filip Rembiałkowski
If you use btrsort(column) from the example, you can just create a
functional index on this expression.

CREATE INDEX mytable_column_btrsort_idx ON mytable( btrsort(column) );

this can help.
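A usage sketch (table and column names are placeholders, and btrsort() must be declared IMMUTABLE for the expression index to be accepted):

create index mytable_somecol_btrsort_idx on mytable ( btrsort(somecol) );

-- queries that order by the identical expression can then use the index:
select * from mytable order by btrsort(somecol) limit 50;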





2011/12/17 Richard Klingler rich...@klingler.net:
 Morning...

 What is the fastest way to achieve natural ordering from queries?

 I found a function at:
 http://2kan.tumblr.com/post/361326656/postgres-natural-ordering

 But it increases the query time from around 0.4 msecs to 74 msecs...
 Might not be much if only occasional queries are made... but I use it for
 building up a hierarchical tree menu in a web application where every msec
 counts (o;


 cheers
 richard





Re: [SQL] plpgsql: how to get the exception's detail information?

2011-12-01 Thread Filip Rembiałkowski
2011/11/29 Muiz work.m...@gmail.com:
    I wrote a function to execute a SQL string, e.g.: update tableA set
 field1='abc' where name='123'; delete from tableB where id=333;
    The following is my function:
 -
 CREATE OR REPLACE FUNCTION no_err_rollback()
   RETURNS boolean AS
 $BODY$
 BEGIN
     ROLLBACK;
     RETURN TRUE;
 EXCEPTION
      WHEN others THEN
             RETURN TRUE;
 END
 $BODY$
   LANGUAGE plpgsql;


I think this does not do what you think.

Transaction control commands (like ROLLBACK) inside functions do not
work in PostgreSQL.
Using ROLLBACK in PL/pgSQL will raise an exception (which you forcibly
ignored above).
This is a big feature which is sometimes called "autonomous
transactions" and is not yet implemented, AFAIK.
You can test this quite easily; use the txid_current() function to check
the current transaction ID.

  1. When I execute a SQL statement, can I get the total number of records the user
 updated or deleted?
see GET DIAGNOSTICS -
http://www.postgresql.org/docs/9.0/static/plpgsql-statements.html
NOTE: this will count rows affected by the last query only.
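A minimal PL/pgSQL sketch of GET DIAGNOSTICS, reusing the statements from the question; ROW_COUNT reflects only the immediately preceding statement:

do $$
declare
    n integer;
begin
    update tableA set field1 = 'abc' where name = '123';
    get diagnostics n = row_count;
    raise notice 'rows updated: %', n;

    delete from tableB where id = 333;
    get diagnostics n = row_count;
    raise notice 'rows deleted: %', n;
end;
$$;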

  2. if I cache the exceptions, can I get the detail information?
what do you mean by cache exceptions?



Re: [SQL] handling duplicate row exception

2011-09-26 Thread Filip Rembiałkowski
On 22 September 2011 08:11, Amar Dhole adh...@tibco.com wrote:


 Hi Filip,


 No, not 100% sure when this can happen. This approach will not be possible,
 as in our application we are programmatically handling these cases and going
 another route to add the record with an increased key.



I am almost sure this approach will be possible, for more details see:

http://stackoverflow.com/questions/1109061/insert-on-duplicate-update-postgresql
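For reference, the pattern discussed there (and in the merge_db example in the PostgreSQL docs) is a retry loop like this sketch; table and column names are illustrative only:

create or replace function upsert_foo(k integer, v text) returns void
language plpgsql as $$
begin
    loop
        update foo set val = v where id = k;
        if found then
            return;
        end if;
        begin
            insert into foo (id, val) values (k, v);
            return;
        exception when unique_violation then
            -- another session inserted the same key concurrently; loop and retry the update
        end;
    end loop;
end;
$$;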




 I am using 9.0.4 version.


 Thanks
 Amar 




Re: [SQL] help with xpath namespace

2011-09-23 Thread Filip Rembiałkowski
2011/9/22 Brian Sherwood bds...@gmail.com


 select (xpath('/chassis-inventory/chassis/serial-number/text()',
               data_xml,
               ARRAY[ARRAY['junos', 'http://xml.juniper.net/junos/9.6R4/junos-chassis']]
        )) from xml_test;

 Can anyone suggest how I would go about getting the serial-number with
 xpath?



http://www.postgresql.org/docs/9.1/static/functions-xml.html#FUNCTIONS-XML-PROCESSING
see the namespace-mapping example there.

This will work:

select xpath(
    '/junos:chassis-inventory/junos:chassis/junos:serial-number/text()',
    data_xml,
    ARRAY[ARRAY['junos', 'http://xml.juniper.net/junos/9.6R4/junos-chassis']]
)
from xml_test;



cheers, Filip


Re: [SQL] a spatial table's bounding box

2011-09-23 Thread Filip Rembiałkowski
2011/9/22 Asli Akarsakarya aslia...@yahoo.com


 I want to get the extent of a spatial table. Say there is a spatial 
 table named 'roads', with some road geometry inside. I want to fetch the 
 bounding box (envelope or extent) for that table, that is, for all the 
 roads that it contains.

 According to the OGC specification, this should be defined in the 
 geometry_columns table, as four columns: minX, minY, maxX and maxY. 
 But I couldn't find it anywhere in my PostGIS. Yet.

 From which table/view? With what SQL?



I would try

SELECT st_envelope( st_collect(geom) ) FROM roads;
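For completeness, PostGIS also ships an aggregate meant exactly for this, ST_Extent; it returns a box (with no SRID) rather than a geometry. Assuming the geometry column is named geom:

SELECT ST_Extent(geom) FROM roads;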


HTH, Filip


Re: [SQL] handling duplicate row exception

2011-09-21 Thread Filip Rembiałkowski
Hi

There is no IGNORE_DUP_KEY equivalent in PostgreSQL.

If you are 100% sure that you want to ignore unique key violations, you can
wrap your INSERT code in PL/PgSQL block and handle the exception yourself.

I mean:

DO $$
BEGIN
 INSERT INTO foo (bar,baz) SELECT 42, 666;
EXCEPTION WHEN unique_violation THEN RAISE NOTICE 'row skipped';
END;
$$


BTW - which version of PostgreSQL are you using?

Filip



2011/9/21 Amar Dhole adh...@tibco.com

 Hi All,

 I have a requirement where my application tries to enter a duplicate row
 in a table using batch-execute code. A batch exception is thrown; we check the
 error code and skip it, but after this exception all my next update/insert
 statements error out with the following exception:

 Database error. SQL state 25P02. Database specific error code (if any) was
 0. Database error message (if any) was: org.postgresql.util.PSQLException:
 ERROR: current transaction is aborted, commands ignored until end of
 transaction block.:


  Is there any way to proceed? In SQL Server we have the table option
 IGNORE_DUP_KEY = ON; if this is set, a warning is generated
 instead of an exception, so the other insert/update statements can proceed.






Re: [SQL] losing data in postgres database

2011-01-24 Thread Filip Rembiałkowski
2011/1/24 victor katemana developervick...@gmail.com:
 hi everyone, I have a problem of losing data in my Postgres database
 whenever I restart my machine. What could be the possible problem? Please help
 me out.

can you please adjust your question to this
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems


How exactly do you insert data? Maybe there is a BEGIN but no COMMIT?
How exactly do you verify that the data does not exist?



Re: [SQL] help needs in converting db2 function in postgresql.

2011-01-11 Thread Filip Rembiałkowski
2011/1/11 Amar Dhole adh...@tibco.com

 Hi,
 I need help converting the following DB2 function into a PostgreSQL function.
 Any pointer will be a great help in proceeding ahead.

 CREATE FUNCTION in_liststring ( string CLOB(64K) )
   RETURNS TABLE ( ordinal INTEGER, index INTEGER )
   LANGUAGE SQL
   DETERMINISTIC
   NO EXTERNAL ACTION
   CONTAINS SQL
   RETURN
  WITH t(ordinal, index) AS
 ( VALUES ( 0, 0 )
   UNION ALL
   SELECT ordinal+1, COALESCE(NULLIF(
 -- find the next delimiter ','
 LOCATE(',', string, index+1), 0),
 LENGTH(string)+1)
   FROM   t
  -- to prevent a warning condition for infinite
  -- recursions, we add the explicit upper
  -- boundary for the "ordinal" values
   WHERE  ordinal  1 AND
  -- terminate if there are no further delimiters
  -- remaining
  LOCATE(',', string, index+1) > 0 )
  SELECT ordinal, index
  FROM   t
  UNION ALL
  -- add indicator for the end of the string
  SELECT MAX(ordinal)+1, LENGTH(string)+1
  FROM   t
 ;

 commit;

 DROP FUNCTION INSTRTBL;

 CREATE FUNCTION INSTRTBL ( string CLOB(64K) )
   RETURNS TABLE ( INSTRTBL CLOB(64K) )
   LANGUAGE SQL
   DETERMINISTIC
   NO EXTERNAL ACTION
   CONTAINS SQL
   RETURN
  WITH t(ordinal, index) AS
 ( SELECT ordinal, index
   FROM   TABLE ( in_liststring(string) ) AS x )
  SELECT SUBSTR(string, t1.index+1, t2.index - t1.index - 1)
  -- the join below makes sure that we have the lower and
  -- upper index where we can find each of the ',' delimiters
  -- that are separating the INSTRTBL.  (For this, we exploit
  -- the additional indexes pointing to the beginning and end
  -- of the string.)
  FROM   t AS t1 JOIN t AS t2 ON
( t2.ordinal = t1.ordinal+1 )
 ;




create or replace function instrtbl(text)
returns table(instrtbl text)
language sql
immutable
strict
as $$
SELECT * FROM regexp_split_to_table($1, ',')
$$;

fi...@filip=# select * from instrtbl( 'one, two, really long three' );
  instrtbl
 -------------------
 one
  two
  really long three
(3 rows)

I love PostgreSQL.

Filip


Re: [SQL] Translate Function PL/pgSQL to SQL92

2010-12-16 Thread Filip Rembiałkowski
2010/12/16 serviciotdf servicio...@gmail.com

 Hello,

 I have a Function in PL/pgSQL and I need to translate it to SQL92, but I'm
 stuck.

 ###
 CREATE OR REPLACE FUNCTION pruebas(varchar, varchar, integer)
 RETURNS SETOF personal AS
 $delimiter$
 BEGIN
 PERFORM id from documentos WHERE descripcion = $1;
 IF NOT FOUND THEN
INSERT INTO documentos(descripcion) VALUES($1);
 END IF;
 INSERT INTO personal(nombre,idtipodocumento,numdoc)
 VALUES($2, (SELECT id from documentos WHERE descripcion = $1), $3);
 END;
 $delimiter$
 LANGUAGE plpgsql;
 ###

 Tables

 CREATE TABLE documentos (
  id serial NOT NULL,
  descripcion character varying(60),
  CONSTRAINT pkdocumentos PRIMARY KEY (id)
 );

 CREATE TABLE personal (
  id serial NOT NULL,
  nombre character varying(60),
  idtipodocumento smallint NOT NULL,
  numdoc integer,
  CONSTRAINT pkpersonal PRIMARY KEY (id),
  CONSTRAINT fkpdoc FOREIGN KEY (idtipodocumento)
  REFERENCES documentos (id) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE NO ACTION,
  CONSTRAINT unqnumdoc UNIQUE (idtipodocumento, numdoc)
 );


If I understand correctly, you mean translating this function into a
sequence of plain SQL commands:

INSERT INTO documentos(descripcion) SELECT $1
WHERE NOT EXISTS ( SELECT id FROM documentos WHERE descripcion = $1 );

INSERT INTO personal ( nombre, idtipodocumento, numdoc )
SELECT $2, ( SELECT id FROM documentos WHERE descripcion = $1 ), $3;

of course you will need to bind / pass parameters...

HTH

Filip


Re: [SQL] Aggregating by unique values

2010-12-14 Thread Filip Rembiałkowski
try

select zip, count(distinct id) from customer_service_date group by zip;

2010/12/14 Lee Hachadoorian lee.hachadoor...@gmail.com

 Hello,

 I'm trying to count customers who have received services by ZIP code,
 but I want to count each customer only once even though customers may
 have received services on multiple dates, and therefore appear in the
 table multiple times. There *is* a separate customers table, but because
 of dirty data, I cannot rely on it.

 The best I can come up with is:

 SELECT
zip, count(*) AS count_serviced
 FROM
(SELECT DISTINCT zip, id FROM customer_service_date) a
 GROUP BY
zip
 ;

 The table (with some irrelevant fields dropped) is:

 CREATE TABLE customer_service_date
 (
  id integer,
  address character varying,
  city character varying,
  state character varying,
  zip character varying,
  service_date date
 )
 ;

 The table is missing a primary key field, but it would be (id,
 service_date) if it had one.

 Any suggestions to improve this?

 Thanks,
 --Lee

 --
 Lee Hachadoorian
 PhD Student, Geography
 Program in Earth  Environmental Sciences
 CUNY Graduate Center





Re: [SQL] Sorting router interfaces

2010-11-01 Thread Filip Rembiałkowski
2010/11/1 Brian Sherwood bds...@gmail.com:
 I am trying to sort router interface names.
 The problem is that I am doing a text sort and need to do a numerical sort.

 I want the interfaces to be in numerical order:

  GigabitEthernet1/0/1    | 1/0/1 | {1,0,1}
  GigabitEthernet1/0/2    | 1/0/2 | {1,0,2}
  GigabitEthernet1/0/3    | 1/0/3 | {1,0,3}
 etc.


 What I get instead is the following text ordering:

  GigabitEthernet1/0/1    | 1/0/1 | {1,0,1}
  GigabitEthernet1/0/10   | 1/0/10    | {1,0,10}
  GigabitEthernet1/0/11   | 1/0/11    | {1,0,11}
  GigabitEthernet1/0/12   | 1/0/12    | {1,0,12}
  GigabitEthernet1/0/13   | 1/0/13    | {1,0,13}
  GigabitEthernet1/0/14   | 1/0/14    | {1,0,14}
  GigabitEthernet1/0/15   | 1/0/15    | {1,0,15}
  GigabitEthernet1/0/16   | 1/0/16    | {1,0,16}
  GigabitEthernet1/0/17   | 1/0/17    | {1,0,17}
  GigabitEthernet1/0/18   | 1/0/18    | {1,0,18}
  GigabitEthernet1/0/19   | 1/0/19    | {1,0,19}
  GigabitEthernet1/0/2     | 1/0/2 | {1,0,2}
  GigabitEthernet1/0/20   | 1/0/20    | {1,0,20}
  GigabitEthernet1/0/21   | 1/0/21    | {1,0,21}
  GigabitEthernet1/0/22   | 1/0/22    | {1,0,22}
  GigabitEthernet1/0/23   | 1/0/23    | {1,0,23}
  GigabitEthernet1/0/24   | 1/0/24    | {1,0,24}
  GigabitEthernet1/0/25   | 1/0/25    | {1,0,25}
  GigabitEthernet1/0/26   | 1/0/26    | {1,0,26}
  GigabitEthernet1/0/27   | 1/0/27    | {1,0,27}
  GigabitEthernet1/0/28   | 1/0/28    | {1,0,28}
  GigabitEthernet1/0/29   | 1/0/29    | {1,0,29}
  GigabitEthernet1/0/3     | 1/0/3 | {1,0,3}
  GigabitEthernet1/0/30   | 1/0/30    | {1,0,30}
  GigabitEthernet1/0/31   | 1/0/31    | {1,0,31}
  GigabitEthernet1/0/32   | 1/0/32    | {1,0,32}
  GigabitEthernet1/0/33   | 1/0/33    | {1,0,33}


 FYI: I also have entries like the following:
  lc-5/2/0.32769  | 5/2/0.32769  | {5,2,0.32769}
  irb.5        | .5   | {.5}
  irb.51          | .51  | {.51}
  irb.52      | .52  | {.52}
  ae6         | 6    | {6}
  ae7         | 7    | {7}
  lo0.0       | 0.0  | {0.0}
  Vlan710      | 710  | {710}
  Vlan760      | 760  | {760}
  Vlan910      | 910  | {910}
  Vlan910      | 910  | {910}
  gre     |     | {}
  tap     |         | {}
  dsc     |         | {}


 The above listings are produced with the following:

 SELECT
    interface,
    regexp_replace(interface,'[A-Za-z -]+','','g') as sort_col1,
    regexp_split_to_array(regexp_replace(interface,'[A-Za-z -]+','','g'),E'/') as sort_col
 FROM all_ports
 ORDER BY devicename,sort_col


 I have tried to break out the interface number to a separate array
 column to sort on and was hoping to cast the array to a float[], but
 no luck:

 SELECT
    interface,
    regexp_replace(interface,'[A-Za-z -]+','','g') as sort_col1,
    regexp_split_to_array(regexp_replace(interface,'[A-Za-z -]+','','g'),E'/')::float as sort_col
 FROM all_ports

 psql:-:15: ERROR:  cannot cast type text[] to double precision
 LINE 5: ...gexp_replace(interface,'[A-Za-z -]+','','g'),E'/')::float as...



 Can anyone suggest a better approach or help with this approach?

good approach, it just needed to be generalized...

fi...@filip=# SELECT interface,
    regexp_replace( interface, '[0-9].*', '' ) as before_1st_digit,
    regexp_replace( interface, '^[^0-9]+', '' ) as from_first_digit,
    CASE WHEN interface ~ '[0-9]' THEN
        regexp_split_to_array( regexp_replace( interface, '^[^0-9]+', '' ), '[^0-9]+' )::int[]
    ELSE array[-1] END as nums
FROM interfaces
ORDER BY 2,4;

   interface   | before_1st_digit | from_first_digit |   nums
-----------------------+------------------+------------------+----------
 eth0  | eth  | 0| {0}
 eth0/0| eth  | 0/0  | {0,0}
 eth0/1| eth  | 0/1  | {0,1}
 eth0/10   | eth  | 0/10 | {0,10}
 eth1  | eth  | 1| {1}
 GigabitEthernet1/0/2  | GigabitEthernet  | 1/0/2| {1,0,2}
 GigabitEthernet1/0/20 | GigabitEthernet  | 1/0/20   | {1,0,20}
 irb.5 | irb. | 5| {5}
 irb.51| irb. | 51   | {51}
 tun   | tun  |  | {-1}
 tun0  | tun  | 0| {0}
 Vlan72| Vlan | 72   | {72}
 Vlan710   | Vlan | 710  | {710}
(13 rows)



-- 
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/


Re: [SQL] help

2010-05-07 Thread Filip Rembiałkowski
select substring( data from '\((.*)\)' ) from table;
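Depending on the standard_conforming_strings setting, the backslashes may need doubling; a spelling that behaves the same either way (mytable stands in for the real table name):

select substring( data from E'\\((.*)\\)' ) from mytable;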


2010/5/5 Nicholas I nicholas.domni...@gmail.com:
 Hi,

 I have a table in which the data is entered like this:

 Example:

 One (1)
 Two (2)
 Three (3)

 I want to extract only the data within the parentheses,

 that is
 1
 2
 3


 Thank You
 Nicholas I





-- 
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/



Re: [SQL] How to cascade information like the user roles ?

2010-01-19 Thread Filip Rembiałkowski
2010/1/19 Andreas maps...@gmx.net

 Hi,

 I need something like the user-roles of PG to store options for my users.
 I guess I need a table with roles, one with options, and one that stores the
 references from roles to options.

 roles (role_id, role_name)
 option (option_id, option_name)
 role_has_option (role_fk, option_fk)

 so far is easy. Now I can let role1 have option1 and option2 ...

 But I'd further like to let role2 inherit role1's options and also have
 option3.
 role_inherits_role (parent_role_fk, child_role_fk)
 1, 2

 What SELECT would deliver all options for role2, including the inherited
 ones?
 like
 role_id, option_id
 2, 1
 2, 2
 2, 3



select role_fk as role_id, option_fk as option_id
from role_has_option
where role_fk = 2
union
select inh.child_role_fk, opt.option_fk
from role_has_option opt
join role_inherits_role inh on inh.parent_role_fk = opt.role_fk
where inh.child_role_fk = 2;


?
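If the inheritance can be more than one level deep and you are on 8.4 or later, a recursive CTE sketch (same table names as above) collects the options of every ancestor role as well:

with recursive all_roles(role_id) as (
    select 2                              -- the role we are asking about
    union
    select inh.parent_role_fk
      from role_inherits_role inh
      join all_roles ar on ar.role_id = inh.child_role_fk
)
select distinct 2 as role_id, opt.option_fk as option_id
  from role_has_option opt
  join all_roles ar on ar.role_id = opt.role_fk;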







-- 
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/


Re: [SQL] short-cutting if sum() > constant

2009-12-22 Thread Filip Rembiałkowski
2009/12/22 Ivan Sergio Borgonovo m...@webthatworks.it

 Hi,


Hi :-)



 I'd like to know if

 select sum(qty) from t where status=37;

 is > constant.

 qty is always >= 0.

 Is there a way to skip examining further rows and return a result
 ASAP?



With plain SQL, no.

With a user defined function in PL/PgSQL, yes.
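A rough PL/pgSQL sketch of the idea, assuming qty is never negative so the running total can only grow (the threshold is passed in as a parameter):

create or replace function sum_exceeds(threshold numeric) returns boolean
language plpgsql stable as $$
declare
    running numeric := 0;
    r record;
begin
    for r in select qty from t where status = 37 loop
        running := running + r.qty;
        if running > threshold then
            return true;   -- stop early: no need to examine further rows
        end if;
    end loop;
    return false;
end;
$$;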





 --
 Ivan Sergio Borgonovo
 http://www.webthatworks.it






-- 
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/


Re: [SQL] get distinct + group by then filter

2009-12-18 Thread Filip Rembiałkowski
2009/12/17 Ivan Sergio Borgonovo m...@webthatworks.it

 I've a web application and I'm trying to do some reporting on
 affiliate commission

 create table tracky_hit (
  hitid serial,
  esid varchar(32), -- related to browser session
  track_time timestamp,
  aid varchar(32), -- affiliate code
  -- some other tracking stuff
 );

 create table tracky_event (
  eventid serial,
  esid varchar(32) references tracky_hit (esid)


This implies that tracky_hit.esid is at least UNIQUE.

 );

 create table tracky_ordergroup_event (
  ordergroupid int references ...,
  eventid int references tracky_event (eventid)
 );

 Now I'd like to pick up the first hit for each esid in a given
 interval of time for a given aid and relate them with ordergroupid.

 aid may change across the same esid.


If tracky_hit.esid is unique, then why can the same esid have many aids?

Can you specify a more complete schema (at least the primary keys would be nice)?




 Getting the first hit for each esid can be done:

 select min(hitid) as h
  from tracky_hit
group by esid;

 or

 select distinct on (esid) hitid
  from tracky_hit
order by esid, track_time;


DISTINCT ON seems a good approach to such queries.




 If I put a where aid='somestuff' right in the above query... I'm not
 picking up the first hit in an esid.

 The only way that comes to my mind to solve the problem is applying
 the condition later in a subquery, but no conditions means a lot of
 data returned.

 I've a similar problem with the interval: if I chop in the middle of
 a session I may not pick up the beginning of each session.
 Furthermore I've to count session just once even if they cross the
 boundary of an interval.

 I could do something like:

 select oe.ordergroupid from
  tracky_ordergroup_event oe
  join tracky_event e on e.eventid=oe.eventid
  join tracky_hit th on th.esid=e.esid
  where th.hitid in
(select distinct on (esid) hitid
  from tracky_hit
  where track_time between
('2009-12-01'::timestamp - interval '1 days')
and
('2009-12-01'::timestamp + interval '1 months' + interval '1
  days')
  order by esid, track_time
)
  and th.aid='someaid'
  and th.track_time between
  ('2009-12-01'::timestamp)
and
  ('2009-12-01'::timestamp + interval '1 months');

 but this looks awful. Any better way?

 I'm on 8.3 and no short term plan to move to 8.4

 thanks

 --
 Ivan Sergio Borgonovo
 http://www.webthatworks.it






-- 
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/


Re: [SQL] Client-side compression

2009-06-24 Thread Filip Rembiałkowski
2009/6/23 Rob Sargent robjsarg...@gmail.com


 Not sure if this belongs here or on the admin or performance list.
  Apologies if so. (And this may be a second posting as the first was from an
 un-registered account.  Further apologies)

 My assumption is that any de/compression done by postgres would be
 server-side.

 We're considering minimizing bandwidth utilization


Why? Are you hitting some limit?



 by using client-side compression on a column value that will typically be
 multi-megabyte in size.  We would use ALTER TABLE SET STORAGE EXTERNAL to
 prevent the server from un-necessary compression.

 Is this generally worthwhile?


No general answer, sorry.

It depends on other conditions and on what you want to achieve.

1. Almost always you will get a better on-disk compression ratio. That's
because, quoting the PG docs:
"The compression technique used is a fairly simple and very fast member of
the LZ family of compression techniques. See
src/backend/utils/adt/pg_lzcompress.c for the details."

2. You will force all client apps to decompress data on their side.

3. To minimize bandwidth utilisation, there are other ways (think compressed
tunnels, SSL and so on - but that adds per-connection overhead)




 Is there a great penalty for a query which delves into the value, given
 that the server will not be aware it's compressed?


not clear. do you mean something like

SELECT ... FROM table where decompress(compressed_data) LIKE 'whatever'
???

of course it will be a great penalty.


 I assume we're pretty much on our own to prevent such actions (i.e. the app
 can never query against this column via sql).


certainly.



-- 
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/


Re: [SQL] Extract week from date

2009-05-18 Thread Filip Rembiałkowski
2009/5/18 Dani Castaños dcasta...@androme.es

 Hi again,

 I need to extract data grouped by week from a statistics table.

 I was trying something like this:

 SELECT total_duration, EXTRACT( week from date statistics_date )
 FROM statistics_daily
 GROUP BY EXTRACT( week from date statistics_date ), total_duration;

 But it doesn't work... Neither does:

 SELECT total_duration, statistics_date
 FROM statistics_daily
 GROUP BY EXTRACT( day from statistics_date ), total_duration,
 statistics_date;


hmmm... it's not clear what you want.

why not just something like

SELECT EXTRACT( week from statistics_date ) as week,
       SUM( total_duration ) as total_duration_sum
FROM statistics_daily
GROUP BY 1;


?



-- 
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/


Re: [SQL] Collapsing (select) row values into single text field.

2008-12-10 Thread Filip Rembiałkowski
2008/12/10 Allan Kamau [EMAIL PROTECTED]

 Hi all,
 I would like to concatenate the field values of several rows in a table
 that meet some similarity criteria  based on a the values of  some other
 field (more like a group by). Then I would also like to also include the
 lowest value of another associated field along.

 I have a table that contains 3 fields of interest.
 create table temp
 (id INTEGER NOT NULL
 ,location TEXT NOT NULL --this will hold the zip code
 ,lowest_temp NUMERIC(5,2) NOT NULL --The lowest temperature at some given
 night
 ,location_bit_data VARBIT NOT NULL
 ,PRIMARY KEY(id)
 );

 There will be usually more than one record for a location
 (location+lowest_temp is not unique either).
 Now I would like to collapse the data in this table (and populate another
 table) as follows.
 Lets assume this table has the structure below.

 create table temp_major
 (id INTEGER NOT NULL
 ,location TEXT NOT NULL --this will hold the zip code
 ,lowest_overall_temp NUMERIC(5,2) NOT NULL --The lowest temperature at some
 given night
 ,overall_location_bit_data VARBIT NOT NULL
 ,PRIMARY KEY(id)
 ,UNIQUE(location)
 );

 The new table (temp_major) is populated as follows: the
 location_bit_data values for a given location are grouped into one entry
 (to create a concatenation effect), the lowest_temp reading across all the
 records of the given location is noted, and the location is also noted; this
 data is used in populating the table.

 The solution I have so far involves using a stored procedure and cursors
 (on Select .. order by location) to continuously grow the data for a given
 location's overall_location_bit_data field.

 Allan.



sounds like you need a custom aggregate function.
http://www.postgresql.org/docs/current/static/xaggr.html

however it's not clear how you want to aggregate; what does your actual
grouping function do?

general pattern is:

CREATE FUNCTION varbit_concat(varbit,varbit)
returns varbit
as 'whatever you need' language 'of your choice' immutable;

CREATE AGGREGATE agg_varbit_concat ( varbit ) (
SFUNC = varbit_concat,
STYPE = varbit
-- check CREATE AGGREGATE syntax, maybe you need something fancy here
);


-- and finally:

SELECT
 location,
 min(lowest_temp) as lowest_overall_temp,
 agg_varbit_concat(location_bit_data) as overall_location_bit_data
FROM temp
GROUP BY location;
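A concrete sketch of the missing pieces: bit strings already concatenate with the || operator, so the state function can be plain SQL (note that without an ORDER BY the concatenation order is unspecified):

CREATE FUNCTION varbit_concat(varbit, varbit) RETURNS varbit
    AS 'SELECT $1 || $2' LANGUAGE sql IMMUTABLE STRICT;

CREATE AGGREGATE agg_varbit_concat (varbit) (
    SFUNC = varbit_concat,
    STYPE = varbit
);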


-- 
Filip Rembiałkowski


Re: [SQL] psql: FATAL: Ident authentication failed for user postgres

2008-09-08 Thread Filip Rembiałkowski
2008/9/8 VG [EMAIL PROTECTED]:
 Hi,

 I have installed postgresql-server using yum.
 pg_hba.conf group and owner were postgres; then I changed it to root.

 when I type the psql -U postgres command (as root user) I get:

 psql: FATAL:  Ident authentication failed for user postgres


read http://www.depesz.com/index.php/2007/10/04/ident/
what's in pg_hba.conf?



 My file permission are:
 drwx-- 5 postgres postgres  4096 2008-09-06 00:36 base
 drwx-- 2 postgres postgres  4096 2008-09-08 21:44 global
 drwx-- 2 postgres postgres  4096 2008-09-06 00:36 pg_clog
 -rw-r--r-- 1 postgres postgres  3200 2008-09-08 21:35 pg_hba.conf
 -rw--- 1 postgres postgres  1460 2008-09-06 00:36 pg_ident.conf
 drwx-- 2 postgres postgres  4096 2008-09-08 15:52 pg_log
 drwx-- 4 postgres postgres  4096 2008-09-06 00:36 pg_multixact
 drwx-- 2 postgres postgres  4096 2008-09-06 00:36 pg_subtrans
 drwx-- 2 postgres postgres  4096 2008-09-06 00:36 pg_tblspc
 drwx-- 2 postgres postgres  4096 2008-09-06 00:36 pg_twophase
 -rw--- 1 postgres postgres 4 2008-09-06 00:36 PG_VERSION
 drwx-- 3 postgres postgres  4096 2008-09-06 00:36 pg_xlog
 -rw--- 1 postgres postgres 15336 2008-09-06 00:36 postgresql.conf
 -rw--- 1 postgres postgres49 2008-09-08 21:44 postmaster.opts
 -rw--- 1 postgres postgres46 2008-09-08 21:44 postmaster.pid

 How can I fix this?
 Regards,
 Vikas






-- 
Filip Rembiałkowski



Re: [SQL] Check a column value not in Array.

2008-08-14 Thread Filip Rembiałkowski
2008/8/14 Emi Lu [EMAIL PROTECTED]:
 Greetings,

 May I know the command to check whether a column value is in an array, please?

 For example, I am looking for something like:

 select *
 from   test
 where  test.col not in ARRAY['val1', 'val2'];


select * from test where test.col <> ALL ( ARRAY['val1', 'val2'] );

see http://www.postgresql.org/docs/current/static/functions-comparisons.html

be careful with NULLs in this type of comparison.
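For illustration, both of these yield NULL rather than true, so the corresponding rows get filtered out:

select null::text <> ALL ( ARRAY['val1', 'val2'] );
select 'val3'     <> ALL ( ARRAY['val1', NULL] );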


-- 
Filip Rembiałkowski



Re: [SQL] transaction and triggers

2008-01-18 Thread Filip Rembiałkowski
2008/1/18, Gerardo Herzig [EMAIL PROTECTED]:
 Hi all. I'm puzzled again. Just thinking:

 As I'm having fun trying to make my own replication system, I'm stuck in
 this situation:
 Consider a simple table with a unique index on the `id' field, and a
 function that will fail, such as:

 insert into test (id) values (1);
 insert into test (id) values (1);

 This will fail and the transaction will be rolled back, but as the basis
 of my replication system is row level triggers, the first time the
 insert is called the trigger will be executed, and I would like to be
 able to stack the triggers in some way, so that they are fired only after
 a successful execution of the whole function.

If the transaction is rolled back, changes made by your trigger to the
local database will also be canceled.

Unless you manipulate remote databases, you have no problem.

Any changes made to remote databases, for example if you call some
dblink functions, are not transactional and will not be rolled back.

In this case you have to rethink your design, as there is no ON
COMMIT trigger (yet?).









-- 
Filip Rembiałkowski



Re: [SQL] ERROR: failed to re-find parent key in pk_ep07

2007-10-25 Thread Filip Rembiałkowski
2007/10/25, Otniel Michael [EMAIL PROTECTED]:
 OK. I will try to upgrade Postgres. What version can fix this
 problem? Postgres 8.2.5? Postgres 8.1?

 Now I use Postgres 8.0.3; what is the effect when I upgrade the Postgres version?
 I heard that I should check my SQL? In Postgres 8.0.3 it shows as a warning, in
 Postgres 8.1 as an error?


If you can afford some testing time, try with the very latest stable
version (8.2.5 at the moment).

If you can't, just take the latest patchlevel from the 8.0.x series - the
upgrade is seamless
(just backup, stop the server, replace binaries, start the server).

Detailed upgrade procedures are described in the manual or
somewhere on the net.


cheers

-- 
Filip Rembiałkowski



Re: [SQL] Finding broken regex'es

2007-10-03 Thread Filip Rembiałkowski
2007/10/3, Dawid Kuroczko [EMAIL PROTECTED]:

 CREATE TABLE rx_check (
 rx text CHECK (('' ~ rx) IN ('t','f'))
 );

wow. This is beautiful :)



Re: [SQL] How pull

2007-09-23 Thread Filip Rembiałkowski
On 23/09/2007, Matt Magoffin [EMAIL PROTECTED] wrote:
 Hello,

 My SQL skills are limited and I'm struggling with a query where I want to
 return a single item of an aggregate join. The query looks like this:

 select
   (case
 when agg.avg_rating is null then 0.0
 when agg.avg_rating < 0.75 then 0.5
 when agg.avg_rating < 1.25 then 1.0
 when agg.avg_rating < 1.75 then 1.5
 when agg.avg_rating < 2.25 then 2.0
 when agg.avg_rating < 2.75 then 2.5
 when agg.avg_rating < 3.25 then 3.0
 when agg.avg_rating < 3.75 then 3.5
 when agg.avg_rating < 4.25 then 4.0
 when agg.avg_rating < 4.75 then 4.5
 else 5.0
   end) as avg_rating,
   count(item.itemid) as item_count
 from media_item item
 inner join (
   select rating.mediaitem_userrating_hjid as ritemid,
 avg(rating.rating) as avg_rating
   from media_item_rating rating, media_item item
   where rating.mediaitem_userrating_hjid = item.itemid
   group by rating.mediaitem_userrating_hjid
 ) as agg
 on item.itemid = agg.ritemid
 group by avg_rating
 order by avg_rating desc

 and a sample of results is this:

  avg_rating | item_count
 ------------+------------
 5.0 | 21
 4.0 | 33
 3.0 | 13
 2.0 |  4
 1.0 |  1

 What I want as well is the ID of the item (and possibly its avg_rating
 value) from the agg join with the highest avg_rating for each output
 row... something like this:

  avg_rating | item_count | item_id | item_rating
 ------------+------------+---------+-------------
 5.0 | 21 | 109890  | 4.9
 4.0 | 33 | 89201   | 4.1
 3.0 | 13 | 119029  | 2.8
 2.0 |  4 | 182999  | 2.2
 1.0 |  1 | 1929| 1.0

 So the intention in this example is that item #109890 has an average
 rating of 4.9 and that is the highest rating within the >= 4.75 rating
 group.

 If anyone had any tips I'd greatly appreciate it.


Create a ranking function to make the queries look simpler:

create or replace function rating_group(numeric) returns numeric as
$$ select case
  when $1 < 0.3456 then 'quite small'
  ...
end $$ language sql immutable;
   (I'd make it STRICT, but you allow null rankings)

1st way: DISTINCT ON + subquery

select *,
       (select count(*) from rating
         where rating_group(rating) = subq.rating_group) as rating_group_size
from (
    select distinct on (rating_group)
        rating_group(r.rating) as rating_group,
        r.item_id as best_rated_item_id,
        r.rating as best_rating
    from rating r
    order by rating_group desc, r.rating desc
) subq;


2nd way (faster - actually 2 x faster)
using FIRST aggregate to calculate all in one pass

create function first(numeric,numeric) returns numeric as 'select $1'
language sql immutable strict;
create function first(integer,integer) returns integer as 'select $1'
language sql immutable strict;
create aggregate first( integer ) ( SFUNC = first, STYPE = integer );
create aggregate first ( numeric ) ( SFUNC = first, STYPE = numeric );

select
  rating_group(rating),
  count(*) as num_ratings,
  first(item_id) as best_rated_item_id,
  first(rating) as best_rating
from ( select * from rating order by rating desc ) ordered_ratings
group by rating_group
order by rating_group desc;


note: if you can, get rid of null ratings. what are they supposed to
mean? they make things a bit more complicated.



-- 
Filip Rembiałkowski
