[GENERAL] to_char with locale decimal separator

2013-07-29 Thread Ingmar Brouns
Hi,

I need to convert some numerical values to text using the decimal
separator that corresponds to the current locale. However, I do
not want to lose information by padding with zero decimals or
truncating zero decimals. So I basically want a text cast that
also replaces the dot by a comma. I've looked at the to_char
function and the formatting patterns, but when using those I
either add or truncate zero decimals.


# show lc_numeric;
 lc_numeric
------------
 nl_NL.utf8
(1 row)

# select 1.500::text;
 text
-------
 1.500
(1 row)

# select to_char(1.500, '99D');
 to_char
---------
   1,5000
(1 row)

# select to_char(1.500, 'FM99D');
 to_char
---------
 1,5
(1 row)


I would like to have '1,500' as the output, what is the best way
to achieve this?

Thanks in advance,

Ingmar


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] to_char with locale decimal separator

2013-07-29 Thread Ingmar Brouns
On Mon, Jul 29, 2013 at 1:24 PM, Ingmar Brouns  wrote:
> Hi,
>
> I need to convert some numerical values to text using the decimal
> separator that corresponds to the current locale. However, I do
> not want to lose information by padding with zero decimals or
> truncating zero decimals. So I basically want a text cast that
> also replaces the dot by a comma. I've looked at the to_char
> function and the formatting patterns, but when using those I
> either add or truncate zero decimals.
>
>
> # show lc_numeric;
>  lc_numeric
> 
>  nl_NL.utf8
> (1 row)
>
> # select 1.500::text;
>  text
> ---
>  1.500
> (1 row)
>
> # select to_char(1.500, '99D');
>  to_char
> --
>1,5000
> (1 row)
>
> # select to_char(1.500, 'FM99D');
>  to_char
> -
>  1,5
> (1 row)
>

Maybe it's important to add that the number of decimals in the values
is variable. I could of course adjust the pattern to work for
1.500, but I'm looking for a solution that works with an
arbitrary numerical value and is a little more elegant than
casting to text and then replacing the dot with a comma.


>
> I would like to have '1,500' as the output, what is the best way
> to achieve this?
>
> Thanks in advance,
>
> Ingmar




Re: [GENERAL] to_char with locale decimal separator

2013-07-29 Thread Ingmar Brouns
On Mon, Jul 29, 2013 at 3:45 PM, Adrian Klaver  wrote:
> On 07/29/2013 04:24 AM, Ingmar Brouns wrote:
>>
>> Hi,
>>
>> I need to convert some numerical values to text using the decimal
>> separator that corresponds to the current locale. However, I do
>> not want to lose information by padding with zero decimals or
>> truncating zero decimals. So I basically want a text cast that
>> also replaces the dot by a comma. I've looked at the to_char
>> function and the formatting patterns, but when using those I
>> either add or truncate zero decimals.
>>
>>
>> # show lc_numeric;
>>   lc_numeric
>> 
>>   nl_NL.utf8
>> (1 row)
>>
>> # select 1.500::text;
>>   text
>> ---
>>   1.500
>> (1 row)
>>
>> # select to_char(1.500, '99D');
>>   to_char
>> --
>> 1,5000
>> (1 row)
>>
>> # select to_char(1.500, 'FM99D');
>>   to_char
>> -
>>   1,5
>> (1 row)
>>
>>
>> I would like to have '1,500' as the output, what is the best way
>> to achieve this?
>
>
> This work?:
>
> test=> select replace(1.500::text, '.', ',');
>  replace
> -
>  1,500
> (1 row)
>


That would work, but it requires keeping track of which decimal
separator to use yourself. If you change the locale, the code
has to change as well. As to_char already converts the decimal
separator in a locale-aware manner, I wonder whether there is a
way to do this using the existing locale facilities.

Ingmar




Re: [GENERAL] to_char with locale decimal separator

2013-07-30 Thread Ingmar Brouns
On Mon, Jul 29, 2013 at 3:12 PM, Ingmar Brouns  wrote:
> On Mon, Jul 29, 2013 at 1:24 PM, Ingmar Brouns  wrote:
>> Hi,
>>
>> I need to convert some numerical values to text using the decimal
>> separator that corresponds to the current locale. However, I do
>> not want to lose information by padding with zero decimals or
>> truncating zero decimals. So I basically want a text cast that
>> also replaces the dot by a comma. I've looked at the to_char
>> function and the formatting patterns, but when using those I
>> either add or truncate zero decimals.
>>
>>
>> # show lc_numeric;
>>  lc_numeric
>> 
>>  nl_NL.utf8
>> (1 row)
>>
>> # select 1.500::text;
>>  text
>> ---
>>  1.500
>> (1 row)
>>
>> # select to_char(1.500, '99D');
>>  to_char
>> --
>>1,5000
>> (1 row)
>>
>> # select to_char(1.500, 'FM99D');
>>  to_char
>> -
>>  1,5
>> (1 row)
>>
>
> Maybe it's important to add that the number of decimals in the values
> is variable. I could of course adjust the pattern to work for
> 1.500, but I'm looking for a solution that works with an
> arbitrary numerical value and is a little more elegant than
> casting to text and then replacing the dot with a comma.
>

Anyone? Producing a locale-aware textual representation of a
numerical value while keeping the exact number of decimal digits
must be a fairly common use case. Would it be an idea to
implement a to_char variant that does not take a formatting
pattern and has this behaviour?

>
>>
>> I would like to have '1,500' as the output, what is the best way
>> to achieve this?
>>
>> Thanks in advance,
>>
>> Ingmar




Re: [GENERAL] to_char with locale decimal separator

2013-07-30 Thread Ingmar Brouns
On Tue, Jul 30, 2013 at 4:42 PM, Adrian Klaver  wrote:
> On 07/30/2013 03:03 AM, Ingmar Brouns wrote:
>>
>> On Mon, Jul 29, 2013 at 3:12 PM, Ingmar Brouns  wrote:
>
>
>>>
>>
>> anyone? Giving a locale corresponding textual representation
>> of a numerical value keeping the exact nr of decimal digits
>> must be a fairly common use case. Would it be an idea to
>> implement a to_char function that does not take a formatting
>> pattern and has this behaviour?
>>
>
> Best I can do is a proof of concept in plpythonu for determining locale
> decimal point:
>
> test=# SHOW lc_numeric ;
>  lc_numeric
> -
>  en_US.UTF-8
> (1 row)
>
> test=# DO $$
> import locale
> rs = plpy.execute("SHOW lc_numeric")
> lc_n = rs[0]["lc_numeric"]
> locale.setlocale(locale.LC_NUMERIC, lc_n)
> d = locale.nl_langinfo(locale.RADIXCHAR)
> plpy.notice("Decimal point is " + d)
> $$ LANGUAGE plpythonu;
> NOTICE:  Decimal point is .
> CONTEXT:  PL/Python anonymous code block
> DO
>
>
> test=# set lc_numeric = 'nl_NL.utf8';
> SET
> test=# DO $$
> import locale
> rs = plpy.execute("SHOW lc_numeric")
> lc_n = rs[0]["lc_numeric"]
> locale.setlocale(locale.LC_NUMERIC, lc_n)
> d = locale.nl_langinfo(locale.RADIXCHAR)
> plpy.notice("Decimal point is " + d)
> $$ LANGUAGE plpythonu;
> NOTICE:  Decimal point is ,
> CONTEXT:  PL/Python anonymous code block
> DO
>

Thanks for your time, appreciate it! As a dirty alternative, you could also do
something like:

select translate(5.000::text, '.', substr(to_char(0.0, 'FM0D0'), 2, 1));

Not so nice, but it would work. Though I still feel there should be a more
elegant way of doing this...
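For what it's worth, the substitution itself is simple once the locale's separator is known; a standalone Python sketch of the same idea (the function name and defaults are mine, not from the thread), with Python's locale module playing the role of lc_numeric:

```python
import locale

def localize_decimal(value_text, sep=None):
    """Replace the '.' radix in a numeric string with the given decimal
    separator, preserving the exact number of decimal digits. If sep is
    omitted, it is taken from the current LC_NUMERIC locale."""
    if sep is None:
        sep = locale.localeconv()['decimal_point']
    return value_text.replace('.', sep)

# With the separator passed explicitly, as an nl_NL locale would supply it:
print(localize_decimal('1.500', ','))  # -> 1,500
```

The objection in the thread still applies: knowledge of the radix character is hard-wired somewhere, which is exactly what a pattern-free, locale-aware to_char would avoid.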

>
>
>>>
>>>>
>>>> I would like to have '1,500' as the output, what is the best way
>>>> to achieve this?
>>>>
>>>> Thanks in advance,
>>>>
>>>> Ingmar
>>
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com




[GENERAL] variadic array arguments, can it work?

2012-01-19 Thread Ingmar Brouns
Hi,

I was trying to write a variadic function where the arguments themselves
are arrays, but calling it does not seem to work. I couldn't find
documentation mentioning this restriction:

postgres=# create or replace function foo(variadic args integer[][])
returns integer
as $$
begin return args[2][2]; end;
$$ language plpgsql;

Now I can call the function using variadic:

postgres=# select foo(variadic array[array[1,2],array[3,4]]);
 foo
-----
   4
(1 row)

but I cannot call it in the normal way...

postgres=# select foo( array[1,2] , array[3,4] );
ERROR:  function foo(integer[], integer[]) does not exist at character 8
HINT:  No function matches the given name and argument types. You might
need to add explicit type casts.
STATEMENT:  select foo( array[1,2] , array[3,4] );
ERROR:  function foo(integer[], integer[]) does not exist
LINE 1: select foo( array[1,2] , array[3,4] );
   ^
HINT:  No function matches the given name and argument types. You might
need to add explicit type casts.

I suspect this has something to do with multidimensional arrays not
truly being arrays of arrays...


Kind regards,

Ingmar Brouns



                                version
--------------------------------------------------------------------------
 PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.1
 20110908 (Red Hat 4.6.1-9), 64-bit
(1 row)
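The suspicion above is right in spirit. For contrast, an illustrative Python sketch (not from the thread): Python's variadic *args genuinely collects separate list arguments, because a list of lists is a true array of arrays, whereas a PostgreSQL multidimensional array is a single flat value with dimension metadata:

```python
def foo(*args):
    # args is a tuple of independent lists -- a genuine array of arrays.
    return args[1][1]

# Both call styles work here, unlike the PostgreSQL VARIADIC case above:
print(foo([1, 2], [3, 4]))       # -> 4
print(foo(*[[1, 2], [3, 4]]))    # -> 4
```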


[GENERAL] set returning functions and resultset order

2012-05-04 Thread Ingmar Brouns
Hi all,

I'm a bit confused about when exactly I can rely on the resultset order
when using set returning functions. The documentation states:

'After a query has produced an output table (after the select list has been
processed) it can optionally be sorted. If sorting is not chosen, the rows
will be returned in an unspecified order.'

So when taking this very literally, I would expect that you cannot rely on
the order of

SELECT * FROM generate_series(2,4);
 generate_series
-----------------
               2
               3
               4


I'm asking this because I'm writing queries similar to

SELECT string_agg(foo, '@') FROM regexp_split_to_table('1@2@3@4','@') foo;
 string_agg
------------
 1@2@3@4
(1 row)

and

SELECT generate_subscripts(regexp_split_to_array('bird@dog@cow@ant','@'),
1), regexp_split_to_table('bird@dog@cow@ant','@');
 generate_subscripts | regexp_split_to_table
-+---
   1 | bird
   2 | dog
   3 | cow
   4 | ant
(4 rows)

and I need to know whether the order in these cases is guaranteed. Is there a
more general statement that could be made, something like:
if you use only set-returning functions, and do not join their results,
then the return order of the individual functions will be respected?
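As a point of reference only (a Python sketch, not an answer about PostgreSQL's guarantees): the subscript/element pairing in the second query corresponds to enumerating a single ordered sequence, where the pairing is trivially stable because one sequence is traversed once:

```python
animals = 'bird@dog@cow@ant'.split('@')

# Pair each element with its 1-based subscript, analogous to running
# generate_subscripts(...) alongside regexp_split_to_table(...).
pairs = list(enumerate(animals, start=1))
print(pairs)  # -> [(1, 'bird'), (2, 'dog'), (3, 'cow'), (4, 'ant')]
```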

Thanks in advance,

Ingmar Brouns


[GENERAL] The semantics of (NULL,NULL) vs NULL

2012-08-02 Thread Ingmar Brouns
Hi,

I have a question.

As I noted that (null,null) is null, I wondered why one would put
(null,null) in an array when that is the same as putting null in there.
However, when trying to unnest that array, I got an error when using null
instead of the tuple. I experimented a bit and read the documentation on
row and array comparison, but I could not find an explanation of the
results below.

create type int_tuple as (a int, b int);
CREATE TYPE
=# select (null,null)::int_tuple is null;
 ?column?
--
 t
(1 row)

=# select array[null::int_tuple] = array[null::int_tuple];
 ?column?
--
 t
(1 row)

As the documentation states: 'Array comparisons compare the array contents
element-by-element.' Taking into account the results above, I would expect
the following to be true:

=# select array[(null,null)::int_tuple] = array[null::int_tuple];
 ?column?
--
 f
(1 row)

apparently (null,null) has more information than just null:

=# select null::int_tuple;
 int_tuple
---

(1 row)


=# select (null,null)::int_tuple;
 row
-
 (,)
(1 row)


=# select * from  unnest(array[null::int_tuple]);
ERROR:  function returning set of rows cannot return null value

=# select * from unnest(array[(null,null)::int_tuple]);
 a | b
---+---
   |
(1 row)


Can anyone explain why:


create type int_tuple as (a int, b int);
CREATE TYPE
=# select (null,null)::int_tuple is null;
 ?column?
--
 t
(1 row)

and not

=# select array[(null,null)::int_tuple] = array[null::int_tuple];
 ?column?
--
 f
(1 row)
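One way to see the asymmetry (a loose Python analogy, not PostgreSQL semantics): IS NULL on a composite tests whether every field is NULL, but the all-NULL row is still a real value, distinct from a plain NULL:

```python
row_of_nulls = (None, None)  # like (null,null)::int_tuple
plain_null = None            # like null::int_tuple

# "IS NULL" on a composite asks: are all fields NULL?
assert all(field is None for field in row_of_nulls)

# Yet the all-NULL row is still a container, distinct from plain NULL,
# which is why element-by-element array comparison tells them apart.
assert row_of_nulls is not plain_null
```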


Thanks in advance,

Ingmar



                                version
--------------------------------------------------------------------------
 PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.1
 20110908 (Red Hat 4.6.1-9), 64-bit
(1 row)


[GENERAL] issue, dumping and restoring tables with table inheritance can alter column order

2015-09-14 Thread Ingmar Brouns
Hi,

I'm running:

 PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.9.2
20150212 (Red Hat 4.9.2-6), 64-bit

I make use of table inheritance. I've created a table, added a
child table, and after that added a column to the parent table
(example code below).

As you can see below, the newly added column is listed last in the
child table. I would have expected column 'c' to come after
column 'a', but I can imagine one could argue that putting it
last is safest, as people may have code that depends on column
positions.

However, when I dump the schema using pg_dump, and then load the
resulting sql file, suddenly 'c' does follow 'a'. So restoring my
schema has changed my table's layout. I feel dumping and loading
should not alter column positions. Any thoughts?

create schema test_issue;
create table test_issue.foo ( a integer );
create table test_issue.bar ( b text ) inherits ( test_issue.foo );
alter table test_issue.foo add column c integer;

postgres=# \d test_issue.bar;
Table "test_issue.bar"
 Column |  Type   | Modifiers
+-+---
 a  | integer |
 b  | text|
 c  | integer |
Inherits: test_issue.foo

]$ ~/bin/pg_dump -n test_issue > test_issue.sql;

postgres=# drop schema test_issue cascade;

]$ psql -f test_issue.sql;

postgres=# \d test_issue.bar;
Table "test_issue.bar"
 Column |  Type   | Modifiers
+-+---
 a  | integer |
 c  | integer |
 b  | text|
Inherits: test_issue.foo


Ingmar


[GENERAL] Query performs badly with materialize node

2011-09-19 Thread Ingmar Brouns
Hi,

I have a query for which PostgreSQL 9.0.3 and 9.1 rc1 both come up with what
seems to be a very bad plan when materialize is enabled.
The plan with materialize takes 5 seconds to execute, the plan without 7 ms.

Part of the plan with materialization enabled

#effectively loop over all rows in the participates table
  ->  Nested Loop  (cost=0.00..84762.57 rows=391
width=8) (actual time=0.863..5074.229 rows=45 loops=1)
Join Filter: (ptcp.cars_id = crs.id)
#This next index scan has no condition, so all rows:
->  Index Scan using ptcp_event_fk_i on
participates ptcp  (cost=0.00..51591.41 rows=1105378 width=16) (actual
time=0.024..976.792 rows=1105028 loops=1)
->  Materialize  (cost=0.00..9.82 rows=2
width=8) (actual time=0.000..0.001 rows=2 loops=1105028)
  ->  Index Scan using crs_owner on cars
crs  (cost=0.00..9.81 rows=2 width=8) (actual time=0.021..0.024 rows=2
loops=1)
Index Cond: (owner = $3)


There is an index on the cars_id column of participates
There is an index on the owner column of cars

What I find strange is that there is a conditionless index scan on
participates, retrieving all its rows, and then a nested loop over all those
rows and a materialize node. Because there is an index on ptcp.cars_id, I
would expect the planner to use that index instead of choosing to loop over
1105028 rows. When I disable materialize, this is exactly what it does:

->  Index Scan using crs_owner on cars crs
 (cost=0.00..9.81 rows=2 width=8) (actual time=0.076..0.079 rows=2 loops=1)
  Index Cond: (owner = $3)
->  Index Scan using ptcp_car_fk_i on
participates ptcp  (cost=0.00..779.41 rows=196 width=16) (actual
time=0.057..0.218 rows=22 loops=2)
  Index Cond: (ptcp.cars_id = crs.id)


I know that PostgreSQL's planner is driven by statistics, but this seems
strange...

Any thoughts?

Kind regards,

Ingmar


Re: [GENERAL] Query performs badly with materialize node

2011-09-23 Thread Ingmar Brouns
>
> In short: if you want to get useful commentary on your problem, you
> need to provide a more complete picture.
>
>                        regards, tom lane
>

I've posted the query plans to this list several times now, but they
do not show up, almost as if they are being caught by a spam filter or
something. I wrote to pgsql-general-owner about this a few days ago
but haven't received a reply. Am I missing something?

Kind regards,

Ingmar



[GENERAL] concat(NULL,NULL) returns empty string

2011-12-08 Thread Ingmar Brouns
Hi,

The documentation of the function concat states: 'Concatenate all arguments.
NULL arguments are ignored.'
Intuitively I would expect that

concat(NULL, NULL) IS NULL

but this is not the case, as it equals the empty string. As no empty string
was passed, the empty string seems to come from nowhere. I found this
behavior not immediately clear from the documentation.

Kind regards,

Ingmar Brouns
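The behaviour follows if concat is read as "drop the NULLs, then join whatever remains": with every argument NULL, nothing remains, and joining nothing yields the empty string. An illustrative Python sketch of those semantics (not PostgreSQL's actual implementation):

```python
def sql_concat(*args):
    # PostgreSQL's concat() ignores NULL (None) arguments entirely, so
    # an all-NULL call joins zero pieces and returns '' rather than NULL.
    return ''.join(str(a) for a in args if a is not None)

print(repr(sql_concat(None, None)))   # -> ''
print(sql_concat('a', None, 'b'))     # -> ab
```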


[GENERAL] volatile void returning function not executed as often as expected in sql function

2013-05-15 Thread Ingmar Brouns
Hi,

I have a volatile void returning function that I call in the SELECT
clause of a query. When I execute the query, the function is called
for every row in the result, this is also what I expect. However, if I
embed that same query in an sql function and then call that function
it gets executed only once. This is not what I expect, am I missing
something?

test case:
--

create table foo(a int);
insert into foo values (0),(10),(100);

create or replace function foofunc (a_in int) returns void as
$func$
begin
update foo set a = a+1 where a = a_in;
end;
$func$ language plpgsql;
--increase every a in foo by 1
select foofunc(a) from foo;
--as expected, all records are increased
select * from foo;

--increase every a in foo by 1, but now in function
create or replace function foofunc2() returns void as
$func$
select foofunc(a) from foo
$func$ language sql;
select foofunc2();
--only one record is increased, this is not what I expect
select * from foo;

--cleanup
drop table foo;
drop function foofunc(int);
drop function foofunc2();

*
output:
*

pv=# create table foo(a int);
CREATE TABLE
pv=# insert into foo values (0),(10),(100);
INSERT 0 3
pv=#
pv=# create or replace function foofunc (a_in int) returns void as
pv-# $func$
pv$# begin
pv$# update foo set a = a+1 where a = a_in;
pv$# end;
pv$# $func$ language plpgsql;
CREATE FUNCTION
pv=# --increase every a in foo by 1
pv=# select foofunc(a) from foo;
 foofunc
-



(3 rows)

pv=# --as expected, all records are increased
pv=# select * from foo;
  a
-
   1
  11
 101
(3 rows)

pv=#
pv=# --increase every a in foo by 1, but now in function
pv=# create or replace function foofunc2() returns void as
pv-# $func$
pv$# select foofunc(a) from foo
pv$# $func$ language sql;
CREATE FUNCTION
pv=# select foofunc2();
 foofunc2
--

(1 row)

pv=# --only one record is increased, this is not what I expect
pv=# select * from foo;
  a
-
  11
 101
   2
(3 rows)


pv=# select pg_version();
 pg_version
------------
 9.2.4
(1 row)



Thanks,

Ingmar




[GENERAL] Re: volatile void returning function not executed as often as expected in sql function

2013-05-15 Thread Ingmar Brouns
Right after I sent the mail, I saw it: the problem is that the function
should return setof void instead of void; then it works. Sorry for
bothering.

pv=# --increase every a in foo by 1, but now in function
create or replace function foofunc2() returns setof void as
$func$
select foofunc(a) from foo
$func$ language sql;


pv=# select foofunc2();
 foofunc2
--



(3 rows)

pv=# select * from foo;
  a
-
   1
  11
 101
(3 rows)






Re: [GENERAL] Get / Set Composite Type Fields within triggers

2013-06-24 Thread Ingmar Brouns
> My problem is inside triggers: How can I set or get type field values for
> NEW or OLD? (I need it to check users permissions at row level, etc.)
>
> I tried NEW.((created).by), NEW.(created.by),  NEW.created.by and nothing
> works...
>
> Could you help me please?
>
> Thanks in advance.
> Martin


For accessing it, you could use

(NEW.created).by

For setting it: as far as I know, it's not possible to assign directly
to a field of a composite type in a record,
but something like this would work:

# create table foo (a info_base);

# create or replace function foobartrig() returns trigger AS
$func$
declare rec info_base;
begin
rec := ('AA', (NEW.a).at);
NEW.a := rec;
raise warning '%', NEW;
RETURN NEW;
END;
$func$
language plpgsql;
CREATE FUNCTION


# insert into foo values (('aoeu',current_timestamp)::info_base)
;
WARNING:  ("(AA,""2013-06-24 11:55:55.610049"")")


Hope this was of some help...




Re: [GENERAL] Postgres case insensitive searches

2013-07-01 Thread Ingmar Brouns
On Mon, Jul 1, 2013 at 10:01 AM, Albe Laurenz  wrote:
> bhanu udaya wrote:
>> What is the best way of doing case insensitive searches in postgres using
>> Like.
>>>
>>> Table "laurenz.t"
>>> Column | Type | Modifiers
>>> +-+---
>>> id | integer | not null
>>> val | text | not null
>>> Indexes:
>>> "t_pkey" PRIMARY KEY, btree (id)
>>>
>>>
>>> CREATE INDEX t_val_ci_ind ON t (upper(val) text_pattern_ops);
>>>
>>> ANALYZE t;
>>>
>>> EXPLAIN SELECT id FROM t WHERE upper(val) LIKE 'AB%';
>>>
>>> QUERY PLAN
>>> --
>>> Index Scan using t_val_ci_ind on t (cost=0.01..8.28 rows=1 width=4)
>>> Index Cond: ((upper(val) ~>=~ 'AB'::text) AND (upper(val) ~<~ 'AC'::text))
>>> Filter: (upper(val) ~~ 'AB%'::text)
>>> (3 rows)
>

>
> My solution is fast and efficient, it will call upper() only once
> per query.  I don't see your problem.  Different database systems
> do things in different ways, but as long as you can do what you need
> to do, that should be good enough.
>
> Yours,
> Laurenz Albe
>

I was toying around a little bit with this example, just for my
understanding; the function upper is called for every row in the
result. I think this has something to do with the filter in the plan.
This is what I did:

create table foo as (select md5(random()::text) from
generate_series(1,2.5e6::integer));
-- create a little wrapper function to see when it is called
create or replace function test_upper(text_in TEXT) RETURNS TEXT AS
$func$
begin
raise warning 'called';
return upper(text_in);
end;
$func$ LANGUAGE plpgsql IMMUTABLE;

create index foo_ind on foo (test_upper(md5) text_pattern_ops); -- lots of 'called' output
analyze foo;

-- here you see that the function is called for every row in the result
postgres=#  select * from foo where test_upper(md5) like  'ABAAB%';
WARNING:  called
WARNING:  called
WARNING:  called
   md5
--
 abaab10ff1690418d69c360d2dc9c8fc
 abaab339fb14a7a10324f6007d35599a
 abaab34f0cebabee89fa222bfee7b6ea
(3 rows)


postgres=# explain select * from foo where test_upper(md5) like  'ABAAB%';
  QUERY PLAN
--
 Index Scan using foo_ind on foo  (cost=0.50..14.02 rows=250 width=33)
   Index Cond: ((test_upper(md5) ~>=~ 'ABAAB'::text) AND
(test_upper(md5) ~<~ 'ABAAC'::text))
   Filter: (test_upper(md5) ~~ 'ABAAB%'::text)
(3 rows)


So under my assumption that it is the filter that causes the function
execution, I don't understand
how a row can satisfy

--which I read as >= 'ABAAB' and < 'ABAAC'
((test_upper(md5) ~>=~ 'ABAAB'::text) AND (test_upper(md5) ~<~ 'ABAAC'::text))

and not

(test_upper(md5) ~~ 'ABAAB%'::text)
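For what it's worth, the two range conditions come from the planner rewriting the LIKE prefix into btree bounds; the rewrite can be sketched as follows (illustrative Python, not the server's actual code, ignoring edge cases at the top of the character range):

```python
def prefix_to_range(prefix):
    # LIKE 'AB%' over a btree becomes: value >= 'AB' AND value < 'AC',
    # i.e. the upper bound bumps the last character of the prefix.
    bumped = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, bumped

print(prefix_to_range('ABAAB'))  # -> ('ABAAB', 'ABAAC')
```

The residual Filter re-checks the original pattern on each candidate row, which is why the wrapper function is seen to run once for every returned row.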


Ingmar




[GENERAL] Thread scheduling in postgresql 8.1.3 on windows

2006-03-02 Thread Ingmar Brouns
I'm writing my thesis on performance testing. First I had PostgreSQL 8.0.1 installed, but when stress
testing with multiple threads I got an unacceptable amount of context switches. That's why I installed
8.1.3. With 8.1.3 I don't have the context-switch problem, however now there is a problem with the standard
deviation of response times. My stress-test software uses 5 concurrent threads issuing the same request;
as soon as a response has been received by one of these threads, a new request is issued immediately.
Now the strange thing is that with PostgreSQL 8.1.3 the standard deviation of the response time is large,
in fact much larger than for 8.0.1. I find this strange since all requests are similar and should hence
all be served in approximately the same amount of time (as is the case in 8.0.1). The only reason I can think of
is that PostgreSQL allows some of its worker threads to run more frequently than others. However, since all
requests are similar, I don't see why some threads would be allowed to execute more frequently than others.
Below is the standard deviation of two requests that I've tested.


             standard deviation
             8.0.1     8.1.3
request 1    14 ms     111 ms
request 2    44 ms     706 ms
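The closed-loop load generator described above (each of the 5 threads fires its next request the moment the previous response arrives) can be sketched roughly as follows; request_fn and the counts are placeholders for the real database call:

```python
import statistics
import threading
import time

def stress_test(request_fn, n_threads=5, requests_per_thread=100):
    """Closed-loop load test: each thread issues a request and, as soon
    as the response arrives, immediately issues the next one."""
    durations = []
    lock = threading.Lock()

    def worker():
        for _ in range(requests_per_thread):
            start = time.perf_counter()
            request_fn()  # placeholder for the actual request
            elapsed = time.perf_counter() - start
            with lock:
                durations.append(elapsed)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return statistics.mean(durations), statistics.stdev(durations)
```

A large standard deviation relative to the mean, as in the 8.1.3 column, indicates that identical requests are getting very unequal service.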

Does anyone know why CPU time is not spread evenly over the worker threads?
Thanks in advance,

Ingmar