[GENERAL] hi, is ISO/IEC 13249-6:2006 the standard if I investigate the data mining extension for the SQL language?

2011-09-26 Thread sunpeng
Recently I noticed there are only parts 1-4 and 9-14 in the SQL:2008 standard
ISO/IEC 9075-1:2008. Here it is:
http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=45498
   Why don't they define parts 5-8 in this standard?
   I also noticed there is a standard for data mining, ISO/IEC 13249-6:2006,
here:
http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=38648
   Is that the right standard if I want to investigate the data mining extension
for the SQL language?
Thanks!


[GENERAL] changing from postgreSQL 8.3 to 9.0

2011-09-26 Thread Malm Paul
Hi,
I have a database created on ver 8.3 and have restored it into a new
database in PostgreSQL 9.0.
In the database I'm using a column storing "bytea".

When trying to read the database with my Java application, I have problems
reading from the bytea stream:
the integers read are not correct. I can see that there is something new in ver 9.0:
LC_COLLATE. Does that have something to do with my problem reading from "bytea"?
If so, what is the easiest way to get around this problem when importing the
database that was created in PostgreSQL 8.3?

Kind regards,
Paul



Re: [GENERAL] changing from postgreSQL 8.3 to 9.0

2011-09-26 Thread Andreas Kretschmer
Malm Paul  wrote:

> Hi,
> I have a database created on ver 8.3 and have restored it into a new
> database in PostgreSQL 9.0.
> In the database I'm using a column storing "bytea".
>
> When trying to read the database with my java application, I have problems
> reading from the bytea stream.

You can set bytea_output from 'hex' (now the default) to 'escape'
(the traditional format). You can do that in postgresql.conf (for the
whole cluster), via ALTER DATABASE, or per session.
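
For example (a quick sketch; replace "yourdb" with the real database name):

  SET bytea_output = 'escape';                        -- current session only
  ALTER DATABASE yourdb SET bytea_output = 'escape';  -- default for new sessions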


HTH.


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



[GENERAL] bytea insert difference between 8.3 and 9.x

2011-09-26 Thread Gregg Jaskiewicz
So consider this C++ code, using libpqxx:

string = "INSERT INTO foo(x) VALUES( E'" + T.esc_raw(data) + "' )";

foo(x) is bytea, before you ask.

On 8.3, it works fine.
On 9.x:

ERROR:  invalid byte sequence for encoding "UTF8": 0x00 (if \000 is in
the string).

Now, I can take out the E'' and it will work fine on 9.x, but it will
whine about it on 8.3 (HINT:  Use the escape string syntax for
escapes, e.g., E'\r\n').


I need one piece of code that will work on both; what should I do in
this case?

Thanks.

-- 
GJ



[GENERAL] : PostgreSQL Online Backup

2011-09-26 Thread Venkat Balaji
Hello Everyone,

We have had situations where-in "rsync" was executed without executing
"pg_start_backup()" on the production data directory and on the next runs,
"pg_start_backup()" has been executed with "rsync". This was to avoid high
IO load on production. We ended up getting unmatched files (especially in
pg_clog) and not sure about "base" directory.

Postgres is asking for WAL Archive files dated sometime around 15 days ago.
We are absolutely not sure whats going on.

Is this dangerous for production (like corruption) ? or just the backup will
be invalid ? Please help us know if we have to perform any precautionary
checks on the production cluster.

Apart from firing a checkpoint, does "pg_start_backup()" updates any
dictionary tables or views ? or it updates anything in "pg_xlog"

Looking forward for your help !

Thanks
VB


Re: [GENERAL] New feature: accumulative functions.

2011-09-26 Thread Harald Fuchs
In article 
,
Pavel Stehule  writes:

> 2011/9/25 pasman pasmański :
>> I found second use case. Look at expression:
>> 
>> where left(str,n)='value'
>> 
>> function left(str,n) increases monotonically for str and n. With this
>> feature it can use index on str.
>> 
>> Classic index needs recreating.
>> 

> these use cases are just theory - for example - this case should be
> solved with immutable functions

> I can use a functional index left(str, const) and use a query

> where left(str, const) = left('value', const) and left(str, n) = 'value'

> These are theoretical cases, but these cases should be solved via a
> special data type and GiST index

If I don't misunderstand you, this data type is called 'prefix_range',
available at PgFoundry.




Re: [GENERAL] Sending Results From One Function As Input into Another Function

2011-09-26 Thread Albe Laurenz
Jeff Adams wrote:
> I need to send the results (SETOF RECORDS) from one function into another
> function, to produce another result (SETOF RECORDS). I am not quite sure how
> to get this done. The first function filters a large table down to a more
> manageable dataset. I want to send the results of this first function to
> another function, where computations are performed. I could combine them into a
> single function, but I would lose some flexibility that I would like to
> maintain by keeping the two functions separate. Preliminary research
> suggests that cursors might be the way to go, but I am not too experienced
> with the use of cursors and was unable to find good examples. Any help would
> be greatly appreciated...

Here's an example:

SELECT * FROM test;

 id |  val
----+-------
  1 | one
  2 | two
  3 | three
  4 | four
(4 rows)

CREATE FUNCTION filter() RETURNS refcursor
   LANGUAGE plpgsql STABLE STRICT AS
$$DECLARE
   /* assignment gives the cursor a name */
   curs refcursor := 'curs';
BEGIN
   OPEN curs FOR
  SELECT id, val FROM test WHERE id%2=0;
   RETURN curs;
END;$$;

CREATE FUNCTION compute(curs refcursor) RETURNS text
   LANGUAGE plpgsql STABLE STRICT AS
$$DECLARE
   v test;  -- row type for table
   r text := '';
BEGIN
   LOOP
  FETCH curs INTO v;
  EXIT WHEN v IS NULL;
  r := r || v.val;
   END LOOP;
   RETURN r;
END;$$;

SELECT compute(filter());

 compute
---------
 twofour
(1 row)

Yours,
Laurenz Albe



[GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Edson Carlos Ericksson Richter
Dear experts,

 

I have the following query:

 

select * from notafiscal where numeroctc like 'POA%34345';

 

The prefix is normally 3 characters; the suffix varies.

 

Is PostgreSQL 9.0.4 able to use a BTREE index on notafiscal.numeroctc to
execute this query?

 

Should I create a GiST index or something else to speed up the query?

 

 

Thanks,

 


Edson Carlos Ericksson Richter
SimKorp Infomática Ltda
Fone: (51) 3366-7964
Celular: (51) 8585-0796
www.simkorp.com.br

 



Re: [GENERAL] : PostgreSQL Online Backup

2011-09-26 Thread Venkat Balaji
I tried restoring the backup after taking the full backup.

Below is what I see in the "archive destination".

Postgres was asking for "00010193006F"; I tried to find it, and
below is what I found...

-rw--- 1 postgres postgres 3.3M Sep 26 02:06 00010193006F.gz
-rw--- 1 postgres postgres  219 Sep 26 02:53
00010193006F.00328508.backup.gz

Why is PG (9.0) giving the WAL archive file a "backup.gz" extension?

Please help !

Thanks
VB

On Mon, Sep 26, 2011 at 5:11 PM, Venkat Balaji wrote:

> Hello Everyone,
>
> We have had situations where-in "rsync" was executed without executing
> "pg_start_backup()" on the production data directory and on the next runs,
> "pg_start_backup()" has been executed with "rsync". This was to avoid high
> IO load on production. We ended up getting unmatched files (especially in
> pg_clog) and not sure about "base" directory.
>
> Postgres is asking for WAL Archive files dated sometime around 15 days ago.
> We are absolutely not sure whats going on.
>
> Is this dangerous for production (like corruption) ? or just the backup
> will be invalid ? Please help us know if we have to perform any
> precautionary checks on the production cluster.
>
> Apart from firing a checkpoint, does "pg_start_backup()" updates any
> dictionary tables or views ? or it updates anything in "pg_xlog"
>
> Looking forward for your help !
>
> Thanks
> VB
>
>
>


Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Gregg Jaskiewicz
You can always store it split into two columns in the database.
GiST could also work for you.


[GENERAL] timeline X of the primary does not match recovery target timeline Y

2011-09-26 Thread Adam Tistler
I am trying to set up master/slave warm standby streaming replication (9.1).  I 
am not doing archiving/log shipping.  I have read that you cannot reverse roles 
between the master and slave, and that once a slave is "promoted", you need to 
use  pg_basebackup or rsync to copy files from the newly promoted master to the 
old master.  I am fine with this, however, in my case all I am trying to do is 
re-enable the slave to be in recovery mode. I am doing this by deleting the 
trigger file ( which I had originally created to promote the slave to master ) 
and moving recovery.done to recovery.conf, then restarting postgres.  As a 
result I get the following error:

FATAL:  timeline 2 of the primary does not match recovery target timeline 3

Relevant info:

on master: postgresql.conf

max_wal_senders = 5
wal_keep_segments = 32
checkpoint_segments = 8
wal_level = hot_standby

on slave: recovery.conf

standby_mode = on
primary_conninfo = 'host=172.16.135.205 port=5432 user=replicator 
password=replicator'
trigger_file = '/tmp/pgsql.trigger'

everything in postgresql.conf on the slave is default related to replication.




On a side note, I am slightly confused as to why someone would use "archive 
mode/restore_command" in combination with streaming replication.  As I stated 
above, I am not archiving/restoring on either the master or the slave; I am simply 
relying on streaming replication. Is this the wrong approach?




Thanks in advance for the help.


Re: [GENERAL] changing from postgreSQL 8.3 to 9.0

2011-09-26 Thread Albe Laurenz
Malm Paul wrote:
> I have a database created on ver 8.3 and have restored the database into a
> new database in PostgreSQL 9.0.
> In the database I'm using a column storing "bytea".
>
> When trying to read the database with my java application, I have problems
> reading from the bytea stream.
> The integers read are not correct. I can see that there is something new in
> ver 9.0: LC_COLLATE. Does that have something to do with my problem reading
> from "bytea"?
> If so, what is the easiest way to get around this problem when importing the
> database that was created in PostgreSQL 8.3?

Collation or encoding matter only for textual data, not for binary data.

Did you update your JDBC driver to a version that supports 9.0?

With an old JDBC driver, you might have problems with the new
bytea encoding format in 9.0. You can test by setting
bytea_output to "escape" in postgresql.conf and seeing if that
works around the problem.

Yours,
Laurenz Albe



Re: [GENERAL] bytea insert difference between 8.3 and 9.x

2011-09-26 Thread Gregg Jaskiewicz
On 26 September 2011 14:39, Merlin Moncure  wrote:

> urk -- I have to be honest -- that's a pretty lousy way to send bytea.
> Personally, I'd encode the string as hex and send it like this:
>
> "INSERT INTO foo(x) VALUES( decode('" + hex_string + "'))";
>
> libpqxx doesn't have the ability to parameterize queries?
>

Thanks Merlin.

It does, and that's probably what I'll do. Your solution isn't great
either, because it requires an extra function to be run on the postgresql
side. Me no likeey that ;)
When you say parameterized - it allows you to prepare queries - which
I do in 80% of select/insert/update cases, apart from some older code
that no one wants to touch.
But the time has come for me to act on it, and try to push us forward to
9.1 instead of old rusty 8.3 (which is still better than the 8.1 they used
before I started working here).

m_connection.prepare("INSERT INTO foo(x) VALUES($1)") ("bytea",
pqxx::prepare::treat_binary);

Gotta try that one with both boys ;)

Btw, I hope 9.1.1 is out soon; gotta package some version for tests.
We used floating-point timestamps, and I gotta repackage the CentOS RPMs
with that config option - otherwise it's a pg_dump and restore of gigs
of data.


-- 
GJ



[GENERAL] Pg and compress

2011-09-26 Thread Jov
Hi all,
We are going to use PostgreSQL as a data warehouse, but after some tests we found
that plain text in CSV format becomes 3 times bigger when loaded into PG. We use
COPY to load the data. We tried some optimizations and reduced it to 2.5 times
bigger. Other databases can compress, on average, to 1/3 of the plain text size.
Bigger data means heavier IO.
So my question is: how can data be compressed in PG? Can a filesystem with a
compression feature, such as ZFS or Btrfs, work well with PG?


RES: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Edson Carlos Ericksson Richter
Thanks for the tip; unfortunately, splitting it does not work for me, since it's a 
free text field that users fill in as they wish...

But looking at the slowest queries, this one pops up most frequently...

So GiST might work; I'll give it a try.

 

Regards,

 

Edson.

 

 

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On behalf of Gregg Jaskiewicz
Sent: Monday, September 26, 2011 10:01
To: Edson Carlos Ericksson Richter
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

 

You can always store it divided in the database into two columns.  

Gist could also work for you. 

 





Re: [GENERAL] bytea insert difference between 8.3 and 9.x

2011-09-26 Thread Merlin Moncure
On Mon, Sep 26, 2011 at 5:51 AM, Gregg Jaskiewicz  wrote:
> So consider this code C++, using libpqxx:
>
> string = "INSERT INTO foo(x) VALUES( E'" + T.esc_raw(data) + "' )";
>
> foo(x) is bytea , before you ask.
>
> On 8.3, it works fine.
> On 9.x:
>
> ERROR:  invalid byte sequence for encoding "UTF8": 0x00 (if \000 is in
> the string).
>
> Now, I can take out the E'' and it will work fine on 9.X, but will
> whine about it on 8.3. (HINT:  Use the escape string syntax for
> escapes, e.g., E'\r\n'.)
>
>
> I need one piece of code that will work on both, what should I do in
> this case ?

urk -- I have to be honest -- that's a pretty lousy way to send bytea.
Personally, I'd encode the string as hex and send it like this:

"INSERT INTO foo(x) VALUES( decode('" + hex_string + "'))";

libpqxx doesn't have the ability to parameterize queries?

merlin



Re: [GENERAL] bytea insert difference between 8.3 and 9.x

2011-09-26 Thread Merlin Moncure
On Mon, Sep 26, 2011 at 8:50 AM, Gregg Jaskiewicz  wrote:
> Thanks Merin.
>
> It does, and that's probably what I'll do. Your solution isn't great
> either, because it requires extra function to be run on the postgresql
> side. Me no likeey that ;)

If you are sending a bytea as encoded text, you have to call a
decoding function on the server side no matter what -- it's done
implicitly for you in some cases, though.  The decode() route works across
versions and guarantees the hex method of transfer, not the old escape
method, which is stupid and slow.
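
for example (the hex payload here is made up):

insert into foo(x) values (decode('0001020304ff', 'hex'));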

The only way to avoid sending encoded text is via the binary switch in the
protocol... not sure if libpqxx supports that method of execution.

merlin



[GENERAL] Problem with pg_upgrade from 9.0 to 9.1 under Ubuntu x64

2011-09-26 Thread DUPREZ Cédric
Hi all,

I have a Linux/Ubuntu (10.04) x64 server on which I have installed PostgreSQL 
9.0.4 by compiling from source.
I have also installed PostgreSQL 9.1.0 from source on the same server, and I am 
trying to migrate my data using pg_upgrade.

For PostgreSQL 9.0.4, the installation settings are:
   - bindir: /opt/postgreSQL/9.0/bin/
   - datadir: /opt/postgreSQL/9.0/data/
   - port: 5433 (specified in postgresql.conf)

For PostgreSQL 9.1.0, the installation settings are:
   - bindir: /opt/postgreSQL/9.1/bin/
   - datadir: /opt/postgreSQL/9.1/data/
   - port: 5433 (specified in postgresql.conf)

Both PostgreSQL servers are shut down before migration.
I have compiled pg_upgrade 9.1 from the contrib sources of PostgreSQL 9.1.0.

I use the following command : 
/opt/postgreSQL/9.1/bin/pg_upgrade -v --old-datadir=/opt/postgreSQL/9.0/data/ 
--new-datadir=/opt/postgreSQL/9.1/data/ --old-bindir=/opt/postgreSQL/9.0/bin/ 
--new-bindir=/opt/postgreSQL/9.1/bin/ --old-port=5433 --new-port=5432 
--user=postgres

And I get the following message:

Running in verbose mode
Performing Consistency Checks
-
Checking current, bin, and data directories ok
Checking cluster versions
This utility can only upgrade to PostgreSQL version 9.1.
Failure, exiting

Why do I get this error?

Thanks in advance for your help,

Cedric Duprez





Re: [GENERAL] In which case PG_VERSION file updates ?

2011-09-26 Thread Raghavendra
>
> From what I can see, PG_VERSION is written at database creation to document
> the major version of the Postgres instance used to create the database.  Since
> it only holds the major version string (i.e. 9.0) it is not touched during
> minor updates, for example 9.0.0 --> 9.0.1.  Grepping the pg_upgrade code shows
> it might touch PG_VERSION.  A quick grep on the rest of the source code shows
> only initdb writing out PG_VERSION, though it is checked by other code. This is
> by no means an in-depth look and it would take someone with more knowledge of
> Postgres internals to give you a definitive answer. Might be worth a post on
> -hackers.


Thank you Adrian Klaver / Scott Marlowe for your valuable inputs. That
clarifies things for me.

As said, the PG_VERSION file is created at the time of database creation.
Sometimes the question arises of when a database was created. We don't have
any information for this in the pg_catalog tables, so the PG_VERSION file
timestamp is what we can use to pull the database creation time.
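
For example, a superuser can read that timestamp from SQL (a rough sketch; it
assumes the database lives in the default tablespace, so its directory is
base/<oid>):

SELECT datname,
       (pg_stat_file('base/' || oid::text || '/PG_VERSION')).modification AS created
FROM pg_database
WHERE datname = current_database();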

However, with your inputs it is now clear when the PG_VERSION file is touched.
If pg_upgrade or anything else in the Postgres internals touches PG_VERSION,
its timestamp will no longer give the exact database creation time.

I do not know why the database creation time is not recorded in the
pg_catalogs.

-- Raghav


Re: [GENERAL] Sending Results From One Function As Input into Another Function

2011-09-26 Thread Merlin Moncure
On Mon, Sep 26, 2011 at 6:49 AM, Albe Laurenz  wrote:
> Jeff Adams wrote:
>> I need to send the results (SETOF RECORDS) from one function into
> another
>> function, to produce another result (SETOF RECORDS). I am not quite
> sure how
>> to do get this done. The first function filters a large table down a
> more
>> manageable dataset. I want to send the results of this first function
> to
>> another function, where computations are performed. I could combine
> into a
>> single function, but I would lose some flexibility that I would like
> to
>> maintain by keeping the two functions separate. Preliminary research
>> suggests that cursors might be the way to go, but I am not too
> experienced
>> with the use of cursors and was unable to find good examples. Any help
> would
>> be greatly appreciated...
>
> Here's an example:
>
> SELECT * FROM test;
>
>  id |  val
> ----+-------
>  1 | one
>  2 | two
>  3 | three
>  4 | four
> (4 rows)
>
> CREATE FUNCTION filter() RETURNS refcursor
>   LANGUAGE plpgsql STABLE STRICT AS
> $$DECLARE
>   /* assignment gives the cursor a name */
>   curs refcursor := 'curs';
> BEGIN
>   OPEN curs FOR
>      SELECT id, val FROM test WHERE id%2=0;
>   RETURN curs;
> END;$$;
>
> CREATE FUNCTION compute(curs refcursor) RETURNS text
>   LANGUAGE plpgsql STABLE STRICT AS
> $$DECLARE
>   v test;  -- row type for table
>   r text := '';
> BEGIN
>   LOOP
>      FETCH curs INTO v;
>      EXIT WHEN v IS NULL;
>      r := r || v.val;
>   END LOOP;
>   RETURN r;
> END;$$;
>
> SELECT compute(filter());
>
>  compute
> ---------
>  twofour
> (1 row)

Another method of doing this which I like to point out is via arrays
of composite types.  It's suitable when the passed sets are relatively
small (say, fewer than 10k rows) and is more flexible -- forcing all data
manipulation through FETCH is (let's be frank) pretty awkward, and with
some clever work you can also involve the client application in a more
regular way.  You can use an implicit table type or a specially defined
composite type to convey the data:

create type t as (a int, b text, c timestamptz);

create function filter() returns t[] as
$$
  select array(select row(a,b,c)::t from foo);
$$ language sql;

create function do_stuff(_ts t[]) returns void as
$$
declare
  _t t;
begin
  foreach _t in array _ts
  loop
    raise notice '%', _t;
  end loop;
end;
$$ language plpgsql;
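
Called the same way as the cursor version above (assuming, as the sketch
already does, a table foo with columns a, b, c):

select do_stuff(filter());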

note: the FOREACH ... IN ARRAY feature is new in 9.1 -- on 8.4+ use unnest() --
before that you have to hand-roll unnest().

merlin



Re: [GENERAL] : PostgreSQL Online Backup

2011-09-26 Thread Albe Laurenz
Venkat Balaji wrote:
> We have had situations where-in "rsync" was executed without executing
"pg_start_backup()" on the
> production data directory and on the next runs, "pg_start_backup()"
has been executed with "rsync".
> This was to avoid high IO load on production. We ended up getting
unmatched files (especially in
> pg_clog) and not sure about "base" directory.
> 
> Postgres is asking for WAL Archive files dated sometime around 15 days
ago. We are absolutely not sure
> whats going on.
> 
> Is this dangerous for production (like corruption) ? or just the
backup will be invalid ? Please help
> us know if we have to perform any precautionary checks on the
production cluster.
> 
> Apart from firing a checkpoint, does "pg_start_backup()" updates any
dictionary tables or views ? or
> it updates anything in "pg_xlog"
> 
> Looking forward for your help !

I am not sure what the problem is.

Do you have problems starting the original PostgreSQL cluster,
or do you have problems restoring a backup?

Running pg_start_backup() will not harm the cluster.
End online backup mode by running pg_stop_backup() or removing
the backup_label file in the cluster directory.
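
For reference, the plain sequence looks like this (the label is arbitrary; the
file-level copy of the data directory happens in between):

SELECT pg_start_backup('my_base_backup');
-- ... rsync/copy the data directory here ...
SELECT pg_stop_backup();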

Yours,
Laurenz Albe



RES: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Edson Carlos Ericksson Richter
Just discovered, I can't use GiST over character varying...

 

Any other tips on how to optimize the query? There are about 1,000,000 (one 
million) records in this table; a table scan takes about 5 to 6 seconds on the current 
hardware (SAS, 1 Xeon, 2 GB memory on CentOS with all the normal performance hacks).

 

select * from notafiscal where numeroctc like 'POA%34345';

 

 

Thanks,

 

Edson.

 

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On behalf of Edson Carlos Ericksson 
Richter
Sent: Monday, September 26, 2011 11:03
To: pgsql-general@postgresql.org
Subject: RES: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX 
queries?

 

Thanks for the tip, unfortunately, split it does not work for me, since it’s a 
free text field, that users fill as they wish...

But looking most slow queries, this one pops up with more frequency...

 

So GIST would work; I’ll give a try.

 

Regards,

 

Edson.

 

 

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On behalf of Gregg Jaskiewicz
Sent: Monday, September 26, 2011 10:01
To: Edson Carlos Ericksson Richter
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

 

You can always store it divided in the database into two columns.  

Gist could also work for you. 

 





Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Gregg Jaskiewicz
You can create your own type, but that means writing a bit of code in C.


Please, stop the top posting!



Re: [GENERAL] upgrade postgres to 8.4.8, centos 5.3

2011-09-26 Thread MirrorX
thx a lot for your help. it worked great :)

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/upgrade-postgres-to-8-4-8-centos-5-3-tp4822762p4841782.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Alban Hertroys
On 26 September 2011 17:15, Edson Carlos Ericksson Richter
 wrote:
> Just discovered, I can't use GIST over character varying...

Why do you think that?

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



[GENERAL] Batching up data into groups of n rows

2011-09-26 Thread Andy Chambers
Hi,

I have a need to write a query that batches up rows of a table into
groups of n records.  I feel like this should be possible using the
existing window functions but I can't figure out how to do it from the
examples.

So I have some table, let's say

create table addresses (
  line_1 text,
  line_2 text,
  city text,
  state text,
  zip text);

...and I want to select the data from that table, adding a "group_id"
column, and a "record_id" column.  The "group_id" would start at 1,
and increment by 1 every 100 rows, and the "record_id" would increment
by 1 every row, but restart at 1 for each new group_id.

Thanks,
Andy



Re: [GENERAL] looking for a faster way to do that

2011-09-26 Thread hamann . w

Eduardo Morras  wrote:

>> 
>> At 08:04 25/09/2011, haman...@t-online.de wrote:
>> 
>> > >> select * from items where regexp_matches(items.code,'(ABC) (DE1)
>> > >> (any_substring)')<>{};
>> > >>
>> >
>> >Hi Eduardo,
>> >
>> >it is clear that scanning the table once with a list of matches will 
>> >outperform
>> >rescanning the table for every string wanted. Now, my problem is 
>> >that the patterns are
>> >dynamic as well. So if I could translate a table with one 
>> >column  and a few thousand rows
>> >into something like
>> >regexp_matches(code,'string1|string2|.string2781')
>> >would ideally be a performant query. Unfortunately I have no idea 
>> >how I could achieve this
>> >transformation inside the database. Doing it externally fails, 
>> >because any single query cannot
>> >be more than so few characters.
>> 
>> You can create a plsql function and pass a setof text that do it. 
>> Sorry but instead saying you What Must You Type, WMYT(c), i prefer 
>> the How Should You Do way, HSYD(c). Note that you can get the same 
>> results using other approachs (f.ex. using FTS described in chapter 12)
>> 
>> Check this topics:
>> 
>> Function 
>> Creation  http://www.postgresql.org/docs/9.0/static/sql-createfunction.html
>> 
>> Tutorial about Function 
>> Creation  http://www.adderpit.com/practical-postgresql/x10374.htm
>> 

Hi,

I tried the pl/pgsql approach to convert the contents of that patterns table into 
a regex.
Results: 40 seconds runtime for 9500 candidates and 815 patterns;
718 seconds for the same set of 9500 candidates, but using 4000 patterns 
instead.
So it seems that I am reaching the limits of pattern matching.
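
For reference, the combined expression I build looks roughly like this (the
"patterns"/"pat" names are placeholders, and the real code is a pl/pgsql function
rather than a one-liner):

select code from items
 where code ~ array_to_string(array(select pat from patterns), '|');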


As for the fulltext index (and the underlying tsquery): this is an exact match
rather than a prefix match, so I would need to know the match patterns in advance
in order to build the index.

I am thinking about that anyway (because ABC1234 likely should not match the
ABC123 pattern in my context), but I would sort of prefer a system where I can
state the rules when I see the data set, rather than having to pre-create an index.

Thanks for the tutorial link :)
It seems that the responses to my post give all sorts of input that will help
me with other tasks.

Regards
Wolfgang Hamann








Re: [GENERAL] looking for a faster way to do that

2011-09-26 Thread hamann . w


Alban Hertroys  wrote:

>> > Hi,
>> >
>> > the strings are not really known before.
>> > Let me explain the scenario; there is one table about products, and code is the
>> > manufacturer's (or resellers') product id.
>> > So, if ABC were a maker of laptops, ABC123 and ABC456 might be two different machines,
>> > and ABC123G might have G3 mobile installed, or ABC123X might be the same thing
>> > with extra memory. Obviously these device variants all look the same.
>> > Now reseller sends us a collection of product images, so there would be ABC123.jpg
>> > and ABC456.jpg
>> > The database task at hand is matching products to images (and then inserting the image
>> > name into a column of the products table).
>> 
>> 
>> I guessed right then. The origin of your problem is that you have similar items in your
>> database, but the database doesn't know they are similar. I'd fix that first, it makes
>> the problem a whole lot easier to handle.
>> 
>> For example, if an image comes in named ABC123G.jpg, you look up the product and
>> manufacturer and update its image. Then you query for products of the same manufacturer
>> that are similar to ABC123G (result: ABC123 and ABC123X) and update their images as
>> well (if appropriate; perhaps they have a recent enough image of their own?).
>> 
>> 
>> As another whacky alternative to your regular expressions; I think it would be possible
>> to abuse the text-search functionality in Postgres to match product id's. Those id's are
>> basically a language per manufacturer describing product details.
>> 
>> If you can split the product id's up into lexemes that describe the base product id and
>> its options, then you can use full-text search to match up expressions similar to the
>> lexemes derived from the image name.
>> 
>> For example:
>>  productid | lexemes
>> -----------+-------------------
>>  ABC123    | {'ABC' '123'}
>>  ABC123G   | {'ABC' '123' 'G'}
>>  ABC123X   | {'ABC' '123' 'X'}
>>  ABC456    | {'ABC' '456'}
>> 
>> I'm not really sure if that's possible, or how much work it would be per manufacturer -
>> I haven't used FTS much.
>> 
>> I'd first see if I couldn't add that similarity information to the products table, though ;)
>> 
>> Alban Hertroys
>> 
>> --
>> The scale of a problem often equals the size of an ego.
>> 

Hi,

the actual process tends to be
- add products to database
- then receive images and try to match them to products.

So I know about the naming scheme only when I see a list of files, and would 
want to have
a cmdline option for my matching script that distinguishes formats like
a) exact
b) alpha suffix following numeric main body
c) period or slash between main and related
To make things even more complex, I might receive images from a reseller that 
offers
a few manufacturers using different conventions.

I wonder whether this would translate well into building a temporary index, if 
I detect b or c
patterns.

When I asked first, I also started working on a different approach. This did 
work on the
current case (I got similar datasets before, but with considerably fewer 
items), so I will
probably try that and some ideas I got from this discussion, and see how far I 
get.
This approach is a separate perl script that builds a tree structure of all the 
image names
and then tries to create a regex for a crude filter. In the example it would 
have
determined that all images in the lot match an ABC prefix. Then it selects all 
matching
codes (so it can work with the entire database) and runs them through the same 
tree
structure.

Regards
Wolfgang Hamann





RES: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Edson Carlos Ericksson Richter
Because the error message says so (I use pgAdmin III):

"An error has occurred:

ERROR: data type character varying has no default operator class for
access method "gist"
HINT: You must specify an operator class for the index or define a
default operator class for the data type.

(I've translated the above message from Portuguese to English, sorry if it's
not exact.)

That's why I can't use GIST.

Thanks,

Edson.


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On behalf of Alban Hertroys
Sent: Monday, September 26, 2011 12:37
To: Edson Carlos Ericksson Richter
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX
queries?

On 26 September 2011 17:15, Edson Carlos Ericksson Richter
 wrote:
> Just discovered, I can't use GIST over character varying...

Why do you think that?

--
If you can't see the forest for the trees, Cut the trees and you'll see
there is no forest.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] "all" not inclusive of "replication" in pg_hba.conf

2011-09-26 Thread Merlin Moncure
On Sat, Sep 24, 2011 at 10:07 PM, Rajesh Kumar Mallah
 wrote:
> Dear List ,
>
> It is been found that the entry
>
> local   all         all                               trust
>
> does not renders below redundant in pg_hba.conf
>
> local replication   replicator01         trust

I noticed this too, and I think it should.  Either way, the
documentation isn't clear on this point -- either 'all' should include
the faux replication database or it should be noted in appropriate
places that 'all' doesn't/can't do that.

merlin



RES: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Edson Carlos Ericksson Richter
Sorry, Gregg. I did not notice I was disturbing...

Can you please tell me what you mean by "top posting"? I've created a 
specific topic for this discussion, and I'm not using "urgent" or HTML 
format... so I supposed I was following the rules...

BTW, I repeated the query to make it easier for people trying to help me... 
instead of having to search for the mail thread again (not everybody organizes the 
mailing list in threads)...

Or should I not ask for help on the issues I'm facing?


Regards,

Edson.


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On behalf of Gregg Jaskiewicz
Sent: Monday, September 26, 2011 12:28
To: Edson Carlos Ericksson Richter
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX 
queries?

You can create your own type, but that means writing bit code in C.


Please, stop the top posting!

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make 
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] looking for a faster way to do that

2011-09-26 Thread Eduardo Morras

At 18:18 26/09/2011, you wrote:


Eduardo Morras  wrote:

>>
>> At 08:04 25/09/2011, haman...@t-online.de wrote:
>>
>> > >> select * from items where regexp_matches(items.code,'(ABC) (DE1)
>> > >> (any_substring)')<>{};
>> > >>
>> >
>> >Hi Eduardo,
>> >
>> >it is clear that scanning the table once with a list of matches will
>> >outperform
>> >rescanning the table for every string wanted. Now, my problem is
>> >that the patterns are
>> >dynamic as well. So if I could translate a table with one
>> >column  and a few thousand rows
>> >into something like
>> >regexp_matches(code,'string1|string2|.string2781')
>> >would ideally be a performant query. Unfortunately I have no idea
>> >how I could achieve this
>> >transformation inside the database. Doing it externally fails,
>> >because any single query cannot
>> >be more than so few characters.
>>
>> You can create a plsql function and pass a setof text that do it.
>> Sorry but instead saying you What Must You Type, WMYT(c), i prefer
>> the How Should You Do way, HSYD(c). Note that you can get the same
>> results using other approachs (f.ex. using FTS described in chapter 12)
>>
>> Check this topics:
>>
>> Function
>> 
Creation  http://www.postgresql.org/docs/9.0/static/sql-createfunction.html

>>
>> Tutorial about Function
>> Creation  http://www.adderpit.com/practical-postgresql/x10374.htm
>>

Hi,

I tried the pl/sql approach to convert the contents of that patterns 
table into a regex.

Results: 40 seconds runtime for 9500 candidates and 815 patterns
718 seconds for the same set of 9500 candidates, but using 4000 
patterns instead.

So it seems that I am reaching limits of pattern match


Perhaps calling the function twice with half the values each time goes faster. 
How do you call the function, with EXECUTE or SELECT? If you use EXECUTE, 
then the plan prepared in a previous call is ignored, which is usually 
faster. I don't know if it will run faster in your case, but you can try it.


As for the fulltext index (and the underlying tsquery): this is an 
exact match rather than prefix
match, so I would need to know match patterns in advance in order to 
build the index


I am thinking about that anyway (because ABC1234 likely should not 
match ABC123 pattern
in my context), but I would sort of prefer a system where I can 
state the rules when I

see the data set, rather than having to pre-create an index.

Thanks for the tutorial link :)


It's the one I used some time ago. A bit old, but a very good one.

It seems that the responses on my post give all sorts of input that 
will help me on other

tasks



Regards
Wolfgang Hamann






Re: [GENERAL] : PostgreSQL Online Backup

2011-09-26 Thread Alan Hodgson
On September 26, 2011 05:49:50 AM Venkat Balaji wrote:
> I tried restoring the backup, after taking the full backup.
> 
> Below is what i see in the "archive destination".
> 
> Postgres was asking for "00010193006F" and i tried to find the
> same and below is what i find...
> 
> -rw--- 1 postgres postgres 3.3M Sep 26 02:06
> 00010193006F.gz -rw--- 1 postgres postgres  219 Sep 26
> 02:53
> 00010193006F.00328508.backup.gz
> 
> Why is PG (9.0) putting an extension for the WAL Archive file as
> > "backup.gz" ??
>

The archive files are created by your archive_command, as specified in 
postgresql.conf. My guess would be that your archive command runs the files 
through gzip as part of archiving (which is fine).

However, the restore_command you specify in recovery.conf must undo this 
compression. So instead of (for example) 'cp -f "%f" "%p"', it might 
need to look like 'zcat "%f" > "%p"'.

Hope this helps.

 



[GENERAL] Searching through trigger functions

2011-09-26 Thread Bob Pawley
Hi

I have an error somewhere in one of the trigger functions in my database. The 
error message only told me it was a 'Select * From (table name)'.

Is there a global method of finding where this expression resides, other than 
searching each script individually?

Bob

Re: [GENERAL] Searching through trigger functions

2011-09-26 Thread Merlin Moncure
On Mon, Sep 26, 2011 at 12:47 PM, Bob Pawley  wrote:
> Hi
>
> I have an error somewhere in one of the trigger functions in my database.
> The error message only told me it was a ‘Select * From (table name).
>
> Is there a global method of finding where this expression resides other than
> searching each script individually.

all function bodies are in pg_proc.prosrc.  search that...
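
for example (the table name is a placeholder):

select proname from pg_proc where prosrc ilike '%select * from some_table%';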

merlin



Re: [GENERAL] Searching through trigger functions

2011-09-26 Thread Alan Hodgson
On September 26, 2011 10:47:20 AM Bob Pawley wrote:
> Hi
> 
> I have an error somewhere in one of the trigger functions in my database.
> The error message only told me it was a ‘Select * From (table name).
> 
> Is there a global method of finding where this expression resides other
> than searching each script individually.
>

pg_dump -s > file; vi file; /s search_string

Alternately, I find it good practice to maintain function definitions outside 
of 
the database, under version control. You can then grep those files as needed.



[GENERAL] ECPG Segmentation fault on retrieving timestamp column into sqlda.

2011-09-26 Thread Day, David
Hi,


If I am retrieving row(s) into a SQL descriptor in ECPG, and my last row 
column is of type timestamp, then I will consistently receive a segmentation 
fault as shown below.  If I redefine the table so that the timestamp field is 
not the last field of the table row, the routine runs successfully.
This seems like a bug to me that I will work around by not having a timestamp 
column as my last column.

I'd put it in the bug category, but I am fairly new to Postgres, so I would like 
to see if there are other thoughts on the observation.

I am using Ubuntu 10.04 with Postgres v9.0.3.

Tnx

Dave


Here is the relevant logic in my ecpg file:


   EXEC SQL AT :cid PREPARE ps_registersipusr FROM
"SELECT * FROM RegisterSipUser(?, ?, ?, ?, ?, ?)";

 sqlda_t* sqlda = NULL;

97   EXEC SQL AT :cid DECLARE cur_regsip CURSOR FOR ps_registersipusr;
98EXEC SQL AT :cid OPEN cur_regsip USING :portid, :callid, :contact, :uri, 
:seq, :expiration;
99EXEC SQL AT :cid FETCH ALL FROM cur_regsip INTO DESCRIPTOR sqlda;

Here is the distilled-down dump received when the expiration/timestamp 
column is my last column in the row.

[21304]: ecpg_execute on line 98: using PQexecParams
..
[21304]: ecpg_execute on line 99: correctly got 2 tuples with 6 fields
[..
[21304]: ecpg_set_native_sqlda on line 99 row 1 col 5 IS NOT NULL
..
[21304]: ecpg_set_native_sqlda on line 99 row 0 col 5 IS NOT NULL
[21304]: ecpg_get_data on line 99: RESULT: 2000-01-01 00:21:56.639667 offset: 
0; array: no
[21304]: ecpg_execute on line 99: putting result (1 tuple 6 fields) into sqlda 
descriptor
*** glibc detected *** /space/dday/av/debug/i686-pc-linux-gnu/bin/dbm: double 
free or corruption (out):




[GENERAL] Quick-and-Dirty Data Entry with LibreOffice3?

2011-09-26 Thread Rich Shepard

  Rather than writing an application right now to enter data into a table I
thought of trying LibreOffice as a front end. But, it doesn't seem to work
as OO.o did. This leads to two questions:

  1)  Can someone show me how to use LO as a front end to a postgres table?

  2)  Is there another tool suitable for a linux box for some data entry
work?

Rich



Re: [GENERAL] "all" not inclusive of "replication" in pg_hba.conf

2011-09-26 Thread Guillaume Lelarge
On Mon, 2011-09-26 at 11:58 -0500, Merlin Moncure wrote:
> On Sat, Sep 24, 2011 at 10:07 PM, Rajesh Kumar Mallah
>  wrote:
> > Dear List ,
> >
> > It is been found that the entry
> >
> > local   all all   trust
> >
> > does not renders below redundant in pg_hba.conf
> >
> > local replication   replicator01 trust
> 
> I noticed this too, and I think it should.  Either way, the
> documentation isn't clear on this point -- either 'all' should include
> the faux replication database or it should be noted in appropriate
> places that 'all' doesn't/can't do that.
> 

"all" includes all real databases, not "virtual" one. The documentation
could probably be clearer, but "all" shouldn't include the virtual
"replication" database.


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Filip Rembiałkowski
Edson,

1. PostgreSQL IS able to use a btree index to execute this query.
More generally, it is able to use a btree index for any PREFIX search.

2. You will need a special (NOT spatial) index for it:
CREATE INDEX notafiscal_numeroctc_tpo_idx ON notafiscal (numeroctc
text_pattern_ops);
(see http://www.postgresql.org/docs/9.0/interactive/indexes-opclass.html for
an explanation).
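
A quick way to check whether the planner picks it up (the actual plan will
depend on your data and statistics):

EXPLAIN SELECT * FROM notafiscal WHERE numeroctc LIKE 'POA%34345';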


Hope this helped.



2011/9/26 Edson Carlos Ericksson Richter 

> Dear experts,
>
> I have the following query:
>
> select * from notafiscal where numeroctc like 'POA%34345';
>
> Prefix is normally 3 characters, suffix varies.
>
> Is Postgresql 9.0.4 able to use an BTREE index on notafiscal.numeroctc to
> execute this query?
>
> Should I create GIST index or something else to speed up the query?
>
> Thanks,
>
> Edson Carlos Ericksson Richter
> SimKorp Infomática Ltda
> Fone: (51) 3366-7964
> Celular: (51) 8585-0796
> www.simkorp.com.br
>


Re: [GENERAL] "all" not inclusive of "replication" in pg_hba.conf

2011-09-26 Thread Merlin Moncure
On Mon, Sep 26, 2011 at 1:47 PM, Guillaume Lelarge
 wrote:
> On Mon, 2011-09-26 at 11:58 -0500, Merlin Moncure wrote:
>> On Sat, Sep 24, 2011 at 10:07 PM, Rajesh Kumar Mallah
>>  wrote:
>> > Dear List ,
>> >
>> > It is been found that the entry
>> >
>> > local   all         all                               trust
>> >
>> > does not renders below redundant in pg_hba.conf
>> >
>> > local replication   replicator01         trust
>>
>> I noticed this too, and I think it should.  Either way, the
>> documentation isn't clear on this point -- either 'all' should include
>> the faux replication database or it should be noted in appropriate
>> places that 'all' doesn't/can't do that.
>>
>
> "all" includes all real databases, not "virtual" one. The documentation
> could probably be clearer, but "all" shouldn't include the virtual
> "replication" database.

ok, what's your rationale for that? pg_hba.conf is a rule based system
with no distinction given for real vs virtual databases.  what if we
create more virtual databases?  do you always have to explicitly create a
rule for each database for each user?  IMSNHO, the more I think about
it, the more I think the current behavior is broken.

merlin



Re: [GENERAL] "all" not inclusive of "replication" in pg_hba.conf

2011-09-26 Thread Guillaume Lelarge
On Mon, 2011-09-26 at 14:13 -0500, Merlin Moncure wrote:
> On Mon, Sep 26, 2011 at 1:47 PM, Guillaume Lelarge
>  wrote:
> > On Mon, 2011-09-26 at 11:58 -0500, Merlin Moncure wrote:
> >> On Sat, Sep 24, 2011 at 10:07 PM, Rajesh Kumar Mallah
> >>  wrote:
> >> > Dear List ,
> >> >
> >> > It is been found that the entry
> >> >
> >> > local   all all   trust
> >> >
> >> > does not renders below redundant in pg_hba.conf
> >> >
> >> > local replication   replicator01 trust
> >>
> >> I noticed this too, and I think it should.  Either way, the
> >> documentation isn't clear on this point -- either 'all' should include
> >> the faux replication database or it should be noted in appropriate
> >> places that 'all' doesn't/can't do that.
> >>
> >
> > "all" includes all real databases, not "virtual" one. The documentation
> > could probably be clearer, but "all" shouldn't include the virtual
> > "replication" database.
> 
> ok, what's your rationale for that? pg_hba.conf is a rule based system
> with no distinction given for rule vs virtual databases.  what if we
> create more virtual databases?  do you always have explicitly create a
> rule for each database for each user?  IMSNHO, the more I think about
> it, the more I think current behavior is broken.
> 

And I would agree (that the current behaviour is broken). Using a
database name as a flag for a replication connection was an idea that seemed
good but isn't. But, actually, I failed to find a better one.


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [GENERAL] "all" not inclusive of "replication" in pg_hba.conf

2011-09-26 Thread Merlin Moncure
On Mon, Sep 26, 2011 at 2:23 PM, Guillaume Lelarge
 wrote:
> And I would agree (that the current behaviour is broken). Using a
> database name as a flag to replication connection was a false good idea.
> But, actually, I failed to find a better one.

Well, that may or may not be a good idea, but that's a separate issue.
pg_hba.conf has a very specific way of working, and the replication
'database' doesn't work that way -- it should follow the same rules
the other databases do, since it's stored in the same area and should
implicitly use the same mechanics.  A cleaner way of doing it might
have been to introduce a separate area for virtual databases, for
example (and this might have mitigated pain for the non-zero chance
of users that already have a database named 'replication').

Maybe it's too late to change it now, though :(. We should do
something about the current situation, even if the best we can come up
with is putting a clearly worded disclaimer into the docs.  I still
think it's better to make 'all' work, though.

merlin



Re: [GENERAL] Pg and compress

2011-09-26 Thread John R Pierce

On 09/26/11 6:59 AM, Jov wrote:


Hi all,
We are going to use pg as data warehouse,but after some test,we found 
that plain text with csv format is 3 times bigger when load to pg.we 
use copy to load data.we try some optimize and it reduce to 2.5 times 
bigger.other db can  avarage compress  to 1/3 of the plain text.bigger 
data means heavy io.
So my question is how to make data compressed in pg?is some fs  such 
as zfs,berfs with compression feature can work well with pg?




Your source data is CSV; what data types are the fields in the table(s)?
Do you have a lot of indexes on the table(s)?




--
john r pierceN 37, W 122
santa cruz ca mid-left coast




Re: [GENERAL] Batching up data into groups of n rows

2011-09-26 Thread Marti Raudsepp
On Mon, Sep 26, 2011 at 18:59, Andy Chambers  wrote:
> ...and I want to select the data from that table, adding a "group_id"
> column, and a "record_id" column.  The "group_id" would start at 1,
> and increment by 1 every 100 rows, and the "record_id" would increment
> by 1 every row, but restart at 1 for each new group_id.

I can't see why you would want this, but regardless...

Since you didn't list a primary key for the table, I'm using "ctid" as
a substitute. But that's a PostgreSQL-specific hack, normally you
would use the real primary key instead.

update addresses set group_id=(nr/100)+1, record_id=(nr%100)+1 from
  (select ctid, row_number() over () -1 as nr from addresses) as subq
  where subq.ctid=addresses.ctid;

This isn't going to be fast for a large table as it will effectively
make 3 passes over the table, but it will get the job done.

If you want the numbers to be ordered by certain columns, you'd use
row_number() OVER (ORDER BY col1, col2)
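
If you don't want to add columns to the table at all, the same numbering works
in a plain SELECT (a sketch against the addresses definition from your mail;
put an ORDER BY inside OVER () for a deterministic order):

SELECT (nr / 100) + 1 AS group_id,
       (nr % 100) + 1 AS record_id,
       line_1, line_2, city, state, zip
FROM (SELECT *, row_number() OVER () - 1 AS nr FROM addresses) AS sub;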

Regards,
Marti



Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Marti Raudsepp
On Mon, Sep 26, 2011 at 15:16, Edson Carlos Ericksson Richter
 wrote:
> select * from notafiscal where numeroctc like 'POA%34345';
>
> Prefix is normally 3 characters, suffix varyies.
>
> Is Postgresql 9.0.4 able to use an BTREE index on notafiscal.numeroctc to 
> execute this query?

As mentioned by other posters, you should use a btree index with
text_pattern_ops opclass to speed up this query.

For queries like these, it's often faster to match the text in
*reverse*. You can create two indexes like this:

create index on foobar (txt text_pattern_ops);
create index on foobar (reverse(txt) text_pattern_ops);

And then write your queries like this:
SELECT * FROM foobar
WHERE txt like 'POA%34345'
  AND reverse(txt) like reverse('POA%34345');

PostgreSQL will automatically choose one or both indexes for executing
this query.

Regards,
Marti



RES: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Edson Carlos Ericksson Richter
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On behalf of Marti Raudsepp
> Sent: Monday, September 26, 2011 17:42
> To: Edson Carlos Ericksson Richter
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX
> queries?
> 
> On Mon, Sep 26, 2011 at 15:16, Edson Carlos Ericksson Richter
>  wrote:
> > select * from notafiscal where numeroctc like ‘POA%34345’;
> >
> > Prefix is normally 3 characters, suffix varyies.
> >
> > Is Postgresql 9.0.4 able to use an BTREE index on notafiscal.numeroctc
> to execute this query?
> 
> As mentioned by other posters, you should use a btree index with
> text_pattern_ops opclass to speed up this query.
> 
> For queries like these, it's often faster to match the text in *reverse*.
> You can create two indexes like this:
> 
> create index on foobar (txt text_pattern_ops);
> create index on foobar (reverse(txt) text_pattern_ops);

I got the following error:

ERROR: function reverse(text) does not exist
SQL state: 42883
Hint: No function matches the given name and argument types. You might need to 
add explicit type casts.
Character: 29

> 
> And then write your queries like this:
> SELECT * FROM foobar
> WHERE txt like 'POA%34345'
>   AND reverse(txt) like reverse('POA%34345');
> 
> PostgreSQL will automatically choose one or both indexes for executing
> this query.
> 
> Regards,
> Marti
> 
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
> changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Solved] Generic logging system for pre-hstore using plperl triggers

2011-09-26 Thread Diego Augusto Molina
Hi, I had to implement a logging system for some DBs at work. It's
generic and implemented using plperl. I've seen tons of mails on the
list from newbies asking for something like this using plpgsql, but no
specific solution is pointed out for them. I think this may discourage
some of them.
The system is implemented in a separate schema with whatever name
you want, and has some really nice features: relevant tables can be
rotated to facilitate auditing, each logged action refers to the
modified tuple by pk, which you don't even have to (but can) specify
when creating the trigger, and some more. Performance is very
acceptable (test cases and suggestions are welcome), and I've never had
a problem in the roughly 8 months it's been working.
In the wiki some points are mentioned, but it's all _too_ general, and
on a vanilla pg-8.4 you won't have the hstore facility (note that
upgrading is not always a choice).
Would it be worth pasting the little code in here, or would that be ignored?
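In the meantime, here is a heavily stripped-down sketch of the general idea
(this is NOT the real code; the audit schema layout, the convention of passing
the pk column as a trigger argument, and the table names are just illustrative):

CREATE SCHEMA audit;
CREATE TABLE audit.log (
  logged_at  timestamptz NOT NULL DEFAULT now(),
  table_name text NOT NULL,
  event      text NOT NULL,          -- INSERT / UPDATE / DELETE
  pk_value   text,
  row_data   text                    -- tuple serialized as text, no hstore needed
);

CREATE FUNCTION audit.log_change() RETURNS trigger LANGUAGE plperl AS $$
  my $event = $_TD->{event};
  my $row   = $event eq 'DELETE' ? $_TD->{old} : $_TD->{new};
  my $pkcol = $_TD->{args}[0];       # pk column name passed at CREATE TRIGGER time
  my $pkval = defined $pkcol ? $row->{$pkcol} : undef;

  # Serialize the tuple as "col=value" pairs; works on vanilla 8.4 without hstore.
  my $data = join ', ',
      map { "$_=" . (defined $row->{$_} ? $row->{$_} : 'NULL') }
      sort keys %$row;

  # A production version would cache the prepared plan; re-preparing on each
  # call just keeps this sketch short.
  my $plan = spi_prepare(
      'INSERT INTO audit.log(table_name, event, pk_value, row_data)
       VALUES ($1, $2, $3, $4)',
      'text', 'text', 'text', 'text');
  spi_exec_prepared($plan, $_TD->{table_name}, $event, $pkval, $data);
  spi_freeplan($plan);
  return;                            # AFTER trigger, return value is ignored
$$;

CREATE TRIGGER audit_people
  AFTER INSERT OR UPDATE OR DELETE ON people
  FOR EACH ROW EXECUTE PROCEDURE audit.log_change('id');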

-- 
Diego Augusto Molina
diegoaugustomol...@gmail.com

ES: Por favor, evite adjuntar documentos de Microsoft Office. Serán
desestimados.
EN: Please, avoid attaching Microsoft Office documents. They shall be discarded.
LINK: http://www.gnu.org/philosophy/no-word-attachments.html

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Solved] Generic logging system for pre-hstore using plperl triggers

2011-09-26 Thread Filip Rembiałkowski
2011/9/27 Diego Augusto Molina 

> Hi, I had to implement a logging system for some DBs at work. It's
> generic and implemented using plperl. I've seen tons of mails on the
> list from newbies asking for something like this using plpgsql, but no
> specific solution is pointed out for them. I think this may discourage
> some of them.
> The system is implemented in a separate schema with whatever name
> you want, and has some really nice features: relevant tables can be
> rotated to facilitate auditing, each logged action refers to the
> modified tuple by pk, which you don't even have to (but can) specify
> when creating the trigger, and some more. Performance is very
> acceptable (test cases and suggestions are welcome), and I've never had
> a problem in the roughly 8 months it's been working.
> In the wiki some points are mentioned, but it's all _too_ general, and
> on a vanilla pg-8.4 you won't have the hstore facility (note that
> upgrading is not always a choice).
> Would it be worth pasting the little code in here, or would that be ignored?
>
>
If you can share it, do it; maybe someone will find your work useful, and it's
always welcome.
Publishing it on the web and linking to it here instead of pasting will be even
better.

Filip


Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-26 Thread Marti Raudsepp
On Tue, Sep 27, 2011 at 01:43, Edson Carlos Ericksson Richter
 wrote:
>> create index on foobar (txt text_pattern_ops);
>> create index on foobar (reverse(txt) text_pattern_ops);
>
> I got the following error:
>
> ERROR: function reverse(text) does not exist
> Hint: No function matches the given name and argument types. You might need 
> to add explicit type casts.

Ah, the reverse() function is not included with PostgreSQL 9.0 yet.
This is what I use:

CREATE FUNCTION reverse(input text) RETURNS text
LANGUAGE plpgsql IMMUTABLE STRICT AS $$
DECLARE
  result text = '';
  i int;
BEGIN
  -- Walk the string two characters at a time, prepending each swapped pair;
  -- substr() past the end returns '', so odd-length input works too.
  FOR i IN 1..length(input) BY 2 LOOP
    result = substr(input,i+1,1) || substr(input,i,1) || result;
  END LOOP;
  RETURN result;
END$$;
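
With that function in place (it's IMMUTABLE, so it is allowed in an index
expression), the reversed index from my earlier mail should work on 9.0 as
well. A sketch against the table from your question, assuming numeroctc is
text (use varchar_pattern_ops for the plain-column index if it's varchar):

create index on notafiscal (numeroctc text_pattern_ops);
create index on notafiscal (reverse(numeroctc) text_pattern_ops);

select * from notafiscal
 where numeroctc like 'POA%34345'
   and reverse(numeroctc) like reverse('POA%34345');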

Regards,
Marti

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Quick-and-Dirty Data Entry with LibreOffice3?

2011-09-26 Thread Filip Rembiałkowski
2011/9/26 Rich Shepard 

>  Rather than writing an application right now to enter data into a table I
> thought of trying LibreOffice as a front end. But, it doesn't seem to work
> as OO.o did.


It does, although you will need libreoffice-base, which is not always installed
by default (it wasn't in my Ubuntu). And of course the JDBC driver.


> 1)  Can someone show me how to use LO as a front end to a postgres table?

http://www.postgresonline.com/journal/archives/8-Using-OpenOffice-Base-2.3.1-with-PostgreSQL.html

cheers, Filip


Re: [GENERAL] Quick-and-Dirty Data Entry with LibreOffice3?

2011-09-26 Thread planas
Rich

On Mon, 2011-09-26 at 11:38 -0700, Rich Shepard wrote: 

> Rather than writing an application right now to enter data into a table I
> thought of trying LibreOffice as a front end. But, it doesn't seem to work
> as OO.o did. This leads to two questions:
> 
>1)  Can someone show me how to use LO as a front end to a postgres table?



> 2)  Is there another tool suitable for a linux box for some data entry
> work?
> 
> Rich
> 

Which version of LO are you using and which Linux? I have some
experience with using LO as a front-end when pgAdmin is not the best
tool.

I have noticed that with Ubuntu you need to use the 3.3.x series from
the repository. There is a pg connector for LO 3.3.x in the repository;
getting 3.4.x to connect is more of a pain in Ubuntu. I think the
correct driver is libreoffice-sdbc-postgresql.

The steps I use are:
1 Open Base and select "Connect to an existing database".
2 Scroll down to postgresql and select it (it will be there if the correct
driver is present).
3 On the next screen you will need to enter the connection information
(dbname = , host = ); see the example connection string after these steps.
4 On the next screen enter your user information.
5 On the next screen I usually accept the defaults.
6 You should now be in Base.
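
As an example for step 3, the native SDBC driver takes a libpq-style
connection string; with made-up values it looks something like:

dbname=mydb host=localhost port=5432

If you use the JDBC driver instead, the datasource URL has the form
jdbc:postgresql://localhost:5432/mydb (host, port and database name above
are placeholders, of course).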


-- 
Jay Lozier
jsloz...@gmail.com


Re: [GENERAL] Pg and compress

2011-09-26 Thread Jov
Most are bigint and one field is varchar.
There is no index.

On 2011-9-27 at 3:34 AM, "John R Pierce" wrote:
>
> On 09/26/11 6:59 AM, Jov wrote:
>>
>>
>> Hi all,
>> We are going to use pg as a data warehouse, but after some tests we found
>> that plain-text CSV data is about 3 times bigger once loaded into pg. We use
>> COPY to load the data. We tried some optimizations, which reduced it to about
>> 2.5 times bigger. Other databases can compress to roughly 1/3 of the plain
>> text on average. Bigger data means heavy IO.
>> So my question is: how can data be compressed in pg? Can a filesystem with a
>> compression feature, such as ZFS or Btrfs, work well with pg?
>>
>
> your source data is CSV, what data types are the fields in the table(s) ?
do you have a lot of indexes on this table(s)?
>
>
>
> --
> john r pierce                            N 37, W 122
> santa cruz ca                         mid-left coast
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Pg and compress

2011-09-26 Thread John R Pierce
On 09/26/11 5:53 PM, Jov wrote:
>
> Most are bigint and one field is varchar.
> There is no index.
>
>


well, scalar bigint values will be 8 bytes each, plus a bit or two of overhead
per field. each complete tuple also has a couple dozen bytes of header overhead.
tuples are packed into 8K blocks, as many as fit, unless you've specified
a fillfactor, whereupon that percentage of space is left free in each block.

if your CSV has mostly small integer values that are just 1-2-3 digits,
yeah, bigint will take more space than ascii.
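
if you want to see the actual numbers for your own data, something like this
will show the on-disk size and a rough per-row width (the table name is just
a placeholder):

select pg_size_pretty(pg_total_relation_size('your_warehouse_table'));
select avg(pg_column_size(t)) from your_warehouse_table t;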


-- 
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Batching up data into groups of n rows

2011-09-26 Thread Andy Chambers
On Mon, Sep 26, 2011 at 4:22 PM, Marti Raudsepp  wrote:
> On Mon, Sep 26, 2011 at 18:59, Andy Chambers  wrote:
>> ...and I want to select the data from that table, adding a "group_id"
>> column, and a "record_id" column.  The "group_id" would start at 1,
>> and increment by 1 every 100 rows, and the "record_id" would increment
>> by 1 every row, but restart at 1 for each new group_id.
>
> I can't see why you would want this, but regardless...

The addresses need to be sent to a 3rd party web-service for
canonicalization.  The web service accepts batches of <100 addresses.
I was wondering how I'd get Postgres to generate the XML for sending
100 addresses at a time to this web service.

> Since you didn't list a primary key for the table, I'm using "ctid" as
> a substitute. But that's a PostgreSQL-specific hack, normally you
> would use the real primary key instead.
>
> update addresses set group_id=(nr/100)+1, record_id=(nr%100)+1 from
>  (select ctid, row_number() over () -1 as nr from addresses) as subq
>  where subq.ctid=addresses.ctid;

Cool!  I don't need to actually store these IDs in the database; they
just need to be generated on the fly and forgotten, but I think I can
adapt the example to do what I need, something like the sketch below.
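
Something along these lines is what I have in mind (untested sketch; the
ORDER BY column is just a placeholder):

select (nr / 100) + 1 as group_id,
       (nr % 100) + 1 as record_id,
       a.*
  from (select addresses.*,
               row_number() over (order by some_key) - 1 as nr
          from addresses) a;

From there, xmlelement()/xmlagg() grouped by group_id should let me build one
XML document per batch of 100.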

Thanks,
Andy

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] : PostgreSQL Online Backup

2011-09-26 Thread Venkat Balaji
Thanks for all your inputs !

Our problem is -

We had mistakenly executed "rsync" on the running PostgreSQL data directory
(production) without first running "pg_start_backup()".

Will this harm production? Can it lead to corruption?
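
For reference, the sequence we should have run is roughly this (a sketch; the
label, paths and destination are placeholders):

psql -c "SELECT pg_start_backup('nightly_base_backup');"
rsync -a --exclude=pg_xlog $PGDATA/ backuphost:/backups/pgdata/
psql -c "SELECT pg_stop_backup();"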

Thanks -

On Mon, Sep 26, 2011 at 10:29 PM, Alan Hodgson  wrote:

> On September 26, 2011 05:49:50 AM Venkat Balaji wrote:
> > I tried restoring the backup, after taking the full backup.
> >
> > Below is what i see in the "archive destination".
> >
> > Postgres was asking for "00010193006F" and i tried to find the
> > same, and below is what i find...
> >
> > -rw--- 1 postgres postgres 3.3M Sep 26 02:06
> > 00010193006F.gz -rw--- 1 postgres postgres  219 Sep 26
> > 02:53
> > 00010193006F.00328508.backup.gz
> >
> > Why is PG (9.0) putting an extension for the WAL Archive file as
> > "backup.gz" ??
> >
>
> The archive files are created by your archive_command, as specified in
> postgresql.conf. My guess would be that your archive command runs the files
> through gzip as part of archiving (which is fine).
>
> However, the restore_command you specify in recovery.conf  must undo this
> compression. So instead of (for example) 'cp -f "%f" "%p"', it might
> instead
> need to look like 'zcat "%f" > "%p"'.
>
> Hope this helps.
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] PostgreSQL recovery when lost some file in data\global

2011-09-26 Thread tuanhoanganh
I am running PostgreSQL 9.0.1 32-bit on Windows 2003. Last night my disk had
a problem and I lost some files in data\global.
Is there any way to recover PostgreSQL?

Thanks in advance. Sorry for my English.

Tuan Hoang Anh


Re: [GENERAL] New feature: accumulative functions.

2011-09-26 Thread Marti Raudsepp
2011/9/25 pasman pasmański :
> My English is not perfect; by accumulative I mean a monotonically
> increasing function.
>
> It works like this for a clause WHERE f(x) = const:
> 1. Read the root page of index_on_x and get x1 ... xn.
> 2. Calculate f(x1) ... f(xn) for this page.
> 3. When f(x1) <= const <= f(xn), then x1 <= searched x <= xn, and we can
> descend and test a smaller range (xlower, xgreater).
> 4. Otherwise no rows satisfy the condition.

I can't get very excited about this feature for index scans. However,
I think there's another, more interesting use case: sorting

I frequently write queries like:
SELECT date_trunc('month', somedate), sum(foo)
GROUP BY date_trunc('month', somedate);

Currently the planner doesn't realize that instead of
GroupAggregate+Sort, it can use the already existing sorted index on
just (somedate). Alternatively I would need to create a separate
date_trunc functional index for daily, weekly and monthly aggregates
for EACH meaningful time zone.
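
For example, a monthly index for one time zone would look something like this
(sketch; the table name is made up, and the AT TIME ZONE is needed because
date_trunc() on timestamptz is not IMMUTABLE):

CREATE INDEX ON sometable (date_trunc('month', somedate AT TIME ZONE 'UTC'));

...and I would need one such index per granularity and per time zone.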

This would be a planner-only change and nothing the executor needs to know of.

Now obviously HashAggregate helps a lot with these kinds of queries,
but there are still cases where GroupAggregate would be a win -- for
instance, queries with a LIMIT.

Regards,
Marti

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Download States and Capitals Database

2011-09-26 Thread Adarsh Sharma

Dear all,

I googled a lot and found data for all countries, cities, locations,
etc. from geospatial websites, but I am not able to find data that shows
all *states & their respective capitals* in the world.

Please let me know if anyone has prior information about this?


Thanks