Re: [GENERAL] 9.3: bug related to json

2015-02-25 Thread Tom Lane
Torsten Förtsch torsten.foert...@gmx.net writes:
 On 25/02/15 07:34, David G Johnston wrote:
 Torsten Förtsch wrote
 Is there anything similar for JSON scalars?
 IDK, but have you tried ::text?

 yes. Here is the difference

 select * from (values (('{"a":"b"}'::json -> 'a')::text),
                       ('{"a":"b"}'::json ->> 'a')) t;
  column1
 ---------
  "b"
  b

As of 9.4, there's a function json_array_elements_text()
which does what I think you're looking for.
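
For illustration, a minimal sketch of the 9.4 behaviour (example values made up):

select value from json_array_elements_text('["x","y","z"]'::json);
 value
-------
 x
 y
 z
(3 rows)

The elements come back as plain text, without the surrounding JSON quotes,
analogous to what ->> does for object fields.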

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] utf8 issues

2015-02-25 Thread Felix Ivan Romero Rodríguez

I've got the following issue with postgres 9.3, trying to restore a backup from
an SQL script:
invalid byte sequence for encoding "UTF8": 0xc2de
How can I solve it?





Re: [GENERAL] utf8 issues

2015-02-25 Thread Adrian Klaver

On 02/25/2015 07:11 AM, Felix Ivan Romero Rodríguez wrote:


I've got the following issue with postgres 9.3, trying to restore a
backup from an SQL script:
invalid byte sequence for encoding "UTF8": 0xc2de
How can I solve it?


Provide more information :)

Where is the backup coming from: another database, the same one, etc.?

What was the encoding for the database that was backed up?

Is it just the one error or does it happen repeatedly?









--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] : :Full text search query ::

2015-02-25 Thread JD
Hi All,

Please find herewith the following queries:

1. select * from partdetails where scode=118 and
(to_tsvector('english'::regconfig, part_number::text)) @@ to_tsquery('104')

It shows only 1 record as output; it is expected to give 17 records as
output.

2. select * from partdetails where scode=118 and
(to_tsvector('english'::regconfig, part_number::text)) @@ to_tsquery('104/')

It shows only 1 record as output; it is expected to give 17 records as
output.

3. select * from partdetails where scode=118 and
(to_tsvector('english'::regconfig, part_number::text)) @@
to_tsquery('104/1')

It shows 17 records as output.

In our search case we are passing the parameter value 104 and expect to
get 17 records.


Kindly, can someone guide us here?


[GENERAL] Locking during UPDATE query with SUBSELECT

2015-02-25 Thread Cenkar, Maciej
Hi All,

I wonder if anyone can explain something I cannot easily find on google.

Given PostgreSQL 9.3.5, what is the locking strategy when executing a query such as:

UPDATE table SET some_col = some_val WHERE id IN (SELECT id FROM 
expensive_query_with_joins).

Does it start locking rows after it has executed the sub-select, or does it lock the
whole table and then execute the select?

Is there any advantage in precomputing the ids from the nested select and running only
UPDATE table SET some_col = some_val WHERE id IN (precomputed_values)?

If it changes anything, the table has a few indexes and many millions of rows.

Regards,
Maciek


Re: [GENERAL] utf8 issues

2015-02-25 Thread Vick Khera
Fix the character sequence to be valid UTF8 before you restore it. I'm
assuming you're restoring from a pretty old version of Postgres which did
not do strict UTF8 character validation. Are you sure the data is encoded
as UTF8 and not some other encoding?
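
For example, one common way to do that on the shell (a sketch; whether the source
really is WIN1252/LATIN1 is an assumption you must verify first):

# re-encode a dump that was mislabelled as UTF8
iconv -f WINDOWS-1252 -t UTF-8 backup.sql > backup_utf8.sql
# or, as a last resort, drop the bytes that are not valid UTF-8
iconv -f UTF-8 -t UTF-8 -c backup.sql > backup_clean.sql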


Re: [GENERAL] Some indexing advice for a Postgres newbie, please?

2015-02-25 Thread Merlin Moncure
On Sat, Feb 21, 2015 at 5:33 AM, Arjen Nienhuis a.g.nienh...@gmail.com wrote:
 Hi,

 On 19 Feb 2015 17:12, brian br...@meadows.pair.com wrote:


 Hi folks,

 I have a single-user application which is growing beyond the
 fixed-format data files in which it currently holds its data, so I need a
 proper database as the backend. The front end is written using Lazarus
 and FreePascal under Linux, should anyone feel that makes a
 difference. The database will need to grow to around 250,000 records.

 My problem is with the data field which is the (unique) key. It's
 really a single 192-bit integer (it holds various bits of bitmapped
 data) which I currently hold as six 32-bit integers, but can convert
 if needed when transferring the data.

 How would you advise that I hold this field in a Postgres database,
 given the requirement for the whole thing to be a unique key? The
 first 64 bits change relatively infrequently, the last 128 bits will
 change with virtually every record. The last 128 bits will ALMOST be
 unique in themselves, but not quite. :(

 Thanks,

 Brian.


 Postgres can use almost anything as a key so it probably depends on the
 library you use to access the database.

 If it supports composite primary keys you can use the 6 ints as a key:

 PRIMARY KEY (n1, n2, n3, n4, n5, n6)

 The numeric type can hold 192-bit numbers. I think Lazarus supports this as
 well.

 You could also use a surrogate key and define a UNIQUE constraint on the 6
 ints or the 192-bit number.

You could also use three 64-bit bigints if that's easier.  The other way
to do it is bytea.
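
A rough sketch of both layouts (table and column names made up):

-- three bigints as a composite primary key
create table items (
    k1 bigint not null,
    k2 bigint not null,
    k3 bigint not null,
    payload text,
    primary key (k1, k2, k3)
);

-- or the whole 192 bits as a single fixed-length bytea key
create table items_bytea (
    k bytea primary key check (octet_length(k) = 24),  -- 24 bytes = 192 bits
    payload text
);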

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] : :Full text search query ::

2015-02-25 Thread Tomas Vondra
Hi,

On 25.2.2015 12:50, JD wrote:
 Hi All,
 
 please find herewith the following  query
 
 1. select * from partdetails where scode=118 and 
 (to_tsvector('english'::regconfig, part_number::text)) @@ to_tsquery('104')
 
 it is showing only 1 record as output, it is expected to give 17 records
 as output.
 
 2. select * from partdetails where scode=118 and 
 (to_tsvector('english'::regconfig, part_number::text)) @@ to_tsquery('104/')
 
 it is showing only 1 record as output, it is expected to give 17 records
 as output.
 
 3. select * from partdetails where scode=118 and 
 (to_tsvector('english'::regconfig, part_number::text)) @@
 to_tsquery('104/1')
 
 it is showing  17 records as output.
 
 In our search case we are passing parameter value as 104 and expected to
 get 17 records.
 
 
 Kindly some one guide here.

You need to post 'to_tsvector('english', part_number)' for the 16 rows
that you think should be returned but aren't.

Fulltext works so that it transforms the source (part_number in this
case) as defined in the text search configuration ('english'), and
compares this with the tsquery.

My bet is that the transformation keeps the whole string ('104/1')  in
this case, so that it does not match the tsquery.
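
In other words, something like this would show how those part numbers are
actually being tokenized (adjust table and column names to your schema):

select part_number, to_tsvector('english', part_number)
  from partdetails
 where scode = 118;

select alias, token, lexemes
  from ts_debug('english', '104/1');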

ISTM you're trying to do a prefix search on the part_number. In that
case fulltext may not be the right solution, because it's fuzzy by
nature. If you have two-part part numbers (i.e. it's always A/B) then
maybe split that into two fields, and use simple equality conditions on
each field.

So instead of a column 'part_number' containing the value '104/1', use two
columns part_number_a and part_number_b, containing values '104' and
'1', and simple equality queries

WHERE part_number_a = '104' and part_number_b = '1'

or (if you want to match just the first part)

   WHERE part_number_a = '104'

Another option is to use an index with a 'varchar_pattern_ops' opclass,
which allows you to do prefix LIKE queries [1]

   CREATE INDEX custom_partnum_idx
 ON partdetails (part_number varchar_pattern_ops);


   SELECT ... FROM partdetails WHERE part_number LIKE '104/%'


[1] http://www.postgresql.org/docs/9.2/static/indexes-opclass.html

regards

-- 
Tomas Vondrahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training  Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Row-level Security vs Application-level authz

2015-02-25 Thread Dowwie
Thanks, Stephen and David for your responses.   My setup is as you described
it.  Thanks for clarifying.














--
View this message in context: 
http://postgresql.nabble.com/Row-level-Security-vs-Application-level-authz-tp5839069p5839291.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] newbie how to access the information scheme

2015-02-25 Thread frank ernest
Thanks


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Some indexing advice for a Postgres newbie, please?

2015-02-25 Thread Jeremy Harris
On 25/02/15 15:42, Merlin Moncure wrote:
 On Sat, Feb 21, 2015 at 5:33 AM, Arjen Nienhuis a.g.nienh...@gmail.com 
 wrote:
 Hi,

 On 19 Feb 2015 17:12, brian br...@meadows.pair.com wrote:


 Hi folks,

 I have a single-user application which is growing beyond the
 fixed-format data files in which it currently holds its data, so I need a
 proper database as the backend. The front end is written using Lazarus
 and FreePascal under Linux, should anyone feel that makes a
 difference. The database will need to grow to around 250,000 records.

 My problem is with the data field which is the (unique) key. It's
 really a single 192-bit integer (it holds various bits of bitmapped
 data) which I currently hold as six 32-bit integers, but can convert
 if needed when transferring the data.

 How would you advise that I hold this field in a Postgres database,
 given the requirement for the whole thing to be a unique key? The
 first 64 bits change relatively infrequently, the last 128 bits will
 change with virtually every record. The last 128 bits will ALMOST be
 unique in themselves, but not quite. :(

 Thanks,

 Brian.


 Postgres can use almost anything as a key so it probably depends on the
 library you use to access the database.

 If it supports composite primary keys you can use the 6 ints as a key:

 PRIMARY KEY (n1, n2, n3, n4, n5, n6)

 The numeric type can hold 192-bit numbers. I think Lazarus supports this as
 well.

 You could also use a surrogate key and define a UNIQUE constraint on the 6
 ints or the 192-bit number.
 
 You could also use 3 64 bit bigints if that's easier.  The other way
 to do it is bytea.

The other way to do it is to have semantically-meaningful columns
rather than glomming them into this 192-bit integer, and a composite
key on the lot - if the set truly is unique.
-- 
Jeremy




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Locking during UPDATE query with SUBSELECT

2015-02-25 Thread David Steele
On 2/25/15 10:49 AM, Cenkar, Maciej wrote:
 Given PostgreSQL 9.3.5 what is locking strategy when executing query
 such as:
 
 UPDATE table SET some_col = some_val WHERE id IN (SELECT id FROM
 expensive_query_with_joins).
 
 Is this starting to lock rows after it executed sub-select or is it
 locking whole table and then executing select?

This statement will lock rows in the updated table as they are returned
from the subquery and modified - only a ROW EXCLUSIVE lock is held on the
entire table from the beginning (that just keeps people from changing the
table's structure while you are using it).  If the subquery contains a group by,
order by, or some other clause that requires all the returned rows to be
examined as a whole, then the row locks will be acquired in quick succession;
otherwise they could be acquired over a longer period of time and the locks
will be held for longer.

 Is there any advantage in precomputing ids from nested select to run only
 
 UPDATE table SET some_col = some_val WHERE id IN (precomputed_values)?

If your subquery is very expensive but returns a reasonable number of
rows, then putting the results in a temp table and then updating from
the temp table may allow locks to be held for a shorter amount of time.  If
your subquery contains a group by, order by, etc. as mentioned above
then I wouldn't bother.
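
A minimal sketch of that temp-table variant (names are placeholders taken from
the question):

begin;
create temp table upd_ids on commit drop as
    select id from expensive_query_with_joins;
update some_table t
   set some_col = some_val
  from upd_ids u
 where t.id = u.id;
commit;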

One other thing to mention - since the order of updates cannot be
guaranteed I wouldn't run more than one update like this at the same
time or you might get deadlocks.

-- 
- David Steele
da...@pgmasters.net



signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] [postgresql 9.3.5] autovacuums stuck on non-existent pg_toast tables

2015-02-25 Thread Alvaro Herrera
tpham wrote:
 Hi everyone,
 
 Two weeks ago, one of our Postgres databases crashed violently and had to be
 brought up again. This certainly resulted in some lost pg_clog files, and we
 had to zero-fill them in one by one to get autovacuum up and running again.

You should never lose pg_clog files to any type of crash.  This kind of
problem shows up when you have broken config somehow, perhaps running
with fsync=off or your disks have write cache enabled and no
battery-backup for it.  This seems like a serious issue that should be
investigated more closely.

Your filesystem might have put the lost files in lost+found.

 Now, we have two autovacuuming processes constantly stuck at two pg_toast
 tables that are nonexistent:
 
 autovacuum: VACUUM pg_toast.pg_toast_455742374 (runs up to twelve hours with
 no progress)
 
 # select 455742374::regclass;
  regclass
 ---
  455742374
 (1 row)

So, you could look at the toast table directly in pg_class, and perhaps
delete the pg_class entry for the stale pg_toast table and the file if
there's any.  For instance, try

select relfilenode from pg_class where relname = 'pg_toast_455742374'

which would give you the file name of the offending toast table so that
you can remove it in the filesystem.  You can then run a DELETE against
pg_class.
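
That is, roughly (only on the affected database, after confirming the row really
is orphaned, and with a fresh backup in hand -- catalog edits are dangerous):

delete from pg_class where relname = 'pg_toast_455742374';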

Another (probably better) way to look for the table would be something
like

select *
  from pg_class
 where reltoastrelid = (select oid
                          from pg_class
                         where relname = 'pg_toast_455742374');
or similar.

I think you should pg_dump this database and restore it in a newly
initdb'd directory.  Otherwise, who knows what other inconsistency you
might have in your data.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training  Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [postgresql 9.3.5] autovacuums stuck on non-existent pg_toast tables

2015-02-25 Thread tpham
Hi everyone,

Two weeks ago, one of our Postgres databases crashed violently and had to be
brought up again. This certainly resulted in some lost pg_clog files, and we
had to zero-fill them in one by one to get autovacuum up and running again.

Now, we have two autovacuuming processes constantly stuck at two pg_toast
tables that are nonexistent:

autovacuum: VACUUM pg_toast.pg_toast_455742374 (runs up to twelve hours with
no progress)

# select 455742374::regclass;
 regclass
---
 455742374
(1 row)

whereas normally for a pg_toast table we should have something like

# select 1371237940::regclass;
regclass
-
 performance_report_parallel.campaigns_summary_p
(1 row)

The result is that we are down to just one autovacuum worker, which is
evidently not enough to keep our database oiled and running. :(

Our autovacuum parameters are all using default values.

Any help would be greatly appreciated. Thank you!



--
View this message in context: 
http://postgresql.nabble.com/postgresql-9-3-5-autovacuums-stuck-on-non-existent-pg-toast-tables-tp5839397.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] parallel dump fails to dump large tables

2015-02-25 Thread Shanker Singh
There is no problem dumping large tables using parallel dump. My script had a
limit on the file size that was causing parallel dump to abort on large tables.
Thanks everyone for their valuable suggestions.
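
For anyone hitting the same thing, a sketch of the fix in the calling script
(paths and job count are made up):

# make sure the shell that launches pg_dump has no file-size cap
ulimit -f unlimited
# parallel dump requires the directory output format
pg_dump -j 8 -Fd -f /backups/mydb.dir mydb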

Thanks
shanker

From: Shanker Singh
Sent: Monday, February 23, 2015 6:18 PM
To: Sterfield
Cc: Tom Lane; r...@iol.ie; pgsql-general@postgresql.org; Shanker Singh
Subject: RE: [GENERAL] parallel dump fails to dump large tables

I tried dumping the largest table that is having the problem using the -j1 flag in
parallel dump. This time I got an error on the console, "File size limit exceeded",
but the system allows unlimited file size. Also, pg_dump without the -j flag goes
through fine. Do you guys know what's going on with parallel dump? The system is
64-bit CentOS (2.6.32-504.8.1.el6.x86_64 #1 SMP Wed Jan 28 21:11:36 UTC 2015 x86_64
x86_64 x86_64 GNU/Linux) with an ext4 file system.

limit
cputime  unlimited
filesize unlimited
datasize unlimited
stacksize    10240 kbytes
coredumpsize 0 kbytes
memoryuse    unlimited
vmemoryuse   unlimited
descriptors  25000
memorylocked 64 kbytes
maxproc  1024

From: Sterfield [mailto:sterfi...@gmail.com]
Sent: Sunday, February 22, 2015 8:50 AM
To: Shanker Singh
Cc: Tom Lane; r...@iol.iemailto:r...@iol.ie; 
pgsql-general@postgresql.orgmailto:pgsql-general@postgresql.org
Subject: Re: [GENERAL] parallel dump fails to dump large tables



2015-02-20 14:26 GMT-08:00 Shanker Singh ssi...@iii.com:
I tried turning off ssl renegotiation by setting ssl_renegotiation_limit = 0 
in postgresql.conf but it had no effect. The parallel dump still fails on large 
tables consistently.

Thanks
Shanker

Hi,
Maybe you could try to set up an SSH connection between the two servers, with a
keepalive option, and leave it open for a long time (at least the duration of
your backup), just to test whether your SSH connection is still being cut after
some time.
That way, you will know whether the problem is related to SSH or to PostgreSQL.
Thanks,
Guillaume

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Saturday, February 14, 2015 9:00 AM
To: r...@iol.ie
Cc: Shanker Singh; pgsql-general@postgresql.org
Subject: Re: [GENERAL] parallel dump fails to dump large tables
Raymond O'Donnell r...@iol.ie writes:
 On 14/02/2015 15:42, Shanker Singh wrote:
 Hi,
 I am having problem using parallel pg_dump feature in postgres
 release 9.4. The size of the table is large(54GB). The dump fails
 with the
 error: pg_dump: [parallel archiver] a worker process died
 unexpectedly. After this error the pg_dump aborts. The error log
 file gets the following message:

 2015-02-09 15:22:04 PST [8636]: [2-1]
 user=pdroot,db=iii,appname=pg_dump
 STATEMENT:  COPY iiirecord.varfield (id, field_type_tag, marc_tag,
 marc_ind1, marc_ind2, field_content, field_group_id, occ_num,
 record_id) TO stdout;
 2015-02-09 15:22:04 PST [8636]: [3-1]
 user=pdroot,db=iii,appname=pg_dump
 FATAL:  connection to client lost

 There's your problem - something went wrong with the network.

I'm wondering about SSL renegotiation failures as a possible cause of the 
disconnect --- that would explain why it only happens on large tables.

regards, tom lane


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] [postgresql 9.3.5] autovacuums stuck on non-existent pg_toast tables

2015-02-25 Thread Alvaro Herrera
Tong Pham wrote:

 We do have fsync turned on, and there was no disk failure. The database
 had to be shut down forcefully because it was becoming nonresponsive
 (probably due to inadequate earlier vacuuming) and we could not get the
 remaining queries to terminate with normal cancel/terminate operations (so
 a SIGKILL was used, which was bad.)

Yes, SIGKILL is a bad idea, but it shouldn't lead to lost pg_clog files
nonetheless.  Is there a network filesystem involved perhaps?

 I did look inside pg_class, but there is absolutely no mention of this
 pg_toast table.

Well, so you can DELETE FROM pg_class the broken entry and you will get
rid of the autovacuum problem.  (Removing from the filesystem would be
good as well if the file is large, to save disk space.)

 This database has a streaming replica that has been
 healthy through the previous crash. If we decide to switch over to this
 replica, do you think these kinds of inconsistencies would be on the
 replica as well?

No idea there.  Depending on how the files disappeared, maybe the
replica has the same problem.

 A pg_dumpall is possible but might take too much time.

There's a nice saying: "I can make this arbitrarily fast if it doesn't
have to give the right answer."

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training  Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [postgresql 9.3.5] autovacuums stuck on non-existent pg_toast tables

2015-02-25 Thread Erik Jones

 On Feb 25, 2015, at 3:27 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 
 Tong Pham wrote:
 
 We do have fsync turned on, and there was no disk failure. The database
 had to be shut down forcefully because it was becoming nonresponsive
 (probably due to inadequate earlier vacuuming) and we could not get the
 remaining queries to terminate with normal cancel/terminate operations (so
 a SIGKILL was used, which was bad.)

FWIW, when the server isn't responding to
pg_cancel_backend()/pg_terminate_backend(), processes will typically go away if
you directly issue a SIGTERM from the system, e.g. `kill pid` from bash or
whatever.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Help with tokenization of age-ranges in full text search

2015-02-25 Thread Mason Hale
Hello, I've got a 9.3 database hosted at Heroku.

I'm using full text search to search for group names in part of my application,
and some of my group names are the names of youth sports age groups like
"Boys 9-10" or "Girls 11-12".

I would like a search for the terms "Boys", "Boys 9-10", "9", "10" or
"9-10" to match "Boys 9-10".

I have created a custom dictionary and search configuration as follows:

CREATE TEXT SEARCH DICTIONARY public.simple_nostem_no_stop (
TEMPLATE = pg_catalog.simple
);

   CREATE TEXT SEARCH CONFIGURATION public.user_search ( COPY =
pg_catalog.simple );

ALTER TEXT SEARCH CONFIGURATION public.user_search
   ALTER MAPPING FOR email, asciiword, asciihword, hword_asciipart,
hword, hword_part, word WITH simple_nostem_no_stop;

Which results in this configuration:

development=# \dF+ public.user_search
Text search configuration "public.user_search"
Parser: "pg_catalog.default"
      Token      |     Dictionaries
-----------------+-----------------------
 asciihword      | simple_nostem_no_stop
 asciiword       | simple_nostem_no_stop
 blank           | simple
 email           | simple_nostem_no_stop
 file            | simple
 float           | simple
 host            | simple
 hword           | simple_nostem_no_stop
 hword_asciipart | simple_nostem_no_stop
 hword_numpart   | simple_nostem_no_stop
 hword_part      | simple_nostem_no_stop
 int             | simple
 numhword        | simple_nostem_no_stop
 numword         | simple_nostem_no_stop
 sfloat          | simple
 uint            | simple
 url             | simple
 url_path        | simple
 version         | simple
 word            | simple_nostem_no_stop



Testing my query, I get the following tokenization:

development=# select alias, token from ts_debug('public.user_search', 'Boys 9-10');
   alias   | token
-----------+-------
 asciiword | Boys
 blank     |
 uint      | 9
 int       | -10
(4 rows)


I was expecting / hoping to see tokens for "9-10" and "10".

With the above, a search for "9-10" or "10" would not match the term "Boys
9-10".

I was expecting the numhword or hword_numpart token types to match, but

If I add a leading alpha character, it tokenizes the way I would like:

development=# select alias, token from ts_debug('public.user_search', 'Boys x9-y10');
     alias     | token
---------------+--------
 asciiword     | Boys
 blank         |
 numhword      | x9-y10
 hword_numpart | x9
 blank         | -
 hword_numpart | y10
(6 rows)



So my question is -- can I get the tokenization that I want out of a
configuration of the stock available token types?

Since I'm hosting my db on Heroku, I assume a custom parser extension is
not a possibility.

I've tried removing the uint and int mappings, hoping that the other
parsers would pick up the slack, but that just results in the terms being
dropped altogether.

For example, if I run:

 ALTER TEXT SEARCH CONFIGURATION public.user_search
DROP MAPPING IF EXISTS FOR file, float, host, int, uint, sfloat,
url_path, version, blank, url;

Then my configuration now looks like this:

development=# \dF+ public.user_search
Text search configuration "public.user_search"
Parser: "pg_catalog.default"
      Token      |     Dictionaries
-----------------+-----------------------
 asciihword      | simple_nostem_no_stop
 asciiword       | simple_nostem_no_stop
 email           | simple_nostem_no_stop
 hword           | simple_nostem_no_stop
 hword_asciipart | simple_nostem_no_stop
 hword_numpart   | simple_nostem_no_stop
 hword_part      | simple_nostem_no_stop
 numhword        | simple_nostem_no_stop
 numword         | simple_nostem_no_stop
 word            | simple_nostem_no_stop

But now the tokens are just dropped altogether:

development=# select alias, token, lexemes from ts_debug('public.user_search', 'Boys 9-10');
   alias   | token | lexemes
-----------+-------+---------
 asciiword | Boys  | {boys}
 blank     |       |
 uint      | 9     |
 int       | -10   |
(4 rows)

Thanks in advance for any advice.

Would love to find a simple solution.

Mason


[GENERAL] Triggers Operations

2015-02-25 Thread Emanuel Araújo
Hi,

I have an application that replicates data from an Oracle database to
PostgreSQL. The flow goes as follows:

oracle.table1 - AppSincronizador - postgresql.table1 - Trigger (upd,
ins, del) - postgresql.table2

I'm running into situations where, when the data volume is large, changes that
should be in the final table are not found, leaving the tables in
PostgreSQL out of sync. The application works in a single session and
commits every 1000 records.

It is as if the triggers were disabled; when I perform the operation manually,
it works. Is there a bug or a situation in which PostgreSQL disables these
triggers?
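
One thing worth checking (a guess, not something confirmed here): some
replication tools set session_replication_role = 'replica' on the writing
connection, and in that mode ordinary triggers do not fire. Something like this
shows the current mode and the trigger states on the intermediate table:

show session_replication_role;
select tgname, tgenabled      -- O = origin/local, D = disabled,
  from pg_trigger             -- R = replica mode only, A = always
 where tgrelid = 'table1'::regclass
   and not tgisinternal;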


OS version: CentOS 6.5
PostgreSQL: 9.3.5
Oracle: 11g

I found this post that explains one such situation.

"AFTER triggers are more expensive than BEFORE triggers because they must be
queued up until the statement finishes doing its work, then executed. They
are not spilled to disk if the queue gets big (at least in 9.4 and below,
may change in future), so huge AFTER trigger queues can cause available
memory to overrun, resulting in the statement aborting."

Link:
http://dba.stackexchange.com/questions/88761/scaling-postgresql-triggers

PS. Right now I'm not interested in the performance, as I will take care of
that later; the question is why there are random operations that never make it
to the final table.

Thanks!

-- 


*Best regards,*
*Emanuel Araújo*

*Linux Certified, DBA PostgreSQL*


Re: [GENERAL] Help with tokenization of age-ranges in full text search

2015-02-25 Thread Alvaro Herrera
Mason Hale wrote:
 Hello, I've got a 9.3 database hosted at Heroku.
 
 I'm using full text search to search for group names in part of my application,
 and some of my group names are the names of youth sports age groups like
 Boys 9-10 or Girls 11-12.
 
 I would like for a search for the terms Boys, Boys 9-10,  9, 10 or
 9-10 to match Boys 9-10.

Hm, so if there's a sport for Boys 8-10, what will you do when it
doesn't match a query for 9?  Does this matter?  I mean, maybe
tokenization is not the most appropriate thing to do in this case.

 So my question is -- can I get the tokenization that I want out of a
 configuration of the stock available token types?

The tokenizer stuff is not the most configurable part of the FTS stuff,
sadly.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training  Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: [pgadmin-support] Issue with a hanging apply process on the replica db after vacuum works on primary

2015-02-25 Thread Sergey Shchukin

Hi Radovan !

Thank you for the reply. The thing is that this table is not subject
to massive updates/deletes.


Are there any additional traces, apart from perf or pg_top, to see what the
replica is doing at the particular moment when we are lagging in replay?
To see locks or spins or sleeps, etc.
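
A couple of things that can be sampled on the standby while the lag is happening
(a sketch, using the 9.3 function names):

-- is the startup (recovery) process waiting on a lock held by a standby query?
select locktype, relation::regclass, mode, granted, pid
  from pg_locks
 where not granted;

-- how far apart are the receive and replay positions, in bytes?
select pg_xlog_location_diff(pg_last_xlog_receive_location(),
                             pg_last_xlog_replay_location());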


Thank you!

-

Best regards,
Sergey Shchukin

On 24.02.2015 19:05, Radovan Jablonovsky wrote:

This looks more like an issue for the pgsql-general mailing list.

Possible solutions:
1) Set specific autovacuum parameters on the big table. Autovacuum
could then vacuum the table over multiple runs based on the thresholds and
cost settings.
Example of setting specific autovacuum and analyze values for a
table. It should be adjusted for your system, workload, table usage, etc.:
alter table my_schema.my_big_table set (fillfactor = 80, 
autovacuum_enabled = true, autovacuum_vacuum_threshold = 200, 
autovacuum_analyze_threshold = 400, autovacuum_vacuum_scale_factor = 
0.05, autovacuum_analyze_scale_factor = 0.005, 
autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_cost_limit = 5000);


2) Another option could be to partition the large table on the master site and
vacuum it partition by partition.


On Tue, Feb 24, 2015 at 6:42 AM, Sergey Shchukin shchukin@gmail.com wrote:


Hi all!

Could someone help me with an issue in the apply process on the
replica? We have streaming replication, and after vacuum works on a big
table we get a freeze in applying data on the
replica database. It looks like this:

Tue Feb 24 15:04:51 MSK 2015 Stream:
MASTER-masterdb:79607136410456 SLAVE:79607136410456
Replay:79607136339456 :: REPLAY 69 KBytes (00:00:00.294485 seconds)
Tue Feb 24 15:04:52 MSK 2015 Stream:
MASTER-masterdb:79607137892672 SLAVE:79607137715392
Replay:79607137715392 :: REPLAY 173 KBytes (00:00:00.142605 seconds)
Tue Feb 24 15:04:53 MSK 2015 Stream:
MASTER-masterdb:79607139327776 SLAVE:79607139241816
Replay:79607139241816 :: REPLAY 84 KBytes (00:00:00.05223 seconds)
Tue Feb 24 15:04:54 MSK 2015 Stream:
MASTER-masterdb:79607141134776 SLAVE:79607141073344
Replay:79607141080032 :: REPLAY 54 KBytes (00:00:00.010603 seconds)
Tue Feb 24 15:04:55 MSK 2015 Stream:
MASTER-masterdb:79607143085176 SLAVE:79607143026440
Replay:79607143038040 :: REPLAY 46 KBytes (00:00:00.009506 seconds)
Tue Feb 24 15:04:56 MSK 2015 Stream:
MASTER-masterdb:79607145111280 SLAVE:79607145021384
Replay:79607145025664 :: REPLAY 83 KBytes (00:00:00.006795 seconds)
Tue Feb 24 15:04:57 MSK 2015 Stream:
MASTER-masterdb:79607146564424 SLAVE:79607146478336
Replay:79607146501264 :: REPLAY 61 KBytes (00:00:00.00701 seconds)
Tue Feb 24 15:04:58 MSK 2015 Stream:
MASTER-masterdb:79607148160680 SLAVE:79607148108352
Replay:79607147369320 :: REPLAY 773 KBytes (00:00:00.449702 seconds)
Tue Feb 24 15:04:59 MSK 2015 Stream:
MASTER-masterdb:79607150220688 SLAVE:79607150159632
Replay:79607150171312 :: REPLAY 48 KBytes (00:00:00.006594 seconds)
Tue Feb 24 15:05:00 MSK 2015 Stream:
MASTER-masterdb:79607152365360 SLAVE:79607152262696
Replay:79607152285240 :: REPLAY 78 KBytes (00:00:00.007042 seconds)
Tue Feb 24 15:05:02 MSK 2015 Stream:
MASTER-masterdb:79607154049848 SLAVE:79607154012624
Replay:79607153446800 :: REPLAY 589 KBytes (00:00:00.513637 seconds)
Tue Feb 24 15:05:03 MSK 2015 Stream:
MASTER-masterdb:79607155229992 SLAVE:79607155187864
Replay:79607155188312 :: REPLAY 41 KBytes (00:00:00.004773 seconds)
Tue Feb 24 15:05:04 MSK 2015 Stream:
MASTER-masterdb:79607156833968 SLAVE:79607156764128
Replay:79607156785488 :: REPLAY 47 KBytes (00:00:00.006846 seconds)
Tue Feb 24 15:05:05 MSK 2015 Stream:
MASTER-masterdb:79607158419848 SLAVE:79607158344856
Replay:79607158396352 :: REPLAY 23 KBytes (00:00:00.005228 seconds)
Tue Feb 24 15:05:06 MSK 2015 Stream:
MASTER-masterdb:79607160004776 SLAVE:79607159962400
Replay:7960715998 :: REPLAY 16 KBytes (00:00:00.003162 seconds)
*--here apply process just stops*

Tue Feb 24 15:05:07 MSK 2015 Stream:
MASTER-masterdb:79607161592048 SLAVE:79607161550576
Replay:79607160986064 :: REPLAY 592 KBytes (00:00:00.398376 seconds)
Tue Feb 24 15:05:08 MSK 2015 Stream:
MASTER-masterdb:79607163272840 SLAVE:79607163231384
Replay:79607160986064 :: REPLAY 2233 KBytes (00:00:01.446759 seconds)
Tue Feb 24 15:05:09 MSK 2015 Stream:
MASTER-masterdb:79607164958632 SLAVE:79607164904448
Replay:79607160986064 :: REPLAY 3879 KBytes (00:00:02.497181 seconds)
Tue Feb 24 15:05:10 MSK 2015 Stream:
MASTER-masterdb:79607166819560 SLAVE:7960716612
Replay:79607160986064 :: REPLAY 5697 KBytes (00:00:03.543107 seconds)
Tue Feb 24 15:05:11 MSK 2015 Stream:
MASTER-masterdb:79607168595280 SLAVE:79607168566536
Replay:79607160986064 :: REPLAY 7431 KBytes (00:00:04.589736 seconds)
Tue Feb 24 

[GENERAL] Create Virtual Indexes on Postgres

2015-02-25 Thread Sreerama Manoj
Hi,
 I use a Postgres 9.4 database. I am optimizing queries using the results
of EXPLAIN and EXPLAIN ANALYZE, and sometimes I create indexes to optimize
them. But sometimes I was not successful: even when I created an index, the
planner did not use it.

So my question is: can we know whether the planner will use an index
before actually creating a real index? Or can we create a virtual or
hypothetical index that is known only to the planner and not to the user?
Or is there any other way to do this? If so, please share it with me.
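
One option, assuming you can install third-party extensions, is the HypoPG
extension, which creates hypothetical indexes that only the planner can see.
A rough sketch of how it is typically used (table and column names made up):

create extension hypopg;
select * from hypopg_create_index('create index on my_table (my_col)');
explain select * from my_table where my_col = 42;  -- the plan may now use the hypothetical index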


Re: [GENERAL] Longest prefix matching CTE

2015-02-25 Thread Pavel Stehule
Some other solutions

http://postgres.cz/wiki/PostgreSQL_SQL_Tricks_II#Fast_searching_of_longer_prefix

2015-02-25 9:04 GMT+01:00 Tim Smith randomdev4+postg...@gmail.com:

 Will take a look.  Thanks steve.

 On 24 February 2015 at 23:57, Steve Atkins st...@blighty.com wrote:
 
  On Feb 24, 2015, at 3:50 PM, Tim Smith randomdev4+postg...@gmail.com
 wrote:
 
 
 
  The goal being to match the longest prefix given a full phone number,
 e.g.
 
 
  61234567890  would match australia proper 61
  whilst
  61134567890 would match Australia premium 6113
  and
  61894321010 would match Australia - Sydney 61893
 
  I know the answer involves Postgres CTE, but I haven't used CTEs much
  yet... let alone in complex queries such as this.
 
  Thanking you all in advance for your kind help.
 
  There's probably a CTE approach for it, but you might want to look
  at https://github.com/dimitri/prefix too - it's an extension that's
 designed
  specifically for longest prefix matching, and that uses gist indexes to
  do it efficiently.
 
  Cheers,
Steve
 
 
 
  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Sequences not moved to new tablespace

2015-02-25 Thread Albe Laurenz
Guillaume Drolet wrote:
 If you want to move a whole database to a different tablespace (the only 
 reason
 I can think of for doing what you are trying to so), use the command
 ALTER DATABASE ... SET TABLESPACE ...

 Thanks Laurenz. I tried your suggestion:

 psql -U postgres -c ALTER DATABASE mydb SET TABLESPACE pg_default;

 I get this message:
 ERROR:  some relations of database mortalite are already in tablespace 
 pg_default
 HINT : You must move them back to the database's default tablespace before 
 using this command.

 But if I do SHOW default_tablespace; in mydb, it showed pg_default as 
 the default tablespace.

 So I tried changing it back to the tablespace I want to get rid of to 
 subsequently moved everything
 back there so that ultimately, it lets me move everything to pg_default:
 ALTER DATABASE mydb SET default_tablespace = diamonds;

 And then:
 psql -U postgres -c ALTER DATABASE mydb SET TABLESPACE diamonds;

 ALTER DATABASE is issued but nothing gets physically moved to diamonds. Why?
 
 I guess the problem is that you already moved a lot of tables around.
 
 Could you connect to the database and try the following:

 SELECT t.relname, t.reltablespace, sp.spcname
 FROM pg_class t LEFT JOIN
  pg_tablespace sp ON sp.oid = t.reltablespace;

  relname  | reltablespace |  
 spcname
 --+---+
[...]
  mod09a1_sur_refl_b05_amonth_idx  |  1663 | 
 pg_default
  mod44b_cloud_rid_seq | 0 |
  pg_toast_2619| 0 |
  pg_type  | 0 |
  pg_authid_rolname_index  |  1664 | 
 pg_global
  pg_authid_oid_index  |  1664 | 
 pg_global
[...]

Like Tom suggested, you should move the tables from pg_default back to the 
database's
default tablespace and then use ALTER DATABASE to move the database tablespace.

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Longest prefix matching CTE

2015-02-25 Thread Alban Hertroys

 On 25 Feb 2015, at 24:50, Tim Smith randomdev4+postg...@gmail.com wrote:
 
 Have an Oracle connect by SQL that looks something like :
 
 select phone, pfx, len, (select info from codes where
 pfx = x.pfx) infot
 from (
 select :x phone, to_number(substr( :x, 1, length(:x)-level+1 )) pfx,
 length(:x)-level+1 len
   from dual
connect by level <= length(:x)
 order by level
) x
   where rownum = 1
   and (select info from codes where pfx = x.pfx) is not null
 /

 The goal being to match the longest prefix given a full phone number, e.g.

 I know the answer involves Postgres CTE, but I haven't used CTEs much
 yet... let alone in complex queries such as this.

The CTE would look something like this, assuming that :x is some parameter from 
outside the query ($1 here):

with recursive x(phone, pfx, len, level) as (
    select $1 as phone, to_number(substr($1, 1, length($1))) as pfx,
           length($1) as len, 1 as level
    union all
    select $1 as phone, to_number(substr($1, 1, length($1)-level+1)) as pfx,
           length($1)-level+1 as len, level+1 as level
    from x
    where level <= x.len
)
select * from x;

Or:
select $1 as phone, to_number(substr($1, 1, length($1) - pos)) as pfx,
       length($1) - pos as len
from generate_series(0, length($1)-1) as x(pos);

BTW, I didn't test any of these (I'm late already!).

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.3: bug related to json

2015-02-25 Thread Torsten Förtsch
On 25/02/15 07:22, David G Johnston wrote:
 I'm doubting you intended to join a bunch of commas using the field value as
 the delimiter...methinks you got the argument order reversed for
 string_agg.

OMG, I am so stupid. Thanks.
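
For the record, a tiny illustration of the argument order (a generic example,
not the original query):

select string_agg(v, ',') from (values ('a'), ('b'), ('c')) t(v);
 string_agg
------------
 a,b,c
(1 row)

With the arguments swapped, the query joins commas using each value as the
delimiter instead.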


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Longest prefix matching CTE

2015-02-25 Thread Tim Smith
Will take a look.  Thanks steve.

On 24 February 2015 at 23:57, Steve Atkins st...@blighty.com wrote:

 On Feb 24, 2015, at 3:50 PM, Tim Smith randomdev4+postg...@gmail.com wrote:



 The goal being to match the longest prefix given a full phone number, e.g.


 61234567890  would match australia proper 61
 whilst
 61134567890 would match Australia premium 6113
 and
 61894321010 would match Australia - Sydney 61893

 I know the answer involves Postgres CTE, but I haven't used CTEs much
 yet... let alone in complex queries such as this.

 Thanking you all in advance for your kind help.

 There's probably a CTE approach for it, but you might want to look
 at https://github.com/dimitri/prefix too - it's an extension that's designed
 specifically for longest prefix matching, and that uses gist indexes to
 do it efficiently.

 Cheers,
   Steve



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.3: bug related to json

2015-02-25 Thread Torsten Förtsch
On 25/02/15 07:34, David G Johnston wrote:
 Torsten Förtsch wrote
  Is there anything similar for JSON scalars?
 IDK, but have you tried ::text?

yes. Here is the difference

select * from (values (('{"a":"b"}'::json -> 'a')::text),
                      ('{"a":"b"}'::json ->> 'a')) t;
 column1
---------
 "b"
 b

Torsten


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general