[GENERAL] View permission error after upgrading from 8.4 - 9.2

2013-08-13 Thread Brian Hirt

I'm upgrading our database from 8.4 to 9.2 and I've run across a view that is 
no longer working.   When selecting from the view, I get a permission denied 
error on one of the referenced tables.   However, I can run the view's query 
directly without problems and I have read access to all the tables the view 
accesses.   I'm a bit confused as to what's causing this.   I'm logged in as 
the postgres superuser and don't have any permissions set up for the 
views/tables in question. Any help would be appreciated.

basement_QA=# create or replace view vcredit_info as SELECT 
game_credit.developer_id, credit_submission.game_id, 
    credit_submission.platform_id, game_credit.game_credit_title_id, 
    game_credit_title.developer_title_id, 
    ( SELECT min(substr(gv.release_date::text, 1, 4)::integer) AS min
           FROM game_version gv
          WHERE gv.approved = 1 AND gv.game_id = credit_submission.game_id AND 
gv.release_type_id = 1 AND gv.platform_id = credit_submission.platform_id) AS 
first_year, 
    ( SELECT max(substr(gv.release_date::text, 1, 4)::integer) AS max
           FROM game_version gv
          WHERE gv.approved = 1 AND gv.game_id = credit_submission.game_id AND 
gv.release_type_id = 1 AND gv.platform_id = credit_submission.platform_id) AS 
last_year, 
    developer_title.credit_title_category_id
   FROM game_credit
   JOIN credit_submission USING (credit_submission_id)
   JOIN game_status USING (game_id, platform_id)
   JOIN game_credit_title USING (game_credit_title_id)
   JOIN developer_title USING (developer_title_id)
  WHERE game_status.approved = 1 AND credit_submission.approved = 1;
CREATE VIEW
basement_QA=# select count(*) from vcredit_info where game_id = 30997;
ERROR:  permission denied for relation developer_title
basement_QA=# select count(*) from developer_title;
 count 
-------
   224
(1 row)

basement_QA=# select count(*) from (SELECT game_credit.developer_id, 
credit_submission.game_id, 
basement_QA(#     credit_submission.platform_id, 
game_credit.game_credit_title_id, 
basement_QA(#     game_credit_title.developer_title_id, 
basement_QA(#     ( SELECT min(substr(gv.release_date::text, 1, 4)::integer) AS 
min
basement_QA(#            FROM game_version gv
basement_QA(#           WHERE gv.approved = 1 AND gv.game_id = 
credit_submission.game_id AND gv.release_type_id = 1 AND gv.platform_id = 
credit_submission.platform_id) AS first_year, 
basement_QA(#     ( SELECT max(substr(gv.release_date::text, 1, 4)::integer) AS 
max
basement_QA(#            FROM game_version gv
basement_QA(#           WHERE gv.approved = 1 AND gv.game_id = 
credit_submission.game_id AND gv.release_type_id = 1 AND gv.platform_id = 
credit_submission.platform_id) AS last_year, 
basement_QA(#     developer_title.credit_title_category_id
basement_QA(#    FROM game_credit
basement_QA(#    JOIN credit_submission USING (credit_submission_id)
basement_QA(#    JOIN game_status USING (game_id, platform_id)
basement_QA(#    JOIN game_credit_title USING (game_credit_title_id)
basement_QA(#    JOIN developer_title USING (developer_title_id)
basement_QA(#   WHERE game_status.approved = 1 AND credit_submission.approved = 
1) as myview where myview.game_id = 30997;
 count 
-------
    66
(1 row)

basement_QA=# select CURRENT_USER;
 current_user 
--------------
 postgres
(1 row)
basement_QA=# \dp vcredit_info
                              Access privileges
 Schema |     Name     | Type | Access privileges | Column access privileges 
--------+--------------+------+-------------------+--------------------------
 public | vcredit_info | view |                   | 
(1 row)

basement_QA=# \dp developer_title;
                                Access privileges
 Schema |      Name       | Type  | Access privileges | Column access privileges 
--------+-----------------+-------+-------------------+--------------------------
 public | developer_title | table |                   | 
(1 row)




Re: [GENERAL] View permission error after upgrading from 8.4 - 9.2

2013-08-13 Thread Brian Hirt

None of the relations used by vcredit_info are views.   They are all tables.   
Oddly, I dropped the view and recreated it and the problem went away.  Earlier 
I was just using create or replace view and the problem persisted.   The schema 
was created by using pg_restore from an 8.4 custom dump.   I can do another 
pg_restore and see if the problem is reproducible if you want.

On Aug 13, 2013, at 12:03 PM, Tom Lane t...@sss.pgh.pa.us wrote:

Brian Hirt bh...@me.com writes:
I'm upgrading our database from 8.4 to 9.2 and I've run across a view that is 
no longer working.  When selecting from the view, I get a permission denied 
error on one of the referenced tables.  However, I can run the view's query 
directly without problems and I have read access to all the tables the view 
accesses.

Permissions checks for tables referenced by a view are done as the view's
owner. I'm suspicious that one of the relations used in your view
vcredit_info is itself a view that references developer_title, and
is owned by some other user with less privilege than you.

regards, tom lane
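
For what it's worth, ownership is easy to check from the catalogs.  Something 
along these lines (a sketch using the relation names from this thread) shows 
who owns the view and the table:

select relname, relkind, pg_get_userbyid(relowner) as owner
  from pg_class
 where relname in ('vcredit_info', 'developer_title');

If the view's owner lacks SELECT on developer_title, selecting from the view 
fails this way even when a superuser runs the query.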




Re: [GENERAL] Doubt in Backup

2011-03-21 Thread Brian Hirt

 Dear all,
 
 2 days ago, I needed to back up 2 databases on my database server because I 
 needed to format the system and reinstall again with the backed-up databases.
 
 After a fresh install of ubuntu-10.4, I installed the postgreplus-8.4 binary and I 
 think I gave it the previous data directory /hdd2-1/postgres_data during 
 installation for my own testing. 
 

You really shouldn't back up the filesystem to back up your database.  There are 
many situations where it will not work and you will shoot yourself in the foot. 
  You should really use pg_dump or pg_dumpall.   See 
http://www.postgresql.org/docs/9.0/static/backup.html and 
http://www.postgresql.org/docs/9.0/static/backup-file.html for more 
information.  

 I think it picked up the previous data and therefore there was no need to 
 restore the backups of the 2 databases that were taken before.
 
 But what I have noticed is that there are 6 databases in my Postgres cluster, and 
 one database, pdc_uima, doesn't have any entry in the \l output.
 
 However, I restored it from my previous backup.
 
 Does someone have any comments or suggestions on this? What is the reason for 
 this strange problem?
 
 
 Thanks & best Regards,
 Adarsh Sharma



Re: [GENERAL] Values larger than 1/3 of a buffer page cannot be indexed.

2011-03-13 Thread Brian Hirt
On Mar 13, 2011, at 12:05 PM, Dmitriy Igrishin wrote:

 Hey Viktor,
 
 2011/3/13 Viktor Nagy viktor.n...@toolpart.hu
 hi,
 
 when trying to insert a long-long value, I get the following error:
 
  index row size 3120 exceeds maximum 2712 for index ir_translation_ltns
 HINT:  Values larger than 1/3 of a buffer page cannot be indexed.
 Consider a function index of an MD5 hash of the value, or use full text 
 indexing.
 
 is there a way to generate this recommended function index of an md5 hash on 
 an already existing database and tables (filled with data)?
 Just create the index this way, e.g.
 CREATE INDEX ir_translation_ltns ON tab ((md5(col)));
 where tab and col are the table and column on which you want
 to create the btree index.

This probably goes without saying, but you'll have to use md5(col) = 
md5('blahblahblahblah') in your qualifiers to get the benefit of the index.  
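
Roughly (a sketch using the generic tab/col names from the example above):

CREATE INDEX tab_col_md5_idx ON tab ((md5(col)));
-- the planner will only consider the index if the qualifier uses the
-- same expression:
SELECT * FROM tab WHERE md5(col) = md5('some very long value');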

--brian


[GENERAL] Old values in statement triggers?

2010-10-21 Thread Brian Hirt
Is it possible to reference the old values in a statement trigger using 
plpgsql?   I'm looking for something similar to NEW and OLD that row triggers 
have, but I don't see anything @ 
http://www.postgresql.org/docs/current/static/plpgsql-trigger.html which 
doesn't really offer any examples for statement triggers.

If what I'm looking for doesn't exist, are there any common workarounds that 
people use to find the set of updated/inserted/deleted rows?

Thanks in advance,

Brian



Re: [GENERAL] Old values in statement triggers?

2010-10-21 Thread Brian Hirt
Thanks Josh,

On Oct 21, 2010, at 7:49 AM, Josh Kupershmidt wrote:

 2010/10/21 Grzegorz Jaśkiewicz gryz...@gmail.com:
 OLD.column_name
 NEW.column_name ?
 
 I believe OP is asking specifically about statement-level triggers. As

Yup.

 the docs 
 http://www.postgresql.org/docs/current/static/trigger-definition.html
 say:
 | Statement-level triggers do not currently have any way to
 | examine the individual row(s) modified by the statement.
 

I don't know how I didn't see that in the docs when I was looking.  It must 
have been sleepy morning eyes or lack of coffee.

 What I've done is to have a row-level trigger that populates a
 temporary table with interesting changes that need further
 processing, and then a statement-level trigger which does bulk-updates
 based on what's in that temporary table. This comes in quite handy
 when bulk-loading data, e.g. with COPY.
 

I'll look at doing something like you describe, although I wonder if the 
overhead of doing a row trigger and then a mass update at the end with a 
statement trigger will really be worth it for what I'm doing.   I might just 
end up doing only a row trigger.
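
For the archives, a rough sketch of the pattern Josh describes (hypothetical 
table and column names, untested; the temp table has to exist in the session 
before the triggers fire):

CREATE TEMP TABLE changed_rows (id integer);

CREATE FUNCTION remember_row() RETURNS trigger AS $$
BEGIN
    -- row-level trigger: just note which row changed
    INSERT INTO changed_rows VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION process_rows() RETURNS trigger AS $$
BEGIN
    -- statement-level trigger: one bulk update for everything noted above
    UPDATE mytable_stats s
       SET n_changes = s.n_changes + 1
      FROM changed_rows c
     WHERE s.id = c.id;
    DELETE FROM changed_rows;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER remember_row_trig AFTER INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE remember_row();
CREATE TRIGGER process_rows_trig AFTER INSERT ON mytable
    FOR EACH STATEMENT EXECUTE PROCEDURE process_rows();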

--brian


Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-21 Thread Brian Hirt
 
 There are only two tables in the query.
 

Tim,

No, your query is written incorrectly.  I don't understand why you come on to 
this list all hostile and confrontational.  Regardless, people still try to 
help you and then you still ignore the advice of people that are giving you the 
solutions to your problems.

--brian




Re: [GENERAL] Copying data files to new hardware?

2010-10-13 Thread Brian Hirt
Evan,

Others can probably answer your question about copying better; in general 
we never do that, but I think if you use the same arch and major release of 
postgresql you will be okay.

We have used Slony successfully for all of our database upgrades, server 
maintenance and database moves over the last several years.  Slony 1.2 still 
supports postgres 8.2.   You can set up the new database on the new machine, 
set it up as a slave and the current machine as a master, replicate the entire 
database, wait for replication to catch up, do a switchover, and shut down the 
old master and uninstall slony.  The plus is that you can accomplish what you 
need with pretty much 0 downtime.

--brian

On Oct 13, 2010, at 10:03 AM, EDH wrote:

 I have a large Postgres DB (1100 GB) that I'd like to move to a new
 physical machine.  In the past I've done this via pg_dump & restore,
 but the DB was much smaller then, and I'm concerned about how long
 that would take.  The version of pg currently in use is:
 
 PostgreSQL 8.2.5 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)
 4.1.1 20070105 (Red Hat 4.1.1-52)
 
 What I'd like to know is: if I install the latest 8.2.x release - I
 see 8.2.18 RPMs are available - can I do a straight copy of the
 contents of /var/lib/pgsql/data/ to the new server and start it up?
 Or is dump & restore the only real way to do this?
 
 If I have to do a dump & restore I figure I may as well take the
 opportunity to migrate to 8.4 or 9.0, but I'd rather just get
 everything done as quickly as possible.
 
 Thanks,
 Evan
 




Re: [GENERAL] Copying data files to new hardware?

2010-10-13 Thread Brian Hirt
Yes, we've used Slony for migrating 8.2 -> 8.3 -> 8.4 and plan on using it to 
migrate to 9.0 in the near future.  You should be able to skip releases as well, 
like you say, 8.2 -> 8.4.

You'll probably want to test out both slony and 8.4 on your development 
machines first and make sure everything works okay.   It takes a little bit to 
get familiar with slony, it's not a simple program that you install and click a 
button to set up replication and have everything happen for you.  We spent a 
fair amount of time writing scripts to work with slony to help support our 
processes.

On Oct 13, 2010, at 10:59 AM, Evan D. Hoffman wrote:

 Thanks, Brian & Jaime.  Regarding Slony, would that allow for
 migration to a new version as well - i.e. moving from 8.2 on the old
 machine to 8.4 on the new machine via Slony with minimal downtime?
 
 The Slony method is one I hadn't considered.  Since our database is so
 large, even a direct file copy would require some downtime (since we'd
 need to stop the DB before beginning the copy).  Slony would probably
 let us cut the downtime from hours to minutes (dump & restore for us
 has historically taken days).
 
 Thanks again,
 Evan




Re: [GENERAL] pg_upgrade

2010-09-28 Thread Brian Hirt
Looks like pg_upgrade is using 32bit oids.  2147483647 is the max signed 32 bit 
int, but the oids for my tables are clearly larger than that. 

== output from pg_upgrade ==
Database: basement84_dev
relname: mit.company: reloid: 2147483647 reltblspace: 
relname: mit.company_history: reloid: 2147483647 reltblspace: 

== output from catalog query ==
basement84_dev=# select c.oid,c.relname from pg_catalog.pg_namespace n, 
pg_catalog.pg_class c where n.oid = c.relnamespace and n.nspname = 'mit';
    oid     |     relname     
------------+-----------------
 3000767630 | company
 3000767633 | company_history
(22 rows)
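
In case anyone wants to check their own cluster, something like this 
(an untested sketch) should list any relations past the signed 32-bit boundary:

select c.oid, n.nspname, c.relname
  from pg_catalog.pg_class c
  join pg_catalog.pg_namespace n on n.oid = c.relnamespace
 where c.oid > 2147483647;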


On Sep 28, 2010, at 10:51 AM, Tom Lane wrote:

 Brian Hirt bh...@me.com writes:
 I'm testing pg_upgrade out and ran into a couple of problems.   First when I 
 did pg_upgrade --check I got the tsearch2 tables preventing the upgrade from 
 happening:
 Database:  testdatabase
  public.pg_ts_dict.dict_init
  public.pg_ts_dict.dict_lexize
  public.pg_ts_parser.prs_start
  public.pg_ts_parser.prs_nexttoken
  public.pg_ts_parser.prs_end
  public.pg_ts_parser.prs_headline
  public.pg_ts_parser.prs_lextype
 
 For testing, at this point I really didn't care about tsearch, so I simply 
 dropped those tables so I could revisit them later -- however, I'm confused 
 about these tables in general, both pg_catalog.pg_ts_parser and 
 public.pg_ts_parser exist with different, albeit similar, schemas.   I think 
 that the table in public is no longer used and was a remnant from pre-8.3 
 when tsearch2 wasn't part of the distribution, can anyone confirm this?
 
 Correct, you should just drop the ones that aren't in pg_catalog.
 
 
 Anyway, after removing the tsearch tables, I did pg_upgrade --check again 
 and it said the clusters were compatible. I proceeded to run the upgrade 
 command and it bombed out in the "Restoring user relation files" section.
 
 That sure looks like a bug, but there's not enough info here to
 diagnose.  Is there actually a pg_toast.pg_toast_2147483647 table
 in the 8.4 cluster?  (I'm betting not.)  Could you try extracting
 a test case?  I wonder whether pg_dump -s from the 8.4 database,
 loaded into a fresh 8.4 database, would be enough to reproduce.
 
   regards, tom lane
 




Re: [GENERAL] pg_upgrade

2010-09-28 Thread Brian Hirt
It looks like it's related to atol

$ cat test-atol.c 
#include <stdlib.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
  unsigned int test1;
  long test2;
  long long test3;
  unsigned int test4;

  /* atol() returns long; on a 32-bit build glibc clamps it at LONG_MAX */
  test1 = (unsigned int) atol("3000767169");
  test2 = (long) atol("3000767169");
  /* atoll() returns long long, so the value survives intact */
  test3 = atoll("3000767169");
  test4 = (unsigned int) atoll("3000767169");

  fprintf(stderr, "%u %ld %lld %u\n", test1, test2, test3, test4);
  return 0;
}

$ make test-atol
cc test-atol.c   -o test-atol
$ ./test-atol 
2147483647 2147483647 3000767169 3000767169


I think C90 and C99 specify different behaviors with atol

Is there some standard way postgresql parses integer strings?  Maybe that 
method should be used instead of duplicating the functionality so at least the 
two behave consistently.

--brian

On Sep 28, 2010, at 2:00 PM, Bruce Momjian wrote:

 Brian Hirt wrote:
 Looks like pg_upgrade is using 32bit oids.  2147483647 is the max signed 32 
 bit int, but the oids for my tables are clearly larger than that. 
 
 == output from pg_upgrade ==
 Database: basement84_dev
 relname: mit.company: reloid: 2147483647 reltblspace: 
 relname: mit.company_history: reloid: 2147483647 reltblspace: 
 
 == output from catalog query ==
 basement84_dev=# select c.oid,c.relname from pg_catalog.pg_namespace n, 
 pg_catalog.pg_class c where n.oid = c.relnamespace and n.nspname = 'mit';
     oid     |     relname     
 ------------+-----------------
 3000767630 | company
 3000767633 | company_history
 (22 rows)
 
 
 Interesting.  Odd it would report the max 32-bit signed int.  I wonder
 if it somehow is getting set to -1.  I looked briefly at the pg_upgrade
 code and it appears to put all oids in unsigned ints.
 
 -- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com
 
  + It's impossible for everything to be true. +
 




Re: [GENERAL] pg_upgrade

2010-09-28 Thread Brian Hirt
Bruce,

The applied patch has the same behavior on i686 Ubuntu 10.04.   It looks like 
atol() is just a wrapper for strtol() in stdlib.h.   I think you want strtoul() 
instead of strtol().

When I change str2uint() to use strtoul(), pg_upgrade completes without a 
problem (I still haven't tested the upgraded database, but I expect it will be 
just fine).

I think it's pretty uncommon for the OID to be that big which is why nobody 
stumbled onto this.   This particular installation has pretty much been 
reloading development databases non stop for the last year.  Also, people tend 
to initdb a lot when testing and doing development which will keep resetting 
the oid low.

Thanks for getting this one fixed

--brian

On Sep 28, 2010, at 3:49 PM, Bruce Momjian wrote:
 I have applied the attached patch to HEAD and 9.0.X.  Odd I had never
 received a bug report about this before.  Good thing it didn't silently
 fail, but it is designed to be very picky.
 



[GENERAL] pg_upgrade

2010-09-27 Thread Brian Hirt
I'm testing pg_upgrade out and ran into a couple of problems.   First when I 
did pg_upgrade --check I got the tsearch2 tables preventing the upgrade from 
happening:

Database:  testdatabase
  public.pg_ts_dict.dict_init
  public.pg_ts_dict.dict_lexize
  public.pg_ts_parser.prs_start
  public.pg_ts_parser.prs_nexttoken
  public.pg_ts_parser.prs_end
  public.pg_ts_parser.prs_headline
  public.pg_ts_parser.prs_lextype

For testing, at this point I really didn't care about tsearch, so I simply 
dropped those tables so I could revisit them later -- however, I'm confused 
about these tables in general, both pg_catalog.pg_ts_parser and 
public.pg_ts_parser exist with different, albeit similar, schemas.   I think 
that the table in public is no longer used and was a remnant from pre-8.3 when 
tsearch2 wasn't part of the distribution, can anyone confirm this?

Anyway, after removing the tsearch tables, I did pg_upgrade --check again and 
it said the clusters were compatible. I proceeded to run the upgrade command 
and it bombed out in the "Restoring user relation files" section.    I've 
included some output below, any advice on what is going on?  It seems something 
is messed up in either the check logic or actual migration code.

r...@ubuntu:~# /usr/pg-8.4/bin/oid2name 
All databases:
 Oid  Database Name  Tablespace
----------------------------------
...
   11564   postgres  pg_default
...
r...@ubuntu:~# /usr/pg-8.4/bin/oid2name -o 2683
From database "postgres":
  Filenode  Table Name
----------------------------------------
      2683  pg_largeobject_loid_pn_index


postg...@ubuntu:~$ /usr/pg-9.0/bin/pg_upgrade 
Performing Consistency Checks
-
Checking old data directory (/moby/pgdb-8.4)ok
Checking old bin directory (/usr/pg-8.4/bin)ok
Checking new data directory (/moby/pgdb-9.0)ok
Checking new bin directory (/usr/pg-9.0/bin)ok
Checking for reg* system oid user data typesok
Checking for /contrib/isn with bigint-passing mismatch  ok
Checking for large objects  ok
Creating catalog dump   ok
Checking for presence of required libraries ok

| If pg_upgrade fails after this point, you must
| re-initdb the new cluster before continuing.
| You will also need to remove the .old suffix
| from /moby/pgdb-8.4/global/pg_control.old.

Performing Migration

Adding .old suffix to old global/pg_control   ok
Analyzing all rows in the new cluster   ok
Freezing all rows on the new clusterok
Deleting new commit clogs   ok
Copying old commit clogs to new server  ok
Setting next transaction id for new cluster ok
Resetting WAL archives  ok
Setting frozenxid counters in new cluster   ok
Creating databases in the new cluster   ok
Adding support functions to new cluster ok
Restoring database schema to new clusterok
Removing support functions from new cluster ok
Restoring user relation files
  /moby/pgdb-8.4/base/11564/2683  
Could not find pg_toast.pg_toast_2147483647 in new cluster




[GENERAL] Need help with full text index configuration

2010-07-28 Thread Brian Hirt
I have some data that can be searched, and it looks like the parser is making 
some assumptions about the data that aren't true in our case and I'm trying to 
figure out how to exclude a token type.   I haven't been able to find the 
answer to my question so far, so I thought I would ask here.

The data I have are english words, and sometimes there are words separated by a 
/ without spaces.   The parser finds these things and tokenizes them as files.  
 I'm sure in some situations that's the right assumption, but based on my data, 
I know there will never be a file name in the column.   

For example, instead of the parser recognizing three asciiword tokens, it recognizes one 
asciiword and one file token.   I'd like a way to have the "/" just get parsed as 
blank. 

db=# select * from ts_debug('english','maybe five/six');
   alias   |    description    |  token   |  dictionaries  |  dictionary  |  lexemes   
-----------+-------------------+----------+----------------+--------------+------------
 asciiword | Word, all ASCII   | maybe    | {english_stem} | english_stem | {mayb}
 blank     | Space symbols     |          | {}             |              | 
 file      | File or path name | five/six | {simple}       | simple       | {five/six}
(3 rows)

I thought that maybe I could create a new configuration and drop the file 
mapping, but that doesn't seem to work either.

db=# CREATE TEXT SEARCH CONFIGURATION public.testd ( COPY = pg_catalog.english 
);
CREATE TEXT SEARCH CONFIGURATION
db=# ALTER TEXT SEARCH CONFIGURATION testd DROP MAPPING FOR file;
ALTER TEXT SEARCH CONFIGURATION
db=# SELECT * FROM ts_debug('testd','mabye five/six');
   alias   |    description    |  token   |  dictionaries  |  dictionary  | lexemes 
-----------+-------------------+----------+----------------+--------------+---------
 asciiword | Word, all ASCII   | mabye    | {english_stem} | english_stem | {maby}
 blank     | Space symbols     |          | {}             |              | 
 file      | File or path name | five/six | {}             |              | 
(3 rows)


Is there any way to do this?

Thanks for the help in advance.  I'm running 8.4.4


Re: [GENERAL] Need help with full text index configuration

2010-07-28 Thread Brian Hirt
Tom,

Thanks for the quick reply.   Doing a frontend mapping was my next option since 
I really don't care about / and the ability to search on it.   Preventing the 
parser from using the file tokenizer seemed like a better solution so I wanted 
to go down that path first (there are other false hits I was worried about too, 
like email, etc.).

I'm really confused about what ALTER TEXT SEARCH CONFIGURATION dict DROP 
MAPPING FOR file actually does.   The documentation seems to make it sound 
like it does what I want, but I guess it does something else.
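
In the meantime the blunt workaround seems to be rewriting the text before the 
parser ever sees it, e.g. (a sketch; the same replace() would have to go into 
the expression index and the query side too):

select to_tsvector('english', replace('maybe five/six', '/', ' '));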

--brian

On Jul 28, 2010, at 2:06 PM, Tom Lane wrote:

 Brian Hirt bh...@mobygames.com writes:
 For example instead of the parser recognizing three asciiword it recognizes 
 one asciiword and one file.   I'd like a way to have the / just get parsed 
 as blank. 
 
 AFAIK the only good way to do that is to write your own parser :-(.
 The builtin parser isn't really configurable.  (If you didn't mind
 maintaining a private version you could patch its state transition
 table manually, but that seems like a PITA.)
 
 For the case at hand it could be a pretty thin frontend to the builtin
 text parser --- just change / to space and then call the builtin one.
 contrib/test_parser/ might help you get started.
 
   regards, tom lane
 




[GENERAL] Support of multibyte encoding for pg_trgm

2009-06-25 Thread Brian Hirt

Teodor,

I ran across a commit message that shows multibyte encoding support in  
8.4 and my testing shows that to be the case as well.   Is there a  
back patch for 8.2?   My own quick attempt at creating one didn't work  
so well and before I start spending some major time trying I thought  
I'd check in with you.


Regards,

Brian Hirt

Postgres Version 8.2 and 8.3
test=# select show_trgm('魔法門英雄無敵2:王位爭奪戰');
   show_trgm   
---------------
 {  2, 2 }
(1 row)

Postgres Version 8.4
test=# select show_trgm('魔法門英雄無敵2:王位爭奪戰');
 
                                                            show_trgm
-----------------------------------------------------------------------------------------------------------------------------------------
 {0x84af82,0x8426fb,0x886567,0x8986ec,0x8c3de8,0x9c19f5,0xa0ef88,0xae352c,0xc7de23,0xf449ca,0x003dc9,0x1003c8,0x143838,0x64a38f,0x650b2a}
(1 row)


Log Message:
---
Support of multibyte encoding for pg_trgm

Modified Files:
--
pgsql/contrib/pg_trgm:
trgm.h (r1.9 -> r1.10)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/contrib/pg_trgm/trgm.h?r1=1.9&r2=1.10 
)

trgm_gin.c (r1.5 -> r1.6)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/contrib/pg_trgm/trgm_gin.c?r1=1.5&r2=1.6 
)

trgm_op.c (r1.10 -> r1.11)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/contrib/pg_trgm/trgm_op.c?r1=1.10&r2=1.11 
)





[GENERAL] can't drop table

2005-09-10 Thread Brian Hirt
I'm a bit baffled by this: I can't drop a table, and I'm getting the  
most bizarre message that doesn't make sense to me.  To make things  
worse, I was able to drop it on my test database which is a dump of  
my production database running the same version of postgresql.

The index it's complaining about is the primary for a different table.

Does anyone have any ideas?

basement=# \d game_giveaway
         Table "public.game_giveaway"
       Column        |  Type   |                       Modifiers                        
---------------------+---------+--------------------------------------------------------
 game_giveaway_id    | integer | not null default nextval('game_giveaway_id_seq'::text)
 game_id             | integer | 
 description         | text    | 
 date_placed_on_list | date    | 
 date_given_away     | date    | 
 given_to_user_id    | integer | 
Indexes:
    "game_giveaway_pkey" PRIMARY KEY, btree (game_giveaway_id)

basement=# drop TABLE public.game_giveaway ;
ERROR:  "game_pkey" is an index
basement=# select version();
                                                 version                                                  
----------------------------------------------------------------------------------------------------------
 PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.2 20031022 (Red Hat Linux 3.3.2-1)
(1 row)

basement=#
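
In case it helps with diagnosis, the dependencies the system has recorded for 
the table can be listed with something like this (a sketch; the regclass cast 
does the name-to-oid lookup):

select classid::regclass, objid, deptype
  from pg_depend
 where refobjid = 'public.game_giveaway'::regclass;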





[GENERAL] locking question

2004-04-27 Thread Brian Hirt
I have a question about locks.
I have a stats table that gets updated when some other table changes.   
Sometimes that other table is updated a 2nd time before the first stats 
update is finished, which causes an error.  I've tried using 'SET 
TRANSACTION ISOLATION LEVEL SERIALIZABLE' but get 'could not serialize 
access due to concurrent update'.  If I try 'READ COMMITTED' I get 
primary key failures.  This seems like it's a pretty common thing, and 
I'd like to be able to do this without having to write code to check 
for the 'could not serialize access due to concurrent update' error and 
re-run the query.

I don't have much experience with locking, because I haven't really 
needed to use it.  Any advice would be greatly helpful.   Below is 
basically the transaction I'm running -- it fails when a 2nd one starts 
while the 1st is still running.

BEGIN WORK;
delete from blah_stats where id = 1;
insert into blah_stats select id, count(*) from blah where id = 1 group 
by id;
COMMIT WORK;
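
One thing I've considered is taking an explicit lock so the transactions queue 
up behind each other (a sketch; it would serialize all writers of blah_stats, 
which might be acceptable for a stats table):

BEGIN WORK;
LOCK TABLE blah_stats IN EXCLUSIVE MODE;
delete from blah_stats where id = 1;
insert into blah_stats select id, count(*) from blah where id = 1 group by id;
COMMIT WORK;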

Regards,
Brian Hirt


Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-10 Thread Brian Hirt
Joe,

I've run into this on my ibook too.   The number of open files allowed is 
set very low by default.  On my system, 10.3.2, it's 256 for the 
postgres user.   You can raise it to something higher like 2048 with 
the ulimit command.   I have ulimit -n unlimited in my .bash_profile.

ibook:~ root# su - postgres
ibook:~ postgres$ ulimit -a
open files(-n) 256
ibook:~ postgres$ ulimit -n unlimited
ibook:~ postgres$ ulimit -a
open files(-n) 10240
ibook:~ postgres$
regards,



On Feb 10, 2004, at 8:04 AM, Tom Lane wrote:

Joe Lester [EMAIL PROTECTED] writes:
[ lots of ]
2004-02-10 08:46:01 LOG:  out of file descriptors: Too many open 
files;
release and retry
Sounds like you need to reduce max_files_per_process.  Also look at
increasing the kernel's limit on number of open files (I remember 
seeing
it in sysctl's output yesterday, but I forget what it's called).

Even though I'm getting these messages in my log, all the queries I
send to the server seem to be working.
The Postgres server itself will generally survive this condition
(because it usually has other open files it can close).  However,
everything else on the system is likely to start falling over :-(.
You don't want to run with the kernel file table completely full.
I'd suggest setting max_files_per_process to something like 50 to 100,
and making sure that the kernel's limit is max_files_per_process *
max_connections plus plenty of slop for the rest of the system.
			regards, tom lane



Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-10 Thread Brian Hirt
On Feb 10, 2004, at 10:57 AM, Tom Lane wrote:
Hmm, I hadn't even thought about ulimit.  I thought those settings were
per-process, not per-user.  If they are per-user they could be
problematic.
Not sure if it's per user or per process.   After I did ulimit -n 
unlimited the problem Joe describes went away for me.  I also lowered 
max_files_per_process to 1000.   My database has a large number of 
files in it, a few thousand, so I assumed one of the back end processes 
was going over the limit.

--brian



Re: [webmaster] [GENERAL] Mirrors that don't suck.

2003-12-24 Thread Brian Hirt
Dave,

I can go about setting up a web mirror, but I cannot offer a download 
mirror if we have to use ftp.   Our security policy is not to allow 
ftp access.  It's just one more thing to worry about if we open up our 
firewall.  I hope you understand.   Will it be a problem to only offer the 
web mirror?  We have a 10mbit/sec connection, but probably wouldn't 
want postgresql to take up more than 1mbit/sec on average.  We also 
have several free gigabytes of disk.

Best Regards,

Brian Hirt

On Dec 24, 2003, at 4:46 PM, Dave Page wrote:

It's rumoured that Brian Hirt once said:
What type bandwidth can you
expect to use if you become a mirror (1meg/sec? more less?)
I honestly couldn't say, but I would suspect it would be a *lot* less 
than
that. Whilst PostgreSQL is popular, the type of application that it is
means that it will never get anything remotely like the level of 
downloads
of say, Mozilla or OpenOffice.
Also are
http mirrors acceptable?
Currently only to mirror the web content (this is mainly because the
mirror code expects an ftp mirror to use ftp and a web mirror to use 
http
- changing this would require a fair bit of recoding).
I would like to offer up one of our servers
on mobygames.com because I love postgresql and it would be a way to
help out a bit, but I don't really know enough about the requirements
right now.
Who should i talk to?
Me please. I'll need your server IP, and the IP of whatever boxes will 
be
used to rsync the content. For web mirrors you must be able to setup a
virtual host, for ftp, I also need to know the path to the content (eg.
/pub/postgresql/)
However, there are already 7 US mirrors.
There's closer to 20, but most are not active :-(

Regards, Dave.






Re: [GENERAL] patch for pg_autovacuum

2003-11-29 Thread Brian Hirt
Bruce,

for what it's worth, Matthew O'Connor submitted a patch which includes 
my patch.

best regards,

Brian Hirt.

On Nov 29, 2003, at 10:14 PM, Bruce Momjian wrote:

[ Attachment, skipping... ]




[GENERAL] plpgsql question

2003-11-25 Thread Brian Hirt
I'm looking to find out how many rows were affected during an update in 
a trigger.  I ran across this message by Jan talking about this feature 
possibly being added to postgresql 6.5, but I can't find any reference 
to such a feature in the current documentation.   Did this ever make it 
into postgresql?

http://archives.postgresql.org/pgsql-sql/1999-02/msg00110.php
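
The closest thing I've found is plpgsql's GET DIAGNOSTICS, which reports the 
count for a statement issued inside the function itself (a sketch with a 
hypothetical table), but not for the statement that fired the trigger:

CREATE FUNCTION update_and_count() RETURNS integer AS '
DECLARE
    n integer;
BEGIN
    UPDATE mytable SET hits = hits + 1;
    GET DIAGNOSTICS n = ROW_COUNT;
    RETURN n;
END;
' LANGUAGE plpgsql;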



[GENERAL] question about error message

2003-11-21 Thread Brian Hirt
I got an error doing a vacuum analyze on one of our tables:

MSG: 7 (S1000): ERROR:  simple_heap_update: tuple concurrently updated

Does anyone know what this means, and more importantly... should I be worried about this?

I'm running 7.3.2 on redhat 7.3, dual 1.8 xeon with 1gb ram.


Re: [GENERAL] no records returned

2003-10-25 Thread Brian Hirt
Is it possible that there are spaces on the end?   What type is 
stck_sym?  If it's varchar or text the padding won't be removed 
automatically.

example:

basement=# create table test (t varchar(6));
CREATE TABLE
basement=# INSERT into test values( 'ZRAN  ');
INSERT 92249850 1
basement=# select * from test where t = 'ZRAN';
 t
---
(0 rows)
basement=#
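
If that's what's going on, trimming at query time should find the row 
(a sketch):

basement=# select * from test where rtrim(t) = 'ZRAN';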

On Oct 24, 2003, at 5:27 PM, [EMAIL PROTECTED] wrote:

running - (PostgreSQL) 7.1.3

I have loaded a table with stock market data.

-
-
-
ZRAN|2003-09-29|20030929|  731487| 20.81|  20.
ZRAN|2003-09-30|20030930|  731488| 19.43| 20.1
ZRAN|2003-10-01|20031001|  731489| 19.82| 19.9
ZRAN|2003-10-02|20031002|  731490| 19.56| 20.3
ZRAN|2003-10-03|20031003|  731491| 20.25|  21.
(609398 rows)
pma=> select count(*) from stck_dta_tbl_oprtnl;
 count  
--------
 609398
When I do a select * from stck_dta_tbl_oprtnl; I get the whole table
just fine; but when I try and retrieve just 1 record which is output
when doing the select *, ie:
pma=> select * from stck_dta_tbl_oprtnl where stck_sym = 'ZRAN';
stck_sym|dta_date|dta_date_num|dta_date_dys|opn|hi|lw|cls|vol|unk
----------------------------------------------------------------
(0 rows)
I get 0 rows, and no errors from postmaster.

Is there a limit to the number of rows in a table?
I have also tried building a unique index composed of the stck_sym and
dta_date but I still get 0 rows.
I have also noticed that when setting up a cursor that the first fetch
does NOT get the first record in the table, it also returns nothing 
with
no errors from the postmaster.  I am wondering if these bugs are 
related.

Thanks all for your help

Lynn









Re: [GENERAL] Picture with Postgres and Delphi

2003-09-10 Thread Brian Hirt
How do you deal with backing up the images?   Right now I can remotely 
back up my filesystem using rsync to an offsite location many times a 
day, only taking a very small amount of I/O, bandwidth and time.   
Dealing with the backup scared me away from using postgres in the first 
place.   The idea of doing a 200gb dump multiple times a day on an 
image database scares me.   So does doing a vacuum on it.  The I/O, 
time and bandwidth required to do this is daunting.

Are there any suggestions on how to do incremental backups of the 
images and any other suggestions on performance?  In the future I'd 
like to move some filesystem images to postgres to have centralized 
storage.  It would make some things easier, but I'm not sure it's worth 
the additional problems.  Hopefully I'm imagining the problems.

--brian

On Tuesday, September 9, 2003, at 08:56 PM, Jonathan Bartlett wrote:

For the education of me and maybe others too, why was
that?  i.e. what problems did you run into, that bytea avoids?
Compared to the filesystem, bytea provides data integrity.

Bytea gives you remote access, which you can cache if needed.

Bytea gives you the same permissions as anything else in Postgres, so 
you
don't have to worry about that separately.

Compared to BLOBs, bytea's are just simpler.  You can select them with 
a
single statement, you don't have to worry about leaving unreferenced
BLOBs, and, after 4 billion inserts, byteas are still meaningful while
BLOBs might not be. (due to OID problems).
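
The table side of it is simple enough (a sketch with hypothetical names; the 
bytes themselves are usually bound as a parameter from the client driver):

CREATE TABLE images (
    image_id serial PRIMARY KEY,
    filename text,
    data     bytea
);

-- octet_length() is handy for sanity-checking what got stored:
SELECT filename, octet_length(data) FROM images;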

Jon








[GENERAL] 7.4b1 performance

2003-08-28 Thread Brian Hirt
I just wanted to let the developers know, I'm VERY IMPRESSED with the 
7.4b1 release.  I've been using it exclusively on my two development 
machines since it was released without any problems at all.   I was 
also using cvs on and off for a while before the beta.

my development machines are an athlon 1800 with 1gb memory, 3 drive ide 
sw/raid5 running redhat9 and an  800mhz ibook with 640mb memory, 30gb 
ide drive running osx 10.2.6 with fink

I've also noticed significant performance improvements.  I have 
simulated a typical 30 minute period of our website by replaying 13,000 
requests from our web access logs using 40 concurrent clients fetching 
the pages.  The test mostly simulated read performance.  Since most of 
our site's traffic is data distribution, it's the more important part 
for us.  Postgres 7.3 took 73 minutes to generate the 13,000 pages and 
Postgres 7.4 took 45 minutes to generate the same 13,000 requests.   
The databases and codebase were identical for the tests.   The 
configuration parameters in postgres.conf were for all practical 
purposes identical.

Our database takes up several gigabytes on disk and we process in the 
range of 1,000,000 dynamic web pages a day, each page doing on average 
between 10 and 30 queries per page to display.

The graph below shows the most active pages and the average amount of 
real time in seconds (not system or cpu time) they take to create.  So 
part of the time is obviously spent waiting on other processes and in 
our application code.


[inline attachment: perf.png]


[GENERAL] question about indexing.

2001-09-30 Thread Brian Hirt

I have a table with about 1 million rows in it.  One of the columns in this
table is some sort of status (it's an int2).  Out of the million rows, only
about 100 of the rows have a status that is not like the rest.

for example:
  999,900 have the value 1
       23 have the value 2
       67 have the value 3
       10 have the value 4

I often want to fetch the rows within that subset of 100.  When I index this
column, the planner always seems to choose a table scan when I query it.
I've tried BTREE and HASH indexes and both do the same thing.

Yes, i do vacuum the table.

Does anyone know how to avoid all these table scans?

Thanks,

Brian







Re: [GENERAL] question about indexing.

2001-09-30 Thread Brian Hirt

That's the answer!  Thanks Doug.

basement=# explain select * from game_developer where approved = 2;
Seq Scan on game_developer  (cost=0.00..1920.17 rows=48 width=46)

basement=# explain select * from game_developer where approved = int2(2);
Index Scan using game_developer_approved on game_developer
(cost=0.00..80.87 rows=48 width=46)
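
Quoting the constant also works, since the literal is then resolved against 
the column's type (a sketch; it should produce the same index scan):

basement=# explain select * from game_developer where approved = '2';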

- Original Message -
From: Doug McNaught [EMAIL PROTECTED]
To: Brian Hirt [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; Brian A Hirt [EMAIL PROTECTED]
Sent: Sunday, September 30, 2001 7:36 PM
Subject: Re: [GENERAL] question about indexing.


 Brian Hirt [EMAIL PROTECTED] writes:

  I have a table with about 1 million rows in it.  One of the columns in
this
  table is some sort of status (it's an int2).  Out of the million rows,
only
  about 100 of the rows have a status that is not like the rest.

  Yes, i do vacuum the table.
 
  Does anyone know how to avoid all these table scans?

 Cast the value you're testing against in the query to int2 and you may
 see an improvement.  The planner isn't currently smart enough to
 realize it can use the index when the test value in the query is an
 int4.

 -Doug
 --
 In a world of steel-eyed death, and men who are fighting to be warm,
 Come in, she said, I'll give you shelter from the storm.-Dylan


