Re: [GENERAL] Recurring events
On 6/06/2011 8:59, Thomas Guettler wrote:
> Hi,
>
> how do you store recurring events in a database? Selecting all events
> in a week/month should be fast (coming from an index).
>
> My solution looks like this:
>
> Table event:
> Columns: id, name, recurring, start_datetime, end_datetime
>
> recurring is weekly, monthly, yearly or NULL. end_datetime can be NULL
> (open end).
>
> Can you create an indexed view with infinite rows? I only want to
> index the last three years and the next three years.
>
> Another solution would be to fill a table with "serialized" events.
> The recurring events would be created and inserted into a table. This
> can only be done in a time frame like the above (last three years,
> next three years). If a recurring event gets altered, all its
> serialized events need to be updated.
>
> Any feedback?

I have had success using the instructions at
https://github.com/bakineggs/recurring_events_for

-- 
Troy Rasiah

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
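For reference, the "serialized events" variant Thomas describes can be sketched roughly like this (table, column, and index names are illustrative, not taken from the thread):

```sql
-- Rule table, with the columns described in the post
CREATE TABLE event (
    id             serial PRIMARY KEY,
    name           text NOT NULL,
    recurring      text,        -- 'weekly' | 'monthly' | 'yearly' | NULL
    start_datetime timestamp NOT NULL,
    end_datetime   timestamp    -- NULL means open end
);

-- Pre-expanded occurrences over a bounded window
CREATE TABLE event_occurrence (
    event_id  integer NOT NULL REFERENCES event(id) ON DELETE CASCADE,
    occurs_at timestamp NOT NULL
);
CREATE INDEX event_occurrence_occurs_at_idx ON event_occurrence (occurs_at);

-- Expand weekly rules three years ahead; monthly/yearly work the same way
INSERT INTO event_occurrence (event_id, occurs_at)
SELECT e.id, e.start_datetime + (n * interval '1 week')
FROM event e, generate_series(0, 52 * 3) AS g(n)
WHERE e.recurring = 'weekly'
  AND e.start_datetime + (n * interval '1 week')
      <= COALESCE(e.end_datetime, 'infinity');
```

A week/month listing is then a plain range scan on occurs_at, and altering a rule means deleting and re-inserting its occurrences, exactly the maintenance cost Thomas points out.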
[GENERAL] Use of search path in plpgsql functions, PG 8.3.12
Hello,

At present I have the following:

Schema 1      - Table: events
Public schema - Table: events

Function:

  CREATE OR REPLACE FUNCTION "public"."recurring_events_for" (
    "range_start"  timestamp,
    "range_end"    timestamp,
    "time_zone"    varchar,
    "events_limit" integer
  ) RETURNS SETOF "events" AS

If I set the search path to schema1,public the function still returns
rows from the events table in the public schema. I would like to use
the same function for both schemas.

I'm sure I'm missing something obvious - can anyone help me understand
this better?

Thanks

-- 
Troy Rasiah
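One common workaround can be sketched like this (not from the thread; the function body is elided). The return type "events" is resolved to one specific table at creation time, and plpgsql caches plans per session, so relying on the caller's search_path to switch tables is unreliable; instead, a separate copy of the function per schema, with its search path pinned at the function level (supported from 8.3), behaves predictably:

```sql
-- Hypothetical sketch: one copy per schema, path pinned on the function
SET search_path = schema1, public;

CREATE OR REPLACE FUNCTION schema1.recurring_events_for(
    range_start  timestamp,
    range_end    timestamp,
    time_zone    varchar,
    events_limit integer
) RETURNS SETOF schema1.events AS $body$
BEGIN
    -- same body as the public version; unqualified references to
    -- "events" now resolve to schema1.events
    RETURN;
END;
$body$ LANGUAGE plpgsql
   SET search_path = schema1, public;
```

The trailing per-function SET means the function sees the same path no matter what the calling session has set.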
[GENERAL] Query not using index
Nested Loop  (cost=14.16..2539.09 rows=344 width=42) (actual time=34.092..62.106 rows=1777 loops=1)
  ->  Bitmap Heap Scan on keyword_data k  (cost=14.16..593.02 rows=231 width=42) (actual time=33.937..39.487 rows=975 loops=1)
        Recheck Cond: (idxfti @@ '''water'' & !''suppli'' & !''inspector'' & !''offic'' & !''clerk'''::tsquery)
        ->  Bitmap Index Scan on idxfti_idx  (cost=0.00..14.10 rows=231 width=0) (actual time=33.614..33.614 rows=975 loops=1)
              Index Cond: (idxfti @@ '''water'' & !''suppli'' & !''inspector'' & !''offic'' & !''clerk'''::tsquery)
  ->  Index Scan using keyword_page_linkup_idx on keyword_page_linkup kp  (cost=0.00..8.40 rows=2 width=8) (actual time=0.016..0.019 rows=2 loops=975)
        Index Cond: (kp.keyword_id = k.id)
  ->  Index Scan using pages_new_pkey on pages_new p  (cost=0.00..0.28 rows=1 width=12) (actual time=0.012..0.014 rows=1 loops=1777)
        Index Cond: (p.id = kp.page_id)
  ->  Index Scan using gazette_pkey on gazette g  (cost=0.00..0.29 rows=1 width=11) (actual time=0.010..0.012 rows=1 loops=1777)
        Index Cond: (g.id = p.gazette)
Total runtime: 127.966 ms
(18 rows)

Below are the relevant table definitions:

gazette=# \d gazette
                              Table "public.gazette"
      Column      |       Type        |                      Modifiers
------------------+-------------------+------------------------------------------------------
 year             | integer           |
 doctype          | character varying |
 ggtype           | character varying |
 old_vol          | character varying |
 vol              | integer           | default 0
 sequence         | integer           | default 0
 pagerange        | character varying |
 year_start_page  | integer           | default 0
 year_finish_page | integer           | default 0
 ggtype_display   | character varying |
 lr_type          | character varying |
 gaz_start_page   | integer           | default 0
 style            | character varying |
 missing          | character varying |
 gaz_date         | date              |
 ref_gaz_date     | date              |
 page_qty         | character varying |
 gaz_no           | character varying |
 remarks          | character varying |
 type             | character varying |
 cat_no           | character varying |
 page_of_pgs      | character varying |
 day              | character varying |
 web_remarks      | character varying |
 id               | integer           | not null default nextval('gazette_id_seq'::regclass)
Indexes:
    "gazette_pkey" PRIMARY KEY, btree (id)
    "gazette_idx" btree (year)
    "gazette_idx1" btree (year, doctype, ggtype)
    "gazette_idx2" btree (year, doctype, ggtype, lr_type)
    "gazette_idx3" btree (gaz_date)

gazette=# \d pages_new
                            Table "public.pages_new"
 Column  |       Type        |                       Modifiers
---------+-------------------+---------------------------------------------------------
 id      | integer           | not null default nextval('pages_new_id_seq'::regclass)
 page_no | character varying |
 gazette | integer           |
Indexes:
    "pages_new_pkey" PRIMARY KEY, btree (id)
    "pages_new_idx" UNIQUE, btree (page_no, gazette)
    "pages_new_idx1" btree (gazette)
Foreign-key constraints:
    "pages_new_fk" FOREIGN KEY (gazette) REFERENCES gazette(id) ON UPDATE CASCADE ON DELETE CASCADE

gazette=# \d keyword_data
                           Table "public.keyword_data"
  Column  |       Type        |                        Modifiers
----------+-------------------+------------------------------------------------------------
 id       | integer           | not null default nextval('keyword_data_id_seq'::regclass)
 keyword  | character varying | not null
 category | integer           | not null
 subtopic | character varying |
 idxfti   | tsvector          |
Indexes:
    "keyword_data_pkey" PRIMARY KEY, btree (id)
    "idxfti_idx" gist (idxfti)
    "keyword_data_idx" btree (category)
    "keyword_data_idx1" btree (keyword)
Foreign-key constraints:
    "keyword_data_fk" FOREIGN KEY (category) REFERENCES categorys(categoryid) ON UPDATE CASCADE ON DELETE CASCADE

gazette=# \d keyword_page_linkup
Table "public.keyword_p
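Tangentially (not raised in the thread): for tsvector columns such as idxfti, a GIN index is usually faster to search than GiST, at the cost of slower updates. If the bitmap index scan above ever becomes the bottleneck, swapping the index type is worth an experiment:

```sql
-- Hypothetical swap of the existing GiST full-text index for GIN
DROP INDEX idxfti_idx;
CREATE INDEX idxfti_idx ON keyword_data USING gin (idxfti);
```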
Re: [GENERAL] Connection to second database on server
Scott Marlowe wrote:
> On Sun, Aug 24, 2008 at 10:19 PM, Troy Rasiah <[EMAIL PROTECTED]> wrote:
>> Scott Marlowe wrote:
>>> On Sun, Aug 24, 2008 at 8:43 PM, Troy Rasiah <[EMAIL PROTECTED]> wrote:
>>>> Sorry for bringing up an old post... If you have a generic set of
>>>> tables, e.g. a table of countries / post codes etc. which is used
>>>> across several databases, what is the best way to access / store
>>>> them? I currently
>>>> - use dblink to create views when I want to do joins,
>>>> OR
>>>> - just open up a separate db handle when I just want to display the
>>>> data (via a Perl script) from the 'generic database' (e.g. a select
>>>> list of countries)
>>>>
>>>> but was wondering whether schemas would apply to me as well?
>>>
>>> Yes, schemas would be much better. The nice thing is with
>>> search_path, you could have a setup where application1 and
>>> application2 live in different schemas but have access to a common
>>> schema. When running app1, you'd do something like:
>>>
>>>   set search_path='app1','commonschema';
>>>
>>> and when running app2 you'd change the app1 up there to app2, and
>>> then you could access the tables in both schemas without having to
>>> use prefixes.
>>
>> Thanks Scott. We currently do websites for different customers on the
>> same machine, so we have been setting each of them up with an
>> individual (database, user, pass).
>>
>> Instead, should I be setting them all up in the one database and
>> having individual schemas for each customer, and then only granting
>> each user access to their schema & the proposed 'commonschema'?
>
> That's how I'd do it. You probably want to drop the public schema as
> well. If you need to separate everybody from each other into their own
> database, then you could always replicate the common schema to each
> db, but if the common schema is large or changes a lot this could be a
> pain.

Thanks for the info, much appreciated.
-- 
Troy Rasiah
Re: [GENERAL] Connection to second database on server
Scott Marlowe wrote:
> On Sun, Aug 24, 2008 at 8:43 PM, Troy Rasiah <[EMAIL PROTECTED]> wrote:
>> Sorry for bringing up an old post... If you have a generic set of
>> tables, e.g. a table of countries / post codes etc. which is used
>> across several databases, what is the best way to access / store
>> them? I currently
>> - use dblink to create views when I want to do joins,
>> OR
>> - just open up a separate db handle when I just want to display the
>> data (via a Perl script) from the 'generic database' (e.g. a select
>> list of countries)
>>
>> but was wondering whether schemas would apply to me as well?
>
> Yes, schemas would be much better. The nice thing is with search_path,
> you could have a setup where application1 and application2 live in
> different schemas but have access to a common schema. When running
> app1, you'd do something like:
>
>   set search_path='app1','commonschema';
>
> and when running app2 you'd change the app1 up there to app2, and then
> you could access the tables in both schemas without having to use
> prefixes.

Thanks Scott. We currently do websites for different customers on the
same machine, so we have been setting each of them up with an
individual (database, user, pass).

Instead, should I be setting them all up in the one database and having
individual schemas for each customer, and then only granting each user
access to their schema & the proposed 'commonschema'?

-- 
Troy Rasiah
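Sketched out, the schema-per-customer layout Scott describes might look like this (all names are hypothetical; note that USAGE on a schema does not by itself grant access to its tables, so per-table GRANTs are still needed):

```sql
-- One schema per customer plus a shared schema
CREATE SCHEMA commonschema;
CREATE SCHEMA customer1;
CREATE ROLE customer1 LOGIN;   -- password set separately

GRANT USAGE ON SCHEMA customer1, commonschema TO customer1;
GRANT SELECT ON commonschema.countries TO customer1;  -- hypothetical table

-- early in customer1's session:
SET search_path = customer1, commonschema;
-- unqualified names now resolve first in customer1, then commonschema
```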
Re: [GENERAL] Connection to second database on server
Hermann Muster wrote:
> Bill Moran wrote:
>> In response to Hermann Muster <[EMAIL PROTECTED]>:
>>
>>> Hello everyone,
>>>
>>> I already asked about that a couple of days ago, but didn't get a
>>> satisfying solution for my problem, which is the following:
>>>
>>> I need to create a view that does a query on a second database on
>>> the same PostgreSQL server. dblink seems to be the only (???)
>>> solution for doing so. The problems are: referring to the dblink
>>> documentation, I'll have to hardcode (uaah!!) username and password.
>>> 1) Hence, everyone who could see the view definition, e.g. in
>>> pgAdmin, will be able to read the username and password (for the
>>> second database). 2) If I have multiple postgres users with
>>> different rights, they will all be treated as that one hard-coded
>>> user for the second database when querying the view.
>>>
>>> Someone suggested setting up a pgpass file so the query can get
>>> these dynamically. However, a pgpass file is also not secure, as
>>> username and password are stored in plain text, and problem #2 won't
>>> be solved either.
>>>
>>> Does anyone have an idea how to better set up a database view for
>>> viewing records from another database?
>>>
>>> MSSQL, for instance, allows schema prefixes for using other
>>> databases on the same server; the current user information is used
>>> to connect to that database as well.
>>
>> I feel this paragraph encapsulates your problem. To summarize: you're
>> doing it wrong.
>>
>> Don't take this as an attack; it's not. It's a statement that
>> PostgreSQL handles this kind of thing differently than MSSQL, and if
>> you try to do it the MSSQL way, you're going to hit these kinds of
>> problems.
>>
>> The PostgreSQL way to do it is to create schemas within a single
>> database; you can then use roles to set permissions, use search_path
>> to determine what users see by default, and schema-qualify when
>> needed.
>>
>> If you can't migrate your setup to use schemas, then I expect
>> anything else you do will feel sub-optimal, as PostgreSQL is designed
>> to use schemas for this sort of thing.
>
> I just found the time to try that out and it worked! Thank you for
> your help. I actually had no idea about using schemas in PostgreSQL.
> It was easy to set up, and dblink isn't needed anymore. I hope I won't
> run into any more problems. :-)
>
> Regards.

Sorry for bringing up an old post... If you have a generic set of
tables, e.g. a table of countries / post codes etc. which is used
across several databases, what is the best way to access / store them?
I currently
- use dblink to create views when I want to do joins,
OR
- just open up a separate db handle when I just want to display the
data (via a Perl script) from the 'generic database' (e.g. a select
list of countries)

but was wondering whether schemas would apply to me as well?

-- 
Troy Rasiah
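One refinement worth knowing (not mentioned in the thread): the search path can be stored as a per-role default, so application code never has to issue SET itself. Role and schema names here are hypothetical:

```sql
-- Every new session for this role starts with this path automatically
ALTER ROLE customer1 SET search_path = customer1, commonschema;
```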
[GENERAL] Word boundaries in regular expressions
Apologies if this has been posted somewhere else, but what is the
equivalent of Perl's \b in Postgres regular expressions?

-- 
Troy Rasiah
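For the record, Postgres regexes spell word boundaries with \m (start of word), \M (end of word) and \y (any word boundary, the closest analogue of Perl's \b). A quick check, using E'' strings so the backslashes survive literal parsing:

```sql
SELECT 'the water main' ~ E'\\ywater\\y';  -- true
SELECT 'watershed'      ~ E'\\ywater\\y';  -- false: no boundary after "water"
```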
Re: [GENERAL] how to unsubscribe from this group?
http://archives.postgresql.org/pgsql-general/

-- 
Troy Rasiah

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] Time Zone design issues
Alvaro Herrera wrote:
> novnov escribió:
>> I think I get the picture; this post is the closest to making sense
>> to me (my lack of understanding is the issue, obviously). But:
>>
>> What's the postgresql client, in a web app?
>>
>> When you write "The web app sets timezone='EST5EDT' and inserts a
>> time of '2007-07-11 12:30:00'." that's the black box that I'm asking
>> about. There is no web app other than the one I am writing. So, I
>> think it means I need to write code in my web app to handle the
>> conversion of the Postgres-stored UTC into user-relevant time, with
>> their local time zone etc.
>
> You are misunderstanding the "sets timezone=". What it means is that
> you call the following SQL command:
>
>   SET timezone TO 'EST5EDT';
>
> early in your webapp code. The value to use, you get from the user
> profile (stored in a table perhaps).

Sorry for the ignorance... but what defines a session in this context
in, say, Perl? Would it be something like:

  $dbh = DBI->connect();
  # do your set timezone stuff here
  # do your insert
  $rc = $dbh->disconnect;

>> If that's so, what I'm asking about is on the webapp side, and
>> perhaps people here are not so used to dealing with that. I don't
>> know. Just like one of the other (much appreciated) responders in
>> this thread suggested, the user's browser is likely offering the
>> user's tz info... but I am interested in seeing someone's mature
>> system for returning the user's local time based on their tz as
>> stored in their profile.
>
> I wouldn't trust the browser's TZ, and you would need a way to
> override it. So storing it in a table seems the easiest way.

-- 
Troy Rasiah
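The session-scoped behaviour Alvaro describes can be seen directly in SQL (a sketch, not from the thread):

```sql
-- timezone is per-session state: timestamptz input is interpreted in
-- it and output is rendered in it; storage is always UTC internally.
SET timezone TO 'EST5EDT';    -- value fetched from the user's profile
SELECT '2007-07-11 12:30:00'::timestamptz AT TIME ZONE 'UTC';
-- gives 2007-07-11 16:30:00, the UTC instant behind that local time
```

In the Perl sketch above, each DBI connect/disconnect pair is exactly one such session, so the SET must be reissued for every new $dbh (or stored as a per-role default on the server).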