[GENERAL] Slow counting still true?
Release 9.2 should improve count(*) performance. Is this wiki page still valid? http://wiki.postgresql.org/wiki/Slow_Counting

Please update the content.

Thank you,
Thomas Güttler

--
Thomas Guettler, http://www.thomas-guettler.de/
E-Mail: guettli (*) thomas-guettler + de

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
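A note on the usual workaround from that wiki page: when an exact count is not needed, the planner statistics give a fast estimate. A minimal sketch, assuming a table named mytable (the name is illustrative):

```sql
-- Estimated row count from the planner statistics (kept fresh by
-- VACUUM / ANALYZE / autovacuum). Fast but only approximate.
-- Note: 9.2's index-only scans speed up exact count(*) only when
-- the visibility map marks most pages as all-visible.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'mytable';
```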
Re: [GENERAL] Value too long for type ....: Column name missing
On 13.04.2012 20:35, Scott Marlowe wrote:
> On Thu, Apr 12, 2012 at 2:16 AM, Thomas Guettler wrote:
>> Hi,
>>
>> I think it would be very good if PostgreSQL reported which column is too small:
>>
>>     Value too long for type character varying(1024)
>>
>> (message translated from German to English)
>>
>> Is there a reason not to report the column name?
>
> What version of pg are you using?  I think later versions do report the column.

9.1.3:

postgres=# create table "foo" ("bar" varchar(1));
CREATE TABLE
postgres=# insert into foo values ('asdf');
ERROR:  value too long for type character varying(1)
postgres=# select version();
                              version
PostgreSQL 9.1.3 on x86_64-pc-linux-gnu, compiled by gcc-4.6.real (Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1, 64-bit
(1 row)
Re: [GENERAL] recommended schema diff tools?
Hi,

about database schema migrations: I am very happy with South: http://south.aeracode.org/

It is written for Django (Python web framework), but could be used for database migrations outside Django, too.

Thomas Güttler

On 12.04.2012 17:10, Chris Angelico wrote:
> On Fri, Apr 13, 2012 at 12:57 AM, Welty, Richard wrote:
>> can anyone recommend an open source tool for diffing schemas? (it should go
>> without saying that i'm looking for ddl to update production and QA DBs from
>> development DBs, but i'll say it, just in case.)
>
> We toyed with this exact issue at work. In the end, we went the other
> direction, and created two files, both managed in source control: a .sql
> file with everything necessary to initialize the database from scratch, and
> a patch script. Every change gets done (by hand) to the primary .sql file,
> and the SQL statements needed to effect the transition (eg ALTER TABLE to
> add a column) get added to the patch script. ...
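The two-file workflow Chris describes can be sketched like this. The file names, the customer table, and the schema_version guard are illustrative assumptions, not from the thread:

```sql
-- schema.sql holds the full DDL to build a fresh database; patch.sql
-- (this file) replays incremental changes on existing databases.
-- Each change is applied by hand to schema.sql AND appended here.

-- One-row bookkeeping table so reruns know which patches were applied.
CREATE TABLE IF NOT EXISTS schema_version (version integer NOT NULL);

-- Patch 1: add a column (also added to the CREATE TABLE in schema.sql).
BEGIN;
ALTER TABLE customer ADD COLUMN email varchar(254);
UPDATE schema_version SET version = 1;
COMMIT;
```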
[GENERAL] Value too long for type ....: Column name missing
Hi,

I think it would be very good if PostgreSQL reported which column is too small:

    Value too long for type character varying(1024)

(message translated from German to English)

Is there a reason not to report the column name? How can you report a feature request?

Thomas Güttler
[GENERAL] Docs: Add Version Info: New since Version x.y ....
It would be very nice if the Postgres documentation contained version information like this: "application_name: new in version 9.0". Or: "... deprecated since version ...". Or: "removed in version x.y. Use ... instead".

Is there a reason why this is not done?

Thomas Güttler
Re: [GENERAL] Recurring events
Hi Craig and mailing list,

On 07.06.2011 00:54, Craig Ringer wrote:
> On 06/06/2011 06:59 PM, Thomas Guettler wrote:
>> Hi,
>>
>> how do you store recurring events in a database?
>
> I use two tables: one table that stores the recurring event, and another
> that's essentially a materialized view containing instances of the event.
>
> It's not ideal, but performs better than using generate_series to
> produce and filter the event series on the fly.
>
>> end_datetime can be NULL (open end).
>
> PostgreSQL has an ideal solution to this: the timestamp value
> 'infinite'. Using it dramatically simplified my interval-related
> queries. I initially used it in my design, only to discover that JDBC
> doesn't support infinite dates (argh!) and neither do many languages. I
> find this exceptionally frustrating.

I use Python (and the Django ORM) to access Postgres. Infinity is not supported, but that is no problem for me.

I looked at "Materialized Views" in the wiki: http://wiki.postgresql.org/wiki/Materialized_Views

The view gets updated by a trigger. But if the end date is infinite, you need to constrain the trigger to the next N years. I guess this is the best solution. Monthly, an external job (maybe cron) needs to update the view to create the missing events for N years + one month.

Thank you for your answer, Craig.

Thomas
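The design discussed above (a rule table plus a bounded table of materialized occurrences) can be sketched as follows. All table and column names are illustrative, not from the thread:

```sql
-- Recurrence rules.
CREATE TABLE event (
    id             serial PRIMARY KEY,
    name           text NOT NULL,
    recurring      text CHECK (recurring IN ('weekly', 'monthly', 'yearly')),
    start_datetime timestamptz NOT NULL,
    end_datetime   timestamptz  -- NULL = open end
);

-- Materialized occurrences, bounded to a window instead of "infinity".
CREATE TABLE event_occurrence (
    event_id  integer NOT NULL REFERENCES event(id) ON DELETE CASCADE,
    occurs_at timestamptz NOT NULL
);
-- This index makes "all events between time1 and time2" a range scan.
CREATE INDEX event_occurrence_occurs_at ON event_occurrence (occurs_at);

-- Expand rules into occurrences for the next 3 years; rerun monthly
-- (e.g. from cron) to extend the window for open-ended events.
INSERT INTO event_occurrence (event_id, occurs_at)
SELECT e.id, g
FROM event e,
     generate_series(
         e.start_datetime,
         least(coalesce(e.end_datetime, 'infinity'::timestamptz),
               now() + interval '3 years'),
         CASE e.recurring
             WHEN 'weekly'  THEN interval '1 week'
             WHEN 'monthly' THEN interval '1 month'
             WHEN 'yearly'  THEN interval '1 year'
         END) AS g
WHERE e.recurring IS NOT NULL;
```

The timestamptz form of generate_series needs PostgreSQL 8.4 or later; a rerun as sketched would also need a NOT EXISTS guard (or a delete-and-refill per window) to avoid duplicate occurrences.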
Re: [GENERAL] Recurring events
On 07.06.2011 09:57, Vincent Veyron wrote:
> On Monday, 06 June 2011 at 12:59 +0200, Thomas Guettler wrote:
>
>> how do you store recurring events in a database?
>>
>> Selecting all events in a week/month should be fast (coming from an index).
>>
>> My solution looks like this:
>>
>> Table event:
>>
>> Columns: id, name, recurring, start_datetime, end_datetime
>>
>> recurring is weekly, monthly, yearly or NULL.
>
> Maybe you could try something like what is used in cron, the scheduling
> program for GNU/Linux

I know cron very well. But I need to get all events on day X between time1 and time2 very quickly. If I build a crontab-like table, I need to check all entries before I can know which crontab lines get executed during this period.

Thomas
[GENERAL] Recurring events
Hi,

how do you store recurring events in a database? Selecting all events in a week/month should be fast (coming from an index).

My solution looks like this:

Table event:

Columns: id, name, recurring, start_datetime, end_datetime

recurring is weekly, monthly, yearly or NULL. end_datetime can be NULL (open end).

Can you create an indexed view with infinite rows? I only want to index the last three years and the next three years.

Another solution would be to fill a table with "serialized" events. The recurring events would be created and inserted into a table. This can only be done in a time frame like above (last three years, next three years). If a recurring event gets altered, all its serialized events need to be updated.

Any feedback?

Thomas Güttler
Re: [GENERAL] Trying to figure out why these queries are so slow
Hi,

just a guess: counting is slow, since it needs to check all rows. Explained here: http://wiki.postgresql.org/wiki/Slow_Counting

Thomas Güttler

Tim Uckun wrote:
> I have two tables. Table C has about 300K records in it. Table E has
> about a million records in it. Today I tried to run this query.
>
> update C
> set result_count = X.result_count
> from C
> inner join (select c_id, count(c_id) as result_count
>             from E
>             where c_id is not null
>             group by c_id) as X
> on C.id = X.c_id
>
> All the fields mentioned are indexed. In the case of Table C it's the
> primary key. In the case of table E it's just an index (non unique).
>
> I let this query run for about three hours before I cancelled it.
> ...
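Besides the counting cost, note that the UPDATE as quoted repeats table C in its FROM clause. In PostgreSQL the target table is already part of the query, so listing it again joins C against itself and multiplies the work. A hedged rewrite, reusing the column names from the post:

```sql
-- Join the aggregate subquery directly to the row being updated;
-- do NOT list the target table C again in the FROM clause.
UPDATE c
SET result_count = x.result_count
FROM (SELECT c_id, count(*) AS result_count
      FROM e
      WHERE c_id IS NOT NULL
      GROUP BY c_id) AS x
WHERE c.id = x.c_id;
```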
Re: [GENERAL] Get block of N numbers from sequence
Thomas Guettler wrote:
> Hi,
>
> how can you get N numbers (without holes) from a sequence?

If sequences could be locked like tables, it would be easy. In old versions of Postgres it worked: http://archives.postgresql.org//pgsql-hackers/2001-10/msg00930.php

Thomas
Re: [GENERAL] Get block of N numbers from sequence
Boszormenyi Zoltan wrote:
> Thomas Guettler wrote:
>> Hi,
>>
>> how can you get N numbers (without holes) from a sequence?
> # create sequence tmp_seq cache 1000;

Hi,

"alter SEQUENCE ... cache 100" survives a rollback. That's something I'd like to avoid:

foo_esg_modw=# select * from foo_isu_isu_id_seq;
begin; alter SEQUENCE foo_isu_isu_id_seq cache 100; rollback;
select * from foo_isu_isu_id_seq;

   sequence_name    | last_value | increment_by |      max_value      | min_value | cache_value | log_cnt | is_cycled | is_called
--------------------+------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------
 foo_isu_isu_id_seq |   90508740 |            1 | 9223372036854775807 |         1 |           1 |       6 | f         | t

BEGIN
ALTER SEQUENCE
ROLLBACK

   sequence_name    | last_value | increment_by |      max_value      | min_value | cache_value | log_cnt | is_cycled | is_called
--------------------+------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------
 foo_isu_isu_id_seq |   90508740 |            1 | 9223372036854775807 |         1 |         100 |       6 | f         | t
Re: [GENERAL] Get block of N numbers from sequence
hubert depesz lubaczewski wrote:
> On Tue, May 19, 2009 at 01:45:17PM +0200, Thomas Guettler wrote:
>> how can you get N numbers (without holes) from a sequence?
>
> alter sequence XXX increment by 1000;
> select nextval('XXX');
> alter sequence XXX increment by 1;

If other processes run nextval() between "increment by 1000" and "increment by 1", they leave big holes in the sequence.

Unfortunately begin; ... rollback; does not help. The "alter sequence" command stays in effect, even if the transaction gets rolled back:

db=# begin; alter SEQUENCE foo_seq increment by 100; rollback;
BEGIN
ALTER SEQUENCE
ROLLBACK
db=# select * from foo_seq;
 sequence_name | last_value | increment_by |      max_value      | min_value | cache_value | log_cnt | is_cycled | is_called
---------------+------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------
 foo_seq       |   90508740 |          100 | 9223372036854775807 |         1 |           1 |       6 | f         | t

db=# select version();
PostgreSQL 8.2.6 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.2.1 (SUSE Linux)

Thomas
[GENERAL] Get block of N numbers from sequence
Hi,

how can you get N numbers (without holes) from a sequence?

Thomas
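For later readers of this thread: sequences are deliberately non-transactional, so they cannot hand out gap-free blocks. A common alternative is a plain counter table whose row lock serializes allocators and rolls back cleanly. Table and counter names here are hypothetical:

```sql
-- One row per counter; UPDATE takes a row lock, so concurrent
-- allocators queue up, and an aborted transaction leaves no hole.
CREATE TABLE block_counter (
    name       text PRIMARY KEY,
    last_value bigint NOT NULL
);
INSERT INTO block_counter VALUES ('invoice', 0);

-- Reserve 100 contiguous numbers (RETURNING exists since 8.2):
BEGIN;
UPDATE block_counter
   SET last_value = last_value + 100
 WHERE name = 'invoice'
RETURNING last_value - 99 AS first_number, last_value AS last_number;
COMMIT;
```

The tradeoff: all writers serialize on that one row, which is exactly what makes the numbering gapless.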
Re: [GENERAL] select distinct, index not used
Thank you Tom. The cron job for vacuum+analyze was not installed on the host. (I had this idea some seconds after posting.) After vacuum+analyze the performance is good. I am happy.

Nevertheless, on a different host with nearly the same data, an index scan is used:

foo_hostone_foo=# explain analyze SELECT DISTINCT "foo_abc_abc"."lieferant" FROM "foo_abc_abc";
                                  QUERY PLAN
 Unique  (cost=15241.56..15820.71 rows=15 width=8) (actual time=1878.213..2393.550 rows=34 loops=1)
   ->  Sort  (cost=15241.56..15531.13 rows=115830 width=8) (actual time=1878.207..2227.478 rows=115830 loops=1)
         Sort Key: lieferant
         ->  Seq Scan on foo_abc_abc  (cost=0.00..3518.30 rows=115830 width=8) (actual time=0.042..226.883 rows=115830 loops=1)
 Total runtime: 2394.960 ms
(5 Zeilen)

foo_hostone_foo=# select version();
 PostgreSQL 8.2.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20061115 (prerelease) (SUSE Linux)
(1 Zeile)

foo_hosttwo_foo=# explain analyze SELECT DISTINCT "foo_abc_abc"."lieferant" FROM "foo_abc_abc";
                                  QUERY PLAN
 Unique  (cost=0.00..65641.70 rows=19 width=18) (actual time=0.163..1490.106 rows=68 loops=1)
   ->  Index Scan using foo_abc_abc_lieferant on foo_abc_abc  (cost=0.00..64536.38 rows=442127 width=18) (actual time=0.155..955.844 rows=227600 loops=1)
 Total runtime: 1490.481 ms
(3 Zeilen)

foo_hosttwo_foo=# select version();
 PostgreSQL 8.2.6 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.2.1 (SUSE Linux)
(1 Zeile)

Tom Lane wrote:
> Thomas Guettler writes:
>> why does the statement take so long? The column 'lieferant' is indexed. But
>> a sequential scan gets done.
>
> It might have something to do with the fact that the planner's idea of
> the size of the table is off by a factor of more than 100:
>
>> ->  Seq Scan on foo_abc_abc  (cost=0.00..468944.11 rows=15404611 width=8) (actual time=0.029..125458.870 rows=115830 loops=1)
>
> You might need to review your vacuuming policy.
>
> (However, a full table indexscan isn't going to be particularly fast in
> any case; it's often the case that seqscan-and-sort is the right
> decision. I'm not sure this choice was wrong.)
>
> regards, tom lane
[GENERAL] select distinct, index not used
Hi,

why does the statement take so long? The column 'lieferant' is indexed, but a sequential scan gets done.

foo_egs_foo=# explain analyze SELECT DISTINCT "foo_abc_abc"."lieferant" FROM "foo_abc_abc";
                                  QUERY PLAN
 Unique  (cost=3361064.73..3438087.78 rows=7 width=8) (actual time=127133.435..127491.937 rows=34 loops=1)
   ->  Sort  (cost=3361064.73..3399576.26 rows=15404611 width=8) (actual time=127133.429..127322.101 rows=115830 loops=1)
         Sort Key: lieferant
         ->  Seq Scan on foo_abc_abc  (cost=0.00..468944.11 rows=15404611 width=8) (actual time=0.029..125458.870 rows=115830 loops=1)
 Total runtime: 127609.737 ms
(5 Zeilen)

foo_egs_foo=# \d foo_abc_abc
            Tabelle »public.foo_abc_abc«
  Spalte   |          Typ          | Attribute
 id        | integer               | not null default nextval('foo_abc_abc_id_seq'::regclass)
 ...
 lieferant | character varying(32) | not null
Indexe:
    »foo_abc_abc_pkey« PRIMARY KEY, btree (id)
    »foo_abc_abc_lieferant« btree (lieferant)
..

 PostgreSQL 8.2.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20061115 (prerelease) (SUSE Linux)
(1 Zeile)
[GENERAL] Logfile permissions
Hi,

my logfiles all have this permission:

-rw------- 1 postgres postgres 14841 10. Feb 08:52 postgresql-2009-02-10_00.log

Is it possible that Postgres creates group-readable files? I want to monitor this logfile, but for security reasons I don't want to use the postgres user for this.

I looked at this documentation and could not find a solution: http://www.postgresql.org/docs/8.2/static/runtime-config-logging.html

I use these settings:

log_destination = 'stderr'
redirect_stderr = on

Since the server is Linux, I could use syslog, too. But somehow I want to stay with the defaults (SuSE).

How do you check your logfiles?

Thomas Güttler
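For later readers: newer releases added a log_file_mode setting (from 9.1 on, if I remember correctly); on 8.2 the realistic options are syslog or an external log rotator. A hedged postgresql.conf sketch for a newer server:

```
# postgresql.conf sketch (assumes PostgreSQL 9.1 or later)
log_destination   = 'stderr'
logging_collector = on     # renamed from redirect_stderr in 8.3
log_file_mode     = 0640   # collector-written files become group readable
```

The monitoring account then only needs membership in the postgres group, not the postgres user's credentials.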
Re: [GENERAL] master/detail: master rows without details
Peter Eisentraut wrote:
> Thomas Guettler wrote:
>> My naive first solution was quite slow. Why is it so slow?
>> I guess (select d.master_id from detail as d) gets executed for every
>> master-row. But why? Shouldn't
>> it be possible to calculate it once and then reuse it?
>
> Please show exact schema dumps and your PostgreSQL version.

Version: PostgreSQL 8.2.6 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.2.1 (SUSE Linux)

Schema dump of both tables: historytransaction is the master and historystatement is the detail table.

--
-- PostgreSQL database dump
--

SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;

CREATE TABLE foo_historytransaction (
    id integer NOT NULL,
    date timestamp with time zone NOT NULL,
    changedby_id integer NOT NULL
);
ALTER TABLE public.foo_historytransaction OWNER TO user;

CREATE SEQUENCE foo_historytransaction_id_seq
    INCREMENT BY 1
    NO MAXVALUE
    NO MINVALUE
    CACHE 1;
ALTER TABLE public.foo_historytransaction_id_seq OWNER TO user;
ALTER SEQUENCE foo_historytransaction_id_seq OWNED BY foo_historytransaction.id;
ALTER TABLE foo_historytransaction ALTER COLUMN id SET DEFAULT nextval('foo_historytransaction_id_seq'::regclass);
ALTER TABLE ONLY foo_historytransaction ADD CONSTRAINT foo_historytransaction_pkey PRIMARY KEY (id);
CREATE INDEX foo_historytransaction_changedby_id ON foo_historytransaction USING btree (changedby_id);
ALTER TABLE ONLY foo_historytransaction ADD CONSTRAINT foo_historytransaction_changedby_id_fkey FOREIGN KEY (changedby_id) REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED;

CREATE TABLE foo_historystatement (
    id integer NOT NULL,
    transaction_id integer NOT NULL,
    beleg_id integer NOT NULL,
    operation character varying(7) NOT NULL,
    tablename character varying(40),
    message character varying(1024)
);
ALTER TABLE public.foo_historystatement OWNER TO user;

CREATE SEQUENCE foo_historystatement_id_seq
    INCREMENT BY 1
    NO MAXVALUE
    NO MINVALUE
    CACHE 1;
ALTER TABLE public.foo_historystatement_id_seq OWNER TO user;
ALTER SEQUENCE foo_historystatement_id_seq OWNED BY foo_historystatement.id;
ALTER TABLE foo_historystatement ALTER COLUMN id SET DEFAULT nextval('foo_historystatement_id_seq'::regclass);
ALTER TABLE ONLY foo_historystatement ADD CONSTRAINT foo_historystatement_pkey PRIMARY KEY (id);
CREATE INDEX foo_historystatement_beleg_id ON foo_historystatement USING btree (beleg_id);
CREATE INDEX foo_historystatement_transaction_id ON foo_historystatement USING btree (transaction_id);
ALTER TABLE ONLY foo_historystatement ADD CONSTRAINT foo_historystatement_transaction_id_fkey FOREIGN KEY (transaction_id) REFERENCES foo_historytransaction(id) DEFERRABLE INITIALLY DEFERRED;
[GENERAL] master/detail: master rows without details
Hi,

I have two tables: master and detail. I search all master rows without a detail row.

master: 10269 rows
detail: 29800 rows

Although all three solutions have the same result, the execution time differs very much. My naive first solution was quite slow. Why is it so slow? I guess (select d.master_id from detail as d) gets executed for every master-row. But why? Shouldn't it be possible to calculate it once and then reuse it? Has someone a better statement?

NOT IN

SELECT "master"."id" FROM "master"
WHERE master.id not in (select d.master_id from detail as d);

                                  QUERY PLAN
 Seq Scan on master  (cost=782.68..2661482.52 rows=5132 width=16) (actual time=2520.509..340387.326 rows=43 loops=1)
   Filter: (NOT (subplan))
   SubPlan
     ->  Materialize  (cost=782.68..1226.57 rows=29789 width=4) (actual time=0.005..16.696 rows=9482 loops=10269)
           ->  Seq Scan on detail d  (cost=0.00..606.89 rows=29789 width=4) (actual time=0.009..52.536 rows=29793 loops=1)
 Total runtime: 340387.898 ms
(6 Zeilen)

JOIN

SELECT "master"."id" FROM "master"
LEFT OUTER JOIN "detail" ON ("master"."id" = "detail"."master_id")
WHERE "detail"."id" IS NULL;

                                  QUERY PLAN
 Merge Left Join  (cost=0.00..3061.08 rows=14894 width=16) (actual time=107.521..153.840 rows=43 loops=1)
   Merge Cond: (master.id = detail.master_id)
   Filter: (detail.id IS NULL)
   ->  Index Scan using master_pkey on master  (cost=0.00..486.50 rows=10265 width=16) (actual time=0.024..20.519 rows=10269 loops=1)
   ->  Index Scan using detail_master_id on detail  (cost=0.00..2176.55 rows=29789 width=8) (actual time=0.014..59.256 rows=29793 loops=1)
 Total runtime: 153.974 ms
(6 Zeilen)

NOT EXISTS

SELECT "master"."id" FROM master
WHERE NOT EXISTS (SELECT 1 FROM detail AS d WHERE d.master_id=master.id);

 Seq Scan on master  (cost=0.00..27278.09 rows=5132 width=16) (actual time=0.327..61.911 rows=43 loops=1)
   Filter: (NOT (subplan))
   SubPlan
     ->  Index Scan using detail_master_id on detail d  (cost=0.00..50.16 rows=19 width=0) (actual time=0.004..0.004 rows=1 loops=10269)
           Index Cond: (master_id = $0)
 Total runtime: 62.028 ms
(6 Zeilen)
Re: [GENERAL] Schema Upgrade Howto
Hi,

I found a way to do it. One problem remains: the order of the columns can't be changed. Any chance that Postgres will support this in the future?

My way:

pg_dump -s prod  | strip-schema-dump.py - > prod.schema
pg_dump -s devel | strip-schema-dump.py - > devel.schema

strip-schema-dump.py removes some stuff which I don't care about (owner, comments, ...).

kdiff3 prod.schema devel.schema

You need to create an upgrade script by looking at the diff. But it is not difficult:

-- update-YYYY-MM-DD.sql
begin;
alter table ... add column ...;
...
commit;

Execute on production:

cat update-YYYY-MM-DD.sql | psql

See http://www.djangosnippets.org/snippets/1160/

David Fetter wrote:
> On Thu, Oct 30, 2008 at 10:54:46AM +0100, Thomas Guettler wrote:
>> Hi,
>>
>> is there a schema upgrade howto? I could not find much with google.
>>
>> There is a running DB and a development DB. The development DB
>> has some tables, columns and indexes added.
>
> The only sure way to track such changes is by changing the
> databases--especially in development--only via scripts, all of which
> go into your source code management system.
[GENERAL] Schema Upgrade Howto
Hi,

is there a schema upgrade howto? I could not find much with Google.

There is a running DB and a development DB. The development DB has some tables, columns and indexes added. What is the preferred way to upgrade? I see these solutions:

- pg_dump the production DB. Install the schema only from the dev DB, restore data only from the dump.
- Use alter table.
- Use a tool like apgdiff (never tried it).

I guess all ways are possible. But what do you suggest?

Thomas
Re: [GENERAL] Shopping cart
Andrus wrote:
> Thank you.
>
>> I know the OP wants to stick to a language he knows, but with django
>> (Python), you have a quite good admin site.
>>
>> I guess the OP would be faster with django, even if he needs to learn
>> python on his way.
>
> Major stopper for python usage is lack of hosting.
> Where to find free hosting for this python cart ?
> Will it support PostgreSQL well ?

If you don't need Postgres: Google App Engine. But I guess you won't get free hosting with Postgres support anywhere.

There are a lot of Django-friendly hosting companies: http://code.djangoproject.com/wiki/DjangoFriendlyWebHosts

And a virtual root server does not cost much.

Thomas
Re: [GENERAL] Shopping cart
Aarni wrote:
> As Jonathan said, the trick is not in getting the shop online but in the
> management side of it all. The public shop interface is in fact only a small
> proportion of the system.

Hi,

I know the OP wants to stick to a language he knows, but with Django (Python), you have a quite good admin site. I guess the OP would be faster with Django, even if he needs to learn Python on his way.

Andrus> Probably I don't need transactions.

But you don't want to waste several hours repairing a database which is not consistent. (http://en.wikipedia.org/wiki/Database_transaction)

HTH,
Thomas
Re: [GENERAL] Shopping cart
Hi,

http://www.satchmoproject.com/

But it is written in Python, not PHP or C#.

Thomas

Andrus wrote:
> I'm looking for an open source PHP or C#/mod_mono/Apache shopping cart.
>
> PostgreSQL database should contain items and item pictures, or pictures
> should be stored in separate files.
>
> User can pick items, enter quantities and send order which is stored
> in database.
>
> Any idea where to find source code for this ?
>
> Andrus.