Re: [GENERAL] Hi, ever
On Wed, Oct 04, 2006 at 07:50:12AM +0800, stevegy wrote:
> My testing database was created with initdb -E UTF8, and I guess Solaris cannot handle UTF-8 sorting with LC_COLLATE=zh_CN.GB18030. But I need to prove this.

A locale can only handle one charset, usually the one given by "locale charset". So if it handles sorting in GB18030, then by definition it can't handle UTF-8.

> So I plan to change the locale of my Solaris box. I have installed zh_CN.UTF8 on it, and I am looking for a way to change the Solaris locale. If this change needs a machine reboot, I should do it at an off-peak time for the live application.

The locale is not a global setting. If you set the LANG or LC_ALL variable, it will change the locale of any program run with that environment variable. The default is the C locale.

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] Moving data to another disk
This is the original message from October 2 that didn't appear on the list. I don't know if anyone received it.

From: Clodoaldo Pinto Neto [EMAIL PROTECTED]
Date: Mon, 2 Oct 2006 14:59:43 -0300
To: pgsql-general@postgresql.org
Subject: Moving data to another disk

I want to move the data directory to another disk and maintain the config files in the same place. I will change the data_directory parameter in postgresql.conf. I could just copy everything under /var/lib/pgsql/data to the other disk, but I also want to clean the directory now used, so I need to know for sure which files/directories are data. I have read section 50.1 of the Postgres manual. I suppose all directories under /var/lib/pgsql/data are data directories, and I guess the postmaster.opts, postmaster.pid and PG_VERSION files should stay in the config directory; obviously the *.conf files are config files. Am I right?

Regards, Clodoaldo

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Fwd: Help with function
I did not see this go through.

Chris

-- Forwarded message --
From: Chris Hoover [EMAIL PROTECTED]
Date: Oct 3, 2006 4:49 PM
Subject: Help with function
To: pgsql-general@postgresql.org

I need some help with writing a plpgsql function. I want to return multiple items from the function. How do I do this? Here is my attempt at the function (note, this is a simple example that could obviously be done via a view, but I am trying to learn more about writing plpgsql functions):

create or replace function dba.active_transactions_by_db()
returns setof integer pg_stat_activity.datname%TYPE as
$BODY$
declare
    dbName varchar;
    activeTransactions integer;
    countRec record;
begin
    for countRec in select count(1) as cnt, datname from pg_stat_activity group by datname loop
        return next countRec;
    end loop;
    return countRec;
end;
$BODY$
language plpgsql;
Re: [GENERAL] Fwd: Help with function
On Wed, 04.10.2006 at 9:31:28 -0400, Chris Hoover wrote the following:
> Here is my attempt at the function (note, this is a simple example that could obviously be done via a view, but I am trying to learn more about writing plpgsql functions):
>
> create or replace function dba.active_transactions_by_db()
> returns setof integer pg_stat_activity.datname%TYPE as
> $BODY$
> declare
>     dbName varchar;
>     activeTransactions integer;
>     countRec record;
> begin
>     for countRec in select count(1) as cnt, datname from pg_stat_activity group by datname loop
>         return next countRec;
>     end loop;
>     return countRec;
> end;
> $BODY$
> language plpgsql;

I wrote this for you:

create or replace function active_transactions_by_db(out _cnt int, out _datname text)
returns setof record as
$BODY$
declare
    dbName varchar;
    activeTransactions integer;
    countRec record;
begin
    for countRec in select count(1) as cnt, datname from pg_stat_activity group by datname loop
        _cnt := countRec.cnt;
        _datname := countRec.datname;
        return next;
    end loop;
    return;
end;
$BODY$
language plpgsql;

It works. If you want to learn more about IN and OUT parameters, see:
http://people.planetpostgresql.org/xzilla/index.php?/archives/149-out-parameter-sql-plpgsql-examples.html#extended

HTH, Andreas
--
Andreas Kretschmer
Kontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: - Header)
GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net
Re: [GENERAL] Fwd: Help with function
Just curious, but since which version are these IN/OUT parameters supported?

-- Matthias

-Original Message-
From: [EMAIL PROTECTED] On Behalf Of A. Kretschmer
Sent: Wednesday, October 04, 2006 4:01 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Fwd: Help with function

[ fullquote deleted ]
Re: [GENERAL] Hi, ever
Hi Martijn,

Thank you for the reply. I think the locale setting works differently in my case. Logically, I'd expect the locale environment variables to be used by programs through the system C runtime library's string functions, for comparison and the like.

I logged in as user 'postgres', ran LANG=zh_CN.UTF-8; export LANG, and the locale command then outputs this:

-bash-3.00$ locale
LANG=zh_CN.UTF-8
LC_CTYPE=zh_CN.UTF-8
LC_NUMERIC=zh_CN.UTF-8
LC_TIME=zh_CN.UTF-8
LC_COLLATE=zh_CN.UTF-8
LC_MONETARY=zh_CN.UTF-8
LC_MESSAGES=zh_CN.UTF-8
LC_ALL=

So now I stop Postgres (pg_ctl stop -D ./data_euc_cn) and start it using the UTF-8 encoded data directory (pg_ctl start -D ./data_utf8). But the sort result for the Chinese characters is still wrong, and I notice the sort order differs from the one under LANG=zh_CN.GB18030.

I still don't see a direct way to solve this. Or maybe Postgres takes its locale settings from somewhere other than the current postgres user?

Any suggestions?

-- Steve Yao

-Original Message-
From: Martijn van Oosterhout
Sent: 2006-10-04 17:42:00
To: stevegy
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Hi, ever

[ fullquote deleted ]
Re: [GENERAL] Fwd: Help with function
On Wed, 04.10.2006 at 16:06:40 +0200, [EMAIL PROTECTED] wrote the following:
> Just curious, but since which version are these IN/OUT parameters supported?

I'm not sure, 8.0 or 8.1.

[ silly fullquote deleted ]

Andreas
Re: [GENERAL] Fwd: Help with function
On Wed, 04.10.2006 at 16:06:40 +0200, [EMAIL PROTECTED] wrote the following:
> Just curious, but since which version are these IN/OUT parameters supported?

Since 8.1: http://developer.postgresql.org/pgdocs/postgres/release-8-1.html

E.6.3.8. General Server-Side Language Changes
"Allow SQL and PL/PgSQL functions to use OUT and INOUT parameters (Tom)"

HTH, Andreas
Re: [GENERAL] Hi, ever
On Wed, Oct 04, 2006 at 09:45:26PM +0800, stevegy wrote:
> So now I stop Postgres (pg_ctl stop -D ./data_euc_cn) and start it using the UTF-8 encoded data directory (pg_ctl start -D ./data_utf8). But the sort result for the Chinese characters is still wrong, and I notice the sort order differs from the one under LANG=zh_CN.GB18030.

You need to do more to change the encoding of a database. The encoding is fixed at cluster-creation time, so you need to run initdb again to actually change the locale/encoding.

Hope this helps,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
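Martijn's point, that sort order is a property of the locale the server was initialized under, can be illustrated outside Postgres. This is a hypothetical sketch using Python's locale module with the portable C locale (the zh_CN locales from the thread may not be installed everywhere):

```python
import locale

# Collation is locale-dependent: the C locale compares raw byte values,
# so every uppercase letter sorts before any lowercase one.
locale.setlocale(locale.LC_COLLATE, "C")
words = ["banana", "Apple", "cherry"]
print(sorted(words, key=locale.strxfrm))  # ['Apple', 'banana', 'cherry']
```

Under a linguistic locale such as en_US.UTF-8, the same call would typically yield case-insensitive dictionary order instead.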
Re: [GENERAL] Need help with a function from hell..
Hello Archie,

We approach the problem slightly differently than others. Given an aggregate function comma_list which simply creates a comma-separated list, we use distinct to remove duplicates.

test=# select comma_list(col) from test;
 comma_list
 a, b, a, c
(1 row)

test=# select comma_list(distinct col) from test;
 comma_list
 a, b, c
(1 row)

I've included our function definitions below. Hope that helps,

-Chris

CREATE OR REPLACE FUNCTION list_add(text, text) RETURNS text AS
$BODY$
  select CASE
    WHEN $2 IS NULL OR $2 = '' THEN $1
    WHEN $1 IS NULL OR $1 = '' THEN $2
    ELSE $1 || ', ' || $2
  END;
$BODY$ LANGUAGE 'sql' VOLATILE;

CREATE OR REPLACE FUNCTION list_fin(text) RETURNS text AS
$BODY$
  SELECT CASE WHEN $1 = text('') THEN NULL ELSE $1 END
$BODY$ LANGUAGE 'sql' VOLATILE;

CREATE AGGREGATE comma_list (
  BASETYPE = text,
  SFUNC = list_add,
  STYPE = text,
  FINALFUNC = list_fin
);

On Tuesday 03 October 2006 03:26 pm, [EMAIL PROTECTED] wrote:
> Hi all, I have a small coding problem where my function is becoming, well, too ugly for comfort. I haven't finished it, but you will get the picture below. First a small description of the purpose. I have an aggregate function that takes a string and simply concatenates that string to the previous (internal state) value of the aggregate, example:
>
>   'Hello:World' || ', ' || 'World:Hello'  gives  'Hello:World, World:Hello'
>
> My problem is that I sometimes get the same value before the colon sign, and in those cases I should not add the whole string to the previous value of the aggregate, but extract the value behind the colon and add it to the already existing part which matched the value before the colon, with a slash as a delimiter, example:
>
>   Internal state: Hello:World, World:Hello
>   New value:      Hello:Dolly
>   After function is run: Hello:World/Dolly, World:Hello
>
> So what I am doing is a lot of strpos() and substr() calls (I have previously asked about the speed of the substr() function), but it is beginning to look really awful. It seems very odd that there doesn't exist something else like what I need, but I haven't found anything, although I admit I might not understand all aspects of the PostgreSQL database and what I can do with the SQL in connection to it. Below you will find my unfinished function, but it will show you what I mean when I say ugly. Any help is appreciated. Thanks in advance, Archie
>
> CREATE FUNCTION rarity_concat(text, text) RETURNS text AS '
> DECLARE
>   colon_pos integer;
>   set_str text;
>   rarity_str text;
>   set_exist_pos integer;
>   rarity_exist_str_middle text;
>   rarity_exist_str_end text;
> BEGIN
>   colon_pos := strpos($2, '':'');
>   set_str := substr($2, 1, colon_pos);
>   set_exist_pos := strpos($1, set_str);
>   IF set_exist_pos > 0 THEN
>     rarity_str := substr($2, colon_pos + 2);
>     rarity_exist_str_start := substr($1, 1, set_exist_pos - 1);
>     comma_pos :=
>   ELSE
>     RETURN $1 || '', '' || $2;
>   END IF;
> END' LANGUAGE 'plpgsql';

--
Chris Kratz
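For what it's worth, the merging rule Archie describes (fold a new set:rarity pair into the existing list, joining rarities of a repeated set with a slash) is easier to see in a procedural sketch than in nested strpos()/substr() calls. This is a hypothetical Python illustration of the intended behaviour, not the plpgsql solution itself:

```python
def rarity_concat(state: str, new: str) -> str:
    """Fold 'Set:Rarity' into a comma-separated state string.

    A repeated set name gets the new rarity appended with '/';
    otherwise the whole pair is appended with ', '.
    """
    set_name, rarity = new.split(":", 1)
    parts = state.split(", ") if state else []
    for i, part in enumerate(parts):
        if part.startswith(set_name + ":"):
            parts[i] = part + "/" + rarity
            return ", ".join(parts)
    parts.append(new)
    return ", ".join(parts)

print(rarity_concat("Hello:World, World:Hello", "Hello:Dolly"))
# Hello:World/Dolly, World:Hello
```

The same structure (locate the matching prefix, then either splice or append) is what the plpgsql version has to express with strpos() and substr().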
Re: [GENERAL] Potentially annoying question about date ranges (part 2)
I have a similar question. I receive payments, but not every day (or even every week), and I would like to see how much I receive per week and/or per month. I also need to compare different years, and I would like to see what the (accumulated) sum of the payments is at the end of a given week or month.

And then of course I would like to show this to someone, i.e. make a graph: X-axis 12 months (or 52 weeks), left Y-axis the amounts for the bars, which should be in different colors (per month/bar), right Y-axis the amounts for the totals as a line (so that the bars don't become too flat). If such a combined graph is not possible/feasible, I could live with one plot per year.

I have the usual tools on the Mac, including R, gnuplot and fink. Any help would be appreciated.

el

On 9/30/06 8:34 PM, Jan Danielsson said the following:
> I'm going to assume that this question has been asked a gazillion times, and is in every SQL book known to man and aliens. And I also assume it is in the FAQ. But I'm going to ask anyway. I have a database in which I store my purchases -- mainly for entertainment (you have no idea how dull my life is). In an effort to get some variation in my life, I thought I'd find out how much of my valuable money I waste each day - on average.

-- If you want to email me, replace nospam with el
[GENERAL] Moving data to another disk
I want to move the data directory to another disk and maintain the config files in the same place. I will change the data_directory parameter in postgresql.conf. I could just copy everything under /var/lib/pgsql/data to the other disk, but I also want to clean the directory now used, so I need to know for sure which files/directories are data. I have read section 50.1 of the Postgres manual. I suppose all directories under /var/lib/pgsql/data are data directories, and I guess the postmaster.opts, postmaster.pid and PG_VERSION files should stay in the config directory; obviously the *.conf files are config files. Am I right?

Regards, Clodoaldo
Re: [GENERAL] using float4, I get incorrect value when select
I am a newbie with Postgres, but I think the value is stored correctly: when I select and cast the column to float8, Postgres returns the correct value, and when I insert, no overflow or other error occurs.

Thanks Martijn

Martijn van Oosterhout kleptog@svana.org wrote in message news:[EMAIL PROTECTED]
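The symptom described, a float4 column that looks wrong on plain SELECT yet "correct" when cast to float8, is consistent with ordinary single-precision rounding rather than a storage bug. A small Python sketch of the effect (struct format "f" is IEEE 754 single precision, the same representation as Postgres float4):

```python
import struct

def as_float4(x: float) -> float:
    """Round-trip a Python float (double precision) through 32-bit storage."""
    return struct.unpack("f", struct.pack("f", x))[0]

# 0.1 has no exact single-precision representation: the stored value
# differs from the double-precision 0.1 by roughly 1.5e-09.
print(as_float4(0.1) == 0.1)      # False
print(abs(as_float4(0.1) - 0.1))  # ~1.5e-09
```

Casting back to float8 does not recover precision; it only changes how many digits get displayed.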
Re: [GENERAL] Normal vs Surrogate Primary Keys...
Martijn van Oosterhout wrote:
> On Sun, Oct 01, 2006 at 07:48:14PM -0700, rlee0001 wrote:
> [snip]
>> For example, if I key employee by Last Name, First Name, Date of Hire and Department, I would need to store copies of all this data in any entity that relates to an employee (e.g. payroll, benefits and so on). In addition, if any of these fields change in value, that update would need to cascade to any related entities, which might be perceived as a performance issue if there are many related records.
>
> Err, those fields don't make a natural key since they have no guarantee of uniqueness. You've simply decided that the chance of collision is low enough that you don't care, but for me that's not really good enough for use as a key.

Oh look mommy, a usenet troll. Sweet. I'm bored, so...

Those fields were a contrived example of a key that might be perceived to be too large to use as a key for performance reasons. Are you suggesting that because they are not guaranteed to be unique, no performance problem would exist in using such large and complex fields as keys? Or do you acknowledge that my example holds regardless?

The fact of the matter is, non-abstract (natural) entities have only one perfect candidate key, which is the compound of all their natural attributes. For these entities, a decision must be made by the data modeler, after gathering the requirements of the application, as to the minimum subset of attributes that would never be duplicated (again: within the context of the application). In my employee example, I, as the data modeler, have decided that those four fields constitute a reasonable candidate key based on the requirements of the application.

> Secondly, three of the four fields you suggest are subject to change, so that indeed makes them a bad choice. My definition of key includes unchanged for the lifetime of the tuple.

There is no such rule of normalization or good database logic. You are referring to a technical limitation in some obsolete systems that lack cascading update support.

> In that situation your idea may work well, but that's just a surrogate key in disguise...

I know. But not just in disguise -- invisible. An internal piece of the database, like an index. This is where performance hacks belong, not mixed in with business logic (or in this case business data). Basically I'm introducing the concept of a hidden pseudo-sub-primary-key: the index of relationships. Additionally, the ID could be extracted and used by the application for other purposes, such as transmitting a record pointer via a query string and other internal/technical/non-business-logic activities.

> Have a nice day,

Which one?
[GENERAL] storing transactions
Hi all,

I've been studying the whole evening and don't seem to find an answer: I want to "store" transactions on the server, like views, or, (sorry) as in M$ SQL Server: CREATE OR REPLACE TRANSACTION xyz(). Is this possible with Postgres, or do I have to store all these on the client side?

Thanks heaps,
Chris
[GENERAL] Help with function
I need some help with writing a plpgsql function. I want to return multiple items from the function. How do I do this?

Here is my attempt at the function (note, this is a simple example that could obviously be done via a view, but I am trying to learn more about writing plpgsql functions):

create or replace function dba.active_transactions_by_db()
returns setof integer pg_stat_activity.datname%TYPE as
$BODY$
declare
    dbName varchar;
    activeTransactions integer;
    countRec record;
begin
    for countRec in select count(1) as cnt, datname from pg_stat_activity group by datname loop
        return next countRec;
    end loop;
    return countRec;
end;
$BODY$
language plpgsql;
Re: [GENERAL] rule for inserting into partitions
Marc Evans wrote:
> Hello - I have a schema setup which I want to use partitions with. The intent is to partition based on the created_at column, separating based on the year/month. What I am struggling with is a rule that I can use to automatically determine which partition data should be inserted into, such that I want to derive the table name from the data being inserted. For example, a non-functional rule that shows my goal is:
>
> create or replace rule test_partition as
>     insert into tests do instead
>     insert into (select 'tests_' || (extract(year from NEW.created_at) * 100 + extract(month from NEW.created_at))::text)
>     values (id, created_at, data);
>
> In the above, the sub-expression to derive the table name does not parse. My question is, what could I do instead of the above to achieve the same? Options that I am aware of include:
>
> * Use a list of hard-coded table name and range check combinations. This option doesn't scale over time, e.g. you are always needing to expand the list of table names over time.
> * Modify application code to directly insert into the partition. This is not very friendly to the programmer(s), and is far less flexible over time, should the partitioning logic need to change.
> * Create a function which returns the table name to be used, which the application code then uses to insert directly into. This to some extent resolves the above 2 issues, though requires cooperation of the application programmers, which I'd ideally like to avoid.
>
> Any suggestions? - Marc

You can view (1) as a positive, insofar as you can add/remove rules on a monthly basis to turn on and turn off inserts into monthly partitions as time goes on (i.e. freeze previous partitions). As for (3), you can supply a stored procedure that does the INSERTs, and guarantee cooperation by not giving INSERT permission to the underlying table(s).
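The name-derivation part of Marc's rule (year * 100 + month) is straightforward on its own; what the rule system cannot do is use its result as a table name. As a hypothetical sketch, here is the same naming scheme as an application-side helper (the function name is illustrative, not from the thread):

```python
from datetime import date

def partition_name(created_at: date) -> str:
    """Derive the monthly partition name, mirroring
    'tests_' || year*100 + month from Marc's rule."""
    return f"tests_{created_at.year * 100 + created_at.month}"

print(partition_name(date(2006, 10, 4)))  # tests_200610
```

Application code (or a stored procedure building a dynamic INSERT) could use this value as the target table, which is exactly what a static rewrite rule cannot express.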
[GENERAL] Help required
Hi Friends,

I am using Postgres 7.4.2 on Ubuntu Linux. When I run the psql -l command, it reports the following error:

[EMAIL PROTECTED]:~$ psql -l
psql: could not connect to server: Connection refused
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

I have the backup database in the home folder. Please advise how I should bring the database up.

rgds
vijay
[GENERAL] Run programs within the bin folder
Guys, is there a way to run various programs like pg_config, pg_restore, dropdb etc. from within pgAdmin or from within my Java code? I mean, can I perform database administration from within Postgres, that is, without using literal commands? From within my Java code I can execute a query like "select all" etc., but can I also execute a query which will perform database administration? I mean, when my Java code says "select all" it doesn't have psql in front of it, so why, when I have to back up or do anything else, do I need to use pg_dump and not a general query (which really makes things very complicated programmatically)? Thanks.
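The usual answer is that utilities like pg_dump are separate client programs, not SQL commands, so application code has to launch them as child processes. A hypothetical sketch of that pattern (shown in Python for brevity; in Java the equivalent is ProcessBuilder):

```python
import subprocess

def run_admin_tool(argv):
    """Launch an external admin utility (e.g. ['pg_dump', 'mydb'])
    and return its standard output; raises if the exit code is non-zero."""
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout
```

For example, run_admin_tool(["pg_dump", "--version"]) would work on a machine where pg_dump is on the PATH; the command list here is illustrative.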
[GENERAL] Storing images in PostgreSQL databases (again)
Hello,

I think I'm about to ask a traditional (almost religious) question, but I haven't been able to find a crystal-clear answer in the mailing lists so far. Thus, here is my question: I need to store a large number of images in a PostgreSQL database. In my application, this represents a few hundreds of thousands of images. The size of each image is about 100-200 KB. There is a large turnover in my database, i.e. each image stays about 1 week in the database, then it is deleted. Of course, I need relatively fast access to each one of these images. But more importantly, I need to periodically delete a large number of images in a batch process. Moreover, the disk space used to store the images should be kept as small as possible: precisely, after the aforementioned batch deletions, the table that contains the images should be immediately compacted (I cannot afford the internal use of a "to be deleted" flag, because of the large amount of disk space my database requires).

I have three possible implementation choices in PostgreSQL:

1) Storing the images directly on disk, and storing a URI in the database tables (but this would require a trickier implementation, and ACID-ity would be difficult to ensure -- after all, a database should abstract the internal storage of data, may it be images).

2) Storing the images in a bytea column (but what about the access times, and the batch deletion process?).

3) Storing the images as large objects (this sounds like the best solution to me, but the documentation lacks clarity about the scope of these large objects).

Finally, my question is: which method would you recommend? I thank you much in advance for your answers!
Re: [GENERAL] storing transactions
You can save your transactions in an SQL file and then run that file whenever you need to run those transactions.

Regards
Talha Khan

On 10/3/06, [EMAIL PROTECTED] wrote:
> Hi all, I've been studying the whole evening and don't seem to find an answer: I want to "store" transactions on the server, like views, or, (sorry) as in M$ SQL Server: CREATE OR REPLACE TRANSACTION xyz(). Is this possible with postgres or do I have to store all these at the client side?
> Thanks heaps, Chris
Re: [GENERAL] storing transactions
In this context, what is a transaction?

On 10/04/06 14:32, Talha Khan wrote:
> You can save your transactions in an sql file and then run that file whenever you need to run those transactions.
>
> On 10/3/06, [EMAIL PROTECTED] wrote:
>> Hi all, I've been studying the whole evening and don't seem to find an answer: I want to "store" transactions on the server, like views, or, (sorry) as in M$ SQL Server: CREATE OR REPLACE TRANSACTION xyz(). Is this possible with postgres or do I have to store all these at the client side?

--
Ron Johnson, Jr. Jefferson LA USA

Is common sense really valid? For example, it is common sense to white-power racists that whites are superior to blacks, and that those with brown skins are "mud people". However, that "common sense" is obviously wrong.
Re: [GENERAL] Potentially annoying question about date ranges (part 2)
Eberhard Lisse wrote:
> I receive payments, but not every day (or even every week), and I would like to see how much I receive per week and/or per month. I also need to compare different years. I also would like to see what the (accumulated) sum of the payments is at the end of a given week or month.

Timestamp each transaction, then use a GROUP BY clause to aggregate the numbers however you wish. If the available date field extractions are not sufficient, write a scalar function of your own to accomplish what you need.

--
Guy Rouillier
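Guy's GROUP BY suggestion is the same aggregation one would do client-side before plotting. As a hypothetical illustration, here the payments (invented sample data) are bucketed by (year, month) with a running total, which is exactly the pair of series the combined bar-and-line graph needs:

```python
from collections import defaultdict
from datetime import date

# Invented sample payments: (date, amount)
payments = [
    (date(2006, 1, 5), 100.0),
    (date(2006, 1, 20), 50.0),
    (date(2006, 2, 3), 75.0),
]

# Bucket by (year, month); weekly bucketing would use day.isocalendar()
monthly = defaultdict(float)
for day, amount in payments:
    monthly[(day.year, day.month)] += amount

# Running (accumulated) total in chronological order
running, total = [], 0.0
for month in sorted(monthly):
    total += monthly[month]
    running.append((month, monthly[month], total))

print(running)
# [((2006, 1), 150.0, 150.0), ((2006, 2), 75.0, 225.0)]
```

In SQL the same result comes from GROUP BY on extract(year ...), extract(month ...), with the accumulated sum computed in the client or a plpgsql loop.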
Re: [GENERAL] Storing images in PostgreSQL databases (again)
TIJod wrote:
> I need to store a large number of images in a PostgreSQL database: a few hundreds of thousands of images, each about 100-200 KB, with a large turnover (each image stays about 1 week, then it is deleted), relatively fast access, periodic batch deletions, and immediate reclamation of the disk space afterwards.

[ fullquote trimmed ]

Encode the image in base64 and insert it into a text field. If you use bytea, it needs to be encoded, and the size stored will be more than the base64-encoded form. If you store the image on disk, you need to keep the database and the file system consistent.

leonel
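leonel's size claim is easy to quantify on the base64 side: base64 expands binary data by a factor of 4/3 (plus padding), which is the baseline to weigh against bytea's escaping overhead. A quick check in Python:

```python
import base64

raw = bytes(range(256)) * 10   # 2560 bytes of sample binary data
encoded = base64.b64encode(raw)

# base64 emits 4 output characters for every 3 input bytes
print(len(raw), len(encoded), round(len(encoded) / len(raw), 2))
# 2560 3416 1.33
```

So a 150 KB image becomes roughly 200 KB as base64 text; whether bytea ends up larger depends on the client-side escaping and the server version, which the thread does not pin down.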
Re: [GENERAL] Storing images in PostgreSQL databases (again)
TIJod wrote:
> I need to store a large number of images in a PostgreSQL database. In my application, this represents a few hundreds of thousands of images. The size of each image is about 100-200 KB. There is a large turnover in my database, i.e. each image stays about 1 week in the database, then it is deleted.

I see little value in storing the images in the database. For me that's a general statement (I'm sure others will disagree); but especially in your case, where you have a high volume and only want to store them for a couple of days, why incur all the overhead of putting them in the DB? You can't search on them or sort on them. I would just store them in the file system and put a reference in the DB.

> but this would require a more tricky implementation, and ACID-ity would be difficult to ensure -- after all, a database should abstract the internal storage of data, may it be images.

I can't get excited about this. First, given the amount of overhead you'll be avoiding, checking the return code from storing the image in the file system seems relatively trivial. Store the image first, and if you get a failure code, don't store the rest of the data in the DB; you've just implemented data consistency. That assumes, of course, that the image is the only meaningful data you have, which in most situations is not the case, meaning you'd want to store the rest of the data anyway with a message saying "image not available".

--
Guy Rouillier
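Guy's write-the-file-first discipline can be sketched as a small helper. This is a hypothetical illustration: the metadata "table" is just a dict standing in for the real DB insert, and the function name is invented:

```python
def store_image(image_bytes: bytes, path: str, metadata_table: dict) -> bool:
    """Write the image to the filesystem first; record metadata only on success."""
    try:
        with open(path, "wb") as f:
            f.write(image_bytes)
    except OSError:
        # Failed write: record nothing, so DB and filesystem stay consistent.
        return False
    metadata_table[path] = {"size": len(image_bytes)}
    return True
```

The ordering matters: a crash between the two steps leaves at worst an orphan file on disk (cheap to sweep up), never a DB row pointing at a missing image.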
Re: [GENERAL] Help required
Run the following: pg_ctl -D <data folder path> status to see if you have the db server running or not. It seems to me you don't have the database server running on port 5432. Thanks, --- Shoaib Mir, EnterpriseDB (www.enterprisedb.com) On 10/3/06, Ravindran Vijaykumar R-AVR025 [EMAIL PROTECTED] wrote: Hi Friends, I am using postgres 7.4.2 in Ubuntu linux... when I run the psql -l command, it gives the following error: [EMAIL PROTECTED]:~$ psql -l psql: could not connect to server: Connection refused Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? I have the backup database in the home folder; please advise me how I should bring the database up... rgds vijay
[GENERAL] [Fwd: Realizing the Value of Enterprise Open Source Databases] -- May be of interest -- re Sony Online's use of Enterprise DB
Ziff Davis Media eSeminars: The Online Seminar Standard Realizing the Value of Enterprise Open Source Databases: How Sony Online Entertainment Replaced Oracle with EnterpriseDB October 12, 2006 @ 12:00 p.m. Eastern/9:00 a.m. Pacific Duration: 60 minutes Register Attend Online http://ct.enews.eweek.com/rd/cts?d=186-4561-8-695-100415-529098-0-0-0-1 If you are unable to attend the live event you may still register and will receive an e-mail when the on-demand version becomes available. Find out how leading enterprises are realizing dramatic cost savings and other benefits using open source alternatives to Oracle and other proprietary databases. This eSeminar explores how your company can leverage existing IT assets while benefiting from the new breed of enterprise-class, open source software. Noel Yuhanna, Senior Analyst, Forrester Research, shares market insights and best practices for evaluating, selecting and implementing enterprise open source database solutions in several usage scenarios, including data marts, reporting servers, new applications, and legacy applications. Christopher Yates, Vice President, Technology for Sony Online Entertainment, provides a first-hand look at how his company replaced Oracle with an enterprise open source database, Enterprise DB. As the leader in massive multiplayer online games, the company faced rising IT costs. Hear how Sony Online Entertainment leveraged EnterpriseDB's PostgreSQL foundation and Oracle compatibility to drive down the cost of operating its online games and platform services. Join us for this eSeminar and learn how to: * Evaluate and select enterprise open source database solutions * Identify usage scenarios that leverage the advantages of open source software * Migrate legacy applications from proprietary to open-source database solutions * Quantify real cost savings from open source implementations Featured Speakers Noel Yuhanna, Senior Analyst - Forrester Research, Inc. 
Christopher Yates, Vice President, Technology - Sony Online Entertainment Sponsored by EnterpriseDB Corporation Register Attend Online http://ct.enews.eweek.com/rd/cts?d=186-4561-8-695-100415-529098-0-0-0-1 Please visit www.eSeminarslive.com for a complete list of upcoming Ziff Davis Media eSeminars. If you have already registered for these eSeminars, please ignore this message. Feel free to pass this e-mail along to other colleagues on your team who may have an interest in attending the eSeminar above. If you have problems with your registration, please e-mail: mailto:[EMAIL PROTECTED] ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Storing images in PostgreSQL databases (again)
On Oct 4, 2006, at 12:56 PM, Guy Rouillier wrote: TIJod wrote: I need to store a large number of images in a PostgreSQL database. In my application, this represents a few hundred thousand images. The size of each image is about 100-200 KB. There is a large turnover in my database, i.e. each image stays about 1 week in the database, then it is deleted. I see little value in storing the images in the database. For me that's a general statement (I'm sure others will disagree); but especially in your case, where you have a high volume and only want to store them for about a week. Why incur all the overhead of putting them in the DB? You can't search on them or sort on them. I would just store them in the file system and put a reference in the DB. but this would require a more tricky implementation, and ACID-ity would be difficult to ensure -- after all, a database should abstract the internal storage of data, be it images. I can't get excited about this. First, given the amount of overhead you'll be avoiding, checking the return code from storing the image in the file system seems relatively trivial. Store the image first, and if you get a failure code, don't store the rest of the data in the DB; you've just implemented data consistency. That assumes, of course, that the image is the only meaningful data you have, which in most situations is not the case. Meaning you'd want to store the rest of the data anyway with a message saying image not available. Combine that with an on delete trigger that adds the filename to a deletion queue (within the transaction) and a separate process that runs through the deletion queue occasionally, and you get something quite usable, while still being able to use sendfile() to throw the image over the wire rather than squeezing all that data through the database. Cheers, Steve ---(end of broadcast)--- TIP 6: explain analyze is your friend
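Steve's deletion-queue idea, sketched in miniature. The transaction only records the filename to delete; a separate sweeper removes the files later. The names (deletion_queue, sweep, delete_image_row) are illustrative stand-ins for an ON DELETE trigger plus a cleanup daemon:

```python
import os
import tempfile

deletion_queue = []  # what an ON DELETE trigger would append to, in-transaction

def delete_image_row(image_id, images_table):
    """Remove the DB row; the file itself is only queued for deletion."""
    path = images_table.pop(image_id)
    deletion_queue.append(path)

def sweep():
    """Separate process: occasionally drain the queue and unlink files."""
    while deletion_queue:
        path = deletion_queue.pop()
        try:
            os.remove(path)
        except FileNotFoundError:
            pass  # already gone; safe for a retried sweep

d = tempfile.mkdtemp()
p = os.path.join(d, "1.img")
with open(p, "wb") as f:
    f.write(b"data")
images = {1: p}

delete_image_row(1, images)
assert os.path.exists(p)   # file survives until the sweeper runs
sweep()
assert not os.path.exists(p)
```

The file outliving the row for a moment is harmless; a row pointing at a missing file is the failure mode this ordering avoids.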
Re: [GENERAL] Storing images in PostgreSQL databases (again)
On 10/4/06, TIJod [EMAIL PROTECTED] wrote: I think I'm about to ask a traditional (almost religious) question, but I haven't been able to find a crystal clear answer in the mailing lists so far. I think the key in deciding this, in your case, is your requirement for space reclamation: There is a large turnover in my database, i.e. each image stays about 1 week in the database, then it is deleted. ... But more importantly, I need to periodically delete a large number of images in batch process. Moreover, the disk space that is used on the hard-disk to store the images should be kept as small as possible: Precisely, after the aforementioned batch deletions, the table that contains the images should be immediately compacted (I cannot afford the internal use of a to be deleted flag, because of the large amount of disk space my database requires). If I understand what postgresql is doing, then DELETE will not reclaim the space immediately. What happens internally is not all that different from marking the space as deleted. A VACUUM will allow that space to be reused, (assuming your free space map is big enough), and a VACUUM FULL would be necessary to compress the space away. All of these seem incompatible with your requirements. I agree with another responder who suggested using the filesystem for your images. Jack Orenstein ---(end of broadcast)--- TIP 6: explain analyze is your friend
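Jack's point about space reclamation, as the commands would look in psql (table and column names illustrative; these need a live server, so this is only a sketch):

```sql
DELETE FROM images WHERE added < now() - interval '7 days';
VACUUM images;       -- marks the freed space reusable (needs a big enough free space map)
VACUUM FULL images;  -- actually shrinks the file, but takes an exclusive lock
```

Plain DELETE leaves dead tuples in place; only the VACUUM variants make the space available again, which is why the "immediate compaction" requirement fits the filesystem better.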
[GENERAL] now() and time zone
Hello, I am a bit sorry to come back to that topic but I just cannot get it right. How come that select now() at time zone 'EST' returns "2006-10-04 15:59:26.713623" when it is actually 16:59 on the east coast? Can it be that the server where the PostgreSQL database is located is not properly configured? Many thanks Jean
Re: [GENERAL] now() and time zone
On Wed, Oct 04, 2006 at 02:01:18PM -0700, Jean-Christophe Roux wrote: Hello, I am a bit sorry to come back to that topic but I just cannot get it right. How come that select now() at time zone 'EST' returns 2006-10-04 15:59:26.713623 when it is actually 16:59 on the east coast? Can it be that the server where the PostgreSQL database is located is not properly configured? Maybe there's summer time/daylight saving time involved? I'm unsure if EST is supposed to reflect that. You can say things like: select now() at time zone 'America/California'; Perhaps that gets you something more reasonable? Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. signature.asc Description: Digital signature
Re: [GENERAL] now() and time zone
Yes, you are right; that's what I have just realized. I should be using EDT instead of EST. Sorry for the bother. JCR ----- Original Message ----- From: Martijn van Oosterhout kleptog@svana.org To: Jean-Christophe Roux [EMAIL PROTECTED] Cc: pgsql-general@postgresql.org Sent: Wednesday, October 4, 2006 5:06:42 PM Subject: Re: [GENERAL] now() and time zone On Wed, Oct 04, 2006 at 02:01:18PM -0700, Jean-Christophe Roux wrote: Hello, I am a bit sorry to come back to that topic but I just cannot get it right. How come that select now() at time zone 'EST' returns "2006-10-04 15:59:26.713623" when it is actually 16:59 on the east coast? Can it be that the server where the PostgreSQL database is located is not properly configured? Maybe there's summer time/daylight saving time? I'm unsure if "EST" is supposed to reflect that. You can say things like: select now() at time zone 'America/California'; Perhaps that gets you something more reasonable? Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] now() and time zone
Jean-Christophe Roux [EMAIL PROTECTED] writes: I am a bit sorry to come back on that topic but I just cannot get it right. How comes that select now() at time zone 'EST' returns 2006-10-04 15:59:26.713623 when it is actually 16:59 on the east coast? USA east coast is currently on EDT ... not EST. In PG 8.1 and later you can say AT TIME ZONE 'EST5EDT' or AT TIME ZONE 'America/New_York' to get the behavior I think you are after. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
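Tom's EST-vs-EDT distinction is easy to reproduce outside the database; a small sketch using Python's zoneinfo (which reads the same tz database PostgreSQL uses; requires system tzdata):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# The moment from the thread, expressed in UTC
utc = datetime(2006, 10, 4, 20, 59, tzinfo=timezone.utc)

est = utc.astimezone(ZoneInfo("EST"))               # fixed UTC-5, never observes DST
ny  = utc.astimezone(ZoneInfo("America/New_York"))  # EST or EDT as appropriate

assert est.utcoffset() == timedelta(hours=-5)
assert ny.utcoffset() == timedelta(hours=-4)   # Oct 4 is still daylight time
print(est.strftime("%H:%M"), ny.strftime("%H:%M"))  # 15:59 16:59
```

The one-hour gap in the output is exactly the discrepancy Jean-Christophe saw: 'EST' is a fixed offset, while 'America/New_York' (or 'EST5EDT') tracks daylight saving.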
Re: [GENERAL] Run programs within the bin folder
On Mon, 2006-10-02 at 22:56 -0700, pd wrote: guys, is there a way to run various programs like (pg_config, pg_restore, dropdb etc) from within pgadmin or from within my java code. i mean can i perform database administration from within postgres, that is, without using literal commands. from within my java code i can execute a query like select all etc., but can i also execute a query which will perform database administration. i mean when my java code says select all it doesn't have psql in front of it. so why, when i have to backup or do anything else, do i need to use pg_dump and not a general query (which really makes things very complicated programmatically). First of all, some administration can be easily performed with queries, such as CREATE DATABASE, DROP DATABASE, CREATE LANGUAGE, etc. To work with something like pg_dump, just open a pipe. Generally you want to output to a file anyway. It would be odd to have something like a pg_dump API call that dumped information out as a Java string. If nothing else, you might not be able to store the entire string in memory, so the interface of unix pipes is much better. Regards, Jeff Davis ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Generating synthetic keys on copy
Given a table def something like: create table mytbl (id int8 default nextval('myseq') primary key... I have data I'm extracting from a legacy database. I want to assign newly-generated synthetic keys. I would like to use copy to get the data in. If I put an explicit null in the data file to be imported, pg won't generate a key, right? Default values only get generated when an INSERT doesn't list the column. The choices I see are: - Manually bump the sequence up enough to accommodate the new records, and assign those ids going into the text file, before import. - Put a trigger on the table for the import. - Create the table without the constraints, import with null id values, update id = nextval..., then alter table. Am I missing anything? It's not a huge number of records, so I could perfectly well (and probably will) just generate a text file of individual INSERT statements. I'm just asking to make sure my understanding is correct. -- Scott Ribe [EMAIL PROTECTED] http://www.killerbytes.com/ (303) 722-0567 voice ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Generating synthetic keys on copy
Scott Ribe [EMAIL PROTECTED] writes: I have data I'm extracting from a legacy database. I want to assign newly-generated synthetic keys. I would like to use copy to get the data in. If I put an explicit null in the data file to be imported, pg won't generate a key, right? Right. Instead, specify a column list to the COPY (you are using a PG version new enough to have column lists in COPY, no?) and it will execute the default expression for the column(s) not coming from the data file. If it is an old version, what I'd do is COPY into a temp table whose column set matches the data file, and then use INSERT/SELECT to transfer the data to the permanent table and fill the missing columns. This latter is a good answer anytime you need to do extra data massaging that COPY can't handle. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
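Tom's two approaches, written out as SQL sketches (table and column names are illustrative, borrowed from Scott's mytbl/myseq example; these need a live server):

```sql
-- 8.x-style: name only the columns present in the file; "id" gets its default
COPY mytbl (name, addr) FROM '/tmp/legacy.txt';

-- older versions: stage in a temp table, then fill the key on the way in
CREATE TEMP TABLE staging (name text, addr text);
COPY staging FROM '/tmp/legacy.txt';
INSERT INTO mytbl (id, name, addr)
  SELECT nextval('myseq'), name, addr FROM staging;
```

The staging-table variant is also the escape hatch whenever the data needs massaging that COPY can't do itself.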
Re: [GENERAL] Hi, ever
Hi Martijn, I have changed the locale to LANG=zh_CN.UTF-8; export LANG. First, stop the pgsql and issue the command: initdb -D ./data -E UTF8 --locale=zh_CN.UTF-8 --lc-collate=zh_CN.UTF-8 -U postgres -W so I get a fresh new data cluster in this ./data. Then I pg_ctl start -D ./data, and psql to create the utf8 encoding database: CREATE DATABASE "Recruit" WITH OWNER = recruit ENCODING = 'UTF-8'; -- the owner has been created before this sql pg_restore -d Recruit ./backup/r2.tar psql and \c Recruit \encoding gb18030 -- I cannot read the Chinese characters in client mode without this setting on my solaris 10; maybe there's something about the font mapping. select cname from t_resume order by cname; and then I get the wrong order result. I really want to know why the server side encoding for GB18030 is not supported? Thank you. -- Steve Yao ----- Original Message ----- From: "Martijn van Oosterhout" Sent: 2006-10-04 22:35:41 To: "stevegy" <[EMAIL PROTECTED]> Cc: "[EMAIL PROTECTED]" Subject: Re: [GENERAL] Hi, ever On Wed, Oct 04, 2006 at 09:45:26PM +0800, stevegy wrote: So, now I stop the postgres: pg_ctl stop -D ./data_euc_cn and start it using the UTF-8 encoding data directory: pg_ctl start -D ./data_utf8. But the sort result for the Chinese characters is still wrong. And I notice that the sort result is different from LANG=zh_CN.GB18030. You need to do more to change the encoding of a database. The encoding is fixed at cluster-creation time, so you need to run initdb again to actually change the locale/encoding. Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] Hi, ever
stevegy [EMAIL PROTECTED] writes: I really want to know why the server side encoding for GB18030 is not supported? Because it overlaps the ASCII character set, ie, it has multibyte characters in which some of the bytes don't have the high bit set. That creates too many parsing risks for us to be willing to deal with it inside the backend. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
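Tom's objection can be demonstrated concretely. In GB18030's four-byte sequences, the second and fourth bytes are ASCII digits (0x30-0x39), so a byte-oriented scanner can mistake the middle of a character for plain ASCII; UTF-8 never does this, since every non-first byte has the high bit set. A small check:

```python
# Any supplementary-plane character needs GB18030's 4-byte form
ch = "\U0001F600"

gb = ch.encode("gb18030")
assert len(gb) == 4
# bytes 2 and 4 of a 4-byte GB18030 sequence are ASCII digits --
# multibyte data that looks partly like ASCII, Tom's parsing risk
assert gb[1] in b"0123456789" and gb[3] in b"0123456789"

# UTF-8: every continuation byte has the high bit set, so ASCII
# characters can never appear inside a multibyte sequence
utf8 = ch.encode("utf-8")
assert all(b >= 0x80 for b in utf8[1:])
print(gb.hex(), utf8.hex())
```

This is why GB18030 works as a client encoding (the server converts it at the boundary) but is rejected as a server-side encoding.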