[GENERAL] Polygon data type
Hello all, I know that this is the Postgres forum and not the libpqxx forum (I've been there but no answer yet). I want to retrieve the points of a polygon that are stored in the Postgres db. The contents of the db are:

     polygonid | vertices
    -----------+----------------------------------------------------------------------
             2 | ((1,0),(1.5,-1),(2,-1),(2,1),(1,1),(0,0),(0,2),(3,2),(3,-2),(1,-2))
             4 | ((3,3),(4,4),(5,5))

The vertices column is of type polygon. I'm using the libpqxx library for C++. Suppose I want to retrieve and access the points in the vertices column; I would execute these statements in C++:

    result R = W.exec("SELECT * FROM polygon_tbl");
    for (result::const_iterator r = R.begin(); r != R.end(); ++r)
    {
        int x = 0;
        r[0].to(x);   // to() fills x and returns bool, so call it before printing
        cout << "Polygon ID: " << x << endl;

        // Suppose I would like to print the first point of every polygon,
        // how would I access it?
        cout << "First vertex: " << r[1][0] << endl;   // ???

        // Or suppose I would like to print the first x coordinate of
        // every polygon, how would I access it?
        cout << "First x coordinate: " << r[1][0][0] << endl;   // (am just guessing here...)
    }

Sorry, I'm very new to libpqxx. I've pretty much understood how libpqxx works, but I'm stuck with the polygon type. We actually just need simple storage for our polygons in Postgres, but I'm not sure how to access them using libpqxx. I know I can parse the value as a string using r[1].c_str(), but is this the proper way to do it (parsing the text)? Or is there a container that I should use to store the vertices and access them?

best regards,
mark
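[Editor's note] One practical approach is indeed to take the value via r[1].c_str() and parse the text representation yourself; as far as I know libpqxx has no dedicated container for the geometric types. A minimal sketch of the parsing step, shown in Python for brevity (the same split-on-parentheses logic carries over to C++ string handling):

```python
import re

def parse_polygon(text):
    """Parse PostgreSQL polygon output such as
    '((1,0),(1.5,-1),(2,-1))' into a list of (x, y) float pairs."""
    # Each vertex looks like "(x,y)"; pull out the numeric pairs.
    pairs = re.findall(r'\(([-\d.eE+]+),([-\d.eE+]+)\)', text)
    return [(float(x), float(y)) for x, y in pairs]

vertices = parse_polygon('((1,0),(1.5,-1),(2,-1),(2,1))')
print(vertices[0])      # first vertex
print(vertices[0][0])   # first x coordinate
```

The list-of-tuples result then answers both questions from the post: vertices[0] is the first point, vertices[0][0] its x coordinate.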
Re: [GENERAL] Warm standby: 1 to N
On Tue, Jun 02, 2009 at 02:52:26PM -0400, Bruce Momjian wrote:
> Yaroslav Tykhiy wrote:
> > Hi All,
> > Let's consider the following case: WAL segments from a master have been
> > shipped to N warm standby servers, and now the master fails. Using this
> > or that mechanism, one of the warm standbys takes over and becomes the
> > new master. Now the question is what to do with the other N-1 warm
> > standbys. At the moment of the failure, all N warm standbys were exact
> > copies of the master, so at least in theory the N-1 remaining warm
> > standbys can be fed with WAL segments from the new master. Do you think
> > it will work in practice? Are there any pitfalls?
>
> I think it should work.

Bruce, thank you a lot for the encouragement! I had a chance to go a step further and fail over to a warm standby server without losing a single transaction. Now I'm happy to share my experience with the community.

The initial setup was as follows: server A was the master; servers B and C were warm standbys. The task was to fail over from A to B in a controlled manner whilst keeping C running as a warm standby. Both B and C were initially running with archive_command set as follows:

    archive_command='/some/path/archive.sh "%p" "%f"'

where archive.sh contained just "exit 1", so a real archive script could be atomically mv'ed into place later without losing any WAL segments. (Note that the archiver process is supposed to queue segments and keep retrying as long as the archive command exits with a non-zero status.)

After making sure B and C were keeping up with A, the latter was shut down. Then the last, incomplete WAL segment NNN was manually copied from A (pg_controldata was useful to find its name) to B's WAL shipping spool for the restore script to pick it up. B processed segment NNN and, upon reaching its logical end, exited recovery mode. At this moment all the clients were switched over to B. Now the master, B continued writing its transaction log to segment NNN, filling it up and moving on to the next segment, NNN+1.
(On the one hand, it was quite unexpected that B didn't move on to a new timeline upon exiting recovery mode. On the other hand, had it done so, the whole trick would have been impossible. Please correct me if I'm wrong. Just in case: the PostgreSQL version was 8.0.6. Yes, it's ancient and sorely needs an upgrade.)

Now segment NNN was full and contained both the last transactions from A and the first transactions from B. It was time to ship NNN from B to C in order to bring C in line with B -- without disrupting C's recovery mode. A real archive script was substituted for the dummy script on B. At the next retry the script shipped segment NNN to C, and so the WAL shipping train got going, B->C.

A possible pitfall to watch out for is this: if the WAL shipping spool is shared between B and C, e.g. NFS based, just copying segment NNN to it will make both B and C exit recovery mode. To avoid that, at least in theory, segment NNN can be copied directly into B's pg_xlog, and then B's restore command needs to be signalled to return a non-zero status. According to the manual, the recovery process is supposed to look in pg_xlog as a last resort in case the restore command returned an error status. However, I didn't try that, as I had separate, local WAL spools on B and C.

Hoping all this stuff helps somebody...

Yar

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Postgres 8.3.7 Server Crash: failed to add item to the right sibling in index
> Sounds like a data corruption problem

After reindexing, the problem seems to go away. Is there any way to determine what caused the data corruption in the first place? Thanks.

On Mon, Jul 6, 2009 at 8:29 PM, Tom Lane wrote:
> CM J writes:
> > I am running Postgres 8.3.7 on Windows 2003 with my java
> > application. Of late, the server crashes with the following trace:
> >
> > 2009-07-01 14:47:07.250 IST PANIC: failed to add item to the right
> > sibling in index "mevservices2_ndx"
>
> Sounds like a data corruption problem ... try reindexing that index.
>
> regards, tom lane
Re: [GENERAL] Replication
Does anybody have any experience with Tungsten or Sequoia they would like to share? How about pgcluster or cybercluster? There are a lot of options but not a lot of insight or documentation, really. Also note that the Postgres-R web site says it's not production ready and that it will take months if not years to get there.
Re: [GENERAL] [PERFORM] Postgres Clustering
2009/5/28 Eddy Ernesto Baños Fernández:
> Try Cybercluster

I looked into that. There is one piece of documentation that is less than ten pages long. There is no users group, no listserv, no community that I can discern. Do you have experience with it? If so, could you please share? Thanks.
Re: [GENERAL] singletons per row in table AND locking response
I could have just as easily described it as a table of SERIALs, one per row, instead of per column. :0)

Dennis Gearon

Signature Warning:
EARTH has a Right To Life. I agree with Bolivian President Evo Morales:
# The right to life: "The right for no ecosystem to be eliminated by the irresponsible acts of human beings."
# The right of biosystems to regenerate themselves: "Development cannot be infinite. There's a limit on everything."
# The right to a clean life: "The right for Mother Earth to live without contamination, pollution. Fish and animals and trees have rights."
# The right to harmony and balance between everyone and everything: "We are all interdependent."
See the movie - 'An Inconvenient Truth'
See the movie - 'Syriana'
[GENERAL] singletons per row in table AND locking response
When locking is involved, does a transaction wait for access to a row or table, or does it just fail back to the calling code? Would it be up to my PHP code to keep hammering for access to a row/table, or could a user-defined function do that?

I'd like a certain object in my PHP application to have essentially individual SERIAL rows per object created site-wide, so that only one script instance at a time in Apache has access to a row to read and increment its value. Example (totally random idea, example only): any user on the site can create a group. Each group assigns group_user_ids per site member in the group, starting at zero for each new user joining a group, no matter their site_id.

My choices so far seem to be:

In PHP, use a system file for locking so only one instance of the class gets access to the table.
In PHP, use the transaction failure to hammer the database for one instance of the class.
In PHP, use the transaction failure to hammer the database for each ROW's instance of a class.
In PostgreSQL, use the transaction failure to hammer the database for each ROW's instance of a class.

But maybe there's more to the locking than failed transactions for UPDATE -- some kind of sequential queueing of access to tables or rows for transactions? I'm trying to minimize the interfaces, CPU time, etc. involved in getting access to the table.

Extremely basic SQL for this idea:
CREATE TABLE groups (  -- "group" is a reserved word, so "groups"
    group_id SERIAL NOT NULL,
    CONSTRAINT PK_group PRIMARY KEY (group_id)
);

CREATE TABLE singletons_for_last_grp_mbr_id_issued (
    group_id INTEGER NOT NULL,
    last_grp_mbr_id_issued INTEGER DEFAULT 0 NOT NULL,
    -- original had PRIMARY KEY (counts_per_main, main_id), which doesn't
    -- match any column here; group_id is the intended key
    CONSTRAINT PK_singletons PRIMARY KEY (group_id)
);

CREATE UNIQUE INDEX IDX_One_Group_Row_Only
    ON singletons_for_last_grp_mbr_id_issued (group_id);

ALTER TABLE singletons_for_last_grp_mbr_id_issued
    ADD CONSTRAINT group_singletons_for_last_grp_mbr_id_issued
    FOREIGN KEY (group_id) REFERENCES groups (group_id);

Dennis Gearon
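[Editor's note] On the locking question above: in PostgreSQL's default READ COMMITTED mode, an UPDATE on a row that another transaction holds locked simply waits for the lock and then proceeds -- it does not fail back to the caller -- so no hammering from PHP should be needed. Conceptually the per-group counter behaves like the following Python sketch (a toy model of the queueing behaviour, not the actual server mechanism):

```python
import threading

class GroupCounters:
    """Toy model of the singletons table: one counter per group,
    incremented under a lock so each caller gets a unique id --
    analogous to UPDATE ... SET last = last + 1 ... RETURNING,
    where concurrent transactions queue on the row lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._last = {}

    def next_member_id(self, group_id):
        with self._lock:  # concurrent callers wait here, like a row lock
            self._last[group_id] = self._last.get(group_id, -1) + 1
            return self._last[group_id]

c = GroupCounters()
print(c.next_member_id(1))  # 0
print(c.next_member_id(1))  # 1
print(c.next_member_id(2))  # 0 -- each group counts independently
```

The names here (GroupCounters, next_member_id) are hypothetical, chosen only to mirror the schema above.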
Re: [GENERAL] End of updates info
Chris Velevitch wrote:
> Where do I find information on which versions are no longer being
> updated or will soon cease being updated?

Uh, they stop being updated when they no longer appear on the main Postgres page: http://www.postgresql.org/ However, we are now discussing ways of making this clearer to users.

--
Bruce Momjian http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Re: [GENERAL] Compiling using Visual Studio 2005
Hi,

You may obtain the required files from http://winpg.jp/~saito/pg_work/OSSP_win32/msvc/ (OSSP uuid built for MS VC++, by Hiroshi Saito). Copy uuid.h and uuid.lib from there to the appropriate folders and it should work.

Swati Chande
[GENERAL] ZFS prefetch considered evil?
Hi All,

I have a mid-size database (~300G) used as an email store, running on a FreeBSD + ZFS combo. Its PG_DATA is on ZFS whilst xlog goes to a different FFS disk. ZFS prefetch was enabled by default, and disk time on PG_DATA was near 100% all the time, with transfer rates heavily biased to read: ~50-100M/s read vs ~2-5M/s write. A former researcher, I was going to set up disk performance monitoring to collect some history and see if disabling prefetch would have any effect, but today I had to find out the difference the hard way. Sorry, but that's why the numbers I can provide are quite approximate.

Due to a peak in user activity the server just melted down, with mail data queries taking minutes to execute. As a last resort, I rebooted the server with ZFS prefetch disabled -- it couldn't be disabled at run time in FreeBSD. Now IMAP feels much more responsive; transfer rates on PG_DATA are mostly <10M/s read and 1-2M/s write; and disk time stays way below 100% unless a bunch of email is being inserted.

My conclusion is that although ZFS prefetch is supposed to be adaptive and handle random access more or less OK, in reality there is plenty of room for improvement, so to speak, and for now PostgreSQL performance can benefit from it staying disabled. The same may apply to other database systems as well.

Thanks,
Yar
[GENERAL] Compiling using Visual Studio 2005
Hello Folks,

I am trying to compile the PostgreSQL source code using Visual Studio 2005. I am able to compile psql, but while compiling .\contrib\uuid-ossp\uuid-ossp I encountered the following errors:

    ecpg : warning PRJ0009: Build log could not be opened for writing.
    .\contrib\uuid-ossp\uuid-ossp.c(27): fatal error C1083: Cannot open include file: 'uuid.h': No such file or directory

My config.pl contains:

    uuid=>'C:\pgsqlsrcreqs\uuid', #--with-ossp-uuid

Thanks & Regards,
Vikram
[GENERAL] End of updates info
Where do I find information on which versions are no longer being updated or will soon cease being updated?

Chris
--
Chris Velevitch
Manager - Adobe Platform Users Group, Sydney
m: 0415 469 095
www.apugs.org.au

Adobe Platform Users Group, Sydney
July meeting: Going It Alone
Date: Mon 20th July 6pm for 6:30 start
Details and RSVP on http://groups.adobe.com/posts/1f34697b9b
Re: [GENERAL] Problem search on text arrays, using the overlaps (&&) operator
I don't mean to be pesky; I was just wondering if there is anything else I should try. Should I simply rewrite all queries, changing the form

    WHERE textarr && '{foo, bar}'::text[]

to

    WHERE (textarr && '{foo}'::text[] OR textarr && '{bar}'::text[])

?
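[Editor's note] For anyone following along: && tests whether the two arrays share at least one element, so the split form above is logically equivalent to the original. A rough illustration in Python terms:

```python
def overlaps(a, b):
    """Rough Python analogue of PostgreSQL's array && (overlap)
    operator: true when the two collections share an element."""
    return bool(set(a) & set(b))

textarr = ['foo', 'baz']
# textarr && '{foo, bar}'::text[]
combined = overlaps(textarr, ['foo', 'bar'])
# (textarr && '{foo}'::text[] OR textarr && '{bar}'::text[])
split = overlaps(textarr, ['foo']) or overlaps(textarr, ['bar'])
print(combined, split)  # both True for this sample
```

Whether the rewrite helps performance is a separate question (it mainly changes how the planner can use indexes), but the two predicates select the same rows.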
Re: [GENERAL] Sugestion a db modele like mysql workbrench
Thomas Kellerer schrieb:
> Andreas Wenk wrote on 07.07.2009 09:33:
> > originally posted at BUGS list - now answering at GENERAL list
> >
> > Pavel Golub schrieb:
> > > Hello, Oscar.
> > > First of all you shouldn't post such messages here since this is not
> > > a bug for sure. Take a look at MicroOLAP Database Designer for
> > > PostgreSQL at http://microolap.com/products/database/postgresql-designer/
> > > You will be surprised :)
> >
> > Hi Pavel,
> > since I did some research for such a tool, I am wondering if there is
> > a similar one for non-Windows OSes. I installed this tool and thought
> > it's a good one - but sadly it works only on Windows ;-). This is
> > interesting for people needing more options than pgAdmin offers. Do
> > you know something else?
>
> Try out Power*Architect: http://www.sqlpower.ca/page/architect
>
> Thomas

Hey, this seems to be a nice one ... just installed it on my Mac at home. Really good. I will check it tomorrow on Linux in the office ... Thanks for the tip!

Cheers

Andy
Re: [GENERAL] Passing a table to function
On Mon, Jul 6, 2009 at 7:27 AM, sqlguru wrote:
> In SQL Server 2008, we could pass tables into stored procedures.
>
> CREATE TABLE members -- Only username is required
> (
>     mem_username VARCHAR(25) NOT NULL PRIMARY KEY,
>     mem_email VARCHAR(255),
>     mem_fname VARCHAR(25),
>     mem_lname VARCHAR(25)
> );
>
> CREATE TYPE member_table_type AS TABLE
> (
>     mem_username VARCHAR(25)
> );
>
> CREATE PROCEDURE CreateMembers
>     @members member_table_type READONLY
> AS
>     INSERT INTO [members]
>     SELECT * FROM @members;
>
> To execute this stored procedure, you would do:
>
> DECLARE @members member_table_type;
> INSERT INTO @members (mem_username)
> VALUES ('mem1'), ('mem2'), ('mem3');
> EXECUTE CreateMembers @members;
>
> How would you accomplish this in Postgres 8.4? I know you can pass an
> entire row to a function but that is not what I want. Notice that even
> though the table has many columns (nullable), I'm only passing in the
> username. With the ROW datatype in Postgres, you have to pass in all
> the columns (null if no value).
>
> This is what I have so far in Postgres:
>
> CREATE FUNCTION create_members(IN var_members members)
> BEGIN
>     INSERT INTO members
>     SELECT var_members.mem_username, var_members.mem_email,
>            var_members.mem_fname, var_members.mem_lname;
> END
>
> SELECT create_members(ROW('mem1', NULL, NULL, NULL));

I prefer an explicit cast using the specific type:

    SELECT create_members(('mem1', NULL, NULL, NULL)::members);

> INSERT INTO members
> SELECT var_members.mem_username, var_members.mem_email,
>        var_members.mem_fname, var_members.mem_lname;

This isn't necessary if you are using the table type. Prefer:

    INSERT INTO members SELECT (var_members).*;

Also, in 8.4, if you wanted to pass one or more 'members' records into the function for a multiple create, you could modify or overload the function to take an array of members:

    create or replace function create_members(_members members[]) ...
    INSERT INTO members SELECT (m).* FROM (SELECT unnest(_members) AS m) q;

Also, some style tips:
*) 'IN' is optional... I'd leave it out.
*) The var_ prefix is pretty verbose; I'd prefer '_' or 'i_' (i being "in").
*) Use plurals for arrays, singular for tables.

merlin
Re: [GENERAL] Trying to find a low-cost program for Data migration and ETL
On Tue, Jul 7, 2009 at 1:26 PM, Scott Mead wrote:
> You may have some luck by viewing a similar thread on another mailing list:
> http://www.theserverside.net/discussions/thread.tss?thread_id=54755

That being said, I would highly recommend using http://www.sql-workbench.net/ as a tool to help in migrations. Also, a quick Google for "ETL MySQL" or "ETL Postgres" shows many different companies (both software and consulting) that have the ability to quickly and easily build you a solution.

--Scott
Re: [GENERAL] Trying to find a low-cost program for Data migration and ETL
On Tue, Jul 7, 2009 at 11:48 AM, Rstat wrote:
> Hi, I'm building a database for my company. We are a rather small-size
> book company with a lot of references and still growing.
>
> We have a MySQL database here and are trying to find some good tools to
> use it at its best. Basically we are just starting up the database after
> dealing with Excel: we had a size problem… So I'm trying to find a
> program that will allow us to do two different things: the migration of
> our data from the old system to the new one, and a specialized software
> to perform ETL (Extract, Transform and Load) on our database.
>
> About the price of the tools: if we were one year ago, the accounting
> department would have been pretty relaxed about this. But today we have
> some budget restrictions and therefore need a low-cost tool. So could
> you give me some advice on a good data migration and ETL tool for a low
> cost?
>
> Thanks for your help.

You may have some luck by viewing a similar thread on another mailing list:

http://www.theserverside.net/discussions/thread.tss?thread_id=54755

--Scott
Re: [GENERAL] howto determine rows count to be returned by DECLARE ... SELECT ...
Konstantin Izmailov wrote:
> Dear Community,
> I'm working on an implementation of a virtual grid using DECLARE ...
> SELECT. The advantage of a virtual grid is that it loads only the rows
> that a user wants to see (with FETCH). However, it is not clear how to
> determine the maximum number of rows that the cursor can return. The
> count is necessary for two purposes: rendering the scrollbar and
> supporting jumping to the last rows in the grid. The only obvious
> solution is to execute SELECT COUNT(1) ... before declaring the cursor.
> Is there a better solution?

Not really. The whole point of the cursor is that it doesn't fetch all the rows, and until you've fetched them all you don't know how many there are.

> Can the cursor return the total row count, or is there a way to position
> the cursor at the last row? (Then the number of roundtrips to the server
> could be reduced by 1, and the virtual grid could render the last rows
> in reverse order.)

See the manuals for details on FETCH.

--
Richard Huxton
Archonet Ltd
Re: [GENERAL] Passing a table to function
sqlguru wrote:
> In SQL Server 2008, we could pass tables into stored procedures.
>
> CREATE TABLE members -- Only username is required
> (
>     mem_username VARCHAR(25) NOT NULL PRIMARY KEY,
>     mem_email VARCHAR(255),
>     mem_fname VARCHAR(25),
>     mem_lname VARCHAR(25)
> );
>
> CREATE TYPE member_table_type AS TABLE
> (
>     mem_username VARCHAR(25)
> );
>
> CREATE PROCEDURE CreateMembers
>     @members member_table_type READONLY
> AS
>     INSERT INTO [members]
>     SELECT * FROM @members;

OK - so it's binding "mem_username" from your type to the same-named column in members.

> To execute this stored procedure, you would do:
>
> DECLARE @members member_table_type;
> INSERT INTO @members (mem_username)
> VALUES ('mem1'), ('mem2'), ('mem3');
> EXECUTE CreateMembers @members;
>
> How would you accomplish this in Postgres 8.4? I know you can pass an
> entire row to a function but that is not what I want. Notice that even
> though the table has many columns (nullable), I'm only passing in the
> username.

Well, you defined a type with just the one column.

> With the ROW datatype in Postgres, you have to pass in all the columns
> (null if no value).

I'm guessing you're not puzzled about doing:

    CREATE TYPE member_table_type AS (
        mem_username VARCHAR(25)
    );
    ...
    INSERT INTO members (mem_username) VALUES (var_members.mem_username);
    ...

Perhaps the closest to duplicating the exact way you're doing it in MS SQL 2008 would be passing in a cursor. The code below shows that (although it's not the same as your example).
= begin script =

CREATE TABLE test_tbl(a int4, b text);
INSERT INTO test_tbl VALUES (1,'a');
INSERT INTO test_tbl VALUES (2,'b');
INSERT INTO test_tbl VALUES (3,'c');

CREATE FUNCTION test_cursors(c refcursor) RETURNS integer AS $$
DECLARE
    tot integer;
    r RECORD;
BEGIN
    tot := 0;
    LOOP
        FETCH c INTO r;
        EXIT WHEN NOT FOUND;
        tot := tot + r.a;
    END LOOP;
    RETURN tot;
END;
$$ LANGUAGE plpgsql;

-- DECLARE CURSOR needs to run inside a transaction block
BEGIN;
DECLARE mycursor CURSOR FOR SELECT * FROM test_tbl;
SELECT sum(a) FROM test_tbl;
SELECT test_cursors('mycursor');
COMMIT;

= end =

The other way would be to create a TEMPORARY table, pass its name in, and use EXECUTE inside the plpgsql function to generate the INSERT statement you require.

--
Richard Huxton
Archonet Ltd
Re: [GENERAL] Sugestion a db modele like mysql workbrench
Andreas Wenk wrote on 07.07.2009 09:33:
> originally posted at BUGS list - now answering at GENERAL list
>
> Pavel Golub schrieb:
> > Hello, Oscar.
> > First of all you shouldn't post such messages here since this is not
> > a bug for sure. Take a look at MicroOLAP Database Designer for
> > PostgreSQL at http://microolap.com/products/database/postgresql-designer/
> > You will be surprised :)
>
> Hi Pavel,
> since I did some research for such a tool, I am wondering if there is a
> similar one for non-Windows OSes. I installed this tool and thought it's
> a good one - but sadly it works only on Windows ;-). This is interesting
> for people needing more options than pgAdmin offers. Do you know
> something else?

Try out Power*Architect: http://www.sqlpower.ca/page/architect

Thomas
Re: [GENERAL] Trying to find a low-cost program for Data migration and ETL
Rstat wrote:
> Hi, I'm building a database for my company. We are a rather small-size
> book company with a lot of references and still growing.
>
> We have a MySQL database here and are trying to find some good tools to
> use it at its best.

You do realise this is a PostgreSQL mailing list? You'll find more experience with this question on a MySQL list/forum.

> About the price of the tools: if we were one year ago, the accounting
> department would have been pretty relaxed about this. But today we have
> some budget restrictions and therefore need a low-cost tool. So could
> you give me some advice on a good data migration and ETL tool for a low
> cost?

1. You don't mention what limitations you've hit with mysqladmin etc.

2. If you have a copy of MS Access you could use that as a staging point. That plus some ODBC drivers would be a simple way of going forward. You might even want to replace your Excel spreadsheets with Access and just stop there for a bit.

3. Google is your friend. The results for "mysql etl tools" should give you a list on MySQL's site as the second hit.

Best of luck!

--
Richard Huxton
Archonet Ltd
[GENERAL] Trying to find a low-cost program for Data migration and ETL
Hi, I'm building a database for my company. We are a rather small-size book company with a lot of references and still growing.

We have a MySQL database here and are trying to find some good tools to use it at its best. Basically we are just starting up the database after dealing with Excel: we had a size problem… So I'm trying to find a program that will allow us to do two different things: the migration of our data from the old system to the new one, and a specialized software to perform ETL (Extract, Transform and Load) on our database.

About the price of the tools: if we were one year ago, the accounting department would have been pretty relaxed about this. But today we have some budget restrictions and therefore need a low-cost tool. So could you give me some advice on a good data migration and ETL tool for a low cost?

Thanks for your help.
Re: [GENERAL] Feistel cipher, shorter string and hex to int
Ivan Sergio Borgonovo wrote:
> I don't get the 1366.0 and the 714025.0. Writing 1366.0 isn't going to
> use float arithmetic?

Yes, that's on purpose. Now that you mention it, I think that 1366.0 could be an integer instead, but the division by 714025 and the multiplication by 32767 have to be floating-point operations.

> I'm going to see if using bigint is going to make any difference in speed.

If you're after more speed, using the C language may be the solution.

> Finally... if I were (and I'm not) interested in using 30 bits, should I
> turn that *32767 into a *16383? For shift and bit mask it looks more
> obvious.

To generate a 31-bit (positive) result from a 30-bit input, I would modify the initialisation of the 16-bit blocks so that each of them has the most significant bit set to 0, but without losing any of the 30 bits. The MSB bits have to be kept at 0 throughout the algorithm. So I'd do that:

    l1 := ((value >> 16) & 16383) | (value & 32768);
    r1 := value & 32767;

and indeed reduce the output range of the function:

    r2 := l1 # ((((1366.0*r1+150889)%714025)/714025.0)*16383)::int;

the rest being identical, except that it could now return int (and it would be unsigned) instead of bigint. I haven't tested that variant, though.

> Do you remember the name of this particular F?

Not sure it has a specific name, but you'll find related stuff by searching for "linear congruential generator".

> Everything else seems to need more processing at no real added value.
> Turning the int into base 32 [0-9A-N] with plpgsql looks expensive just
> to shorten the string to 4 chars.

Note that 4 chars would cover a range of 32^4, which is only about one million different values. I think you'd need 7 chars to express up to 2^31 in base 32, because 32^6 < 2^31 < 32^7.

Best regards,
--
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org
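[Editor's note] For readers joining the thread: the function under discussion is a small Feistel network whose round function F is the LCG-based expression above. A Python transcription of the 32-bit variant (the one using *32767), offered as a sketch rather than a verified copy of the plpgsql original:

```python
def pseudo_encrypt(value: int) -> int:
    """Three-round Feistel network over 32-bit ints. Because the round
    function's output is only XORed in, the mapping is a bijection:
    distinct inputs always give distinct outputs, which is what makes
    it useful for scrambling sequential IDs without collisions."""
    l1, r1 = (value >> 16) & 0xFFFF, value & 0xFFFF
    for _ in range(3):  # three Feistel rounds
        # Round function F: an LCG-derived pseudo-random value in 0..32767
        f = int((((1366 * r1 + 150889) % 714025) / 714025.0) * 32767)
        l1, r1 = r1, l1 ^ f
    return (r1 << 16) + l1

# Distinct inputs map to distinct outputs in the 32-bit range:
ids = [pseudo_encrypt(n) for n in range(1000)]
print(len(set(ids)))  # 1000
```

The constants 1366, 150889 and 714025 are the LCG parameters quoted in the thread; writing 1366 as an integer gives the same result here because the intermediate products are exactly representable.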
Re: [GENERAL] Table Partitioning : Having child tables in multiple database servers
Hi,

Ransika de Silva writes:
> The client wants to have the freedom of increasing the processor power
> AND the storage by introducing new database servers.

I think the following document will be of interest:
http://wiki.postgresql.org/wiki/Image:Moskva_DB_Tools.v3.pdf

It presents the architecture Skype is using, and how to achieve what you need with plproxy and londiste and their other tools.
--
dim
Re: [GENERAL] combine multiple row values in to one row
On Tue, Jul 07, 2009 at 08:40:06AM -0700, David Fetter wrote:
> On Tue, Jul 07, 2009 at 01:59:35AM +0430, Lee Harr wrote:
> > Is there a generic way to do this? An aggregate maybe?
>
> The aggregate is called array_agg() and it's in 8.4. You can then
> wrap array_to_string() around it and get pretty formatting, as in:
>
> SELECT idn, array_to_string(array_agg(code),', ') AS codes FROM tbl;

Oops. That should read:

    SELECT idn, array_to_string(array_agg(code),', ') AS codes
    FROM tbl
    GROUP BY idn; /* gotta group by :) */

Cheers,
David.
--
David Fetter http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
Re: [GENERAL] Table Partitioning : Having child tables in multiple database servers
Hello,

Many thanks for the quick response. Why I raised the previous question was to get an answer/solution for the following requirement: the client wants to have the freedom of increasing the processor power AND the storage by introducing new database servers. What is your thought on getting the above-mentioned requirement satisfied?

Thanking you and anticipating a response soon.

@ Grzegorz Jaśkiewicz: Apologies for directly sending you the mail.

Regards,
Ransika

2009/7/7 Grzegorz Jaśkiewicz:
> you should ask the question on the list (press reply-all, not just
> reply to my previous message please).
>
> 2009/7/7 Grzegorz Jaśkiewicz:
> > On Tue, Jul 7, 2009 at 3:32 PM, Ransika de Silva wrote:
> > > Hi all,
> > > We have got the Table Partitioning of PostgreSQL to work on one
> > > database server, where the Parent Table and Inherited Child Tables
> > > are on one (1) database server. The question which we need to get
> > > answered is, whether we can have the Child Tables in separate
> > > database servers?
> >
> > no you can't
> >
> > --
> > GJ

--
Ransika De Silva
SCMAD 1.0, SCJP 1.4,
BSc.(Hons) Information Systems
Re: [GENERAL] combine multiple row values in to one row
On Tue, Jul 07, 2009 at 01:59:35AM +0430, Lee Harr wrote: > > Hi; > > I'm looking for a way to do this: > > > # \d tbl > Table "public.tbl" > Column | Type | Modifiers > +-+--- > idn| integer | > code | text| > # SELECT * FROM tbl; > idn | code > -+-- >1 | A >2 | B >2 | C >3 | A >3 | C >3 | E > (6 rows) > # select idn, magic() as codes FROM tbl; > idn | codes > -+-- >1 | A >2 | B, C >3 | A, C, E > (3 rows) > > > Right now, I use plpgsql functions, but each time I do it > I have to rewrite the function to customize it. > > Is there a generic way to do this? An aggregate maybe? The aggregate is called array_agg() and it's in 8.4. You can then wrap array_to_string() around it and get pretty formatting, as in: SELECT idn, array_to_string(array_agg(code),', ') AS codes FROM tbl; If you're not on 8.4 yet, you can create a similar aggregate with CREATE AGGREGATE. Cheers, David. -- David Fetter http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: david.fet...@gmail.com Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate
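[Archive note] David's remark that pre-8.4 servers can build a similar aggregate with CREATE AGGREGATE can be made concrete. The following is the classic array-accumulator sketch; the `array_accum` name is illustrative (it is not built in), so verify against your server version before relying on it:

```sql
-- A hand-rolled substitute for 8.4's array_agg() on older servers.
-- array_append() grows the state array by one element per input row.
CREATE AGGREGATE array_accum (anyelement) (
    sfunc    = array_append,
    stype    = anyarray,
    initcond = '{}'
);

-- Same query shape as with array_agg():
SELECT idn, array_to_string(array_accum(code), ', ') AS codes
FROM tbl
GROUP BY idn
ORDER BY idn;
```

Note the GROUP BY: without it the aggregate collapses the whole table into one row, which is the mistake corrected upthread.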
[GENERAL] Table Partitioning : Having child tables in multiple database servers
Hi all, We have got the Table Partitioning of PostgreSQL to work on one database server, where the Parent Table and Inherited Child Tables are on one (1) database server. The question which we need to get answered is, whether we can have the Child Tables in separate database servers? Can you please give us your expertise thoughts on this? Thanks & Regards -- Ransika De Silva
Re: [GENERAL] Bug in ecpg lib ?
l...@crysberg.dk wrote: >I have now generate a rather small example where I > experience the problem, attached. It is linked with the > mudflapth library using the commands below. You may have to > change the DBNAME and DBUSER. The delay just before the > pthread_cancel(), i.e. sleep(10), is rather critical for the > problem to appear and you might have to change it to > something less. On some very slow machines I wasn't able to > produce the problem. > [...] > >And this is the output from running the program: > > leif$ LD_LIBRARY_PATH=/usr/local/Packages/gcc-4.4.0/lib/ ./crashex > Couldn't open somen...@localhost:5432 > 2+2=0. > *** glibc detected *** /home/leif/tmp/crashex: free(): > invalid pointer: 0x081f3958 *** [...] > Aborted (core dumped) > > > leif$ gdb ~/tmp/crashex core.30920 [...] > #0 0xe405 in __kernel_vsyscall () > (gdb) bt > #0 0xe405 in __kernel_vsyscall () > #1 0xf7bef335 in raise () from /lib32/libc.so.6 > #2 0xf7bf0cb1 in abort () from /lib32/libc.so.6 > #3 0xf7c286ec in ?? () from /lib32/libc.so.6 > #4 0xf7c30615 in ?? () from /lib32/libc.so.6 > #5 0xf7c34080 in free () from /lib32/libc.so.6 > #6 0xf7d39061 in free (buf=0x81f3958) at > ../../../libmudflap/mf-hooks1.c:241 > #7 0xf7e3fb5c in ecpg_sqlca_key_destructor () from > /lib32/libecpg.so.6 > #8 0xf7d1bbb0 in __nptl_deallocate_tsd () from /lib32/libpthread.so.0 > #9 0xf7d1c509 in start_thread () from /lib32/libpthread.so.0 > #10 0xf7c9d08e in clone () from /lib32/libc.so.6 > (gdb) I ran your sample with gdb against PostgreSQL 8.4, and ecpg_sqlca_key_destructor() was called only once, for a valid pointer, one that was previously allocated with malloc(). Could you check if ecpg_sqlca_key_destructor() is called more than once if you run the sample? Are you aware that in your sample run the connection attempt failed? It does not matter, ecpg should do the right thing anyway. 
What I notice about your program is that you connect to the database in the main thread, then start a new thread and use the connection in that new thread. I don't know for certain, but since ecpg keeps a thread-specific sqlca, I'd expect that this could cause problems. Indeed, I find with the debugger that in your sample sqlca is allocated and initialized twice: once when the database connection is attempted, and once when the SQL statement is run. I think that the "good" way to do it would be: - start a thread - connect to the database - do work - disconnect from the database - terminate the thread Maybe somebody who knows more about ecpg can say whether what you are doing should work or not. Yours, Laurenz Albe
Re: [GENERAL] Sugestion a db modele like mysql workbrench
Pavel Golub schrieb: Hello, Andreas. You wrote: AW> originally posted at BUGS list - now answering at GENERAL list AW> Pavel Golub schrieb: Hello, Oscar. First of all you shouldn't post such messages here since this is not a bug for sure. Take a look at MicroOLAP Database Designer for PostgreSQL at http://microolap.com/products/database/postgresql-designer/ You will be surprised :) AW> Hi Pavel, AW> since I did some research for such a tool, I am wondering if there is a similar one for AW> a non-Windows OS. I installed this tool and thought it's a good one - but sadly it's AW> working only on Windows ;-). This is interesting for people needing more options than given AW> in pgAdmin. AW> Do you know something else? The latest release of MicroOLAP Database Designer for PostgreSQL is specially tested for Wine support. Thus all you need is to install the latest version of Wine. Some details may be found here: http://pgolub.wordpress.com/2009/05/27/pgmdd-1-2-8-wine-out-of-the-box/ AW> Thanks a lot & Cheers AW> Andy You wrote: OMAE> The following bug has been logged online: OMAE> Bug reference: 4903 OMAE> Logged by: Oscar Miguel Amezcua Estrella OMAE> Email address: obel...@gmail.com OMAE> PostgreSQL version: 8.4 OMAE> Operating system: Opensuse 11.1 OMAE> Description:Sugestion a db modele like mysql workbrench OMAE> Details: OMAE> hiyas im newuser for postgresql and we try to find a software like mysql OMAE> workbrench to modelate data bases and we not found a good program for this OMAE> work and we like to sugestion this one to make a software like mysql OMAE> workbrench that can help to migrate more people from mysql to postgreSQL Hi Pavel, thanks a lot for the info ... Cheers Andy
Re: [GENERAL] Normalize INTERVAL ouput format in a db driver
Albe Laurenz wrote: Sebastien FLAESCH wrote: According to the doc, INTERVAL output format is controlled by SET intervalstyle. I am writing an interface/driver and need a solution to fetch/convert interval values independently from the current format settings... I could force my driver to implicitly set the intervalstyle to iso_8601, but I would prefer to leave this in the hands of the programmer... Imagine you have to write an ODBC interface/driver with libpq that must support the SQLINTERVAL C structure; how would you deal with PostgreSQL intervals? Is it possible to query the current intervalstyle? You can use "SHOW intervalstyle" to get the current setting. Would it be an option to use the to_char(interval, text) function to convert the interval value to a string you can understand? That would make you independent of server parameters. Another way to go is to retrieve the interval values in binary format. That would make you dependent on the setting of "integer_datetimes", but it might still be easier. Yours, Laurenz Albe Thanks for the tip, I found SHOW after sending my initial mail. I will however go with forcing a given intervalstyle for now; this is not critical (I cannot use the internal binary format). My main concern now is to describe properly the type of interval used in a SELECT list, with the PQfmod() and PQfsize() libpq functions; I need some doc/spec here... Thanks a lot! Seb
Re: [GENERAL] Feistel cipher, shorter string and hex to int
On Tue, 07 Jul 2009 12:07:48 +0200 "Daniel Verite" wrote: > Ivan Sergio Borgonovo wrote: > > > r2 := l1 # ((((1366.0*r1+150889)%714025)/714025.0)*32767)::int; > > -- but what about this? where does it come from? > > This function: > (1366.0*r1+150889)%714025 > implements a known method to get random numbers. I think it comes > from "Numerical Recipes" by William Press. > Note that the algorithm is not tied to that function, it could be > replaced by something else (especially one that involves a private > key), but it has to be carefully chosen or the end result won't > look so random. I don't get the 1366.0 and the 714025.0. Isn't writing 1366.0 going to use float arithmetic? Is it there just to avoid an overflow? I'm going to see if using bigint makes any difference in speed. Finally... if I were (and I'm not) interested in using 30 bits, should I turn that *32767 into a *16383? For shift and bit mask it looks more obvious. Do you remember the name of this particular F? Since I don't see anything other than to_hex that could "shorten" an int to a string easily and quickly... it seems that returning a signed integer is OK. Everything else seems to need more processing at no real added value. Turning the int into base 32 [0-9A-N] with plpgsql looks expensive just to shorten the string to 4 chars. Thanks. -- Ivan Sergio Borgonovo http://www.webthatworks.it
Re: [GENERAL] combine multiple row values in to one row
Try this. select idn, array_to_string(array(select code from tbl t2 where t2.idn = t1.idn order by code), ', ') as codes from tbl t1 group by idn order by idn Matthew Hartman Programmer/Analyst Information Management, ICP Kingston General Hospital (613) 549- x4294 > -Original Message- > From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general- > ow...@postgresql.org] On Behalf Of Lee Harr > Sent: Monday, July 06, 2009 5:30 PM > To: pgsql-general@postgresql.org > Subject: [GENERAL] combine multiple row values in to one row > > Hi; > > I'm looking for a way to do this: > > > # \d tbl > Table "public.tbl" > Column | Type | Modifiers > +-+--- > idn| integer | > code | text| > # SELECT * FROM tbl; > idn | code > -+-- >1 | A >2 | B >2 | C >3 | A >3 | C >3 | E > (6 rows) > # select idn, magic() as codes FROM tbl; > idn | codes > -+-- >1 | A >2 | B, C >3 | A, C, E > (3 rows) > > > Right now, I use plpgsql functions, but each time I do it > I have to rewrite the function to customize it. > > Is there a generic way to do this? An aggregate maybe? > > > Thanks for any help.
Re: [GENERAL] Sugestion a db modele like mysql workbrench
Hello, Andreas. You wrote: AW> originally posted at BUGS list - now answering at GENERAL list AW> Pavel Golub schrieb: >> Hello, Oscar. >> >> First of all you shouldn't post such messages here since this is not a >> bug for sure. >> >> Take a look at MicroOLAP Database Designer for PostgreSQL at >> http://microolap.com/products/database/postgresql-designer/ >> >> You will be surprised :) AW> Hi Pavel, AW> since I did some research for such a tool, I am wondering if there is a similar one for AW> a non-Windows OS. I installed this tool and thought it's a good one - but sadly it's AW> working only on Windows ;-). This is interesting for people needing more options than given AW> in pgAdmin. AW> Do you know something else? The latest release of MicroOLAP Database Designer for PostgreSQL is specially tested for Wine support. Thus all you need is to install the latest version of Wine. Some details may be found here: http://pgolub.wordpress.com/2009/05/27/pgmdd-1-2-8-wine-out-of-the-box/ AW> Thanks a lot & Cheers AW> Andy >> You wrote: >> >> OMAE> The following bug has been logged online: >> >> OMAE> Bug reference: 4903 >> OMAE> Logged by: Oscar Miguel Amezcua Estrella >> OMAE> Email address: obel...@gmail.com >> OMAE> PostgreSQL version: 8.4 >> OMAE> Operating system: Opensuse 11.1 >> OMAE> Description:Sugestion a db modele like mysql workbrench >> OMAE> Details: >> >> OMAE> hiyas im newuser for postgresql and we try to find a software like >> mysql >> OMAE> workbrench to modelate data bases and we not found a good program for >> this >> OMAE> work and we like to sugestion this one to make a software like mysql >> OMAE> workbrench that can help to migrate more people from mysql to >> postgreSQL -- With best wishes, Pavel mailto:pa...@gf.microolap.com
Re: [GENERAL] An example needed for Serializable conflict...
Hi durumdara, On Tuesday 07 July 2009 12:10:52 durumdara wrote: > Another question: if I use only "SELECTs", can I get some Serialization > Error? No. > For example: > I need a report tool that can show the actual state of the business. > Because I use value-dependent logic, I MUST use a consistent state to > preserve the equality of many values (sums, counts, etc.). > So some (Read Committed) threads update/delete/insert (modify sums in) > rows, but this report tool only READS the tables, and only works with temp > tables. > Can I get some S. error from this transaction? > Or can I get some error from modifier threads if this (serializer > report) thread actually reads the rows that they want to modify? You can get errors between the writers, but not between a writer and a reader. It probably would be a good idea to read the documentation about MVCC ( http://www.postgresql.org/docs/current/static/mvcc.html ) to understand the possibilities/constraints better. Andres
Re: [GENERAL] An example needed for Serializable conflict...
Hi! Thanks for your help! Another question: if I use only "SELECTs", can I get some Serialization Error? For example: I need a report tool that can show the actual state of the business. Because I use value-dependent logic, I MUST use a consistent state to preserve the equality of many values (sums, counts, etc.). So some (Read Committed) threads update/delete/insert (modify sums in) rows, but this report tool only READS the tables, and only works with temp tables. Can I get some S. error from this transaction? Or can I get some error from modifier threads if this (serializer report) thread actually reads the rows that they want to modify? This is the main question about it. Thanks for your read/answer! dd On 2009.07.07. 11:36, Albe Laurenz wrote: Durumdara wrote: Please send me an example (pseudo-code) for a Serializable conflict. And I want to know, if possible: if more transactions only read the tables in Serializable mode, and one or more others write to them, can I get some conflicts in a read operation? You get a serialization conflict if you try to modify a row in a serializable transaction T1 that has been changed by a second transaction T2 after T1 started. Sample 1: T1: START TRANSACTION ISOLATION LEVEL SERIALIZABLE; T1: SELECT * FROM t; id | val +-- 1 | test (1 row) T2: DELETE FROM t WHERE id=1; T1: UPDATE t SET val='new' WHERE id=1; ERROR: could not serialize access due to concurrent update Sample 2: T1: START TRANSACTION ISOLATION LEVEL SERIALIZABLE; T1: SELECT * FROM t; id | val +-- 1 | test (1 row) T2: UPDATE t SET val=val WHERE id=1; T1: DELETE FROM t; ERROR: could not serialize access due to concurrent update Yours, Laurenz Albe
Re: [GENERAL] Feistel cipher, shorter string and hex to int
Ivan Sergio Borgonovo wrote: r2 := l1 # ((((1366.0*r1+150889)%714025)/714025.0)*32767)::int; -- but what about this? where does it come from? This function: (1366.0*r1+150889)%714025 implements a known method to get random numbers. I think it comes from "Numerical Recipes" by William Press. Note that the algorithm is not tied to that function; it could be replaced by something else (especially one that involves a private key), but it has to be carefully chosen or the end result won't look so random. Best regards, -- Daniel PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org
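[Archive note] For readers finding this thread later: the fragment Ivan quoted is the round step of the pseudo_encrypt() Feistel sketch that circulates in the PostgreSQL community. Reconstructed here from memory of that widely shared snippet, so treat it as a sketch to verify rather than an authoritative implementation:

```sql
-- A 3-round Feistel network over the two 16-bit halves of a 32-bit int.
-- The round function F (the LCG-based expression) need not be invertible:
-- because every round uses the same F and the halves are swapped on
-- output, applying the function twice returns the original input.
CREATE OR REPLACE FUNCTION pseudo_encrypt(value int) RETURNS int AS $$
DECLARE
  l1 int; l2 int; r1 int; r2 int; i int := 0;
BEGIN
  l1 := (value >> 16) & 65535;
  r1 := value & 65535;
  WHILE i < 3 LOOP
    l2 := r1;
    r2 := l1 # ((((1366.0 * r1 + 150889) % 714025) / 714025.0) * 32767)::int;
    l1 := l2;
    r1 := r2;
    i := i + 1;
  END LOOP;
  RETURN ((r1 << 16) + l1);
END;
$$ LANGUAGE plpgsql STRICT IMMUTABLE;
```

Since only F varies between variants, Ivan's question about shorter outputs amounts to shrinking the halves (e.g. 15-bit halves for a 30-bit domain) and scaling the * 32767 factor to match.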
Re: [GENERAL] An example needed for Serializable conflict...
Durumdara wrote: > Please send me an example (pseudo-code) for Serializable conflict. > And I wanna know, if possible, that if more transactions only > read the tables in Serializable mode, and one or others write > to it, can I got some conflicts in read operation? You get a serialization conflict if you try to modify a row in a serializable transaction T1 that has been changed by a second transaction T2 after T1 started. Sample 1: T1: START TRANSACTION ISOLATION LEVEL SERIALIZABLE; T1: SELECT * FROM t; id | val +-- 1 | test (1 row) T2: DELETE FROM t WHERE id=1; T1: UPDATE t SET val='new' WHERE id=1; ERROR: could not serialize access due to concurrent update Sample 2: T1: START TRANSACTION ISOLATION LEVEL SERIALIZABLE; T1: SELECT * FROM t; id | val +-- 1 | test (1 row) T2: UPDATE t SET val=val WHERE id=1; T1: DELETE FROM t; ERROR: could not serialize access due to concurrent update Yours, Laurenz Albe
Re: [GENERAL] Efficiently move data from one table to another, with FK constraints?
Rob W wrote: > I am using COPY to bulk load large volumes (i.e. multi GB > range) of data to a staging table in a PostgreSQL 8.3. For > performance, the staging table has no constraints, no primary > key, etc. I want to move that data into the "real" tables, > but need some advice on how to do that efficiently. > > Here's a simple, abbreviated example of tables and relations > I'm working with (in reality there are a lot more columns and > foreign keys). > > /* The raw bulk-loaded data. No indexes or constraints. */ > CREATE TABLE log_entry ( > req_time TIMESTAMP NOT NULL, > url TEXT NOT NULL, > bytes INTEGER NOT NULL > ); > > /* Where the data will be moved to. Will have indexes, etc */ > CREATE TABLE request ( > id BIGSERIAL PRIMARY KEY, > req_time TIMESTAMP WITH TIME ZONE NOT NULL, > bytes INTEGER NOT NULL, > fk_url INTEGER REFERENCES url NOT NULL > ); > > CREATE TABLE url ( > id SERIAL PRIMARY KEY, > path TEXT UNIQUE NOT NULL > ); > > Is there a way to move this data in bulk efficiently? > Specifically, I'm wondering how to handle the foreign keys? > The naive approach is: > > 1) For each column that is a foreign key in the target table, > do INSERT ... SELECT DISTINCT ... to copy all the values > into the appropriate child tables. > 2) For each row in log_entry, do a similar insert to insert > the data with the appropriate foreign keys. > 3) delete the contents of table log_entry using TRUNCATE > > Obviously, this would be very slow when handling tens of > millions of records. Are there faster approaches to solving > this problem? How about something like this: INSERT INTO url (path) (SELECT DISTINCT url FROM log_entry); Then INSERT INTO request (req_time, bytes, fk_url) (SELECT l.req_time, l.bytes, u.id FROM log_entry AS l JOIN url AS u ON (l.url = u.path)); I didn't test it, so there may be syntax errors and stuff. But I doubt that it can be done much more efficiently. Creating an index on log_entry(url) *might* improve performance. Check with EXPLAIN.
The TRUNCATE should not be a very expensive operation. Yours, Laurenz Albe
Re: [GENERAL] Out of memory error
Paul Smith wrote: It's actually ST_Intersects from PostGIS (some of the PostGIS function names are still recognized without the leading "ST_"). Not for too much longer - these have been deprecated for a while ;) http://postgis.refractions.net/documentation/manual-1.3/ch06.html#id2574404 # select postgis_version(); postgis_version --- 1.3 USE_GEOS=1 USE_PROJ=1 USE_STATS=1 Can you do a "SELECT postgis_full_version()"? If you're running anything less than 1.3.6, then the first thing you must do is upgrade. 1.3.6 has several important memory-leak fixes IIRC. HTH, Mark. -- Mark Cave-Ayland - Senior Technical Architect PostgreSQL - PostGIS Sirius Corporation plc - control through freedom http://www.siriusit.co.uk t: +44 870 608 0063
Re: [GENERAL] Feistel cipher, shorter string and hex to int
Ivan Sergio Borgonovo wrote: I need shorter values (because they should be easier to type. To be sure to modify the function in a sensible way I really would appreciate some pointer. Still if it return What exactly is your desired range of output values? Best regards, -- Daniel PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org
Re: [GENERAL] Normalize INTERVAL ouput format in a db driver
Sebastien FLAESCH wrote: > According to the doc, INTERVAL output format is controlled by > SET intervalstyle. > > I am writing an interface/driver and need a solution to > fetch/convert interval > values independently from the current format settings... > > I could force my driver to implicitly set the intervalstyle > to iso_8601, but I > would prefer to leave this in the hands of the programmer... > > Imagine you have to write an ODBC interface/driver with > libpq that must support > the SQLINTERVAL C structure, how would you deal with > PostgreSQL intervals? > > Is it possible to query the current intervalstyle? You can use "SHOW intervalstyle" to get the current setting. Would it be an option to use the to_char(interval, text) function to convert the interval value to a string you can understand? That would make you independent of server parameters. Another way to go is to retrieve the interval values in binary format. That would make you dependent on the setting of "integer_datetimes", but it might still be easier. Yours, Laurenz Albe
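[Archive note] To make the two suggestions concrete, here is a minimal session sketch; output depends on server version and settings, so this is illustrative only:

```sql
-- Option 1: pin the style so the driver always parses one known format.
SET intervalstyle = 'iso_8601';
SHOW intervalstyle;

-- Option 2: stay independent of intervalstyle by formatting explicitly
-- with to_char(interval, text).
SELECT to_char(interval '1 day 02:03:04', 'DD HH24:MI:SS');
```

Option 1 is what the thread settles on (forcing a given intervalstyle); option 2 trades a per-query formatting call for full independence from server parameters.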
[GENERAL] Sugestion a db modele like mysql workbrench
originally posted at BUGS list - now answering at GENERAL list Pavel Golub schrieb: Hello, Oscar. First of all you shouldn't post such messages here since this is not a bug for sure. Take a look at MicroOLAP Database Designer for PostgreSQL at http://microolap.com/products/database/postgresql-designer/ You will be surprised :) Hi Pavel, since I did some research for such a tool, I am wondering if there is a similar one for a non-Windows OS. I installed this tool and thought it's a good one - but sadly it works only on Windows ;-). This is interesting for people needing more options than given in pgAdmin. Do you know something else? Thanks a lot & Cheers Andy You wrote: OMAE> The following bug has been logged online: OMAE> Bug reference: 4903 OMAE> Logged by: Oscar Miguel Amezcua Estrella OMAE> Email address: obel...@gmail.com OMAE> PostgreSQL version: 8.4 OMAE> Operating system: Opensuse 11.1 OMAE> Description:Sugestion a db modele like mysql workbrench OMAE> Details: OMAE> hiyas im newuser for postgresql and we try to find a software like mysql OMAE> workbrench to modelate data bases and we not found a good program for this OMAE> work and we like to sugestion this one to make a software like mysql OMAE> workbrench that can help to migrate more people from mysql to postgreSQL
Re: [GENERAL] Performance problem with low correlation data
> But that would be a different query -- there's no > restrictions on the > t values in this one. There is a restriction on the t values: select * from idtable left outer join testinsert on id=ne_id where groupname='a group name' and time between $a_date and $another_date > Have you tried something using IN or EXISTS instead of a > join? I still get nested loop join on the ne_id column... > The > algorithm you describe doesn't work for the join because it > has to > produce a record which includes the matching group columns. Yeah, I thought about that. Basically I guess the "perfect" algorithm would be something like: Hash Join < this is needed to join values from both relations -> Bitmap Heap Scan for each id found in idtable where groupname='a group name' BitmapOr BitmapIndexScan using ne_id and time between $a_date and $another_date -> select id from idtable where groupname='a group name' > Actually I wonder if doing a sequential scan with a hash > join against > the group list wouldn't be a better option. The table is pretty big (60M rows), sequential scans are the reason why my queries are so slow: since the correlation on the ne_id col is so bad, the planner chooses seq scans when dealing with most of the "t" values, even if the number of "ne_id" values is low. 
For the moment I've found this solution: whenever too many "t" are selected, which would lead the planner towards a seq scan (or a very poor bitmap index scan in case I disable seq scans), I create a temporary table: create temporary table alldata as select * FROM generate_series(mydatestart, mydateend, '15 minutes'::interval) as t cross join idtable where groupname='a group name' order by t,id; analyze alldata; select * from alldata left outer join testinsert using (ne_id,t); basically I'm doing what I'd like PG to do: since the correlation on the "t" col is good, and the correlation on the "id" col is bad, query the index in the right order: "t" first, then "id" (given by the "order by t,id" on the creation of the temp table). I would like PG to do that for me. Since it knows an index scan looping on ne_id would be wrong, I'd like it to create a "materialized" table where data is ordered by "t" first instead of going for the seq scan. This would lead to a 10x-100x improvement in query time.