Re: [GENERAL] What is your favorite front end for user interaction to postgresql databases?
On 09.05.2006, at 0:33, Karen Hill wrote: What is your favorite front end for end users to interact with your postgresql db? Is it java, .net, web apache + php, MS-Access, ruby on rails? Why is it your favorite? Which would you recommend for end users on multiple OSes?

You mean what technology to use to build business logic?

advertisement

It just depends on your needs. I prefer to use Apple WebObjects for building web frontends to the db. Why? Because it is extremely powerful, free if you are able to develop on Mac OS X (it comes with the developer tools), and even if you have to buy a couple of PowerBooks and Xserves it may be cheaper than just the software license for one of the other big Java application server technologies. You can deploy it to every Java platform because it's just a Java application, but it's built on and for Mac OS X.

It completely frees you from thinking in tables, joins, selects and other SQL stuff - well, at least nearly completely. I can switch my whole application, including a couple of hundred MB of database content, in half a day (mostly the time to transfer the data) - I did this two months ago, from another dbms to PostgreSQL.

It's just as cool as Ruby on Rails right now, not as fat and slow in development as most J2EE servers, you can deploy it everywhere, I can develop on Mac OS X (yes, that's a reason for me, because it speeds up my work by a good percentage), and it is extremely fast for development once you have the concepts and a good set of your own components in stock that you are used to. It's pure Java, so I can easily include third-party tools like Lucene or others. It uses one connection per application instance, which keeps the number of concurrent db connections low. It also has more than one working approach for multi-language / internationalized applications.

And, most important, it has a tool that enables me to do alone in days what other teams do in weeks with other J2EE tools: DirectToWeb - a rule- and template-driven dev platform. You write the db schema, lots of rules for how list, inspect, edit and search pages should look, and some basic HTML for wrappers, and you can create an administration application faster than with anything I've seen so far - including Ruby on Rails, PHP and other Java tools. But, to make this clear: the barrier to entry is VERY high. I had a couple of years of WebObjects development behind me when I started using D2W, and it was good to have that background.

You can also build WebServices and rich Java client apps, you can use Ajax for rich web applications, and it is a tool that has proved its scalability, reliability and ease of use in a real high-volume environment: the iTunes Music Store.

For more information, send me an email and/or go to http://www.apple.com/webobjects. (I hope the URL is correct, I'm not online right now - sitting on the train to work ;-)) Ah, and if you download it for testing: go to one of the WO mailing lists and ask for some good tips for starters. It will help.

/advertisement

Also, what do you think of having the database management system do all work (business logic etc) with the front end as a user interface vs. having N-tier with an application server handling business logic, and the db just accepting data?

I prefer the second approach, because it frees me from one specific database. I can use Oracle, PostgreSQL, MySQL, FrontBase, OpenBase, Sybase or MSSQL without changing one line of code (if I have avoided using custom SQL, which I mostly do) by just switching one framework. Okay, right, there are always some problems you have to solve, but they are nothing compared to what a PHP developer has to do to use a different dbms than the one he develops on.

This comes from having to re-invent the wheel every time a new user app needs to be created.

You are doing something wrong if you don't build up libraries of your basic and generic work over time. This has nothing to do with the tool you use or the underlying dbms - it's just bad coding style if you have to re-invent the wheel for every app. For me and our customers it's more important to be able to switch the dbms than the application server. But, as with everything: YMMV.

Ah, and out of curiosity I'm just evaluating Ruby on Rails: it has interesting ideas and places itself somewhere between the fat J2EE app servers and the scripting languages PHP and Perl. It is more closely related to WebObjects than I thought at first look, but it is a bit easier to learn and not the big 800-pound gorilla of web development, with hooks, tools, templates, ideas and concepts for nearly every case. But in my opinion it lacks a couple of things; the two most important for me are complete abstraction from the db (you may get close with easy schemas and generic SQL, but not far enough) and a working internationalization approach. It is lightweight and it
[GENERAL] Is anyone using ha-jdbc with the distributable tag
I realise this is not strictly a PostgreSQL problem, but I wondered if anyone here was using ha-jdbc. I have tried asking on their mailing list, but apart from two of my own questions there has been no traffic for 24 hours and it appears to be a dead list. I have a single instance of ha-jdbc working, talking to multiple PostgreSQL backends. But I need to set this up for a Tomcat cluster, so I want multiple ha-jdbc instances talking to the same DBs. According to the docs this is what the distributable tag is for, but when I try to start the second Tomcat server, rather than joining with the first it complains of a name clash. Regards David ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Arguments Pro/Contra Software Raid
Hi, I've just had some discussion with colleagues regarding the usage of hardware or software RAID 1/10 for our Linux-based database servers. I myself can't see much reason to spend $500 on high-end controller cards for a simple RAID 1. Any arguments pro or contra would be appreciated. From my experience and what I've read here:

+ Hardware RAID might be a bit easier to manage, if you have never spent the few hours it takes to learn the software RAID tools.

+ There are situations in which software RAID is faster, as CPU power has advanced dramatically in recent years and even high-end controller cards cannot keep up with that.

+ Using SATA drives is always a bit of a risk, as some drives lie about whether they are caching or not.

+ Using hardware controllers, the array becomes locked to a particular vendor. You can't switch controller vendors, as the array meta information is stored in a proprietary format. In case the RAID is broken to a level the controller can't recover from automatically, this might complicate manual recovery by specialists.

+ Even battery-backed controllers can't guarantee that data written to the drives is consistent after a power outage, nor that the drive does not corrupt something during the involuntary shutdown / power irregularities. (This is theoretical, as any server will be UPS-backed.)

-- Regards, Hannes Dorbath ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
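For comparison, the software-RAID side of the argument is only a handful of commands on Linux with mdadm. A minimal sketch (device names are examples, run as root):

```shell
# Create a RAID 1 mirror from two partitions (device names are examples)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Watch the initial sync / rebuild progress
cat /proc/mdstat

# Simulate a disk failure, remove it, and add a replacement
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdc1

# Record the array so it is assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf
```

One point in favor of the "vendor lock-in" argument above: this metadata format is readable by any Linux kernel, so the array can be moved to any other machine.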
Re: [GENERAL] catalog is missing 16 attribute(s) for relid 8202590
I think I made a big mistake by using kill -9 on the postmaster. So, what should I do when I face this problem? Can someone give advice, because I am quite new to PostgreSQL. How do I check whether there is corrupted data in my database? Is there anything I can do to prevent this error message from coming back again? Since it is just a temporary table, can I just remove it from the database? Regards Beh - Original Message - From: Tom Lane [EMAIL PROTECTED] To: Chun Yit(Chronos) [EMAIL PROTECTED] Cc: pgsql-general@postgresql.org Sent: Monday, May 08, 2006 10:43 PM Subject: Re: [GENERAL] catalog is missing 16 attribute(s) for relid 8202590 Chun Yit\(Chronos\) [EMAIL PROTECTED] writes: what i did is 1) kill -9 (postmaster process id). 2) remove the postmaster.pid file under /usr/local/pgsql/data/ 3) restart the postmaster process. The *first* thing you ought to do is learn not to do that. You deliberately broke the safety interlocks that keep two separate sets of backends from being active in a single database. If there's corrupted data in there now, I think it's your own fault. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
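For anyone finding this thread later: the safe alternatives to kill -9 are the regular shutdown modes. A sketch (the data directory path is an example):

```shell
# Graceful shutdown modes, in escalating order -- never kill -9:
pg_ctl stop -D /usr/local/pgsql/data -m smart      # wait for clients to disconnect
pg_ctl stop -D /usr/local/pgsql/data -m fast       # roll back sessions, clean shutdown
pg_ctl stop -D /usr/local/pgsql/data -m immediate  # no checkpoint; crash recovery on next start

# Equivalent signals, if you must signal the postmaster by hand:
kill -TERM <postmaster-pid>   # smart
kill -INT  <postmaster-pid>   # fast
kill -QUIT <postmaster-pid>   # immediate
```

Even "-m immediate" is safe in the sense Tom describes: it never leaves two sets of backends active on one data directory, and it never requires touching postmaster.pid by hand.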
Re: [GENERAL] database size grows (even after vacuum (full and analyze))....
Ok. I get the point. I'm using 7.2 because that's the one I got from the original Fedora Core 3 CDs. I'll upgrade to the most recent. Thank you all for your support. jmf -Original Message- From: Tom Lane [mailto:[EMAIL PROTECTED]] Sent: Mon 5/8/2006 7:08 PM To: Bruno Wolff III Cc: João Miguel Ferreira; pgsql-general@postgresql.org Subject: Re: [GENERAL] database size grows (even after vacuum (full and analyze)) Bruno Wolff III [EMAIL PROTECTED] writes: In the long run, you should upgrade. 7.2 is essentially without support. I believe there is still a RHEL version using it that is in support, so a critical fix might get backported. No, Red Hat never shipped a RHEL version using 7.2.* (they went straight from 7.1 to 7.3). This is not unrelated to the fact that the community dropped support for 7.2, actually --- I'm sure we'd not be maintaining 7.3 anymore either, if I weren't personally on the hook to support 7.3 for RHEL3. Bottom line is there's no one out there maintaining 7.2 at all, and even 7.3 and 7.4 are really not getting anything but the most critical bug fixes. regards, tom lane
[GENERAL] pgcrypto sha256/384/512 don't work on Redhat. Please help!
Hello,

Pgcrypto's SHA 256/384/512 algorithms don't work on Red Hat:

db=# SELECT digest('test', 'sha1');
digest
--
\251J\217\345\314\261\233\246\034L\010s\323\221\351\207\230/\273\323
(1 row)

db=# SELECT digest('test', 'sha256');
ERROR: Cannot use sha256: No such hash algorithm

I have standard Fedora Core 4 installed with standard PostgreSQL 8.1.3. The pgcrypto documentation says that it has built-in SHA256 and it should work when OpenSSL is not found:

==
Without OpenSSL, public-key encryption does not work, as pgcrypto does not yet contain math functions for large integers.

Functionality     built-in   OpenSSL
MD5               yes        yes
SHA1              yes        yes
SHA256/384/512    yes        since 0.9.8
==

Fedora Core has OpenSSL 0.9.7 installed by default, and it's not possible to install 0.9.8 because of a glibc conflict. I suspect pgcrypto looks for SHA256 in the OpenSSL lib when it should use the built-in one. SHA256 is working fine on Windows, but on Red Hat it fails. Maybe because the Windows pgcrypto DLL is compiled statically with OpenSSL? How do I make pgcrypto use the built-in sha256? Thanks.
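As a side note, the raw bytea output above is easier to check when wrapped in encode(). A sketch using the same pgcrypto digest() call (against a server where the algorithm is available):

```sql
-- encode() renders the digest as hex; 'sha1' is available in every build,
-- 'sha256' only where pgcrypto has it (built-in or OpenSSL >= 0.9.8)
SELECT encode(digest('test', 'sha1'), 'hex');
-- a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
```

The octal escapes in the original output (\251 = 0xa9, J = 0x4a, ...) are the same bytes as this hex string.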
Re: [GENERAL] Arguments Pro/Contra Software Raid
Hi Hannes,

Hannes Dorbath wrote: Hi, I've just had some discussion with colleagues regarding the usage of hardware or software raid 1/10 for our linux based database servers. I myself can't see much reason to spend $500 on high end controller cards for a simple Raid 1.

Naa, you can find ATA | SATA controllers for about EUR 30!

Any arguments pro or contra would be desirable. From my experience and what I've read here: + Hardware Raids might be a bit easier to manage, if you never spend a few hours to learn Software Raid Tools.

I'd say the same (mostly because you still have to punch a command line for most of the controllers).

+ There are situations in which Software Raids are faster, as CPU power has advanced dramatically in the last years and even high end controller cards cannot keep up with that.

Definitely NOT; however, if your server doesn't have a heavy load, the software overhead can't be noticed (essentially cache management and syncing). For dual-core CPUs, it might be true.

+ Using SATA drives is always a bit of risk, as some drives are lying about whether they are caching or not.

?? Do you intend to use your server without a UPS ??

+ Using hardware controllers, the array becomes locked to a particular vendor. You can't switch controller vendors as the array meta information is stored proprietary. In case the Raid is broken to a level the controller can't recover automatically this might complicate manual recovery by specialists.

?? Do you intend not to make backups ??

+ Even battery backed controllers can't guarantee that data written to the drives is consistent after a power outage, neither that the drive does not corrupt something during the involuntary shutdown / power irregularities. (This is theoretical as any server will be UPS backed)

RAID's laws:

1- RAID prevents you from losing data on healthy disks, not from faulty disks.
1b- So format and reformat your RAID disks (whether SCSI, ATA or SATA) several times, with destructive tests (see the -c -c option in the mke2fs man page). It will ensure that the disks are safe, and it also serves as a kind of burn-in test (which might turn into... days of formatting!).
2- RAID doesn't protect you from power supply breakage or an electricity breakdown, so use a (LARGE) UPS.
2b- A LARGE UPS, because HDs are the components with the highest power consumption (a 700VA UPS gives me about 10-12 minutes on a machine with an XP2200+, 1GB RAM and a 40GB HD; however, this falls to... less than 25 seconds with seven HDs! all ATA).
2c- Use a server box with redundant power supplies.
3- As with any sensitive data, make regular backups or you'll be a sitting duck.

Some hardware controllers are able to avoid the loss of a disk if it turns out to have some faulty sectors (by internally relocating them); software RAID can't, as sectors *must* be at the same (linear) addresses. BUT a hardware controller is about EUR 2000 and an (ATA/SATA) 500GB HD is ~ EUR 350. That means you have to consider:

* The server availability (time to change a power supply if there is no redundancy, time to exchange a non-hotswap HD... in short, how much downtime you can afford),
* The volume of the data (on which the size of the backup device depends),
* The backup device you'll use (tape or other HDs),
* The load of the server (and the number of simultaneous users = soft|hard RAID, ATA/SATA|SCSI...),
* The money you can spend on such a server,
* And most important, the color of your boss' tie on the day you take the decision.

Hope it will help you
Jean-Yves ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Arguments Pro/Contra Software Raid
On 09.05.2006 12:10, Jean-Yves F. Barbier wrote: Naa, you can find ATA | SATA ctrlrs for about EUR30 ! Sure, but for my colleagues "RAID controller" = ICP Vortex, which resides in that price range. For bi-core CPUs, it might be true I've got that from pgsql.performance for multi-way Opteron setups. ?? Do you intend to use your server without a UPS ?? Sure there will be a UPS. I'm just trying to nail down the differences between software and hardware RAID, regardless of whether they matter in the end :) ?? Do you intend not to make backups ?? Sure we do backups, this is all more hypothetical thinking.. Hope it will help you It has, thanks. -- Regards, Hannes Dorbath ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] pgcrypto sha256/384/512 don't work on Redhat. Please help!
On 5/9/06, Joe Kramer [EMAIL PROTECTED] wrote: Fedora core has OpenSSL 0.9.7 installed by default. And it's not possible to install 0.9.8 because of glibc conflict. I suspect pgcrypto looks for SHA256 in OpenSSL lib when it should use built-in. SHA256 is working fine on Windows but on Redhat it fails. Maybe because windows pgcrypto DLL is compiled statically with OpenSSL? OpenSSL vs. built-in is a compile-time decision, so the built-in code cannot be used as a fallback at runtime. The Windows code is using either the built-in code or a newer OpenSSL. How to make pgcrypto use built-in sha256? You need to recompile PostgreSQL. Don't try to use a non-OpenSSL pgcrypto with an OpenSSL PostgreSQL; it will crash due to a symbol conflict. Another variant is to compile a separate OpenSSL 0.9.8 and compile PostgreSQL against that, so you don't need to upgrade the system OpenSSL. -- marko ---(end of broadcast)--- TIP 6: explain analyze is your friend
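Marko's second suggestion (a private OpenSSL 0.9.8 that the system never sees) looks roughly like this. A sketch; the prefix paths and tarball versions are examples:

```shell
# Build a private OpenSSL 0.9.8, leaving the system 0.9.7 untouched
cd openssl-0.9.8
./config --prefix=/opt/openssl-0.9.8 shared
make && make install

# Point the PostgreSQL build at it via the standard configure switches
cd ../postgresql-8.1.3
./configure --with-openssl \
    --with-includes=/opt/openssl-0.9.8/include \
    --with-libraries=/opt/openssl-0.9.8/lib
make && make install
```

--with-includes and --with-libraries are the documented configure options for picking up headers and libraries from non-standard locations.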
Re: [GENERAL] pgcrypto sha256/384/512 don't work on Redhat. Please help!
Thanks, but I need it to work out-of-the-box, with a standard installation of RedHat or Gentoo and the standard PostgreSQL rpm. I am developing an application with PostgreSQL and I can't tell the customer to recompile PostgreSQL and see if it works, then try to use a non-OpenSSL pgcrypto, then try to compile OpenSSL 0.9.8. Pgcrypto depends on OpenSSL 0.9.8, and 0.9.8 is not stable in RedHat and Gentoo, so you need to recompile pgcrypto/openssl anyway if you want to use it. Can I report this in the PostgreSQL bug system? Regards. On 5/9/06, Marko Kreen [EMAIL PROTECTED] wrote: [snip]
Re: [GENERAL] Segfaults with 8.1.3 on amd64
Martijn van Oosterhout wrote: On Thu, May 04, 2006 at 12:22:01PM +0100, Gavin Hamill wrote: At a guess rip = instruction pointer, rsp = stack pointer. The fact that they're all the same seems to rule out hardware. That's good to hear (in one way... :) Before starting the server, run ulimit -S -c unlimited If done properly it should enable core dumps for the backend. Have a nice day, Great stuff - it's crashed again and dropped a 6MB core which points the finger squarely at Slony - I'll ask on the relevant list :)

Core was generated by `postgres: sharp laterooms 194.24.250.135(54478) UPDATE'.
Program terminated with signal 11, Segmentation fault.
Reading symbols from /lib/libpam.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib/postgresql/8.1/lib/slony1_funcs.so...done.
Loaded symbols for /usr/lib/postgresql/8.1/lib/slony1_funcs.so
Reading symbols from /usr/lib/postgresql/8.1/lib/xxid.so...done.
Loaded symbols for /usr/lib/postgresql/8.1/lib/xxid.so
#0 0x2b5e8c00 in strlen () from /lib/libc.so.6
(gdb) bt
#0 0x2b5e8c00 in strlen () from /lib/libc.so.6
#1 0x2aaaca65b062 in slon_quote_literal (str=0x0) at slony1_funcs.c:1044
#2 0x2aaaca65c348 in _Slony_I_logTrigger (fcinfo=0x8f5ec5) at slony1_funcs.c:783
#3 0x005ca9f9 in fmgr_internal_function ()
#4 0x004ce6a4 in FreeTriggerDesc ()
#5 0x004cf42e in ExecARUpdateTriggers ()
#6 0x004cf873 in ExecARUpdateTriggers ()
#7 0x004cfb10 in AfterTriggerEndQuery ()
#8 0x0055ef05 in FreeQueryDesc ()
#9 0x0055fecf in PortalRun ()
#10 0x0055f78f in PortalRun ()
#11 0x0055b721 in pg_plan_queries ()
#12 0x0055e14c in PostgresMain ()
#13 0x00539cc1 in ClosePostmasterPorts ()
#14 0x00539797 in ClosePostmasterPorts ()
#15 0x00537d3d in PostmasterMain ()
#16 0x0053704e in PostmasterMain ()
#17 0x004fdb58 in main ()

Cheers, Gavin, ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] pgcrypto sha256/384/512 don't work on Redhat. Please help!
On 5/9/06, Joe Kramer [EMAIL PROTECTED] wrote: Thanks, but I need it to work out-of-the-box, with standard installation of RedHat or Gentoo and standard PostgreSQL rpm. I am developing application with PostgreSQL and I can't tell customer to Recompile PostgreSQL and see if it works then try to use non-openssl pgcrypto or try to compile openSSL 0.9.8. Pgcrypto depends on OpenSSL 0.9.8. And 0.9.8 is not Stable in RedHat and Gentoo. So you need to recompile pgcrypto/openssl anyway if you want to use it. Can I report this in PostgreSQL bug system? What bug are you referring to? The only bug that I can see is the symbol-conflict problem, but as the 8.1 pgcrypto always uses the same setting as core postgres, it should not be a big deal. The fact that the Fedora pgcrypto is linked with an OpenSSL that does not support SHA256 is not a bug, just a fact. OTOH, the nicest solution to your problem would be a self-compiled pgcrypto that would work with stock PostgreSQL. As the conflict happens only with the (new) SHA2 functions, I can prepare a patch for the symbol conflict; would that be satisfactory for you? -- marko ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] [PERFORM] Arguments Pro/Contra Software Raid
-----BEGIN PGP SIGNED MESSAGE----- Hash: RIPEMD160 Hannes Dorbath wrote: Hi, I've just had some discussion with colleagues regarding the usage of hardware or software raid 1/10 for our linux based database servers. I myself can't see much reason to spend $500 on high end controller cards for a simple Raid 1. Any arguments pro or contra would be desirable. One pro and one con off the top of my head. Hotplug. Depending on your platform, SATA may or may not be hotpluggable (I know AHCI mode is the only one promising some kind of a hotplug, which means ICH6+ and Silicon Image controllers last I heard). SCSI isn't hotpluggable without the use of special hotplug backplanes and disks. You lose that in software RAID, which effectively means you need to shut the box down and do maintenance. Hassle. CPU. It's cheap. Much cheaper than your average hardware RAID card. For the 5-10% overhead usually imposed by software RAID, you can throw in a faster CPU and never even notice it. Most cases aren't CPU-bound anyways, or at least, most cases are I/O bound for the better part. This does raise the question of I/O bandwidth your standard SATA or SCSI controller comes with, though. If you're careful about that and handle hotplug sufficiently, you're probably never going to notice you're not running on metal. Kind regards, - -- Grega Bremec gregab at p0f dot net -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.0 (GNU/Linux) iD8DBQFEYHRAfu4IwuB3+XoRA9jqAJ9sS3RBJZEurvwUXGKrFMRZfYy9pQCggGHh tLAy/YtHwKvhd3ekVDGFtWE= =vlyC -----END PGP SIGNATURE----- ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] initdb utf-8
hello, I want to correctly sort English, Hebrew, Russian ... What is the best encoding for this? (utf-8?) On IRC I was told that utf-8 is not the solution, and that actually there is no solution to correctly sort many languages. Is that true? Will it matter if each language is in a different table? Back in the 7.0.x days I created a cluster with Hebrew encoding, and now I can't dump/restore it to a C / utf-8 / other ISO cluster. Is there a way around it? Thanks -- -- Michael Ben-Nes - Internet Consultant and Director. http://www.epoch.co.il - weaving the Net. Cellular: 054-4848113 -- ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
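For context: the encoding only determines which characters can be stored; sort order comes from the cluster's locale, which is fixed at initdb time. That is why no single setting sorts English, Hebrew and Russian each by its own national rules. A sketch of creating a UTF-8 cluster (the data directory and locale name are examples, and the locale must exist on the system):

```shell
# New cluster: UTF-8 encoding with a UTF-8 locale chosen at initdb time
initdb -D /usr/local/pgsql/data --encoding=UTF8 --locale=en_US.UTF-8

# Verify what the running cluster was initialized with
psql -c "SHOW lc_collate;"
psql -c "SHOW server_encoding;"
```

With a UTF-8 locale all three scripts at least sort consistently (by that one locale's rules), which is usually the practical compromise.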
Re: [GENERAL] pgcrypto sha256/384/512 don't work on Redhat. Please help!
On 5/9/06, Marko Kreen [EMAIL PROTECTED] wrote: [snip] What bug are you referring to? Only bug that I can see is the symbol-conflict problem, but as 8.1 pgcrypto uses always same setting as core postgres, it should not be a big deal. The fact that Fedora pgcrypto is linked with OpenSSL that does not support SHA256 is not a bug, just a fact.

It's not Fedora only; the same problem exists with Gentoo/portage. I think it's a problem for all distros: you need to recompile pgcrypto or install OpenSSL 0.9.8, which is considered unstable by most distros. Maybe pgcrypto should use the built-in algorithms until OpenSSL 0.9.8 is the mainstream/default install.

OTOH, the nicest solution to your problem would be self-compiled pgcrypto, that would work with stock PostgreSQL. As the conflict happens with only (new) SHA2 functions, I can prepare a patch for symbol conflict, would that be satisfactory for you?

Ideally, it would be great if pgcrypto could fall back to the built-in algorithm when OpenSSL doesn't support it. But since it's a compile-time switch, a completely self-compiled pgcrypto would be great. Thanks a lot!
Re: [GENERAL] how to debugg
On Tue, 09 May 2006 10:59:20 +0530 N Srinivasa [EMAIL PROTECTED] wrote: Hi, I downloaded the PostgreSQL source code and compiled it on the Windows platform. Can anybody please tell me how I can debug the source code on Windows, and what steps I should go through? Regards Srinivasa ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly Good Morning Srinivasa, What do you mean by debug? Speaking purely for myself, I'd appreciate it if you fixed Windows FIRST. Then turn your skills to fixing PostgreSQL. John Purser ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] tablespace and backup
Hello, over the past 4 weeks I have imported (text) data from 50 DVDs from a customer into my PostgreSQL, and now I have a very big problem... The main table (called 'timeline') is around 350 GByte in size... ...and searching is hell! Since I have been trying to redesign my database for some time, I would like to use ONE table per year, using tablespaces, and would like to know what happens with a backup and restoring it. How does the restore know where to place things, etc.? And is it possible to get a single table back (restoring) if, for example, I only have the one disk where the tablespace was created? Or would it be better to run several postmasters, using one SCSI or SATA (WD Raptor) HDD of 150 GByte for each database? Currently I am using a 3Ware 3w8500-12S with 2 x WD1500GD (OS, Raid-1) and 8 HDDs using Raid-5, but I can switch to 4 x Raid-1 plus two new HDDs. I do not like LVM because of too much negative experience. Or would it generally be better to use 1U server racks, each with one Raid-1 of 150 GByte? (The prices for 1U servers are falling in Germany.) My current servers eat 2 x 4U for the database, 4U for the webserver and 3 x 4U for the binaries (1.8 TB of original documents of any kind). Those three servers plus a very big Sun machine are connected by a Cisco to a SONET dual STM-4 (since the end of March). Maybe it is relevant. Thanks Michelle Konzack -- Linux-User #280138 with the Linux Counter, http://counter.li.org/ # Debian GNU/Linux Consultant # Michelle Konzack Apt. 917 ICQ #328449886 50, rue de Soultz MSM LinuxMichi 0033/3/8845235667100 Strasbourg/France IRC #Debian (irc.icq.com) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] tablespace and backup
You will need to provide more information about the data requirement - such as column types, what you need to search for, and the actual queries and execution plans. Purely as a guess, it seems like you haven't tried partial indexes: http://www.postgresql.org/docs/8.1/interactive/sql-createindex.html With kind regards from the USA, M On Sat, May 6, 2006 2:58 pm, Michelle Konzack wrote: [original message quoted in full - snipped] ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
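Both ideas from this thread — one table per year in its own tablespace, and partial indexes — look roughly like this in 8.1. A sketch; the timestamp column name "ts" and all paths are invented, and the parent table "timeline" is assumed to already exist:

```sql
-- Partial index: covers only one year's rows, so it stays far smaller
-- than a full index on a 350 GB table
CREATE INDEX timeline_2005_ts_idx ON timeline (ts)
    WHERE ts >= '2005-01-01' AND ts < '2006-01-01';

-- Per-year child tables via inheritance (the 8.1-era partitioning style);
-- the CHECK constraint lets the planner skip irrelevant years
CREATE TABLE timeline_2005 (
    CHECK (ts >= '2005-01-01' AND ts < '2006-01-01')
) INHERITS (timeline);

-- Each child table can live in its own tablespace, i.e. on its own disk
CREATE TABLESPACE disk3 LOCATION '/mnt/disk3/pgdata';
ALTER TABLE timeline_2005 SET TABLESPACE disk3;

-- New in 8.1: enable constraint exclusion so queries with a WHERE clause
-- on ts touch only the matching year's table
SET constraint_exclusion = on;
```

On the backup question: pg_dump records tablespace assignments, so a restore recreates them, but a tablespace's files alone (one disk pulled from the array) are not restorable by themselves; back up at the dump or cluster level, not per disk.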
Re: [GENERAL] What is your favorite front end for user interaction to postgresql databases?
On May 8, 2006, at 11:05 PM, Guido Neitzer wrote: [snip] You can deploy it to every Java plattform because it's just a Java application Is that actually true? My understanding was that under the most recent license changes it was not possible to deploy it to any platform other than XServe. Technically possible, sure, but a violation of the license. That's the main reason I stopped considering it a viable development environment. Cheers, Steve ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] pgcrypto sha256/384/512 don't work on Redhat. Please help!
On 5/9/06, Joe Kramer [EMAIL PROTECTED] wrote:
On 5/9/06, Marko Kreen [EMAIL PROTECTED] wrote:

The fact that Fedora pgcrypto is linked with an OpenSSL that does not support SHA256 is not a bug, just a fact.

It's not Fedora only, same problem with Gentoo/portage. I think it's a problem for all distros. You need to recompile pgcrypto or install OpenSSL 0.9.8, which is considered unstable by most distros. Maybe pgcrypto should use built-in algorithms until OpenSSL 0.9.8 is the mainstream/default install.

To be honest, pgcrypto actually falls back on built-in code for AES, in case of an old OpenSSL that does not have AES. That's because AES should always be there, together with md5/sha1/blowfish. I do not consider SHA2 that important (yet?), so those don't get the same treatment. OTOH, the nicest solution to your problem would be a self-compiled pgcrypto that would work with stock PostgreSQL. As the conflict happens only with the (new) SHA2 functions, I can prepare a patch for the symbol conflict; would that be satisfactory for you?

Ideally, it would be great if pgcrypto could fall back to the built-in algorithm if OpenSSL doesn't support it. But since it's a compile switch, a completely self-compiled pgcrypto would be great.

Attached is a patch that re-defines the SHA2 symbols so that they do not conflict with OpenSSL. Now that I think about it, if your OpenSSL does not contain SHA2, then there should be no conflict. But of course, if someone upgrades OpenSSL, the server starts crashing. So I think it's best to always apply this patch.

I think I'll send the patch to 8.2 later; not sure if it's important enough for 8.1.

-- marko

Index: contrib/pgcrypto/sha2.c
===
RCS file: /opt/cvs/pgsql/contrib/pgcrypto/sha2.c,v
retrieving revision 1.5
diff -u -c -r1.5 sha2.c
*** contrib/pgcrypto/sha2.c	15 Oct 2005 02:49:06 -	1.5
--- contrib/pgcrypto/sha2.c	9 May 2006 14:09:12 -
***************
*** 163,171 ****
   * library -- they are intended for private internal visibility/use
   * only.
   */
! void SHA512_Last(SHA512_CTX *);
! void SHA256_Transform(SHA256_CTX *, const uint8 *);
! void SHA512_Transform(SHA512_CTX *, const uint8 *);

  /*** SHA-XYZ INITIAL HASH VALUES AND CONSTANTS ***/
--- 163,171 ----
   * library -- they are intended for private internal visibility/use
   * only.
   */
! static void SHA512_Last(SHA512_CTX *);
! static void SHA256_Transform(SHA256_CTX *, const uint8 *);
! static void SHA512_Transform(SHA512_CTX *, const uint8 *);

  /*** SHA-XYZ INITIAL HASH VALUES AND CONSTANTS ***/
***************
*** 307,313 ****
  		j++;					\
  } while(0)

! void
  SHA256_Transform(SHA256_CTX * context, const uint8 *data)
  {
  	uint32		a,
--- 307,313 ----
  		j++;					\
  } while(0)

! static void
  SHA256_Transform(SHA256_CTX * context, const uint8 *data)
  {
  	uint32		a,
***************
*** 378,384 ****
  }
  #else							/* SHA2_UNROLL_TRANSFORM */

! void
  SHA256_Transform(SHA256_CTX * context, const uint8 *data)
  {
  	uint32		a,
--- 378,384 ----
  }
  #else							/* SHA2_UNROLL_TRANSFORM */

! static void
  SHA256_Transform(SHA256_CTX * context, const uint8 *data)
  {
  	uint32		a,
***************
*** 631,637 ****
  		j++;					\
  } while(0)

! void
  SHA512_Transform(SHA512_CTX * context, const uint8 *data)
  {
  	uint64		a,
--- 631,637 ----
  		j++;					\
  } while(0)

! static void
  SHA512_Transform(SHA512_CTX * context, const uint8 *data)
  {
  	uint64		a,
***************
*** 699,705 ****
  }
  #else							/* SHA2_UNROLL_TRANSFORM */

! void
  SHA512_Transform(SHA512_CTX * context, const uint8 *data)
  {
  	uint64		a,
--- 699,705 ----
  }
  #else							/* SHA2_UNROLL_TRANSFORM */

! static void
  SHA512_Transform(SHA512_CTX * context, const uint8 *data)
  {
  	uint64		a,
***************
*** 842,848 ****
  	usedspace = freespace = 0;
  }

! void
  SHA512_Last(SHA512_CTX * context)
  {
  	unsigned int usedspace;
--- 842,848 ----
  	usedspace = freespace = 0;
  }

! static void
  SHA512_Last(SHA512_CTX * context)
  {
  	unsigned int usedspace;

Index: contrib/pgcrypto/sha2.h
===
RCS file: /opt/cvs/pgsql/contrib/pgcrypto/sha2.h,v
retrieving revision 1.2
diff -u -c -r1.2 sha2.h
*** contrib/pgcrypto/sha2.h	15 Oct 2005 02:49:06 -	1.2
--- contrib/pgcrypto/sha2.h	9 May 2006 11:43:13 -
***************
*** 38,43 ****
--- 38,53 ----
  #ifndef _SHA2_H
  #define _SHA2_H

+ /* avoid conflict with OpenSSL */
+ #define SHA256_Init pg_SHA256_Init
+ #define SHA256_Update pg_SHA256_Update
+ #define SHA256_Final pg_SHA256_Final
+ #define SHA384_Init pg_SHA384_Init
+ #define SHA384_Update pg_SHA384_Update
+ #define SHA384_Final pg_SHA384_Final
+ #define SHA512_Init pg_SHA512_Init
+ #define SHA512_Update pg_SHA512_Update
+ #define SHA512_Final pg_SHA512_Final

  /*** SHA-256/384/512 Various Length Definitions
Re: [GENERAL] What is your favorite front end for user interaction
Steve Atkins wrote:

On May 8, 2006, at 11:05 PM, Guido Neitzer wrote:

On 09.05.2006, at 0:33 Uhr, Karen Hill wrote: What is your favorite front end for end users to interact with your postgresql db? Is it java, .net, web apache + php, MS-Access, ruby on rails? Why is it your favorite? Which would you recommend for end users on multiple OSes?

You mean what technology to use to build business logic? It just depends on your needs. I prefer to use Apple WebObjects for building web frontends to the db. Why? Because it is extremely powerful, free if you are able to develop on Mac OS X (it comes with the developer tools), and even if you have to buy a couple of Powerbooks and Xserves it may be cheaper than the software license alone for one of the other big Java application server technologies. You can deploy it to every Java platform because it's just a Java application

Is that actually true? My understanding was that under the most recent license changes it was not possible to deploy it to any platform other than Xserve. Technically possible, sure, but a violation of the license. That's the main reason I stopped considering it a viable development environment.

Cheers, Steve

---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org

from http://developer.apple.com/softwarelicensing/agreements/webobjects.html

WebObjects Distribution License

*WebObjects Distribution License $699 per copy*

WebObjects, the premier web application server used by hundreds of corporations, is now available for redistribution by web application developers just like you. Upon signature by Apple, the agreement, together with payment for your initial request of licenses, allows you to resell WebObjects license keys, and redistribute the WebObjects deployment runtime and adaptors as part of your web application.
The WebObjects runtime includes a powerful object-relational engine for extracting and managing data from virtually any database, without writing a single line of SQL. Its HTML component model makes it a breeze to assemble dynamic, fully customizable web pages. There’s even support for rich Java clients and Web services. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Arguments Pro/Contra Software Raid
On Tue, May 09, 2006 at 12:10:32 +0200, Jean-Yves F. Barbier [EMAIL PROTECTED] wrote:

Naa, you can find ATA | SATA ctrlrs for about EUR30 !

But those are the ones that you would generally be better off not using.

Definitely NOT, however if your server doesn't have a heavy load, the software overhead can't be noticed (essentially cache managing and syncing)

It is fairly common for database machines to be IO, rather than CPU, bound, and so the CPU impact of software raid is low.

Some hardware ctrlrs are able to avoid the loss of a disk if it turns out to have some faulty sectors (by relocating them internally); software RAID doesn't, as sectors *must* be at the same (linear) addresses.

That is not true. Software raid works just fine on drives that have internally remapped sectors.

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] pg_restore duplicate key violations
On 05/08/2006 06:42:18 PM, Tom Lane wrote:

Karl O. Pinc [EMAIL PROTECTED] writes: I'm having trouble with a dump and restore:

$ pg_dump --format=t --schema=babase --data-only --user babase_admin babase_test | pg_restore --data-only --disable-triggers --user babase_admin --dbname=babase
pg_restore: ERROR: duplicate key violates unique constraint activities_pkey
CONTEXT: COPY activities, line 1: B t t Be groomed

Um ... it looks to me like you're trying to restore into an existing table that already has the same data loaded ...

That's what I thought at first, except that I had just created the db structure with a script. Just in case, I checked with a select from psql. I even looked at the dump (as text) output to check that it wasn't doing something weird like loading things twice. Turns out this table is the first to have data loaded into it. I'm not clear on where to start with this.

Karl [EMAIL PROTECTED]
Free Software: You don't pay back, you pay forward. -- Robert A. Heinlein

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Exporting postgres query to CSV
Greetings, I am running postgres 7.4.7 on debian sarge. I need to run an SQL query and store the results in a file. The format needs to be comma separated values (CSV), so I can import this later in Excel. Any ideas on how to accomplish this? much appreciated, Ryan ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] pg_restore duplicate key violations
Karl O. Pinc [EMAIL PROTECTED] writes: On 05/08/2006 06:42:18 PM, Tom Lane wrote: Um ... it looks to me like you're trying to restore into an existing table that already has the same data loaded ... That's what I thought at first, except that I had just created the db structure with a script. Just in case I checked with a select from psql. I even looked at the dump (as text) output to check that it wasn't doing something wierd like loading things twice. Turns out this table is the first to have data loaded into it. I'm not clear on where to start with this. Well, the first thing is to look at the database after the failure and see if there's already data in the table. I'm betting you'll find there is. Then you would start trying to figure out where it came from. One thought that comes to mind: maybe the table exists in template1? regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] pg_restore duplicate key violations
On 05/09/2006 10:24:28 AM, Tom Lane wrote: Karl O. Pinc [EMAIL PROTECTED] writes: On 05/08/2006 06:42:18 PM, Tom Lane wrote: Um ... it looks to me like you're trying to restore into an existing table that already has the same data loaded ... I'm not clear on where to start with this. Well, the first thing is to look at the database after the failure and see if there's already data in the table. I'm betting you'll find there is. Then you would start trying to figure out where it came from. I tried that already. Nothing in the table. One thought that comes to mind: maybe the table exists in template1? Don't think so. Nothing has gone into template1. Karl [EMAIL PROTECTED] Free Software: You don't pay back, you pay forward. -- Robert A. Heinlein ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Problem dropping a table
I have a java application that is trying to dynamically drop a set of tables. Problem is, when it gets to a particular table and I execute the drop table foo cascade command from a prepared statement, the query never returns. It just hangs indefinitely. I presume that it is waiting on a lock for this table, but there is nothing that I know of that should be locking this table.

So, I have two issues. The first: how do I prevent myself from hanging indefinitely? I tried 'setQueryTimeout' on the prepared statement, but it doesn't seem to have any effect. Still hangs indefinitely. The second: how can I track down what is locking this table? I presume that it is some of my code somewhere... maybe a prepared statement that I didn't get closed - but I can't find it. I've checked my code twice for any reference to this table, and every use of it properly closes the result sets and the prepared statement.

Any ideas?

Thanks, Dan

-- Daniel Armbrust Biomedical Informatics Mayo Clinic Rochester daniel.armbrust(at)mayo.edu http://informatics.mayo.edu/

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Problem dropping a table
Dan Armbrust [EMAIL PROTECTED] writes:

I have a java application that is trying to dynamically drop a set of tables. Problem is, when it gets to a particular table and I execute the drop table foo cascade command from a prepared statement, the query never returns. It just hangs indefinitely. I presume that it is waiting on a lock for this table, but there is nothing that I know of that should be locking this table. So, I have two issues - the first, how do I prevent myself from hanging indefinitely? I tried 'setQueryTimeout' on the prepared statement, but it doesn't seem to have any effect. Still hangs indefinitely.

You should complain to the JDBC list about that. The underlying backend facility works as expected:

session 1:

regression=# create table foo (f int);
CREATE TABLE
regression=# begin;
BEGIN
regression=# lock table foo;
LOCK TABLE

session 2:

regression=# set statement_timeout TO 1000;
SET
regression=# drop table foo cascade;
... after a second ...
ERROR: canceling statement due to statement timeout
regression=#

The second, how can I track down what is locking this table?

Look in pg_locks to start with.

regards, tom lane

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] [PERFORM] Arguments Pro/Contra Software Raid
On Tue, 2006-05-09 at 04:16, Hannes Dorbath wrote:

Hi, I've just had some discussion with colleagues regarding the usage of hardware or software raid 1/10 for our linux based database servers. I myself can't see much reason to spend $500 on high end controller cards for a simple Raid 1. Any arguments pro or contra would be desirable. From my experience and what I've read here:

+ Hardware Raids might be a bit easier to manage, if you never spend a few hours to learn Software Raid Tools.

Depends. Some hardware RAID cards aren't that easy to manage, and sometimes they won't let you do some things that software will. I've run into situations where a RAID controller kicked out two perfectly good drives from a RAID 5 and would NOT accept them back. All data lost, and it would not be convinced to restart without formatting the drives first. arg! With Linux kernel sw RAID, I've had a similar problem pop up, and was able to make the RAID array take the drives back. Of course, this means that software RAID relies on you not being stupid, because it will let you do things that are dangerous / stupid. I found the raidtools on linux to be well thought out and fairly easy to use.

+ There are situations in which Software Raids are faster, as CPU power has advanced dramatically in the last years and even high end controller cards cannot keep up with that.

The only times I've found software RAID to be faster was against the hybrid hardware / software type RAID cards (i.e. the cheapies) or OLDER RAID cards that have a 33 MHz coprocessor or such. Most modern RAID controllers have coprocessors running at several hundred MHz or more, and can compute parity and manage the array as fast as the attached I/O can handle it. The one thing a software RAID will never be able to match the hardware RAID controller on is battery backed cache.

+ Using SATA drives is always a bit of risk, as some drives are lying about whether they are caching or not.

This is true whether you are using hardware RAID or not. Turning off drive caching seems to prevent the problem. However, with a RAID controller, the caching can then be moved to the BBU cache, while with software RAID no such option exists. Most SATA RAID controllers turn off the drive cache automagically, like the Escalades seem to do.

+ Using hardware controllers, the array becomes locked to a particular vendor. You can't switch controller vendors as the array meta information is stored proprietary. In case the Raid is broken to a level the controller can't recover automatically this might complicate manual recovery by specialists.

And not just a particular vendor, but likely a particular model and even firmware revision. For this reason, any 24/7 server should have two RAID controllers of the same brand running identical arrays, then have them set up as a mirror across the controllers, assuming you have controllers that can run cooperatively. This setup ensures that even if one of your RAID controllers fails, you then have a fully operational RAID array for as long as it takes to order and replace the bad controller. And having a third as a spare in a cabinet somewhere is cheap insurance as well.

+ Even battery backed controllers can't guarantee that data written to the drives is consistent after a power outage, neither that the drive does not corrupt something during the involuntary shutdown / power irregularities. (This is theoretical as any server will be UPS backed)

This may be theoretically true, but all the battery backed cache units I've used have brought the array up clean every time the power has been lost to them. And a UPS is no insurance against loss of power. Cascading power failures are not uncommon when things go wrong.

Now, here's my take on SW versus HW in general: HW is the way to go for situations where a battery backed cache is needed. Heavily written / updated databases are in this category.

Software RAID is a perfect match for databases with a low write to read ratio, or where you won't be writing enough for write performance to be a big issue. Many data warehouses fall into this category. In this case, a JBOD enclosure with a couple of dozen drives and software RAID gives you plenty of storage for chicken feed. If the data is all derived from outside sources, then you can turn on the write cache in the drives and turn off fsync, and it will be plenty fast, just not crash safe.

---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Exporting postgres query to CSV
On Tue, 09 May 2006 10:58:07 -0400 Ryan Suarez [EMAIL PROTECTED] wrote:

Greetings, I am running postgres 7.4.7 on debian sarge. I need to run an SQL query and store the results in a file. The format needs to be comma separated values (CSV), so I can import this later in Excel. Any ideas on how to accomplish this? much appreciated, Ryan

Ryan,

Two tips. First:

psql -U <PGSQL USER> -o <OUTPUT FILE NAME> --pset format=unaligned --pset fieldsep=',' -c '<SQL COMMAND HERE>' -d <DATABASE NAME HERE>

I think that will give you the output you were after, assuming you're scripting psql and that you replace the values in <> with appropriate values. The syntax is slightly different from the psql command line. Second: man psql is your friend.

John Purser

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
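One caveat with the unaligned-output approach above: psql just joins the fields with the separator, so a value that itself contains a comma produces a broken CSV row. A small sketch of a post-processing filter that adds RFC-4180-style quoting (the `csvify` helper and the `mydb`/`items` names are made up for illustration; note that 7.4 predates the `COPY ... CSV` support added in 8.0, so quoting has to happen client-side):

```shell
# csvify: turn psql's unaligned, pipe-separated output into CSV,
# quoting any field that contains a comma or a double quote.
csvify() {
  awk -F'|' '{
    line = ""
    for (i = 1; i <= NF; i++) {
      f = $i
      # CSV rule: double embedded quotes, then wrap the field in quotes
      if (f ~ /[",]/) {
        gsub(/"/, "\"\"", f)
        f = "\"" f "\""
      }
      line = line (i > 1 ? "," : "") f
    }
    print line
  }'
}

# Hypothetical usage against a live server (placeholder names):
#   psql -d mydb -A -t -F'|' -c 'SELECT id, name FROM items' | csvify > items.csv

# Demonstration on canned input:
printf '1|plain\n2|has,comma\n' | csvify
```

Fields that contain embedded newlines still won't survive this line-based filter; as noted later in the thread, XML (or a proper client library) is the safer route for such data.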
Re: [GENERAL] pg_dump and grants to PUBLIC
On Mon, 2006-08-05 at 16:32 -0700, Joshua D. Drake wrote:

Hello, What version of PostgreSQL is this that you are using? Because it

]# rpm -qi postgresql
Name        : postgresql                  Relocations: (not relocateable)
Version     : 7.3.4                       Vendor: Red Hat, Inc.
Release     : 3.rhl9                      Build Date: Tue 04 Nov 2003 13:45:53 MST
Install Date: Mon 29 Dec 2003 15:52:53 MST  Build Host: porky.devel.redhat.com
Group       : Applications/Databases      Source RPM: postgresql-7.3.4-3.rhl9.src.rpm
Size        : 6332200                     License: BSD
Signature   : DSA/SHA1, Tue 11 Nov 2003 15:48:30 MST, Key ID 219180cddb42a60e
Packager    : Red Hat, Inc. http://bugzilla.redhat.com/bugzilla
URL         : http://www.postgresql.org/
Summary     : PostgreSQL client programs and libraries.
Description :
PostgreSQL is an advanced Object-Relational database management system (DBMS) that supports almost all SQL constructs, including transactions, subselects, and user-defined types and functions. The postgresql package includes the client programs and libraries that you need to access a PostgreSQL DBMS server. These PostgreSQL client programs are programs that directly manipulate the internal structure of PostgreSQL databases on a PostgreSQL server. These client programs can be located on the same machine with the PostgreSQL server, or may be on a remote machine which accesses a PostgreSQL server over a network connection. This package contains the client libraries for C and C++, as well as command-line utilities for managing PostgreSQL databases on a PostgreSQL server. If you want to manipulate a PostgreSQL database on a remote PostgreSQL server, you need this package. You also need to install this package if you are installing the postgresql-server package.

[EMAIL PROTECTED]:~$ psql -U test2 test2;
Welcome to psql 8.1.3, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

test2= create table foo2(id bigserial);
NOTICE: CREATE TABLE will create implicit sequence foo2_id_seq for serial column foo2.id
CREATE TABLE

Now you need to grant that table to PUBLIC at this point. In psql:

GRANT ALL ON test2 TO PUBLIC;

test2= \q

[EMAIL PROTECTED]:~$ pg_dump test1;
--
-- PostgreSQL database dump
--

SET client_encoding = 'UTF8';
SET check_function_bodies = false;
SET client_min_messages = warning;

--
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
--

COMMENT ON SCHEMA public IS 'Standard public schema';

SET search_path = public, pg_catalog;

SET default_tablespace = '';

SET default_with_oids = false;

--
-- Name: foo; Type: TABLE; Schema: public; Owner: test1; Tablespace:
--

CREATE TABLE foo (
    id bigserial NOT NULL
);

ALTER TABLE public.foo OWNER TO test1;

--
-- Name: foo_id_seq; Type: SEQUENCE SET; Schema: public; Owner: test1
--

SELECT pg_catalog.setval(pg_catalog.pg_get_serial_sequence('foo', 'id'), 1, false);

--
-- Data for Name: foo; Type: TABLE DATA; Schema: public; Owner: test1
--

COPY foo (id) FROM stdin;
\.

--
-- Name: public; Type: ACL; Schema: -; Owner: postgres
--

REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO PUBLIC;

--
-- PostgreSQL database dump complete
--

[EMAIL PROTECTED]:~$

---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] Encoding Conversion
Hello All. So I have an old database that is SQL_ASCII encoded. For a variety of reasons I need to convert the database to UNICODE. I did some googling on this but have yet to find anything that looked like a viable option, so I thought I'd post to the group and see what sort of advice might arise. :) TIA -b

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
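The usual route for this kind of conversion is dump, transcode, restore. A hedged sketch, not from the thread: it assumes the bytes stored in the SQL_ASCII database are really ISO-8859-1 (adjust `-f` to whatever they actually are), and `olddb`/`newdb` are placeholder names. Only the transcoding step is demonstrated live below; the server-side commands are shown as comments.

```shell
# The transcoding step on its own: ISO-8859-1 0xE9 ("é") becomes
# the two-byte UTF-8 sequence 0xC3 0xA9.
transcode() { iconv -f ISO-8859-1 -t UTF-8; }

# Hypothetical end-to-end pipeline (placeholder database names):
#   pg_dump olddb | transcode > olddb.utf8.sql
#   createdb -E UNICODE newdb
#   psql -d newdb -f olddb.utf8.sql

printf 'caf\351\n' | transcode   # "café" in a UTF-8 terminal
```

Since SQL_ASCII performs no encoding validation, the dump can contain a mix of encodings; rows that iconv rejects have to be fixed by hand before the restore will load cleanly.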
Re: [GENERAL] What is your favorite front end for user interaction to postgresql databases?
On 09.05.2006, at 16:31 Uhr, Steve Atkins wrote: Is that actually true? My understanding was that under the most recent license changes it was not possible to deploy it to any platform other than XServe. Wrong. You are allowed to deploy on any platform you like, but only Mac OS X Server is officially supported by Apple. Please note that this is for 5.3.1 - there was a bad license formulation in 5.3 which actually did not allow the deployment. For 5.3.1 this was corrected. The issue came up when WebObjects was released free as part of the Xcode tools and a new license was necessary which wasn't well written. Technically possible, sure, but a violation of the license. Nope. Cliff Tuel of Apple clarified this on the WO mailing lists. cug ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Arguments Pro/Contra Software Raid
Don't buy those drives. That's unrelated to whether you use hardware or software RAID. Sorry that is an extremely misleading statement. SATA RAID is perfectly acceptable if you have a hardware raid controller with a battery backup controller. And dollar for dollar, SCSI will NOT be faster nor have the hard drive capacity that you will get with SATA. Sincerely, Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] pg_dump and grants to PUBLIC
On Tue, 2006-09-05 at 13:19 +1200, Brent Wood wrote:

On Mon, 8 May 2006, Blair Lowe wrote:

Hi, I have had this problem for a while, and have not been able to find anything in the archives or on search engines: If I want to back up a client's database on our shared web server, I would type: pg_dump database_name

Thanks Brent, comments below ...

try pg_dump -t table to just get the tables you want exported.

Good idea, but too many clients, too many tables and too little time :)

or implement a separate schema (not public) for the tables your app uses; use pg_dump -n schema to avoid all the public tables in the public schema.

I cannot control what my clients do, and I want to back them all up in separate areas so that they cannot see each other's data in a backup. I like this idea, but I would have to do this for each client, no?

or fire up a new postgres server (postmaster) process at a different port talking to a separate Postgres database location, so other users don't create superfluous tables, etc. in your database. Any application should take a port as an argument in the connect parameter string.

Expensive to run tons of postgres at the same time. The ultimate solution is to run a Xen server so I don't have to worry about any stupid things that my clients or their software packages do.

HTH, Brent Wood

Since we are running a shared server, and since crappy (only because of this problem) off the shelf open source database software such as oscommerce or phpBB2 grants access to PUBLIC rather than to the web user www or nobody, when I do a pg_dump for a database, I get all the databases on the system that grant to PUBLIC being dumped with the database that I want. To restore, I need to go in and prune out all the extra junk that was granted to PUBLIC by other users in other databases - very time consuming. How can I use pg_dump to get JUST the database in the argument, and not other tables and databases that have granted to PUBLIC?
Altering my client's software to grant to nobody is not practical. Thanks in advance, Blair. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] What is your favorite front end for user interaction
On 09.05.2006, at 16:52 Uhr, Reid Thompson wrote: *WebObjects Distribution License $699 per copy* WebObjects, the premier web application server used by hundreds of corporations, is now available for redistribution by web application developers just like you. Upon signature by Apple, the agreement, together with payment for your initial request of licenses, allows you to resell WebObjects license keys, and redistribute the WebObjects deployment runtime and adaptors as part of your web application. The WebObjects runtime includes a powerful object-relational engine for extracting and managing data from virtually any database, without writing a single line of SQL. Its HTML component model makes it a breeze to assemble dynamic, fully customizable web pages. There’s even support for rich Java clients and Web services. This is for WO 5.2.4. WO 5.3.1 (current release) is free for development on Mac OS X and free for deployment on any chosen platform. cug ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] pg_dump and grants to PUBLIC
Blair Lowe [EMAIL PROTECTED] writes:

On Mon, 2006-08-05 at 16:32 -0700, Joshua D. Drake wrote: What version of PostgreSQL is this that you are using? Because it

]# rpm -qi postgresql
Name        : postgresql    Relocations: (not relocateable)
Version     : 7.3.4         Vendor: Red Hat, Inc.
              ^^^^^

That does not square with this:

[EMAIL PROTECTED]:~$ psql -U test2 test2;
Welcome to psql 8.1.3, the PostgreSQL interactive terminal.
                  ^^^^^

and the psql session is talking to test2 but you're dumping an unrelated database:

[EMAIL PROTECTED]:~$ pg_dump test1;

I see no evidence here that test1 didn't already have foo in it.

regards, tom lane

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Exporting postgres query to CSV
A word of advice: if there is any chance that a column (e.g. text) contains an embedded newline, you will be much better off outputting the data in simple XML instead of CSV. This works very well with Excel for import. I just did a simple program for this recently.

Susan

Ryan Suarez wrote:

Greetings, I am running postgres 7.4.7 on debian sarge. I need to run an SQL query and store the results in a file. The format needs to be comma separated values (CSV), so I can import this later in Excel. Any ideas on how to accomplish this? much appreciated, Ryan

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] pg_dump and grants to PUBLIC
On Tue, 2006-09-05 at 12:25 -0400, Tom Lane wrote:

Blair Lowe [EMAIL PROTECTED] writes: On Mon, 2006-08-05 at 16:32 -0700, Joshua D. Drake wrote: What version of PostgreSQL is this that you are using? Because it ]# rpm -qi postgresql Name: postgresql Relocations: (not relocateable) Version : 7.3.4 Vendor: Red Hat, Inc.

That does not square with this: [EMAIL PROTECTED]:~$ psql -U test2 test2; Welcome to psql 8.1.3, the PostgreSQL interactive terminal. ... and the psql session is talking to test2 but you're dumping an unrelated database:

Please read the thread: that was from Joshua, who was testing on 8.1.3, not me. I am running 7.3.4. In my test I do not see stuff2 either. The problem here is that I have sensitive production data, so my tests are hard to read and cannot be submitted here. So how do I recreate this template thing without killing production data?

TTYL, Blair.

---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] pg_dump and grants to PUBLIC
Tom Lane wrote: Blair Lowe [EMAIL PROTECTED] writes: On Mon, 2006-08-05 at 16:32 -0700, Joshua D. Drake wrote: What version of PostgreSQL is this that you are using? Because it ]# rpm -qi postgresql Name: postgresql Relocations: (not relocateable) Version : 7.3.4 Vendor: Red Hat, Inc. ^ That does not square with this: [EMAIL PROTECTED]:~$ psql -U test2 test2; Welcome to psql 8.1.3, the PostgreSQL interactive terminal. ^ and the psql session is talking to test2 but you're dumping an unrelated database: [EMAIL PROTECTED]:~$ pg_dump test1; I see no evidence here that test1 didn't already have foo in it. regards, tom lane Tom, you are commenting on my example of why his doesn't make sense :). I used 8.1.3 to test his theory, but he is running 7.3. Sincerely, Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] InitBufferPoolAccess crash
I have a 7.3.4 cluster that just sigsegv'd. I know an upgrade is desperately needed. In the meantime, does the following gdb output provide any clues as to what we might be able to do to nurse it along until we upgrade?

Thanks, Ed

PostgreSQL 7.3.4 on hppa2.0w-hp-hpux11.00, compiled by GCC gcc (GCC) 3.2.2

(gdb) core-file core
Core was generated by `postgres'.
Program terminated with signal 11, Segmentation fault.
warning: The shared libraries were not privately mapped; setting a
breakpoint in a shared library will not work until you rerun the program.
(no debugging symbols found)...(no debugging symbols found)...
(no debugging symbols found)...(no debugging symbols found)...
(no debugging symbols found)...(no debugging symbols found)...
(no debugging symbols found)...
#0  0x12d288 in InitBufferPoolAccess ()
   from /opt/pgsql/installs/postgresql-7.3.4/bin/postmaster
(gdb) bt
#0  0x12d288 in InitBufferPoolAccess ()
   from /opt/pgsql/installs/postgresql-7.3.4/bin/postmaster
Error accessing memory address 0x0: Invalid argument.

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Arguments Pro/Contra Software Raid
On May 9, 2006, at 8:51 AM, Joshua D. Drake wrote: (Using SATA drives is always a bit of risk, as some drives are lying about whether they are caching or not.) Don't buy those drives. That's unrelated to whether you use hardware or software RAID. Sorry that is an extremely misleading statement. SATA RAID is perfectly acceptable if you have a hardware raid controller with a battery backup controller. If the drive says it's hit the disk and it hasn't then the RAID controller will have flushed the data from its cache (or flagged it as correctly written). At that point the only place the data is stored is in the non battery backed cache on the drive itself. If something fails then you'll have lost data. You're not suggesting that a hardware RAID controller will protect you against drives that lie about sync, are you? And dollar for dollar, SCSI will NOT be faster nor have the hard drive capacity that you will get with SATA. Yup. That's why I use SATA RAID for all my databases. Cheers, Steve ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] simple md5 authentication problems
Bruno Wolff III wrote: On Mon, May 08, 2006 at 23:10:31 +0900, kmh496 [EMAIL PROTECTED] wrote: doesn't that user have to exist since you are using ident method? that means unix username == postgres username. do you have a user named maragato_test on the system? did you create that user in postgres and on the system? Note that you can create custom mappings for which the unix user is not the same as the postgres user. I don't want to use ident and the unix user name. Let me try and simplify my question: 1) Isn't the user 'postgres' pre-configured? Running this seems to imply so: 'select datname from pg_database;' datname --- postgres 2) Is there a way to use this user 'postgres' with a non-root unix account _not_ named postgres? I just want _any_ method - md5, ident, whatever, that allows access to my db with user 'postgres' from an account called myuser1, myuser2, and myuser3. Tomorrow it might be myuser4. 3) I'm willing to try and use custom mappings if that's the easiest way to solve my problem. Thanks for the help, Robert
Re: [GENERAL] Google Summer of Code: Full Disjunctions
First, I have no knowledge of anyone that has implemented full disjunctions (ever) aside from the theoretical works of my colleagues, with the exception of a corner case of it that I believe was a simulation in '96 (A. Rajaraman and J.D. Ullman, Integrating information by outerjoins and full-disjunctions). I'd love to hear about any implementation out there (aside from my colleagues' work, which is mine also: Cohen, Sagiv, Kimelfeld, Kanza). I didn't mean to imply there was. It was the Rajaraman/Ullman paper that got me interested in FDs, and then I looked at the Computing Full Disjunctions paper by Kanza and Sagiv, which gives a general solution. Obviously from the second paper it's clear that implementing full disjunction (efficiently) is a non-trivial exercise. It can never be a binary operation since at the heart of the matter is that you need to take each subset of the relations and join them. i.e.: ... Usually binary operations allow for a bottom-up computation approach, but FD is a top-down approach (Galindo-Legaria, C., Outerjoins as disjunctions). Right, thanks for clarifying. From a data analysis perspective I would like to be able to look at various subsets, e.g. FD(A,B,C), FD(B,C,D), FD(A,B,C,D) etc., so this just means that each subset has to be computed independently. I can live with that but wasn't sure if I had missed something. In any case, the difficulty of implementing FD precludes me from experimenting with it just yet. Regards Lee
[GENERAL] default client_encoding with psql on windows
hi all, I'm trying to solve a little problem with PostgreSQL 8.1.3 on Windows with UTF8. I read all the documentation related to psql on Windows, and I switched my cmd.exe encoding to codepage 1252 with the right font. When I connect to my UTF8 database with psql (from my Windows cp1252 terminal) I have some problems: \d produces an error, ERROR: invalid UTF-8 byte sequence detected near byte 0xe9, because my client_encoding is set to UTF8. So I set my encoding to win1252 with \encoding win1252 and it works better. I would like to set this by default, but for psql only (all other programs speak UTF8). I tried various things with the --set option of psql without success, like --set client_encoding=win1252 or --set CLIENT_ENCODING=win1252 or --set encoding=win1252. I think my syntax is right but it doesn't work. This part of the man page doesn't sound really good to me (concerning the --set option): These assignments are done during a very early stage of start-up, so variables reserved for internal purposes might get overwritten later. But you are my last chance before reading the source code, so please tell me if you have any clue :) Thomas
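[Editor's note, not from the thread: two other knobs may be worth trying here - libpq reads the PGCLIENTENCODING environment variable, and psql executes a startup file (psqlrc) in which an \encoding command can live. A sketch for cmd.exe; the database name is a placeholder:]

```
:: cmd.exe -- affects only clients started from this shell
set PGCLIENTENCODING=WIN1252
psql -d mydb
```

Alternatively, put the single line `\encoding win1252` in psql's startup file (psqlrc / psqlrc.conf), which affects psql only and leaves other UTF8-speaking programs alone.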
[GENERAL] How to allow non-local to postgreSQL
Good Day. I just want to ask how to allow a non-local connection in PostgreSQL without adding it in pg_hba.conf. I just want to view a desktop application on the Web. Yes, it works, but you need to allow those IP addresses that are going to connect to PostgreSQL. What I want is for everyone on the net to be able to access the said desktop application. Can you help me on how to configure it? Thank you very much.
Re: [GENERAL] Unify distant Postgres databases
Each distant database works on its own domain of data. Then no conflict should happen during updates. One thing I have not specified is that the distant databases don't handle global data but only data collected at the local level. Slony-I seems not to provide replication from multiple partial databases to one global database. But maybe I'm wrong... Can you tell me more about this use of Slony? Hugues -----Original Message----- From: Scott Marlowe [mailto:[EMAIL PROTECTED] Sent: Friday, May 5, 2006 19:10 To: Houssais Hugues Cc: pgsql general Subject: Re: [GENERAL] Unify distant Postgres databases On Fri, 2006-05-05 at 04:21, Houssais Hugues wrote: Hi, We desire to implement a multi-site server that unifies data from distant Postgres databases in a nightly batch. The distant databases all have the same architecture (schema). The size of data exchanged between the distant servers and the multi-site manager has to be reduced to the strictly useful data. We naturally became interested in WAL archiving (PITR). But after a deep analysis of this technique, we still encounter problems. The main problem is unifying the data from many databases in a common database. Has anyone experienced a solution to this problem... maybe not with WAL? Are you talking a big multi-way setup? That's rather complex, and resolution of conflicting updates can keep a DBA busy full time in a poorly thought out setup. OTOH, if you're looking at having one or more one-way pushes in your setup, you might want to look at using Slony. There are a lot of ways you can set it up, depending on your needs. Got a bit more detail on what you're wanting to do?
Re: [GENERAL] pg_dump and grants to PUBLIC
On Mon, 8 May 2006, Blair Lowe wrote: Hi, I have had this problem for a while, and have not been able to find anything in the archives or on search engines: If I want to back up a client's database on our shared web server, I would type: pg_dump database_name Try pg_dump -t table to just get the tables you want exported. Or implement a separate schema (not public) for the tables your app uses and use pg_dump -n schema to avoid all the tables in the public schema. Or fire up a new postgres server (postmaster) process at a different port talking to a separate Postgres database location, so other users don't create superfluous tables, etc. in your database. Any application should take a port as an argument in the connect parameter string. HTH, Brent Wood Since we are running a shared server, and since crappy (only because of this problem) off-the-shelf open source database software such as osCommerce or phpBB2 grants access to PUBLIC rather than to the web user www or nobody, when I do a pg_dump for a database, I get all the databases on the system that grant to PUBLIC being dumped with the database that I want. To restore, I need to go in and prune out all the extra junk that was granted to PUBLIC by other users in other databases - very time consuming. How can I use pg_dump to get JUST the database in the argument, and not other tables and databases that have granted to PUBLIC? Altering my client's software to grant to nobody is not practical. Thanks in advance, Blair.
Re: [GENERAL] Arguments Pro/Contra Software Raid
On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote: Sorry that is an extremely misleading statement. SATA RAID is perfectly acceptable if you have a hardware raid controller with a battery backup controller. And dollar for dollar, SCSI will NOT be faster nor have the hard drive capacity that you will get with SATA. Does this hold true still under heavy concurrent-write loads? I'm preparing yet another big DB server and if SATA is a better option, I'm all (elephant) ears.
Re: [GENERAL] Arguments Pro/Contra Software Raid
Vivek Khera [EMAIL PROTECTED] writes: On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote: And dollar for dollar, SCSI will NOT be faster nor have the hard drive capacity that you will get with SATA. Does this hold true still under heavy concurrent-write loads? I'm preparing yet another big DB server and if SATA is a better option, I'm all (elephant) ears. Correct me if I'm wrong, but I've never heard of a 15kRPM SATA drive. -Doug
Re: [GENERAL] pg_restore duplicate key violations
On 05/08/2006 06:42:18 PM, Tom Lane wrote: Karl O. Pinc [EMAIL PROTECTED] writes: I'm having trouble with a dump and restore: Um ... it looks to me like you're trying to restore into an existing table that already has the same data loaded ... Thanks everybody, the problem was in the schemas and my default search path. Somehow when upgrading to 8.1.3 I wound up with the public schema put back into all my databases, and my search paths of the databases set back to the default $user,public. I can see how the public schema got there, this time I did not delete the public schema from template1. I'm not so sure about the search path. Obviously, I did not do my pg_restore properly when reloading databases. Karl [EMAIL PROTECTED] Free Software: You don't pay back, you pay forward. -- Robert A. Heinlein
Re: [GENERAL] [PERFORM] Arguments Pro/Contra Software Raid
Vivek Khera wrote: On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote: Sorry that is an extremely misleading statement. SATA RAID is perfectly acceptable if you have a hardware raid controller with a battery backup controller. And dollar for dollar, SCSI will NOT be faster nor have the hard drive capacity that you will get with SATA. Does this hold true still under heavy concurrent-write loads? I'm preparing yet another big DB server and if SATA is a better option, I'm all (elephant) ears. I didn't say better :). If you can afford it, SCSI is the way to go. However SATA with a good controller (I am fond of the LSI 150 series) can provide some great performance. I have not used, but have heard good things about Areca as well. Oh, and make sure they are SATA-II drives. Sincerely, Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/
[GENERAL] A better AND query?
I'm trying to generate a query that will handle tags matching in a database. The simplified structure is create table contacts ( id serial primary key, name varchar ); create table books ( id serial primary key, name varchar ); create table tags ( id serial primary key, name varchar ); create table taggings ( tag_id int, tagged_id int, tagged_type int -- points to the table this tag is tagging ); What I want to now achieve is to find all items that are tagged with the same set of tags. So it's an AND matching on a list of tags I have. I have two types of matching. One is within the same object type (where both tagged objects are the same, say two books with the same set of tags) and one that will find ANY object that's tagged with the same tag (like book and contact) Current query (for the same object type) I am using is the following, for a list of 4 tags called summer, winter, spring and fall. SELECT * FROM contacts WHERE 4 = ( SELECT COUNT(*) FROM tags, taggings WHERE tags.id = taggings.tag_id AND lower(tags.name) IN ( 'summer' , 'winter', 'spring', 'fall' ) AND taggings.tagged_type = 1 AND taggings.tagged_id = contacts.id); The query to match all the objects tagged with a given set of tags is: SELECT DISTINCT taggings.tagged_id, taggings.tagged_type FROM taggings WHERE 4 = ( SELECT COUNT(*) FROM tags, taggings as taggings2 WHERE tags.id = taggings2.tag_id AND lower(tags.name) IN ( 'summer' , 'winter', 'spring', 'fall' ) AND taggings.tagged_type = taggings2.tagged_type AND taggings.tagged_id = taggings2.tagged_id ); The idea in both is to see that I find the number of tags needed. I've attached a script that will create the tables, insert some data and run the queries to make it easy to try it. Is there a way to simplify this query and make it more efficient? Thanks! Guy. -- Family management on rails: http://www.famundo.com - coming soon! 
My development related blog: http://devblog.famundo.com drop table taggings; drop table contacts; drop table books; drop table tags; create table contacts ( id serial primary key, name varchar ); create table books ( id serial primary key, name varchar ); create table tags ( id serial primary key, name varchar ); create table taggings ( tag_id int, tagged_id int, tagged_type int -- points to the table this tag is tagging ); insert into contacts (id,name) VALUES (1,'guy'); insert into contacts (id,name) VALUES (2,'michal'); insert into contacts (id,name) VALUES (3,'gal'); insert into contacts (id,name) VALUES (4,'noa'); insert into contacts (id,name) VALUES (5,'edo'); insert into books (id,name) VALUES (1,'B1'); insert into books (id,name) VALUES (2,'B2'); insert into books (id,name) VALUES (3,'B3'); insert into books (id,name) VALUES (4,'B4'); insert into books (id,name) VALUES (5,'B5'); insert into tags (id,name) values (1,'summer'); insert into tags (id,name) values (2,'winter'); insert into tags (id,name) values (3,'spring'); insert into tags (id,name) values (4,'fall'); insert into tags (id,name) values (5,'sea'); insert into tags (id,name) values (6,'beach'); insert into taggings (tag_id,tagged_id,tagged_type) values (1,1,1); insert into taggings (tag_id,tagged_id,tagged_type) values (2,1,1); insert into taggings (tag_id,tagged_id,tagged_type) values (3,1,1); insert into taggings (tag_id,tagged_id,tagged_type) values (4,1,1); insert into taggings (tag_id,tagged_id,tagged_type) values (1,2,1); insert into taggings (tag_id,tagged_id,tagged_type) values (2,2,1); insert into taggings (tag_id,tagged_id,tagged_type) values (3,2,1); insert into taggings (tag_id,tagged_id,tagged_type) values (4,2,1); insert into taggings (tag_id,tagged_id,tagged_type) values (5,2,1); insert into taggings (tag_id,tagged_id,tagged_type) values (1,3,1); insert into taggings (tag_id,tagged_id,tagged_type) values (4,3,1); insert into taggings (tag_id,tagged_id,tagged_type) values (1,4,1); 
insert into taggings (tag_id,tagged_id,tagged_type) values (2,4,1); insert into taggings (tag_id,tagged_id,tagged_type) values (3,4,1); insert into taggings (tag_id,tagged_id,tagged_type) values (4,4,1); insert into taggings (tag_id,tagged_id,tagged_type) values (1,2,2); insert into taggings (tag_id,tagged_id,tagged_type) values (2,2,2); insert into taggings (tag_id,tagged_id,tagged_type) values (3,2,2); insert into taggings (tag_id,tagged_id,tagged_type) values (4,2,2); insert into taggings (tag_id,tagged_id,tagged_type) values (1,1,2); insert into taggings (tag_id,tagged_id,tagged_type) values (4,1,2); -- Find all items tagged with the same set of tags SELECT DISTINCT taggings.tagged_id, taggings.tagged_type FROM taggings WHERE 4= ( SELECT COUNT(*) FROM tags, taggings as taggings2 WHERE tags.id = taggings2.tag_id AND lower(tags.name) IN ( 'summer' , 'winter', 'spring', 'fall' ) AND taggings.tagged_type = taggings2.tagged_type AND taggings.tagged_id =
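[Editor's note, not from the thread: a common alternative to the correlated COUNT subqueries above is a join plus GROUP BY ... HAVING, which counts the matched tags once per tagged object. A minimal sketch, run against SQLite purely so it is self-contained; the table layout and sample rows mirror the attached script.]

```python
# AND-tag matching via GROUP BY / HAVING instead of a correlated subquery.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
create table tags (id integer primary key, name varchar);
create table taggings (tag_id int, tagged_id int, tagged_type int);
insert into tags values (1,'summer'),(2,'winter'),(3,'spring'),(4,'fall');
-- object (1, type 1) carries all four tags; object (3, type 1) only two
insert into taggings values (1,1,1),(2,1,1),(3,1,1),(4,1,1),(1,3,1),(4,3,1);
""")

# Keep only the (tagged_id, tagged_type) pairs that carry EVERY requested
# tag: join to the tag names, group per object, and demand the full count.
rows = cur.execute("""
SELECT taggings.tagged_id, taggings.tagged_type
FROM taggings JOIN tags ON tags.id = taggings.tag_id
WHERE lower(tags.name) IN ('summer','winter','spring','fall')
GROUP BY taggings.tagged_id, taggings.tagged_type
HAVING COUNT(DISTINCT tags.id) = 4
""").fetchall()
print(rows)  # -> [(1, 1)]
```

The same SELECT runs unchanged on PostgreSQL; whether it beats the correlated-subquery form there is a question for EXPLAIN ANALYZE on real data.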
Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid
You're not suggesting that a hardware RAID controller will protect you against drives that lie about sync, are you? Of course not, but which drives lie about sync that are SATA? Or more specifically SATA-II? Sincerely, Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/
Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid
On May 9, 2006, at 11:26 AM, Joshua D. Drake wrote: You're not suggesting that a hardware RAID controller will protect you against drives that lie about sync, are you? Of course not, but which drives lie about sync that are SATA? Or more specifically SATA-II? SATA-II, none that I'm aware of, but there's a long history of dodgy behaviour designed to pump up benchmark results down in the consumer drive space, and the low-end consumer space is where a lot of SATA drives are. I wouldn't be surprised to see that behaviour there still. I was responding to the original poster's assertion that drives lying about sync were a reason not to buy SATA drives, by telling him not to buy drives that lie about sync. You seem to have read this as don't buy SATA drives, which is not what I said and not what I meant. Cheers, Steve
Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid
Douglas McNaught wrote: Vivek Khera [EMAIL PROTECTED] writes: On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote: And dollar for dollar, SCSI will NOT be faster nor have the hard drive capacity that you will get with SATA. Does this hold true still under heavy concurrent-write loads? I'm preparing yet another big DB server and if SATA is a better option, I'm all (elephant) ears. Correct me if I'm wrong, but I've never heard of a 15kRPM SATA drive. Best I have seen is 10k but if I can put 4x the number of drives in the array at the same cost... I don't need 15k. Joshua D. Drake -Doug -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/
Re: [GENERAL] How to allow non-local to postgreSQL
leo camalig wrote: Good Day I Just want to ask on how to allow a non-local in PostgreSQL without adding it in pg-hba.conf You don't. pg_hba.conf is the place where you specify who can connect from where to which database using what authentication method. Your only alternative to using pg_hba.conf would be to allow access from anywhere in pg_hba.conf, and use the firewalling capabilities of your OS to restrict access. I just want to view a desktop application on Web, yes it works, but you need to allow those I.P. addresses that will gonna connect to postgreSQL. What I want is everyone in the net can access the said desktop application. Can you help me on how to configure it. You can add whole subnets to pg_hba.conf - that's the reason why there is a host and a netmask field ;-) To allow, e.g., anyone from the IP range 192.168.0.0-192.168.0.255, you'd use a line containing 192.168.0.0 255.255.255.0. To allow connections from anywhere, use 0.0.0.0 0.0.0.0. greetings, Florian Pflug
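[Editor's note: spelled out as full pg_hba.conf lines, those examples might look like the sketch below. The database, user, and auth method columns are placeholders - pick a real auth method; a wide-open range plus trust would let anyone in.]

```
# TYPE  DATABASE  USER  IP-ADDRESS   IP-MASK        METHOD
host    all       all   192.168.0.0  255.255.255.0  md5
# or, to accept connections from anywhere (use a strong auth method!):
host    all       all   0.0.0.0      0.0.0.0        md5
```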
Re: [GENERAL] A better AND query?
tagged_type int -- points to the table this tag is tagging My head exploded right about here. Is the schema written in stone, or can it change? What is the use case for this schema? What's it for? What is a tag about? Best Regards, Wayne Conrad
Re: [GENERAL] How to allow non-local to postgreSQL
On Mon, May 08, 2006 at 18:37:31 -0700, leo camalig [EMAIL PROTECTED] wrote: Good Day I Just want to ask on how to allow a non-local in PostgreSQL without adding it in pg-hba.conf That isn't possible. You need to grant the access using that file. I just want to view a desktop application on Web, yes it works, but you need to allow those I.P. addresses that will gonna connect to postgreSQL. What I want is everyone in the net can access the said desktop application. Can you help me on how to configure it. Can you clarify what you are really asking? The internet is not the web. Are you asking if people can run desktop applications that connect directly to your database server from anywhere? If so, the answer is that it's possible, but you probably don't really want to do that. If you are asking if you can have people run an application using a web browser where the application runs on a server you control, that is also possible and is generally a better way to do things.
Re: [GENERAL] pg_dump and grants to PUBLIC
On Tue, May 09, 2006 at 10:52:32AM -0600, Blair Lowe wrote: In my test I do not see stuff2 either. The problem here is that I have sensitive production data, so my tests are hard to read, and not able to be submitted here. You don't need to show any data, just the schema will be enough. An example you could show us would be something like below. Replace 'mydatabase' with a database and 'sometable' with a table name you know is not in 'mydatabase' and so should not be in the dump but you say is because it's in some other database. $ psql mydatabase psql version x.x.x mydatabase select oid from pg_class where relname = 'sometable'; oid --- (0 rows) mydatabase \q $ pg_dump -s mydatabase | grep 'CREATE.*sometable' show us the output here If it turns out it is in template1, you can fix this without deleting any production data. Easiest is just to log in and delete the stuff, though you can recreate it using the steps in the docs. Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] simple md5 authentication problems
On Mon, May 08, 2006 at 02:10:02PM -0700, robert wrote: 1) Isn't the user 'postgres' pre-configured? Running this seems to imply so: 'select datname from pg_database;' datname --- postgres This demonstrates a *database* named postgres. Users are in the pg_user table. 2) Is there a way to use this user 'postgres' with a non-root unix account _not_ named postgres? I just want _any_ method - md5, ident, whatever, that allows access to my db with user 'postgres' from an account called myuser1, myuser2, and myuser3. Tomorrow it might be myuser4. Absolutely, though the question is obviously why. It's a superuser account; you can create more of them if you like with createuser. If you want to use md5, set up a line in pg_hba.conf for md5 auth from wherever you're logging in (reload the postmaster). For this to work you might need to ALTER USER postgres WITH PASSWORD 'blah' to set the password. If you want to use ident (no password), set up pg_hba.conf for ident using a mapname. You say ident mapname there. Then in pg_ident.conf set up the mapping from system username to postgres username. Reload the postmaster. You can use trust if you're desperate. Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
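[Editor's note: concretely, the ident-map route described above might look like the sketch below. The map name and unix account names are made up, and the pg_hba.conf line uses the 8.1-era "ident mapname" spelling (later releases write "ident map=mymap").]

```
# pg_hba.conf -- local connections authenticated via the "mymap" ident map
local   all   all   ident mymap

# pg_ident.conf:  MAPNAME   SYSTEM-USERNAME   PG-USERNAME
mymap   myuser1   postgres
mymap   myuser2   postgres
mymap   myuser3   postgres
```

Adding myuser4 tomorrow is then one more pg_ident.conf line plus a reload.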
Re: [GENERAL] pgcrypto sha256/384/512 don't work on Redhat. Please help!
On 5/9/06, Marko Kreen [EMAIL PROTECTED] wrote: On 5/9/06, Joe Kramer [EMAIL PROTECTED] wrote: On 5/9/06, Marko Kreen [EMAIL PROTECTED] wrote: The fact that Fedora pgcrypto is linked with OpenSSL that does not support SHA256 is not a bug, just a fact. It's not Fedora only, same problem with Gentoo/portage. I think it's a problem for all distros. You need to recompile pgcrypto or install OpenSSL 0.9.8, which is considered unstable by most distros. Maybe pgcrypto should use built-in algorithms until OpenSSL 0.9.8 is the mainstream/default install. To be honest, pgcrypto actually falls back on built-in code for AES, in case of an old OpenSSL that does not have AES. That's because AES should always be there, together with md5/sha1/blowfish. I do not consider SHA2 that important (yet?), so they don't get the same treatment. Right on! SHA2 should fall back the same as AES! Ideally, it would be great if pgcrypto could fall back to the built-in algorithm if OpenSSL doesn't support it. But since it's a compile switch, a completely self-compiled pgcrypto would be great. Attached is a patch that re-defines SHA2 symbols so that they would not conflict with OpenSSL. Now that I think about it, if your OpenSSL does not contain SHA2, then there should be no conflict. But of course, if someone upgrades OpenSSL, the server starts crashing. So I think it's best to always apply this patch. That was my thought too. Old OpenSSL doesn't have SHA2, so why is SHA2 still blocked in pgcrypto? Is that by design or a bug? Thanks.
Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid
On Tue, 2006-05-09 at 12:52, Steve Atkins wrote: On May 9, 2006, at 8:51 AM, Joshua D. Drake wrote: (Using SATA drives is always a bit of risk, as some drives are lying about whether they are caching or not.) Don't buy those drives. That's unrelated to whether you use hardware or software RAID. Sorry that is an extremely misleading statement. SATA RAID is perfectly acceptable if you have a hardware raid controller with a battery backup controller. If the drive says it's hit the disk and it hasn't then the RAID controller will have flushed the data from its cache (or flagged it as correctly written). At that point the only place the data is stored is in the non battery backed cache on the drive itself. If something fails then you'll have lost data. You're not suggesting that a hardware RAID controller will protect you against drives that lie about sync, are you? Actually, in the case of the Escalades at least, the answer is yes. Last year (maybe a bit more) someone was testing an IDE escalade controller with drives that were known to lie, and it passed the power plug pull test repeatedly. Apparently, the escalades tell the drives to turn off their cache. While most all IDEs and a fair number of SATA drives lie about cache fsyncing, they all seem to turn off the cache when you ask. And, since a hardware RAID controller with bbu cache has its own cache, it's not like it really needs the one on the drives anyway.
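[Editor's note: on Linux the on-drive write cache being discussed can also be inspected and toggled by hand with hdparm. A sketch; the device name is a placeholder, and whether a given drive actually honours the request is exactly the question the thread is debating.]

```
hdparm -W /dev/sda     # query the current write-caching setting
hdparm -W0 /dev/sda    # ask the drive to disable its write cache
```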
[GENERAL] Skip vacuum warning when datallowconn = false and datvacuumxid = datfrozenxid?
Currently the documentation says: A database that is marked datallowconn = false in pg_database is assumed to be properly frozen; the automatic warnings and wraparound protection shutdown do not take such databases into account. Therefore it's up to you to ensure you've correctly frozen a database before you mark it with datallowconn = false. http://www.postgresql.org/docs/8.1/interactive/maintenance.html#VACUUM-FOR-WRAPAROUND When a VACUUM FREEZE is done, does datvacuumxid = datfrozenxid? If so, should VACUUM skip warning about any databases that are datvacuumxid = datfrozenxid and datallowconn = false?
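[Editor's note: the comparison the question proposes can be written directly against the pre-8.2 catalogs, where pg_database still carries both columns (datvacuumxid was removed in 8.2). A sketch:]

```sql
SELECT datname, datallowconn,
       datvacuumxid = datfrozenxid AS looks_frozen
FROM pg_database
WHERE NOT datallowconn;
```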
[GENERAL] pg_dump design problem (bug??)
Postgresql 8.1.3 Hi, I'm wondering if there's a problem with pg_dump --create, or if I'm just missing something. It does not seem to restore things like: ALTER DATABASE foo SET DateStyle TO European; Shouldn't the database that is re-created be like the database that is being dumped? For our purposes we do a pg_dumpall --globals-only and then pg_dumps of each of our databases. We like this because we can restore blobs this way, get a nice compressed and flexible --format=c, and are able to restore individual databases. But there is clearly a problem because we lose the database meta-information like datestyle, timezones and all that other per-db SET stuff. It seems the only way to get that is with a pg_dumpall, and then it's not per-database. What should we do to work around this problem? Should pg_dump/pg_restore have arguments like:

--dbmeta (the default when --create is used)
    Dumps the database's SET options.
--no-dbmeta (the default when --create is not used)
    Does not dump the database's SET options.
--dbowner (the default when --create is used)
    Dumps the database's owner.
--no-dbowner (the default when --create is not used)
    Does not dump the database's owner.

Hummm, for complete control consider the following: maybe pg_dump/pg_restore should augment/replace --data-only --schema-only --create with: --content=ctype[, ...] where ctype=db|dbowner|meta|schema|schemaowner|table|tableowner|data

db           create the database
dbowner      set the database owner as in the dumped db
meta         set the database SETs as in the dumped db
schema       create the schema (not data definitions/table structure) as in the dumped db
schemaowner  set the schema owner as in the dumped db
table        create the table(s) as in the dumped db
tableowner   set the table owners as in the dumped db
data         load the data as in the dumped db

I'd also want to add functions, triggers, views and the other sorts of things that go into databases to the above list, but that's enough for now. Thanks for listening.
Karl [EMAIL PROTECTED] Free Software: You don't pay back, you pay forward. -- Robert A. Heinlein
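[Editor's note: the per-database dump scheme Karl describes might be scripted roughly as below. File and database names are placeholders; this is a sketch of the idea, not a tested backup procedure.]

```
pg_dumpall --globals-only > globals.sql      # roles and other cluster state
pg_dump --format=c --file=mydb.dump mydb     # compressed, selectively restorable

# restore side: globals first, then the archive
psql -f globals.sql -d postgres
pg_restore --create --dbname=postgres mydb.dump
```

As the thread notes, the per-database ALTER DATABASE ... SET settings fall between these two tools.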
Re: [GENERAL] A better AND query?
The schema can change, but I'd rather not. The use case is a web app where you can tag items with tags (many-to-many). There are multiple items you can tag: contacts, schedules, lists, etc... And then you can search and categorize by tags. The standard for this, if you look around the web, is to retrieve the tagged records with any of the tags you select - effectively an OR query. What I'm trying to do is search for items matching multiple tags at the same time - an AND query. So that I can bring up all contacts that are tagged with friends and movie-lovers. Hope that clears it up a bit... Guy. On 5/9/06, Wayne Conrad [EMAIL PROTECTED] wrote: tagged_type int -- points to the table this tag is tagging My head exploded right about here. Is the schema written in stone, or can it change? What is the use case for this schema? What's it for? What is a tag about? Best Regards, Wayne Conrad -- Family management on rails: http://www.famundo.com - coming soon! My development related blog: http://devblog.famundo.com
Re: [GENERAL] Encoding Conversion
beer wrote: Hello All So I have an old database that is SQL_ASCII encoded. For a variety of reasons I need to convert the database to UNICODE. I did some googling on this but have yet to find anything that looked like a viable option, so I thought I'd post to the group and see what sort of advice might arise. :) Well, I recently struggled with the same problem. After a lot of trial and error and reading, it seems that an ASCII-encoded database can't use its client encoding capabilities (set client_encoding to utf8). I think the easiest solution is to do a dump, recreate the database with a proper encoding, and restore the dump. jef peeraer TIA -b
Re: [GENERAL] Encoding Conversion
On May 9, 2006 01:03 pm, jef peeraer [EMAIL PROTECTED] wrote: Well, I recently struggled with the same problem. After a lot of trial and error and reading, it seems that an ASCII encoded database can't use its client encoding capabilities (set client_encoding to utf8). I think the easiest solution is to do a dump, recreate the database with a proper encoding, and restore the dump. You also need to convert any non-ASCII encoded characters present in the dump to UTF-8, prior to restoring it. If you're lucky, and they're all the same (i.e. LATIN1 or something), you can use iconv to easily do that. Remember to change the set client_encoding line in the dump file, too. -- Alan ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
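The conversion step Alan describes is just a byte-level re-encoding; sketched in Python, and assuming the dump really is uniformly LATIN1 (if several encodings are mixed in one dump, no single pass like this works):

```python
# What iconv does for a single-encoding dump: decode the bytes as LATIN1,
# then re-encode them as UTF-8.
latin1_dump = "caf\xe9".encode("latin-1")  # the word 'café' as LATIN1 bytes
utf8_dump = latin1_dump.decode("latin-1").encode("utf-8")

# UTF-8 spends two bytes on the 'é' where LATIN1 used one.
print(len(latin1_dump), len(utf8_dump))
```

The command-line equivalent is `iconv -f LATIN1 -t UTF-8 dump.sql > dump.utf8.sql` (file names here are placeholders), after which the `set client_encoding` line in the dump should be changed to match.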
Re: [GENERAL] pg_dump design problem (bug??)
Karl O. Pinc [EMAIL PROTECTED] writes: I'm wondering if there's a problem with pg_dump --create, or if I'm just missing something. It does not seem to restore things like: ALTER DATABASE foo SET DateStyle TO European; Shouldn't the database that is re-created be like the database that is being dumped? The major reason why pg_dump doesn't touch that stuff is that it wants to be agnostic about the name of the database you are restoring into. I don't see any particular problem with leaving it to pg_dumpall, in any case. pg_dump is already assuming that you've correctly set up cluster-wide state; for example it doesn't create users for you. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] install postgres on usb drive???
Is it possible to install Postgres on a USB drive to run it anywhere? ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] install postgres on usb drive???
On Tue, 9 May 2006 16:54:37 -0400 Rodrigo Cortés [EMAIL PROTECTED] wrote: Is it possible to install Postgres on a USB drive to run it anywhere? A) Possible is a BIG word. So is anywhere. If you could narrow these two down a bit it might help answer your question. B) Why? Postgres is a network-aware service, so your clients can generally attach TO the server FROM anywhere, so why have a portable installation? Not arguing, just trying to understand. John Purser -- Beware the one behind you. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] pg_dump design problem (bug??)
On 05/09/2006 03:47:20 PM, Tom Lane wrote: Karl O. Pinc [EMAIL PROTECTED] writes: I'm wondering if there's a problem with pg_dump --create, or if I'm just missing something. It does not seem to restore things like: ALTER DATABASE foo SET DateStyle TO European; Shouldn't the database that is re-created be like the database that is being dumped? I don't see any particular problem with leaving it to pg_dumpall, in any case. pg_dump is already assuming that you've correctly set up cluster-wide state; for example it doesn't create users for you. Thing is, I don't see the ALTER DATABASE x SET ... as part of a cluster-wide structure, I see it as belonging to a database. (I do see your point as far as database owners go.) The convenient way to backup and restore a single database is to use pg_dump. I could do a pg_dumpall --schema-only and then remove everything not having to do with the specific db I'm interested in when I want to, say, copy a database from one machine to another, but it's a hassle. The SETs make a big difference. I was looking in the wrong schema because I didn't restore my database's search_path properly. Karl [EMAIL PROTECTED] Free Software: You don't pay back, you pay forward. -- Robert A. Heinlein ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] install postgres on usb drive???
On 5/9/06, John Purser [EMAIL PROTECTED] wrote: On Tue, 9 May 2006 16:54:37 -0400 Rodrigo Cortés [EMAIL PROTECTED] wrote: Is it possible to install Postgres on a USB drive to run it anywhere? A) Possible is a BIG word. So is anywhere. If you could narrow these two down a bit it might help answer your question. B) Why? Postgres is a network-aware service, so your clients can generally attach TO the server FROM anywhere, so why have a portable installation? Not arguing, just trying to understand. John Purser -- Beware the one behind you. I'm trying to make a portable development environment for ruby on rails. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] InitBufferPoolAccess crash
Ed L. [EMAIL PROTECTED] writes: from /opt/pgsql/installs/postgresql-7.3.4/bin/postmaster (gdb) bt #0 0x12d288 in InitBufferPoolAccess () from /opt/pgsql/installs/postgresql-7.3.4/bin/postmaster Error accessing memory address 0x0: Invalid argument. Right offhand I'd imagine that the calloc() call failed and returned NULL --- leastwise it's hard to see how else that routine could crash. Could your machine be nearly out of memory? regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] install postgres on usb drive???
A) Possible is a BIG word. So is anywhere. If you could narrow these two down a bit it might help answer your question. Possible means how to do it. Anywhere means a PC with a Windows OS. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] install postgres on usb drive???
On Tue, 9 May 2006 17:06:53 -0400 Rodrigo Cortés [EMAIL PROTECTED] wrote: On 5/9/06, John Purser [EMAIL PROTECTED] wrote: On Tue, 9 May 2006 16:54:37 -0400 Rodrigo Cortés [EMAIL PROTECTED] wrote: Is it possible to install Postgres on a USB drive to run it anywhere? I'm trying to make a portable development environment for ruby on rails. Well, I see no problem with INSTALLING postgreSQL on a USB drive. But if you compiled it for Linux I don't think it would run on a Windows machine. In fact you might have a lot of trouble just getting it to run on a different version of the same DISTRIBUTION of linux, and that's assuming you could maintain mount points and paths. I think I'd go with a cheap laptop and/or Ruby installed on the USB drive and my postgreSQL installation on the network. John Purser -- Q: How do you keep a moron in suspense? ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] install postgres on usb drive???
On Tue, 9 May 2006 17:10:21 -0400 Rodrigo Cortés [EMAIL PROTECTED] wrote: A) Possible is a BIG word. So is anywhere. If you could narrow these two down a bit it might help answer your question. Possible means how to do it. Anywhere means a PC with a Windows OS. Rodrigo, with Windows I'm not sure how much the registry would have to be edited (if at all) to run the same postgresql installation from Windows machines A, B, and C. I strongly doubt that you could just walk up to a strange machine, plug in your USB drive, and away you go. Some years ago when I was still running Windows at home I recall there was a discussion on the cygwin mailing lists about running cygwin from a cd drive. You might want to check out their archives and see if you could use that information. http://www.cygwin.com/ Good luck. John Purser -- You will never know hunger. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] EXPLAIN SELECT .. does not return
Tom Lane wrote: David Link [EMAIL PROTECTED] writes: The following large EXPLAIN SELECT Statement fails to return, but continues to take up processing time until it is killed. [ 52-way join... ] Am I right in guessing that all the sales_xxx tables are the same size and have similar statistics? I think the problem is that the planner is faced with a large set of essentially equivalent plans and isn't pruning the list aggressively enough. That's something we fixed in 8.0. Correct. Postgresql 7.4.8 You really oughta try something newer. On my machine, 7.4.12 plans a 52-way join in about a minute, and 8.0 and 8.1 in under a second. We just completed our upgrade to 8.1.3. And we are happy campers! Our Explain plan problem has gone away and everything runs faster. I especially notice improved caching of repeated queries. Hats off to you postgres folks. Thank you very much. Postgres rocks! I wonder also if there's not a better way to design the query... maybe a UNION ALL would work better than nested joins. We need the info in separate columns. I don't think we can do it with UNION. That's why the many joins. I understand though with the new tablespace and inheritance features in 8/8.1 I could put all those sales tables back into one table and keep the data in separate files. regards, tom lane Thanks again for all your help. David Link Nielsen Entertainment, White Plains, NY ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] InitBufferPoolAccess crash
On Tuesday May 9 2006 3:07 pm, Tom Lane wrote: Ed L. [EMAIL PROTECTED] writes: from /opt/pgsql/installs/postgresql-7.3.4/bin/postmaster (gdb) bt #0 0x12d288 in InitBufferPoolAccess () from /opt/pgsql/installs/postgresql-7.3.4/bin/postmaster Error accessing memory address 0x0: Invalid argument. Right offhand I'd imagine that the calloc() call failed and returned NULL --- leastwise it's hard to see how else that routine could crash. Could your machine be nearly out of memory? Quite likely. It turns out we had a rogue spike of 1400 processes at that time, so that makes perfect sense. Thanks, Ed ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Preventing SQL Injection in PL/pgSQL in psql
Is my understanding correct that the following is vulnerable to SQL injection in psql: CREATE OR REPLACE FUNCTION fx ( my_var bpchar) RETURNS void AS $$ BEGIN INSERT INTO fx VALUES ( my_var ); END; $$ LANGUAGE 'plpgsql' VOLATILE Where this is NOT subject to SQL injection: CREATE OR REPLACE FUNCTION fx ( my_var bpchar) RETURNS void AS $$ BEGIN EXECUTE ' INSERT INTO fx VALUES ( ' || quote_literal( my_var) || ' ); '; END; $$ LANGUAGE 'plpgsql' VOLATILE Is this understanding correct? ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Preventing SQL Injection in PL/pgSQL in psql
On 9 May 2006 17:04:31 -0700, Karen Hill [EMAIL PROTECTED] wrote: Is my understanding correct that the following is vulnerable to SQL injection in psql: CREATE OR REPLACE FUNCTION fx ( my_var bpchar) RETURNS void AS $$ BEGIN INSERT INTO fx VALUES ( my_var ); END; $$ LANGUAGE 'plpgsql' VOLATILE No, IMO this is the safest and best option. Quoting, etc. is handled by the plpgsql processor (this is one of the things that make it so great). Where this is NOT subject to SQL injection: CREATE OR REPLACE FUNCTION fx ( my_var bpchar) RETURNS void AS $$ BEGIN EXECUTE ' INSERT INTO fx VALUES ( ' || quote_literal( my_var) || ' ); '; END; $$ LANGUAGE 'plpgsql' VOLATILE If you are making dynamic sql statements this (quote_literal) is the preferred way to do quotations... otherwise there is potential for a malformed statement. My rule of thumb is to use static sql when you can, dynamic when you have to. Merlin ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] [PERFORM] Arguments Pro/Contra Software Raid
Joshua D. Drake wrote: Vivek Khera wrote: On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote: Sorry that is an extremely misleading statement. SATA RAID is perfectly acceptable if you have a hardware raid controller with a battery-backed cache. And dollar for dollar, SCSI will NOT be faster nor have the hard drive capacity that you will get with SATA. Does this hold true still under heavy concurrent-write loads? I'm preparing yet another big DB server and if SATA is a better option, I'm all (elephant) ears. I didn't say better :). If you can afford it, SCSI is the way to go. However SATA with a good controller (I am fond of the LSI 150 series) can provide some great performance. Basically, you can get away with cheaper hardware, but it usually doesn't have the reliability/performance of more expensive options. You want an in-depth comparison of how a server disk drive is internally better than a desktop drive: http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf -- Bruce Momjian http://candle.pha.pa.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid
Scott Marlowe wrote: Actually, in the case of the Escalades at least, the answer is yes. Last year (maybe a bit more) someone was testing an IDE escalade controller with drives that were known to lie, and it passed the power plug pull test repeatedly. Apparently, the escalades tell the drives to turn off their cache. While most all IDEs and a fair number of SATA drives lie about cache fsyncing, they all seem to turn off the cache when you ask. And, since a hardware RAID controller with bbu cache has its own cache, it's not like it really needs the one on the drives anyway. You do if the controller thinks the data is already on the drives and removes it from its cache. -- Bruce Momjian http://candle.pha.pa.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] default client_encoding with psql on windows
On May 8, 2006, at 12:14 PM, Thomas Sondag wrote: I tried various things with the --set option of psql without success, like --set client_encoding=win1252 or --set CLIENT_ENCODING=win1252 or --set encoding=win1252. The variables are case sensitive, so it looks like you left out the right one. The docs say the psql variable is ENCODING (all caps). If that does not work maybe you could call \encoding in the psqlrc file? John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid
On May 9, 2006, at 11:26 AM, Joshua D. Drake wrote: Of course not, but which drives lie about sync that are SATA? Or more specifically SATA-II? I don't know the answer to this question, but have you seen this tool? http://brad.livejournal.com/2116715.html It attempts to experimentally determine if, with your operating system version, controller, and hard disk, fsync() does as claimed. Of course, experimentation can't prove the system is correct, but it can sometimes prove the system is broken. I say it's worth running on any new model of disk, any new controller, or after the Linux kernel people rewrite everything (i.e. on every point release). I have to admit to hypocrisy, though...I'm running with systems that other people ordered and installed, I doubt they were this thorough, and I don't have identical hardware to run tests on. So no real way to do this. Regards, Scott -- Scott Lamb http://www.slamb.org/ ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
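The durability pattern the linked tool exercises is just write, flush, fsync; a minimal sketch of that pattern (this is not code from the tool itself, only an illustration of the calls it stresses):

```python
import os
import tempfile

# The pattern a database uses to force a commit record to stable storage:
# write, drain userspace buffers to the kernel, then fsync. fsync() only
# *requests* durability -- a drive that lies about its write cache can still
# lose this data on power failure, which is exactly what such tools detect.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.write(b"commit record\n")
        f.flush()              # drain Python/libc buffers to the kernel
        os.fsync(f.fileno())   # ask the kernel to push data to the platter
    with open(path, "rb") as f:
        data = f.read()
finally:
    os.remove(path)
print(data)
```

A normal read-back like this can never distinguish an honest drive from a lying one; only cutting power between the fsync and a later check (as the tool does, over the network) can.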
[GENERAL] What's wrong with this SQL?
I'm trying to create a table and I'm getting this error: SQL error: ERROR: syntax error at or near ( at character 39 In statement: CREATE TABLE users (user_ID SERIAL(12), first_name character varying(40) NOT NULL, last_name character varying(40) NOT NULL, password character varying(16) NOT NULL, email character varying(100) NOT NULL, privilege integer(2) NOT NULL, PRIMARY KEY (user_ID)) What's wrong with this SQL? ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] What's wrong with this SQL?
If you count over 39 characters, you will see the parser is barking at this: user_ID SERIAL(12) ^ See: http://www.postgresql.org/docs/8.1/static/datatype.html#DATATYPE-SERIAL to find out why your definition is confusing Mr. SQL-parser. -Original Message- From: [EMAIL PROTECTED] [mailto:pgsql-general- [EMAIL PROTECTED] On Behalf Of Adam Sent: Tuesday, May 09, 2006 9:06 PM To: pgsql-general@postgresql.org Subject: [GENERAL] What's wrong with this SQL? I'm trying to create a table and I'm getting this error: SQL error: ERROR: syntax error at or near ( at character 39 In statement: CREATE TABLE users (user_ID SERIAL(12), first_name character varying(40) NOT NULL, last_name character varying(40) NOT NULL, password character varying(16) NOT NULL, email character varying(100) NOT NULL, privilege integer(2) NOT NULL, PRIMARY KEY (user_ID)) What's wrong with this SQL? ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] What's wrong with this SQL?
Adam [EMAIL PROTECTED] writes: I'm trying to create a table and I'm getting this error: ERROR: syntax error at or near ( at character 39 In statement: CREATE TABLE users (user_ID SERIAL(12), SERIAL doesn't take a parameter. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] What's wrong with this SQL?
Adam wrote: I'm trying to create a table and I'm getting this error: SQL error: ERROR: syntax error at or near ( at character 39 In statement: CREATE TABLE users (user_ID SERIAL(12), first_name character varying(40) NOT NULL, last_name character varying(40) NOT NULL, password character varying(16) NOT NULL, email character varying(100) NOT NULL, privilege integer(2) NOT NULL, PRIMARY KEY (user_ID)) privilege integer(2) NOT NULL Postgres doesn't support integers of different sizes like this. http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-INT -- Postgresql php tutorials http://www.designmagick.com/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] What's wrong with this SQL?
On May 10, 2006, at 12:06 AM, Adam wrote: CREATE TABLE users (user_ID SERIAL(12), first_name character varying(40) NOT NULL, last_name character varying(40) NOT NULL, password character varying(16) NOT NULL, email character varying(100) NOT NULL, privilege integer(2) NOT NULL, PRIMARY KEY (user_ID)) PostgreSQL integers don't have a size property. So SERIAL(12) and integer(2) are wrong. See the type choices here: http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-INT John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
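Putting the replies together, a corrected form of the DDL drops the size from SERIAL and picks a real integer type; smallint is my assumption about what integer(2) was meant to be. The sketch below runs the corrected statement through sqlite3 only as a rough syntax smoke test (sqlite is lenient about type names, so it won't catch Postgres-specific type errors); the types themselves are the standard PostgreSQL ones:

```python
import sqlite3

# Corrected DDL: SERIAL takes no size parameter, and PostgreSQL has
# smallint/integer/bigint rather than integer(n). Column names come from
# the original post; smallint for privilege is an assumption.
ddl = """
CREATE TABLE users (
    user_id    SERIAL PRIMARY KEY,
    first_name character varying(40)  NOT NULL,
    last_name  character varying(40)  NOT NULL,
    password   character varying(16)  NOT NULL,
    email      character varying(100) NOT NULL,
    privilege  smallint               NOT NULL
)
"""
conn = sqlite3.connect(":memory:")
conn.execute(ddl)  # parses cleanly; no "syntax error at or near (" here
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
print(cols)
```

In PostgreSQL itself, SERIAL expands to an integer column with an attached sequence, so no size can be given for it.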
Re: [GENERAL] Preventing SQL Injection in PL/pgSQL in psql
Merlin Moncure [EMAIL PROTECTED] writes: On 9 May 2006 17:04:31 -0700, Karen Hill [EMAIL PROTECTED] wrote: Is my understanding correct that the following is vulnerable to SQL injection in psql: ... no, IMO this is the safest and best option. Neither of the options that Karen shows are dangerous. What would be dangerous is building a SQL command string and feeding it to EXECUTE *without* using quote_literal. I agree with Merlin that you shouldn't use EXECUTE unless you have to --- it's both much slower than a precompiled statement, and much more vulnerable to security mistakes. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
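The distinction Tom draws can be demonstrated outside plpgsql, too. In the sketch below, Python's sqlite3 stands in for the database: parameter binding plays the role of plpgsql's static SQL (values passed out-of-band, never parsed as SQL), and naive string splicing plays the role of EXECUTE without quote_literal:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fx (v TEXT)")

evil = "x'); DROP TABLE fx; --"

# Safe: the driver binds the value out-of-band (the analogue of static SQL
# in plpgsql, where quoting is handled for you). The whole payload, quote
# characters and all, is stored as plain data.
conn.execute("INSERT INTO fx VALUES (?)", (evil,))
assert conn.execute("SELECT v FROM fx").fetchone() == (evil,)

# Unsafe: splicing the value into the SQL text (the analogue of EXECUTE
# without quote_literal) lets the payload break out of the string literal
# and run its own statement -- here it drops the table.
conn.executescript("INSERT INTO fx VALUES ('" + evil + "')")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)  # fx is gone
```

With quote_literal (or binding), the same payload would simply end up stored as a funny-looking string, which is the behavior Karen's first function already gets for free.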
Re: [GENERAL] Arguments Pro/Contra Software Raid
Douglas McNaught [EMAIL PROTECTED] writes: Vivek Khera [EMAIL PROTECTED] writes: On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote: And dollar for dollar, SCSI will NOT be faster nor have the hard drive capacity that you will get with SATA. Does this hold true still under heavy concurrent-write loads? I'm preparing yet another big DB server and if SATA is a better option, I'm all (elephant) ears. Correct me if I'm wrong, but I've never heard of a 15kRPM SATA drive. Well, dollar for dollar you would get the best performance from slower drives anyways since it would give you more spindles. 15kRPM drives are *expensive*. -- greg ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Arguments Pro/Contra Software Raid
Steve Atkins [EMAIL PROTECTED] writes: On May 9, 2006, at 2:16 AM, Hannes Dorbath wrote: Hi, I've just had some discussion with colleagues regarding the usage of hardware or software raid 1/10 for our linux based database servers. I myself can't see much reason to spend $500 on high end controller cards for a simple Raid 1. Any arguments pro or contra would be desirable. Really most of what's said about software raid vs hardware raid online is just FUD. Unless you're running BIG servers with so many drives that the raid controllers are the only feasible way to connect them up anyways, the actual performance difference will likely be negligible. The only two things that actually make me pause about software RAID in heavy production use are: 1) Battery backed cache. That's a huge win for the WAL drives on Postgres. 'nuff said. 2) Not all commodity controllers or IDE drivers can handle failing drives gracefully. While the software raid might guarantee that you don't actually lose data, you still might have the machine wedge because of IDE errors on the bad drive. So as far as runtime, instead of added reliability all you've really added is another point of failure. On the data integrity front you'll still be better off. -- Greg ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org