[GENERAL] pg_hba.conf
Dear all, I am using Postgres. When I try to connect to the database it shows me the following error. Please look into it and help me out: an error occurred: FATAL: no pg_hba.conf entry for host "127.0.0.1", user "postgres", database "template1", SSL off. How do I proceed with this error? What are the things I need to change? Thank you in advance
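[Editor's note] The error means the server found no rule in pg_hba.conf matching a TCP connection from 127.0.0.1 for that user and database. A line like the following would allow it (a sketch; the md5 auth method is just one reasonable choice, and the file lives in the server's data directory):

```
# TYPE  DATABASE  USER      ADDRESS       METHOD
host    all       postgres  127.0.0.1/32  md5
```

After editing the file, reload the server (e.g. `pg_ctl reload`) for the change to take effect.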
Re: [GENERAL] Application user name attribute on connection pool
On 2 Aug 2010, at 23:43, Radosław Smogura wrote: >> PostgreSQL already has BIGINT aka INT8, which are 8 bytes, and can >> represent integers up to like 9 billion billion (eg, 9 * 10^18). > But I think about numbers with precision - you can use float for moneys, etc > (rounding problems), and dividing each value in application by some scale > isn't nice, too. Most people don't use float for monetary values. Have a look at the NUMERIC type: http://www.postgresql.org/docs/8.4/interactive/datatype-numeric.html Alban Hertroys -- If you can't see the forest for the trees, cut the trees and you'll see there is no forest. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
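[Editor's note] The float rounding problem alluded to above is easy to demonstrate. A small illustration in Python, where Decimal plays the role NUMERIC plays in the database:

```python
from decimal import Decimal

# Binary floating point cannot represent most decimal fractions
# exactly, which is why float is a poor fit for monetary values.
print(0.1 + 0.2)                          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                   # False

# A decimal type (like PostgreSQL's NUMERIC) keeps exact decimal digits.
print(Decimal("0.10") + Decimal("0.20"))  # 0.30
```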
Re: [GENERAL] Postgresql database for distributed transactions
On 08/02/10 3:49 PM, Hu, William wrote: Each of the mobile POS when facing the consumer needs to process at most 500 transactions a day. I was hoping to use the intermittent network connection time to do system download to flag transactions, such as, stolen credit card number, etc. And, to use the stable midnight network connection time to do uploads of the mobile transactions to the central Postgresql database. The upload would require at most 600 mobile terminals times 500 transactions at most of 3,000,000 transactions a day in 30 minutes time frame. fwiw, 600 * 500 is only 300,000, not 3,000,000. That's only 10,000 transactions/minute, 166/second, which should be readily achievable with a reasonable DB server with the appropriate disk system. I'm not sure how you can ensure data consistency when your POS terminals are disconnected from each other.
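[Editor's note] The corrected arithmetic from the reply, as a quick sanity check (all numbers taken from the thread):

```python
terminals = 600
tx_per_terminal = 500

total = terminals * tx_per_terminal   # 300,000 rows per night, not 3,000,000
per_minute = total // 30              # spread over a 30-minute upload window
per_second = per_minute // 60         # ~166 transactions/second

print(total, per_minute, per_second)  # 300000 10000 166
```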
[GENERAL] Postgresql database for distributed transactions
Hi, I am designing a mobile POS application. I was hoping I can use the Postgresql database as the central transactional database. The application itself will run on a remote mobile PC. The retail transaction data is small; it consists of the transaction timestamp, location, amount, item price, item type, etc. There are at least 600 of these types of mobile POS required. Each of the mobile PCs DOES NOT have a stable and consistent network connection throughout the day. For example, the network may be available for 5 minutes in the morning, and no connection at all in the afternoon. However, there are at least 30 minutes of guaranteed stable connection time at midnight every day. Each of the mobile POS when facing the consumer needs to process at most 500 transactions a day. I was hoping to use the intermittent network connection time to do system download to flag transactions, such as stolen credit card numbers, etc. And, to use the stable midnight network connection time to do uploads of the mobile transactions to the central Postgresql database. The upload would require at most 600 mobile terminals times 500 transactions, at most 3,000,000 transactions a day, in a 30-minute time frame. What specific Postgresql feature can I use to process the upload transactions? Should I attempt to write transaction logs or do bulk inserts? What specific features in Postgresql would help to ensure the data consistency in such a computing environment? Thanks in advance. William Hu Trimet.org
Re: [GENERAL] solaris slow
On 2 August 2010 19:14, John R Pierce wrote: > On 08/02/10 7:35 AM, Felipe de Jesús Molina Bravo wrote: > >> Hi >> >> >> I have installed postgres 8.4.4 (libxml and plperl)on a Sun Microsystems >> sun4u Sun Fire 880 with: >> >> - 4 700Mhz processors >> ... >> >> Also I have installed postgres (8.4.2) on a PC (Dell Optiplex 960) with >> the next characteristics: >> >> - 3 Gb ram >> - 1 processor of 3.2 Ghz Dual Core >> > ... > > dual 3.2Ghz Core2 processor cores are going to be far faster than a 9 year > old quad 700Mhz ultrasparc-III's, regardless of the operating system. > > If you want to compare operating system performance, you should do so on > the same hardware platform. > > btw, the standard disks on a SunFire 880 are FC-AL not SCSI... old 1Gbps > FC, I believe, with 36-146GB 10K rpm FC drives standard, with all the > internal drives sharing the same 1Gbps FC port. > Another factor, if your linux system was using LVM (its the default storage > configuration on many distributions), there's a pretty good chance the drive > mapper is ignoring write barriers, which greatly speeds up random writes at > the expense of reliable commits. > > My file-system is Reiser 3. I tried with "CACHE turned on" (a recommendation from Joshua ... thanks) with the same results. I tried to migrate my database from the PC to the Sun ... with the same performance, but I could not. My conclusion is: I'll never get similar performance, because the PC's technology is more recent than the Sun's. Thanks to all for your answers. See you
Re: [GENERAL] Application user name attribute on connection pool
> how would you handle scale factors? numeric represents a BCD data > type, with a decimal fractional component. how would you represent, > say, 1.001 in your version? how would you add 1.001 to 1.01 > in your binary representation? I think about a data structure something like this: [precision 16 bits][scale 15 bits][1 bit sign]int[n] (here n can always be calculated as (size of datatype - 8) / 4). In this way the number 1.001 will be stored as the single-element array 8,3,+,{1001}. If the scales are the same (typically in aggregates), then it's just adding these arrays of integers. If the scales aren't the same, then one of the arguments must be multiplied by 10^(scales diff). In this way the result of 1.001 + 1.01 will be 1001 + 101*10 with scale 3. I think there is no big algorithmic difference between nbase encoding and encoding on full bytes - because in nbase encoding the carry in addition you take as (a+b)/10000. Here the difference is only that carry will be taken by shifting longs, eg: long l = a[0] + b[0]; carry = l >> 32; s[0] = l & 0xffffffff; > PostgreSQL already has BIGINT aka INT8, which are 8 bytes, and can > represent integers up to like 9 billion billion (eg, 9 * 10^18). But I think about numbers with precision - you can use float for moneys, etc (rounding problems), and dividing each value in application by some scale isn't nice, too.
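[Editor's note] The scale-alignment rule described above (multiply one operand by 10^(scale difference) before adding) can be sketched in a few lines. This is only an illustration of the proposal, not actual PostgreSQL code:

```python
def add_scaled(a_digits, a_scale, b_digits, b_scale):
    """Add two scaled integers, where value = digits * 10**-scale.
    The operand with the smaller scale is shifted up to match."""
    if a_scale < b_scale:
        a_digits, a_scale, b_digits, b_scale = b_digits, b_scale, a_digits, a_scale
    # a_scale >= b_scale: shift b up by the scale difference
    b_digits *= 10 ** (a_scale - b_scale)
    return a_digits + b_digits, a_scale

# 1.001 stored as (1001, scale 3); 1.01 as (101, scale 2):
print(add_scaled(1001, 3, 101, 2))  # (2011, 3), i.e. 2.011
```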
Re: [GENERAL] Need help on WAL configure
On 08/02/10 10:38 AM, stee...@gmail.com wrote: Need some help to configure the WAL file shipping and restore it to the target server. Here is what I configured in the source server (both Windows). In postgresql.conf, I enabled the following options:

# WRITE AHEAD LOG
fsync = on
synchronous_commit = on
wal_sync_method = fsync
# - Archiving -
archive_mode = on
archive_command = 'copy "%p" "D:\\PG_DATA\\%f"'
#archive_timeout = 0

The D drive is on the same source server. (I am also having trouble shipping it to the target; I had a network drive mapped on the source server, but for some reason I couldn't get it to copy over to this network drive, so I had to copy it to the local D drive first.) Windows network drive mappings are on a per-user-session basis. The drives you have mapped on your desktop will NOT be available to a service, which is probably running with different user credentials anyway. Instead, use UNC names, like \\server\sharename\path\... and also be sure to verify that the user the service is running as (postgres?) has write access to the \\server\share. This can be pretty tricky to debug.
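[Editor's note] Following the advice above, the archive_command could point at a UNC path directly, with no drive mapping involved (a sketch; the server and share names here are placeholders, and backslashes must be doubled inside the quoted conf string):

```
# postgresql.conf on the source server (hypothetical share name)
archive_command = 'copy "%p" "\\\\fileserver\\walarchive\\%f"'
```

The service account the PostgreSQL service runs as still needs write permission on that share for the copy to succeed.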
Re: [GENERAL] Need help on WAL shipping
Yes, that's me, and it is for Windows. I enabled archiving and copied the WAL files, and created a base backup. I restored the base backup to the target, removed the files under pg_xlog, and copied over the archived WAL files; for some reason recovery always looks for the WAL files from the base restore, not those archived WAL files. --Original Message-- From: Joshua D. Drake To: stee...@gmail.com Cc: pgsql-general@postgresql.org ReplyTo: j...@commandprompt.com Subject: Re: [GENERAL] Need help on WAL shipping Sent: Aug 2, 2010 5:07 PM On Mon, 2010-08-02 at 21:04 +, stee...@gmail.com wrote: > Hi, I posted twice and didn't get any response, wonder my post didn't get > populated properly. > > Again, followed the document on WAL shipping, and couldn't get target to > recognize the archive WAL files. It looks for different WAL files. The post I saw was rather unreadable, I believe you were the one trying log shipping on Windows correct? JD > > Did I miss any steps? > > Thx > Sent from my BlackBerry device on the Rogers Wireless Network > -- PostgreSQL.org Major Contributor Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579 Consulting, Training, Support, Custom Development, Engineering http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt Sent from my BlackBerry device on the Rogers Wireless Network
Re: [GENERAL] Need help on WAL shipping
On Mon, 2010-08-02 at 21:04 +, stee...@gmail.com wrote: > Hi, I posted twice and didn't get any response, wonder my post didn't get > populated properly. > > Again, followed the document on WAL shipping, and couldn't get target to > recognize the archive WAL files. It looks for different WAL files. The post I saw was rather unreadable, I believe you were the one trying log shipping on Windows correct? JD > > Did I miss any steps? > > Thx > Sent from my BlackBerry device on the Rogers Wireless Network > -- PostgreSQL.org Major Contributor Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579 Consulting, Training, Support, Custom Development, Engineering http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
[GENERAL] Need help on WAL shipping
Hi, I posted twice and didn't get any response; I wonder if my post didn't get propagated properly. Again, I followed the document on WAL shipping, and couldn't get the target to recognize the archived WAL files. It looks for different WAL files. Did I miss any steps? Thx Sent from my BlackBerry device on the Rogers Wireless Network
Re: [GENERAL] solaris slow
2010/8/2 John R Pierce : > On 08/02/10 7:35 AM, Felipe de Jesús Molina Bravo wrote: >> >> Hi >> >> >> I have installed postgres 8.4.4 (libxml and plperl)on a Sun Microsystems >> sun4u Sun Fire 880 with: >> >> - 4 700Mhz processors >> ... >> Also I have installed postgres (8.4.2) on a PC (Dell Optiplex 960) with >> the next characteristics: >> >> - 3 Gb ram >> - 1 processor of 3.2 Ghz Dual Core > > ... > > dual 3.2Ghz Core2 processor cores are going to be far faster than a 9 year > old quad 700Mhz ultrasparc-III's, regardless of the operating system. > Good point. Note to the OP, you can install OpenSolaris on your PC hardware if you want to compare the same hardware with a different OS.
Re: [GENERAL] solaris slow
On 08/02/10 7:35 AM, Felipe de Jesús Molina Bravo wrote: Hi I have installed postgres 8.4.4 (libxml and plperl) on a Sun Microsystems sun4u Sun Fire 880 with: - 4 700Mhz processors ... Also I have installed postgres (8.4.2) on a PC (Dell Optiplex 960) with the next characteristics: - 3 Gb ram - 1 processor of 3.2 Ghz Dual Core ... dual 3.2GHz Core2 processor cores are going to be far faster than 9-year-old quad 700MHz UltraSPARC-IIIs, regardless of the operating system. If you want to compare operating system performance, you should do so on the same hardware platform. btw, the standard disks on a SunFire 880 are FC-AL, not SCSI... old 1Gbps FC, I believe, with 36-146GB 10K rpm FC drives standard, and all the internal drives sharing the same 1Gbps FC port. Another factor: if your Linux system was using LVM (it's the default storage configuration on many distributions), there's a pretty good chance the device mapper is ignoring write barriers, which greatly speeds up random writes at the expense of reliable commits.
Re: [GENERAL] Application user name attribute on connection pool
On 08/02/10 6:30 AM, rsmog...@softperience.pl wrote: I would like to ask about your opinion on the numeric type. I implemented binary read for the numeric type in JDBC, and I saw that the numeric type is stored inside the database as an array of shorts no greater than nbase (currently 10000). In my opinion this isn't a high-performance method, for two reasons: 1. Arithmetic operations could take more time. 2. It's generally about JDBC and other drivers: transmitting a numeric value is complicated and costs performance on the client side, as for long numbers many multiplications and additions must occur. I think about writing something like numeric2, which internally will be represented as an array of ints without nbase. In this context I would like to ask your opinion: 1. Could this behaviour be useful? I imagine a performance increase on storing and retrieving values; similarly, arithmetic should be faster. (Currently 10001 + 10001 requires 4 operations: 2 additions of 1, and 2 additions of 1 from 1*1 and carry move operations; if this value is stored without nbase, with full bits, then addition even in short will take 1 operation 10001+10001 + carry move.) 2. Will this decrease other performance? I think that text processing will be much slower, but will this decrease engine performance, as text conversion is required when creating the type? how would you handle scale factors? numeric represents a BCD data type, with a decimal fractional component. how would you represent, say, 1.001 in your version? how would you add 1.001 to 1.01 in your binary representation? PostgreSQL already has BIGINT aka INT8, which are 8 bytes, and can represent integers up to like 9 billion billion (eg, 9 * 10^18).
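[Editor's note] The base-10000 carry scheme under discussion can be illustrated in a few lines of Python. This is a sketch of the general idea only, not the actual numeric.c code (real numeric digits carry a weight and sign header; digits here are little-endian for simplicity):

```python
NBASE = 10000  # each stored "digit" is a base-10000 value held in a short

def add_digits(a, b):
    """Add two little-endian arrays of base-NBASE digits,
    propagating carry as (a + b) // NBASE."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry
        s += a[i] if i < len(a) else 0
        s += b[i] if i < len(b) else 0
        out.append(s % NBASE)
        carry = s // NBASE
    if carry:
        out.append(carry)
    return out

# 9999 + 2 = 10001, i.e. digits [1, 1] (1 + 1*10000):
print(add_digits([9999], [2]))  # [1, 1]
```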
[GENERAL] Need help on WAL configure
Need some help to configure the WAL file shipping and restore it to the target server. Here is what I configured in the source server (both Windows). In postgresql.conf, I enabled the following options:

# WRITE AHEAD LOG
fsync = on
synchronous_commit = on
wal_sync_method = fsync
# - Archiving -
archive_mode = on
archive_command = 'copy "%p" "D:\\PG_DATA\\%f"'
#archive_timeout = 0

The D drive is on the same source server. (I am also having trouble shipping it to the target; I had a network drive mapped on the source server, but for some reason I couldn't get it to copy over to this network drive, so I had to copy it to the local D drive first.) I created the checkpoint, then made the base backup and copied the backup file to the target along with the archived WAL files (filename: 00010 etc). The next step was to restore the base backup on the target server. After restoring the base backup, I stopped the PG service, moved away any files (filename: 2... etc) under pg_xlog, copied the source's WAL files to pg_xlog, and created a recovery.conf file with the following line:

restore_command = 'copy "C:\\pgwal\\%f" "%p"'

When I tried to start the PG service, for some reason the recovery command is always looking for 2, which was created during the base backup restore, and it didn't even look for those archived WAL files from the source. Once I copied back those files that I moved away during the backup restore, it completes the recovery. Now, why won't it recognize the WAL files from the source server? Did I miss any step? Sent from my BlackBerry device on the Rogers Wireless Network
Re: [GENERAL] solaris slow
On Mon, 2010-08-02 at 14:35 +, Felipe de Jesús Molina Bravo wrote: > Hi > > We design some performance tests (special for my application); this > test was executed with pgbench: > > pgbench -c 5 -T 600 -n -U aeedc -f test.sql aeespc > You noted SCSI disks on solaris. They may not have the CACHE turned on, on the drive. The SATA drives will have cache turned on and will be faster. That said, your results are slow as a whole. Joshua D. Drake -- PostgreSQL.org Major Contributor Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579 Consulting, Training, Support, Custom Development, Engineering http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
Re: [GENERAL] problem with ON UPDATE rule
Phil Dagosto writes: > I am trying to use event notifications controlled by an ON UPDATE > rule. Rules don't work the way you are hoping --- in particular, the NOTIFY will fire whether or not any individual rows get inserted. You might have better luck using a trigger instead. regards, tom lane
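[Editor's note] Following that advice, a trigger-based equivalent of the r1/r2 rules from the question might look like this (a sketch using the table and column names from the original post; the function and trigger names are made up):

```sql
CREATE OR REPLACE FUNCTION table1_complete() RETURNS trigger AS $$
BEGIN
    -- Only act on rows the rule's WHERE clause was meant to match
    IF NEW.run_status = 'complete' THEN
        INSERT INTO table2 (r_id, r_status) VALUES (NEW.run_id, NEW.run_status);
        NOTIFY table2;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t1_on_complete
    AFTER UPDATE ON table1
    FOR EACH ROW EXECUTE PROCEDURE table1_complete();
```

Unlike the rule, the trigger body runs (and the NOTIFY fires) only for rows where the condition is actually true.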
[GENERAL] problem with ON UPDATE rule
Hi all, I'm new to Postgres and I'm not really a database expert but I was wondering if someone could help me out with this issue. I am trying to use event notifications controlled by an ON UPDATE rule. In the table I'm interested in I have created a rule that should be invoked when a particular column is updated to a value of 'complete':

        Table "public.table1"
   Column   |  Type   | Modifiers
------------+---------+-----------
 run_id     | text    |
 run_status | text    |
 flag       | boolean |
Rules:
 r1 AS ON UPDATE TO table1 WHERE new.run_status = 'complete'::text
    DO INSERT INTO table2 (r_id, r_status) VALUES (new.run_id, new.run_status)

The second table is supposed to have a row inserted into it when the conditional update takes place and has an ON INSERT rule that fires the notification:

      Table "public.table2"
  Column  | Type | Modifiers
----------+------+-----------
 r_id     | text |
 r_status | text |
Rules:
 r2 AS ON INSERT TO table2 DO NOTIFY table2

To test this I am using the example program in the section of the Postgres documentation that discusses event notification and libpq. And this all works: when I update table1 and set a particular row's value for run_status to "complete", a row is inserted into table2, the rule on table2 fires, and the example program, which is listening for "table2", is notified as expected. The problem is, this also happens when an update to table1 sets the value of run_status to something other than "complete". In these cases, no row is inserted into table2, but the notification is issued just the same and is received by the example program. How is the notification being issued if no row is being inserted into table2? By the way, I have also tried modifying the WHERE clause in r1 to: WHERE old.run_status <> new.run_status AND new.run_status = 'complete' but this did not change the behavior at all. I am using Postgres 8.3.11 and before you tell me to upgrade, I do not have any choice about that.
I'm working with a vendor system that uses Postgres and I have no control (read ZERO influence) on what version of Postgres is being used. Thanks in advance for any help or insight into how I can debug this problem.
[GENERAL] Register now for Surge 2010
Registration for Surge Scalability Conference 2010 is open for all attendees! We have an awesome lineup of leaders from across the various communities that support highly scalable architectures, as well as the companies that implement them. Here's a small sampling from our list of speakers: John Allspaw, Etsy Theo Schlossnagle, OmniTI Rasmus Lerdorf, creator of PHP Tom Cook, Facebook Benjamin Black, fast_ip Artur Bergman, Wikia Christopher Brown, Opscode Bryan Cantrill, Joyent Baron Schwartz, Percona Paul Querna, Cloudkick Surge 2010 focuses on real case studies from production environments; the lessons learned from failure and how to re-engineer your way to a successful, highly scalable Internet architecture. The conference takes place at the Tremont Grand Historic Venue on Sept 30 and Oct 1, 2010 in Baltimore, MD. Register now to enjoy the Early Bird discount and guarantee your seat to this year's event! http://omniti.com/surge/2010/register Thanks, -- Jason Dixon OmniTI Computer Consulting, Inc. jdi...@omniti.com 443.325.1357 x.241
Re: [GENERAL] Application user name attribute on connection pool
On Mon, Aug 2, 2010 at 07:30, wrote: > Hello, > > I'm not member of this list, but because the 9th version of PostgreSQL is > incoming, I would like to ask if there is possibility to add session > property of application user Not for 9.0; it's more or less already cut. > - this property, in contrast to login name, > should be set by the driver (e.g. JDBC) to the current user logged into > the application server. This, for example, will allow triggers to store real > user names in history tables or a log subsystem (in future) to write who > deleted this or that. A similar solution is in Oracle. You might be able to (ab)use the new application_name GUC, see http://www.postgresql.org/docs/9.0/static/runtime-config-logging.html#GUC-APPLICATION-NAME. I have used custom GUCs, per-session temporary tables, and giving each user an actual database 'role' for this in the past. Any of those with appropriate triggers works fairly well. I find you normally end up needing a temp table anyway, as you end up wanting to log more than just the user (for me it's the "page_load" or the "action").
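[Editor's note] Using application_name that way might look like the following (application_name and current_setting() are real in 9.0; the value format shown is just an example):

```sql
-- Set by the pool/driver when it hands a connection to a request:
SET application_name = 'webapp:alice';

-- Later, e.g. inside an audit trigger, read it back:
SELECT current_setting('application_name');
```

The setting is per-session, so a pool must reset it each time it reassigns a connection to a different application user.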
[GENERAL] solaris slow
Hi I have installed postgres 8.4.4 (libxml and plperl) on a Sun Microsystems sun4u Sun Fire 880 with:

- 4 700MHz processors
- 8 GB RAM
- Solaris 9 operating system
- 64-bit sparcv9 applications
- 32-bit sparc applications
- SCSI-3. Discs do not have any arrangement

It was compiled with Sun Studio (flags -xO5 and 32-bit) ... I followed the recommendations from: http://www.postgresql.org/docs/8.4/static/installation-platform-notes.html#INSTALLATION-NOTES-SOLARIS

Also I have installed postgres (8.4.2) on a PC (Dell Optiplex 960) with the next characteristics:

- 3 GB RAM
- 1 processor of 3.2 GHz Dual Core
- SATA hard disks
- 06/02/1928 Linux (gentoo)

We designed some performance tests (specific to my application); this test was executed with pgbench:

pgbench -c 5 -T 600 -n -U aeedc -f test.sql aeespc

The results were the following. On the PC:

Thu Jul 29 15:50:02 CDT 2010
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 5
duration: 120 s
number of transactions actually processed: 159
tps = 1.302670 (including connections establishing)
tps = 1.302746 (excluding connections establishing)
Thu Jul 29 15:52:04 CDT 2010

On Solaris:

Mon Jul 26 18:41:58 CDT 2010
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 5
duration: 120 s
number of transactions actually processed: 85
tps = 0.668752 (including connections establishing)
tps = 0.669172 (excluding connections establishing)
Mon Jul 26 18:44:05 CDT 2010

As observed, Solaris is much slower. I also see the difference in disk access.
For example, iostat shows on the PC (some columns filtered):

device r/s w/s rKB/s wKB/s await %util
sda 0.12 3.62 8.24 45.85 6.06 0.16
sda 0 2.20 126.4 0.64 0.08
sda 0 0.80 80 0
sda 0 1.60 31.2 0.12 0.02
sda 0 1.20 8.8 0.17 0.02
sda 0 1.80 36 0.11 0.02
sda 0 1 0 5.6 0.6 0.06

and on Solaris:

device r/s w/s kr/s kw/s wait wsvc_t asvc_t %w %b
c1t0d0 0.1 2.8 2.5 25.3 0.1 27.8 22.8 0 2
c1t0d0 0 4.8 0 36.8 0 0 5.8 0 3
c1t0d0 0 10.8 0 92.8 0 0 8 0 6
c1t0d0 0 12.2 0 97.6 1.2 99.8 77.6 5 7

Solaris is also slower here ... I made the following "tunings" on Solaris without managing to improve the performance:

http://blogs.sun.com/jkshah/entry/best_practices_with_postgresql_8
http://archives.postgresql.org/pgsql-performance/2006-02/msg00190.php
http://vnull.pcnet.com.pl/dl/postgresql/pgsol_best_practices.pdf

My questions: - Is there another way to improve the performance of Solaris? Where can I read more? - The PC (with Linux) has better performance than the Sun (Solaris); am I therefore never going to get better performance ... what do you think about it? Greetings and thanks in advance.
Re: [GENERAL] 8.4 backpatching
"Igor Neyman" writes: > So, "backpatched to 8.4" - does it mean that we can expect new 8.4 > release (8.4.5) that will include these patches in near future? > And if yes, then - when? Will it coinside with 9.0 release? 8.4.5 will be released whenever it seems appropriate (ie, whenever the number and severity of accumulated bugs justifies it). We don't normally synchronize back-branch updates with release of a new major version --- it'd be a poor strategy both in terms of managing PR and likely load on the project's servers. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] 8.4 backpatching
Hello, This question is mainly addressed to PG core developers. Today in the "PostgreSQL 9.0 Beta4 Released" article on postgresql.org I read (amongst other things): "Allow full SSL certificate verification when host and hostaddr are both specified, backpatched to 8.4;" There were similar notes about backpatching to 8.4 when 9.0 Beta3 was released. So, "backpatched to 8.4" - does it mean that we can expect a new 8.4 release (8.4.5) that will include these patches in the near future? And if yes, then - when? Will it coincide with the 9.0 release? Thank you in advance, Igor Neyman
Re: [GENERAL] Netsted views working on same set of data on 8.1.21 but not on 8.4.4
writes: > I have migrated a 5 years old database from 8.1.21 running on top of centos > 5.5 on an i686 to 8.4.4 running on top of centos on an x86_64 architecture. > I have some nested views and i am still able to get the results on 8.1.21 but > when it comes to 8.4.4, I can get the results only from views that references > other tables but the views references other views, I get an empty result set. > Here is the output of explain on the two different machines. Can someone help > me please: Not with that amount of information. If you can provide a self-contained test case that produces different results on 8.1 and 8.4, we could look into it. See http://www.postgresql.org/docs/8.4/static/bug-reporting.html regards, tom lane
[GENERAL] Application user name attribute on connection pool
Hello, I'm not a member of this list, but because the 9th version of PostgreSQL is incoming, I would like to ask if there is a possibility to add a session property of the application user - this property, in contrast to the login name, should be set by the driver (e.g. JDBC) to the current user logged into the application server. This, for example, will allow triggers to store real user names in history tables or a log subsystem (in future) to write who deleted this or that. A similar solution is in Oracle. I would like to ask about your opinion on the numeric type. I implemented binary read for the numeric type in JDBC, and I saw that the numeric type is stored inside the database as an array of shorts no greater than nbase (currently 10000). In my opinion this isn't a high-performance method, for two reasons: 1. Arithmetic operations could take more time. 2. It's generally about JDBC and other drivers: transmitting a numeric value is complicated and costs performance on the client side, as for long numbers many multiplications and additions must occur. I think about writing something like numeric2, which internally will be represented as an array of ints without nbase. In this context I would like to ask your opinion: 1. Could this behaviour be useful? I imagine a performance increase on storing and retrieving values; similarly, arithmetic should be faster. (Currently 10001 + 10001 requires 4 operations: 2 additions of 1, and 2 additions of 1 from 1*1 and carry move operations; if this value is stored without nbase, with full bits, then addition even in short will take 1 operation 10001+10001 + carry move.) 2. Will this decrease other performance? I think that text processing will be much slower, but will this decrease engine performance, as text conversion is required when creating the type? Kind regards, Radek.
[GENERAL] Netsted views working on same set of data on 8.1.21 but not on 8.4.4
Hello, I have migrated a 5-year-old database from 8.1.21 running on top of CentOS 5.5 on an i686 to 8.4.4 running on top of CentOS on an x86_64 architecture. I have some nested views, and I am still able to get the results on 8.1.21, but when it comes to 8.4.4, I can get results only from views that reference other tables; from views that reference other views, I get an empty result set. Here is the output of EXPLAIN on the two different machines. Can someone help me please? On 8.1.21:

Subquery Scan v_tableau_de_bord_nevralgique_dcg  (cost=6726.72..6727.11 rows=31 width=328)
  ->  Sort  (cost=6726.72..6726.80 rows=31 width=297)
        Sort Key: v_articles_sous_min_reappro_a_central_sfa.code_reapprovisionnement, v_articles_sous_min_reappro_a_central_sfa.article
        ->  Nested Loop Left Join  (cost=6394.14..6725.96 rows=31 width=297)
              ->  Nested Loop  (cost=6394.14..6560.90 rows=28 width=224)
                    ->  Sort  (cost=6394.14..6394.21 rows=28 width=234)
                          Sort Key: v_situation_stock_sfa.nevralgique, v_situation_stock_sfa.total_ressources, v_situation_stock_sfa.code_reapprovisionnement, v_situation_stock_sfa.article
                          ->  HashAggregate  (cost=6393.12..6393.47 rows=28 width=234)
                                ->  Nested Loop Left Join  (cost=6291.29..6392.42 rows=28 width=234)
                                      ->  Sort  (cost=6291.29..6291.32 rows=13 width=82)
                                            Sort Key: t_articles_stock.code_reapprovisionnement, t_articles_stock.article
                                            ->  HashAggregate  (cost=6290.82..6291.05 rows=13 width=82)
                                                  Filter: (GREATEST(sum(quantite_stock), 0::real) < min_reappro)
                                                  ->  Nested Loop  (cost=3627.64..6290.49 rows=13 width=82)
                                                        ->  Merge Join  (cost=3627.64..6191.86 rows=18 width=74)
                                                              Merge Cond: ("outer".article = "inner".article)
                                                              ->  Merge Left Join  (cost=3624.06..6119.89 rows=27248 width=64)
                                                                    Merge Cond: ("outer".article = "inner".article)
                                                                    ->  Index Scan using "Articles_SNCFT_pkey" on t_articles_stock  (cost=0.00..2353.27 rows=27248 width=60)
                                                                    ->  Sort  (cost=3624.06..3636.47 rows=4963 width=8)
                                                                          Sort Key: v_ressources_par_article.article
                                                                          ->  Subquery Scan v_ressources_par_article  (cost=3158.11..3319.40 rows=4963 width=8)
                                                                                ->  GroupAggregate  (cost=3158.11..3269.77 rows=4963 width=44)
                                                                                      ->  Sort  (cost=3158.11..3170.51 rows=4963 width=44)
                                                                                            Sort Key: t_ressources.article, t_articles_stock.nom
                                                                                            ->  Hash Left Join  (cost=1140.60..2853.45 rows=4963 width=44)
                                                                                                  Hash Cond: ("outer".article = "inner".article)
                                                                                                  ->  Seq Scan on t_ressources  (cost=0.00..161.63 rows=4963 width=8)
                                                                                                  ->  Hash  (cost=845.48..845.48 rows=27248 width=40)
                                                                                                        ->  Seq Scan on t_articles_stock  (cost=0.00..845.48 rows=27248 width=40)
                                                              ->  Sort  (cost=3.58..3.63 rows=18 width=10)
                                                                    Sort Key: t_valeurs_stock.article
                                                                    ->  Index Scan using "idx_NEVRALGIQUE_T_VALEURS_STOCK" on t_valeurs_stock  (cost=0.00..3.21 rows=18 width=10)
                                                                          Index Cond: (nevralgique = true)
                                                                          Filter: (nevralgique IS TRUE)
                                                        ->  Index Scan using t_stocks_pkey on t_stocks  (cost=0.00..5.47 rows=1 width=16)
Re: [GENERAL] Libpq on iPad?
On Aug 1, 2010, at 4:44 PM, Jerry LeVan wrote: > Has anyone looked at the feasibility of building libpq as a first step > towards building > An iPad app that can talk to postgresql dbs ? Apparently, yes. http://www.spanware.com/iphonedb/MobileCan/MobileCan.html ISTR someone discussing it on IRC or on one of the mailing lists, and it being a fairly naive port of libpq to cocoa rather than being a new native implementation of the protocol. If you google for "iphone postgresql" you'll find a fair bit of discussion, and some example build scripts for libpq on iphone. The EnterpriseDB folks also did an iphone friendly webapp as part of stackbuilder, I think. Cheers, Steve
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Fri, Jul 30, 2010 at 5:41 AM, Brad Nicholson wrote: > On 10-07-29 08:54 PM, Greg Smith wrote: >> >> Brad Nicholson wrote: >>> >>> Postgres also had a reputation of being slow compared to MySQL. >>> This was due to a lot of really poor MySQL vs Postgres benchmarks >>> floating around in the early 2000's. >> >> I think more of those were fair than you're giving them credit for. > > I'm sure some were, but I recall a lot that were not. > > The main problem I recall is that they took the stock postgresql.conf > (which was far too restrictive) and measured it against a much better MySQL > config. They then measured some unrealistic test for most applications and > declared MySQL the clear winner for everything and Postgres slow as a dog. > I would like to point out that in general the opposite is probably in effect at this point. For a software dev who downloads MySQL 5.1 and PG 8.4 and selects sane options, PG will probably have a significant advantage. MyISAM is dead.* InnoDB does not make much use of fs caching, while PG depends on it. With a "detuned" instance PG will likely have a significant advantage over InnoDB for that reason. *Pretend to be a developer and install MySQL on Windows. You will probably not get a MyISAM default. -- Rob Wultsch wult...@gmail.com