Re: [ADMIN] Do Petabyte storage solutions exist?
[EMAIL PROTECTED] (scott.marlowe) wrote in message news:[EMAIL PROTECTED]... For quite some time. I believe the max table size of 32 TB was in effect as far back as 6.5 or so. It's not some new thing. Now, the 8k row barrier was broken with 7.1. I personally found the 8k row size barrier to be a bigger problem back then. And 7.1 broke that in 2001, almost exactly four years ago. 6.5 came out on 1999-06-09, so the limit on table sizes was gone a very long time ago. The PostgreSQL limitations page in the users' lounge (http://www.postgresql.org/users-lounge/limitations.html) still says that tables are limited to 16 TB, not 32 TB. Perhaps it should be updated? -Tony ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[ADMIN] blob without largeobject api
I need to insert a Blob into a table without using the org.postgresql.largeobject.* classes. That's because the db pool I'm using (Resin) doesn't allow me to cast ((org.postgresql.PGConnection)db) to get access to the Postgres LargeObject API: // Generates ClassCastException. LargeObjectManager lom = ((org.postgresql.PGConnection)db).getLargeObjectAPI(); Since I'd like to keep using the same type of connection (without changing the pool manager or connecting directly to the db), is there a way to insert Blob (oid) data into a table without using the large object API? I looked in the examples under src/interfaces/jdbc/, but all the blob examples use that type of access. Thank you -- Edoardo Ceccarelli
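A common workaround is to store the binary data in a bytea column instead of an oid, since bytea goes through the plain JDBC API with no PostgreSQL-specific casts. A minimal sketch (the table and variable names here are hypothetical, and it needs a live server, so treat it as an illustration rather than a tested recipe):

```java
// Assumes a table created as: CREATE TABLE images (id integer, data bytea);
// "conn" is any java.sql.Connection handed out by the pool -- no cast needed.
PreparedStatement ps =
    conn.prepareStatement("INSERT INTO images (id, data) VALUES (?, ?)");
ps.setInt(1, 42);
ps.setBytes(2, fileBytes);   // byte[] previously read from the file
ps.executeUpdate();
ps.close();
```

Reading it back is the mirror image via ResultSet.getBytes(). The trade-off versus large objects is that the whole value travels through memory at once rather than being streamed in chunks.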
[ODBC] [ADMIN] psqlodbc problem?
Hi all, I'm having a problem when connecting VB6 to pgsql-7.3.1 using psqlODBC. I have a simple select (select * from mytable where pkey = 1;) on a 6-row table; when I EXPLAIN ANALYZE it, it reports 63 ms (milliseconds), but 13 seconds pass between when I submit the select and when it gives me the results. Maybe a problem with the ODBC driver? If so, what could the solution be? Thanks in advance, Jaime Casanova
[ADMIN] debugging queries from psql
Greetings, I have 2 identical queries. One of them finishes fast, the other takes ages. There are a couple of triggers that run on update. I want to know where exactly the second query spends its time. Is there any way to increase debug logging from the psql prompt so that I can see what's going on behind the scenes? Regards, Mallah. tradein_clients=# UPDATE users set last_visit=now() WHERE userid= 34866; UPDATE 1 Time: 436.591 ms tradein_clients=# UPDATE users set last_visit=now() WHERE userid= 72643; Time: Takes ages
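One way to see where the time goes, per the list's own tip, is to run the slow statement under EXPLAIN ANALYZE and raise statement logging for the session. A sketch (setting names are per 7.4 and worth verifying against your server version):

```sql
-- Time the statement itself; the triggers still fire, so their cost
-- is included in the reported total runtime:
EXPLAIN ANALYZE UPDATE users SET last_visit = now() WHERE userid = 72643;

-- As a superuser, log the duration of every statement this session runs:
SET log_min_duration_statement = 0;
```

If the EXPLAIN ANALYZE total is far larger than the plan's own cost, the time is likely going into the triggers (or into waiting on a lock held by another session) rather than into the plan itself.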
Re: [ADMIN] Do Petabyte storage solutions exist?
[EMAIL PROTECTED] (Tony Reina) writes: The PostgreSQL limitations on the users' page (http://www.postgresql.org/users-lounge/limitations.html) still says that tables are limited to 16 TB, not 32 TB. Perhaps it should be updated? There was some concern at the time it was written as to whether we were sure that we'd fixed all the places that treated block numbers as signed rather than unsigned ints. I still misdoubt that this should be considered a tested and guaranteed-to-work thing. Those who have done any testing of, eg, VACUUM FULL on greater-than-16TB tables, please raise your hands? regards, tom lane
[ADMIN] Frequent 'deadlock detected' in 7.4 ... or just my bad code?
G'day ... I've got a script that runs on all the servers that dumps IP traffic data to a 7.4 database ... they all run at the same time, but I'm starting to get the following on a reasonably regular basis: ERROR: deadlock detected at /usr/local/abin/ipaudit2ams.pl line 175. The code that is causing it, from the times I've been able to catch it, is a simple update to the same table: $upd->execute( $traffic{$company_id}{$ip_id}{$port}, $company_id, $ip_id, $date ) || die $upd->errstr; Now, the scripts are wrap'd in a BEGIN/END ... if a file fails to be loaded, I want the whole thing to rollback ... the deadlock itself, I'm presuming, is because two servers are trying to update the same $ip_id/$port/$company_id record at the same time ... Now, reading the DEADLOCKS section at: http://www.postgresql.org/docs/7.4/static/explicit-locking.html This is to be expected ... but, other than breaking the transaction itself into smaller chunks, or staggering the scripts' run times ... is there something I'm overlooking to eliminate this? I could increase the deadlock timeout as an option as well ... Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664
Re: [ADMIN] Frequent 'deadlock detected' in 7.4 ... or just my bad code?
Marc G. Fournier [EMAIL PROTECTED] writes: Now, the scripts are wrap'd in a BEGIN/END ... if a file fails to be loaded, I want the whole thing to rollback ... the deadlock itself, I'm presuming, is because two servers are trying to update the same $ip_id/$port/$company_id record, at the same time ... Actually, the problem is more likely that two servers try to update two different rows in opposite orders. It's not possible to deadlock when only one lock is involved. You could work around this by ensuring that all sessions update rows in a consistent order; for instance, at the beginning of a transaction sort your intended updates by primary key and then apply in that order. regards, tom lane
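The suggestion works because a deadlock needs two sessions to acquire the same pair of row locks in opposite orders; if every session sorts its intended updates first, no such opposite pair can exist. A tiny self-contained sketch of the idea (in Python for illustration; the key values are made up):

```python
# Two servers accumulate updates for the same rows, but their hashes
# happen to iterate in different orders. Sorting the keys gives every
# session the identical lock-acquisition order, so session A can never
# hold row X while waiting on row Y at the same time session B holds
# Y while waiting on X.
server_a_updates = ["7615:80", "1234:443", "9999:25"]   # iteration order on A
server_b_updates = ["9999:25", "7615:80", "1234:443"]   # iteration order on B

ordered_a = sorted(server_a_updates)
ordered_b = sorted(server_b_updates)

# Both sessions now visit (and lock) the rows in the same sequence.
assert ordered_a == ordered_b
print(ordered_a)
```

The same reasoning is why the Perl fix later in this thread is simply `sort keys` in each `foreach` loop.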
[ADMIN] Use 7.4.1's pg_dump in 7.2.4?
Can I use the pg_dump from 7.4.1 on 7.2.4? I tried copying a couple of libraries that it needed, and I can copy the ones it looks for one by one, but I am not sure how many it needs in all for the new version to work. The purpose is to dump/restore a pretty good-sized db (19GB) on 7.2.4, so I can upgrade it to 7.4.1. Thanks, Anjan
Re: [ADMIN] Frequent 'deadlock detected' in 7.4 ... or just my bad
On Mon, 5 Apr 2004, Tom Lane wrote: Marc G. Fournier [EMAIL PROTECTED] writes: Now, the scripts are wrap'd in a BEGIN/END ... if a file fails to be loaded, I want the whole thing to rollback ... the deadlock itself, I'm presuming, is because two servers are trying to update the same $ip_id/$port/$company_id record, at the same time ... Actually, the problem is more likely that two servers try to update two different rows in opposite orders. It's not possible to deadlock when only one lock is involved. You could work around this by ensuring that all sessions update rows in a consistent order; for instance, at the beginning of a transaction sort your intended updates by primary key and then apply in that order. Actually, unless I'm mistaken about how hashes work in perl, the update order for all servers is the same ... basically what happens is: 1. a traffic table is read in, and loaded into a hash table that is ordered by company_id, ip_id and port: $traffic{$ip_rec{$ip}{'company_id'}}{$ip_id}{$port} += $bytes1 + $bytes2; 2. a foreach loop is run on that resultant list to do the updates to the database: foreach $company_id ( keys %traffic ) { foreach $ip_id ( keys %{$traffic{$company_id}} ) { foreach $port ( keys %{$traffic{$company_id}{$ip_id}} ) { and the updates are done based on those 3 values, plus the byte value of $traffic{$company_id}{$ip_id}{$port} ... Now, my first mistake may be that I'm mis-assuming that the hashes will be read in a sorted order ... ? If this is the case, though, then sort order shouldn't be an issue, as all servers would be sorted the same way ... Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664
Re: [ADMIN] Use 7.4.1's pg_dump in 7.2.4?
Anjan Dave wrote: Purpose is to dump/restore a pretty good-sized db (19GB) on 7.2.4, so I can upgrade it to 7.4.1. pg_dump should be able to dump databases back to about 7.1, so you should be good to go. As always, if you have problems, we would like to hear about it. By the way, the latest version is 7.4.2 already.
Re: [ADMIN] Raw vs Filesystem
[EMAIL PROTECTED] (Jaime Casanova) asked: so, the real question is what is the best filesystem for optimal speed in postgresql? The smart-alec answer would be... Veritas, of course! But seriously, it depends on many factors you have not provided information about. - Different operating systems support different filesystems; you should probably specify what OS you are using. - Hardware choices have ENORMOUSLY more impact on speed than any choice of filesystems. If your hardware is too slow, changing filesystems will not help you. And you would be _way_ better off with a filesystem that is 3% slower if it improves reliability by a significant factor. -- wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org'). http://www3.sympatico.ca/cbbrowne/lsf.html We should start referring to processes which run in the background by their correct technical name:... paenguins. -- Kevin M Bealer
Re: [ADMIN] Raw devices vs. Filesystems
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Jaime Casanova) belched out: Can you tell me (or at least guide me to a place where i can find the answer) what are the benefits of filesystems over raw devices? For PostgreSQL, filesystems have the merit that you can actually use them. PostgreSQL doesn't support use of raw devices. Two major benefits of using filesystems as opposed to raw devices are that: a) The use of raw devices is dramatically non-portable; you have to reimplement data access on every platform you are trying to support; b) The use of raw devices essentially mandates that you implement some form of generic filesystem on top of them, which adds considerable complexity to your code. Two benefits to raw devices are claimed... c) It's faster. But that assumes that the cooked filesystems are implemented fairly badly. That was typically true a dozen years ago, but it isn't so typical now, particularly with a fancy caching controller. d) It guarantees application control of update ordering. Of course, with a caching controller, or disk drives that lie to one degree or another, those guarantees might be gone anyways. There are other filesystem advantages, such as e) Shifting cooked data around may be as simple as a mv, whereas reorganizing on raw disk requires DB-specific tools... And what filesystem is the best for postgresql performance? That would depend, assortedly, on what OS you are using, what kind of hardware you are running on, what kind of usage patterns you have, as well as on how you define the notion of best. Absent any indication of any of those things, the best that can be said is that depends... -- (format nil [EMAIL PROTECTED] cbbrowne acm.org) http://cbbrowne.com/info/languages.html TTY Message from The-XGP at MIT-AI: [EMAIL PROTECTED] 02/59/69 02:59:69 Your XGP output is startling.
[ADMIN] Installation Docs
Hi All, Any nice docs that cover a complete Linux and Windows installation? Thanks, Suresh A.
[ADMIN] code editing
What are your recommendations for code editing in PostgreSQL? I'm a student at Regis University in Denver, CO. Thanks Mark Bross 13565 Detroit St. Thornton, CO 80241 Email: [EMAIL PROTECTED] Ph: 303-252-9255 Fax: 303-252-9556
Re: [ADMIN] Frequent 'deadlock detected' in 7.4 ... or just my bad
On Mon, 5 Apr 2004, Tom Lane wrote: Marc G. Fournier [EMAIL PROTECTED] writes: D'oh ... just tested my assumption, it was wrong ... *sigh* okay, back to the drawing board on the code ... Can't you just change foreach $company_id ( keys %traffic ) { to foreach $company_id ( sort keys %traffic ) { etc. Ya, just saw the note from Matt on that also ... didn't realize it was *that* simple ... was going to look into using the NULL Berkeley DB driver ... the O'Reilly Programming Perl book that I have, when you look into the index at the back under Hashes: Automatic Sorting Of talks about the Berkeley DB driver ... and that's it, nothing about simple sorting like the above ... even if you look under 'Sorting: Hashes automatically', it points to the same thing ... Talk about over-complicating things :( Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664
[ADMIN] Doc for Linux Windows
Hi All, Can anyone suggest a good URL for Linux and Windows, covering everything from download to configuration? Thanks in advance. Suresh
[ADMIN] invalid input syntax for type tid: (,)
At a point the postgresql ODBC driver issues a command like this (not questioning here how useful or correct it could be): select * from table1 where ctid = '(,)'; This command works (?), returning zero rows without error, even from psql, when the db is hosted on the following systems: - debian i386 using the stable postgresql7.2.1-2 package - debian ppc using the stable postgresql7.2.1-2 package - debian i386 using the testing postgresql7.4.2-2 package - MacOSX 10.2 using the 7.3.4 compiled from source But when hosted on a MacOSX 10.3 (Panther) using postgres 7.4.x compiled from source we get: ERROR: invalid input syntax for type tid: (,) I've tried on three different machines and a myriad of different configure parameters and always get the same error... Could it be an incompatible version of some external lib? Who is responsible for the parsing of the command? As a hint: when installing 7.4.2 on Panther we got a warning about the version of bison; we have upgraded it anyway to 1.875. Pax et Bonum # dott. Jose' Cruanyes Aguilar - C.E. Soft srl # Pzza. Firenze,4 MILANO - XX Settembre 10, CREMONA # 02,336031220372,460602
Re: [ADMIN] what is the impact of Database size on Performance
Avner [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] I need some information to understand what is the impact of database size on performance. CUT 1. Is there any impact? In queries that make use of the data, yes. 2. Does one very large table impact the performance of the whole DB, meaning the overall performance and access to the other tables? Essentially no. 3. What are the ways to reduce the impact of the DB size on DB performance? DB size is so low on the radar of performance issues that I think it is basically worth ignoring. If your database is so big that it fills the drive it is on, that could slow the whole system down as the page file has to get chunked up. But for all reasonable scenarios, you are much better off focusing on reducing the number of calls that are required to get work done, and improving the efficiency of your functions and procedures. Cheers!
Re: [ADMIN] Do Petabyte storage solutions exist?
Bradley Kieser wrote: No, it isn't. Oracle is expensive but it is also the Rolls Royce, it seems. I am a strictly OpenSource man so I don't really get into the pricing thing, but I do know that it is also deal-by-deal and depending on who and what you are, the prices can vary. E.g. educational facilities have massive discounts. Military has massive prices, etc. snip You're correct about it being 'deal-by-deal' pricing. You can negotiate the salesmen down quite a bit, depending on who your company is, the field you're in, the time of year (e.g. end of quarter or year nets bigger reductions), and especially if you use a bit of cleverness by getting in-house demos by the big competitors (e.g. MSSQL and DB2). Standard Edition One is listed at around $6500 Canadian per processor, or $195 per named user. This is all totally negotiable, though. Apparently MSSQL is priced similarly, though I can't verify that. Doing price comparisons isn't very helpful; what you really need to do is analyze your requirements and see what features you actually need, or will need in the future. I have no affiliation with any of these companies, so I'm not going to start a marketing war about who's better etc. Anyways, as they say, You get what you pay for.
Re: [ADMIN] what is the impact of Database size on Performance
Avner ([EMAIL PROTECTED]) writes: I need some information to understand what is the impact of database size on performance. Few questions: 1. Is there any impact? Maybe. Maybe not. It depends on whether you query the large tables, and not least on how you query them and what indexes you have. As a matter of fact, for the same amount of information, a database A twice as big as database B could give better performance for retrieving a certain subset of that information, because in A data has been stored redundantly, maybe preaggregated. For instance, say that you have a transactions table for bank accounts, and you need to be able to retrieve the balance on a certain date. Saving the standings for each day is going to take up a lot of space, but it will be much faster to retrieve the historic balance than adding up all transactions. 2. Does one very large table impact the performance of the whole DB, meaning the overall performance and access to the other tables? If the table just sits there, no. However, it will have an impact on backup operations. 3. What are the ways to reduce the impact of the DB size on DB performance? Good indexing is of course essential. -- Erland Sommarskog, SQL Server MVP, [EMAIL PROTECTED] Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp
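The bank-balance example can be sketched in SQL (table and column names here are hypothetical, just to make the trade-off concrete): the redundant daily snapshot costs storage but turns an aggregate over the whole history into a single indexed lookup.

```sql
-- Recomputing a historic balance scans every transaction up to the date:
SELECT sum(amount) AS balance
  FROM transactions
 WHERE account_id = 42 AND tx_date <= '2004-03-31';

-- A redundantly stored daily snapshot answers the same question
-- with one indexed row fetch:
SELECT balance
  FROM daily_balance
 WHERE account_id = 42 AND as_of = '2004-03-31';
```

This is the sense in which the bigger database A can outperform the smaller database B for this query.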
[ADMIN] Problem after RH9 Install of 7.4.2
I recently built my RH9 system and updated the binary RPMs to 7.4.2. I am new to PostgreSQL, but I think I understand the PDF documentation provided. However, I couldn't find any documentation on starting with the binary RPMs. Instead, the documentation seems to be bent on building postgresql on the system. When I attempt to start the server, i.e.: -bash-2.05b$ postmaster -D /usr/local/pgsql/data FATAL: /usr/local/pgsql/data is not a valid data directory DETAIL: File /usr/local/pgsql/data/PG_VERSION is missing. -bash-2.05b$ I think I must be missing some environment variables and perhaps configuration files. For example, I manually created /usr/local/pgsql/data after reading the Administrator install instructions. It wasn't there just after installing the RPMs. Also, it appears the RH9 RPMs installed postgresql in different locations than the Administrator documentation implies. I would appreciate any help that you might provide. Regards, Bud Curtis
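For what it's worth, the Red Hat RPM layout differs from the from-source docs: the data directory is /var/lib/pgsql/data rather than /usr/local/pgsql/data, and the packaged init script handles initdb on first start. A sketch of the usual RPM workflow (paths per the Red Hat packaging; worth verifying on the system itself):

```shell
# As root: the init script creates and initializes /var/lib/pgsql/data
# (via initdb, as the postgres user) the first time it runs.
service postgresql start

# Then connect as the postgres account the RPMs created:
su - postgres
psql template1
```

The error above ("PG_VERSION is missing") simply means the manually created directory was never initialized with initdb; pointing postmaster at an empty directory produces exactly that message.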
Re: [ADMIN] Frequent 'deadlock detected' in 7.4 ... or just my bad code?
Marc G. Fournier [EMAIL PROTECTED] writes: D'oh ... just tested my assumption, it was wrong ... *sigh* okay, back to the drawing board on the code ... Can't you just change foreach $company_id ( keys %traffic ) { to foreach $company_id ( sort keys %traffic ) { etc. regards, tom lane
[ADMIN] what is the impact of Database size on Performance
I need some information to understand the impact of database size on performance. A few questions: 1. Is there any impact? 2. Does one very large table impact the performance of the whole DB, meaning the overall performance and access to the other tables? 3. What are the ways to reduce the impact of the DB size on DB performance? I would appreciate any help. Thanks Avner
[ADMIN] PostgreSQL and MS SQL Server 2000
Hi! My name is Mariusz Wojtkiewicz, I'm from Poland. In my company we use MS SQL Server 2000. Soon we will need a new database in another office. Our supplier suggests we use the free database PostgreSQL running on free Linux. All is fine and beautiful, but there is one problem: we will need to transfer all tables, records and changes from PostgreSQL to the MS SQL Server 2000 in the head office. Is there a replication mechanism in PostgreSQL that will: 1) transfer all tables, records and changes from it to MS SQL Server 2000 2) work in automatic mode 3) run in near real time, or at minimum every half hour 4) pick up changes in the reverse direction (from MS SQL Server 2000 to PostgreSQL) 5) do all of this over the Internet (TCP/IP) Regards Mariusz Wojtkiewicz Seeger Dach Sp. z o.o. Lebieniec 64 POLAND
Re: [ADMIN] Use 7.4.1's pg_dump in 7.2.4?
I am having some trouble restoring the data back on 7.4.1 (made from pg_dump on 7.2.4); that's the reason I would like to try dumping using the pg_dump version of 7.4.1. I had read in a post somewhere by Tom Lane that the pg_dump of 7.4.1 can be used to dump data on earlier versions, which is what I wanted to try. -anjan -Original Message- From: Peter Eisentraut [mailto:[EMAIL PROTECTED] Sent: Monday, April 05, 2004 1:36 PM To: Anjan Dave; [EMAIL PROTECTED] Subject: Re: [ADMIN] Use 7.4.1's pg_dump in 7.2.4? Anjan Dave wrote: Purpose is to dump/restore a pretty good-sized db (19GB) on 7.2.4, so I can upgrade it to 7.4.1. pg_dump should be able to dump databases back to about 7.1, so you should be good to go. As always, if you have problems, we would like to hear about it. By the way, the latest version is 7.4.2 already.
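The usual recipe is to run the newer pg_dump from the 7.4.1 machine against the still-running 7.2.4 server over the network, so no libraries need to be copied around at all. A sketch (the host name and install path below are made up; substitute your own):

```shell
# On the 7.4.1 box, dump the old server remotely with the NEW pg_dump ...
/usr/local/pgsql/bin/pg_dump -h old-server -p 5432 mydb > mydb.sql

# ... then, after initdb'ing the new cluster, load the dump into 7.4.1:
psql -d mydb -f mydb.sql
```

Using the newer pg_dump against the older server is the recommended direction: the new pg_dump knows how to emit a dump the new server can restore cleanly.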
Re: [ADMIN] plpgsql editor(s)?
After a long battle with technology, [EMAIL PROTECTED] (Andrew Biagioni), an earthling, wrote: Can anyone recommend an editor (windows OR linux) for writing plpgsql code, that might be friendlier than a standard text editor? Nice features I can think of might be: - smart tabbing (1 tab = N spaces) - code coloring (esp. quoted strings!) - parens/brackets matching Building an electric mode for Emacs would almost certainly be The Right Answer. Nobody has done so yet. -- cbbrowne,@,ntlug.org http://www.ntlug.org/~cbbrowne/internet.html Why do we drive on parkways and park on driveways?
Re: [ADMIN] what is the impact of Database size on Performance
1. Is there any impact? 2. Does one very large table impact the performance of the whole DB, meaning the overall performance and access to the other tables? 3. What are the ways to reduce the impact of the DB size on DB performance? I've run into 2 problems w/ very large DBs. A) Backups. I have a 50G DB. I back up to file, then delete yesterday's backup file (I have another process which transfers the backup file to tape). So I have a 50G backup file and I need another 50G of free space to run my backup process. B) Reindexing. On any given day maybe 5% (max) of new data is added to my DB, but DBCC reindex has to work on all the indexes. So DBCC reindex takes 3 hours a night. Yuck. Maybe there is a better way. But I try to keep each database as small as possible.
[ADMIN] possible bug with sequences
Colleagues, What is the internal difference between an implicit sequence (created automatically by the serial data type) and an explicit sequence (created manually)? I think I have hit something that can qualify as a bug. How to reproduce: == cut here = reports=# CREATE TABLE sometable (id serial, name text); NOTICE: CREATE TABLE will create implicit sequence sometable_id_seq for serial column sometable.id CREATE TABLE reports=# ALTER TABLE sometable RENAME TO othername; ALTER TABLE reports=# \d othername Table admin.othername Column | Type |Modifiers +-+-- id | integer | not null default nextval('admin.sometable_id_seq'::text) name | text| == cut here = Voila! pg_restore will now return an error on trying to execute SELECT pg_catalog.setval('sometable_id_seq', 4, true); because this sequence has not been created either explicitly or implicitly! It is 100% reproducible on postgresql-7.4.1 -- Victor Sudakov, VAS4-RIPE, VAS47-RIPN 2:5005/[EMAIL PROTECTED] http://vas.tomsk.ru/
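One workaround for the situation above (a sketch against 7.4; worth verifying on your own installation before relying on it) is to rename the implicit sequence along with the table, and repoint the column default, so any later dump references a sequence that actually exists:

```sql
ALTER TABLE sometable RENAME TO othername;

-- In 7.x a sequence is renamed with ALTER TABLE as well:
ALTER TABLE sometable_id_seq RENAME TO othername_id_seq;

-- Repoint the column default at the renamed sequence:
ALTER TABLE othername ALTER COLUMN id
    SET DEFAULT nextval('othername_id_seq'::text);
```

After this, `\d othername` should show a default that names an existing sequence, and setval() calls in a dump will find it.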
[ADMIN] Using windows authentication to connect to postgress
I am planning on running PostgreSQL on a Linux box. I will write an application using VB.NET. I am already able to connect using the npgsql provider to my local database running on windows (cygwin). Will it be possible for me to connect to the Linux box using the windows authentication system to the database? Thank you, Fernando [EMAIL PROTECTED]
Re: [ADMIN] Do Petabyte storage solutions exist?
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Bradley Kieser) transmitted: I think as far as PG storage goes you're really on a losing streak here because PG clustering really isn't going to support this across multiple servers. We're not even close to the mark as far as clustered servers and replication management goes, let alone the storage limit of 2GB per table. So sadly, PG would have to bow out of this IMHO unless someone else nukes me on this! Are you trying to do a bad April Fool's joke? A 2GB limit is simply nonsense. I work with a number of databases where tables contain well over 2GB of data. While there are some of the pointy-clicky approaches to clustering and replication that aren't there for PostgreSQL, a '2GB limit' is certainly NOT one of the reasons to avoid PG. -- If this was helpful, http://svcs.affero.net/rm.php?r=cbbrowne rate me http://www.ntlug.org/~cbbrowne/oses.html Let me get this straight: A company that dominates the desktop, and can afford to hire an army of the world's best programmers, markets what is arguably the world's LEAST reliable operating system? What's wrong with this picture? -- [EMAIL PROTECTED]
[ADMIN] Installation Docs for Linux and Windows
Hi Team, Any good URL that covers Linux and Windows, from download to configuration? Thanks in advance suresh a.
Re: [ADMIN] Frequent 'deadlock detected' in 7.4 ... or just my bad
1. a traffic table is read in, and loaded into a hash table that is ordered by company_id, ip_id and port: $traffic{$ip_rec{$ip}{'company_id'}}{$ip_id}{$port} += $bytes1 + $bytes2; 2. a foreach loop is run on that resultant list to do the updates to the database: foreach $company_id ( keys %traffic ) { foreach $ip_id ( keys %{$traffic{$company_id}} ) { foreach $port ( keys %{$traffic{$company_id}{$ip_id}} ) { and the updates are done based on those 3 values, plus the byte value of $traffic{$company_id}{$ip_id}{$port} ... Now, my first mistake may be that I'm mis-assuming that the hashes will be read in a sorted order ... ? If this is the case, though, then sort order shouldn't be an issue, as all servers would be sorted the same way The output of keys(%hash) is NOT ordered! Try: foreach $company_id ( sort keys %traffic ) { foreach $ip_id ( sort keys %{$traffic{$company_id}} ) { foreach $port ( sort keys %{$traffic{$company_id}{$ip_id}} ) { Matt
Re: [ADMIN] Do Petabyte storage solutions exist?
On Fri, Apr 02, 2004 at 03:32:27PM, Bricklen wrote: Anyways, as they say, You get what you pay for. This has not been my experience at all. The correlation between software price and quality looks to me to be something very close to random. A -- Andrew Sullivan | [EMAIL PROTECTED] The fact that technology doesn't work is no bar to success in the marketplace. --Philip Greenspun
[ADMIN] Migrating from a slave disk
Hi all, I have a slave disk with an old PostgreSQL installation. Now I want to migrate its information to my new primary disk with a new PGSQL installation. I have to do it this way because the old disk does not boot as primary any more. It prints a kernel error. Can somebody help me? Thanks a lot! Regards, Gastón.
Re: [ADMIN] Installation Docs for Linux and Windows
ever try www.postgresql.org? --( Forwarded letter 1 follows )- Date: Fri, 02 Apr 2004 01:26:56 +0530 To: [EMAIL PROTECTED] From: [EMAIL PROTECTED] Sender: [EMAIL PROTECTED] Subject: [ADMIN] Installation Docs for Linux and Windows Hi Team, Any good url which talk about linux and windows installation. Thanks in advance suresh a.
Re: [ADMIN] Frequent 'deadlock detected' in 7.4 ... or just my bad
D'oh ... just tested my assumption, it was wrong ... *sigh* okay, back to the drawing board on the code ... On Mon, 5 Apr 2004, Marc G. Fournier wrote: On Mon, 5 Apr 2004, Tom Lane wrote: Marc G. Fournier [EMAIL PROTECTED] writes: Now, the scripts are wrap'd in a BEGIN/END ... if a file fails to be loaded, I want the whole thing to rollback ... the deadlock itself, I'm presuming, is because two servers are trying to update the same $ip_id/$port/$company_id record, at the same time ... Actually, the problem is more likely that two servers try to update two different rows in opposite orders. It's not possible to deadlock when only one lock is involved. You could work around this by ensuring that all sessions update rows in a consistent order; for instance, at the beginning of a transaction sort your intended updates by primary key and then apply in that order. Actually, unless I'm mistaken about how hashes work in perl, the update order for all servers is the same ... basically what happens is: 1. a traffic table is read in, and loaded into a hash table that is ordered by company_id, ip_id and port: $traffic{$ip_rec{$ip}{'company_id'}}{$ip_id}{$port} += $bytes1 + $bytes2; 2. a foreach loop is run on that resultant list to do the updates to the database: foreach $company_id ( keys %traffic ) { foreach $ip_id ( keys %{$traffic{$company_id}} ) { foreach $port ( keys %{$traffic{$company_id}{$ip_id}} ) { and the updates are done based on those 3 values, plus the byte value of $traffic{$company_id}{$ip_id}{$port} ... Now, my first mistake may be that I'm mis-assuming that the hashes will be read in a sorted order ... ? If this is the case, though, then sort order shouldn't be an issue, as all servers would be sorted the same way ... Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664
---(end of broadcast)--- TIP 8: explain analyze is your friend
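Tom's suggested workaround (collect the intended updates up front, sort them by the primary-key columns, then apply them in that order) can be sketched as below. Python is used here purely for illustration since the original scripts are Perl, and the nested structure and names mirror the %traffic hash from the script; none of this is the actual code.

```python
# Sketch of the consistent-ordering workaround: if every session applies
# its UPDATEs in the same global order, two sessions can block each other
# but can never deadlock (deadlock needs two locks taken in opposite orders).

def ordered_updates(traffic):
    """Flatten {company_id: {ip_id: {port: bytes}}} into a list of
    (company_id, ip_id, port, bytes) tuples sorted by the key columns."""
    rows = [
        (company_id, ip_id, port, nbytes)
        for company_id, ports_by_ip in traffic.items()
        for ip_id, bytes_by_port in ports_by_ip.items()
        for port, nbytes in bytes_by_port.items()
    ]
    return sorted(rows)  # tuple comparison sorts by (company_id, ip_id, port)

traffic = {2: {10: {80: 512}}, 1: {7: {443: 256, 25: 128}}}
for company_id, ip_id, port, nbytes in ordered_updates(traffic):
    # here each row would become:
    #   UPDATE traffic SET bytes = bytes + ... WHERE company_id = ... AND ...
    pass
```

The key point is that the order comes from sorting, not from whatever order the rows happened to be read in, so every server walks the rows identically.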
Re: [ADMIN] Frequent 'deadlock detected' in 7.4 ... or just my bad
That appears to have fixed it, thanks ... at least it hasn't happened in a few hours, and it was happening at least once an hour previously ... On Mon, 5 Apr 2004, Matt Clark wrote: 1. a traffic table is read in, and loaded into a hash table that is ordered by company_id, ip_id and port: $traffic{$ip_rec{$ip}{'company_id'}}{$ip_id}{$port} += $bytes1 + $bytes2; 2. a foreach loop is run on that resultant list to do the updates to the database: foreach $company_id ( keys %traffic ) { foreach $ip_id ( keys %{$traffic{$company_id}} ) { foreach $port ( keys %{$traffic{$company_id}{$ip_id}} ) { and the updates are done based on those 3 values, plus the byte value of $traffic{$company_id}{$ip_id}{$port} ... Now, my first mistake may be that I'm mis-assuming that the hashes will be read in a sorted order ... ? If this is the case, though, then sort order shouldn't be an issue, as all servers would be sorted the same way The output of keys(%hash) is NOT ordered! Try: foreach $company_id ( sort keys %traffic ) { foreach $ip_id ( sort keys %{$traffic{$company_id}} ) { foreach $port ( sort keys %{$traffic{$company_id}{$ip_id}} ) { Matt Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664 ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
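Matt's point generalizes beyond Perl: raw hash/dict iteration order depends on insertion order or internal hashing and so is not guaranteed to match between processes, while sorted key order is identical for every process. A minimal illustration of the principle, in Python rather than Perl (Perl's keys/sort behave analogously for this purpose):

```python
# Two workers build the same mapping but insert the keys in different orders.
worker_a = {"beta": 1, "alpha": 2}
worker_b = {"alpha": 2, "beta": 1}

# Raw iteration order reflects insertion order (Python) or internal hashing
# (Perl) -- either way, it is not guaranteed to agree between workers.
print(list(worker_a), list(worker_b))

# Sorting the keys gives every worker the same deterministic order,
# which is exactly what makes the cross-server deadlock go away.
assert sorted(worker_a) == sorted(worker_b) == ["alpha", "beta"]
```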
Re: [ADMIN] Migrating from a slave disk
On Mon, 5 Apr 2004, Gastón Simone wrote: Hi all, I have a slave disk with an old PostgreSQL installation. Now I want to migrate its information to my new primary disk with a new PGSQL installation. I have to do it this way because the old disk does not boot as primary any more. It prints a kernel error. Can somebody help me? Thanks a lot! Is this a linux box running kernel level mirroring? If so you can just mount the drive to access it. If you've got a boot disk for the old drive you just change the jumpers to move the old one in place of the new one and boot from the boot floppy. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [ADMIN] Raw devices vs. Filesystems
No point in beating a dead horse (other than the sheer joy of the thing) since postgres does not have raw device support, but ... raw devices, at least on solaris, are about 10 times as fast as cooked file systems for Informix. This might still be a gain for postgres' performance, but the portability issues remain. raw device use in Informix is safer in terms of data because Informix does not ever have to use the regular file system and so issues of buffering and so on go away. My understanding -- fortunately not ever tried in the real world -- is that postgres' WAL log system is as reliable as Informix writing to raw devices. raw devices can't be copied or tampered with using regular file tools (mv, cp etc.); this changes how backups get done but also adds a layer of insulation between valuable data and users. Greg Williamson DBA GlobeXplorer LLC -Original Message- From: Christopher Browne [mailto:[EMAIL PROTECTED] Sent: Mon 3/29/2004 10:28 AM To: [EMAIL PROTECTED] Cc: Subject: Re: [ADMIN] Raw devices vs. Filesystems After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Jaime Casanova) belched out: Can you tell me (or at least guide me to a place where I can find the answer) what are the benefits of filesystems over raw devices? For PostgreSQL, filesystems have the merit that you can actually use them. PostgreSQL doesn't support use of raw devices. Two major benefits of using filesystems as opposed to raw devices are that: a) The use of raw devices is dramatically non-portable; you have to reimplement data access on every platform you are trying to support; b) The use of raw devices essentially mandates that you implement some form of generic filesystem on top of them, which adds considerable complexity to your code. Two benefits to raw devices are claimed... c) It's faster. But that assumes that the cooked filesystems are implemented fairly badly. 
That was typically true, a dozen years ago, but it isn't so typical now, particularly with a fancy caching controller. d) It guarantees application control of update ordering. Of course, with a caching controller, or disk drives that lie to one degree or another, those guarantees might be gone anyways. There are other filesystem advantages, such as e) Shifting cooked data around may be as simple as a mv, whereas reorganizing on raw disk requires DB-specific tools... And what filesystem is the best for postgresql performance? That would depend, assortedly, on what OS you are using, what kind of hardware you are running on, what kind of usage patterns you have, as well as on how you define the notion of best. Absent any indication of any of those things, the best that can be said is "that depends"... -- (format nil [EMAIL PROTECTED] cbbrowne acm.org) http://cbbrowne.com/info/languages.html TTY Message from The-XGP at MIT-AI: [EMAIL PROTECTED] 02/59/69 02:59:69 Your XGP output is startling. ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [ADMIN] code editing
On Fri, 2 Apr 2004, Mark Bross wrote: What are your recommendations for code editing in PostgreSQL? I'm a student at Regis University in Denver, CO. Do you mean for editing the backend code itself, stylewise, or do you mean for editing your own code, like plpgsql functions? I'll assume you mean an editor for plpgsql or queries or scripts. I myself am an old schooler who prefers simple editors like vi or pico. But almost any editor you like will work. Are you on unix or windows? ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [ADMIN] Frequent 'deadlock detected' in 7.4 ... or just my bad
Marc G. Fournier [EMAIL PROTECTED] wrote: On Mon, 5 Apr 2004, Tom Lane wrote: Marc G. Fournier [EMAIL PROTECTED] writes: D'oh ... just tested my assumption, it was wrong ... *sigh* okay, back to the drawing board on the code ... Can't you just change foreach $company_id ( keys %traffic ) { to foreach $company_id ( sort keys %traffic ) { etc. Ya, just saw the note from Matt on that also ... didn't realize it was *that* simple ... was going to look into using the NULL Berkeley DB driver ... the O'Reilly Programming Perl book that I have,... [snip] Kind of OT for this mailing list but... What you just ran into is why I recommend to *anybody*, even the most experienced of designers/engineers/coders/what-have-you, that they start with Learning Perl. (Also an O'Reilly book.) Saves no end of grief ;). IMO, Programming Perl is mainly useful after you already have a good handle on the language. Jim ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] Problem after RH9 Install of 7.4.2
[EMAIL PROTECTED] (Bud Curtis) wrote: [snip] When I attempt to start the server, i.e.: -bash-2.05b$ postmaster -D /usr/local/pgsql/data FATAL: /usr/local/pgsql/data is not a valid data directory DETAIL: File /usr/local/pgsql/data/PG_VERSION is missing. -bash-2.05b$ [snip] You need to read the docs. Start here http://www.postgresql.org/docs/7.4/static/installation.html Skip the parts about building pgsql. Read the rest. Jim ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
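For the record, that FATAL message means the directory handed to -D was never initialized (or the wrong path was given): it is initdb that creates PG_VERSION, along with the rest of the cluster layout. A quick stand-alone check, sketched here in Python with only the standard library (the function name is just illustrative):

```python
import os

def looks_like_pgdata(path):
    """Heuristic check: an initialized PostgreSQL data directory
    always contains a PG_VERSION file written by initdb."""
    return os.path.isfile(os.path.join(path, "PG_VERSION"))

# If this prints False, initialize the cluster first (as the postgres user):
#   initdb -D /usr/local/pgsql/data
# and only then start the postmaster.
print(looks_like_pgdata("/usr/local/pgsql/data"))
```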