RE: Tablespace management.
This is not a comprehensive test by any means, though I think it should allay at least a few fears (and it gets me out of housework for half an hour or so). Comments inline below.

SQL> select initial_EXTENT, next_EXTENT
  2  from dba_tablespaces
  3  where tablespace_name='XXX';

INITIAL_EXTENT NEXT_EXTENT
-------------- -----------
        131072      131072

-- 128k uniform LMT

SQL> SELECT ROUND(SUM(BYTES)/(1024*1024),2) FREE_SPACE
  2  FROM DBA_FREE_SPACE
  3  where tablespace_name='XXX';

FREE_SPACE
----------
   4064.13

-- 4GB or so free

SQL> CREATE TABLE TEST_LMT(C1 CHAR(255))
  2  TABLESPACE XXX;

Table created.

-- create my table

SQL> BEGIN
  2    FOR I IN 1..10 LOOP
  3      EXECUTE IMMEDIATE 'ALTER TABLE TEST_LMT ALLOCATE EXTENT';
  4    END LOOP;
  5  END;
  6  /
BEGIN
*
ERROR at line 1:
ORA-01653: unable to extend table XXX.TEST_LMT by 8 in tablespace XXX
ORA-06512: at line 3

-- fill up the tablespace (and generate an automatic email to myself for Monday when I'm not in - oops)

SQL> SELECT COUNT(*)
  2  FROM DBA_EXTENTS
  3  WHERE SEGMENT_NAME='TEST_LMT';

  COUNT(*)
----------
     32513

-- shame it wasn't 32768, but never mind

SQL> SET TIMING ON
SQL> DROP TABLE TEST_LMT;

Table dropped.

Elapsed: 00:00:01.04
SQL> SPOOL OFF

So DROP performance seems good up to at least 32k extents, and my guess is that most objects that actually eat up 4GB or more of space are good candidates for partitioning anyway. All of the above on 9.2.0.3 on Win2k.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Cary Millsap
Sent: 30 May 2003 17:55
To: Multiple recipients of list ORACLE-L
Subject: RE: Tablespace management.

Wow. Maybe someone on the list has the time and motive to construct a test to determine how many extents for a segment in a ULMT are bad. My guess from some tests we did a couple of years ago is that it will take hundreds of thousands of extents before even DROP performance will suffer. And I can't think of *anything* that would make having even hundreds of millions of extents a bad idea for INSERTs, UPDATEs, MERGEs, or DELETEs.
The only possible downsides of huge numbers of extents that I can think of are perhaps:

* During the INSERT, UPDATE, or MERGE, what is the overhead of the actual allocation of the ULMT extent? (This actually may have nothing to do with how many extents are already there.)
* During checkpoints on RAC systems, does the number of extents matter the way it did when Jonathan Lewis showed a problem with DMT and OPS a few years ago?
* Does a huge bitmap section in the head of a data file cause any performance problems for backup and recovery?

Aside from that, I can't imagine any more downside of huge numbers of ULMT extents than there is from having the Unix filesystem extents that most of us have right now and never notice.

Cary Millsap
Hotsos Enterprises, Ltd.
http://www.hotsos.com
Upcoming events:
- Hotsos Clinic 101 in Reykjavik, Ottawa, Dallas, Washington, Denver, Sydney
- Visit www.hotsos.com for schedule details...

-----Original Message-----
Sent: Friday, May 30, 2003 8:50 AM
To: Multiple recipients of list ORACLE-L

Jared,

It's rather simple. If you follow the rules of third normal form you have a table with a certain number of rows, and a second with a certain number of rows for each row in the first table. Obviously the second table needs more space than the first. Now if you use dictionary management you can set the storage parameters of each table individually, but if you're using local management they both have the same extent sizes. This leads one to keep the extent sizes small to accommodate the first table, and so have large numbers of extents for the second table. True fragmentation, namely those small useless extents that land between larger used extents, is eliminated in local management, but then I have not had those problems with dictionary management either, unless someone makes the case for moving a table, and that's very rare.
Dick Goulet
Senior Oracle DBA
Oracle Certified 8i DBA

-----Original Message-----
Sent: Thursday, May 29, 2003 8:25 PM
To: [EMAIL PROTECTED]
Cc: Goulet, Dick
Importance: High

Dick,

I'm trying to follow your line of thought, but I think I missed the path. Objects may not have the same storage requirements, but what does that matter? The only way I can make sense of what you say is if you're trying to have all objects occupy a single extent, and there's not much point in that.

Jared

Goulet, Dick [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
05/29/2003 03:51 PM
Please respond to ORACLE-L
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc:
Subject: RE: Tablespace management.

Thomas,

With the exception of temp and rollback tablespaces I have not used
FW: Tablespace management.
-----Original Message-----
From: Niall Litchfield [mailto:[EMAIL PROTECTED]
Sent: 31 May 2003 14:42
To: '[EMAIL PROTECTED]'
Subject: RE: Tablespace management.

Hi

It might be a sobering exercise to work out how much the time and effort spent getting computing storage needs into an exact science has cost. Now I know that ULMTs haven't been around that long, and people may not have been on 8.1.6 or higher for that long, but your post does highlight for me just why they are a good thing. It is very difficult to get storage needs calculated correctly - especially if your business/app behaviour is unpredictable, say, 3 years in advance - and if you get it wrong you may well have to reorg. If you use ULMTs you won't ever have to reorg for fragmentation reasons, and probably not for performance reasons (I can't think of a single performance problem - except maybe the DBA_EXTENTS view, but then a) how often and when do you query this, and b) who other than the DBA does it hurt?).

Niall

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Goulet, Dick
Sent: 30 May 2003 18:40
To: Multiple recipients of list ORACLE-L
Subject: RE: Tablespace management.

Steve,

I'm not sure I'd call all of the functionality that has been added over the years worth it. Way too many of them have caused more trouble than they're worth, like descending indexes. And given the drivel that I've seen from many a third party vendor in the past (PeopleSoft and their damned 16K extents) this can certainly get turned into another nightmare. As far as fragmentation is concerned, I've NOT had to do any in the last few years, mainly due to spending a lot of time and effort to get computing storage needs into an exact science around here. That has been due to disk storage space not being an invisible cost item, but instead a significant one that we're constantly battling with.
Sure they've become cheaper, but when we're buying GBs of the stuff, mirrored, from a reliable vendor, those half MBs wasted begin to add up FAST. Therefore I still contend that everything inside a single tablespace does not need a uniform extent size. If one size fits all was absolutely true there would be a lot fewer problems in this world.

Dick Goulet
Senior Oracle DBA
Oracle Certified 8i DBA

-----Original Message-----
Sent: Friday, May 30, 2003 1:06 PM
To: Multiple recipients of list ORACLE-L

I think you're missing the point of the last message. What's wrong with multiple extents if the extent size is a multiple of a multiblock read? What's wrong with having two tablespaces? I'd definitely suggest reading "How to Stop Defragmenting and Start Living: The Definitive Word on Fragmentation" (http://otn.oracle.com/deploy/availability/pdf/defrag.pdf). No one is suggesting *everything* should have a single extent size, but everything in a tablespace should. LMT is the future and dovetails nicely with a lot of the functionality we've seen added in recent releases. What good are online table/index rebuilds if the space reclaimed is far outweighed by the space wasted by the fragmentation left behind?

S-

On Fri, 30 May 2003, Goulet, Dick wrote:

Richard,

My troubles come mainly from PeopleSoft and some in-house created applications. I'll use the in-house applications as the example since they're simpler. Our CIM system has tables that contain very few rows of data, like the identification information for each robot (CELLS). Now there are only 30 robots on the longest/most complex line we have (BTW: due to the duhvelopers of this application each line needs its own instance on its own server, don't ask why). Now this table NEVER grows beyond 512KB in size. But each robot can have up to 1024 component slots (512 on each side) that need to be defined with what is in them (SLOTS).
This table easily gets into a couple of MB, but then sits there since we do tons of updates but no more inserts. If we're doing LMTs, then to optimize the storage on this mess I either need two tablespaces, or else I set the uniform extent size to 512K and allow the SLOTS table to have several extents. This example is one of the simpler ones; there are a lot more that get even more problematic, like those for our test data. If 10i has bad news on this front it may well become the straw that breaks the camel's back for Oracle around here. We're already toying around with DB2.

Dick Goulet
Senior Oracle DBA
Oracle Certified 8i DBA

-----Original Message-----
Sent: Friday, May 30, 2003 11:30 AM
To: Multiple recipients of list ORACLE-L

Hi Dick,

What do you consider to be a large number of extents in a LMT? At what point do you consider performance
RE: Tablespace management.
Then again, why query DBA_EXTENTS? Pretty much anything that it's got you can get from DBA_SEGMENTS, the exception being the file/block_id/blocks info. I would contend that the reason we want that level of information is when mapping segments to file position to determine I/O rates, which we can get from segment-level stats in 9i anyway. And of course, everyone is already on 9i by now, as they start getting ready for a migration to 10 :-)

--- Kirtikumar Deshpande [EMAIL PROTECTED] wrote:

Additional downside item:

* Queries against DBA_EXTENTS will take a bit longer to return.

- Kirti
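[Editor's note: Niall's "use DBA_SEGMENTS instead" point is easy to illustrate. A sketch only - the segment name is a placeholder, not from the thread:]

```sql
-- Most of what people want from DBA_EXTENTS, without walking the extent map:
SELECT owner, segment_name, segment_type, tablespace_name,
       extents, blocks,
       ROUND(bytes/1024/1024, 2) AS mb
FROM   dba_segments
WHERE  segment_name = 'TEST_LMT';
```

Only the per-extent file#/block_id/blocks detail is missing, which is exactly the mapping-to-file-position case he describes.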
Goulet, Dick [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
05/29/2003 03:51 PM
Please respond to ORACLE-L
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc:
Subject: RE: Tablespace management.

Thomas,

With the exception of temp and rollback tablespaces I have not used locally managed tablespaces, just because all objects must have the same sized extents. I do not see most tables sharing an equal need for storage, and using dictionary management allows one to do that, at a cost I'll admit, but one that is much easier to swallow.
Dick Goulet
Senior Oracle DBA
Oracle Certified 8i DBA

-----Original Message-----
Sent: Thursday, May 29, 2003 3:25 PM
To: Multiple recipients of list ORACLE-L

After reading the documents I've recommended using LOCAL, UNIFORM, AUTO as the options for tablespace management. Does anyone have any bad experiences with these? AUTOALLOCATE seems to come up with extents that are much smaller than I want, and MANUAL segment management requires the use of FREELISTs (and I know that there are problems with freelists freeing up space correctly, especially in a parallel environment). I can't find any basis for making a decision
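[Editor's note: for reference, the combination being asked about - locally managed, uniform extents, automatic segment space management - is declared like this. A sketch only; the tablespace name, file path, and sizes are placeholders:]

```sql
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/PROD/app_data01.dbf' SIZE 512M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  SEGMENT SPACE MANAGEMENT AUTO;
```

Replacing UNIFORM SIZE 1M with AUTOALLOCATE gives the system-managed extent sizing the poster found too small.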
RE: Parallel Query Server died
Funeral at 8pm EST, movie at 11pm EST on TNT ...

Raj

-----Original Message-----
From: Rajesh Dayal [mailto:[EMAIL PROTECTED]
Sent: Saturday, May 31, 2003 2:50 AM
To: Multiple recipients of list ORACLE-L
Subject: RE: Parallel Query Server died

After a long time .. LOL . ;-)

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Vladimir Barac
Sent: Friday, May 30, 2003 2:35 PM
To: Multiple recipients of list ORACLE-L
Subject: Re: Parallel Query Server died

Kool, now some cyber funeral will take place...

----- Original Message -----
From: shuan.tay
To: Multiple recipients of list ORACLE-L
Sent: Friday, May 30, 2003 11:19
Subject: Parallel Query Server died

Dear all DBAs,

What should I check for this error? "ORA-12805: parallel query server died unexpectedly" The SQL statement was running well before. There's nothing in the alert log about this error. I'm using Oracle 8.1.6 on Redhat 7.2.

Thanks and have a nice day.

*This e-mail message is confidential, intended only for the named recipient(s) above and may contain information that is privileged, attorney work product or exempt from disclosure under applicable law. If you have received this message in error, or are not the named recipient(s), please immediately notify corporate MIS at (860) 766-2000 and delete this e-mail message from your computer. Thank you.*
Oracle 9IAS
Hi list, I'm trying to install 9IAS 1.0.2.2 in a W2000 SP3 but nothing comes up when clicking the setup button. Checked in metalink and nothing related to it. TIA Ramon E. Estevez [EMAIL PROTECTED] 809-535-8994 -- Please see the official ORACLE-L FAQ: http://www.orafaq.net -- Author: Ramon E. Estevez INET: [EMAIL PROTECTED] Fat City Network Services-- 858-538-5051 http://www.fatcity.com San Diego, California-- Mailing list and web hosting services - To REMOVE yourself from this mailing list, send an E-Mail message to: [EMAIL PROTECTED] (note EXACT spelling of 'ListGuru') and in the message BODY, include a line containing: UNSUB ORACLE-L (or the name of mailing list you want to be removed from). You may also send the HELP command for other information (like subscribing).
RE: Oracle 9IAS
Ah! The old trick of the symjitc.dll. It's working now.

Ramon E. Estevez
[EMAIL PROTECTED]
809-535-8994
RE: HELP URGENT RMAN FAILS - FILE?????
Paula - I don't see where anyone replied, so . . .

In Oracle 8i, backing up the control file with RMAN and then recovering from that control file is a dicey business. I recall that part of the problem is the point in the backup at which the control file is backed up - it may not be after the file backups. Anyway, I gave up on that backup and just issue a separate statement after the RMAN backup. I think in 9i this is much better, and maybe RMAN backs the control file up automatically. I think Robert Freeman's book has information about this, but my copy is at work. I know, I know, I should have two copies, right? Sorry this isn't much help; hopefully you've made more progress by now.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED]

-----Original Message-----
Sent: Friday, May 30, 2003 5:45 PM
To: Multiple recipients of list ORACLE-L

Methinks resync - could I be right? I had taken a tablespace offline and had not resynched with the catalog - kind of makes sense.

-----Original Message-----
Sent: Friday, May 30, 2003 5:34 PM
To: '[EMAIL PROTECTED]'

Guys,

Running Oracle 8.1.7 RMAN with automated backups - no problem. Wishing to recover, I restored the controlfile from backup, then issued the following:

RMAN> run {
  execute script alloc_all_tapes;
  restore database;
  recover database noredo;
  execute script rel_all_tapes;
}

It failed with:

RMAN-03002: failure during compilation of command
RMAN-03013: command type: restore
RMAN-03002: failure during compilation of command
RMAN-03013: command type: IRESTORE
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 173 found to restore

Yet, when I query the target database (mounted) and look for it:

SQL> select file#, ts#, status, name from v$datafile
  2  where file#=173;

no rows selected

MAKES NO SENSE AT ALL!!! WHERE IS IT COMING FROM WITH FILE# 173 - IF THE MOUNTED TARGET DATABASE DOESN'T LIST 173 FROM V$DATAFILE OR V$RECOVER_FILE!!!
Re: RMAN - Remote vs Local Backups
Metalink Note 73431.1 has an RMAN Compatibility Matrix. For a 9.2.0 target it boils down to (the ">=" signs appear to have been eaten by the mail formatting):

Target   RMAN executable   Catalog DB   Catalog Schema
9.2.0    >= 9.0.3          >= 8.1.x     >= RMAN executable

An RMAN920 schema can be created in an Oracle 8.1.6 database that contains the recovery catalog for Oracle 9.2.0 databases.

Have Fun :)

DENNIS WILLIAMS wrote:

Jared - Excellent point. My understanding is that the RMAN catalog must run on an Oracle version equal to or greater than the target instances. Has anyone found this requirement to be a big pain? I am looking at configuring RMAN on another set of servers, but they are Oracle 9.2 and my current RMAN server is 8.1.6, and it would need an O.S. upgrade to move to 9.2.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED]

-----Original Message-----
Sent: Thursday, May 29, 2003 8:20 PM
To: Multiple recipients of list ORACLE-L

Dennis,

The cron job can run on B only if it is the same version of Oracle that is on A.

Jared

DENNIS WILLIAMS [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
05/29/2003 03:14 PM
Please respond to ORACLE-L
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc:
Subject: RE: RMAN - Remote vs Local Backups

Walter - What you describe is the standard RMAN configuration. Box B contains the RMAN catalog, therefore it must command the backup, and so the cron job must run on Box B. But the actual backup occurs on the target machine (A in your example). If you back up to tape, you must have an MML (Media Management Library). You can also back up to disk (that is what I do). Since the actual backup occurs on the target machine, not much network traffic is involved. RMAN sends some commands, the target sends some status back, and that is about it.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED]

-----Original Message-----
Sent: Thursday, May 29, 2003 4:30 PM
To: Multiple recipients of list ORACLE-L

Thanks Tim, Dennis and Ron for your feedback. I appreciate it. Let me clarify what I'm seeking.
In my example, I am using a centralized catalog which is on its own dedicated database/server, and backups are to tape. BCVs are not involved. Normally, in my experience, RMAN backups are initiated from the target server via a cron job. But I've seen a case where a cron job for an RMAN backup was run from a box that was different from the database server machine. I find this configuration strange and confusing because it implies this was done for a reason, and it makes life difficult to find out where all the backups are running from. In the scenario of backing up the database on box A via an rman/cron job on box B, is this particular configuration more network resource intensive, and therefore slower, versus the backup being initiated from the same machine as t
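[Editor's note: the setup Walter describes - cron on catalog box B driving a backup that actually executes on target box A - boils down to something like the following sketch. Connect strings, passwords, and script/file names are placeholders, not from the thread:]

```
# cron entry on box B, the catalog server (8.1.7-style invocation)
rman target sys/passwd@boxA_db rcvcat rman/passwd@catdb cmdfile backup_boxA.rcv
```

where backup_boxA.rcv contains an ordinary run block:

```
run {
  allocate channel t1 type 'sbt_tape';
  backup database format 'db_%d_%s_%p';
  release channel t1;
}
```

The channel, and hence the actual datafile I/O and tape traffic, lives in a server process on box A; only RMAN commands and status cross the network to box B, which is why the remote-cron layout costs little in bandwidth.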
Re: archiving data
but i think there is a sqlplus limitation of 64k, and any data longer than 64k will get truncated in this case too. correct me if i am wrong: even if you set LONG to a very high value, data more than 64k in length will get truncated.

sai

--- Arup Nanda [EMAIL PROTECTED] wrote:

For situations like this you have the COPY command of SQL*Plus. Remember, it's a SQL*Plus command like set, btitle, etc., not a sql command you can embed inside a pl/sql block. You could create a table similar in structure to the main table and then populate the data:

SQL> SET LONG 99 -- this is needed to set the max size of the long data; otherwise it gets truncated.

SQL> COPY FROM SCHEMA_NAME/[EMAIL PROTECTED] -
>  APPEND HOLDINGTABLE -
>  USING SELECT * FROM MAINTABLE WHERE DATE_COL < SYSDATE - 12*30

Note the use of hyphens after the lines. SQL*Plus commands are expected to be on one line; since I am continuing on to the next, I used the continuation character, hyphen. This by default commits after all the rows are loaded. You can control the commit frequency by specifying two parameters:

-- sets 100 records per array
SET ARRAYSIZE 100
-- sets a commit to occur after every 200 batches, or 20,000 records
SET COPYCOMMIT 200

This process is fairly simple and can be easily automated using a shell script. Any error raised by the sql block can be checked. Hope this helps.

Arup Nanda
www.proligence.com

----- Original Message -----
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Friday, May 30, 2003 7:04 PM

hi

there is this project that is going on for archiving old data from the oltp system that is older than 12 months and then purging it in the main db. the tables that are to be archived are with long rows. they cannot be converted to lobs since this is a third party application. here is where the problem lies. oracle support, when contacted, says either mv to lobs to make this move easier, or use oci ..blah.blah.. to get this working if you want to remain in longs. there are some options i have thought about:

1.
export /import ..but should make this highly automated since the main db and archival db will be on different hosts, this will not be monitored, and the import has to go thru w/o issues etc.

2. create snapshot - but they don't work with long.. hence not an option.

3. getting sqlldr to work, but i think it has that 32k column size limitation.

so can you please suggest whether there is something else i can do, or whether option 1 is the best given the environment. the oracle is 8.1.7.2 on sun 2.8.

thanks
sai
RE: quickest method
Sorry about the late response - - - catching up. Do you remember the syntax to do this? How do you set up UNIX pipes across a network?

Thanks
Babette

-----Original Message-----
Greenfield
Sent: Thursday, May 15, 2003 2:02 PM
To: Multiple recipients of list ORACLE-L

For a two terabyte data transfer at one client, we wrote a C extractor, ran it in four-plicate on partitioned data, sent the output directly to pipes, and sent those pipes zooming across the 400 Mbit network into waiting pipes that served as the input files for direct path SQL*Loader. That was amazingly fast, and loads of fun, sort of like a data warehouse Rube Goldberg device. Actually, it was blazingly fast in test; I left the project before it hit production...

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Mladen Gogala
Sent: Thursday, May 15, 2003 12:27 AM
To: Multiple recipients of list ORACLE-L
Subject: Re: quickest method

Essentially, yes, that's precisely the way to do it. You do an array fetch into a C/C++ array and write it out. As for setbuf, setvbuf and printf, in my opinion, the best thing to do is to use mmap to map the file to a buffer and then use sprintf to populate the output buffer. If you don't want to play with mmap/munmap you can always use just a simple and unpretentious write call, just like printf does.

On 2003.05.14 23:17 Ryan wrote:

I know C/C++ moderately well. I thought PRO*C just added pragmas to embed SQL? Why is it faster? So you're just doing some selects into variables and printf() to a file?

----- Original Message -----
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Wednesday, May 14, 2003 10:22 PM

PRO*C using array fetches. Nothing's faster than a simple printf(), especially if you increase output buffer settings with setbuf()...

on 5/14/03 6:01 PM, Ryan at [EMAIL PROTECTED] wrote:

If SQL*Loader is the fastest way to load data, what is the fastest way to unload it? UTL_FILE is notoriously slow.
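[Editor's note: for comparison with the PRO*C pipeline described above, the baseline everyone is trying to beat - a plain SQL*Plus spool to a flat file - looks like this sketch. The table and column names are placeholders:]

```sql
set pagesize 0 linesize 32767 trimspool on feedback off heading off
spool /tmp/emp_unload.dat
select empno || ',' || ename || ',' || to_char(sal) from emp;
spool off
```

This works fine for modest volumes, but the per-row formatting and fetch overhead is exactly what the array-fetch/printf() approach avoids.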
----- Original Message -----
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Wednesday, May 14, 2003 7:01 PM

Carol,

Hands down, SQL*Loader is the fastest. Export/Import is rather slow. SQL and PL/SQL commands can be on either side of exp/imp, depending on what you are doing and how well the code is written. E.g. SQL statements are fairly fast, PL/SQL FOR loops are not, and PL/SQL bulk processing is fast. Unless you need the programmatic abilities of PL/SQL, use SQL*Loader. Exp/imp can still be useful, even with SQL*Loader: use exp/imp to build your tables, then build the indexes and constraints after the data is loaded. There's no pat answer as to how to load data; it depends on your requirements. There's probably no point in messing with SQL*Loader if the data sets are small and you can easily export from another database and then import. If the data is in CSV or flat files, though, and/or is very large, SQL*Loader is very fast.

HTH,
Jared

Carol Legros [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
05/14/2003 02:57 PM
Please respond to ORACLE-L
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc:
Subject: quickest method

I'm curious to know whether anyone out there has seen a comparison discussing the pros and cons, and/or results of any simulation tests, that compare the speed with which data can be loaded into a target database from a source (database or flat file) using the following 3 methods:

(i) Export (from source), Import (to target)
(ii) SQL*Loader (to target)
(iii) SQL or PL/SQL commands (insert to target) using a Database Link between source and target

I'm working on a data loading strategy, and since there are many ways to skin a cat, I'm considering these as options. Of course, there are other criteria that impact the method chosen, but assuming all things are equal (i.e. network bandwidth is good, access to both source and target is not an issue, etc.), which of these methods would be quickest?

Thanks,
Carol
RE: archiving data
Sai - I would research that before making an assumption. The COPY command is a bit different from anything else in Oracle. I found the following note with a quick Google search:

"Note that sqlplus COPY has a port specific limit on the maximum size of LONG you can copy. Refer to the SQLPLUS User Guide and your port specific documentation to determine if this is feasible or not."

Also, I haven't seen where anyone has suggested that transportable tablespace might meet your requirement. Have you considered that?

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED]
Hope this helps.

Arup Nanda
www.proligence.com

- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Friday, May 30, 2003 7:04 PM

hi there is this project that is going on for archiving old data from the oltp system that is older than 12 months and then purging them in the main db. the tables that are to be archived are with long rows. they cannot be converted to lobs since this is a third party application. here is where the problem lies. oracle support when contacted says either mv to lobs to make this move easier or use oci ..blah.blah.. to get this working if you want to remain in longs. there are some options i have thought about:
1. export/import ..but should make this highly automated since the main db and archival db will be on different hosts, this will not be monitored and import has to go thru w/o issues etc.
2. create snapshot - but they dont work with long..hence not an option.
3. getting sqlldr to work but i think it has that 32k column size limitation.
so can you please suggest whether there is something else i can do or option 1 is the best given the environment. the oracle is 8.1.7.2 on sun 2.8.
thanks
sai

-- Please see the official ORACLE-L FAQ: http://www.orafaq.net
-- Author: Sai Selvaganesan INET: [EMAIL PROTECTED]
Fat City Network Services -- 858-538-5051 http://www.fatcity.com
San Diego, California -- Mailing list and web hosting services
- To REMOVE yourself from this mailing list, send an E-Mail message to: [EMAIL PROTECTED] (note EXACT spelling of 'ListGuru') and in the message BODY, include a line containing: UNSUB ORACLE-L (or the name of mailing list you want to be removed from). You may also send the HELP command for other information (like subscribing).
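The COPY run Arup describes "can be easily automated using a shell script". A minimal sketch of such a wrapper is below; the connect strings, table names, and file name are made-up placeholders (they are not from the thread), and the sqlplus invocation is echoed rather than executed so the sketch runs without a database. Delete the word `echo` to run it for real.

```shell
#!/bin/sh
# Hypothetical wrapper around the SQL*Plus COPY recipe from the thread.
# scott/tiger@SOURCEDB, archive_user/archive_pw@ARCHDB, HOLDINGTABLE and
# MAINTABLE are placeholders. SET LONG 2000000000 reflects the 2 GB LONG
# limit Arup cites for COPY.
SCRIPT=copy_archive.sql

cat > "$SCRIPT" <<'EOF'
SET LONG 2000000000
SET ARRAYSIZE 100
SET COPYCOMMIT 200
COPY FROM scott/tiger@SOURCEDB -
APPEND HOLDINGTABLE -
USING SELECT * FROM MAINTABLE WHERE DATE_COL < SYSDATE - 12*30
EXIT
EOF

# Echoed dry-run; remove "echo" to actually connect and copy.
echo sqlplus -s archive_user/archive_pw@ARCHDB @"$SCRIPT"
```

The script file it generates can then be scheduled from cron and its sqlplus exit status checked for errors.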
Re: Oracle 9IAS
Are you trying to install on a P4 system? If so, check metalink for installer crashes on P4 systems. It's a symjit.dll issue.

Ron Thomas
Hypercom, Inc
[EMAIL PROTECTED]
Each new user of a new system uncovers a new class of bugs. -- Kernighan

To: [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
Subject: Oracle 9IAS
05/31/2003 02:19 PM
Please respond to ORACLE-L

Hi list, I'm trying to install 9IAS 1.0.2.2 on a W2000 SP3 box but nothing comes up when clicking the setup button. Checked in metalink and nothing related to it. TIA

Ramon E. Estevez
[EMAIL PROTECTED]
809-535-8994
Re: quickest method
How do you set pipes across the network? Using rsh/remsh, of course!

dd if=expdat.dmp|rsh node2 dd of=/tmp/expdat.dmp

It doesn't matter if the expdat.dmp files on both nodes are pipes.

On 2003.05.31 19:04 Babette Turner-Underwood wrote:

Sorry about the late response - catching up. Do you remember the syntax to do this? How do you set up UNIX pipes across a network?
Thanks
Babette

-Original Message-
Greenfield
Sent: Thursday, May 15, 2003 2:02 PM
To: Multiple recipients of list ORACLE-L

For a two terabyte data transfer at one client, we wrote a C extractor, ran it in four-plicate on partitioned data, sent the output directly to pipes, sent those pipes zooming across the 400 Mbit network into waiting pipes that served as the input files for direct path SQL Loader. That was amazingly fast, and loads of fun, sort of like a data warehouse Rube Goldberg device. Actually, it was blazingly fast in test; I left the project before it hit production...

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Mladen Gogala
Sent: Thursday, May 15, 2003 12:27 AM
To: Multiple recipients of list ORACLE-L
Subject: Re: quickest method

Essentially, yes, that's precisely the way to do it. You do an array fetch into a C/C++ array and write it out. As for setbuf, setvbuf and printf, in my opinion, the best thing to do is to use mmap to map the file to a buffer and then use sprintf to populate the output buffer. If you don't want to play with mmap/munmap you can always use just a simple and unpretentious write call, just like printf does.

On 2003.05.14 23:17 Ryan wrote:

I know C/C++ moderately well. I thought PRO*C just added pragmas to embed SQL? Why is it faster? So you're just doing some selects into variables and printf() to a file?

- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Wednesday, May 14, 2003 10:22 PM

PRO*C using array fetches.
Nothing's faster than a simple printf(), especially if you increase output buffer settings with setbuf()...

on 5/14/03 6:01 PM, Ryan at [EMAIL PROTECTED] wrote:

If SQL*Loader is the fastest way to load data, what is the fastest way to unload it? UTL_FILE is notoriously slow.

- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Wednesday, May 14, 2003 7:01 PM

Carol, Hands down, SQL Loader is the fastest. Export/Import is rather slow. SQL and PL/SQL commands can be on either side of exp/imp, depending on what you are doing and how well the code is written. e.g. SQL statements are fairly fast, PL/SQL for loops are not. PL/SQL bulk processing is fast. Unless you need the programmatic abilities of PL/SQL, use SQL Loader. Exp/Imp can still be useful, even with SQL Loader. Use exp/imp to build your tables, then the indexes and constraints after the data is loaded. No pat answer as to how to load data; it depends on your requirements. There's probably no point in messing with SQL Loader if the data sets are small and you can easily export from another database and then import. If the data is in CSV or flat files, though, and/or is very large, SQL Loader is very fast.

HTH
Jared

Carol Legros [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
05/14/2003 02:57 PM
Please respond to ORACLE-L
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Subject: quickest method

I'm curious to know whether anyone out there has seen a comparison discussing the pros and cons and/or results of any simulation tests that compare the speed with which data can be loaded into a target database from a source (database or flat file) using the following 3 methods:
(i) Export (from source), Import (to target)
(ii) SQL*Loader (to target)
(iii) SQL or PL/SQL commands (insert to target) using a Database Link between source and target
I'm working on a data loading strategy and since there are many ways to skin a cat, I'm considering these as options.
Of course, there are other criteria that impact the method chosen, but assuming all things are equal (i.e. network bandwidth is good, access to both source and target is not an issue, etc.), which of these methods would be quickest?

Thanks,
Carol
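Mladen's dd-through-a-named-pipe trick from this thread can be sketched on a single machine; the remote side ("rsh node2 dd of=...") is played here by a background dd so no rsh is needed, and the payload is a made-up stand-in for an export dump, but the pipe mechanics are the same.

```shell
#!/bin/sh
# Local demo of moving data through a named pipe with dd.
# ./pipe_demo and "fake export payload" are illustrative only.
WORK=./pipe_demo
mkdir -p "$WORK"
rm -f "$WORK/expdat.pipe"
mkfifo "$WORK/expdat.pipe"                 # the dump "file" that is really a pipe
printf 'fake export payload\n' > "$WORK/source.dat"

# Reader first, in the background: stands in for "rsh node2 dd of=/tmp/expdat.dmp".
dd if="$WORK/expdat.pipe" of="$WORK/dest.dat" 2>/dev/null &

# Writer: stands in for exp (or the local dd) feeding the pipe.
dd if="$WORK/source.dat" of="$WORK/expdat.pipe" 2>/dev/null
wait

cmp -s "$WORK/source.dat" "$WORK/dest.dat" && echo transfer ok
```

Starting the reader before the writer matters: opening a FIFO for writing blocks until some process opens it for reading, which is exactly why exp can write to a "dump file" that is really a pipe being drained on another node.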
Re[2]: Need Help for 9i OCP
Saturday, May 31, 2003, 2:44:41 AM, you wrote:

RD Simply go to
RD http://testprep.selftestsoftware.com/W4RT052EB6F8A396076E3EEF0024
RD and your discount will automatically show on all individually
RD packaged products on the site.

I looked at this. Their regular price is $99.00. As I recall, it only costs $125 to take the actual test itself. Why bother paying $99.00 for questions that some company hopes are close to what's on the real test? Just go ahead and pay the $125 and have a go at the real thing. If you pass, great. If you fail, then you know what to study.

Best regards,
Jonathan Gennick --- Brighten the corner where you are
http://Gennick.com * 906.387.1698 * mailto:[EMAIL PROTECTED]

Join the Oracle-article list and receive one article on Oracle technologies per month by email. To join, visit http://four.pairlist.net/mailman/listinfo/oracle-article, or send email to [EMAIL PROTECTED] and include the word subscribe in either the subject or body.
Re: RE: Oracle 11i new features
Better still!! Maybe they will call everything (database/iAS/JDeveloper/IDS) 11i :)
Cyril

On Sat, 31 May 2003 John Kanagaraj wrote:

Nope! - Oracle 10i will be the end of the world (as Oracle knows it, at least :) since we already have an Oracle 11i (aka Oracle Applications 11i - but generally known in the ERP world as Oracle 11i or Apps 11i). Fyi - it mutated from Apps 10.7 to Apps 11.0.x and now to Apps 11.5.x - the 'i' replacing the 5 here. So when 11.5.9 is released later this year and they run out of numbers there, I believe it will mutate to Oracle 12i or Apps 12i. The life of the person who is in charge of numbering at Oracle is gonna become quite complicated for sure.

John Kanagaraj
Oracle Applications DBA
DB Soft Inc
Work : (408) 970 7002
Listen to great, commercial-free christian music 24x7x365 at http://www.klove.com
** The opinions and facts contained in this message are entirely mine and do not reflect those of my employer or customers **

-Original Message-
Sent: Friday, May 30, 2003 9:20 PM
To: Multiple recipients of list ORACLE-L

UNLESS Oracle decides to skip Oracle 10i and go directly to Oracle 11i. They did something similar to bring the numbers for Oracle Database and Oracle Designer and Oracle Application Servers all up to 9i (9iDB, 9iAS, Designer 9i, ...). After all, they must have introduced enough new features and bugs to skip a number or two?!?!
- Babette

-Original Message-
Sent: Wednesday, May 28, 2003 8:05 AM
To: Multiple recipients of list ORACLE-L

Oracle 11/11i: http://www.vapourware.com
Oracle Apps 11/11i: http://www.oracle.com/appsnet/content.html http://www.oaug.org/ http://www.appsdba.com
HTH
Mark

-Original Message-
Sent: 28 May 2003 11:08
To: Multiple recipients of list ORACLE-L

Hi, Can anybody tell me a website where I can read about new features in Oracle 11/11i? Thanks in Advance
Ajay K.
Garg
Re: Parallel Query Server died
This server is dead. It has gone to meet its maker. It is no more. It's pushing up the lilies... This... is an ex-server. Thanks to Monty Python.

Jamadagni, Rajendra wrote:

Funeral at 8pm EST, movie at 11pm EST on TNT ...
Raj

-Original Message-
From: Rajesh Dayal [mailto:[EMAIL PROTECTED]]
Sent: Saturday, May 31, 2003 2:50 AM
To: Multiple recipients of list ORACLE-L
Subject: RE: Parallel Query Server died

After long time .. LOL .;-)

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Vladimir Barac
Sent: Friday, May 30, 2003 2:35 PM
To: Multiple recipients of list ORACLE-L
Subject: Re: Parallel Query Server died

Kool, now some cyber funeral will take place...

- Original Message -
From: shuan.tay
To: Multiple recipients of list ORACLE-L
Sent: Friday, May 30, 2003 11:19
Subject: Parallel Query Server died

Dear all DBAs, What should i check for this error? "ORA-12805: parallel query server died unexpectedly" The SQL statement was running well before. There's nothing in the alert log about this error. I'm using Oracle 8.1.6 on Redhat 7.2. Thanks and have a nice day.

*This e-mail message is confidential, intended only for the named recipient(s) above and may contain information that is privileged, attorney work product or exempt from disclosure under applicable law. If you have received this message in error, or are not the named recipient(s), please immediately notify corporate MIS at (860) 766-2000 and delete this e-mail message from your computer. Thank you.*
Re: archiving data
Sai,

Where did you find that limitation of 64K? Although I admit I have not used a long column of that size, according to the fine manuals the max size of a LONG column copied is 2 GB; actually 2,000,000,000 bytes, not 64K. You have to specify the size of long in your session using SET LONG 20 before attempting the copy command. Please let us know where you found that 64K limitation. The other thing you have to consider is that COPY is being deprecated in 10i, or whatever it will be called; but then again, I hope your application will have ceased using LONGs by then.

Arup Nanda
www.proligence.com

- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Saturday, May 31, 2003 6:29 PM

but i think there is a sqlplus limitation of 64k and any data longer than 64k will get truncated in this case too.. correct me if i am wrong, even if you set long to a very high value, data more than 64k in length will get truncated.

sai

--- Arup Nanda [EMAIL PROTECTED] wrote:

For situations like this you have the COPY command of SQL*Plus. Remember, it's a SQL*Plus command like set, btitle, etc., not a SQL command you can embed inside a PL/SQL block. You could create a table similar in structure to the main table and then populate the data:

SQL> SET LONG 99 -- this is needed to set the max size of the long data; otherwise it gets truncated.

COPY FROM SCHEMA_NAME/[EMAIL PROTECTED] -
APPEND HOLDINGTABLE -
USING SELECT * FROM MAINTABLE WHERE DATE_COL < SYSDATE - 12*30

Note the use of hyphens after the lines. SQL*Plus commands are expected to be on one line. Since I am continuing on to the next, I used the continuation character, the hyphen. This by default commits after all the rows are loaded. You can control the commit frequency by specifying two parameters:

-- sets 100 records per array
SET ARRAYSIZE 100
-- sets a commit to occur after every 200 batches, or 20,000 records
SET COPYCOMMIT 200

This process is fairly simple and can be easily automated using a shell script.
Any error raised by the sql block can be checked.

Hope this helps.

Arup Nanda
www.proligence.com

- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Friday, May 30, 2003 7:04 PM

hi there is this project that is going on for archiving old data from the oltp system that is older than 12 months and then purging them in the main db. the tables that are to be archived are with long rows. they cannot be converted to lobs since this is a third party application. here is where the problem lies. oracle support when contacted says either mv to lobs to make this move easier or use oci ..blah.blah.. to get this working if you want to remain in longs. there are some options i have thought about:
1. export/import ..but should make this highly automated since the main db and archival db will be on different hosts, this will not be monitored and import has to go thru w/o issues etc.
2. create snapshot - but they dont work with long..hence not an option.
3. getting sqlldr to work but i think it has that 32k column size limitation.
so can you please suggest whether there is something else i can do or option 1 is the best given the environment. the oracle is 8.1.7.2 on sun 2.8.
thanks
sai
RE: archiving data
Paula - Sorry to hear you are having problems. Since your posting time was last night, I hope you fixed the problem in time to have a weekend with your kids. I get hit by the same thing now and then. You are right that you want to see your family, but when you are tired and grumpy they may not enjoy you. Better to spend the time while the tape is loading planning a comp day to spend some well-rested time with them.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED]

-Original Message-
Sent: Friday, May 30, 2003 7:45 PM
To: Multiple recipients of list ORACLE-L

I/O failure this week. Production system restored/verified and backed up - fully operational once the I/O subsystem was rebuilt, in 2.5 hours - required full restore because key datafiles were corrupted - system, redos, control. Waiting for I/O took longest; didn't see kids all night. Was supposed to take off today. Working as technical architect on SQL Server Vital Stats, COTS, project planning, setting up processes for releases and standard forms, data migration/cleansing, QA plan. Another test database recovery tonight. I miss my little boys. I want to escape. I want to eat an entire French Silk Pie.

-Original Message-
Sent: Friday, May 30, 2003 7:55 PM
To: Multiple recipients of list ORACLE-L

how do you define older than 12 months?? are you using enterprise edition and is it feasible to use partitioning? if you partition on the field that defines older than 12 months, it's easy enough to drop a partition (or exchange a partition with a non-partitioned table, export that and drop it).

joe

Sai Selvaganesan wrote:

hi there is this project that is going on for archiving old data from the oltp system that is older than 12 months and then purging them in the main db. the tables that are to be archived are with long rows. they cannot be converted to lobs since this is a third party application. here is where the problem lies.
oracle support when contacted says either mv to lobs to make this move easier or use oci ..blah.blah.. to get this working if you want to remain in longs. there are some options i have thought about:
1. export/import ..but should make this highly automated since the main db and archival db will be on different hosts, this will not be monitored and import has to go thru w/o issues etc.
2. create snapshot - but they dont work with long..hence not an option.
3. getting sqlldr to work but i think it has that 32k column size limitation.
so can you please suggest whether there is something else i can do or option 1 is the best given the environment. the oracle is 8.1.7.2 on sun 2.8.
thanks
sai

--
Joseph S Testa
Chief Technology Officer
Data Management Consulting
614-791-9000
It's all about the CACHE
Re: archiving data
I just did a few tests with a LONG field in a table. Final answer: data of more than 64K is properly loaded using COPY.

Test setup: I used a plain text file, s.dat, in unix with 97885 characters (97K), and created a table LT3 with only one field, COL1 LONG. I used SQL*Loader to load the data into the table. The control file looks like this:

load data
infile 's.dat' "var 5"
into table lt3
( col1 position(1:10) char )

I placed a number 99000 at the beginning of line 1 of the file s.dat to indicate the length. After loading to the table LT3, I created a table LT4 as follows:

SQL> set long 99000
SQL> copy from ananda/[EMAIL PROTECTED] create lt4 using select * from lt3

Then the long size was changed and I created two more tables:

SQL> set long 64000
SQL> copy from ananda/[EMAIL PROTECTED] create lt5 using select * from lt3
SQL> set long 80
SQL> copy from ananda/[EMAIL PROTECTED] create lt6 using select * from lt3

Finally, I created a third table to hold the data in LOB format so that I could measure it:

SQL> set long 99000
SQL> insert into lt7 select 3, to_lob(col1) from lt3;
SQL> insert into lt7 select 4, to_lob(col1) from lt4;
SQL> insert into lt7 select 5, to_lob(col1) from lt5;
SQL> insert into lt7 select 6, to_lob(col1) from lt6;
SQL> commit;
SQL> select col1, dbms_lob.getlength(col2) from lt7;

COL1 DBMS_LOB.GETLENGTH(COL2)
---- ------------------------
   3                    96057
   4                    96057
   5                    63996
   6                       76

4 rows selected.

The results speak for themselves. As you can see, the COPY command correctly copied data from one table to the other where the chunk was about 97K, more than the 64K limit you mentioned. But the key was setting the LONGSIZE parameter in SQL*Plus. When I set it to a low value, like 80 bytes, the value was truncated.

Hope this helps in your archiving strategy.

Arup Nanda
www.proligence.com

- Original Message -
To: [EMAIL PROTECTED]
Sent: Saturday, May 31, 2003 10:46 PM

Sai,

Where did you find that limitation of 64K?
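The "var 5" clause in Arup's control file tells SQL*Loader to expect variable-length records, each starting with a 5-character length field, which is why he prefixed a number to line 1 of s.dat. A small sketch of producing such a file is below; the file name and the dummy 200-character payload are illustrative, not from his test.

```shell
#!/bin/sh
# Build a data file in the variable-record layout that a control file
# clause like: infile 's.dat' "var 5"  expects.
# Payload here is 200 'x' characters; s.dat is an illustrative name.
PAYLOAD=$(head -c 200 /dev/zero | tr '\0' 'x')

# %05d writes the record's character count as a zero-padded 5-digit prefix.
printf '%05d%s\n' "${#PAYLOAD}" "$PAYLOAD" > s.dat

cut -c1-5 s.dat    # shows the length prefix: 00200
```
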
Re: archiving data
hi

i read thru this document id # 1022033.6 in metalink which says the following:

"Use the Copy Command with Longs and Long Raw: The COPY command is one way to get long data from one table to another. This will work remotely as well. The COPY command will not work with long raws, however. The COPY command can be used on longs of any length. Please note, long data greater than 64k will be truncated after the 65535th character. This is a SQLPlus limitation."

correct me if i am wrong. this really means that above 64k the data does get truncated. transportable tablespaces are an option but not right now since the application (meaning the db) was not built for this requirement.

sai

Arup Nanda [EMAIL PROTECTED] wrote:

Sai, Where did you find that limitation of 64K?
Re: quickest method
Thanks Mladen, I was just gonna ask that question. Didn't know that particular trick.

Jared

On Saturday 31 May 2003 19:48, Mladen Gogala wrote:

How do you set pipes across the network? Using rsh/remsh, of course!

dd if=expdat.dmp | rsh node2 dd of=/tmp/expdat.dmp

It doesn't matter if the expdat.dmp files on both nodes are pipes.

On 2003.05.31 19:04 Babette Turner-Underwood wrote:

Sorry about the late response - catching up. Do you remember the syntax to do this? How do you set up UNIX pipes across a network?

Thanks
Babette

-----Original Message-----
Greenfield
Sent: Thursday, May 15, 2003 2:02 PM
To: Multiple recipients of list ORACLE-L

For a two-terabyte data transfer at one client, we wrote a C extractor, ran it in four-plicate on partitioned data, sent the output directly to pipes, and sent those pipes zooming across the 400 Mbit network into waiting pipes that served as the input files for direct path SQL Loader. That was amazingly fast, and loads of fun, sort of like a data warehouse Rube Goldberg device. Actually, it was blazingly fast in test; I left the project before it hit production...

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Mladen Gogala
Sent: Thursday, May 15, 2003 12:27 AM
To: Multiple recipients of list ORACLE-L
Subject: Re: quickest method

Essentially, yes, that's precisely the way to do it. You do an array fetch into a C/C++ array and write it out. As for setbuf, setvbuf and printf: in my opinion, the best thing to do is to use mmap to map the file to a buffer and then use sprintf to populate the output buffer. If you don't want to play with mmap/munmap you can always use just a simple and unpretentious write call, just like printf does.

On 2003.05.14 23:17 Ryan wrote:

I know C/C++ moderately well. I thought PRO*C just added pragmas to embed SQL? Why is it faster? So you're just doing some selects into variables and printf() to a file?
----- Original Message -----
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Sent: Wednesday, May 14, 2003 10:22 PM

PRO*C using array fetches. Nothing's faster than a simple printf(), especially if you increase the output buffer settings with setbuf()...

on 5/14/03 6:01 PM, Ryan at [EMAIL PROTECTED] wrote:

If SQL*Loader is the fastest way to load data, what is the fastest way to unload it? UTL_FILE is notoriously slow.

----- Original Message -----
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Sent: Wednesday, May 14, 2003 7:01 PM

Carol,

Hands down, SQL Loader is the fastest. Export/Import is rather slow. SQL and PL/SQL commands can be on either side of exp/imp, depending on what you are doing and how well the code is written; e.g. SQL statements are fairly fast, PL/SQL FOR loops are not, and PL/SQL bulk processing is fast. Unless you need the programmatic abilities of PL/SQL, use SQL Loader.

Exp/imp can still be useful, even with SQL Loader. Use exp/imp to build your tables, then build the indexes and constraints after the data is loaded.

There is no pat answer as to how to load data; it depends on your requirements. There's probably no point in messing with SQL Loader if the data sets are small and you can easily export from another database and then import. If the data is in CSV or flat files, though, and/or is very large, SQL Loader is very fast.
HTH

Jared

Carol Legros <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
05/14/2003 02:57 PM
Please respond to ORACLE-L
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: quickest method

I'm curious to know whether anyone out there has seen a comparison discussing the pros and cons, and/or results of any simulation tests, that compare the speed with which data can be loaded into a target database from a source (database or flat file) using the following three methods:

(i) Export (from source), Import (to target)
(ii) SQL*Loader (to target)
(iii) SQL or PL/SQL commands (insert to target) using a database link between source and target

I'm working on a data loading strategy, and since there are many ways to skin a cat, I'm considering these as options. Of course, there are other
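Mladen's dd-over-rsh trick above can be sketched with a purely local stand-in so it runs anywhere: here mkfifo creates the named pipe, a background echo stands in for exp writing its dump, and a local dd stands in for the network hop. The pipe path, output path, and payload string are mine for illustration.

```shell
# A named pipe stands in for expdat.dmp; a background echo stands in for exp.
# On a real two-node system the dd line would instead be:
#   dd if=/tmp/exp.pipe | rsh node2 dd of=/tmp/expdat.dmp
PIPE=/tmp/exp.pipe.$$
OUT=/tmp/expdat.dmp.$$
mkfifo "$PIPE"
( echo "pretend export stream" > "$PIPE" ) &   # writer: "exp" filling the pipe
dd if="$PIPE" of="$OUT" 2>/dev/null            # reader: drains the pipe to the target
cat "$OUT"
rm -f "$PIPE"
```

The point of the trick is that the dump never needs to land as a staging file on the source host; the reader simply drains the FIFO as the writer fills it.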
Re: use of reverse key index, cost based optimizer
Skewed, yes. Unbalanced, no.

Jared

On Saturday 31 May 2003 00:34, [EMAIL PROTECTED] wrote:

Assume an index on employee number. The number is assigned sequentially, and as such the rightmost index leaf block would always be used: a possible hot block. A reverse key index can avoid this. Also, assume that when an employee retires or quits, the record is deleted. But the space freed within the index leaf block will never be reused (unless, of course, all entries from that leaf block are deleted). A reverse key index can help you avoid these holes, or otherwise skewed indexes, and help the index become more balanced, but it has the pitfall that was mentioned.

Raj

[EMAIL PROTECTED] wrote on 05/30/2003 10:44 PM:
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: RE: use of reverse key index, cost based optimizer

Dennis,

My understanding of a B*tree is that it is always balanced. Monotonically increasing keys will create a right-handed index, but it is nonetheless balanced. If wrong, I'm sure to be corrected. :)

Also, I don't believe the reverse key index will help queries any. I'm guessing that under normal circumstances it would increase the number of index blocks that needed to be cached. In the case of a range scan it would definitely not perform as well, and it would increase the likelihood of a FFS or FTS, depending on the queries normally used in a system. The primary purpose of these was to reduce block pings on OPS, IIRC, which would also reduce block contention on inserts, as you said.

Jared

DENNIS WILLIAMS <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
05/30/2003 12:09 PM
Please respond to ORACLE-L
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: RE: use of reverse key index, cost based optimizer

I don't see where anyone responded. If you look up reverse key index in the documentation, it says something about having a column where most of the values have leading values that are close.
Reverse key will help the btree of the index be more balanced. That helps on queries. And on inserts you aren't continually hitting the same block, but spreading the inserts out.

Oracle has two SQL optimizers, rule-based and cost-based. The cost-based optimizer is more sophisticated: you first populate statistics on your tables, and when creating an execution plan for your SQL, the CBO will consider those statistics.

Does that answer your questions?

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED]

-----Original Message-----
Sent: Friday, May 30, 2003 1:25 AM
To: Multiple recipients of list ORACLE-L

hai gurus

please tell the use of a reverse key index, and what exactly the cost based optimizer is.

thanks in advance
manjunath

--
Author: Jared Still
INET: [EMAIL PROTECTED]
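The insert-spreading effect Raj and Jared discuss can be illustrated outside Oracle. This is only an analogy, not Oracle's actual algorithm (Oracle reverses the key's bytes, not its decimal digits): reversing the characters of zero-padded sequential keys with rev(1) makes consecutive values stop sorting next to each other, which is exactly why inserts stop piling onto the rightmost leaf block.

```shell
# Three consecutive zero-padded keys, character-reversed with rev(1).
KEYS=$(printf '0258\n0259\n0260\n')
REVERSED=$(printf '%s\n' "$KEYS" | rev)
SORTED=$(printf '%s\n' "$REVERSED" | sort)
printf '%s\n' "$REVERSED"
# The originals are already in sorted order (hence rightmost-block inserts);
# their reversed forms are not, so the inserts spread across leaf blocks.
[ "$REVERSED" != "$SORTED" ] && echo "reversed keys no longer adjacent"
```

Note that keys differing only in their last digit still cluster after reversal; the spread kicks in at each carry boundary, which is why the sketch uses 0258..0260.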
Re: archiving data
thanks a bunch for this test case... it surely will help me a lot

sai

Arup Nanda <[EMAIL PROTECTED]> wrote:

I just did a few tests with a LONG field in a table. Final answer: data more than 64K is properly loaded using COPY.

Test setup: Used a plain text file, s.dat, in unix with 97885 characters (97K). Created a table LT3 with only one field, COL1 LONG. Used SQL*Loader to load the data into the table. The control file looks like this:

load data
infile 's.dat' "var 5"
into table lt3
(col1 position(1:10) char)

I placed a number 99000 at the beginning of line 1 of the file s.dat to indicate the length. After loading into table LT3, I created a table LT4 as follows:

SQL> set long 99000
SQL> copy from ananda/[EMAIL PROTECTED] create lt4 using select * from lt3

Then the long size was changed and I created two more tables:

SQL> set long 64000
SQL> copy from ananda/[EMAIL PROTECTED] create lt5 using select * from lt3
SQL> set long 80
SQL> copy from ananda/[EMAIL PROTECTED] create lt6 using select * from lt3

Finally, I created a third table to hold the data in LOB format so that I can measure it.

SQL> set long 99000
SQL> insert into lt7 select 3, to_lob(col1) from lt3;
SQL> insert into lt7 select 4, to_lob(col1) from lt4;
SQL> insert into lt7 select 5, to_lob(col1) from lt5;
SQL> insert into lt7 select 6, to_lob(col1) from lt6;
SQL> commit;
SQL> select col1, dbms_lob.getlength(col2) from lt7;

COL1  DBMS_LOB.GETLENGTH(COL2)
----  ------------------------
   3                     96057
   4                     96057
   5                     63996
   6                        76

4 rows selected.

The results speak for themselves. As you can see, the COPY command correctly copied data from one table to the other where the chunk was about 97K, more than the 64K limit you mentioned. But the key was setting the LONGSIZE parameter in SQL*Plus. When I set it to a low value, like 80 bytes, the value was truncated.

Hope this helps in your archiving strategy.

Arup Nanda
www.proligence.com

----- Original Message -----
To: <[EMAIL PROTECTED]>
Sent: Saturday, May 31, 2003 10:46 PM

Sai,

Where did you find that limitation of 64K? Although I admit I have not used a LONG column of that size, according to the fine manuals the max size of a LONG column copied is 2 GB; actually 2,000,000,000 bytes, not 64K. You have to specify the size of the long in your session using SET LONG 20 before attempting the copy command. Please let us know where you found that 64K limitation.

The other thing you have to consider is that COPY is being deprecated in 10i, or whatever it will be called; but then again, I hope your application will have ceased using LONGs by then.

Arup Nanda
www.proligence.com

----- Original Message -----
From: "Sai Selvaganesan" <[EMAIL PROTECTED]>
To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
Sent: Saturday, May 31, 2003 6:29 PM
Subject: Re: archiving data

but i think there is a sqlplus limitation of 64k, and any data longer than 64k will get truncated in this case too..
correct me if i am wrong: even if u set long to a very high value, data more than 64k in length will get truncated.

sai

--- Arup Nanda <[EMAIL PROTECTED]> wrote:

For situations like this you have the COPY command of SQL*Plus. Remember, it's a SQL*Plus command like set, btitle, etc., not a SQL command you can embed inside a PL/SQL block. You could create a table similar in structure to the main table and then populate the data...
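Arup's measurements above can be mimicked with a toy shell model of the truncation. This only models "COPY hands over at most SET LONG characters of each value"; it is not SQL*Plus behaviour verbatim, and the lengths it reports will not exactly match the email's 96057 figure, since the value actually loaded into LT3 was slightly shorter than the raw 97885-character file.

```shell
# A 97885-character value, standing in for the contents of s.dat.
DATA=$(printf 'x%.0s' $(seq 1 97885))
# Truncate it at each of the three LONG settings used in the test and
# report the surviving length, mirroring the dbms_lob.getlength() check.
for LONGSIZE in 99000 64000 80; do
  printf '%s' "$DATA" | head -c "$LONGSIZE" | wc -c
done
```

As in the email's results, a setting above the data length passes the value through intact, while 64000 and 80 clip it to exactly those lengths.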
TERM LICENSES - ANY Experience
We are concerned about licensing, especially as we are setting up a Disaster Recovery site. During our investigation it seems that TERM licensing can come out cheaper. Does anyone have any experience with this, or know of any disadvantages of leasing the software as opposed to owning it?

thanks
Sam