Re: Query results to .csv/use of dblinks

2003-08-27 Thread M Rafiq
Tanel,

Thanks for your response.

Regards
Rafiq


Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Date: Tue, 26 Aug 2003 13:29:26 -0800
Hi!

In one migration of an about 600GB DB I used 2 dblinks (loading different
partitions in parallel with separate statements), then added a 3rd link,
after which the bottleneck seemed to be the network. (I saw full network
utilization from perfmon on Windows; I wasn't much of a wait-interface user
back then.) But your mileage may vary.
So, full hardware utilization is definitely good, at least during
migrations, but before thinking about that, you have to put together an
optimal migration path and methodology.
Tanel.

- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Tuesday, August 26, 2003 10:54 PM
 Tanel,

 A quick question: how many open dblinks have you used at one time without
 any issues? The default setting in init.ora is 4 (if I am not wrong) and I
 never used more than that. If Dennis wants to use more than 4 dblinks at
 one time, he should modify that parameter (open_links) in init.ora, right?
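
For reference, the parameter in question is OPEN_LINKS. A sketch of raising it (the values and the per-instance variant are illustrative; check the defaults and semantics against your Oracle version):

```
# init.ora -- limits on concurrently open database links
open_links = 8               # per-session limit; the default is 4
open_links_per_instance = 8  # instance-wide limit for XA/migratable links
```

A session that exceeds its limit gets ORA-02020 ("too many database links in use"), so the limit needs to cover the peak number of links any one session holds open.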

 Regards
 Rafiq








 Reply-To: [EMAIL PROTECTED]
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Date: Tue, 26 Aug 2003 09:14:26 -0800

 Hi!

 What is your goal? To finish the data transfer in the shortest time, or to
 keep hardware utilization at maximum during the transfer?
 I think you should concentrate on keeping the data transfer time low.

 Depending on your network - with gigabit (or 10 Gb) Ethernet you could look
 at enabling jumbo frames, which allow Ethernet frames of up to 9000 bytes.
 Also set (SDU=32768) in your listener.ora and tnsnames.ora (you can set it
 with normal 1500-byte frames as well).

 As for parallelism, you might want to run several bulk inserts over the
 dblink to fully utilize your network (filling the gaps while one session is
 busy inserting and thus not using the network). But if your source disk
 array (or CPUs) are slow, they might be the bottleneck instead.
 If you have a SAN and a temporary spare server, do a BCV copy or mirror
 split, open several clones of the database, and copy data from all of them.

 Also, when you have a SAN, there's no need for network transfer at all -
 you just mount the filesystem holding the dump/export files on the target
 database server and do the load from there. If your operating systems are
 different, then just dump to a raw device, with a pipe and tar for example,
 or completely raw, keeping track of your data sizes. Note that some (older)
 operating systems reserved a few blocks at the beginning of the device for
 the operating system's own use (Tru64 had the largest I know of - 64k), so
 you had to make sure you didn't write anything there (oseek=65536 for dd,
 for example).

 If downtime isn't an issue for you, the above recommendations may not be
 worth trying, but with RVLDBs (really very large databases) all of this
 can help a lot.

 Tanel.
 - Original Message -
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Sent: Tuesday, August 26, 2003 5:54 PM


  Tanel, Binley
  Thanks for the excellent suggestions.
  At this point we have been testing with two smaller test systems, moving
  a single table at a time, but initial indications are that the performance
  order is:
  1. Perl dump to CSV / ftp / SQL*Loader
  2. Copy across database link
  3. Export / ftp / import

  I need to re-run the tests once the target production system is available
  to re-confirm which is faster. I am pretty confident in the ability to run
  multiple SQL*Loader and import sessions simultaneously. I am a little
  nervous about the ability of the database link to scale to enough
  simultaneous sessions to keep the RAID sets maxed out on the target system.
  Several years ago, when I was doing a large conversion, I hit that limit.
  Some days I wonder if we are the beneficiary or victim of our prior
  experience. Oh well, thanks for all the good suggestions; on to testing,
  testing, testing.
  
   Dennis Williams
   DBA, 80%OCP, 100% DBA
   Lifetouch, Inc.
   [EMAIL PROTECTED]
  
  
   -Original Message-
   Sent: Monday, August 25, 2003 7:09 PM
   To: Multiple recipients of list ORACLE-L
  
  
   Hi!
  
    What about several insert /*+ APPEND NOLOGGING */ commands over a
    database link run in parallel? (Possibly over a dedicated network.)
    This is fast and easier (as long as you don't hit any compatibility
    problems). If you happen to be running on Windows for some reason, you
    could also try the named pipes network protocol instead of TCP.
  
   Tanel.
  
   - Original Message -
   To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
   Sent: Monday, August 25, 2003 11:04 PM
  
  
Thanks Tanel
  We will undoubtedly use export/import for the many small tables. 
We
 are
looking for alternatives that will perform even faster. The insert
 phase
seems to be the slowest part, and 

Re: Query results to .csv

2003-08-26 Thread Tanel Poder
Hi!

What about several insert /*+ APPEND NOLOGGING */ commands over a database
link run in parallel? (Possibly over a dedicated network.) This is fast and
easier (as long as you don't hit any compatibility problems). If you happen
to be running on Windows for some reason, you could also try the named pipes
network protocol instead of TCP.

Tanel.
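
A sketch of what one such session might run (table, partition, and dblink names here are hypothetical; note that in practice NOLOGGING takes effect as a table attribute rather than as a hint, and APPEND is what triggers the direct-path insert):

```sql
-- Run one of these in each of N concurrent sessions, one partition each:
alter table big_tab nologging;
insert /*+ APPEND */ into big_tab partition (p2003q1)
  select * from big_tab@src_link partition (p2003q1);
commit;  -- direct-path loaded rows are visible only after commit
```

Each session must commit before it can read or re-load the same table, since a direct-path insert leaves the segment locked until then.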

- Original Message - 
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Monday, August 25, 2003 11:04 PM


 Thanks Tanel
   We will undoubtedly use export/import for the many small tables. We are
 looking for alternatives that will perform even faster. The insert phase
 seems to be the slowest part, and that is where SQL*Loader in direct path
 really shines. Now the next issue is how to produce a CSV file as fast as
 possible, and so far it looks like Jared's Perl program is the clear
winner.

 Dennis Williams
 DBA, 80%OCP, 100% DBA
 Lifetouch, Inc.
 [EMAIL PROTECTED]


 -Original Message-
 Sent: Monday, August 25, 2003 1:05 PM
 To: Multiple recipients of list ORACLE-L


 Hi!

 Spooling from sqlplus is VERY slow.
 Is the source database Oracle? Then use export/import
 If not, is there an ODBC driver for source database? Then use Oracle
 heterogeneous services and do your transfer directly, without any
 intermediate files.
 Or use some very expensive software for doing this simple job...

 Tanel.
 P.S. if you definitely want to spool to textfile fast, Sparky could be
what
 you want...

 - Original Message - 
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Sent: Monday, August 25, 2003 7:24 PM


  Jared - Thanks for posting this. At the moment, we are preparing to move
  large database to a new server. Based on the advice you posted several
  months ago, we have been testing SQL*Loader and as you predicted, it is

  indeed fast. But also as you predicted, using SQL*Plus to create a CSV
 isn't
  very fast. Am I correct in assuming the dump.sql will not be the best
 choice
  for large tables? We are installing perl since you mentioned that would
  probably be much faster.
 
  Dennis Williams
  DBA, 80%OCP, 100% DBA
  Lifetouch, Inc.
  [EMAIL PROTECTED]
 
 
  -Original Message-
  Sent: Monday, August 25, 2003 9:40 AM
  To: Multiple recipients of list ORACLE-L
 
 
  http://www.cybcon.com/~jkstill/util/dump/dump.html
 
  On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
   Hi,
  
    What's the best way to write the results of a SQL query to a CSV file?
  
   Thanks.
  
   _
   Hotmail messages direct to your mobile phone
  http://www.msn.co.uk/msnmobile
  
   -- 
   Please see the official ORACLE-L FAQ: http://www.orafaq.net
   -- 
   Author: Imran Ashraf
 INET: [EMAIL PROTECTED]
  
   Fat City Network Services-- 858-538-5051 http://www.fatcity.com
   San Diego, California-- Mailing list and web hosting services
   -
   To REMOVE yourself from this mailing list, send an E-Mail message
   to: [EMAIL PROTECTED] (note EXACT spelling of 'ListGuru') and in
   the message BODY, include a line containing: UNSUB ORACLE-L
   (or the name of mailing list you want to be removed from).  You may
   also send the HELP command for other information (like subscribing).
  
 
 

RE: Query results to .csv

2003-08-26 Thread John Kanagaraj
One caveat strikes my mind when considering direct load... Because the buffer
is constructed and written directly, the kernel can perform INSERTs only
*above* the HWM. If the rate at which you perform direct INSERTs is high
(i.e. multiple runs in a day), then you may end up with an artificially large
segment, most of which is empty, and your FTS will be reaching farther and
farther... The situation can be compounded by parallel INSERT, where you
might acquire different 'start-to-insert' points in parallel.

All this is from memory - I think it is mentioned in the Concepts/Admin
manual for direct-path INSERT - and I might be wrong. Just cross-check this
before implementing...
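
A rough way to sanity-check for such an inflated segment (a sketch only; the table name is hypothetical, and DBMS_SPACE gives more precise numbers):

```sql
-- Blocks allocated to the segment (everything below the HWM after direct loads):
select blocks from user_segments where segment_name = 'BIG_TAB';

-- Blocks that actually contain rows:
select count(distinct dbms_rowid.rowid_block_number(rowid)) from big_tab;
```

A large gap between the two counts suggests empty space left behind by repeated direct-path inserts, which a truncate (or move/reorg) would reclaim.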

John Kanagaraj

-Original Message-
Sent: Monday, August 25, 2003 5:09 PM
To: Multiple recipients of list ORACLE-L

Hi!

What about several insert /*+ APPEND NOLOGGING */ commands over a database
link run in parallel? (Possibly over a dedicated network.) This is fast and
easier (as long as you don't hit any compatibility problems). If you happen
to be running on Windows for some reason, you could also try the named pipes
network protocol instead of TCP.

Tanel.

- Original Message - 
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Monday, August 25, 2003 11:04 PM


 Thanks Tanel
   We will undoubtedly use export/import for the many small tables. We are
 looking for alternatives that will perform even faster. The insert phase
 seems to be the slowest part, and that is where SQL*Loader in direct path
 really shines. Now the next issue is how to produce a CSV file as fast as
 possible, and so far it looks like Jared's Perl program is the clear
winner.

 Dennis Williams
 DBA, 80%OCP, 100% DBA
 Lifetouch, Inc.
 [EMAIL PROTECTED]


 -Original Message-
 Sent: Monday, August 25, 2003 1:05 PM
 To: Multiple recipients of list ORACLE-L


 Hi!

 Spooling from sqlplus is VERY slow.
 Is the source database Oracle? Then use export/import
 If not, is there an ODBC driver for source database? Then use Oracle
 heterogeneous services and do your transfer directly, without any
 intermediate files.
 Or use some very expensive software for doing this simple job...

 Tanel.
 P.S. if you definitely want to spool to textfile fast, Sparky could be
what
 you want...

 - Original Message - 
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Sent: Monday, August 25, 2003 7:24 PM


  Jared - Thanks for posting this. At the moment, we are preparing to move
  large database to a new server. Based on the advice you posted several
  months ago, we have been testing SQL*Loader and as you predicted, it is

  indeed fast. But also as you predicted, using SQL*Plus to create a CSV
 isn't
  very fast. Am I correct in assuming the dump.sql will not be the best
 choice
  for large tables? We are installing perl since you mentioned that would
  probably be much faster.
 
  Dennis Williams
  DBA, 80%OCP, 100% DBA
  Lifetouch, Inc.
  [EMAIL PROTECTED]
 
 
  -Original Message-
  Sent: Monday, August 25, 2003 9:40 AM
  To: Multiple recipients of list ORACLE-L
 
 
  http://www.cybcon.com/~jkstill/util/dump/dump.html
 
  On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
   Hi,
  
    What's the best way to write the results of a SQL query to a CSV file?
  
   Thanks.
  

Re: Query results to .csv

2003-08-26 Thread Prem Khanna J
Imran,

Try this script.
I have used it often.

Jp.







Re: Query results to .csv

2003-08-26 Thread Prem Khanna J
Oops... it's Jared's script.
I did not know that until now.

Jp.

26-08-2003 11:14:26, Prem Khanna J [EMAIL PROTECTED] wrote:
Imran,
Try this script.
I have used it often.
Jp.





RE: Query results to .csv

2003-08-26 Thread VIVEK_SHARMA

Spooling into a pipe file and concurrently SQL*Loading from it is even faster,
as the load starts off while the spool is still underway, i.e. being built.
Both spool and load run concurrently. Also, NO disk space is consumed.

HTH

-Original Message-
Sent: Tuesday, August 26, 2003 1:35 AM
To: Multiple recipients of list ORACLE-L

Thanks Tanel
  We will undoubtedly use export/import for the many small tables. We are
looking for alternatives that will perform even faster. The insert phase
seems to be the slowest part, and that is where SQL*Loader in direct path
really shines. Now the next issue is how to produce a CSV file as fast as
possible, and so far it looks like Jared's Perl program is the clear winner.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED] 


-Original Message-
Sent: Monday, August 25, 2003 1:05 PM
To: Multiple recipients of list ORACLE-L


Hi!

Spooling from sqlplus is VERY slow.
Is the source database Oracle? Then use export/import
If not, is there an ODBC driver for source database? Then use Oracle
heterogeneous services and do your transfer directly, without any
intermediate files.
Or use some very expensive software for doing this simple job...

Tanel.
P.S. if you definitely want to spool to textfile fast, Sparky could be what
you want...

- Original Message - 
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Monday, August 25, 2003 7:24 PM


 Jared - Thanks for posting this. At the moment, we are preparing to move
 large database to a new server. Based on the advice you posted several
 months ago, we have been testing SQL*Loader and as you predicted, it is
 indeed fast. But also as you predicted, using SQL*Plus to create a CSV
isn't
 very fast. Am I correct in assuming the dump.sql will not be the best
choice
 for large tables? We are installing perl since you mentioned that would
 probably be much faster.

 Dennis Williams
 DBA, 80%OCP, 100% DBA
 Lifetouch, Inc.
 [EMAIL PROTECTED]


 -Original Message-
 Sent: Monday, August 25, 2003 9:40 AM
 To: Multiple recipients of list ORACLE-L


 http://www.cybcon.com/~jkstill/util/dump/dump.html

 On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
  Hi,
 
   What's the best way to write the results of a SQL query to a CSV file?
 
  Thanks.
 




RE: Query results to .csv

2003-08-26 Thread DENNIS WILLIAMS
John - Thanks for the heads-up. My intention is to truncate all tables on
the target system beforehand, so that should reset the HWM. Then I have a
lot of tables to load, so my plan is to load multiple tables simultaneously,
trying for separate RAID sets, but use only a single insert on an individual
table.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED] 


-Original Message-
Sent: Monday, August 25, 2003 7:54 PM
To: Multiple recipients of list ORACLE-L


One caveat strikes my mind when considering direct load... Because the buffer
is constructed and written directly, the kernel can perform INSERTs only
*above* the HWM. If the rate at which you perform direct INSERTs is high
(i.e. multiple runs in a day), then you may end up with an artificially large
segment, most of which is empty, and your FTS will be reaching farther and
farther... The situation can be compounded by parallel INSERT, where you
might acquire different 'start-to-insert' points in parallel.

All this is from memory - I think it is mentioned in the Concepts/Admin
manual for direct-path INSERT - and I might be wrong. Just cross-check this
before implementing...

John Kanagaraj

-Original Message-
Sent: Monday, August 25, 2003 5:09 PM
To: Multiple recipients of list ORACLE-L

Hi!

What about several insert /*+ APPEND NOLOGGING */ commands over a database
link run in parallel? (Possibly over a dedicated network.) This is fast and
easier (as long as you don't hit any compatibility problems). If you happen
to be running on Windows for some reason, you could also try the named pipes
network protocol instead of TCP.

Tanel.

- Original Message - 
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Monday, August 25, 2003 11:04 PM


 Thanks Tanel
   We will undoubtedly use export/import for the many small tables. We are
 looking for alternatives that will perform even faster. The insert phase
 seems to be the slowest part, and that is where SQL*Loader in direct path
 really shines. Now the next issue is how to produce a CSV file as fast as
 possible, and so far it looks like Jared's Perl program is the clear
winner.

 Dennis Williams
 DBA, 80%OCP, 100% DBA
 Lifetouch, Inc.
 [EMAIL PROTECTED]


 -Original Message-
 Sent: Monday, August 25, 2003 1:05 PM
 To: Multiple recipients of list ORACLE-L


 Hi!

 Spooling from sqlplus is VERY slow.
 Is the source database Oracle? Then use export/import
 If not, is there an ODBC driver for source database? Then use Oracle
 heterogeneous services and do your transfer directly, without any
 intermediate files.
 Or use some very expensive software for doing this simple job...

 Tanel.
 P.S. if you definitely want to spool to textfile fast, Sparky could be
what
 you want...

 - Original Message - 
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Sent: Monday, August 25, 2003 7:24 PM


  Jared - Thanks for posting this. At the moment, we are preparing to move
  large database to a new server. Based on the advice you posted several
  months ago, we have been testing SQL*Loader and as you predicted, it is

  indeed fast. But also as you predicted, using SQL*Plus to create a CSV
 isn't
  very fast. Am I correct in assuming the dump.sql will not be the best
 choice
  for large tables? We are installing perl since you mentioned that would
  probably be much faster.
 
  Dennis Williams
  DBA, 80%OCP, 100% DBA
  Lifetouch, Inc.
  [EMAIL PROTECTED]
 
 
  -Original Message-
  Sent: Monday, August 25, 2003 9:40 AM
  To: Multiple recipients of list ORACLE-L
 
 
  http://www.cybcon.com/~jkstill/util/dump/dump.html
 
  On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
   Hi,
  
    What's the best way to write the results of a SQL query to a CSV file?
  
   Thanks.
  

RE: Query results to .csv

2003-08-26 Thread DENNIS WILLIAMS
Tanel, Binley
   Thanks for the excellent suggestions.
   At this point we have been testing with two smaller test systems, moving
a single table at a time, but initial indications are that the performance
order is:
1. Perl dump to CSV / ftp / SQL*Loader
2. Copy across database link
3. Export/ ftp / import

I need to re-run the tests once the target production system is available to
re-confirm which is faster. I am pretty confident in the ability to run
multiple SQL*Loader and import sessions simultaneously. I am a little
nervous about the ability of the database link to scale to enough
simultaneous sessions to keep the RAID sets maxed out on the target system.
Several years ago when I was doing a large conversion I hit that limit. Some
days I wonder if we are the beneficiary or victim of our prior experience.
Oh well thanks for all the good suggestions, on to testing, testing,
testing.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED] 


-Original Message-
Sent: Monday, August 25, 2003 7:09 PM
To: Multiple recipients of list ORACLE-L


Hi!

What about several insert /*+ APPEND NOLOGGING */ commands over a database
link run in parallel? (Possibly over a dedicated network.) This is fast and
easier (as long as you don't hit any compatibility problems). If you happen
to be running on Windows for some reason, you could also try the named pipes
network protocol instead of TCP.

Tanel.

- Original Message - 
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Monday, August 25, 2003 11:04 PM


 Thanks Tanel
   We will undoubtedly use export/import for the many small tables. We are
 looking for alternatives that will perform even faster. The insert phase
 seems to be the slowest part, and that is where SQL*Loader in direct path
 really shines. Now the next issue is how to produce a CSV file as fast as
 possible, and so far it looks like Jared's Perl program is the clear
winner.

 Dennis Williams
 DBA, 80%OCP, 100% DBA
 Lifetouch, Inc.
 [EMAIL PROTECTED]


 -Original Message-
 Sent: Monday, August 25, 2003 1:05 PM
 To: Multiple recipients of list ORACLE-L


 Hi!

 Spooling from sqlplus is VERY slow.
 Is the source database Oracle? Then use export/import
 If not, is there an ODBC driver for source database? Then use Oracle
 heterogeneous services and do your transfer directly, without any
 intermediate files.
 Or use some very expensive software for doing this simple job...

 Tanel.
 P.S. if you definitely want to spool to textfile fast, Sparky could be
what
 you want...

 - Original Message - 
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Sent: Monday, August 25, 2003 7:24 PM


  Jared - Thanks for posting this. At the moment, we are preparing to move
  large database to a new server. Based on the advice you posted several
  months ago, we have been testing SQL*Loader and as you predicted, it is

  indeed fast. But also as you predicted, using SQL*Plus to create a CSV
 isn't
  very fast. Am I correct in assuming the dump.sql will not be the best
 choice
  for large tables? We are installing perl since you mentioned that would
  probably be much faster.
 
  Dennis Williams
  DBA, 80%OCP, 100% DBA
  Lifetouch, Inc.
  [EMAIL PROTECTED]
 
 
  -Original Message-
  Sent: Monday, August 25, 2003 9:40 AM
  To: Multiple recipients of list ORACLE-L
 
 
  http://www.cybcon.com/~jkstill/util/dump/dump.html
 
  On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
   Hi,
  
    What's the best way to write the results of a SQL query to a CSV file?
  
   Thanks.
  

Re: Query results to .csv

2003-08-26 Thread Tanel Poder
Hi!

What is your goal? To finish the data transfer in the shortest time, or to
keep hardware utilization at maximum during the transfer?
I think you should concentrate on keeping the data transfer time low.

Depending on your network - with gigabit (or 10 Gb) Ethernet you could look
at enabling jumbo frames, which allow Ethernet frames of up to 9000 bytes.
Also set (SDU=32768) in your listener.ora and tnsnames.ora (you can set it
with normal 1500-byte frames as well).
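
The SDU setting has to go on both ends of the connection to take effect. A sketch (host, port, and service names are placeholders; check the exact syntax against your Oracle Net version):

```
# tnsnames.ora (client / source side)
TARGET.WORLD =
  (DESCRIPTION =
    (SDU = 32768)
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = target))
  )

# listener.ora (server / target side)
SID_LIST_LISTENER =
  (SID_LIST = (SID_DESC = (SDU = 32768)(SID_NAME = target)))
```

The negotiated SDU is the smaller of the two values, so setting it on only one side silently falls back to the default.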

As for parallelism, you might want to run several bulk inserts over the
dblink to fully utilize your network (filling the gaps while one session is
busy inserting and thus not using the network). But if your source disk
array (or CPUs) are slow, they might be the bottleneck instead.
If you have a SAN and a temporary spare server, do a BCV copy or mirror
split, open several clones of the database, and copy data from all of them.

Also, when you have a SAN, there's no need for network transfer at all - you
just mount the filesystem holding the dump/export files on the target
database server and do the load from there. If your operating systems are
different, then just dump to a raw device, with a pipe and tar for example,
or completely raw, keeping track of your data sizes. Note that some (older)
operating systems reserved a few blocks at the beginning of the device for
the operating system's own use (Tru64 had the largest I know of - 64k), so
you had to make sure you didn't write anything there (oseek=65536 for dd,
for example).
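
The skip-the-label trick can be sketched with dd against a plain file in place of a raw device. GNU dd spells the output-offset flag `seek=` (counted in blocks of `bs`); `oseek=` is the spelling on several older Unix dd implementations. The sizes below are illustrative:

```shell
rm -f /tmp/rawdump.img

# Skip the first 128 blocks of 512 bytes (128 * 512 = 65536 = 64k),
# then write one block - leaving the "OS label" region untouched.
dd if=/dev/zero of=/tmp/rawdump.img bs=512 seek=128 count=1 2>/dev/null

wc -c < /tmp/rawdump.img   # (128 + 1) * 512 = 66048 bytes
```

On a raw device the same offset arithmetic applies on the read side too: whatever `tar` or the dump tool wrote past the label must be read back from the same offset.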

If downtime isn't an issue for you, the above recommendations may not be
worth trying, but with RVLDBs (really very large databases) all of this can
help a lot.

Tanel.
- Original Message - 
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Tuesday, August 26, 2003 5:54 PM


 Tanel, Binley
 Thanks for the excellent suggestions.
 At this point we have been testing with two smaller test systems, moving
 a single table at a time, but initial indications are that the performance
 order is:
 1. Perl dump to CSV / ftp / SQL*Loader
 2. Copy across database link
 3. Export / ftp / import

 I need to re-run the tests once the target production system is available to
 re-confirm which is faster. I am pretty confident in the ability to run
 multiple SQL*Loader and import sessions simultaneously. I am a little
 nervous about the ability of the database link to scale to enough
 simultaneous sessions to keep the RAID sets maxed out on the target system.
 Several years ago when I was doing a large conversion I hit that limit. Some
 days I wonder if we are the beneficiary or victim of our prior experience.
 Oh well, thanks for all the good suggestions; on to testing, testing,
 testing.

 Dennis Williams
 DBA, 80%OCP, 100% DBA
 Lifetouch, Inc.
 [EMAIL PROTECTED]


 -Original Message-
 Sent: Monday, August 25, 2003 7:09 PM
 To: Multiple recipients of list ORACLE-L


 Hi!

 What about several insert /*+ APPEND NOLOGGING */ commands over a database
 link run in parallel? (Possibly over a dedicated network.) This is fast and
 is easier (in case you don't hit any compatibility problems). If you happen
 to be running on Windows for some reason, you could also try using the named
 pipes network protocol instead of TCP.

 Tanel.

 - Original Message - 
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Sent: Monday, August 25, 2003 11:04 PM


  Thanks Tanel
  We will undoubtedly use export/import for the many small tables. We are
  looking for alternatives that will perform even faster. The insert phase
  seems to be the slowest part, and that is where SQL*Loader in direct path
  really shines. Now the next issue is how to produce a CSV file as fast as
  possible, and so far it looks like Jared's Perl program is the clear
  winner.
 
  Dennis Williams
  DBA, 80%OCP, 100% DBA
  Lifetouch, Inc.
  [EMAIL PROTECTED]
 
 
  -Original Message-
  Sent: Monday, August 25, 2003 1:05 PM
  To: Multiple recipients of list ORACLE-L
 
 
  Hi!
 
  Spooling from sqlplus is VERY slow.
  Is the source database Oracle? Then use export/import.
  If not, is there an ODBC driver for the source database? Then use Oracle
  heterogeneous services and do your transfer directly, without any
  intermediate files.
  Or use some very expensive software for doing this simple job...
 
  Tanel.
  P.S. if you definitely want to spool to textfile fast, Sparky could be
 what
  you want...
 
  - Original Message - 
  To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
  Sent: Monday, August 25, 2003 7:24 PM
 
 
   Jared - Thanks for posting this. At the moment, we are preparing to move a
   large database to a new server. Based on the advice you posted several
   months ago, we have been testing SQL*Loader and, as you predicted, it is
   indeed fast. But also as you predicted, using SQL*Plus to create a CSV
   isn't very fast. Am I correct in assuming the dump.sql will not be the
   best choice for large tables? We are installing perl since you mentioned
   that
RE: Query results to .csv

2003-08-26 Thread DENNIS WILLIAMS
Tanel
   Thanks for the ideas. My simple mind says that by fully utilizing the
hardware I can minimize the overall time. But today we were on a conference
call with the application vendor and they were touting their utility for
handling this. Everyone around the table seemed pretty impressed, so maybe I
shouldn't be worrying about all these silly Oracle methods. Anyway, I said
that I would test their utility, but since we don't have the server ready,
that is all a little way off.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED] 



Re: Query results to .csv/use of dblinks

2003-08-26 Thread M Rafiq
Tanel,

A quick question: how many open dblinks have you used at one time without
any issues? The default setting in init.ora is 4 (if I am not wrong) and I
have never used more than that. If Dennis wants to use more than 4 dblinks
at one time, he should modify this parameter (OPEN_LINKS) in init.ora, right?

Regards
Rafiq
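For reference, the parameter in question is OPEN_LINKS (default 4). A hypothetical init.ora fragment, with illustrative values:

```
# init.ora (values illustrative, not from the thread)
open_links = 10                # max open dblink connections per session
open_links_per_instance = 10   # related limit for migratable (XA) connections
```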







RE: Query results to .csv/use of dblinks

2003-08-26 Thread Stephane Paquette
You can also close the dblink to avoid having many open idle sessions on the
remote database.
alter session close database link dblink;


Stephane Paquette
Administrateur de bases de donnees
Database Administrator
Standard Life
www.standardlife.ca
Tel. (514) 499-7999 7470 and (514) 925-7187
[EMAIL PROTECTED]




RE: Query results to .csv/use of dblinks

2003-08-26 Thread M Rafiq
Thanks for your input. The discussion here is about making maximum use of
dblinks for data load/transfer from one server to another.

Regards
Rafiq







Re: Query results to .csv

2003-08-26 Thread Tanel Poder
Hi!

What do your customers care about: the usability and uptime of their app, or
the utilization of your server?

If you use the direct exp and imp method:
1) you read data from disk (server process reading the oracle datafiles)
2) you write data to disk (the expfile)
3) you read data from disk (ftp reading the expfile)
4) you send data over the network (quite efficient if using ftp)
5) you write data to disk
6) you read data from disk (imp process reading the expfile)
7) you write data to disk (DBWR in the background)
8) you write redo via LGWR too, since there is no direct or nologging imp
method

Using dblinks:
1) you read data from disk (server process reading the oracle datafiles)
2) you transfer data over the network (not as efficient as ftp if untuned)
3) you write data to disk (server process doing direct writes when using the
append hint)

Nothing more; because of the append + nologging hints, no redo has to be
generated (provided that you build indexes on the tables afterwards).

So yes, since dblink transfer can be much more efficient, it probably won't
utilize your servers as much. But you can always do the job 10 times in a
row, or just start compressing some big files on the same disk; then your
utilization should be acceptable as well.

Btw, these silly Oracle methods can give you valuable knowledge of how
Oracle works, and they don't cost you money - only some time for learning.
And learning time, as we all (hopefully) know, isn't wasted time.

Tanel.



Re: Query results to .csv/use of dblinks

2003-08-26 Thread Tanel Poder
Hi!

In one migration of an about 600GB DB I used 2 dblinks (loading different
partitions in parallel with separate statements), then added a 3rd link,
after which the bottleneck seemed to be the network. (I saw full network
utilization from perfmon on Windows; I wasn't much of a wait-interface user
back then.) But your mileage may vary.

So, full hardware utilization is definitely good, at least during
migrations, but before thinking about that, you have to put together an
optimal migration path and methodology.

Tanel.

- Original Message - 
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Tuesday, August 26, 2003 10:54 PM


 Tanel,

 A quick question? How many open dblinks you have used at one time without
 any issues? Default setting in init.ora is 4(if I am not wrong) and I
never
 used it more than that. If Dennis wants to use more than 4 dblinks at one
 time, he should modify this param(open_dblinks) in init.ora, right.

 Regards
 Rafiq








 Reply-To: [EMAIL PROTECTED]
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Date: Tue, 26 Aug 2003 09:14:26 -0800

 Hi!

 What is your goal? To finish data transfer in the shortest time or keep
 hardware utilization maximum during transfer?
 I think you should concentrate on keeping the time for doing data transfer
 low.

 Depending on your network - in case of gigabit (or 10 Gb) you could look
at
 enabling jumbo frames, which enable ethernet packets up to 9000 bytes.
Also
 set (SDU=32768) in your listener and tnsnames.oras (you can set it with
 normal 1500 byte frames as well).

 About parallellism, you might want to run several bulk inserts over dblink
 to fully utilize your network (fill the gaps when one session is busy
 inserting, thus not using the network). But if your source disk array (or
 CPUs) are slow then they might be the bottleneck.
 If you got SAN and a temporary spare server, do a BCV copy or mirror
split,
 open up several clones of a database and copy data from all of them.

 Also, when you have SAN, there's no need for network transfer at all - you
 just mount the filesystem with dump/exportfiles on target database and do
 the load from there. If your operating systems are different, then just
dump
 to raw device, with pipe and tar for example, or completely raw,
remembering
 your data sizes. Note that in some (older) operating systems there were
few
 blocks in beginning of device which were used (and written) by operating
 system (Tru64 had the largest I know - 64k). Thus you had to make sure you
 didn't write anything there (oseek=65536 for dd for example).

 If downtime isn't an issue for you, it might not be worth trying above
 recommendations, but in RVLDBs (really very large databases) all of this
can
 help a lot.

 Tanel.
 - Original Message -
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Sent: Tuesday, August 26, 2003 5:54 PM


   Taniel, Binley
  Thanks for the excellent suggestions.
  At this point we have been testing with two smaller test systems,
 moving
   a single table at a time, but initial indications are that the
 performance
   order is:
   1. Perl dump to CSV / ftp / SQL*Loader
   2. Copy across database link
   3. Export/ ftp / import
  
   I need to re-run the tests once the target production system is
available
 to
   re-confirm which is faster. I am pretty confident in the ability to run
   multiple SQL*Loader and import sessions simultaneously. I am a little
   nervous about the ability of the database link to scale to enough
   simultaneous sessions to keep the RAID sets maxed out on the target
 system.
   Several years ago when I was doing a large conversion I hit that limit.
 Some
   days I wonder if we are the beneficiary or victim of our prior
 experience.
   Oh well thanks for all the good suggestions, on to testing, testing,
   testing.
  
   Dennis Williams
   DBA, 80%OCP, 100% DBA
   Lifetouch, Inc.
   [EMAIL PROTECTED]
  
  
   -Original Message-
   Sent: Monday, August 25, 2003 7:09 PM
   To: Multiple recipients of list ORACLE-L
  
  
   Hi!
  
   What about several insert /*+ APPEND NOLOGGING */ commands over
database
   link run in parallel? (Possibly over dedicated network). This is fast
and
 is
   easier (in case you don't hit any compatibility problems). If you
happen
 to
   be running on Windows for some reason, you could try to use named pipes
   network protocol instead of TCP as well.
  
   Tanel.
  
   - Original Message -
   To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
   Sent: Monday, August 25, 2003 11:04 PM
  
  
Thanks Tanel
  We will undoubtedly use export/import for the many small tables. We
 are
looking for alternatives that will perform even faster. The insert
 phase
seems to be the slowest part, and that is where SQL*Loader in direct
 path
really shines. Now the next issue is how to produce a CSV file as fast
 as
possible, and so far it looks like Jared's Perl program is the clear winner.

Re: Query results to .csv

2003-08-25 Thread Ron Rogers
Imran,
 Select COL1 ||','|| COL2 ||','|| COL3 from table...
will create a comma-delimited file that you can SPOOL to an OS file.
Or you could use a third party product.
Ron

 [EMAIL PROTECTED] 08/25/03 08:39AM 
Hi,

What's the best way to write the results of a SQL query to a CSV file?

Thanks.

_
Hotmail messages direct to your mobile phone
http://www.msn.co.uk/msnmobile 

-- 
Please see the official ORACLE-L FAQ: http://www.orafaq.net 
-- 
Author: Imran Ashraf
  INET: [EMAIL PROTECTED] 

Fat City Network Services-- 858-538-5051 http://www.fatcity.com 
San Diego, California-- Mailing list and web hosting services
-
To REMOVE yourself from this mailing list, send an E-Mail message
to: [EMAIL PROTECTED] (note EXACT spelling of 'ListGuru') and in
the message BODY, include a line containing: UNSUB ORACLE-L
(or the name of mailing list you want to be removed from).  You may
also send the HELP command for other information (like subscribing).


Re: Query results to .csv

2003-08-25 Thread rgaffuri
If straight SQL, spool and just embed commas:
spool myfile
select col1||','||col2
from tab;

If in PL/SQL, do the same thing with utl_file.
 
 From: Imran Ashraf [EMAIL PROTECTED]
 Date: 2003/08/25 Mon AM 08:39:03 EDT
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Subject: Query results to .csv
 
 Hi,
 
 What's the best way to write the results of a SQL query to a CSV file?
 
 Thanks.
 


RE: Query results to .csv

2003-08-25 Thread Guido Konsolke
Hi,

do some reading in the SQL*PLUS manual.
Read about:
- SET command (many useful options, e. g. LINESIZE, HEADING, COLSEP)
- SPOOL command (spools the result)

hth and greetings,
Guido

 [EMAIL PROTECTED] 25.08.2003  14.39 Uhr 
Hi,

What's the best way to write the results of a SQL query to a CSV file?

Thanks.




RE: Query results to .csv

2003-08-25 Thread Hatzistavrou John
Well, you might as well search asktom.oracle.com for owa_sylk

Kind Regards,


Hatzistavrou Yannis

-Original Message-
Sent: Monday, August 25, 2003 3:39 PM
To: Multiple recipients of list ORACLE-L

Hi,

What's the best way to write the results of a SQL query to a CSV file?

Thanks.



RE: Query results to .csv

2003-08-25 Thread Jack van Zanen

That can be achieved by setting COLSEP in SQL*Plus as well, and then
select * from the table (less typing)  :-)

It does not help, though, when a text field also contains commas.


Jack
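The comma-in-field caveat above is exactly what a real CSV writer solves by quoting. A minimal stand-alone sketch (the rows are made up; any field containing a comma or a quote gets quoted, and embedded quotes are doubled):

```python
import csv
import io

# Made-up rows: the second and third contain a comma and a quote,
# which naive ','-concatenation would mangle.
rows = [(7839, "KING"), (7499, "ALLEN, JR."), (7521, 'WARD "W."')]

buf = io.StringIO()
writer = csv.writer(buf)   # default dialect quotes fields only when needed
writer.writerows(rows)
print(buf.getvalue())
```

The same quoting rule is what SQL*Loader's `optionally enclosed by '"'` clause expects on the load side.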



-Original Message-
Sent: Monday, August 25, 2003 3:07 PM
To: Multiple recipients of list ORACLE-L


Imran,
 Select COL1 ||','|| COL2||','||COL3 from table...
will create a comma-delimited file that you can SPOOL to an OS file. Or you
could use a third party product. Ron

 [EMAIL PROTECTED] 08/25/03 08:39AM 
Hi,

What's the best way to write the results of a SQL query to a CSV file?

Thanks.




RE: Query results to .csv

2003-08-25 Thread Chelur, Jayadas {PBSG}
This is probably the easiest way, if you are
doing it from SQL*Plus:

SET COLSEP ,



-Original Message-
Sent: Monday, August 25, 2003 8:39 AM
To: Multiple recipients of list ORACLE-L


Hi,

What's the best way to write the results of a SQL query to a CSV file?

Thanks.



Re: Query results to .csv

2003-08-25 Thread Jared Still
http://www.cybcon.com/~jkstill/util/dump/dump.html

On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
 Hi,
 
 What's the best way to write the results of a SQL query to a CSV file?
 
 Thanks.
 


RE: Query results to .csv

2003-08-25 Thread DENNIS WILLIAMS
Jared - Thanks for posting this. At the moment, we are preparing to move a
large database to a new server. Based on the advice you posted several
months ago, we have been testing SQL*Loader and as you predicted, it is
indeed fast. But also as you predicted, using SQL*Plus to create a CSV isn't
very fast. Am I correct in assuming the dump.sql will not be the best choice
for large tables? We are installing perl since you mentioned that would
probably be much faster.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED] 


-Original Message-
Sent: Monday, August 25, 2003 9:40 AM
To: Multiple recipients of list ORACLE-L


http://www.cybcon.com/~jkstill/util/dump/dump.html

On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
 Hi,
 
 What's the best way to write the results of a SQL query to a CSV file?
 
 Thanks.
 


RE: Query results to .csv

2003-08-25 Thread John Flack
That's one of the reasons I use a development tool (mine is TOAD, but there are others 
that can do the job).  I got tired of typing queries like:
SELECT '"'||ename||'",'||
   TO_CHAR(salary)
  FROM emp

-Original Message-
Sent: Monday, August 25, 2003 8:39 AM
To: Multiple recipients of list ORACLE-L


Hi,

What's the best way to write the results of a SQL query to a CSV file?

Thanks.



RE: Query results to .csv

2003-08-25 Thread Jared . Still
 of owner
  will ignore -table settings

 -rowlimit  limit number of rows returned

 -longlen if longs are in the table, set this
  to the maximum length you want.
  defaults to 65535

 -bincol use to specify columns that should be dumped
  in hex format. columns with binary data tend
  to cause problems in text dumps.
  e.g. -bincol table_name=column_name,column_name,...

 dunldr -database orcl -username system -password manager \
 -owner scott -directory scott.tables \
 -header \
 -table emp \
 -table dept \
 -table sales

 dunldr -database orcl -username system -password manager \
 -owner scott \
 -dateformat 'mm/dd/' \
 -header \
 -schemadump \
 -bincol xml_data=payload,header,authorization \
 -bincol app_notes=text


};

exit $exitCode ? $exitCode : 0;
}


package Tables;

sub new {

my $pkg = shift;
my $class = ref($pkg) || $pkg;

my ( $dbh, $optionHash ) = @_;

my $tableHash;
if ( grep(/^SCHEMADUMP$/, @{$optionHash->{table}} ) ) {
# get all tables of owner
my $sql = q{
select table_name
from all_tables
where owner = ?
};
my $sth = $dbh->prepare($sql);
$sth->execute(uc($optionHash->{owner}));
my @tableArray;
while( my $ary = $sth->fetchrow_arrayref ) {
push(@tableArray, $ary->[0]);
}
$tableHash = setTables(\@tableArray);
} else {
$tableHash = setTables(\@{$optionHash->{table}});
}

bless $tableHash, $class;
return $tableHash;

}


=head1 setTables

 make a neat hash of the form TABLE_NAME => 'table_name.dump'
 all table names upper case, all file names lower case
 for dump file names - Perl is awesome

=cut


sub setTables {
my ($tableArray) = shift;

my %tables = map(
split(/:/, $_),
map(
$_.':'.lc($_).'.txt',
split(
/:/,
uc(join(':',@{$tableArray}))
)
)
);

# uncomment these lines to see it
#use Data::Dumper;
#print Dumper(\%tables);
#exit;

my $hashRef = \%tables;
return $hashRef;
}


sub createCtl {
my($self,%args) = @_;

my @columns = @{$args{COLUMNS}};
my %colOrder = %{$args{COLORDER}};

if ( $args{HEXCOLS} ) {
for my $hexdumpcol ( @{$args{HEXCOLS}} ) {
$columns[$colOrder{uc($hexdumpcol)}] =
$columns[$colOrder{uc($hexdumpcol)}] .
qq{ "hex_to_raw(:$columns[$colOrder{uc($hexdumpcol)}])"};
}
}

my $ctlFile = $args{DIRECTORY}. '/' . lc($args{TABLE}) . '.ctl';
my $ctlFh = new IO::File();
$ctlFh->open("> $ctlFile") || die "cannot create file $ctlFile - $!\n";
$ctlFh->print("load data\n");
$ctlFh->print("infile '$args{DUMPFILE}'\n");
$ctlFh->print("into table $args{TABLE}\n");
$ctlFh->print(q{fields terminated by ',' optionally enclosed by '"'} . "\n");
$ctlFh->print("(\n");
$ctlFh->print( "\t" . join(",\n\t", @columns) . "\n");
$ctlFh->print(")\n");
$ctlFh->close;


my $parFile = $args{DIRECTORY}. '/' . lc($args{TABLE}) . '.par';
my $parFh = new IO::File();
$parFh->open("> $parFile") || die "cannot create file $parFile - $!\n";
$parFh->print("userid = $args{SCHEMA}\n");
$parFh->print("control = " . lc($args{TABLE}) . ".ctl\n");
$parFh->print("log = " . lc($args{TABLE}) . ".log\n");
$parFh->print("bad = " . lc($args{TABLE}) . ".bad\n");
$parFh->close;

}
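For a table like EMP, createCtl above would emit a SQL*Loader control file along these lines (contents reconstructed from the print statements; the table and column names are examples, not from the original post):

```
load data
infile 'emp.txt'
into table EMP
fields terminated by ',' optionally enclosed by '"'
(
	EMPNO,
	ENAME,
	SAL
)
```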






DENNIS WILLIAMS [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
08/25/2003 09:24 AM
Please respond to ORACLE-L


To:Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc:
Subject:RE: Query results to .csv


Jared - Thanks for posting this. At the moment, we are preparing to move
large database to a new server. Based on the advice you posted several
months ago, we have been testing SQL*Loader and as you predicted, it is
indeed fast. But also as you predicted, using SQL*Plus to create a CSV isn't
very fast. Am I correct in assuming the dump.sql will not be the best choice
for large tables? We are installing perl since you mentioned that would
probably be much faster.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED] 


-Original Message-
Sent: Monday, August 25, 2003 9:40 AM
To: Multiple recipients of list ORACLE-L


http://www.cybcon.com/~jkstill/util/dump/dump.html






Re: Query results to .csv

2003-08-25 Thread Tanel Poder
Hi!

Spooling from sqlplus is VERY slow.
Is the source database Oracle? Then use export/import.
If not, is there an ODBC driver for the source database? Then use Oracle
heterogeneous services and do your transfer directly, without any
intermediate files.
Or use some very expensive software for doing this simple job...

Tanel.
P.S. if you definitely want to spool to textfile fast, Sparky could be what
you want...

- Original Message - 
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Monday, August 25, 2003 7:24 PM


 Jared - Thanks for posting this. At the moment, we are preparing to move
 large database to a new server. Based on the advice you posted several
 months ago, we have been testing SQL*Loader and as you predicted, it is
 indeed fast. But also as you predicted, using SQL*Plus to create a CSV
isn't
 very fast. Am I correct in assuming the dump.sql will not be the best
choice
 for large tables? We are installing perl since you mentioned that would
 probably be much faster.

 Dennis Williams
 DBA, 80%OCP, 100% DBA
 Lifetouch, Inc.
 [EMAIL PROTECTED]


 -Original Message-
 Sent: Monday, August 25, 2003 9:40 AM
 To: Multiple recipients of list ORACLE-L


 http://www.cybcon.com/~jkstill/util/dump/dump.html

 On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
  Hi,
 
  What's the best way to write the results of a SQL query to a CSV file?
 
  Thanks.
 


RE: Query results to .csv

2003-08-25 Thread DENNIS WILLIAMS
Thanks Tanel
  We will undoubtedly use export/import for the many small tables. We are
looking for alternatives that will perform even faster. The insert phase
seems to be the slowest part, and that is where SQL*Loader in direct path
really shines. Now the next issue is how to produce a CSV file as fast as
possible, and so far it looks like Jared's Perl program is the clear winner.

Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED] 


-Original Message-
Sent: Monday, August 25, 2003 1:05 PM
To: Multiple recipients of list ORACLE-L


Hi!

Spooling from sqlplus is VERY slow.
Is the source database Oracle? Then use export/import.
If not, is there an ODBC driver for the source database? Then use Oracle
heterogeneous services and do your transfer directly, without any
intermediate files.
Or use some very expensive software for doing this simple job...

Tanel.
P.S. if you definitely want to spool to textfile fast, Sparky could be what
you want...

- Original Message - 
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Monday, August 25, 2003 7:24 PM


 Jared - Thanks for posting this. At the moment, we are preparing to move
 large database to a new server. Based on the advice you posted several
 months ago, we have been testing SQL*Loader and as you predicted, it is
 indeed fast. But also as you predicted, using SQL*Plus to create a CSV
isn't
 very fast. Am I correct in assuming the dump.sql will not be the best
choice
 for large tables? We are installing perl since you mentioned that would
 probably be much faster.

 Dennis Williams
 DBA, 80%OCP, 100% DBA
 Lifetouch, Inc.
 [EMAIL PROTECTED]


 -Original Message-
 Sent: Monday, August 25, 2003 9:40 AM
 To: Multiple recipients of list ORACLE-L


 http://www.cybcon.com/~jkstill/util/dump/dump.html

 On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
  Hi,
 
  What's the best way to write the results of a SQL query to a CSV file?
 
  Thanks.
 

Re: Query results to .csv

2003-08-25 Thread Binley Lim
If both servers are on the same network with reasonable connection speed,

- create table target_table nologging as select * from source_table@dblink

will beat all other options, as it creates the table in one step - no
writing/transferring/reading of intermediate files in between, and nologging
uses the same direct-path functionality as SQL*Loader. Simple enough to
whip out the stopwatch and test whether it applies in your case or not.

- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Tuesday, August 26, 2003 8:04 AM


 Thanks Tanel
   We will undoubtedly use export/import for the many small tables. We are
 looking for alternatives that will perform even faster. The insert phase
 seems to be the slowest part, and that is where SQL*Loader in direct path
 really shines. Now the next issue is how to produce a CSV file as fast as
 possible, and so far it looks like Jared's Perl program is the clear
winner.

 Dennis Williams
 DBA, 80%OCP, 100% DBA
 Lifetouch, Inc.
 [EMAIL PROTECTED]


 -Original Message-
 Sent: Monday, August 25, 2003 1:05 PM
 To: Multiple recipients of list ORACLE-L


 Hi!

 Spooling from sqlplus is VERY slow.
 Is the source database Oracle? Then use export/import.
 If not, is there an ODBC driver for the source database? Then use Oracle
 heterogeneous services and do your transfer directly, without any
 intermediate files.
 Or use some very expensive software for doing this simple job...

 Tanel.
 P.S. if you definitely want to spool to textfile fast, Sparky could be
what
 you want...

 - Original Message -
 To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
 Sent: Monday, August 25, 2003 7:24 PM


  Jared - Thanks for posting this. At the moment, we are preparing to move
  large database to a new server. Based on the advice you posted several
  months ago, we have been testing SQL*Loader and as you predicted, it is
  indeed fast. But also as you predicted, using SQL*Plus to create a CSV
 isn't
  very fast. Am I correct in assuming the dump.sql will not be the best
 choice
  for large tables? We are installing perl since you mentioned that would
  probably be much faster.
 
  Dennis Williams
  DBA, 80%OCP, 100% DBA
  Lifetouch, Inc.
  [EMAIL PROTECTED]
 
 
  -Original Message-
  Sent: Monday, August 25, 2003 9:40 AM
  To: Multiple recipients of list ORACLE-L
 
 
  http://www.cybcon.com/~jkstill/util/dump/dump.html
 
  On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
   Hi,
  
   What's the best way to write the results of a SQL query to a CSV file?
  
   Thanks.
  
  
   --
   Please see the official ORACLE-L FAQ: http://www.orafaq.net
   --
   Author: Imran Ashraf
 INET: [EMAIL PROTECTED]
  
   Fat City Network Services-- 858-538-5051 http://www.fatcity.com
   San Diego, California-- Mailing list and web hosting services
   -
   To REMOVE yourself from this mailing list, send an E-Mail message
   to: [EMAIL PROTECTED] (note EXACT spelling of 'ListGuru') and in
   the message BODY, include a line containing: UNSUB ORACLE-L
   (or the name of mailing list you want to be removed from).  You may
   also send the HELP command for other information (like subscribing).
  
 
 
   --
   Author: Jared Still
 INET: [EMAIL PROTECTED]
   --
   Author: DENNIS WILLIAMS
 INET: [EMAIL PROTECTED]
 


  --
  Author: Tanel Poder
    INET: [EMAIL PROTECTED]