Tanel,
Thanks for your response.
Regards
Rafiq
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Date: Tue, 26 Aug 2003 13:29:26 -0800
Hi!
In one migration of an approximately 600GB DB I used 2 dblinks (loading
different partitions in parallel with separate statements), then added a 3rd
link, after which the bottleneck seemed to be the network. (I saw full network
utilization from perfmon on Windows; I wasn't much of a wait-interface user
back then.)
Hi!
What do your customers care about: usability and uptime of their app, or
utilization of your server?
If you use the direct exp and imp method:
1) you read data from disk (server process reading from oracle datafiles)
2) you write data to disk (expfile)
3) you read data from disk (ftp reading expfile)
4) you write data to disk (ftp writing the expfile on the target)
Thanks for your input. The discussion here is about making maximum use of
dblinks for data load/transfer from one server to another.
Regards
Rafiq
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Date: Tue, 26 Aug 2003 12:49:27 -0800
You can also close the dblink to avoid having many open idle sessions on the
remote database.
alter session close database link dblink;
Stephane Paquette
Administrateur de bases de donnees
Database Administrator
Standard Life
www.standardlife.ca
Tel. (514) 499-7999 7470 and (514) 925-7187
[EMAIL PROTECTED]
Tanel,
A quick question: how many open dblinks have you used at one time without
any issues? The default setting in init.ora is 4 (if I am not wrong) and I have
never used more than that. If Dennis wants to use more than 4 dblinks at one
time, he should raise this parameter (OPEN_LINKS) in init.ora, right?
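For reference, a minimal init.ora sketch of that change (the value 10 is just an example; OPEN_LINKS is a static parameter, so the instance has to be restarted for it to take effect):

```
# init.ora
open_links = 10    # allow up to 10 concurrently open dblinks per session
                   # (default is 4)
```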
Tanel
Thanks for the ideas. My simple mind says that by fully utilizing the
hardware I can minimize the overall time. But today we were on a conference
call with the application vendor and they were touting their utility for
handling this. Everyone around the table seemed pretty impressed, so ma
Hi!
What is your goal: to finish the data transfer in the shortest time, or to
keep hardware utilization at maximum during the transfer?
I think you should concentrate on keeping the time for doing the data
transfer low.
Depending on your network - in the case of gigabit (or 10 Gb) Ethernet - you
could look at enabling jumbo frames.
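Jumbo frames are enabled per interface; a hypothetical Linux example (the interface name is made up, and every NIC and switch on the path must support the larger MTU):

```
ifconfig eth1 mtu 9000    # raise MTU from the 1500-byte Ethernet default
```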
Tanel, Binley
Thanks for the excellent suggestions.
At this point we have been testing with two smaller test systems, moving
a single table at a time, but initial indications are that the performance
order is:
1. Perl dump to CSV / ftp / SQL*Loader
2. Copy across database link
3. Export / ftp / Import
John - Thanks for the heads-up. My intention is to truncate all tables on
the target system beforehand, so that should reset the HWM. Then I have a
lot of tables to load, so my plan is to load multiple tables simultaneously,
trying for separate RAID sets, but use only a single insert on an individual
table.
Spooling into a "pipe" file and concurrently SQL*Loading from it is even
faster, as the loading starts while the spool is still underway, i.e. being
built. Both spool and load run concurrently. Also, NO disk space is consumed.
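A rough sketch of the pipe trick (all file and login names here are hypothetical; in the real migration sqlplus would be the writer and sqlldr the reader, e.g. `sqlplus -s scott/tiger @dump_emp.sql &` followed by `sqlldr scott/tiger control=emp.ctl data=emp.pipe`). The generic version below shows the concurrency itself:

```shell
#!/bin/sh
# Writer and reader share a FIFO, so rows flow from producer to consumer
# without an intermediate disk file ever being materialized.
mkfifo /tmp/demo.pipe
# Writer: stands in for the sqlplus session spooling CSV rows into the pipe.
printf 'SMITH,800\nALLEN,1600\n' > /tmp/demo.pipe &
# Reader: stands in for sqlldr; it consumes rows as they arrive.
wc -l < /tmp/demo.pipe
wait
rm /tmp/demo.pipe
```

The writer blocks until the reader opens the pipe, so both ends run in lockstep and no extra disk space is used.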
HTH
Oops... it's Jared's script.
I did not know that until now.
Jp.
26-08-2003 11:14:26, Prem Khanna J <[EMAIL PROTECTED]> wrote:
>Imran,
>Try this script.
>i have used it often.
>Jp.
--
Please see the official ORACLE-L FAQ: http://www.orafaq.net
--
Author: Prem Khanna J
INET: [EMAIL PROTECTED]
Fat City Network Services-- 858-538-5051 http://www.fatcity.com
San Diego, California-- Mailing list and web host
Imran,
Try this script.
I have used it often.
Jp.
One caveat comes to mind when considering direct load... Because the buffer
is constructed and written directly, the kernel can perform INSERTs only
*above* the HWM. If the rate at which you perform direct INSERTs is high
(i.e. multiple runs in a day), then you may end up with an artificially
inflated HWM and a segment that keeps growing even though much of it is
empty.
Hi!
What about several INSERT /*+ APPEND */ commands into a NOLOGGING table over
a database link, run in parallel (possibly over a dedicated network)? This is
fast and easier (in case you don't hit any compatibility problems). If you
happen to be running on Windows for some reason, you could try to use named
pipes.
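A rough sketch of one such parallel stream (table, link, and column names are hypothetical; note that NOLOGGING is a table attribute rather than a hint, so it is set separately):

```sql
ALTER TABLE big_tab NOLOGGING;

-- each session covers a different partition's key range
INSERT /*+ APPEND */ INTO big_tab
SELECT * FROM big_tab@source_link
WHERE part_key BETWEEN 1 AND 1000000;

COMMIT;  -- a direct-path insert must commit before the table can be re-read
```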
If both servers are on the same network with a reasonable connection speed,
- create table target_table nologging as select * from [EMAIL PROTECTED]
will beat all other options, as it creates the table in one step - no
writing/transferring/reading of intermediate files in between - and nologging
keeps redo generation to a minimum as well.
Thanks Tanel
We will undoubtedly use export/import for the many small tables. We are
looking for alternatives that will perform even faster. The insert phase
seems to be the slowest part, and that is where SQL*Loader in direct path
really shines. Now the next issue is how to produce a CSV file as quickly as
possible.
Hi!
Spooling from sqlplus is VERY slow.
Is the source database Oracle? Then use export/import.
If not, is there an ODBC driver for the source database? Then use Oracle
Heterogeneous Services and do your transfer directly, without any
intermediate files.
Or use some very expensive software for doing this.
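Heterogeneous Services via ODBC (Generic Connectivity) is mostly configuration; a minimal sketch, with the agent SID and DSN names entirely hypothetical:

```
# initHSODBC.ora for a hypothetical agent SID "HSODBC"
HS_FDS_CONNECT_INFO = source_dsn    # ODBC DSN defined on the Oracle host
HS_FDS_TRACE_LEVEL  = off
```

After adding matching listener.ora and tnsnames.ora entries for the agent, the transfer is plain SQL: create a database link that uses the agent's alias and run `INSERT INTO local_tab SELECT * FROM "remote_tab"@the_link;` with no intermediate files.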
$ctlFh->print(q{fields terminated by ',' optionally enclosed by '"'} . "\n");
$ctlFh->print("(\n");
$ctlFh->print("\t" . join(",\n\t", @columns) . "\n");
$ctlFh->print(")\n");
That's one of the reasons I use a development tool (mine is TOAD, but there are others
that can do the job). I got tired of typing queries like:
SELECT '"'||ename||'",'||
TO_CHAR(salary)
FROM emp
Jared - Thanks for posting this. At the moment, we are preparing to move a
large database to a new server. Based on the advice you posted several
months ago, we have been testing SQL*Loader and, as you predicted, it is
indeed fast. But also as you predicted, using SQL*Plus to create a CSV isn't
very fast.
http://www.cybcon.com/~jkstill/util/dump/dump.html
On Mon, 2003-08-25 at 05:39, Imran Ashraf wrote:
> Hi,
>
> Whats the best way to write the results of a SQL query to a CSV file?
>
> Thanks.
This is probably the easiest way, if you are doing it from SQL*Plus:
SET COLSEP ","
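A fuller SQL*Plus sketch along those lines (table and file names hypothetical). Note that COLSEP pads columns to their display width and does not quote strings, so a true CSV usually still needs concatenation:

```sql
SET COLSEP ","
SET PAGESIZE 0 HEADING OFF FEEDBACK OFF TRIMSPOOL ON LINESIZE 2000
SPOOL emp.csv
SELECT ename, sal, deptno FROM emp;
SPOOL OFF
```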
That can be achieved by setting COLSEP in SQL*Plus as well, and then
SELECT * FROM table (less typing) :-)
It does not help when a text field also contains commas, though.
Jack
Well, you might as well search asktom.oracle.com for owa_sylk.
Kind Regards,
Hatzistavrou Yannis
Hi,
Do some reading in the SQL*Plus manual. Read about:
- the SET command (many useful options, e.g. LINESIZE, HEADING, COLSEP)
- the SPOOL command (spools the result)
HTH and greetings,
Guido
If straight SQL: spool and then just embed commas:
spool myfile
select col1||','||col2
from tab;
If in PL/SQL, do the same thing with UTL_FILE.
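A minimal UTL_FILE sketch of that (directory, file, and table names are hypothetical; on 8i the target directory must be listed in the UTL_FILE_DIR parameter for FOPEN to succeed):

```sql
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('/tmp', 'emp.csv', 'w');
  FOR r IN (SELECT ename, sal FROM emp) LOOP
    UTL_FILE.PUT_LINE(f, r.ename || ',' || r.sal);
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/
```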
>
> From: "Imran Ashraf" <[EMAIL PROTECTED]>
> Date: 2003/08/25 Mon AM 08:39:03 EDT
> To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
> Subject
Imran,
Select COL1||','||COL2||','||COL3 from table...
will create a comma-delimited file that you can SPOOL to an OS file.
Or you could use a third-party product.
Ron