Janne,
Thanks very much for your wonderfully detailed suggestion. I'll definitely use
it to analyze the redo.
David
From: Jan Korecki <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: Re: insert in batch load
int
frequency. The table has two indexes. We don't set them to unusable
during inserting.
David
From: Jan Korecki <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: Re: insert in batch loading
Date: Tue, 25
checkpoint? Thanks for your inputs.
David
From: Mladen Gogala <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: Re: insert in batch loading
Date: Tue, 25 Nov 2003 12:49:28 -0800
What does it mean that redo log si
log size (100 MB) dominates the checkpoint frequency.
> The table has two indexes. We don't set them to unusable during inserting.
>
> David
>
>
> >From: Jan Korecki <[EMAIL PROTECTED]>
> >Reply-To: [EMAIL PROTECTED]
> >To: Multiple recipients of list ORA
t;[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: Re: insert in batch loading
Date: Tue, 25 Nov 2003 08:19:26 -0800
David Boyd wrote:
Hi All,
We have some batch loading jobs that truncate the tables first, then
insert i
We have 5 redo log groups. Each group has two members. Each member is
100 MB.
David
From: Mladen Gogala <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: Re: insert in batch loading
Date: Tue, 25 Nov 2003 07
David Boyd wrote:
Hi All,
We have some batch loading jobs that truncate the tables first, then
insert into the tables as select from tables through database link.
Those jobs run daily. Every time when those jobs run, they cause
"cannot allocate new log, Checkpoint not complete". All of tabl
How big are your redo logs? How many of them do you have?
On 11/25/2003 10:29:37 AM, David Boyd wrote:
> Hi All,
>
> We have some batch loading jobs that truncate the tables first, then insert
> into the tables as select from tables through database link. Those jobs run
> daily. Every time when
At 04:44 PM 10/17/2003, you wrote:
Got it, thanks to both of you, I really appreciate it.
maa
MaryAnn,
To get back to the original question, use a MOD function on rownum to get
your commit point. Check the docs to make sure, but try
If mod(rownum, 1000) = 0
then
commit;
end if;
Daniel Fink
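A hedged sketch of how the MOD-based commit point might fit into a PL/SQL loop (table names are invented; in a single INSERT ... SELECT there is no place to commit mid-statement, so a row-by-row loop is needed):

```sql
-- Hypothetical sketch: commit every 1000 rows while copying SRC_TAB
-- into DEST_TAB. All names are assumptions, not from the thread.
DECLARE
  v_count PLS_INTEGER := 0;
BEGIN
  FOR rec IN (SELECT * FROM src_tab) LOOP
    -- Record-style insert needs 9iR2+; on older versions list the columns.
    INSERT INTO dest_tab VALUES rec;
    v_count := v_count + 1;
    IF MOD(v_count, 1000) = 0 THEN
      COMMIT;
    END IF;
  END LOOP;
  COMMIT;  -- pick up the final partial batch
END;
/
```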
--
Assuming that the resource_id column is the PK (or at least unique) of the
RQMT table, something like this should work (albeit slowly). It's missing some
definitions, but this should give you an idea of how you _might_ proceed.
Kevin
DECLARE
RowCount NUMBER:= 0;
MAX
- no need for commits.
> >
> > Just back it up when finished.
> >
> > Jared
> > MaryAnn Atkinson <[EMAIL PROTECTED]>
> > Sent by: [EMAIL PROTECTED]
> > 10/16/2003 01:49 PM
MaryAnn,
To get back to the original question, use a MOD function on rownum to get
your commit point. Check the docs to make sure, but try
If mod(rownum, 1000) = 0
then
commit;
end if;
Daniel Fink
MaryAnn Atkinson wrote:
> I still don't get it...
> I don't know what I have done to have me
> MaryAnn Atkinson <[EMAIL PROTECTED]>
> Sent by: [EMAIL PROTECTED]
> 10/16/2003 01:49 PM
> Please respond to ORACLE-L
>
>
> To: Multiple recipients of list ORACLE-L
> <[EMAIL PROTECTED]>
> cc:
>
Nologging suspends redo log entry generation FOR DIRECT OPERATIONS
ONLY! Direct operations do not use SQL, they use Lisp (just kidding).
A direct insert preformats blocks and appends them as a whole above
the high-water mark, without using SQL in the process.
On 10/17/2003 02:43:29 PM, MaryAnn Atk
no need for commits.
>
> Just back it up when finished.
>
> Jared
> MaryAnn Atkinson <[EMAIL PROTECTED]>
> Sent by: [EMAIL PROTECTED]
> 10/16/2003 01:49 PM
> Please respond to ORACLE-L
> To: Multiple
[EMAIL PROTECTED] wrote:
Mladen Gogala wrote:
I believe that this would be the best solution:
DECLARE
RowCount NUMBER:= 0;
BEGIN
/* This will work if the RESOURCE table has the "parallel"
attribute set. In 8i, table needs to be partitioned as well */
EXECUTE IMMMEDIAT
Mladen Gogala wrote:
>
> I believe that this would be the best solution:
> DECLARE
> RowCount NUMBER:= 0;
>
> BEGIN
> /* This will work if the RESOURCE table has the "parallel"
> attribute set. In 8i, table needs to be partitioned as well */
>
> EXECUTE IMMMEDIA
Sent by: [EMAIL PROTECTED]
Subject: Re: insert and commit 1000 records at a time
Just back it up when finished.
Jared
MaryAnn Atkinson <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
10/16/2003 01:49 PM
Please respond to ORACLE-L
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
cc:
Subject: Re:
log into target database.
SQL> set long 32000 (or whatever if you have long datatype involved)
SQL> set arraysize 100
SQL> set copycommit 1000 <-- LOOKY!!
SQL> COPY FROM ${REMOTE_LOGIN}/[EMAIL PROTECTED] INSERT
${LOCAL_SCHEMA}.${THE_TABLE} USING ${QUERY};
In this case QUERY will probably be "s
--- [EMAIL PROTECTED] wrote:
> That will work, slowly.
> You might like to try something like this
> insert into resource
> nologging
> select * from rqmt
> append;
How's that committing every 1000 records?
> Read up on the 'append' and 'nologging' first.
???
> Maryann
That will work, slowly.
You might like to try something like this
insert into resource
nologging
select * from rqmt
append;
Read up on the 'append' and 'nologging' first.
Jared
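As quoted, that snippet is not valid Oracle syntax: APPEND is a hint, not a trailing keyword, and NOLOGGING is a table attribute rather than an INSERT clause. A sketch of what was probably meant (table names taken from the thread; treat the rest as an assumption):

```sql
-- NOLOGGING is set on the table; the APPEND hint requests a
-- direct-path insert above the high-water mark.
ALTER TABLE resource NOLOGGING;

INSERT /*+ APPEND */ INTO resource
SELECT * FROM rqmt;

-- A direct-path insert must be committed before the session can
-- read the table again.
COMMIT;
```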
Maryann Atkinson <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
10/16/2003 08:54 AM
Please respond to ORAC
I believe that this would be the best solution:
DECLARE
RowCount NUMBER:= 0;
BEGIN
/* This will work if the RESOURCE table has the "parallel"
attribute set. In 8i, table needs to be partitioned as well */
EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
SELE
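The block above is cut off; a hedged reconstruction of the parallel-DML pattern it appears to describe (table names from the thread, degree of parallelism and the rest assumed):

```sql
-- Parallel direct-path copy. Per the comment above, the target needs
-- the PARALLEL attribute (and must be partitioned on 8i).
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ PARALLEL(resource, 4) */ INTO resource
SELECT /*+ PARALLEL(rqmt, 4) */ * FROM rqmt;

COMMIT;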
Yeah, don't commit every 1000 records; do it in one shot. Committing in
batches is going to be much slower.
Why do you want to do it this way? I've done 100M-row inserts with just an
insert-select and one commit.
>
> From: Maryann Atkinson <[EMAIL PROTECTED]>
> Date: 2003/10/16 Thu AM 11:54:33 EDT
> To: Multiple
: Tuesday, September 23, 2003
4:05 PM
To: Multiple recipients of list ORACLE-L
Subject: RE: Insert performance
I have already bumped
that up quite a bit as well, but I still get many waits.
Thanks,
Rick Stephenson
Oracle Database Administrator
Ovid Technologies, Inc.
[EMAIL PROTECTED
22, 2003
11:10 PM
To: Multiple recipients of list
ORACLE-L
Subject: Re: Insert performance
At 09:44 PM 9/22/2003, you wrote:
Does anyone have any idea how to
improve performance for multiple inserts into a table that uses a sequence
generated primary key?
Is the sequence cache set to an
Hi!
Search for "surrogate key" in google for example.
You have to modify your generated primary key values to not hit the same
index blocks consecutively. For example, add another high-cardinality column to
your primary key, or if using meaningless primary keys then just have your
primary
At 09:44 PM 9/22/2003, you wrote:
Does
anyone have any idea how to improve performance for multiple inserts into
a table that uses a sequence generated primary
key?
Is the sequence cache set to an appropriately large value?
Justin Cave
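If the cache does turn out to be small, checking and enlarging it is a one-liner (sequence name and cache size here are placeholders, not from the thread):

```sql
-- Check the current cache size...
SELECT sequence_name, cache_size
  FROM user_sequences
 WHERE sequence_name = 'MY_PK_SEQ';

-- ...then enlarge it so concurrent sessions spend less time
-- serializing on the sequence.
ALTER SEQUENCE my_pk_seq CACHE 1000;
```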
I have approximately 6 concurrent inserts per second into t
Rick,
I haven't tried this myself but you could consider a reverse key index (depending on
your version).
That way multiple inserts won't go to the same block.
However, (from Perf Tuning 101) you will incur much more IO than a normal index if you
do range scans, so you'd need to consider how yo
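A minimal sketch of the reverse-key option being suggested (index, table, and column names are invented):

```sql
-- Reverse-key index: key bytes are stored reversed, so consecutive
-- sequence values land in different leaf blocks, easing insert
-- contention -- at the cost of making range scans on the key unusable.
CREATE INDEX my_tab_pk_ix ON my_tab (id) REVERSE;
```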
Guido,
Where are you getting the data from - other database tables? Because, you could
build a PL/SQL block that contains PL/SQL variables that can hold up to 32k
of data. You could build the string in the variable and insert it into the
LONG column.
Something like:
declare
str_var varchar2(8000
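Completing the idea as a hedged sketch (table and column names invented; a PL/SQL VARCHAR2 variable can hold up to 32767 bytes and binds directly into a LONG column):

```sql
DECLARE
  str_var VARCHAR2(32000);
BEGIN
  -- Build the large string however the data arrives; filler here.
  str_var := RPAD('x', 32000, 'x');
  -- Bind it straight into the LONG column.
  INSERT INTO my_docs (id, body) VALUES (1, str_var);
  COMMIT;
END;
/
```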
Hi Tom,
thanks for trying to help. Unfortunately I have no chance
because I've no account with write permissions. So it
seems impossible to put the data on the server through ftp.
Oh: and (of course!) no UTL_FILE_DIR defined in the init...ora.
Any other suggestions?
tia,
Guido
>>> [EMAIL PROTEC
Guido...
change the datatype! :-)
Of course, you can't.
Well, have you tried with a PL/SQL anonymous block?
or...
Oracle says that using BIND variables you can get more
than 4000 chars...
have you tried with bind vars?
HTH
JL
--- Guido Konsolke >
>
> > Hi dear list members,
> >
> > we run
Guido,
Can you use utl_file for this within a PL/SQL block?
Tom Mercadante
Oracle Certified Professional
-Original Message-
[mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 16, 2003 9:54 AM
To: Multiple recipients of list ORACLE-L
Hi dear list members,
we run a database 8.1.6.2
I'
Thanks for the answer, but this no way for me.
Ain't got sqlldr on my client pc and no telnet
to the server 8-((
>>> [EMAIL PROTECTED] 16.07.2003 16.59 Uhr >>>
Try using sql*loader. Put the input data string in a single file as the
input file.
- Original Message -
To: "Multiple recipient
Try using sql*loader. Put the input data string in a single file as the
input file.
- Original Message -
To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
Sent: Wednesday, July 16, 2003 9:54 AM
> Hi dear list members,
>
> we run a database 8.1.6.2
> I've run into a problem.
Let's follow the transaction and see what happens. I am sure there are
some errors and omissions, so please take with a grain of salt and add
other steps/clarifications.
1) Object resolution. If the object definition does not already exist in
the dictionary cache, it must be read from the data dict
I don't have any foreign key constraints on the table. Yes, but there are two
indexes on this table. Would that cause these high disk reads?
-ak
- Original Message -
To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
Sent: Monday, March 17, 2003 10:14 AM
> Constraint validations?
insert
into ... select * from ..
-Original Message-
From: AK [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 17, 2003 12:22 PM
To: Multiple recipients of list ORACLE-L
Subject: insert stmt disk reads
In sqlarea I am finding some insert statements
with high disk reads . why wo
Do you have many indexes on your system ?
A table insert often results in index updates;
and for large tables with many indexes you
usually find that some of the index leaf blocks
have to be read from disk.
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk
Now available One-day tutorials:
Constraint validations?
RF
-Original Message-
To: Multiple recipients of list ORACLE-L
Sent: 3/17/2003 11:22 AM
In sqlarea I am finding some insert statements with high disk reads.
Why would an insert statement generate disk reads? Any idea?
-ak
--
How
about foreign key constraints? - going to the parent table to see if the value
exists.
Tom Mercadante Oracle Certified Professional
-Original Message-
From: AK [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 17, 2003 12:22 PM
To: Multiple recipients of list ORACLE-L
Subject:
Partial List
Foreign Key Validation
Primary Key Validation
Reading blocks on the freelist for insert
Before Insert/After Insert Triggers
--
Daniel W. Fink
http://www.optimaldba.com
IOUG-A Live! April 27 - May 1, 2003 Orlando, FL
Sunday, April 27 8:30am - 4:30pm - Problem Solving with Or
Hi Sam,
What about this,
SQL> declare
       cRow varchar2(18);
     begin
       INSERT INTO MEMBER_PLAN_FUND
         (CLNT,PLAN,MKEY,FUND,TRADATE,ERKEY) VALUES
         ('0001','1',
          '000-3','ER',SYSDATE,Misc.GetDefaultErKey('0001','000-3'))
Sathish
Given that your high wait times relate not to the actual Oracle insert
process but to communication with the client, the first place I would look
is the connection between Oracle and Informatica. Are you going through
ODBC? There are options in SQL*Net and in the ODBC driver. I would als
Except for the primary key constraint, we don't have any other constraints or
indexes present during the load.
What do the SQL*Net wait events signify? Is there cause for concern
when we see those events?
Thanks,
Sathish.
On Wed, 23 Oct 2002 09:02:21 -0800, "Whittle Jerome Contr NCI"
<[EMAIL PROT
Title: RE: Insert too slow...
How many indexes are on DM_TRANS_PYMT_HIST? If there are a lot, you might want to drop the indexes, except for the primary key or unique constraints, insert the records, rebuild the dropped indexes, and analyze those indexes.
Jerry Whittle
ACIFICS DBA
NCI
D On Behalf Of "Naveen Nahata"
>><[EMAIL PROTECTED]>
>> Sent:Wednesday, August 14, 2002 1:19 PM
>> To: Multiple recipients of list ORACLE-L
>> Subject: RE: insert nologging parallel/noparallel and archiving
>>
>> In the second case you are
14, 2002 1:19 PM
> To: Multiple recipients of list ORACLE-L
> Subject: RE: insert nologging parallel/noparallel and archiving
>
> In the second case you are also selecting from sequence which might have
> generated the logs. Use logminer and get the details of what was log
In the second case you are also selecting from sequence which might have
generated the logs. Use logminer and get the details of what was logged.
Naveen
-Original Message-
Sent: Wednesday, August 14, 2002 10:14 PM
To: Multiple recipients of list ORACLE-L
Rick,
These are the transacti
Rick,
These are the transactions:
This one does not create any logs (nothing was written
into arch directory);
insert /*+ parallel (egurev1.offr,16) */ into
egurev1.offr
nologging
(select
a.PRDCT_ID ,
a.OFFR_RSPNS_TYP_CDE ,
a.PRTY_TYP_NBR ,
a.PREFR_IND ,
a.OFFR_CNT ,
a.PROC_M
cc:
Sent by: [EMAIL PROTEC
Subject: Re: insert nologging parallel/noparallel and archiving
Jack,
I have another twist on this situation. Again I'm
loading the tables in nologging mode with all the
indices dropped. Both transactions are insert into
table nologging (select *). The only difference is
that one select is a select from a single table while
the other - is a select from two jo
Title: RE: INSERT INTO Syntax: Insert a complete record using a cursor
Hi,
The update method is interesting. I need to think the repercussions through first, but that may save me a lot of hassle.
Good idea!
Thanks,
Hannah
-Original Message-
From
Hannah,
What you are asking is possible only in 9.2. OTOH, you could use SQL to
select from dba_tab_columns to generate INSERT ... script though.
Raj
__
Rajendra Jamadagni MIS, ESPN Inc.
Rajendra dot Jamadagni at ESPN dot com
Any
Hannah
Maybe you could temporarily change the insert trigger to an update trigger
(create or replace trigger before insert or update on...)
and then simply update the table setting some column to itself. This will
will fire the trigger (and will not change any data in the table).
I didn't unders
... or, I had a similar problem ... I wrote an update trigger to populate the
backup table ... and I did an update on the master table to update a record
back to itself ...
update mytab
set acol = acol;
when the update happens use the :old/:new values to populate the backup
table
-Ori
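Put together, the trick described above might look like this (all table, column, and trigger names are invented for illustration):

```sql
-- Fire the same logic on UPDATE as on INSERT...
CREATE OR REPLACE TRIGGER mytab_backup_trg
BEFORE INSERT OR UPDATE ON mytab
FOR EACH ROW
BEGIN
  -- Use the :old/:new values to populate the backup table.
  INSERT INTO mytab_backup (acol, bcol)
  VALUES (:new.acol, :new.bcol);
END;
/

-- ...then touch every row without changing any data; this fires the
-- trigger once per row.
UPDATE mytab SET acol = acol;
```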
Gene,
Glad to help.
A word of caution, do consider the recovery aspects of
nologging carefully before use. A backup immediately
after the move might be prudent, or keeping the old
data around as a fallback.
Jack
--- Gurelei <[EMAIL PROTECTED]> wrote:
> Just wanted to thank Jack, Connor and J
Just wanted to thank Jack, Connor and Jared for their
help!!
Gene
--- Jack Silvey <[EMAIL PROTECTED]> wrote:
> sorry, meant insert as select, not insert as append
> --- Jack Silvey <[EMAIL PROTECTED]> wrote:
> > Gene,
> >
> > This sounds right.
___
I'm doing this from memory, so you may want to RTFM
to verify this information.
With a non-parallel insert, you will need to use the APPEND
hint to avoid generating redo. Read the fine documentation
as to why this is necessary.
With a parallel INSERT, the data will all be APPENDed
by default
sorry, meant insert as select, not insert as append
--- Jack Silvey <[EMAIL PROTECTED]> wrote:
> Gene,
>
> This sounds right.
>
> Standard insert as append does freelist block
> checking
> and generates redo. Parallel DML generates minimal
> redo, just for the new temp segments being created,
Gene,
This sounds right.
Standard insert as append does freelist block checking
and generates redo. Parallel DML generates minimal
redo, just for the new temp segments being created,
not for the datablock updates. This is one reason it
is so fast.
Index changes are always logged, even in PDML
You'll need the APPEND hint to avoid the logging in
serial mode.
hth
connor
--- Gurelei <[EMAIL PROTECTED]> wrote:
> Hi all.
>
> I've been trying to copy a 20mil rows table between
> two databases via insert as select statement. I
> have dropped all the indices and ran in nologging
> mode
> w
No. CPU utilization averages 40-60% across 4 CPUs.
--- Mohammed Shakir <[EMAIL PROTECTED]> wrote:
> Not enough inserts to bog down the CPUs?
>
> --- Paul Baumgartel <[EMAIL PROTECTED]> wrote:
> > Thanks, Mohammed and Anjo, for your replies.
> >
> > Now my question is this: given that the tabl
Not enough inserts to bog down the CPUs?
--- Paul Baumgartel <[EMAIL PROTECTED]> wrote:
> Thanks, Mohammed and Anjo, for your replies.
>
> Now my question is this: given that the table structures (freelists,
> etc.) are identical on the two machines, the init.ora parameters are
> identical for th
Disk layout and disk speed itself matter. While testing the same batch job
with similar data on the prod and test machines, the test machine was doing the
same job in half the time (2 hours) versus prod (4 hours); the difference was
the fast fiber-optic drives on the test box, even with a slower processor, but as
Paul - My point is that I have seen performance decrease when too many
processes are used. For example, with imports, a multi-cpu system may import
faster with two import jobs running. But at some point it seems that Oracle
and/or the operating system is just trying to switch between each process
Dennis--
The faster machine had 10 database connections, the slower, 50. Not
all were used in either case. But my comparison is for inserts done by
a single session, i.e., it's not the aggregate insert rate, but a
direct comparison of the same insert statement in a single session on
each server.
Thanks, Mohammed and Anjo, for your replies.
Now my question is this: given that the table structures (freelists,
etc.) are identical on the two machines, the init.ora parameters are
identical for the two instances, and the machines themselves are nearly
identical (one has 6 CPUs, one 4, but in n
Paul - How many insert processes did you run on each system? Is the disk
layout identical in terms of spreading across devices?
Dennis Williams
DBA
Lifetouch, Inc.
[EMAIL PROTECTED]
-Original Message-
Sent: Wednesday, April 10, 2002 6:43 PM
To: Multiple recipients of list ORACLE-L
Gree
Data block waits are the indicator of freelist contention. I have never
seen freelist contention, even though I have been running 11
processes doing inserts on 11 CPUs. I have seen library cache,
shared pool and cache buffers chains waits. I took care of buffer busy
waits and db file sequent
Paul,
BBW due to not enough freelists is caused by multiple processes waiting
on the head of the freelist to check if there is space in the block.
Problems can get worse by doing array inserts ...
So by having enough freelists, different processes will check on
different heads of freelists.
An
Follow-up question: can someone explain exactly why buffer busy waits
can be due to heavy insert activity when there are insufficient
freelists? I suspect that this may figure into my problem with insert
performance. Thanks!
Paul Baumgartel
__
D
Stephane,
I don't understand what you mean when you say you can
select on t1 before any commit.
I just tried this on an 8.1.7 database, and there was
a significant reduction in redo. There was only 668
bytes of redo generated vs. 2320 (see below). I think
some redo will still be generated as you
This is code I use and it works. As you can see, I do not use 'AS' before the
select. I am not sure if you need it.
insert /*+ parallel(&&1, 6) */ into &&1
select /*+ parallel(&&2, 6) */ * from &&2;
Shakir
--- paquette stephane <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm trying the following insert /*+ ap
v$session_wait
--
Jeremiah Wilton
http://www.speakeasy.net/~jwilton
On Wed, 3 Apr 2002, [EMAIL PROTECTED] wrote:
> I am trying to insert a record thru sql*plus into a table. I am the only
> one accessing the database. It just hangs and never inserts. What can I
> look at to help determine why t
Rick,
Check the logfiles.
Maybe your database is running in archivelog mode and the archive destinations
are full. There are many other possible reasons; look into alertSID.log for
them.
-Bigp
- Original Message -
To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
Sent: Wednesday, Apr
If you have the Diagnostics Pack, try looking at your session in Top
Sessions.
Beth
-Original Message-
Sent: Wednesday, April 03, 2002 1:06 PM
To: Multiple recipients of list ORACLE-L
Hi DBAs,
I am trying to insert a record thru sql*plus into a table. I am the only
one accessing data
If you care to post your perl code, or send it to me offline, I can
take a few minutes to look at it.
-Joe
--- Mohammed Shakir <[EMAIL PROTECTED]> wrote:
> We use ForAll for insert and it works fine. However, we do not use
> prepare_cached or execute. We use bind variables with our static
> SQL
We use ForAll for insert and it works fine. However, we do not use
prepare_cached or execute. We use bind variables with our static SQL
and insert millions of records and it is fast.
--- Gavin D'mello <[EMAIL PROTECTED]> wrote:
> I'm trying to do a bulk insert using DBI and Oracle for abou
How about doing a:
$dbh->do(q{alter session set sql_trace = true});
in your script before the insert, then checking the trace file
with tkprof.
Jared
On Saturday 02 March 2002 07:33, Gavin D'mello wrote:
> I'm trying to do a bulk insert using DBI and Oracle for about 248
> rows, th
What is the user in the db link? Is it "b" or some other user?
babu
- Original Message -
To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
Sent: Thursday, January 03, 2002 4:30 PM
> the db link is a public one owned by system.
>
> User B is only in Bshema. User A with the
the db link is a public one owned by system.
User B is only in Bshema. User A with the trigger is only in Aschema.
Kathy
-Original Message-
Sent: Thursday, January 03, 2002 1:16 PM
To: Multiple recipients of list ORACLE-L
in the db link are you using the same user as the table owner
In the db link, are you using the same user as the table owner in the remote
db? In your case, are you using the user "b" in your dblink?
babu
- Original Message -
To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
Sent: Thursday, January 03, 2002 3:35 PM
> database is 8.1.7
>
Hello Jack,
because I need to recover information from an existing table
that had a date field decomposed into separate date, hour, and minute fields.
I need to recover all the existing information into a single date field in
the new table...
I hope that you will understand my explanations,
my
Hello,
Indeed, your syntax works perfectly ...
Thanks again!
Jean Berthold
SIM/HAOUHACH wrote:
> I think that it is possible to use the following syntax:
> insert into grandeur_mesure2
> (ID_GRANDEUR,DATE_AQUISITION,VALEUR,VALIDITE,ID_TYPE_ACQUISITION,UTILISATION
> )
>
ll@radisys.com
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
cc:
Subject: Re: Insert scrip
I think that it is possible to use the following syntax:
insert into grandeur_mesure2
(ID_GRANDEUR,DATE_AQUISITION,VALEUR,VALIDITE,ID_TYPE_ACQUISITION,UTILISATION
)
select
( ID_GRANDEUR, to_char(JOUR,'DD')||'
'||to_char(HEURE,'HH24')||':'||to_char(MINUTE, 'MI'), VALEUR,
Hi
Both 'DATE_AQUISITION' and 'JOUR' are date fields.
Why would you want to add hour and minute to a field that already contains
this information?
Jack
[EMAIL PROTECTED] (Jean Berthold)@fatcity.com on 12-11-2001 16:05:22
Please respond to [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
Hi,
You can use UTL_FILE or SQL*Loader utilities.
--
Chris J. Guidry P.Eng.
ATCO Electric, Metering Services
Phone: (780) 420-4142
Fax: (780) 420-3854
Email: [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED] [SMTP:[EMAIL PROTECTED]]
> Sent: Friday, October 19, 2001 01:15
[EMAIL PROTECTED] wrote:
>
> I AM LEARNING ORACLE.
>
> I CREATED A TABLE JUNK AS BELOW,
>
> SQL> CREATE TABLE JUNK
> 2 (NAME VARCHAR2(10),
> 3 EMPID NUMBER(5) ,
> 4 SALARY NUMBER(5,3) ) ;
>
> Table created.
>
> I NEED TO INSERT " C:\JUNK.SQL " ( I SAVED IN NOTEPAD ) WHICH LOOKS
>
If it is like previous versions:
wouldn't it just be easier to download and install the Windows/NT
Oracle (8?) client software (including SQL*Loader) for free?
Otherwise, can't you run SQL*Loader on the Unix host through a telnet
connection/logon?
ORACLE-L Digest -- Volume 2001, Number 276
>
>
You couldn't use UTL_FILE, unless there is a mechanism for the Unix
machine to see the filesystem of the NT machine - and if there was, it
would be easier to run SQL*Loader on Unix.
What do you have on your client machine? I assume that the Oracle client
is installed. Therefore, you can write a s
Another option would be to use perl/VB/shell and format the data into sql
insert
statements then apply these thru sqlplus. Very low tech and cheesy - but it
works.
-Original Message-
Sent: Tuesday, October 02, 2001 3:14 PM
To: Multiple recipients of list ORACLE-L
Hi listers,
I want to
Thank you Michael! Setting
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P9 in the profile does
solve a part of my problem!
Now if I insert â from sqlplus on the NT side and do a
select from sqlplus on Unix, I get â! Thank goodness.
The only strange thing I have now is I cannot do
the insert correctly from Unix. In Unix
Tatireddy, Shrinivas (MED, Keane) wrote:
>
> Hi lists
>
> how to do the following inserts into a table?
>
> table xyz has col1,col2, col3.
>
> I need to populate the values to col1 from table dummy1 (cola)
> col2 table dummy2(colb)
>
Hi Anita,
In my schema all tables have less than 100 columns.
Thanks,
Ajay Singh Rathore
HCL Infosystems Ltd
---
visit us at www.hclinsystems.com
- Original Message -
To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Saturday, June
Ajay,
Do you have tables with more than 255 columns? Oracle
handles tables with more than 255 columns using
intra-block chaining. There was a bug where the
"table fetch continued row" was incorrectly
incremented when accessing those tables.
HTH,
-- Anita
--- Ajay Singh Rathore <[EMAIL PROTE
PL/SQL will let you go up to 32k; after that you're
into a 3GL (for a LONG). If you use LOBs then you have
more options...
hth
connor
--- Helmut Daiminger <[EMAIL PROTECTED]> wrote: >
Hi!
>
> How do I insert a very long string of text (about
> 200k in notepad) into a
> LONG column with a standar
If the production environment is already available, you could use
DBMS_STATS.EXPORT_TABLE_STATS on the production table
and import those stats into your development instance.
Regards,
Chris Gait
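A sketch of the DBMS_STATS round trip being described (schema, table, and stat-table names are placeholders):

```sql
-- On production: stage the optimizer stats into a user stat table.
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE('SCOTT', 'STATTAB');
  DBMS_STATS.EXPORT_TABLE_STATS(
    ownname => 'SCOTT', tabname => 'EMP',
    stattab => 'STATTAB', statown => 'SCOTT');
END;
/

-- After moving the STATTAB table to development (e.g. via exp/imp):
BEGIN
  DBMS_STATS.IMPORT_TABLE_STATS(
    ownname => 'SCOTT', tabname => 'EMP',
    stattab => 'STATTAB', statown => 'SCOTT');
END;
/
```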
On 18 Apr 2001, at 11:00, Murali Vallath wrote:
> Thanks for the feedback, I am coming from the O