I'm on HPUX 11.0 on a 9000/800 with Oracle 8.0.6, Perl 5.004_04, DBI 1.00
and DBD::Oracle 0.53.
Please don't suggest just upgrading, or my system administrators will shoot
me :-)
This is the output of perl -V:
Summary of my perl5 (5.0 patchlevel 4 subversion 4) configuration:
Platform:
osname=hpux, osvers=10, archname=PA-RISC2.0
uname='hp-ux asterix b.10.20 b 9000889 33308372 16-user license '
hint=recommended, useposix=true, d_sigaction=define
bincompat3=y useperlio=undef d_sfio=undef
Compiler:
cc='cc', optimize='-O', gccversion=
cppflags='-D_HPUX_SOURCE -Aa'
ccflags ='-D_HPUX_SOURCE -Aa'
stdchar='unsigned char', d_stdstdio=define, usevfork=false
voidflags=15, castflags=0, d_casti32=define, d_castneg=define
intsize=4, alignbytes=8, usemymalloc=y, prototype=define
Linker and Libraries:
ld='ld', ldflags =' -L/usr/local/lib -L/opt/local/lib'
libpth=/usr/local/lib /opt/local/lib /usr/lib/pa1.1 /lib /usr/lib
/usr/ccs/lib
libs=-lnet -lnsl_s -lndbm -ldld -lm -lc -lndir -lcrypt
libc=/lib/libc.sl, so=sl
useshrplib=false, libperl=libperl.a
Dynamic Linking:
dlsrc=dl_hpux.xs, dlext=sl, d_dlsymun=undef, ccdlflags='-Wl,-E
-Wl,-B,deferred '
cccdlflags='+z', lddlflags='-b -L/usr/local/lib -L/opt/local/lib'
Characteristics of this binary (from libperl):
Built under hpux
Compiled at Sep 3 1998 17:10:19
@INC:
/opt/perl5/lib/PA-RISC2.0/5.00404
/opt/perl5/lib
/opt/perl5/lib/site_perl/PA-RISC2.0
/opt/perl5/lib/site_perl
.
Roughly speaking, I have a script that makes successive fetches of LONG
columns. Sometimes (apparently at random) the values come back truncated
during the fetch. Why? Below is the relevant portion of the code:
$dbh->{LongReadLen} = $blobLong;   # $blobLong is determined by another SQL statement
$andVar = '';
$qrySqlSelect  = "SELECT BLOB,TO_CHAR(DT_START,'YYYYMMDD hh24:mi'),'$dtExpF' AS DTEXP,K1,K2,K3,K4,";
$qrySqlSelect .= "BLOB_LEN,TO_CHAR(DT_ACQ,'YYYYMMDD hh24:mi'),FLAG_C FROM CACHE WHERE ( ";
$qrySqlWhere = '';
if ($key1) {
    $qrySqlWhere .= " $andVar (K1='$key1') ";
    $andVar = ' AND ';
}
if ($key2) {
    $qrySqlWhere .= " $andVar (K2='$key2') ";
    $andVar = ' AND ';
}
if ($key3) {
    $qrySqlWhere .= " $andVar (K3='$key3') ";
    $andVar = ' AND ';
}
if ($key4) {
    $qrySqlWhere .= " $andVar (K4='$key4') ";
    $andVar = ' AND ';
}
$qrySqlWhere .= " )";
$qrySql = $qrySqlSelect . $qrySqlWhere;
$sth = $dbh->prepare($qrySql)
    || return "cacheApi::cMove: Can't prepare SELECT... $DBI::errstr";
$rc = $sth->execute
    || return "cacheApi::cMove: Can't execute SELECT... $DBI::errstr";
while ( @resQry = $sth->fetchrow() ) { ...
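(As an aside, I know the interpolated $key1..$key4 values would be cleaner as
bind variables; if it matters, the same WHERE clause could be built roughly
like this untested sketch, with example key values standing in for the real
ones:)

```perl
#!/usr/bin/perl -w
use strict;

# Example key values; the real ones come from the caller.
my ($key1, $key2, $key3, $key4) = ('PI', '113192', 'S', ' ');

# Abbreviated column list, just to make the sketch self-contained.
my $qrySqlSelect = "SELECT BLOB,K1,K2,K3,K4 FROM CACHE WHERE ( ";

my (@conds, @binds);
for my $pair ( [ K1 => $key1 ], [ K2 => $key2 ],
               [ K3 => $key3 ], [ K4 => $key4 ] ) {
    next unless $pair->[1];          # skip unset keys, as the ifs above do
    push @conds, "($pair->[0]=?)";   # placeholder instead of interpolation
    push @binds, $pair->[1];
}
my $qrySql = $qrySqlSelect . join(' AND ', @conds) . " )";

# then: $sth = $dbh->prepare($qrySql); $sth->execute(@binds);
print "$qrySql\n";
```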
And here is an excerpt from the trace file:
DBI 1.00 dispatch debug level set to 0
DBI 1.00 dispatch debug level set to 2
-> STORE for DBD::Oracle::db (DBI::db=HASH(0x400934a0)~INNER
'LongReadLen' '88449')
STORE DBI::db=HASH(0x400934a0) 'LongReadLen' => '88449'
<- STORE= 1 at cacheApi.pm line 711.
-> prepare for DBD::Oracle::db (DBI::db=HASH(0x40092bac)~0x400934a0
'SELECT BLOB,TO_CHAR(DT_START,'YYYYMMDD hh24:mi'),'01-DEC-2001' AS DTEXP,K1,K2,K3,K4,BLOB_LEN,TO_CHAR(DT_ACQ,'YYYYMMDD hh24:mi'),FLAG_C FROM CACHE WHERE ( (K1='PI') AND (K2='113192') AND (K3='S') AND (K4=' ') )')
dbih_setup_handle(DBI::st=HASH(0x403b16e4)=>DBI::st=HASH(0x403c4e48),
DBD::Oracle::st, 40092aec, Null!)
dbih_make_com(DBI::db=HASH(0x400934a0), DBD::Oracle::st, 232)
dbd_st_prepare'd sql f4
dbd_describe (for sql f4 after oci f54, lb 88449, csr 0x406fd278)...
fbh 1: 'BLOB' NULLable, type 8, dbsize 88449, dsize 88449, p0 s0
out: ftype 8, bufl 22914. cache@0: indp 0, rlen 0, rcode 0
fbh 2: 'TO_CHAR(DT_START,'YYYYMMDDHH24:MI')' NULLable, type 1, dbsize 14, dsize 14, p0 s0
out: ftype 5, bufl 15. cache@0: indp 0, rlen 0, rcode 0
fbh 3: 'DTEXP' NULLable, type 96, dbsize 11, dsize 11, p0 s0
out: ftype 5, bufl 12. cache@0: indp 0, rlen 0, rcode 0
fbh 4: 'K1' , type 1, dbsize 200, dsize 200, p0 s0
out: ftype 5, bufl 201. cache@0: indp 0, rlen 0, rcode 0
fbh 5: 'K2' , type 1, dbsize 200, dsize 200, p0 s0
out: ftype 5, bufl 201. cache@0: indp 0, rlen 0, rcode 0
fbh 6: 'K3' , type 1, dbsize 200, dsize 200, p0 s0
out: ftype 5, bufl 201. cache@0: indp 0, rlen 0, rcode 0
fbh 7: 'K4' , type 1, dbsize 100, dsize 100, p0 s0
out: ftype 5, bufl 101. cache@0: indp 0, rlen 0, rcode 0
fbh 8: 'BLOB_LEN' NULLable, type 2, dbsize 22, dsize 40, p10 s0
out: ftype 5, bufl 41. cache@0: indp 0, rlen 0, rcode 0
fbh 9: 'TO_CHAR(DT_ACQ,'YYYYMMDDHH24:MI')' NULLable, type 1, dbsize 14, dsize 14, p0 s0
out: ftype 5, bufl 15. cache@0: indp 0, rlen 0, rcode 0
fbh 10: 'FLAG_C' NULLable, type 2, dbsize 22, dsize 40, p2 s0
out: ftype 5, bufl 41. cache@0: indp 0, rlen 0, rcode 0
dbd_describe'd 10 columns (Row bytes: 89232 max, 89079 est avg. Cache: 1 rows)
<- prepare= DBI::st=HASH(0x403b16e4) at cacheApi.pm line 739.
(outer handle DESTROY ignored)
-> DESTROY for DBD::Oracle::st (DBI::st=HASH(0x4001f134)~INNER)
<- DESTROY= undef at cacheApi.pm line 741.
dbih_clearcom 0x4001f134 (com 0x4070ca08, type 3) done.
-> execute for DBD::Oracle::st (DBI::st=HASH(0x403b16e4)~0x403c4e48)
dbd_st_execute (for sql f4 after oci f8, outs 0)...
dbd_st_execute complete (rc0, w03, rpc1, eod0, out0)
<- execute= 1 at cacheApi.pm line 741.
-> fetchrow for DBD::Oracle::st (DBI::st=HASH(0x403b16e4)~0x403c4e48)
dbih_setup_fbav for 10 fields => 0x403d94ec
ERROR EVENT 1406 'ORA-01406: fetched column value was truncated (DBD: ofetch error on field 1 (of 10), ora_type 8, LongReadLen too small and/or LongTruncOk not set)' (Handlers: undef)
ofetch error on field 1 (of 10), ora_type 8, LongReadLen too small and/or LongTruncOk not set error 1406 recorded: ORA-01406: fetched column value was truncated (DBD: ofetch error on field 1 (of 10), ora_type 8, LongReadLen too small and/or LongTruncOk not set)
<- fetchrow= ( ) [0 items] at cacheApi.pm line 777.
!! ERROR: 1406 'ORA-01406: fetched column value was truncated (DBD: ofetch error on field 1 (of 10), ora_type 8, LongReadLen too small and/or LongTruncOk not set)'
-> finish for DBD::Oracle::st (DBI::st=HASH(0x403b16e4)~0x403c4e48)
<- finish= 1 at cacheApi.pm line 777.
I'm not a Perl guru, but it seems that the allocated buffer (22914 bytes) is
too small to hold my BLOB (88449 bytes). I've heard there is a 64K size limit
when fetching LONG columns (again, I'm no magician, but 88449 - 65535 = 22914,
wow). Is this a bug? Note that there are columns larger than 64K that the
script can fetch without any problem. Is there a workaround? Could using
blob_read fix my problem?
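In case it helps show what I mean, here is how I imagine a blob_read loop
would look (an untested sketch pieced together from the DBI docs; I'm
assuming LongReadLen=0 defers fetching the LONG, and I'm not even sure
whether the field index is 0- or 1-based):

```perl
# Untested sketch: fetch the LONG in chunks with blob_read instead of
# relying on a single LongReadLen-sized buffer.
$dbh->{LongReadLen} = 0;   # assumption: don't pre-fetch the LONG in fetchrow
$dbh->{LongTruncOk} = 1;   # so the deferred/empty LONG is not a fatal error

$sth = $dbh->prepare($qrySql)
    || return "cacheApi::cMove: Can't prepare SELECT... $DBI::errstr";
$sth->execute
    || return "cacheApi::cMove: Can't execute SELECT... $DBI::errstr";

while ( @resQry = $sth->fetchrow() ) {
    my $blob   = '';
    my $offset = 0;
    my $chunk  = 16384;    # arbitrary chunk size
    # Field 0 is the BLOB column (if blob_read counts from 0).
    while ( defined( my $frag = $sth->blob_read(0, $offset, $chunk) ) ) {
        last unless length $frag;    # empty fragment: end of the column
        $blob   .= $frag;
        $offset += length $frag;
    }
    # ... use $blob in place of $resQry[0] ...
}
```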
Thanks in advance for your suggestions.
Domenico Vasile