Re: Minor issue with ping

2010-06-18 Thread Martin Evans
John Scoles wrote:
> The way ping works differs depending on the version of DBD::Oracle
> you are using.  Which version of DBD::Oracle are you using?
> 
> cheers
> John Scoles

The patch looked to be against the subversion trunk, John.

Martin

> On Thu, Jun 17, 2010 at 2:34 PM, Thomas M. Payerle  wrote:
> 
>> Hi,
>>
>> My colleagues and I encountered a problem in some code which seems
>> to be due to some impolite behavior on the part of the ping routine
>> in DBD::Oracle.
>>
>> Basically, we had an eval block with a locally declared (my) CGI::Session
>> object using Oracle DB for storing session info.  When we raise an
>> exception,
>> the CGI::Session is destroyed, which somewhere results in DBD::Oracle ping
>> being called.  ping() does not localize $@ for its eval block, thereby
>> clobbering the exception text in $@.
>>
>> I believe adding a "local $@" in the ping routine resolves this issue
>> without any ill effect on the routine, as shown in attached patch.
>> Not really a bug, but I believe this is better behavior.
>>
>> Tom Payerle
>> OIT-TSS-DCS paye...@umd.edu
>> University of Maryland  (301) 405-6135
>> College Park, MD 20742-4111
>>
>> PS: I just wanted to offer my gratitude to the DBD::Oracle developers for
>> their fine work on this module.
> 
> --
> 
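The fix Tom describes can be illustrated without DBD::Oracle at all: any DESTROY method that runs an eval during stack unwind will clobber $@ unless it localizes it first. A minimal sketch (the class and error text are invented for illustration; the eval stands in for ping()'s internal eval):

```perl
use strict;
use warnings;

package Session;
sub new { bless {}, shift }
sub DESTROY {
    local $@;        # the one-line fix proposed for ping()
    eval { 1 };      # stands in for the eval inside ping()
}

package main;
eval {
    my $s = Session->new;
    die "real error\n";    # $s is destroyed while this propagates
};
my $err = $@;
print $err;    # "real error" survives because DESTROY localized $@
```

Remove the `local $@;` line and the eval in DESTROY resets $@ to the empty string, losing the caller's exception text — exactly the behaviour reported.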


Re: Open points to discuss for DBD::File 0.39

2010-06-08 Thread Martin Evans
Jens Rehsack wrote:
> Hi DBI and DBD developers,
> 
> I have some open points for DBD::File I'd like to ask for feedback on them.
> 
> The first two are related to the table meta data.
> 
> 1) I introduced simple getters and setters for DBD::File table's meta
>data (look for get_file_meta and set_file_meta in
>lib/DBD/File/Developers.pod). Merijn and I agreed to extend
>this interface with some wild cards:
>- '.': default - read; the attributes of the database handle are
>   delivered/modified (even through a proxy, which was the
>   primary intention)
>- '*': all - the attribute of all tables (and the default one)
>   is modified
>- '+': as '*', but restricted to all ANSI-conforming table names
>   (default isn't touched)
>- qr//: all tables matching the regex are affected

I read Developers.pod and did not notice these wild cards, but I'll
assume they are implemented and just not documented yet.

>   The question is related to the getter: how should the attributes be
>   returned (or, more generally, what API should be supported)?
>   Let me explain the cause of the question a little more. set_file_meta
>   can be called using the table name as first argument, the attribute
>   name as second and the new attribute value as third argument.
>   It sounds reasonable to allow the following call, too:
> 
>   $dbh->func( $tname, { attr1 => val1, attr2 => val2 },
>   'set_file_meta' );
> 
>   Consequently get_file_meta should be able to return more than one
>   attribute, shouldn't it? So we have 3 situations for get_file_meta
>   regarding the expected return values:
>   a) 1 table, 1 attribute - expected return value is a scalar
>   b) n tables, 1 attribute - expected return value is a hash of
>  table names pointing to the scalar value of the attribute
>  belonging to that table
>   c) n tables, m attributes - expected return value is a hash of
>  table names pointing to a hash of attribute names containing
>  the attribute values of the affected table
> 
>   I rate it too complex an API and need external thoughts :)

I never like APIs where you need to examine the result to know what sort
of result you have; I prefer to know up front what I'm going to get. In
that respect I presume (a) is ok since at most it can return the
attribute value or perhaps undef. (b) and (c) could be combined into
always returning:

  {
table1 => {attr1 => value, attr2 => value},
table2 => {attr1 => value, attr2 => value},
.
.
  }

Just a thought.
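That uniform shape can be sketched in a few lines (the helper name and the sample data are invented for illustration; this is not actual DBD::File code):

```perl
use strict;
use warnings;

# Hypothetical helper: whatever mix of tables/attributes is requested,
# always return { table_name => { attr_name => value } } so callers
# never have to inspect the result to learn its shape.
sub get_meta_uniform {
    my ($meta, $tables, $attrs) = @_;
    $tables = [$tables] unless ref $tables eq 'ARRAY';
    $attrs  = [$attrs]  unless ref $attrs  eq 'ARRAY';
    my %out;
    for my $t (@$tables) {
        $out{$t} = { map { $_ => $meta->{$t}{$_} } @$attrs };
    }
    return \%out;
}

my $meta = {
    t1 => { f_ext => '.csv', f_lock => 2 },
    t2 => { f_ext => '.tsv', f_lock => 0 },
};
my $res = get_meta_uniform($meta, ['t1', 't2'], 'f_ext');
print $res->{t2}{f_ext}, "\n";   # .tsv
```

Case (a) from above — one table, one attribute — would then simply be `$res->{$tname}{$attr}` on the same structure, trading a little verbosity for a predictable return type.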


> 2) backward compatibility of deprecated structures per table of DBD::CSV,
>DBD::AnyData (and maybe later DBD::PO or DBD::DBM). DBD::CSV had a
>structure $dbh->{csv_tables}{$tname}{...} - which is now handled via
>$dbh->{f_meta}{$tname}{...} ... - DBD::AnyData had the same with
>$dbh->{ad_tables}{$tname}{...}.
>Both had several attributes in them which were not synchronized
>between the two DBDs. Further, the $dbh->{f_meta} structure contains
>the table names after they have passed through an internal filter
>(suggested by mje) which handles the identifier case. An additional
>structure $dbh->{f_meta_map} is used to hold the map between the
>original table name (from SQL statement or through the getter/setter)
>and the internally used identifier.
> 
>Because it could be very difficult to manage backward compatibility
>there, I would like to have a solution which can be plugged in as
>early as possible:
>a) $dbh->{csv_tables} or $dbh->{ad_tables} should become a tied hash
>   which accesses the table's meta data using the get_table_meta method
>   from the table implementor class.

Sounds ok to me if you need that backwards compatibility.
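The tie idea in 2a can be sketched in plain Perl (the class name and data are invented; a real implementation would call get_table_meta/set_table_meta rather than touching a hash directly):

```perl
use strict;
use warnings;

# Hypothetical sketch of 2a: a tie class that redirects old-style
# $dbh->{csv_tables}{$table} access to the new f_meta store.
package CompatTables;
sub TIEHASH { my ($class, $meta) = @_; bless { meta => $meta }, $class }
sub FETCH   { $_[0]{meta}{ $_[1] } }
sub STORE   { $_[0]{meta}{ $_[1] } = $_[2] }
sub EXISTS  { exists $_[0]{meta}{ $_[1] } }

package main;
my %f_meta = ( mytable => { f_ext => '.csv' } );
tie my %csv_tables, 'CompatTables', \%f_meta;

# Old-style reads and writes now hit the new structure:
print $csv_tables{mytable}{f_ext}, "\n";   # .csv
$csv_tables{other} = { f_ext => '.tsv' };
print $f_meta{other}{f_ext}, "\n";         # .tsv
```

Idea 2b would go one level deeper: FETCH would return another tied hash whose FETCH/STORE translate old attribute names to new ones.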

>b) further, the meta data which can be accessed through this tie
>   should be tied, too - to allow overriding FETCH/STORE methods which
>   could handle the mapping between old attribute names and new
>   attribute names.

If you want/need to go that far in maintaining compatibility.

How much code relies on these attributes - is it at all possible to know?

>While I rate 2a as more important than 2b, I would like to get
>feedback on both ideas. Discarding 2a could result in inconsistent
>entries in $dbh->{f_meta} which could lead the DBD into unpredictable
>behavior.


> My third question has lesser implications :)
> 
> 3) Merijn and I had the idea to let the users decide at connection time
>whether to use DBI::SQL::Nano or SQL::Statement - not globally via the
>DBI_SQL_NANO environment variable.
> 
>Because of the handling of the handles for the dr, db and st
>instances of a DBD I hoped to get some suggestions on how we could
>implement a similar behavior for DBD::File::Statement and
>DBD::File::Table.

Don't understand the question - sorry.

> Thanks for all answers in advance,
> Jens
> 
> 

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com

Re: Spelling

2010-06-08 Thread Martin Evans
H.Merijn Brand wrote:
> On Mon, 07 Jun 2010 13:51:26 -0400, John Scoles 
> wrote:

Others have answered the individual items you highlighted; I only have
one point below:

>> H.Merijn Brand wrote:
>>> For my own projects (which includes two DBD's), I have been working on
>>> spell-check issues. I'm not born in an English-speaking country, nor
>>> was I raised in one, so I make errors. Probably quite a few.
>>>
>>> spell-checkers help a lot, but most work on en_US, not en_EN, and I try
>>> to at least be consistent inside a project.
>>>
>>> When I was done with my own projects, I threw my newly built utility at
>>> the perl source tree itself, and found a few mistakes as well. Then I
>>> implemented Text::Aspell into it and fixed all that it found that was
>>> obviously wrong. It supports reading local aspell lists of words that
>>> are considered to be correct for the given project.
>>>
>>> DBI documentation is written in en_EN instead of en_US, so the
>>> spell-checker will see "behaviour" as wrong and suggests "behavior".
>>> Same for "ACKNOWLEDGEMENT" vs "ACKNOWLEDGMENT".
>> So it spells it correctly good thing.
> 
> Huh? "it"? So you want to move everything to en_US?
> I'm really trying to be serious here (and learn).
> Consistency is VERY high in my goals, so IMHO we
> should stick to en_EN for DBI.

My belief was that the DBI docs were more en_EN than en_US. Given Tim
probably wrote most of it that is no surprise. I think the documentation
should be consistent and probably should be en_EN.

>>> That was my trigger to implement project specific language support.
>>> Done.
>>>
>>> Before I try to get deeper into DBI docs and its spelling, would it be
>>> considered good-work?
>>>
>>> As an example to start (this part DOES contain real errors, like
>>> abreviate (one b) and unlikey (instead of unlikely)):
> 
> Summary:
> 
>  ☑  unicode => Unicode
>  ☐  DBDs<= DBD's
>  ☐  DSNs<= DSN's
>  ☑  unlikey => unlikely
>  ☑  abreviated  => abbreviated
>  ☐  NULLs   <= NULL's
> 
>>> @@ -2303,7 +2303,7 @@ use by the DBI. Extensions and related modules use 
>>> the C
>>>  namespace (see L).
>>>  Package names beginning with C are reserved for use
>>>  by DBI database drivers.  All environment variables used by the DBI
>>> -or by individual DBDs begin with "C" or "C".
>>> +or by individual DBD's begin with "C" or "C".
>> the first one is correct. As you are referring to many DBDs not  
>> something that belongs to a DBD
>>
>> Seems like your spell checker cannot tell or (does not know) the correct 
>> use of "s" in its plural, possessive, and plural possessive.
> 
> My spell checker is a perl script using Text::Aspell and doesn't know
> any context at all.
> 
>> Most likely just taking a guess based on whether or not the first 
>> letter is capitalized. 
>>
>> Welcome to the wonderful world of English.
>>
>> have a go at this
>>
>> http://www.meredith.edu/grammar/plural.htm
>>
>>> :
>> We could play this game for a long time as we here in Canada have some 
>> of our own funny ways to spell things??
> 
> Thanks for the insightful remarks.
> 
> Things I also noted:
> 
> # 'DEFERABILITY' => (DEFER ABILITY DEFER-ABILITY DESIRABILITY DURABILITY 
> DIVISIBILITY)
> # 'deferrability' => (desirability durability divisibility)
> # 'DEFERRABILITY' => (DESIRABILITY DURABILITY DIVISIBILITY)
>  I'm blank on this: one or two 'r's?
>  My "Collins Cobuild" English Language Dictionary doesn't know the word,
>  but spells all deferr... with two 'r's
> 
> # 'implementors' => (implementers implements implementer's implement's 
> impalement's implementer)
> 
> # 'thru' => (Thur thrum Thu thou)
>  isn't it "through" in English?
> 
> # 'piggback' => (piggyback piggybacks piggyback's piggybacked)
>  piggyback?
> 
> # 'scaleable' => (scale able scale-able scalable saleable salable 
> callable)
> 
> I fixed "a subtile difference" to "a subtle difference". Unless sub-tile
> has some weird meaning, that looked so weird.
> 
> I have no idea how to change "ommiting" in:
>   'You can put every SQL-statement you like in simply ommiting
>"sql => ...", but the more important thing is to restrict the
>connection so that only allowed queries are possible.'
> 
> This line has two errors:
>   =item * "accept" tells the dbiproxy-server wether ip-adresse like in "mask" 
> are allowed to connect or not (0/1)
> 
> WTF does 'Pern' mean in:
> 
>   But you'll note that there is only one call to
>   DBD::_::db::selectrow_arrayref but another 99 to
>   DBD::mysql::db::selectrow_arrayref. Currently the first
>   call Pern't record the true location. That may change.
> 
> lib/DBD/Multiplex has different/wrong line endings :(
> 
> I have committed fixes to all the obvious errors.
> 

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Oracle 11gr2 & ORA-38909

2010-05-21 Thread Martin Evans
John Scoles wrote:
> Ok I have patched up a solution I think will work across the board and you
> can find it here
> 
> http://svn.perl.org/modules/dbd-oracle/branches/oci_batch
> 
> here are the details
> 
> ora_oci_batch
> 
> For 11g users: you may encounter an error while using execute_array, in
> that it does not return a full list of tuples.  This seems to be because
> a statement can only have 'LOG ERRORS' or 'SAVE EXCEPTIONS' set.
> Setting this flag to a value should stop this error.
> 
> For convenience I have added support for a 'ORA_DBD_OCI_BATCH'
> environment variable that you can use at the OS level to set this
> value. It can also be set as an attribute on both the Connect and Prepare.
> 
> Unfortunately I can't test it (I do not have an 11g box yet) so it will
> stay in the above branch until it is tested, hopefully by you, Scott.
> 
> Cheers
> John Scoles
> 
> --
> 

I'm not sure why I seem to have ignored your mail but I just noticed it
again - sorry for the delay.

I checked out the branch you mentioned and

export ORA_DBD_OCI_BATCH=1

but 26exe_array still seems to fail for me:

mar...@bragi:~/svn/dbd-oracle/branches/oci_batch$ prove -vb t/26exe_array.t
t/26exe_array.t ..
1..17
ok 1 - use DBI;
ok 2 - The object isa DBI::db
ok 3 - ... execute_array should return true
ok 4 - ... we should have 10 tuple_status
ok 5 - ... execute_array should return false
ok 6 - ... we should have 10 tuple_status
ok 7 - ... we should get text
ok 8 - ... we should get -1
ok 9 - ... we should get a warning
ok 10 - ... execute_for_fetch should return true
not ok 11 - ... we should have 19 tuple_status

#   Failed test '... we should have 19 tuple_status'
#   at t/26exe_array.t line 128.
#  got: 10
# expected: 19
ok 12 - ... execute_array should return flase
ok 13 - ... we should have 10 tuple_status
not ok 14 - ... we should have 48 rows

#   Failed test '... we should have 48 rows'
#   at t/26exe_array.t line 154.
#  got: 30
# expected: 48
ok 15 - ... execute_array should return true
ok 16 - ... \#5 should be a warning
ok 17 - ... we should have 10 tuple_status
# Looks like you failed 2 tests of 17.
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/17 subtests

Test Summary Report
---
t/26exe_array.t (Wstat: 512 Tests: 17 Failed: 2)
  Failed tests:  11, 14
  Non-zero exit status: 2
Files=1, Tests=17,  0 wallclock secs ( 0.02 usr  0.01 sys +  0.05 cusr
0.01 csys =  0.09 CPU)
Result: FAIL

This was using oracle 11.1 server and 11.1 instant client.

If I've not set the right thing let me know.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


DBD::ODBC 1.24 released to CPAN

2010-05-14 Thread Martin Evans
I have just uploaded the official 1.24 release of DBD::ODBC to CPAN.

Many thanks to everyone who has helped with this whether it was patches
or testing. Here are the changes since 1.23:

=head2 Changes in DBD::ODBC  1.24 May 14, 2010

Minor change in Makefile.PL to only use NO_META if ExtUtils::MakeMaker
is at least at version 6.10. Reported by Chunmei Wu.

Minor change to test rt_50852 which had wrong skip count.

=head2 Changes in DBD::ODBC  1.23_5 May 6, 2010

Added advice from Jan Dubois (ActiveState) on building DBD::ODBC for
ActivePerl (see README.windows).

rt56692. Fix spelling mistake in DBD::ODBC pod - thanks to Ansgar
Burchardt.

Added a 7th way to help documentation - become a tester.

Hopefully fixed problems building on windows 32 bit platforms that
have old sql header files not mentioning SQLLEN/SQLULEN.

=head2 Changes in DBD::ODBC  1.23_4 April 13, 2010

Added more FAQs.

Small optimization to remove calls to SQLError when tracing is not
turned on. This was a bug. We only need to call SQLError when
SQLExecute succeeds if there is an error handler or if tracing is
enabled. The test was for tracing disabled!

Large experimental change primarily affecting MS SQL Server users but
it does impact on other drivers too. Firstly, for MS SQL Server users
we no longer SQLFreeStmt(SQL_RESET_PARAMS) and rebind bound parameters
as it is causing the MS SQL Server ODBC driver to re-prepare the SQL.
Secondly (for all drivers) we no longer call SQLBindParameter again IF
all the arguments to it are the same as the previous call. If you find
something not working you had better let me know, as this is such a
speed up that I'm going to go with it unless anyone complains.
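The second optimisation can be sketched in plain Perl (names are invented for illustration; the real driver does this in C inside dbdimp.c):

```perl
use strict;
use warnings;

# Sketch of the rebind-avoidance idea: remember the last arguments used
# for each parameter number and skip the expensive re-bind when nothing
# has changed since the previous execute.
my %last_bind;

sub maybe_bind {
    my ($param_num, @args) = @_;
    my $key = join "\0", map { defined $_ ? $_ : '' } @args;
    if (defined $last_bind{$param_num} && $last_bind{$param_num} eq $key) {
        return 0;    # identical arguments: skip SQLBindParameter
    }
    $last_bind{$param_num} = $key;
    # the real driver would call SQLBindParameter(...) here
    return 1;
}

print maybe_bind(1, 'SQL_VARCHAR', 10), "\n";   # 1 - bound
print maybe_bind(1, 'SQL_VARCHAR', 10), "\n";   # 0 - skipped
print maybe_bind(1, 'SQL_VARCHAR', 20), "\n";   # 1 - re-bound
```

This is why repeated executes of the same prepared statement with unchanged bind metadata get faster, while anything that changes the arguments still triggers a fresh bind.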

Minor change to avoid a double call to SQLGetInfo for SQL_DBMS_NAME
immediately after connection.

Small change for rt 55736 (reported by Matthew Kidd) to not assume a
parameter is varXXX(max) if SQLDescribeParam failed in the Microsoft
Native Client driver.

=head2 Changes in DBD::ODBC  1.23_3 March 24, 2010

Minor changes to Makefile.PL and dbdimp.c to remove some compiler
warnings.

Fix some calls to SQLMoreResults which were not passing informational
messages on to DBI's set_err. As a result you could not see all the
informational messages from procedures, only the first.

Fix minor issue in 02simple test which printed the Perl subversion
before the version.

Changes to 20SqlServer.t to fix a few typos and make table names
consistent with regard to case (as someone had turned on
case-sensitivity in SQL Server). Similar changes in rt_38977.t and
rt_50852.t.

=head2 Changes in DBD::ODBC  1.23_2 January 26, 2010

Fixed bug in Makefile.PL which could fail to find unixODBC/iODBC
header files but not report it as a problem. Thanks to Thomas
J. Dillman and his smoker for finding this.

Fixed some compiler warnings in dbdimp.c output by the latest gcc with
regard to format specifiers in calls to PerlIO_printf.

Added the odbc_force_bind_type attribute to help sort out problems
with ODBC Drivers which support SQLDescribeParam but describe the
parameters incorrectly (see rt 50852). Test case also added as
rt_50852.t.

=head2 Changes in DBD::ODBC  1.23_1 October 21, 2009

makefile.PL changes:
  some formatting changes to output
  warn if unixodbc headers are not found, i.e. the unixodbc-dev package
    is not installed
  use $arext instead of "a"
  pattern match for pulling libodbc.* changed
  warn if DBI_DSN etc not defined
  change odbc_config output for stderr to /dev/null
  missing / on /usr/local when finding find_dm_hdr_files()

New FAQ entries from Oystein Torget for bind parameter bugs in SQL Server.

rt_46597.rt - update on wrong table

Copied dbivport.h from the latest DBI distribution into DBD::ODBC.

Added if_you_are_taking_over_this_code.txt.

Add latest Devel::PPPort ppport.h to DBD::ODBC and followed all
recommendations for changes to dbdimp.c.

Added change to Makefile.PL provided by Shawn Zong to make
Windows/Cygwin work again.

Minor change to Makefile.PL to output env vars to help in debugging
people's build failures.

Added odbc_utf8_on attribute to dbh and sth handles to mark all
strings coming from the database as utf8.  This is for Aster (based on
PostgreSQL) which returns all strings as UTF-8 encoded unicode.
Thanks to Noel Burton-Krahn.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Clarification sought on private_attribute_info WAS Re: DBD::Oracle 11gr2 & ORA-38909

2010-05-07 Thread Martin Evans
Apologies for top posting, but this is an old thread which I include for
reference; really I'd like some clarification from Tim as to whether
the following is correct.

As John states the DBI documentation says for private_attribute_info:

"Returns a reference to a hash whose keys are the names of
driver-private attributes available for the kind of handle (driver,
database, statement) that the method was called on."

My question is: does this include attributes which may be specified on
the prepare call when there is no separate store/fetch on the handle? e.g.,

$h->prepare("select 1 from dual", {ora_parse_lang => 2});

You cannot set ora_parse_lang on a $sth or retrieve it so should it be
in private_attribute_info?

I would like to know since, if ora_parse_lang in DBD::Oracle should be
in private_attribute_info even though it cannot be independently stored
or fetched, then this impacts similar prepare attributes in DBD::ODBC.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


John Scoles wrote:
> 
> 
> On Tue, Apr 6, 2010 at 4:51 AM, Martin Evans <martin.ev...@easysoft.com> wrote:
> 
> I haven't seen a reply to this yet but I've been on holiday so might
> have missed it:
> 
> Scott T. Hildreth wrote:
> > On Wed, 2010-03-31 at 12:20 -0500, Scott T. Hildreth wrote:
> >> We have run into an issue with array processing in 11g.  The
> developer
> >> was using execute_array and his sql statement had 'LOG ERRORS' in it.
> >> This did not error out until we switched to 11g.  The issue is
> that only
> >> one is allowed, either 'LOG ERRORS' or 'SAVE EXCEPTIONS'.  Our DBA
> >> logged an error report with Oracle and after several posts back and
> >> forth this is what they concluded,
> >>
> >>
> ==
> >> After investigation and discussion, development has closed the bug as
> >> 'Not a Bug' with the following reason:
> >>
> >> "this is an expected behavior in 11g and the user needs to specify
> >> either of 'SAVE EXCEPTIONS' clause or the 'DML error logging',
> but NOT
> >> both together.
> >> The batch error mode, in the context of this bug, is basically
> referring
> >> to the SAVE EXCEPTIONS clause.
> >> It seems the code is trying to use both dml error logging and batch
> >> error handling for the same insert. In that case, this is not a bug.
> >>
> >> For INSERT, the data errors are logged in an error logging table
> (when
> >> the dml error logging feature is used) or returned in batch error
> >> handles (when using batch mode).
> >> Since the error messages are available to the user in either
> case, there
> >> is no need to both log the error in the error logging table and
> return
> >> the errors in batch error handles,
> >> and we require the user to specify one option or the other but
> not both
> >> in 11G.
> >>
> >> Both features exist in 10.x. For 11.x, users should change their
> >> application to avoid the error.
> >>
> ==
> >>
> >> So basically we need a way to turn off the 'SAVE EXCEPTIONS' for the
> >> batch mode.  I found in dbdimp.c that the oci_mode is being set to
> >> OCI_BATCH_ERRORS in the ora_st_execute_array function.  I was
> planning
> >> on setting it to OCI_BATCH_MODE and running a test to see if this
> will
> >> not error out.  I report back when I have run the test, but I was
> >> wondering what would be the best way to give the user the ability to
> >> override the oci_mode.
> >
> > Setting oci_mode to OCI_BATCH_MODE works.  So I want to add a prepare
> > attribute that will turn off the SAVE EXCEPTIONS.  I'm looking for
> some
> > direction on how to add it to dbdimp.c. I haven't thought of a
> name yet,
> > but something like
> >
> > my $sth = $dbh->prepare($SQL,{ora_oci_err_mode => 0});
> >
> > I assume I would have to add it to dbd_db_FETCH_attrib() and would
> I do
> > something like this in ora_st_execute_array(),
> 
> Don't you mean dbd_st_FETCH_attrib as it is a statement level attribute
> not a connection one? Any

Testing a DBD with Test::Database

2010-05-06 Thread Martin Evans
I recently looked at Test::Database (after I saw it mentioned in the
last QA Hackathon). It does not have specific support for DBD::ODBC
right now but it will work to provide data sources for DBD::ODBC without
change.

One of the big problems I have is finding testers. Most smokers don't
have the DBI_DSN, DBI_USER, DBI_PASS environment variables set up so
none of the tests run. Apparently, some do have Test::Database set up.
Has anyone else in the DBD world added support for Test::Database in
their tests and, if so, how did it go?

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


New 1.23_5 development release of DBD::ODBC

2010-05-06 Thread Martin Evans
I've just uploaded the 1.23_5 development release of DBD::ODBC. This
will hopefully be the last release before an official 1.24. Below are
the changes since 1.23. All testing is welcome; however, once a few
smoke testers have passed this release I am going to move to a full
release fairly quickly, as I'd like to look at supporting
Test::Database in future releases.

A big thank you to all those who tested the last release and to anyone
who reported bugs.

=head2 Changes in DBD::ODBC  1.23_5 May 6, 2010

Added advice from Jan Dubois (ActiveState) on building DBD::ODBC for
ActivePerl (see README.windows).

rt56692. Fix spelling mistake in DBD::ODBC pod - thanks to Ansgar
Burchardt.

Added a 7th way to help documentation - become a tester.

Hopefully fixed problems building on windows 32 bit platforms that
have old sql header files not mentioning SQLLEN/SQLULEN.

=head2 Changes in DBD::ODBC  1.23_4 April 13, 2010

Added more FAQs.

Small optimization to remove calls to SQLError when tracing is not
turned on. This was a bug. We only need to call SQLError when
SQLExecute succeeds if there is an error handler or if tracing is
enabled. The test was for tracing disabled!

Large experimental change primarily affecting MS SQL Server users but
it does impact on other drivers too. Firstly, for MS SQL Server users
we no longer SQLFreeStmt(SQL_RESET_PARAMS) and rebind bound parameters
as it is causing the MS SQL Server ODBC driver to re-prepare the SQL.
Secondly (for all drivers) we no longer call SQLBindParameter again IF
all the arguments to it are the same as the previous call. If you find
something not working you had better let me know, as this is such a
speed up that I'm going to go with it unless anyone complains.

Minor change to avoid a double call to SQLGetInfo for SQL_DBMS_NAME
immediately after connection.

Small change for rt 55736 (reported by Matthew Kidd) to not assume a
parameter is varXXX(max) if SQLDescribeParam failed in the Microsoft
Native Client driver.

=head2 Changes in DBD::ODBC  1.23_3 March 24, 2010

Minor changes to Makefile.PL and dbdimp.c to remove some compiler
warnings.

Fix some calls to SQLMoreResults which were not passing informational
messages on to DBI's set_err. As a result you could not see all the
informational messages from procedures, only the first.

Fix minor issue in 02simple test which printed the Perl subversion
before the version.

Changes to 20SqlServer.t to fix a few typos and make table names
consistent with regard to case (as someone had turned on
case-sensitivity in SQL Server). Similar changes in rt_38977.t and
rt_50852.t.

=head2 Changes in DBD::ODBC  1.23_2 January 26, 2010

Fixed bug in Makefile.PL which could fail to find unixODBC/iODBC
header files but not report it as a problem. Thanks to Thomas
J. Dillman and his smoker for finding this.

Fixed some compiler warnings in dbdimp.c output by the latest gcc with
regard to format specifiers in calls to PerlIO_printf.

Added the odbc_force_bind_type attribute to help sort out problems
with ODBC Drivers which support SQLDescribeParam but describe the
parameters incorrectly (see rt 50852). Test case also added as
rt_50852.t.

=head2 Changes in DBD::ODBC  1.23_1 October 21, 2009

makefile.PL changes:
  some formatting changes to output
  warn if unixodbc headers are not found, i.e. the unixodbc-dev package
    is not installed
  use $arext instead of "a"
  pattern match for pulling libodbc.* changed
  warn if DBI_DSN etc not defined
  change odbc_config output for stderr to /dev/null
  missing / on /usr/local when finding find_dm_hdr_files()

New FAQ entries from Oystein Torget for bind parameter bugs in SQL Server.

rt_46597.rt - update on wrong table

Copied dbivport.h from the latest DBI distribution into DBD::ODBC.

Added if_you_are_taking_over_this_code.txt.

Add latest Devel::PPPort ppport.h to DBD::ODBC and followed all
recommendations for changes to dbdimp.c.

Added change to Makefile.PL provided by Shawn Zong to make
Windows/Cygwin work again.

Minor change to Makefile.PL to output env vars to help in debugging
people's build failures.

Added odbc_utf8_on attribute to dbh and sth handles to mark all
strings coming from the database as utf8.  This is for Aster (based on
PostgreSQL) which returns all strings as UTF-8 encoded unicode.
Thanks to Noel Burton-Krahn.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Encoding support in DBD::File

2010-04-21 Thread Martin Evans
H.Merijn Brand wrote:
> This rocks!
> 
> my $dbh = DBI->connect ("dbi:CSV:", undef, undef, {
> RaiseError=> 1,
> PrintError=> 1,
> 
> f_dir => ".",
> f_schema  => undef,
> f_ext => ".csv/r",
> f_encoding=> "utf8",
> });
> 
> Any objections to me committing that?

Looks good to me, and I'm pleased you are using encoding and not just
":utf8", as the former validates UTF-8 encoding and the latter does not.
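The difference is easy to demonstrate with an in-memory file handle (self-contained; nothing here is DBD::File code):

```perl
use strict;
use warnings;

# ":encoding(UTF-8)" is a real translating layer: characters are
# encoded on write and decoded (with validation) on read.  ":utf8"
# merely flags the bytes as UTF-8 and validates nothing.
open my $out, '>:encoding(UTF-8)', \my $buf or die $!;
print {$out} "caf\x{e9}\n";          # e-acute becomes two bytes
close $out;

print length($buf), "\n";            # 6 bytes stored

open my $in, '<:encoding(UTF-8)', \$buf or die $!;
my $line = <$in>;
close $in;
print length($line), "\n";           # 5 characters read back
```

With the encoding layer, malformed byte sequences in a CSV file are caught at read time instead of silently producing corrupt strings.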


> --8<---
> --- /pro/3gl/CPAN/DBI-svn/lib/DBD/File.pm   2009-12-03 20:58:35.0 
> +0100
> +++ /pro/lib/perl5/site_perl/5.12.0/i686-linux-64int-ld/DBD/File.pm 
> 2010-04-21 16:06:12.0 +0200
> @@ -163,6 +163,7 @@ sub connect ($$;$$$)
> f_schema=> 1, # schema name
> f_tables=> 1, # base directory
> f_lock  => 1, # Table locking mode
> +   f_encoding  => 1, # Encoding of the file
> };
>  $this->{sql_valid_attrs} = {
> sql_handler   => 1, # Nano or S:S
> @@ -724,7 +725,14 @@ sub open_table ($)
> $safe_drop or croak "Cannot open $file: $!";
> }
> }
> -$fh and binmode $fh;
> +if ($fh) {
> +   if (my $enc = $data->{Database}{f_encoding}) {
> +   binmode $fh, ":encoding($enc)";
> +   }
> +   else {
> +   binmode $fh;
> +   }
> +   }
>  if ($locking and $fh) {
> my $lm = defined $data->{Database}{f_lock}
>   && $data->{Database}{f_lock} =~ m/^[012]$/
> @@ -961,6 +969,11 @@ But see L below.
> 
>  =back
> 
> +=item f_encoding
> +
> +With this attribute, you can set the encoding in which the file is opened.
> +This is implemented using C<< binmode $fh, ":encoding($enc)" >>.
> +
>  =head2 Driver private methods
> 
>  =over 4
> -->8---
> 

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: New experimental development release of DBD::ODBC 1.23_4 - faster - please test

2010-04-20 Thread Martin Evans
Jan Dubois wrote:
> On Wed, 14 Apr 2010, Martin J. Evans wrote:
>> As ActiveState do not release development builds (no criticism
>> intended) I am looking at producing a ppm for people to try.
> 
> Note that everyone can also just compile the module for themselves
> with ActivePerl:
> 
> cpan M/MJ/MJEVANS/DBD-ODBC-1.23_4.tar.gz
> 
> If you don't have Microsoft VC on your PATH, then this will download
> and install MinGW for you (inside the Perl tree) and use that instead.
> 
> This should work for the latest builds of all major Perl versions,
> 5.8.9.827, 5.10.1.1007 and 5.12.0.1200.  On older versions you may
> need to install MinGW with PPM "by-hand" first:
> 
> ppm install MinGW
> cpan M/MJ/MJEVANS/DBD-ODBC-1.23_4.tar.gz
> 
> On ActivePerl 5.8.8 and earlier you will have to download, install
> and configure MinGW and dmake manually though, so you may not want
> to bother...
> 
> Cheers,
> -Jan
> 
> 
> 

Thanks Jan and sorry I missed some of the chat on #dbi yesterday that I
guess led to this. I will add your advice to DBD::ODBC.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Thread safety for DBD::File and DBD::DBM's inheritance

2010-04-13 Thread Martin Evans
Jens Rehsack wrote:
> Hi DBI Developers,
> 
> as maintainer of SQL::Statement I often try some easy examples using
> DBD::CSV, because I have a good knowledge of what happens behind the
> scenes.
> So I did in recent days when I played around with threads and
> Log::Log4perl::Appender::DBI. I've seen that (even if in my simple
> example everything works fine) a cloned DBD::File still uses the same
> file handles for reading/writing.
> I talked with Merijn about this situation in #dbi on irc.perl.org and
> we decided that it would be better to croak an error when a $dbh is
> used in another thread than the owning one.
> 
> So I started with some hacking this morning to implement that and did
> 'make test' on DBI (as taught by Merijn).
> I got errors from DBD::DBM and after a short analysis I saw that
> it derives from DBD::File but doesn't inherit the entire behavior,
> just a subset.
> This caused DBD::DBM::db::prepare to fail in my tests.
> 
> Now I'm unsure how to step forward. I can hack DBD::File to handle
> this incomplete inheritance by ignoring it, or I can fix DBD::DBM
> to fully inherit and modify the behavior it needs to change (instead
> of ignoring the parent's methods).
> I would prefer the second way, because I think it's the cleaner one.
> 
> Anyone against it?
> 
> Best regards,
> Jens
> 
> 

The second of your suggestions sounds better to me but this is not an
area I am too familiar with. I am familiar with some of your work though
and imagine your offer is not one that should be shrugged off without
good reason.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Oracle 11gr2 & ORA-38909

2010-04-06 Thread Martin Evans
I haven't seen a reply to this yet but I've been on holiday so might
have missed it:

Scott T. Hildreth wrote:
> On Wed, 2010-03-31 at 12:20 -0500, Scott T. Hildreth wrote:
>> We have run into an issue with array processing in 11g.  The developer
>> was using execute_array and his sql statement had 'LOG ERRORS' in it.
>> This did not error out until we switched to 11g.  The issue is that only
>> one is allowed, either 'LOG ERRORS' or 'SAVE EXCEPTIONS'.  Our DBA
>> logged and error report with Oracle and after several posts back and
>> forth this is what they concluded,
>>
>> ==
>> After investigation and discussion, development has closed the bug as
>> 'Not a Bug' with the following reason:
>>
>> "this is an expected behavior in 11g and the user needs to specify
>> either of 'SAVE EXCEPTIONS' clause or the 'DML error logging', but NOT
>> both together.
>> The batch error mode, in the context of this bug, is basically referring
>> to the SAVE EXCEPTIONS clause.
>> It seems the code is trying to use both dml error logging and batch
>> error handling for the same insert. In that case, this is not a bug.
>>
>> For INSERT, the data errors are logged in an error logging table (when
>> the dml error logging feature is used) or returned in batch error
>> handles (when using batch mode).
>> Since the error messages are available to the user in either case, there
>> is no need to both log the error in the error logging table and return
>> the errors in batch error handles, 
>> and we require the user to specify one option or the other but not both
>> in 11G.
>>
>> Both features exist in 10.x. For 11.x, users should change their
>> application to avoid the error.
>> ==
>>
>> So basically we need a way to turn off the 'SAVE EXCEPTIONS' for the
>> batch mode.  I found in dbdimp.c that the oci_mode is being set to 
>> OCI_BATCH_ERRORS in the ora_st_execute_array function.  I was planning 
>> on setting it to OCI_BATCH_MODE and running a test to see if this will
>> not error out.  I report back when I have run the test, but I was
>> wondering what would be the best way to give the user the ability to
>> override the oci_mode. 
> 
> Setting oci_mode to OCI_BATCH_MODE works.  So I want to add a prepare
> attribute that will turn off the SAVE EXCEPTIONS.  I'm looking for some 
> direction on how to add it to dbdimp.c. I haven't thought of a name yet,
> but something like 
> 
> my $sth = $dbh->prepare($SQL,{ora_oci_err_mode => 0});
> 
> I assume I would have to add it to dbd_db_FETCH_attrib() and would I do
> something like this in ora_st_execute_array(),

Don't you mean dbd_st_FETCH_attrib as it is a statement level attribute
not a connection one? Anyway, I don't think it is required unless you
really want to get it back out in a Perl script.

I don't even think you need to add it to a statements
private_attribute_info but then when I checked Oracle.pm it appears a
load of prepare flags have been added. I might be wrong here but since
there is no way to get ora_parse_lang etc (prepare attributes) I don't
think they should be in private_attribute_info.

perl -e 'use DBI;$h =
DBI->connect("dbi:Oracle:host=xxx;sid=yyy","xxx","yyy"); $s =
$h->prepare("select 1 from dual", {ora_parse_lang => 2}); print
$s->{ora_parse_lang};'

prints nothing as you'd expect as there is no way to get ora_parse_lang.

> if (DBD_ATTRIB_TRUE(attr, "ora_oci_err_mode", 16, svp))
>     DBD_ATTRIB_GET_IV(attr, "ora_oci_err_mode", 16, svp, ora_oci_err_mode);

I don't understand why you need it in ora_st_execute_array - the
statement has already been parsed by then. Do you mean dbd_st_prepare in
oci8.c?

> 
> Thanks,
> Scott
> 
> 
>>  An attribute in the prepare method?  
>>
>> Thanks,
>> Scott
> 
> 

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: column_info () vs type_info () (Summary)

2010-03-10 Thread Martin Evans
Tim Bunce wrote:
> On Wed, Mar 10, 2010 at 10:25:45AM +0000, Martin Evans wrote:
> 
>> 1. the TYPE attribute on a statement is clearly documented as to what it
>> should contain - "The values correspond to the international standards
>> (ANSI X3.135 and ISO/IEC 9075) which, in general terms, means ODBC".
>>
>> However, it is not clear from the docs what TYPE_NAME and DATA_TYPE
>> columns should be in the column_info method and how they compare with
>> the same named columns returned by the type_info method.
>>
>> e.g., for the column_info method:
>>
>> DATA_TYPE: The concise data type code
>>   this could be the database internal type number or the ODBC type
>>   the current opinion is that it should be the ODBC type and extra
>>   keys added for the internal type (e.g., ora_type, uni_type depending
>>   on DBD prefix) NOTE mysql already adds "mysql_is_auto_increment",
>>   "mysql_is_pri_key", "mysql_type_name", "mysql_values" keys.
>> TYPE_NAME: A data source dependent data type name.
>>
>> for type_info:
>>
>> DATA_TYPE (integer) SQL data type number.
>> TYPE_NAME (string) Data type name for use in CREATE TABLE statements
>>
>> Proposed Solution: document column_info DATA_TYPE as being the same as
>> {TYPE}  but allow DBDs to add other keys to the column_info result (some
>> already do).
> 
> The column_info() method maps to ODBC SQLColumns() function, and the
> type_info() method maps to the SQLGetTypeInfo() function.
> http://search.cpan.org/~timb/DBI-1.609/DBI.pm#ODBC_and_SQL/CLI_Standards_Reference_Information
> 
> How does the proposed solution fit with that model, and is a better fit
> possible?

From my copy of the ODBC spec, now so heavily used that it is falling
apart, it says:

SQLColumns:
  DATA_TYPE: SQL data type. this can be an ODBC SQL data type or a
  driver-specific SQL data type. For datetime and interval data types,
  this column returns the concise data type (such as SQL_TYPE_DATE or
  SQL_INTERVAL_YEAR_TO_MONTH), rather than the non-concise data type such
  as SQL_DATETIME or SQL_INTERVAL).

  The definition of "driver-specific SQL data types" is later in the
  book and they should be ones registered with the standard.

  TYPE_NAME: Data source dependent data type name; for example, "CHAR",
  "VARCHAR", "MONEY", "LONG VARBINARY", or "CHAR() FOR BIT DATA".

SQLGetTypeInfo:
  DATA_TYPE: exactly the same as SQLColumns above.

  TYPE_NAME: exactly the same as SQLColumns above except it adds
  "Applications must use this name in CREATE TABLE and ALTER TABLE
  statements."

In ODBC the {TYPE} attribute in a statement comes from SQLDescribeCol
which says:

  DATA_TYPE: the SQL data type of the column. This value is read from
  the SQL_DESC_CONCISE_TYPE field of the IRD (Implementation Record
  Descriptor - mje). This will be one of the values in the "SQL Data
  Types" section of Appendix D, "Data Types," or a driver-specific SQL
  data type. If the data type cannot be determined, the driver returns
  SQL_UNKNOWN_TYPE.

So I believe this means TYPE_NAME in type_info MUST be the name
recognised by the database (as usable in create table etc) but DATA_TYPE
in type_info/column_info/{TYPE} should match.

>> 2. There is no guarantee that if you find a TYPE_NAME in column_info you
>> can map it successfully to the type in type_info - this is annoying and
>> difficult to workaround.
>>
>> Proposed Solution: document they should be the same
> 
> Subject to the standard - see above. An underlying issue here may be
> (I'm guessing) that unlike ODBC, the DBI doesn't parse and rewrite
> 'standard sql' statements to the drivers dialect.

Revision: if the DATA_TYPEs match (which they currently don't for all
drivers) there is no absolute reason for the names to match with
non-ODBC drivers. To go from type_info to column_info to {TYPE} and back
you'd use the DATA_TYPE field, not the TYPE_NAME field.

>> 3. column_info is not always provided by a DBD and the documentation
>> fails to mention that in this case the returned statement handle is undef.
>>
>> Proposed Solution: update documentation
> 
> Yeap.

Done.

>> 4. It appears FetchHashKeyName is not honoured in the results of
>> column_info though how this occurs does appear to depend on the DBD.
>> Merijn got all upper case keys from MySQL when FetchHashKeyName =
>> name_lc e.g.,
>>
>> {   BUFFER_LENGTH=> undef, # <-- should have been lowercase
>> CHAR_OCTET_LENGTH => undef,# <-- should have been lowercase
>> .
>> .
&

Re: column_info () vs type_info () (Summary)

2010-03-10 Thread Martin Evans
Martin Evans wrote:
> H.Merijn Brand wrote:
>> On Mon, 08 Mar 2010 10:13:02 +0000, Martin Evans
>>  wrote:
>>
>> large original chunks snipped ...
>>
>>> H.Merijn Brand wrote:
>>>> I see a big difference in what $sth->{TYPE} returns (and the name) and
>>>> what column_info () - if implemented - is returning.
>>> I don't think I do with DBD::ODBC (results below).
>>>
>>>> DATA_TYPE has no specification of what type of code that is. It can be
>>>> either the code the type is internally known by with the database, or
>>>> it can be the ODBC equivalent.
>>>>
>>>> TYPE_NAME has no guarantee whatsoever to be like what type_info ()
>>>> returns with code like:
>>> I thought it should.
>>>
>>>> --8<---
>>>> {   my %types; # Cache for types
>>>>
>>>> # Convert numeric to readable
>>>> sub _type_name
>>>> {
>>>>my $type = shift;
>>>>
>>>>unless (exists $types{$dbh}{$type}) {
>>>>my $tpi = $type =~ m/^-?[0-9]+$/ ? $dbh->type_info ($type) : undef;
>>>>$types{$dbh}{$type} = $tpi ? $tpi->{TYPE_NAME} : $type // "?";
>>>>}
>>>>return $types{$dbh}{$type};
>>>>} # type_name
>>>> }
>>>> -->8---
>>>>
>>>> The keys in the hashref returned from column_info () often do not honor
>>>> the {FetchHashKeyName} dbh attribute, which makes it quite a bit harder
>>>> to write database-independent code. I think either document that the
>>>> sth returned from column_info () doesn't have to follow this attribute,
>>>> or make the authors alter the code so it does.
>>> I guess you are mostly referring to the 'COLUMN_NAME', 'TABLE_NAME',
>>> 'TABLE_SCHEM' and 'TABLE_CAT' keys - yes?
>> Yes, but esp the *extra* fields returned. FetchHashKeyName refers to
>> the data returned in the hashref. The 4 you name are normally provided
>> to column_info () and not the ones you want to examine. What I mean is
>> *all* the keys, so also DATA_TYPE, TYPE_NAME, COLUMN_SIZE,
>> BUFFER_LENGTH, DECIMAL_DIGITS, NUM_PREC_RADIX, NULLABLE, REMARKS,
>> COLUMN_DEF, SQL_DATA_TYPE, SQL_DATETIME_SUB, CHAR_OCTET_LENGTH,
>> ORDINAL_POSITION, IS_NULLABLE. etc
>>
>> For example, in MySQL, a hash like this is returned:
>>
>> {   BUFFER_LENGTH=> undef,
>> CHAR_OCTET_LENGTH => undef,
>> CHAR_SET_CAT => undef,
>> CHAR_SET_NAME=> undef,
>> CHAR_SET_SCHEM   => undef,
>> COLLATION_CAT=> undef,
>> COLLATION_NAME   => undef,
>> COLLATION_SCHEM  => undef,
>> COLUMN_DEF   => undef,
>> COLUMN_NAME  => 'xbb',
>> COLUMN_SIZE  => 20,
>> DATA_TYPE=> 4,
>> DECIMAL_DIGITS   => undef,
>> DOMAIN_CAT   => undef,
>> DOMAIN_NAME  => undef,
>> DOMAIN_SCHEM => undef,
>> DTD_IDENTIFIER   => undef,
>> IS_NULLABLE  => 'NO',
>> IS_SELF_REF  => undef,
>> MAX_CARDINALITY  => undef,
>> NULLABLE => 0,
>> NUM_PREC_RADIX   => 10,
>> ORDINAL_POSITION => 1,
>> REMARKS  => undef,
>> SCOPE_CAT=> undef,
>> SCOPE_NAME   => undef,
>> SCOPE_SCHEM  => undef,
>> SQL_DATA_TYPE=> 4,
>> SQL_DATETIME_SUB => undef,
>> TABLE_CAT=> undef,
>> TABLE_NAME   => 'xbb',
>> TABLE_SCHEM  => undef,
>> TYPE_NAME=> 'BIGINT',
>> UDT_CAT  => undef,
>> UDT_NAME => undef,
>> UDT_SCHEM=> undef,
>> mysql_is_auto_increment => 1,
>> mysql_is_pri_key => 1,
>> mysql_type_name  => 'bigint(20) unsigned',
>> mysql_values => undef
>> }
>>
>> and FetchHashKeyName was set to "NAME_lc", which IMHO should have
>> returned ALL keys lowercase.
> 
> I see what you mean now.
> 
> I don't see why FetchHashKeyName should affect those keys - I believed
> it was for the cases where keys are column names. Those keys are part of
> the DBI spec not some variable thing depending on database e.g., whether
> case is maintained on a column name or not.
> 
>

Re: column_info () vs type_info ()

2010-03-08 Thread Martin Evans
H.Merijn Brand wrote:
> On Mon, 08 Mar 2010 10:13:02 +0000, Martin Evans
>  wrote:
> 
> large original chunks snipped ...
> 
>> H.Merijn Brand wrote:
>>> I see a big difference in what $sth->{TYPE} returns (and the name) and
>>> what column_info () - if implemented - is returning.
>> I don't think I do with DBD::ODBC (results below).
>>
>>> DATA_TYPE has no specification of what type of code that is. It can be
>>> either the code the type is internally known by with the database, or
>>> it can be the ODBC equivalent.
>>>
>>> TYPE_NAME has no guarantee whatsoever to be like what type_info ()
>>> returns with code like:
>> I thought it should.
>>
>>> --8<---
>>> {   my %types;  # Cache for types
>>>
>>> # Convert numeric to readable
>>> sub _type_name
>>> {
>>> my $type = shift;
>>>
>>> unless (exists $types{$dbh}{$type}) {
>>> my $tpi = $type =~ m/^-?[0-9]+$/ ? $dbh->type_info ($type) : undef;
>>> $types{$dbh}{$type} = $tpi ? $tpi->{TYPE_NAME} : $type // "?";
>>> }
>>> return $types{$dbh}{$type};
>>> } # type_name
>>> }
>>> -->8---
>>>
>>> The keys in the hashref returned from column_info () often do not honor
>>> the {FetchHashKeyName} dbh attribute, which makes it quite a bit harder
>>> to write database-independent code. I think either document that the
>>> sth returned from column_info () doesn't have to follow this attribute,
>>> or make the authors alter the code so it does.
>> I guess you are mostly referring to the 'COLUMN_NAME', 'TABLE_NAME',
>> 'TABLE_SCHEM' and 'TABLE_CAT' keys - yes?
> 
> Yes, but esp the *extra* fields returned. FetchHashKeyName refers to
> the data returned in the hashref. The 4 you name are normally provided
> to column_info () and not the ones you want to examine. What I mean is
> *all* the keys, so also DATA_TYPE, TYPE_NAME, COLUMN_SIZE,
> BUFFER_LENGTH, DECIMAL_DIGITS, NUM_PREC_RADIX, NULLABLE, REMARKS,
> COLUMN_DEF, SQL_DATA_TYPE, SQL_DATETIME_SUB, CHAR_OCTET_LENGTH,
> ORDINAL_POSITION, IS_NULLABLE. etc
> 
> For example, in MySQL, a hash like this is returned:
> 
> {   BUFFER_LENGTH=> undef,
> CHAR_OCTET_LENGTH => undef,
> CHAR_SET_CAT => undef,
> CHAR_SET_NAME=> undef,
> CHAR_SET_SCHEM   => undef,
> COLLATION_CAT=> undef,
> COLLATION_NAME   => undef,
> COLLATION_SCHEM  => undef,
> COLUMN_DEF   => undef,
> COLUMN_NAME  => 'xbb',
> COLUMN_SIZE  => 20,
> DATA_TYPE=> 4,
> DECIMAL_DIGITS   => undef,
> DOMAIN_CAT   => undef,
> DOMAIN_NAME  => undef,
> DOMAIN_SCHEM => undef,
> DTD_IDENTIFIER   => undef,
> IS_NULLABLE  => 'NO',
> IS_SELF_REF  => undef,
> MAX_CARDINALITY  => undef,
> NULLABLE => 0,
> NUM_PREC_RADIX   => 10,
> ORDINAL_POSITION => 1,
> REMARKS  => undef,
> SCOPE_CAT=> undef,
> SCOPE_NAME   => undef,
> SCOPE_SCHEM  => undef,
> SQL_DATA_TYPE=> 4,
> SQL_DATETIME_SUB => undef,
> TABLE_CAT=> undef,
> TABLE_NAME   => 'xbb',
> TABLE_SCHEM  => undef,
> TYPE_NAME=> 'BIGINT',
> UDT_CAT  => undef,
> UDT_NAME => undef,
> UDT_SCHEM=> undef,
> mysql_is_auto_increment => 1,
> mysql_is_pri_key => 1,
> mysql_type_name  => 'bigint(20) unsigned',
> mysql_values => undef
> }
> 
> and FetchHashKeyName was set to "NAME_lc", which IMHO should have
> returned ALL keys lowercase.

I see what you mean now.

I don't see why FetchHashKeyName should affect those keys - I believed
it was for the cases where keys are column names. Those keys are part of
the DBI spec not some variable thing depending on database e.g., whether
case is maintained on a column name or not.


>>> Extra fun comes from databases that store type names instead of type
>>> codes in their data-dictionary (like Unify and SQLite), and reversing
>>> that process to make column_info () return both TYPE_NAME and DATA_TYPE
>>> makes it a different pair than TYPE and the derived counterpart from
>>> type_info ().
>>>
>>> My real question is, should the docs be enhanced to

Re: column_info () vs type_info ()

2010-03-08 Thread Martin Evans
H.Merijn Brand wrote:
> I see a big difference in what $sth->{TYPE} returns (and the name) and
> what column_info () - if implemented - is returning.

I don't think I do with DBD::ODBC (results below).

> From the DBI docs:
> 
>Handle attributes:
> 
>"TYPE"  (array-ref, read-only)
> 
>Returns a reference to an array of integer values for each column.
>The value indicates the data type of the corresponding column.
> 
>The values correspond to the international standards (ANSI X3.135
>and ISO/IEC 9075) which, in general terms, means ODBC. Driver-specific
>types that don't exactly match standard types should generally return
>the same values as an ODBC driver supplied by the makers of the
>database. That might include private type numbers in ranges the
>vendor has officially registered with the ISO working group:
> 
>  ftp://sqlstandards.org/SC32/SQL_Registry/
> 
>Where there's no vendor-supplied ODBC driver to be compatible with,
>the DBI driver can use type numbers in the range that is now
   officially reserved for use by the DBI: -9999 to -9000.
> 
>All possible values for "TYPE" should have at least one entry in the
>output of the "type_info_all" method (see "type_info_all").
> 
>column_info:
> 
>DATA_TYPE: The concise data type code.
> 
>TYPE_NAME: A data source dependent data type name.
> 
> DATA_TYPE has no specification of what type of code that is. It can be
> either the code the type is internally known by with the database, or
> it can be the ODBC equivalent.
> 
> TYPE_NAME has no guarantee whatsoever to be like what type_info ()
> returns with code like:

I thought it should.

> --8<---
> {   my %types;# Cache for types
> 
> # Convert numeric to readable
> sub _type_name
> {
>   my $type = shift;
> 
>   unless (exists $types{$dbh}{$type}) {
>   my $tpi = $type =~ m/^-?[0-9]+$/ ? $dbh->type_info ($type) : undef;
>   $types{$dbh}{$type} = $tpi ? $tpi->{TYPE_NAME} : $type // "?";
>   }
>   return $types{$dbh}{$type};
>   } # type_name
> }
> -->8---
> 
> The keys in the hashref returned from column_info () often do not honor
> the {FetchHashKeyName} dbh attribute, which makes it quite a bit harder
> to write database-independent code. I think either document that the
> sth returned from column_info () doesn't have to follow this attribute,
> or make the authors alter the code so it does.

I guess you are mostly referring to the 'COLUMN_NAME', 'TABLE_NAME',
'TABLE_SCHEM' and 'TABLE_CAT' keys - yes?

> 
> Extra fun comes from databases that store type names instead of type
> codes in their data-dictionary (like Unify and SQLite), and reversing
> that process to make column_info () return both TYPE_NAME and DATA_TYPE
> makes it a different pair than TYPE and the derived counterpart from
> type_info ().
> 
> 
> My real question is, should the docs be enhanced to
> 
> • make clear that these two return different things

or make them return the same things. Obviously for ODBC this is simple
as they are the same things but for other DBDs I think it is useful to
know a single type that can be used across all databases and the real
type implemented in the database (and be able to map between them) -
from your results mysql looks closest in this respect.

People writing bugzilla, open LDAP etc backend support in databases are
having to hand code the schema for each database but in many cases it
may be possible (if the DBDs returned a single set of types) to code
this generically (although I'd guess it would be still quite hard).

> • column_info () is not always available (sth is undef then)

I guess so.

It should not be difficult to add column_info to DBD::Oracle - I know
this has come up in the past. I think I even provided some SQL that
would do it but I cannot find it right now.

> Here's my findings so far ...
> 
> PostgreSQL
>   Create as         sth attributes  column_info ()
>   ----------------  --------------  ---------------
>   bigint            ?           -5  bigint       -5
>   bigserial         ?           -5  bigint       -5
>   bit               unknown      0  bit           0
>   bit (9)           unknown      0  bit           0
>   bit varying       unknown      0  bit varying   0
>   bit varying (35)  unknown      0  bit varying   0
>   bool              bool        16  boolean      16
>   boolean           bool        16  boolean      16
>   box               unknown      0  box
Re: Time to Document Callbacks

2010-03-08 Thread Martin Evans
Tim Bunce wrote:
> On Sun, Mar 07, 2010 at 10:29:29AM -0800, David E. Wheeler wrote:
>> On Mar 7, 2010, at 5:43 AM, Tim Bunce wrote:
>>
 Looks good, thanks. Pity you removed the `$dbh->{private_myapp_sql_mode}`
 bit, though, as that's required when using connect_cached, which
 you almost certainly are doing if you need this hack.
>>> Are you sure it's required when using connected()? The connected method
>>> is only called for new connections.
>> Yes, I just verified it with Bricolage, which uses connect_cached.
>> connected() is called every time, whether or not a connection is a new
>> connection.
> 
> Uh, yeah, I just looked at the code. Sometimes I confuse myself.
> I think that's a bug. I always intended connected() to be used as an
> on-new-physical-connection-established hook.
> 
> Any objections to making it so?

Not from me. In fact if connect_cached called it every time I can
imagine it would break some code I've seen.

> Looking at the code I can see an issue with clone(): it'll clone using
> the same method (connect/connect_cached) as the handle that's being
> cloned. I guess I can document that as a feature :)
> 
>> BTW, here's another issue I forgot to mention. I installed the DBI
>> from svn and now get this error unless I rebuild each driver:
>>
>> [Sun Mar 07 10:22:24 2010] [error] DBI/DBD internal version mismatch
>> (DBI is v95/s208, DBD ./mysql.xsi expected v94/s208) you probably need
>> to rebuild the DBD driver (or possibly the DBI).
>>
>> I've never had an issue with binary compatibility between the DBI and
>> a DBD. Did something change in this last build?
> 
> Yes, the additional hook for sql_type_cast_svpv. But I shouldn't have
> bumped DBISTATE_VERSION for just that - the change was binary compatible
> with old drivers. (Drivers that care can use the DBIXS_REVISION macro
> to check if sql_type_cast_svpv is available at compile time and check
> it's non-zero to check it's available at runtime.)
> 
> Fixed in r13837. Thanks.
> 
> Tim.
> 
> 

I used DBIXS_REVISION for those changes in DBD::Oracle and DBD::ODBC
although the latter is not released yet.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.24 Release Candidate 5

2010-01-29 Thread Martin Evans
John Scoles wrote:
> Hooray.
> 
> Just one more thing has come up in like the last 15min
> 
> can you redo your 'make test' with the following file
> 
> http://svn.perl.org/modules/dbd-oracle/trunk/ocitrace.h
> 
> It is a trivial change but just want to make sure it does not break
> anything
> 
> It is a change to the precision for OCIDateTimeToText (6 instead of 0)
> for varrays of timestamps
> 
> needed to conduct a large scale experiment of some sort
> 
> cheers
> John Scoles

I've not got time this afternoon to run through all my tests again, but
I have updated ocitrace.h and run the inbuilt tests on 2 machines OK.
I've also restarted my test system with this change and have not seen
anything wrong as yet.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com

> Charles Jardine wrote:
>> On 28/01/10 15:59, John Scoles wrote:
>>  
>>> Well here comes the big #5
>>>
>>>
>>> It can be found at the usual place
>>>
>>> http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.24-RC5.tar
>>> 
>>
>> My environment is Linux x86-64, Perl 5.10.1 (64 bit), DBI 1.609,
>> Oracle 10.2.0.4.2 (64 bit). Database charset UTF8, national
>> charset AL16UTF16.
>>
>> RC5 compiles without warnings and passes all its tests, including
>> the regression tests for my object patches. I have run some sample
>> work at trace level 15 - there are no segfaults.
>>
>> In short, I can't find anything wrong with it.
>>
>>   
> 
> 



Re: ANNOUNCE: DBD::Oracle 1.24 Release Candidate 5

2010-01-29 Thread Martin Evans
John Scoles wrote:
> Well here comes the big #5
> 
> 
> It can be found at the usual place
> 
> http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.24-RC5.tar
> 
> 
> This time round was a patch to fix a patch for some warnings that was
> causing a seg-fault on some 32 bit boxes
> 
> Cheers
> 
> and thanks for all the testing
> 
> John Scoles
> 
> 

Thanks for pulling this together John and to everyone else who has
contributed.

For me no new problems to report.

I still have one test in 26exe_array failing and the problem I posted
with lob freeing, but neither is causing any issues in our application.

I did the following:

o no compiler warnings other than those generated by Perl macros in XS.

o all tests except one in 26exe_array succeed

o confirmed the test code I posted still generates "DBD::Oracle::st
DESTROY failed: ORA-22922: nonexistent LOB value (DBD ERROR:
OCILobFreeTemporary)". I'll maybe rt this after 1.24 unless I find a fix
before.

o re-tested RowCacheSize changes
  checked multiple rows are fetched at a time (rt46763 and rt46998)
  checked the RowsInCache changes (not rt'ed - posted in dbi-dev)

o checked unicode in errors is displayed properly (rt46438)

o checked binding of integers as IVs works (rt49818)

o tested I can run simple scripts with ora_verbose=15

o tested the default settings retrieve lobs when the charset is utf-8
(ORA_DBD_NCS_BUFFER)

o run through our application tests without issues.

This was on a linux 32 bit system with "v5.10.0 built for
i486-linux-gnu-thread-multi". I am using DBI from subversion trunk,
Oracle 11.1 server and instant client 11.1

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Bug in tracing in DBD::Oracle

2010-01-28 Thread Martin Evans
Charles Jardine wrote:
> On 27/01/10 17:38, Martin Evans wrote:
>> Charles Jardine wrote:
>>> On 27/01/10 15:52, Martin Evans wrote:
>>>> Hi,
>>>>
>>>> I was asked to enable ora_verbose and send a trace a few days ago.
>>>>
>>>> I'm getting a segfault with DBD::Oracle when ora_verbose or dbd_verbose
>>>> is set to 15 in the connect method call. The stack trace is:
> 
> [snip]
> 
>>>> and that refers to the following line in dbdimp.c:
>>>>
>>>> OCINlsEnvironmentVariableGet_log_stat( &ncharsetid,(size_t)  0,
>>>> OCI_NLS_NCHARSET_ID, 0, &rsize ,status );
>>>>
>>>> Oracle defines the second argument as size_t so I guess that cast of 0
>>>> to size_t is ok but ocitrace.h then goes on to cast it again to
>>>> (unsigned long long) and the format argument has been changed to %llu.
>>>> Although these match it segfaults.
>>> I am responsible for this change. It was part of a campaign to avoid
>>> warnings
>>> when compiling on 64-bit gcc platforms. All that is necessary to avoid
>>> the compiler warnings is that the format arguments match the casts
>>> (subject to integral promotion).
>>>
>>> I used (unsigned long long) in this case for maximum portability. I
>>> couldn't
>>> find any standard that said that (size_t) might not be wider than
>>> (unsigned long).
>>>
>>> If my change breaks PerlIO_vprintf, we must back off. Using (unsigned
>>> long)
>>> and %lu would work on all platforms I use. Using (unsigned int) and %u,
>>> would work in this case, but not for all uses of size_t.
>>>
>>> This is the only place where I used a %llu or %lld, so there is only
>>> one place to change.
>>>
>>> Martin, can you try changing the casts to (unsigned long) and the
>>> formats
>>> to %lu, and see if this fixes your problem.
>>
>> That is what I did in effect (nearly).
>>
>> I took the casts of 0 to size_t out of the 2 calls in dbdimp.c and added
>> a cast to size_t on the real call to Oracle in the macro. Then I changed
>> the format in the PerlIO_printf to %lu and changed the cast to (unsigned
>> long). This works for me and I guess it will work without warning for
>> you too.
>>
>> This isn't exactly what John has in subversion at the moment.
> 
> John seems to have corrected my over-zealous cast, and produced
> a version which compiles without warning and works on both 32-
> and 64-bit platforms. Thank you John.
> 
> I prefer his version, with the cast to size_t left where it was,
> rather than imported into the macro.

I'm not that comfortable with the cast to size_t in dbdimp.c because
then it is later cast back to unsigned long and I'd guess on platforms
where size_t is an unsigned long long the compiler might whine about that.

The best fix would be if there was a reliable format for size_t but I
don't know of one.

However, I don't think we need to get this out of proportion after all
it is only two calls and in both cases the size_t is 0 anyway as the
requested attributes are integers and not strings.

> If the current SVN version works for Martin, I suggest that no
> more needs to be done.

It does work.


> I am sorry to have caused this bother.
> 

Didn't cost me much bother and was done for all the right reasons.

I am considerably more bothered about this serious problem:

oci8.c
======
/* line 1897 */
if (DBIS->debug >= 3 || dbd_verbose >= 3 || oci_warn) {
    char buf[10];
    sprintf(buf, "bytes");

    if (ftype == ORA_CLOB)
        sprintf(buf, "characters");

    PerlIO_printf(DBILOGFP,
        "    OCILobRead %s %s: csform %d (%s), LOBlen %lu(%s), "
        "LongReadLen %lu(%s), BufLen %lu(%s), Got %lu(%s)\n",
        name, oci_status_name(status),
        csform, oci_csform_name(csform),
        ul_t(loblen), buf,
        ul_t(imp_sth->long_readlen), buf,
        ul_t(buflen), buf, ul_t(amtp), buf);
}

That sprintf will always overflow buf in the CLOB case: buf is 10 chars
long, and "characters" needs 11 bytes including the trailing NUL.

The attached patch fixes above and a few other ones I saw.

I suggest we try very hard to get someone with a 64bit platform to try
the next RC.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


1.dif


Re: Bug in tracing in DBD::Oracle

2010-01-27 Thread Martin Evans
Charles Jardine wrote:
> On 27/01/10 15:52, Martin Evans wrote:
>> Hi,
>>
>> I was asked to enable ora_verbose and send a trace a few days ago.
>>
>> I'm getting a segfault with DBD::Oracle when ora_verbose or dbd_verbose
>> is set to 15 in the connect method call. The stack trace is:
>>
>> (gdb) bt
>> #0  0x080be45c in Perl_sv_vcatpvfn ()
>> #1  0x080ccd6d in Perl_vnewSVpvf ()
>> #2  0x0811cb54 in PerlIO_vprintf ()
>> #3  0x0811cbdf in PerlIO_printf ()
>> #4  0x007e961c in ora_db_login6 (dbh=0x830f6a0, imp_dbh=0x834b0b0,
>> dbname=, uid=0x81aedf8 "bet",
>> pwd=0x81aee08 "b3t", attr=0x830ee20) at dbdimp.c:546
>> #5  0x007dd0e0 in XS_DBD__Oracle__db__login (my_perl=0x8188008,
>> cv=0x8344b88) at ./Oracle.xsi:100
>> #6  0x080b12c0 in Perl_pp_entersub ()
>> #7  0x080af688 in Perl_runops_standard ()
>> #8  0x080acf4b in Perl_call_sv ()
>> #9  0x00575f0a in XS_DBI_dispatch (my_perl=0x8188008, cv=0x82bfa88) at
>> DBI.xs:3442
>> #10 0x080b12c0 in Perl_pp_entersub ()
>> #11 0x080af688 in Perl_runops_standard ()
>> #12 0x080adbb2 in perl_run ()
>> #13 0x08063ffd in main ()
>>
>> and that refers to the following line in dbdimp.c:
>>
>> OCINlsEnvironmentVariableGet_log_stat( &ncharsetid,(size_t)  0,
>> OCI_NLS_NCHARSET_ID, 0, &rsize ,status );
>>
>> Oracle defines the second argument as size_t so I guess that cast of 0
>> to size_t is ok but ocitrace.h then goes on to cast it again to
>> (unsigned long long) and the format argument has been changed to %llu.
>> Although these match it segfaults.
> 
> I am responsible for this change. It was part of a campaign to avoid
> warnings
> when compiling on 64-bit gcc platforms. All that is necessary to avoid
> the compiler warnings is that the format arguments match the casts
> (subject to integral promotion).
> 
> I used (unsigned long long) in this case for maximum portability. I
> couldn't
> find any standard that said that (size_t) might not be wider than
> (unsigned long).
> 
> If my change breaks PerlIO_vprintf, we must back off. Using (unsigned long)
> and %lu would work on all platforms I use. Using (unsigned int) and %u,
> would work in this case, but not for all uses of size_t.
> 
> This is the only place where I used a %llu or %lld, so there is only
> one place to change.
> 
> Martin, can you try changing the casts to (unsigned long) and the formats
> to %lu, and see if this fixes your problem.

That is what I did in effect (nearly).

I took the casts of 0 to size_t out of the 2 calls in dbdimp.c and added
a cast to size_t on the real call to Oracle in the macro. Then I changed
the format in the PerlIO_printf to %lu and changed the cast to (unsigned
long). This works for me and I guess it will work without warning for
you too.

This isn't exactly what John has in subversion at the moment.

>> This segfaults on my Linux machine described with the Perl -V output
>> below. I cannot believe the size of the first argument passed to
>> OCINlsEnvironmentVariableGet is ever going to need a size_t and in any
>> case it has a max size of OCI_NLS_MAXBUFSZ (100 in Instant Client 11.1
>> for Linux X86).
> 

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Bug in tracing in DBD::Oracle

2010-01-27 Thread Martin Evans
Hi,

I was asked to enable ora_verbose and send a trace a few days ago.

I'm getting a segfault with DBD::Oracle when ora_verbose or dbd_verbose
is set to 15 in the connect method call. The stack trace is:

(gdb) bt
#0  0x080be45c in Perl_sv_vcatpvfn ()
#1  0x080ccd6d in Perl_vnewSVpvf ()
#2  0x0811cb54 in PerlIO_vprintf ()
#3  0x0811cbdf in PerlIO_printf ()
#4  0x007e961c in ora_db_login6 (dbh=0x830f6a0, imp_dbh=0x834b0b0,
dbname=, uid=0x81aedf8 "bet",
pwd=0x81aee08 "b3t", attr=0x830ee20) at dbdimp.c:546
#5  0x007dd0e0 in XS_DBD__Oracle__db__login (my_perl=0x8188008,
cv=0x8344b88) at ./Oracle.xsi:100
#6  0x080b12c0 in Perl_pp_entersub ()
#7  0x080af688 in Perl_runops_standard ()
#8  0x080acf4b in Perl_call_sv ()
#9  0x00575f0a in XS_DBI_dispatch (my_perl=0x8188008, cv=0x82bfa88) at
DBI.xs:3442
#10 0x080b12c0 in Perl_pp_entersub ()
#11 0x080af688 in Perl_runops_standard ()
#12 0x080adbb2 in perl_run ()
#13 0x08063ffd in main ()

and that refers to the following line in dbdimp.c:

OCINlsEnvironmentVariableGet_log_stat( &ncharsetid,(size_t)  0,
OCI_NLS_NCHARSET_ID, 0, &rsize ,status );

Oracle defines the second argument as size_t so I guess that cast of 0
to size_t is ok but ocitrace.h then goes on to cast it again to
(unsigned long long) and the format argument has been changed to %llu.
Although these match it segfaults.

This segfaults on my Linux machine described with the Perl -V output
below. I cannot believe the size of the first argument passed to
OCINlsEnvironmentVariableGet is ever going to need a size_t and in any
case it has a max size of OCI_NLS_MAXBUFSZ (100 in Instant Client 11.1
for Linux X86).

I imagine this got changed by someone with a 64 bit system where size_t
was possibly unsigned long long and that generated a warning on the call
to PerlIO_printf.

I changed my version to remove the cast to size_t from the call to
OCINlsEnvironmentVariableGet and put this cast in the real call in the
macro instead. I then changed the format for the size in the
PerlIO_printf to %lu and cast to (unsigned long).

I believe this should work for 64bit machines too since
OCINlsEnvironmentVariableGet is only currently used for integer types
and not string types so all the calls pass 0 anyway. Perhaps someone who
has a 64bit machine could check this out.

Summary of my perl5 (revision 5 version 10 subversion 0) configuration:
  Platform:
osname=linux, osvers=2.6.24-23-server,
archname=i486-linux-gnu-thread-multi
uname='linux vernadsky 2.6.24-23-server #1 smp wed apr 1 22:22:14
utc 2009 i686 gnulinux '
config_args='-Dusethreads -Duselargefiles -Dccflags=-DDEBIAN
-Dcccdlflags=-fPIC -Darchname=i486-linux-gnu -Dprefix=/usr
-Dprivlib=/usr/share/perl/5.10 -Darchlib=/usr/lib/perl/5.10
-Dvendorprefix=/usr -Dvendorlib=/usr/share/perl5
-Dvendorarch=/usr/lib/perl5 -Dsiteprefix=/usr/local
-Dsitelib=/usr/local/share/perl/5.10.0
-Dsitearch=/usr/local/lib/perl/5.10.0 -Dman1dir=/usr/share/man/man1
-Dman3dir=/usr/share/man/man3 -Dsiteman1dir=/usr/local/man/man1
-Dsiteman3dir=/usr/local/man/man3 -Dman1ext=1 -Dman3ext=3perl
-Dpager=/usr/bin/sensible-pager -Uafs -Ud_csh -Ud_ualarm -Uusesfio
-Uusenm -DDEBUGGING=-g -Doptimize=-O2 -Duseshrplib
-Dlibperl=libperl.so.5.10.0 -Dd_dosuid -des'
hint=recommended, useposix=true, d_sigaction=define
useithreads=define, usemultiplicity=define
useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
use64bitint=undef, use64bitall=undef, uselongdouble=undef
usemymalloc=n, bincompat5005=undef
  Compiler:
cc='cc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -DDEBIAN
-fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE
-D_FILE_OFFSET_BITS=64',
optimize='-O2 -g',
cppflags='-D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fno-strict-aliasing
-pipe -I/usr/local/include'
ccversion='', gccversion='4.4.1', gccosandvers=''
intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t',
lseeksize=8
alignbytes=4, prototype=define
  Linker and Libraries:
ld='cc', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib /usr/lib64
libs=-lgdbm -lgdbm_compat -ldb -ldl -lm -lpthread -lc -lcrypt
perllibs=-ldl -lm -lpthread -lc -lcrypt
libc=/lib/libc-2.10.1.so, so=so, useshrplib=true,
libperl=libperl.so.5.10.0
gnulibc_version='2.10.1'
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
cccdlflags='-fPIC', lddlflags='-shared -O2 -g -L/usr/local/lib'


Characteristics of this binary (from libperl):
  Compile-time options: MULTIPLICITY PERL_DONT_CREATE_GVSV
PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP USE_ITHREADS
USE_LARGE_FILES USE_PERLIO USE_REENTRANT_API
  Built under linux
  Compiled at Oct  1 2009 22:19:26
  %ENV:
PERL5LIB="/home/martin/xxx/tools/modules/XXX/lib:/home/martin/xxx/cgi"
  @I

New 1.23_2 development release of DBD::ODBC

2010-01-26 Thread Martin Evans
Just uploaded to CPAN. This is a development release as I've introduced
two significant changes which I'm waiting on feedback for.

=head2 Changes in DBD::ODBC  1.23_2 January 26, 2010

Fixed bug in Makefile.PL which could fail to find unixODBC/iODBC
header files but not report it as a problem. Thanks to Thomas
J. Dillman and his smoker for finding this.

Fixed some compiler warnings in dbdimp.c output by latest gcc wrt to
format specifiers in calls to PerlIO_printf.

Added the odbc_force_bind_type attribute to help sort out problems
with ODBC Drivers which support SQLDescribeParam but describe the
parameters incorrectly (see rt 50852). Test case also added as
rt_50852.t.

=head2 Changes in DBD::ODBC  1.23_1 October 21, 2009

Makefile.PL changes:
  some formatting changes to output
  warn if unixodbc headers are not found that the unixodbc-dev package
is not
installed
  use $arext instead of "a"
  pattern match for pulling libodbc.* changed
  warn if DBI_DSN etc not defined
  change odbc_config output for stderr to /dev/null
  missing / on /usr/local when finding find_dm_hdr_files()

New FAQ entries from Oystein Torget for bind parameter bugs in SQL Server.

rt_46597.rt - update on wrong table

Copied dbivport.h from the latest DBI distribution into DBD::ODBC.

Added if_you_are_taking_over_this_code.txt.

Add latest Devel::PPPort ppport.h to DBD::ODBC and followed all
recommendations for changes to dbdimp.c.

Added change to Makefile.PL provided by Shawn Zong to make
Windows/Cygwin work again.

Minor change to Makefile.PL to output env vars to help in debugging
peoples build failures.

Added odbc_utf8_on attribute to dbh and sth handles to mark all
strings coming from the database as utf8.  This is for Aster (based on
PostgreSQL) which returns all strings as UTF-8 encoded unicode.
Thanks to Noel Burton-Krahn.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


New release 0.18 of DBIx::Log4perl

2010-01-25 Thread Martin Evans
I've just uploaded a new release of DBIx::Log4perl to CPAN. The changes:

0.18  Mon January 22 2010

  Minor speedups in bind_param, bind_param_inout and execute methods.
Thanks to Devel::NYTProf.

  Minor speedups in _unseen_sth.
Thanks to Devel::NYTProf.

  Fix rt 53755 (fetchrow_array ignoring calling context). Thanks to
  Bill Rios for spotting this and identifying the problem.

  Log attributes passed to the connect call.

  When outputting the result of a selectall_* it didn't show the
  connection number.

  Add support for clone.
  NOTE: This change introduces incompatibilities from previous versions as
all attribute names have changed to become lower case.

Please let me know if you find any problems.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.24 Release Candidate 4 (ORA-22922 error)

2010-01-22 Thread Martin Evans
John Scoles wrote:
> 
> Well here comes #4
> 
> 
> It can be found at the usual place
> 
> http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.24-RC4.tar
> 
> 
> This time round I added a little patch from Charles Jardine fro objects
> and some fixes for warnings
> 
> I have changed the ora_ncs_buff_mtpl default value back to 4 so it
> doesn't muck anyone up and have made it so it reports better and have
> updated the Pod to reflect this
> 
> Hopefully this will get most of them this time round
> 
> Cheers
> 
> and thanks for all the testing
> 
> John S
> 
> 

John,

I have a vague recollection someone mentioned lobs and freeing them
during this RC round but I cannot locate that email now - if someone did
it may be relevant to this issue. Anyway, I am getting a new error

"DBD::Oracle::st DESTROY failed: ORA-22922: nonexistent LOB value (DBD
ERROR: OCILobFreeTemporary) [for Statement "BEGIN p_mje(?); END;"]."

but as yet I cannot be certain this was not there before as our code
base for the application has changed. Code demonstrating the problem is
below and I will a) try and check this with a stock 1.23 and b) see if I
can locate the problem.

# $Id: fork.pl 3727 2010-01-22 10:47:27Z martin $
# Perl script which demonstrates ORA-22922 error in DBD::Oracle 1.24 RC4
#
use strict;
use warnings;
use DBI;
use Proc::Fork;
use Data::Dumper;
use DBD::Oracle qw(:ora_types);

my $ph = DBI->connect(
"dbi:Oracle:host=betoracle.easysoft.local;sid=devel",
"bet", "b3t");
print "ph InactiveDestroy = $ph->{InactiveDestroy}\n";

eval {
local $ph->{PrintError} = 0;
$ph->do(q/drop table mje/);
$ph->do(q/drop procedure p_mje/);
};
$ph->do(q/create table mje (a clob)/);
$ph->do(bind_param(1, $clob, {ora_type => ORA_CLOB});
$st->execute;

run_fork {
child {
#my $ch = $ph->clone;
$ph->{InactiveDestroy} = 1;
#$ph = undef;
exit 0;
}
parent {
waitpid $_[0], 0;
}
};

NOTE, it is not sufficient to simply set InactiveDestroy, you seem to
need to do the fork.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: subclassing the DBI and the clone method

2010-01-21 Thread Martin Evans
Tim Bunce wrote:
> On Wed, Jan 20, 2010 at 03:24:23PM +0000, Martin Evans wrote:
>>>> but I don't see any call to connect when clone is called.
>>> You don't see a call to DBI->connect, but there is a call to
>>> $drh->connect via the closure.
>>>
>>>> I presume there is something I need to do - any ideas?
>>> The closure calles the connected() method ad that's a good method to
>>> override to (re)setup any private stuff you need.
> 
>> Tim,
>>
>> Thank you for the pointers.
> 
> I'm glad you could interpret it despite the typos :)
> 
>> I had all DBIx::Log4perl setup in the
>> connect method (strangely I don't recollect reading about the connected
>> method) and after moving it all to the connected method and deleting my
>> failing attempt at clone override this now seems to work except:
>>
>> o when I pass DBIx::Log4perl attributes in the connection e.g.,
>>   connect('dbi:Oracle:xxx','user', 'pass', {DBIx_l4p_logmask => 1})
>>
>> I get warnings from the connect closure like this:
>>
>> Can't set DBIx::Log4perl::db=HASH(0x87116c0)->{DBIx_l4p_logmask}:
>> unrecognised attribute name or invalid value
>>
>> Previously I didn't get these as I parsed my attributes out in my
>> connect method then deleted them before DBI saw them but now I need them
>> to get down to the connected method but I don't want those warnings and
>> the code checks before calling connected:
>> [...]
>> and I cannot capture them in connect as this does not work for clone (as
>> my connect never gets called if you clone).
> 
> It seems that the current clone method doesn't play well with DBI
> subclasses.

Doesn't seem to although can be made to work.

> I've never been very happy with the behaviour of the clone() method
> (which is why it has "likely to change" in the docs) but I've not had a
> clear idea of what to do with it.

I saw the "likely to change" but unfortunately the problem is others
have ignored it and there are examples all over the place (quite a
number on perlmonks) of forking where they use clone.

> Part of the problem in this case is that you're using uppercase
> attribute names. If you used lowercase then things might 'just work'.

Argh - I had read the "Driver or database engine specific
(non-portable)" attributes are in lower case and can contain underscores
back in the distant past but must have forgotten that when it came to
writing DBIx::Log4perl.

Making that change does indeed make those warnings go away and I am
happy to make that change since it was written down in the DBI spec.

> Otherwise you probably need to override clone() to remove your
> attributes (perhaps stashing them in a 'private_l4p_tmp_attr'
> attribute), call the SUPER::clone and then handle your attributes.
> Either after SUPER::clone returns or in connected().

That is sort of where I started - overriding clone and I had all sorts
of other issues so I'd rather not go there again.

If I moved to lowercase attribute names and moved all the setup code I
used to have in the overridden connect to connected, would that be
compatible with the DBI spec as far as you are concerned, and is that
how you had envisioned it being done?


>> o "Can't locate auto/DBIx/Log4perl/st/DELETE.al"
>>
>> I feel this is something I've done wrong but I cannot find it yet. My
>> connect method is trying to call $dbh->func('dbms_output_enable') in
>> DBD::Oracle (as it always has done) but it is failing in my execute
>> attempting to undefine HandleError:
> 
> Does DBIx::Log4perl::st subclass DBI::st? It needs to.
> 
> Tim.
> 
> 

Yes:

st.pm:

use strict;
use warnings;
use DBI;
use Log::Log4perl;

package DBIx::Log4perl::st;
@DBIx::Log4perl::st::ISA = qw(DBI::st DBIx::Log4perl);
use DBIx::Log4perl::Constants qw (:masks $LogMask);

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: subclassing the DBI and the clone method

2010-01-20 Thread Martin Evans
Tim Bunce wrote:
> On Tue, Jan 19, 2010 at 09:04:43PM +, Martin J. Evans wrote:
>> Hi,
>>
>> Is there anything special a subclassed DBI module (DBIx::Log4perl in
>> this case) needs to do for the clone method?
>>
>> The DBI docs currently say "The clone method duplicates the $dbh
>> connection by connecting with the same parameters ($dsn, $user,
>> $password) as originally used."
> 
> sub clone {
> my ($old_dbh, $attr) = @_;
> my $closure = $old_dbh->{dbi_connect_closure} or return;
> 
> That's a closure created by connect() that performs the $drh->connect call.
> 
> unless ($attr) {
> # copy attributes visible in the attribute cache
> keys %$old_dbh; # reset iterator
> while ( my ($k, $v) = each %$old_dbh ) {
> # ignore non-code refs, i.e., caches, handles, Err etc
> next if ref $v && ref $v ne 'CODE'; # HandleError etc
> $attr->{$k} = $v;
> }
> # explicitly set attributes which are unlikely to be in the
> # attribute cache, i.e., boolean's and some others
> $attr->{$_} = $old_dbh->FETCH($_) for (qw(
> AutoCommit ChopBlanks InactiveDestroy
> LongTruncOk PrintError PrintWarn Profile RaiseError
> ShowErrorStatement TaintIn TaintOut
> ));
> }
> # use Data::Dumper; warn Dumper([$old_dbh, $attr]);
> my $new_dbh = &$closure($old_dbh, $attr);
> unless ($new_dbh) {
> # need to copy err/errstr from driver back into $old_dbh
> my $drh = $old_dbh->{Driver};
> return $old_dbh->set_err($drh->err, $drh->errstr, $drh->state);
> }
> return $new_dbh;
> }
> 
>> but I don't see any call to connect when clone is called.
> 
> You don't see a call to DBI->connect, but there is a call to
> $drh->connect via the closure.
> 
>> I presume there is something I need to do - any ideas?
> 
> The closure calles the connected() method ad that's a good method to
> override to (re)setup any private stuff you need.
> 
> Tim.
> 
> 

Tim,

Thank you for the pointers. I had all DBIx::Log4perl setup in the
connect method (strangely I don't recollect reading about the connected
method) and after moving it all to the connected method and deleting my
failing attempt at clone override this now seems to work except:

o when I pass DBIx::Log4perl attributes in the connection e.g.,
  connect('dbi:Oracle:xxx','user', 'pass', {DBIx_l4p_logmask => 1})

I get warnings from the connect closure like this:

Can't set DBIx::Log4perl::db=HASH(0x87116c0)->{DBIx_l4p_logmask}:
unrecognised attribute name or invalid value

Previously I didn't get these as I parsed my attributes out in my
connect method then deleted them before DBI saw them but now I need them
to get down to the connected method but I don't want those warnings and
the code checks before calling connected:

if (%$apply) {

if ($apply->{DbTypeSubclass}) {
my $DbTypeSubclass = delete $apply->{DbTypeSubclass};
DBI::_rebless_dbtype_subclass($dbh,
$rebless_class||$class, $DbTypeSubclass);
}
my $a;
foreach $a (qw(Profile RaiseError PrintError AutoCommit)) { # do
these first
next unless  exists $apply->{$a};
$dbh->{$a} = delete $apply->{$a};
}
while ( my ($a, $v) = each %$apply) {
# MJE warnings generated here
eval { $dbh->{$a} = $v } or $@ && warn $@;
}
}

# confirm to driver (ie if subclassed) that we've connected
sucessfully
# and finished the attribute setup. pass in the original arguments
$dbh->connected(@orig_args); #if ref $dbh ne 'DBI::db' or $proxy;

and I cannot capture them in connect as this does not work for clone (as
my connect never gets called if you clone).

o "Can't locate auto/DBIx/Log4perl/st/DELETE.al"

I feel this is something I've done wrong but I cannot find it yet. My
connect method is trying to call $dbh->func('dbms_output_enable') in
DBD::Oracle (as it always has done) but it is failing in my execute
attempting to undefine HandleError:

#
# If DBDSPECIFIC is enabled and this is DBD::Oracle we will attempt to
# to retrieve any dbms_output. However, 'dbms_output_get' actually
# creates a new statement, prepares it, executes it, binds parameters
# and then fetches the dbms_output. This will cause this execute method
# to be called again and we could recurse forever. To prevent that
# happening we set {dbd_specific} flag before calling dbms_output_get
# and clear it afterwards.
#
# Also in DBI (at least up to 1.54) and most DBDs, the same memory is
# used for a dbh errstr/err/state and each statement under it. As a
# result, if you sth1->execute (it fails) then $sth2->execute which
# succeeds, sth1->errstr/err are undeffed :-(
# see http://www.nntp.perl.org/group/perl.dbi.

Re: ANNOUNCE: DBD::Oracle 1.24 Release Candidate 3

2010-01-14 Thread Martin Evans
John Scoles wrote:
> At a first look it most likely is my new
> 
> ora_ncs_buff_mtpl /ORA_DBD_NCS_BUFFER  might be the source of the problem
> 
> by default it is 1 which may be too small.
> 
> Can you set the ORA_DBD_NCS_BUFFER to 2 then 3 and  4  to see if it
> cleans up the problem
> 
> My guess is that 1 is too small and I might have to make it 2 to cover
> more bases.
> 
> cheers
> 
> John Scoles

John,

Sorry, but it wasn't until I saw your reply that I realised I'd missed
off the probable cause.

OCILobRead is very strange. You pass it bytes and for NLS_LANG=utf8 you
get a character count back. Also, when utf8, if you pass it a 1024 byte
buffer you get 256 characters filled in and not 1024, which I'd guess is
why the previous code was *4. As such ORA_DBD_NCS_BUFFER = 4 fixes the
problem (and 2 and 3 do not).

I suspect this peculiar behaviour of OCILobRead should be documented in
DBD::Oracle to avoid this happening again and it might even have been
worth supporting the NEED_DATA return state.

There are other funny cases with OCILobRead but I'd have to go back
through my notes from some years back to dig them out as my memory is
not too good these days.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com

> Martin Evans wrote:
>> John Scoles wrote:
>>  
>>> Thank Charles that is really good stuff
>>>
>>> I have not investigated the
>>>
>>> NLS_LANG=.WE8ISO8859P1, but the tests 30long.t and 31lob_extended.t
>>>  still fail badly if NLS_LANG=.AL32UTF8.
>>>
>>> bug yet as my local test box is just US7ASCII.
>>>
>>> BTW can you tell me what the '
>>>
>>> NLS_CHARACTERSET and
>>>
>>> NLS_NCHAR_CHARACTERSET
>>>
>>> settings of your Oracle DB you are testing on
>>>
>>>
>>> I will have to do that later today or tonight as I have to install a
>>> different version of Oracle to get that to fail (I hope)
>>>
>>> If you can set $dbh->{dbd_verbose}=15 just before the test start to
>>> fail in
>>>
>>> 30long.t and 31lob_extended.t
>>>
>>> and send me the results I will have something more to go on.
>>>
>>>
>>> Look for another  RC in the next day or two.
>>>
>>> Cheers
>>>
>>> Jardine wrote:
>>>
>>>> On 14/01/10 12:19, Charles Jardine wrote:
>>>>  
>>>>> On 12/01/10 12:07, John Scoles wrote:
>>>>>
>>>>>> Ok third time is a Charm
>>>>>>
>>>>>> The Third RC of the beer edition of DBD::Oracle 1.24 can be found at
>>>>>>
>>>>>>
>>>>>> http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.24-RC3.tar
>>>>>>
>>>>>>
>>>>>> This round has a few little patches from Martin Evans on it.
>>>>>>
>>>>>>
>>>>>> Please test and enjoy
>>>>>>   
>>>>> My environment is Linux x86-64, Perl 5.10.1 (64 bit), DBI 1.609,
>>>>> Oracle 10.2.0.4.2 (64 bit). Database charset UTF8, national
>>>>> charset AL16UTF16
>>>>>
>>>>> Three things:
>>>>> 
>>>> [snip]
>>>>
>>>>  
>>>>> 3. Here is a patch which removes the remaining warnings detected by
>>>>>   gcc in 64-bit mode.
>>>>> 
>>>> [snip]
>>>>
>>>> I realise that something has wrapped the very long lines in the patch,
>>>> so I am trying again, sending the patch as an attachment.
>>>>
>>>>   
>>> 
>> For me tests 30long.t and 31lob_extended.t also fail when $NLS_LANG is
>> AMERICAN_AMERICA.AL32UTF8. The values of NLS_CHARACTERSET and
>> NLS_NCHAR_CHARACTERSET in my database are:
>>
>> AL32UTF8 and
>> UTF8
>>
>> as the test output below shows.
>>
>> The errors I get are:
>>
>> prove -vb t/30long.t
>> t/30long.t ..
>> 1..479
>> # ora_server_version: 11 1 0 6 0
>> # Database 11.1.0.6.0 CHAR set is AL32UTF8 (Unicode), NCHAR set is UTF8
>> (Unicode)
>> # Client 11.1.0.6 NLS_LANG is 'AMERICAN_AMERICA.AL32UTF8', NLS_NCHAR is
>> ''
>> #
>> #
>> =
>> # Running long test for LONG (0) use_utf8_data=0
>> # create table dbd_ora__drop_me ( idx integer, lng LONG,  dt date )
>> # 

Re: ANNOUNCE: DBD::Oracle 1.24 Release Candidate 3

2010-01-14 Thread Martin Evans
John Scoles wrote:
> Thank Charles that is really good stuff
> 
> I have not investigated the
> 
> NLS_LANG=.WE8ISO8859P1, but the tests 30long.t and 31lob_extended.t
>  still fail badly if NLS_LANG=.AL32UTF8.
> 
> bug yet as my local test box is just US7ASCII.
> 
> BTW can you tell me what the '
> 
> NLS_CHARACTERSET and
> 
> NLS_NCHAR_CHARACTERSET
> 
> settings of your Oracle DB you are testing on
> 
> 
> I will have to do that later today or tonight as I have to install a
> different version of Oracle to get that to fail (I hope)
> 
> If you can set $dbh->{dbd_verbose}=15 just before the test start to fail in
> 
> 30long.t and 31lob_extended.t
> 
> and send me the results I will have something more to go on.
> 
> 
> Look for another  RC in the next day or two.
> 
> Cheers
> 
> Jardine wrote:
>> On 14/01/10 12:19, Charles Jardine wrote:
>>> On 12/01/10 12:07, John Scoles wrote:
>>>> Ok third time is a Charm
>>>>
>>>> The Third RC of the beer edition of DBD::Oracle 1.24 can be found at
>>>>
>>>>
>>>> http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.24-RC3.tar
>>>>
>>>>
>>>> This round has a few little patches from Martin Evans on it.
>>>>
>>>>
>>>> Please test and enjoy
>>>
>>> My environment is Linux x86-64, Perl 5.10.1 (64 bit), DBI 1.609,
>>> Oracle 10.2.0.4.2 (64 bit). Database charset UTF8, national
>>> charset AL16UTF16
>>>
>>> Three things:
>>
>> [snip]
>>
>>> 3. Here is a patch which removes the remaining warnings detected by
>>>   gcc in 64-bit mode.
>>
>> [snip]
>>
>> I realise that something has wrapped the very long lines in the patch,
>> so I am trying again, sending the patch as an attachment.
>>
> 
> 
For me tests 30long.t and 31lob_extended.t also fail when $NLS_LANG is
AMERICAN_AMERICA.AL32UTF8. The values of NLS_CHARACTERSET and
NLS_NCHAR_CHARACTERSET in my database are:

AL32UTF8 and
UTF8

as the test output below shows.

The errors I get are:

prove -vb t/30long.t
t/30long.t ..
1..479
# ora_server_version: 11 1 0 6 0
# Database 11.1.0.6.0 CHAR set is AL32UTF8 (Unicode), NCHAR set is UTF8
(Unicode)
# Client 11.1.0.6 NLS_LANG is 'AMERICAN_AMERICA.AL32UTF8', NLS_NCHAR is
''
#
#
=
# Running long test for LONG (0) use_utf8_data=0
# create table dbd_ora__drop_me ( idx integer, lng LONG,  dt date )
# long_data[0] length 10240
# long_data[1] length 81920
# long_data[2] length 71680
#  --- insert some LONG data (ora_type 0)
ok 1 - prepare: insert into dbd_ora__drop_me values (?, ?, SYSDATE)
ok 2 - insert long data 40
ok 3 - insert long data 41
ok 4 - insert long data 42
ok 5 - insert long data undef 43
#  --- fetch LONG data back again -- truncated - LongTruncOk == 1
# LongReadLen 20, LongTruncOk 1
ok 6 - prepare: select * from dbd_ora__drop_me order by idx
ok 7 - execute: select * from dbd_ora__drop_me order by idx
ok 8 - fetch_arrayref for select * from dbd_ora__drop_me order by idx
ok 9 - four rows
ok 10 - byte_string test of truncated to LongReadLen 20
ok 11 - nice_string test of truncated to LongReadLen 20
ok 12 - LONG UTF8 setting
ok 13 - byte_string test of truncated to LongReadLen 20
ok 14 - nice_string test of truncated to LongReadLen 20
ok 15 - LONG UTF8 setting
ok 16 - byte_string test of truncated to LongReadLen 20
ok 17 - nice_string test of truncated to LongReadLen 20
ok 18 - LONG UTF8 setting
ok 19 - last row undefined
ok 20 - prepare select * from dbd_ora__drop_me order by idx
#  --- fetch LONG data back again -- truncated - LongTruncOk == 0
# LongReadLen 81910, LongTruncOk
ok 21 - execute select * from dbd_ora__drop_me order by idx
ok 22 - fetchrow_arrayref select * from dbd_ora__drop_me order by idx
ok 23 - length tmp->[1] 10240
ok 24 - truncation error not triggered (LongReadLen 81910, data 10240)
ok 25 - tmp==1406 || tmp==24345 tmp actually=24345
#  --- fetch LONG data back again -- complete - LongTruncOk == 0
# LongReadLen 82920, LongTruncOk
ok 26 - prepare: select * from dbd_ora__drop_me order by idx
ok 27 - execute select * from dbd_ora__drop_me order by idx
ok 28 - fetchrow_arrayref select * from dbd_ora__drop_me order by idx
ok 29 - Strings are identical, Len 10240
ok 30 - fetchrow_arrayref select * from dbd_ora__drop_me order by idx
ok 31 - Strings are identical, Len 10240
ok 32 - fetchrow_arrayref select * from dbd_ora__drop_me order by idx
ok 33 - Strings are identical, Len 10240
ok 34 # skip blob_read tests for LONGs - not currently supported

ok 94 # skip ora_auto_lob tests for LONGs - not supported
#
#
===

Re: ANNOUNCE: DBD::Oracle 1.24 Release Candidate 2

2010-01-11 Thread Martin Evans
st lob length 6000
ok 10 - ora_auto_lobs prefetch - correct lob length
ok 11 - ora_auto_lobs prefetch - read lob
ok 12 - ora_auto_lobs prefetch - lob returned matches lob inserted
ok 13 - ora_auto_lobs prefetch - lob locator retrieved
ok 14 - ora_auto_lobs prefetch - is a lob locator
ok 15 - ora_auto_lobs prefetch - first lob length 6000
ok 16 - ora_auto_lobs prefetch - correct lob length
ok 17 - ora_auto_lobs prefetch - read lob
ok 18 - ora_auto_lobs prefetch - lob returned matches lob inserted
ok 19 - ora_auto_lobs prefetch - finished returned sth
ok 20 - ora_auto_lobs prefetch - finished sth
ok 21 - ora_auto_lobs not fetching prepare call proc
ok 22 - ora_auto_lobs not fetching - bind out cursor
ok 23 - ora_auto_lobs not fetching - execute to get out cursor
DBD::Oracle has returned a NEED_DATA status when doing a LobRead!! 
procedure p_DBD_Oracle_drop_me possibly not dropped- check
table dbd_ora__drop_me possibly not dropped - check
# Looks like you planned 31 tests but ran 23.
# Looks like your test exited with 255 just after 23.
DBD::Oracle::db DESTROY failed: ORA-03127: no new operations allowed until the active operation ends (DBD ERROR: OCIStmtExecute)
ORA-03127: no new operations allowed until the active operation ends (DBD ERROR: OCISessionEnd) at t/31lob_extended.t line 91.
Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 8/31 subtests 

Test Summary Report
---
t/31lob_extended.t (Wstat: 65280 Tests: 23 Failed: 0)
  Non-zero exit status: 255
  Parse errors: Bad plan.  You planned 31 tests but ran 23.
Files=1, Tests=23,  1 wallclock secs ( 0.03 usr  0.00 sys +  0.07 cusr  0.01 csys =  0.11 CPU)
Result: FAIL

Index: ocitrace.h
===
--- ocitrace.h	(revision 13722)
+++ ocitrace.h	(working copy)
@@ -267,7 +267,7 @@
 	stat=OCIAttrSet(th,ht,ah,s1,a,eh);\
 	(DBD_OCI_TRACEON) ? PerlIO_printf(DBD_OCI_TRACEFP,			\
 		"%sAttrSet(%p,%s, %p,%lu,Attr=%s,%p)=%s\n",			\
-		OciTp, (void*)th,oci_hdtype_name(ht),sl_t(ah),ul_t(s1),oci_attr_name(a),(void*)eh,	\
+		OciTp, (void*)th,oci_hdtype_name(ht),(void *)ah,ul_t(s1),oci_attr_name(a),(void*)eh,	\
 		oci_status_name(stat)),stat : stat
 
 #define OCIBindByName_log_stat(sh,bp,eh,p1,pl,v,vs,dt,in,al,rc,mx,cu,md,stat)	\
Index: Oracle.pm
===
--- Oracle.pm	(revision 13722)
+++ Oracle.pm	(working copy)
@@ -1477,12 +1477,14 @@
 
 =item ora_ncs_buff_mtpl
 
-You can now customize the size of the buffer when selecting a LOBs with the build in AUTO Lob
-The default value is 1 which should be fine for most situations if you are converting between
-a NCS on the DB and one on the Client they you might want to set this to 2.  The orginal value
-was 4 which was excessive.  For convieance I have added support for a 'ORA_DBD_NCS_BUFFER' enviornemnt
-varaible that you can use at the OS level to set this value.  If used it will take the value at the
-connect stage.
+You can now customize the size of the buffer when selecting LOBs with
+the built in AUTO Lob.  The default value is 1 which should be fine
+for most situations. If you are converting between a NCS on the DB and
+one on the Client then you might want to set this to 2.  The orignal
+value (prior to version 1.24) of 4 was found to be excessive.  For
+convenience I have added support for a 'ORA_DBD_NCS_BUFFER'
+environment variable that you can use at the OS level to set this
+value.  If used it will take the value at the connect stage.
 
 See more details in the LOB section of the POD
 
Index: t/31lob_extended.t
===
--- t/31lob_extended.t	(revision 13722)
+++ t/31lob_extended.t	(working copy)
@@ -1,7 +1,7 @@
 #!perl -w
 
 ## 
-## 26exe_array.t
+## 31lob_extended.t
 ## By Martin Evans, The Pythian Group
 ## 
 ##  This run through some bugs that have been found in earlier versions of DBD::Oracle
@@ -38,7 +38,7 @@
 my ($table, $data0, $data1) = setup_test($dbh);
 
 #
-# bug in DBD::0.21 where if ora_auto_lobs is set and we attempt to
+# bug in DBD::Oracle 0.21 where if ora_auto_lobs is not set and we attempt to
 # fetch from a table containing lobs which has more than one row
 # we get a segfault. This was due to prefetching more than one row.
 #
@@ -82,7 +82,8 @@
 # ora_auto_lobs is supposed to default to set
 q/begin p_DBD_Oracle_drop_me(?); end;/);
   };
-ok(!$@, "$testname prepare call proc");
+ok(!$@, "$testname - prepare call proc");
+
 my $sth2;
 ok($sth1->bind_param_inout(1, \$sth2, 500, {ora_type => ORA_RSET}),
"$testname - bind out cursor");
@@ -175,13 +176,13 @@
 eval {$dbh->do(q/drop procedure p_DBD_Oracle_drop_me/);};
 if

Re: ANNOUNCE: DBD::Oracle 1.24 Release Candidate 1

2010-01-05 Thread Martin Evans
Martin Evans wrote:
> Martin Evans wrote:
>> John Scoles wrote:
>>> Well here it is the long awaited 1.24 Beer version of  DBD::ORACLE
>>>
>>> http://sctvguide.ca/images/bd_two-four.jpg
>>>
>>>
>>> You can find the release candidate here
>>>
>>> http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.24-RC1.tar
>>>
>>> Any and all testing will be most welcome!
>>>
>>> Well a big load of stuff this time.  A number of patches and bug fixes
>>> plus with this RC I am introducing a two really big features
>>>
>>> 1) Full support for a multiple records from a single Fetch.  This should
>>>really speed things up as it cuts down on round trips to the
>>>server
>>>
>>> 2) I have added ora_ncs_buff_mtpl or environment var ORA_DBD_NCS_BUFFER
>>>so you can control the size of the byte buffer for lobs.  So rather
>>>than a default buffer 4* the Long_Read_Length or it is now 1.
>>>This should free up great hoards of memory for your LOB Fetches
>>>
>>> anyway here is a complete list
>>>
>>>
>>>  Added extended support for 64 bit clients in Makefile.PL from Ralph
>>> Doncaster
>>>   Added extended nvarchar support from Jan Mach
>>>   Added support for the TYPE attribute on bind_col and the new DBI
>>> bind_col attributes StrictlyTyped and DiscardString from Martin J. Evans
>>>   Added ora_ncs_buff_mtpl and environment var ORA_DBD_NCS_BUFFER so we
>>> can control the size of the buffer when doing nclob reads
>>>   Fix for bug in for  changes to row fetch buffer mostly lobs and object
>>> fetches
>>>   Fix for rt.cpan.org Ticket #=49741 Oracle.h has commented out params
>>> in OCIXMLTypeCreateFromSrc from Kartik Thakore
>>>   Added from rt.cpan.org Ticket #=49436 Patch to add support for a few
>>> Oracle data types to type_info_all from David Hull
>>>   Added from rt.cpan.org Ticket #=49435 Patch to add support for a few
>>> Oracle data types to dbd_describe from David Hull
>>>   Fix for rt.cpan.org Ticket #=49331 Bad code example in POD from John
>>> Scoles
>>>   Added support for looking up OCI_DTYPE_PARAM Attributes
>>>   Added support for looking up csform values
>>>   Fix for rt.cpan.org Ticket #=46763,46998 enhancement -Rowcache size is
>>> now being properly implemented with row fetch buffer from John Scoles
>>>   Fix for rt.cpan.org Ticket #=46448 enhancement -Errors returned by
>>> procedures are now unicode strings from Martin Evans, John Scoles and
>>> Tim Bunce
>>>   Fix for rt.cpan.org Ticket #=47503 bugfix - using more than 1 LOB in
>>> insert broken from APLA
>>>   Fix for rt.cpan.org Ticket #=46613 bugfix - sig-abort on nested
>>> objects with ora_objects=1 from TomasP
>>>   Fix for rt.cpan.org Ticket #=46661 DBD::Oracle hungs when
>>> insert/update with LOB and quoted table name from APLA
>>>   Fix for rt.cpan.org Ticket #=46246 fetching from nested cursor
>>> (returned from procedure) leads to application crash (abort) from John
>>> Scoles
>>>   Fix for rt.cpan.org Ticket #=46016  LOBs bound with ora_field broken
>>> from RKITOVER
>>>   Fix for bug in 58object.t when test run as externally identified user
>>> from Charles Jardine
>>>
>>>
>> Thanks for this John.
>>
>> All tests pass on "v5.10.0 built for i486-linux-gnu-thread-multi" with
>> instant client 11.1 to Oracle 11.1.0 and the latest (from subversion)
>> DBI except 26exe_array (the usual problem).
>>
>> I have a few minor comments.
>>
>> 1.
>>
>> The following minor patch makes a lot of warnings go away because ah is
>> actually an OCIServer * and not a signed long:
>>
>> Index: ocitrace.h
>> ===
>> --- ocitrace.h   (revision 13710)
>> +++ ocitrace.h   (working copy)
>> @@ -267,7 +267,7 @@
>>  stat=OCIAttrSet(th,ht,ah,s1,a,eh);  \
>>  (DBD_OCI_TRACEON) ? PerlIO_printf(DBD_OCI_TRACEFP,  
>> \
>>  "%sAttrSet(%p,%s, %p,%lu,Attr=%s,%p)=%s\n", 
>> \
>> -OciTp,
>> (void*)th,oci_hdtype_name(ht),sl_t(ah),ul_t(s1),oci_attr_name(a),(void*)eh,
>> \
>> +OciTp, (void*)th,oci_hdtype_name(ht),(void
>> *)ah,ul_t(s1),oci_attr_name(a),(void*)eh,\
>>  oci_status_name(s

Re: ANNOUNCE: DBD::Oracle 1.24 Release Candidate 1

2010-01-04 Thread Martin Evans
Martin Evans wrote:
> John Scoles wrote:
>> Well here it is the long awaited 1.24 Beer version of  DBD::ORACLE
>>
>> http://sctvguide.ca/images/bd_two-four.jpg
>>
>>
>> You can find the release candidate here
>>
>> http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.24-RC1.tar
>>
>> Any and all testing will be most welcome!
>>
>> Well a big load of stuff this time.  A number of patches and bug fixes
>> plus with this RC I am introducing a two really big features
>>
>> 1) Full support for a multiple records from a single Fetch.  This should
>>really speed things up as it cuts down on round trips to the
>>server
>>
>> 2) I have added ora_ncs_buff_mtpl or environment var ORA_DBD_NCS_BUFFER
>>so you can control the size of the byte buffer for lobs.  So rather
>>than a default buffer 4* the Long_Read_Length or it is now 1.
>>This should free up great hoards of memory for your LOB Fetches
>>
>> anyway here is a complete list
>>
>>
>>  Added extended support for 64 bit clients in Makefile.PL from Ralph
>> Doncaster
>>   Added extended nvarchar support from Jan Mach
>>   Added support for the TYPE attribute on bind_col and the new DBI
>> bind_col attributes StrictlyTyped and DiscardString from Martin J. Evans
>>   Added ora_ncs_buff_mtpl and environment var ORA_DBD_NCS_BUFFER so we
>> can control the size of the buffer when doing nclob reads
>>   Fix for bug in for  changes to row fetch buffer mostly lobs and object
>> fetches
>>   Fix for rt.cpan.org Ticket #=49741 Oracle.h has commented out params
>> in OCIXMLTypeCreateFromSrc from Kartik Thakore
>>   Added from rt.cpan.org Ticket #=49436 Patch to add support for a few
>> Oracle data types to type_info_all from David Hull
>>   Added from rt.cpan.org Ticket #=49435 Patch to add support for a few
>> Oracle data types to dbd_describe from David Hull
>>   Fix for rt.cpan.org Ticket #=49331 Bad code example in POD from John
>> Scoles
>>   Added support for looking up OCI_DTYPE_PARAM Attributes
>>   Added support for looking up csform values
>>   Fix for rt.cpan.org Ticket #=46763,46998 enhancement -Rowcache size is
>> now being properly implemented with row fetch buffer from John Scoles
>>   Fix for rt.cpan.org Ticket #=46448 enhancement -Errors returned by
>> procedures are now unicode strings from Martin Evans, John Scoles and
>> Tim Bunce
>>   Fix for rt.cpan.org Ticket #=47503 bugfix - using more than 1 LOB in
>> insert broken from APLA
>>   Fix for rt.cpan.org Ticket #=46613 bugfix - sig-abort on nested
>> objects with ora_objects=1 from TomasP
>>   Fix for rt.cpan.org Ticket #=46661 DBD::Oracle hungs when
>> insert/update with LOB and quoted table name from APLA
>>   Fix for rt.cpan.org Ticket #=46246 fetching from nested cursor
>> (returned from procedure) leads to application crash (abort) from John
>> Scoles
>>   Fix for rt.cpan.org Ticket #=46016  LOBs bound with ora_field broken
>> from RKITOVER
>>   Fix for bug in 58object.t when test run as externally identified user
>> from Charles Jardine
>>
>>
> 
> Thanks for this John.
> 
> All tests pass on "v5.10.0 built for i486-linux-gnu-thread-multi" with
> instant client 11.1 to Oracle 11.1.0 and the latest (from subversion)
> DBI except 26exe_array (the usual problem).
> 
> I have a few minor comments.
> 
> 1.
> 
> The following minor patch makes a lot of warnings go away because ah is
> actually an OCIServer * and not a signed long:
> 
> Index: ocitrace.h
> ===
> --- ocitrace.h(revision 13710)
> +++ ocitrace.h(working copy)
> @@ -267,7 +267,7 @@
>   stat=OCIAttrSet(th,ht,ah,s1,a,eh);  \
>   (DBD_OCI_TRACEON) ? PerlIO_printf(DBD_OCI_TRACEFP,  
> \
>   "%sAttrSet(%p,%s, %p,%lu,Attr=%s,%p)=%s\n", 
> \
> - OciTp,
> (void*)th,oci_hdtype_name(ht),sl_t(ah),ul_t(s1),oci_attr_name(a),(void*)eh,
> \
> + OciTp, (void*)th,oci_hdtype_name(ht),(void
> *)ah,ul_t(s1),oci_attr_name(a),(void*)eh, \
>   oci_status_name(stat)),stat : stat
> 
>  #define
> OCIBindByName_log_stat(sh,bp,eh,p1,pl,v,vs,dt,in,al,rc,mx,cu,md,stat) \
> 
> 2.
> 
> There are a number of typos in the Changes file for 1.24:
> 
> extened (*2) => extended
> enviornment => environment
> "Fix for bug in for  changes" => ? what does this mean?
> implimented => implemented
> hu


Re: ANNOUNCE: DBD::Oracle 1.24 Release Candidate 1

2010-01-04 Thread Martin Evans
John Scoles wrote:
> Well here it is the long awaited 1.24 Beer version of  DBD::ORACLE
> 
> http://sctvguide.ca/images/bd_two-four.jpg
> 
> 
> You can find the release candidate here
> 
> http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.24-RC1.tar
> 
> Any and all testing will be most welcome!
> 
> Well a big load of stuff this time.  A number of patches and bug fixes
> plus with this RC I am introducing a two really big features
> 
> 1) Full support for a multiple records from a single Fetch.  This should
>really speed things up as it cuts down on round trips to the
>server
> 
> 2) I have added ora_ncs_buff_mtpl or environment var ORA_DBD_NCS_BUFFER
>so you can control the size of the byte buffer for lobs.  So rather
>than a default buffer 4* the Long_Read_Length or it is now 1.
>This should free up great hoards of memory for your LOB Fetches
> 
> anyway here is a complete list
> 
> 
>  Added extended support for 64 bit clients in Makefile.PL from Ralph
> Doncaster
>   Added extended nvarchar support from Jan Mach
>   Added support for the TYPE attribute on bind_col and the new DBI
> bind_col attributes StrictlyTyped and DiscardString from Martin J. Evans
>   Added ora_ncs_buff_mtpl and environment var ORA_DBD_NCS_BUFFER so we
> can control the size of the buffer when doing nclob reads
>   Fix for bug in for  changes to row fetch buffer mostly lobs and object
> fetches
>   Fix for rt.cpan.org Ticket #=49741 Oracle.h has commented out params
> in OCIXMLTypeCreateFromSrc from Kartik Thakore
>   Added from rt.cpan.org Ticket #=49436 Patch to add support for a few
> Oracle data types to type_info_all from David Hull
>   Added from rt.cpan.org Ticket #=49435 Patch to add support for a few
> Oracle data types to dbd_describe from David Hull
>   Fix for rt.cpan.org Ticket #=49331 Bad code example in POD from John
> Scoles
>   Added support for looking up OCI_DTYPE_PARAM Attributes
>   Added support for looking up csform values
>   Fix for rt.cpan.org Ticket #=46763,46998 enhancement -Rowcache size is
> now being properly implemented with row fetch buffer from John Scoles
>   Fix for rt.cpan.org Ticket #=46448 enhancement -Errors returned by
> procedures are now unicode strings from Martin Evans, John Scoles and
> Tim Bunce
>   Fix for rt.cpan.org Ticket #=47503 bugfix - using more than 1 LOB in
> insert broken from APLA
>   Fix for rt.cpan.org Ticket #=46613 bugfix - sig-abort on nested
> objects with ora_objects=1 from TomasP
>   Fix for rt.cpan.org Ticket #=46661 DBD::Oracle hungs when
> insert/update with LOB and quoted table name from APLA
>   Fix for rt.cpan.org Ticket #=46246 fetching from nested cursor
> (returned from procedure) leads to application crash (abort) from John
> Scoles
>   Fix for rt.cpan.org Ticket #=46016  LOBs bound with ora_field broken
> from RKITOVER
>   Fix for bug in 58object.t when test run as externally identified user
> from Charles Jardine
> 
> 

Thanks for this John.

All tests pass on "v5.10.0 built for i486-linux-gnu-thread-multi" with
instant client 11.1 to Oracle 11.1.0 and the latest (from subversion)
DBI except 26exe_array (the usual problem).

I have a few minor comments.

1.

The following minor patch makes a lot of warnings go away because ah is
actually an OCIServer * and not a signed long:

Index: ocitrace.h
===
--- ocitrace.h  (revision 13710)
+++ ocitrace.h  (working copy)
@@ -267,7 +267,7 @@
stat=OCIAttrSet(th,ht,ah,s1,a,eh);  \
(DBD_OCI_TRACEON) ? PerlIO_printf(DBD_OCI_TRACEFP,  
\
"%sAttrSet(%p,%s, %p,%lu,Attr=%s,%p)=%s\n", 
\
-   OciTp,
(void*)th,oci_hdtype_name(ht),sl_t(ah),ul_t(s1),oci_attr_name(a),(void*)eh,
\
+   OciTp, (void*)th,oci_hdtype_name(ht),(void
*)ah,ul_t(s1),oci_attr_name(a),(void*)eh,   \
oci_status_name(stat)),stat : stat

 #define
OCIBindByName_log_stat(sh,bp,eh,p1,pl,v,vs,dt,in,al,rc,mx,cu,md,stat)   \

2.

There are a number of typos in the Changes file for 1.24:

extened (*2) => extended
enviornment => environment
"Fix for bug in for  changes" => ? what does this mean?
implimented => implemented
hungs => hangs

3.

Since 26exe_array fails for a growing number of people (758 hits on
google for 26exe_array fail) I think it would be useful to explain why
somewhere and add a Test::More::diag (or note, but needs a later
Test::More - I think DBI needs note now too). I would happily supply the
text but I still don't understand exactly why it fails.

4.

There are a number of comments on annocpan (and typos) which would be
worth considering.

5.

What does ora_ncs_bu

Re: data retrieved from database is unexpectedly tainted

2009-12-15 Thread Martin Evans
Tim Bunce wrote:
> On Tue, Dec 15, 2009 at 02:53:03PM +0000, Martin Evans wrote:
>> If you are using the latest DBI and Perl 5.10.0 or 5.10.1 and running in
>> taint mode (but have not set DBI's Taint, TainTIn, TaintOut) then use
>> tainted strings in the SQL you issue the resulting data is tainted. All
>> we were doing is adding $0 as a comment to the end of the SQL e.g., like
>> this:
>>
>> select * from table -- myprogram.pl
>>
>> but $0 is tainted and so all data coming back from the select is tainted.
>>
>> We moved our application from Perl 5.8.8 to an ubuntu box running 5.10.0
>> a few weeks ago but did not notice this problem until late last week.
>> This did not occur for us on 5.8.8 on another machine.
>>
>> I've no idea what is tainting the returned data but this is reproducible
>> for us here is a small amount of perl.
> 
> From memory, perl tainting works on a per-statement basis. If a tainted
> value is accessed during a statement then any new values created by that
> statement are marked as tainted. The 'tainted value seen' flag gets
> reset for each statement.
> 
> So I'd guess that you're using a single statement, like a select*_*
> method, to pass the (tained) SQL in and get the result data back.
> 
> Tim.
> 
> 

I saw the "single statement" mechanism you refer to and there are
references to it in DBI.xs too. I presumed it was talking about Perl
statements and not DBI statement handles. However, in this case I was
not using a single Perl statement; I was doing (and can still
demonstrate):

prepare($sql) <-- tainted data went in here
execute
fetchall_arrayref

Also, I continue to do the same thing I always have on 5.8.8 with the
same module versions, where this does not happen, so something seems to
have changed between 5.8.8 and 5.10, possibly in Perl itself.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: data retrieved from database is unexpectedly tainted

2009-12-15 Thread Martin Evans
Martin J. Evans wrote:
> Martin J. Evans wrote:
>> Hi,
>>
>> I've spent some time on this today and I am getting nowhere. Before I
>> redouble my efforts I thought I'd post here just in case anything rings
>> a bell with someone.
>>
>> We are using the latest DBI and DBD::Oracle to get data from an Oracle
>> database. All data is retrieved via a reference cursor returned from a
>> function or procedure. When our application starts work there is no
>> problem but part way through all of the returned data is inexplicably
>> tainted. This is a real PITA because of bugs in Locale::MakeText
>> (http://rt.cpan.org/Public/Bug/Display.html?id=40727) via the Perl bug
>> re pos not updated on \G in regexps
>> (http://rt.perl.org/rt3/Public/Bug/Display.html?id=27344) and in general
>> because some DB returned data is used to create filenames and because it
>> prevents us using -T (instead we are having to run -t).
>>
>> We are running the perl script as root but it makes no difference run as
>> a normal user. When tainted data is returned TaintIn, TaintOut and Taint
>> are all false on the connection handle and the statement handles are
>> created a fresh for each procedure/function call.
>>
>> I've tried with the Ubuntu 9.10 supplied 5.10 and a separate 5.10.1 I
>> built - no difference.
>>
>> Any ideas where to go next?
>>
>> Thanks
>>
>> Martin
>>
>>
> 
> I forgot to mention I stuck an printf and abort in _get_fbav where
> output data is tainted and this never seems to get called. However, my
> retrieved data is still tainted.
> 
> Martin
> 
> 

For those interested, I've finally tracked this down and, although
there is some logic to it, there seems to be a worrying change in
behaviour.

If you are using the latest DBI with Perl 5.10.0 or 5.10.1, running in
taint mode (but without DBI's Taint, TaintIn, TaintOut attributes set),
and you use tainted strings in the SQL you issue, the resulting data is
tainted. All we were doing was adding $0 as a comment to the end of the
SQL, e.g. like this:

select * from table -- myprogram.pl

but $0 is tainted and so all data coming back from the select is tainted.
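Tim's earlier per-statement description can be illustrated with a toy
taint-tracking model. This is a hedged Python sketch of the propagation
rule only (Perl's real rule operates per interpreter statement, which
Python cannot mimic), and all names in it are hypothetical:

```python
# Toy model of taint propagation: any value derived from a tainted
# value becomes tainted itself, which is why SQL built from $0
# produces tainted result data. Not Perl's implementation.
class Tainted(str):
    """A string carrying a taint mark."""

def concat(a, b):
    # Mimic "new values created from tainted input are tainted".
    result = a + b
    if isinstance(a, Tainted) or isinstance(b, Tainted):
        return Tainted(result)
    return result

# $0 is tainted, so the assembled SQL is tainted too:
sql = concat("select * from table -- ", Tainted("myprogram.pl"))
```

Here isinstance(sql, Tainted) is true, mirroring how the whole SQL
string, and under the per-statement rule the data fetched with it, ends
up tainted.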

We moved our application from Perl 5.8.8 to an Ubuntu box running 5.10.0
a few weeks ago but did not notice this problem until late last week.
This did not occur for us on 5.8.8 on another machine.

I've no idea what is tainting the returned data, but it is reproducible
for us with a small amount of Perl.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: sql_type_cast small inconsistency and bugs

2009-12-08 Thread Martin Evans
Tim Bunce wrote:
> On Tue, Dec 08, 2009 at 12:04:25PM +0000, Martin Evans wrote:
>>>> My reading of Perl_sv_2nv() in sv.c is that ifdef NV_PRESERVES_UV
>>>> then SvNOK is not set (but SvNOKp is) if grok_number() returns 0
>>>> into numtype.  The else NV_PRESERVES_UV branch ends with
>>>> if (!numtype)
>>>> SvFLAGS(sv) &= ~(SVf_IOK|SVf_NOK);
>>>> So either way, if grok_number() returns 0 then SvNOK() should be false.
>>>>
>>>> And since looks_like_number is just a wrapper around grok_number I'm not
>>>> sure what's going on.
>>>>
>>>> Perhaps check it in without the change above (so with a failing test)
>>>> and I might get a change to dig into it.
>>> ok, I'll check it in as you described later this afternoon and if you
>>> get a chance to look at it that will be good but in the mean time I'll
>>> let you know if I get any further with it.
>> I've looked in to this a little more now and it appears this fails for
>> Perl < 5.10.1 and works for 5.10.1 so I'm guessing something in svc.c
>> has changed between those releases. Probably the code you looked at was
>> the latest source?
> 
> Quite possibly.
> 
>> Perhaps it was something to do with:
>>
>>   ·   The public IV and NV flags are now not set if the string value has
>>   trailing "garbage". This behaviour is consistent with not setting
>>   the public IV or NV flags if the value is out of range for the type.
>>
>> That raises the questions of whether and how to fix this in perl <
>> 5.10.1. The looks_like_number call I originally posted (above) does work
>> around the issue. I could of course skip the those tests for Perl < 5.10.1.
> 
> Let's just skip for perl < 5.10.1.
> 
> Tim.
> 
> 

Done and one last (I'm hoping) thing. The PurePerl version that was
added does not match the XS version as the success of the cast (or not)
is not reflected in the return value. The following change fixes it but
I'd rather see some comment on it before committing:


Index: lib/DBI/PurePerl.pm
===
--- lib/DBI/PurePerl.pm (revision 13653)
+++ lib/DBI/PurePerl.pm (working copy)
@@ -682,21 +682,30 @@

 return -1 unless defined $_[0];

-my $cast_ok = 0;
+my $cast_ok = 1;

-if ($sql_type == SQL_INTEGER) {
-my $dummy = $_[0] + 0;
-}
-elsif ($sql_type == SQL_DOUBLE) {
-my $dummy = $_[0] + 0.0;
-}
-elsif ($sql_type == SQL_NUMERIC) {
-my $dummy = $_[0] + 0.0;
-}
-else {
-return -2;
-}
+my $evalret = eval {
+use warnings FATAL => qw(numeric);
+if ($sql_type == SQL_INTEGER) {
+my $dummy = $_[0] + 0;
+return 1;
+}
+elsif ($sql_type == SQL_DOUBLE) {
+my $dummy = $_[0] + 0.0;
+return 1;
+}
+elsif ($sql_type == SQL_NUMERIC) {
+my $dummy = $_[0] + 0.0;
+return 1;
+}
+else {
+return -2;
+}
+} or warn $@;

+return $evalret if defined($evalret) && ($evalret == -2);
+$cast_ok = 0 unless $evalret;
+
 # DBIstcf_DISCARD_STRING not supported for PurePerl currently

 return 2 if $cast_ok;

With this in place all the tests pass on 5.10.1 and 5.10.0.
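For reference, the return-code convention that sql_type_cast and this
PurePerl port follow (and which the tests above exercise) can be
modelled outside Perl. A hedged Python sketch of the contract only, not
of DBI's code:

```python
# Model of the sql_type_cast return codes discussed in this thread:
#   -2  sql_type not handled         -1  value is undef (None here)
#    0  cast failed, DBIstcf_STRICT   1  cast failed, not strict
#    2  cast succeeded
SQL_NUMERIC, SQL_INTEGER, SQL_DOUBLE = 2, 4, 8  # standard SQL type codes
DBIstcf_STRICT = 2

def sql_type_cast(value, sql_type, flags=0):
    if value is None:
        return -1
    if sql_type not in (SQL_INTEGER, SQL_DOUBLE, SQL_NUMERIC):
        return -2
    try:
        # ints for SQL_INTEGER, floats for SQL_DOUBLE/SQL_NUMERIC
        int(value) if sql_type == SQL_INTEGER else float(value)
    except ValueError:
        return 0 if flags & DBIstcf_STRICT else 1
    return 2
```

Under this model sql_type_cast("aa", SQL_INTEGER) is 1, and 0 with
DBIstcf_STRICT; the inconsistency reported earlier for SQL_DOUBLE was
that it returned 2 for such input.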

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: sql_type_cast small inconsistency and bugs

2009-12-08 Thread Martin Evans
Martin Evans wrote:
> Tim Bunce wrote:
>> On Thu, Nov 26, 2009 at 07:47:45PM +, Martin J. Evans wrote:
>>> Martin Evans wrote:
>>>> Tim,
>>>>
>>>> I'm not sure if you are bothered by this but there appears to be a small
>>>> inconsistency between SQL_INTEGER/SQL_NUMERIC and SQL_DOUBLE handling in
>>>> sql_type_cast:
>>>>
>>>> sql_type_cast("aa", SQL_INTEGER, 0)
>>>>   returns 1 (no cast not strict)
>>>> sql_type_casr("aa", SQL_INTEGER, DBIstcf_STRICT)
>>>>   returns 0 (no cast, strict)
>>>> SQL_NUMERIC works as above (with a fix to the grok stuff)
>>>>
>>>> but
>>>>
>>>> sql_type_cast("aa", SQL_DOUBLE, 0)
>>>>   returns 2 (cast ok) I expected 1
>>>> sql_type_cast("aa", SQL_DOUBLE, DBIstcf_STRICT)
>>>>   returns 2 (cast ok) I expected 0
>>>>
>>>> As you point out in the code if warnings are enabled you get a warning
>>>> but you don't get the expected return.
>>> Would you have any objections to me changing:
>>>
>>> case SQL_DOUBLE:
>>> sv_2nv(sv);
>>> /* SvNOK should be set but won't if sv is not numeric (in which
>>>  * case perl would have warn'd already if -w or warnings are in
>>> effect)
>>>  */
>>> cast_ok = SvNOK(sv);
>>> break;
>>>
>>> to
>>>
>>> case SQL_DOUBLE:
>>>   if (looks_like_number(sv)) {
>>> sv_2nv(sv);
>>> /* SvNOK should be set but won't if sv is not numeric (in which
>>>  * case perl would have warn'd already if -w or warnings are in
>>> effect)
>>>  */
>>> cast_ok = SvNOK(sv);
>>>   } else {
>>>   cast_ok = 0;
>>>   }
>>> break;
>>>
>>> as this fixes the inconsistency I mentioned above i.e., sv's cast to
>>> doubles which are not numbers return 0 or 1 (depending on STRICT)
>>> instead of always returning 2 (cast ok). I worried a little about this
>>> as you end up with 0 in the NV for a non-numeric and a return of 2 which
>>> looked like the cast worked.
>> What does 
>> perl -V:.*|grep nv_preserves_uv
>> say for you?
> 
> $ perl -V:.*|grep nv_preserves_uv
> d_nv_preserves_uv='define';
> nv_preserves_uv_bits='32';
> 
>> My reading of Perl_sv_2nv() in sv.c is that ifdef NV_PRESERVES_UV
>> then SvNOK is not set (but SvNOKp is) if grok_number() returns 0
>> into numtype.  The else NV_PRESERVES_UV branch ends with
>> if (!numtype)
>> SvFLAGS(sv) &= ~(SVf_IOK|SVf_NOK);
>> So either way, if grok_number() returns 0 then SvNOK() should be false.
>>
>> And since looks_like_number is just a wrapper around grok_number I'm not
>> sure what's going on.
>>
>> Perhaps check it in without the change above (so with a failing test)
>> and I might get a change to dig into it.
> 
> ok, I'll check it in as you described later this afternoon and if you
> get a chance to look at it that will be good but in the mean time I'll
> let you know if I get any further with it.

I've looked into this a little more now and it appears this fails for
Perl < 5.10.1 and works for 5.10.1, so I'm guessing something in sv.c
has changed between those releases. Presumably the code you looked at
was from the latest source?

Perhaps it was something to do with:

    ·   The public IV and NV flags are now not set if the string value
        has trailing "garbage". This behaviour is consistent with not
        setting the public IV or NV flags if the value is out of range
        for the type.


That raises the questions of whether and how to fix this in Perl <
5.10.1. The looks_like_number call I originally posted (above) does work
around the issue. I could of course skip those tests for Perl < 5.10.1.



Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBI svn 13624 not very healthy

2009-12-03 Thread Martin Evans
H.Merijn Brand wrote:
> Current checkout:
> 
> commit a5a9fc491c2f0316a39af0adb5cd82b39dabafae
> Author: mjevans 
> Date:   Wed Dec 2 10:08:18 2009 +
> 
> Needs SQL types and DBIstcf_XXX
> 
> 
> git-svn-id: http://svn.perl.org/modules/dbi/tr...@13624 
> 50811bd7-b8ce-0310-adc1-d9db26280581
> 
> 
> 
> t/90sql_type_cast.t  1/?
> #   Failed test 'result, non numeric cast to double'
> #   at t/90sql_type_cast.t line 97.
> #  got: '2'
> # expected: '1'



> #   Failed test 'result, non numeric cast to double (strict)'
> #   at t/90sql_type_cast.t line 97.
> #  got: '2'
> # expected: '0'

I'd expect those 2 at the moment as there is an outstanding issue with
Perl_sv_2nv. I left them in so that Tim can look at them if he gets any
time.

> #   Failed test 'result, 4 byte max unsigned int cast to int'
> #   at t/90sql_type_cast.t line 97.
> #  got: '2'
> # expected: '1'
> # Looks like you failed 3 tests of 26.
> t/90sql_type_cast.t  Dubious, test returned 3 (wstat 768, 0x300)
> Failed 3/26 subtests

What platform did you run this test on - a 64-bit platform? Can you
email me your perl -V output?

I don't really know how/why each test is duplicated with different
prefixes in those below - I don't get that.


> 
> t/zvg_90sql_type_cast.t  1/?
> #   Failed test 'result, non numeric cast to double'
> #   at ./t/90sql_type_cast.t line 97.
> #  got: '2'
> # expected: '1'
> 
> #   Failed test 'result, non numeric cast to double (strict)'
> #   at ./t/90sql_type_cast.t line 97.
> #  got: '2'
> # expected: '0'
> 
> #   Failed test 'result, 4 byte max unsigned int cast to int'
> #   at ./t/90sql_type_cast.t line 97.
> #  got: '2'
> # expected: '1'
> ./t/90sql_type_cast.t did not return a true value at t/zvg_90sql_type_cast.t 
> line 3.
> # Looks like you failed 3 tests of 26.
> # Looks like your test exited with 2 just after 26.
> t/zvg_90sql_type_cast.t  Dubious, test returned 2 (wstat 512, 0x200)
> Failed 3/26 subtests
> 
> 
> t/zvp_90sql_type_cast.t  1/? Use of inherited AUTOLOAD for non-method 
> DBI::DBIstcf_STRICT() is deprecated at ./t/90sql_type_cast.t line 23.
> Can't locate auto/DBI/DBIstcf_STR.al in @INC (@INC contains: 
> /pro/3gl/CPAN/DBI-svn/blib/lib /pro/3gl/CPAN/DBI-svn/blib/arch 
> /pro/lib/perl5/5.10.0/i686-linux-64int /pro/lib/perl5/5.10.0 
> /pro/lib/perl5/site_perl/5.10.0/i686-linux-64int 
> /pro/lib/perl5/site_perl/5.10.0 .) at ./t/90sql_type_cast.t line 23
> Compilation failed in require at t/zvp_90sql_type_cast.t line 3.
> # Tests were run but no plan was declared and done_testing() was not seen.
> t/zvp_90sql_type_cast.t  Dubious, test returned 2 (wstat 512, 0x200)
> 
> 
> t/zvxgp_90sql_type_cast.t .. 1/? Use of inherited AUTOLOAD for non-method 
> DBI::DBIstcf_STRICT() is deprecated at ./t/90sql_type_cast.t line 23.
> Can't locate auto/DBI/DBIstcf_STR.al in @INC (@INC contains: 
> /pro/3gl/CPAN/DBI-svn/blib/lib /pro/3gl/CPAN/DBI-svn/blib/arch 
> /pro/lib/perl5/5.10.0/i686-linux-64int /pro/lib/perl5/5.10.0 
> /pro/lib/perl5/site_perl/5.10.0/i686-linux-64int 
> /pro/lib/perl5/site_perl/5.10.0 .) at ./t/90sql_type_cast.t line 23
> Compilation failed in require at t/zvxgp_90sql_type_cast.t line 3.
> # Tests were run but no plan was declared and done_testing() was not seen.
> t/zvxgp_90sql_type_cast.t .. Dubious, test returned 2 (wstat 512, 0x200)
> 
> 

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Oracle, Support binding of integers so they are returned as IVs

2009-12-02 Thread Martin Evans
Tim Bunce wrote:

> 
> Post a diff and I'll review it for you. The code you appended previously
> looks ok.

Attached is a diff for DBD::Oracle based on subversion this morning (the
diffs for oci8.c may be a little difficult to read due to the large
indentation of the surrounding code and there are some additional
changes to fix comments). It adds support for DiscardString and
StrictlyTyped bind_col attributes and casting via sql_type_cast. Most
(all relevant) changes are conditional on a DBI with these features.
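For context, the new attributes are used on bind_col like this (a hedged sketch - the statement and column number are invented, and this assumes a DBI recent enough to understand the TYPE, StrictlyTyped and DiscardString bind_col attributes):

```perl
use DBI qw(:sql_types);

# hypothetical statement; column 1 is a NUMBER we want back as an IV
my $sth = $dbh->prepare("select id, name from mytable");
$sth->execute;

# No target scalar is needed: passing undef just tells the driver to
# cast the underlying column.
# StrictlyTyped  => error on over/underflow instead of falling back
# DiscardString  => drop the string (PV) form so the scalar is a
#                   "pure" number (useful for e.g. JSON::XS)
$sth->bind_col(1, undef,
    { TYPE => SQL_INTEGER, StrictlyTyped => 1, DiscardString => 1 });

my $rows = $sth->fetchall_arrayref;
```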

I have verified these changes work with Oracle 11 and DBD::Oracle from
subversion; however, I note that some LOB tests were failing in
DBD::Oracle before my changes were applied.

There seem to be a large number of warnings compiling DBD::Oracle from
subversion but I've not touched any of them, preferring to keep the
patch restricted to its purpose.

The issue with the SQL_DECIMAL return in sql_type_cast_svpv still exists
(the NV_PRESERVES_UV issue) - I have not had time to look into this
properly as yet.

All comments appreciated.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com
Index: oci8.c
===
--- oci8.c	(revision 13624)
+++ oci8.c	(working copy)
@@ -431,7 +431,7 @@
 	case OCI_ATTR_FSPRECISION OCI_ATTR_PDSCL:return "";*/
 	  /* fs prec for datetime data types */
 	case OCI_ATTR_PDPRC:return "OCI_ATTR_PDPRC"; /* packed decimal format
-	case OCI_ATTR_LFPRECISION OCI_ATTR_PDPRC: return "";
+case OCI_ATTR_LFPRECISION OCI_ATTR_PDPRC: return ""; */
 	  /* fs prec for datetime data types */
 	case OCI_ATTR_PARAM_COUNT:			return "OCI_ATTR_PARAM_COUNT";   /* number of column in the select list */
 	case OCI_ATTR_ROWID:return "OCI_ATTR_ROWID"; /* the rowid */
@@ -513,7 +513,7 @@
 	case OCI_ATTR_COL_COUNT:			return "OCI_ATTR_COL_COUNT";/* columns of column array
 	 processed so far.   */
 	case OCI_ATTR_STREAM_OFFSET:		return "OCI_ATTR_STREAM_OFFSET";  /* str off of last row processed
-	case OCI_ATTR_SHARED_HEAPALLO:return "";/* Shared Heap Allocation Size */
+ case OCI_ATTR_SHARED_HEAPALLO:return "";*//* Shared Heap Allocation Size */
 
 	case OCI_ATTR_SERVER_GROUP:			return "OCI_ATTR_SERVER_GROUP";/* server group name */
 
@@ -686,7 +686,7 @@
 
 	/* Attr to allow setting of the stream version PRIOR to calling Prepare */
 	case OCI_ATTR_DIRPATH_STREAM_VERSION:	return "OCI_ATTR_DIRPATH_STREAM_VERSION";  /* version of the stream
-	case OCI_ATTR_RESERVED_11:return "OCI_ATTR_RESERVED_11"; /* reserved */
+  case OCI_ATTR_RESERVED_11:return "OCI_ATTR_RESERVED_11";*/ /* reserved */
 
 	case OCI_ATTR_RESERVED_12:			return "OCI_ATTR_RESERVED_12"; /* reserved */
 	case OCI_ATTR_RESERVED_13:			return "OCI_ATTR_RESERVED_13"; /* reserved */
@@ -2668,7 +2668,7 @@
 
 
 
-/*static int			/* --- Setup the row cache for this sth --- */
+static int			/* --- Setup the row cache for this sth --- */
 sth_set_row_cache(SV *h, imp_sth_t *imp_sth, int max_cache_rows, int num_fields, int has_longs)
 {
 	dTHX;
@@ -3742,10 +3742,38 @@
 		while(datalen && p[datalen - 1]==' ')
 			--datalen;
 	}
-	sv_setpvn(sv, p, (STRLEN)datalen);
-	if (CSFORM_IMPLIES_UTF8(fbh->csform) ){
-		SvUTF8_on(sv);
-	}
+	sv_setpvn(sv, p, (STRLEN)datalen);
+#if DBISTATE_VERSION > 94
+	/* DBIXS_REVISION > 13590 */
+	/* If a bind type was specified we use DBI's sql_type_cast
+	   to cast it - currently only number types are handled */
+	if (fbh->req_type != 0) {
+		int sts;
+		D_imp_xxh(sth);
+		char errstr[256];
+
+		sts = DBIc_DBISTATE(imp_sth)->sql_type_cast_svpv(
+			aTHX_ sv, fbh->req_type, fbh->bind_flags, NULL);
+		if (sts == 0) {
+			sprintf(errstr,
+				"over/under flow converting column %d to type %ld",
+				i+1, fbh->req_type);
+			oci_error(sth, imp_sth->errhp, OCI_ERROR, errstr);
+			return Nullav;
+
+		} else if (sts == -2) {
+			sprintf(errstr,
+				"unsupported bind type %ld for column %d",

Re: sql_type_cast small inconsistency and bugs

2009-11-27 Thread Martin Evans
Tim Bunce wrote:
> On Thu, Nov 26, 2009 at 07:47:45PM +, Martin J. Evans wrote:
>> Martin Evans wrote:
>>> Tim,
>>>
>>> I'm not sure if you are bothered by this but there appears to be a small
>>> inconsistency between SQL_INTEGER/SQL_NUMERIC and SQL_DOUBLE handling in
>>> sql_type_cast:
>>>
>>> sql_type_cast("aa", SQL_INTEGER, 0)
>>>   returns 1 (no cast, not strict)
>>> sql_type_cast("aa", SQL_INTEGER, DBIstcf_STRICT)
>>>   returns 0 (no cast, strict)
>>> SQL_NUMERIC works as above (with a fix to the grok stuff)
>>>
>>> but
>>>
>>> sql_type_cast("aa", SQL_DOUBLE, 0)
>>>   returns 2 (cast ok) I expected 1
>>> sql_type_cast("aa", SQL_DOUBLE, DBIstcf_STRICT)
>>>   returns 2 (cast ok) I expected 0
>>>
>>> As you point out in the code if warnings are enabled you get a warning
>>> but you don't get the expected return.
>> Would you have any objections to me changing:
>>
>> case SQL_DOUBLE:
>> sv_2nv(sv);
>> /* SvNOK should be set but won't if sv is not numeric (in which
>>  * case perl would have warn'd already if -w or warnings are in
>> effect)
>>  */
>> cast_ok = SvNOK(sv);
>> break;
>>
>> to
>>
>> case SQL_DOUBLE:
>>   if (looks_like_number(sv)) {
>> sv_2nv(sv);
>> /* SvNOK should be set but won't if sv is not numeric (in which
>>  * case perl would have warn'd already if -w or warnings are in
>> effect)
>>  */
>> cast_ok = SvNOK(sv);
>>   } else {
>>   cast_ok = 0;
>>   }
>> break;
>>
>> as this fixes the inconsistency I mentioned above i.e., sv's cast to
>> doubles which are not numbers return 0 or 1 (depending on STRICT)
>> instead of always returning 2 (cast ok). I worried a little about this
>> as you end up with 0 in the NV for a non-numeric and a return of 2 which
>> looked like the cast worked.
> 
> What does 
> perl -V:.*|grep nv_preserves_uv
> say for you?

$ perl -V:.*|grep nv_preserves_uv
d_nv_preserves_uv='define';
nv_preserves_uv_bits='32';

> My reading of Perl_sv_2nv() in sv.c is that ifdef NV_PRESERVES_UV
> then SvNOK is not set (but SvNOKp is) if grok_number() returns 0
> into numtype.  The else NV_PRESERVES_UV branch ends with
> if (!numtype)
> SvFLAGS(sv) &= ~(SVf_IOK|SVf_NOK);
> So either way, if grok_number() returns 0 then SvNOK() should be false.
> 
> And since looks_like_number is just a wrapper around grok_number I'm not
> sure what's going on.
> 
> Perhaps check it in without the change above (so with a failing test)
> and I might get a change to dig into it.

ok, I'll check it in as you described later this afternoon and if you
get a chance to look at it that will be good but in the mean time I'll
let you know if I get any further with it.
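The return-code semantics under discussion can be exercised directly from Perl with DBI's utility function (a sketch - it assumes a DBI new enough to export sql_type_cast and the DBIstcf_* flags via the :utils tag):

```perl
use DBI qw(:sql_types :utils);

# non-numeric value, not strict: the documented return is
# 1 (no cast applied, not strict)
my $v = "aa";
my $sts = sql_type_cast($v, SQL_INTEGER, 0);

# non-numeric value, strict: the documented return is
# 0 (cast failed under DBIstcf_STRICT)
$v = "aa";
$sts = sql_type_cast($v, SQL_INTEGER, DBIstcf_STRICT);

# the inconsistency reported above: SQL_DOUBLE returned
# 2 (cast ok) for "aa" in both cases before the fix
$v = "aa";
$sts = sql_type_cast($v, SQL_DOUBLE, DBIstcf_STRICT);
```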

>>> Also, there are a couple of bugs in sql_type_cast which I will fix when
>>> the dbi repository is available again - it is locked by some problem at
>>> svn.perl.org. I mailed s...@perl.org but no response or resolution as yet.
>> I fixed these.
> 
> Thanks.

checked in now.

>>> My test code so far which demonstrates the above is attached just in
>>> case (not tested on a system without JSON::XS or Data::Peek as yet and
>>> not using DBI::neat because it is too clever at recognising numbers -
>>> SvNIOK).
> 
> The "/* already has string version of the value, so use it */" block?

yes indeed.

>> I have now tested this on a system without JSON::XS and Data::Peek and
>> fixed the problems and added more tests.
>>
>> If you are ok with this I'll commit the change to DBI.xs, add the test
>> and document sql_type_cast_svpv in DBI::DBD then move on to making this
>> work in DBD::Oracle again.
> 
> That would be great. Many thanks Martin.

DBI::DBD pod changes checked in now although I may come back to it once
I've re-implemented this in DBD::Oracle.

>> BTW, not sure whether you prefer a mail to you or to continue this type
>> of discussion on the dbi-dev list - let me know and I'll do so in the
>> future.
> 
> dbi-dev is best - CC'd - as it keeps other driver developers in the loop
> and more eyeballs can help spot/fix problems more quickly.
> 
> Tim.
> 
> 

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com



Released DBD::ODBC 1.23_1 to CPAN

2009-11-12 Thread Martin Evans
Just uploaded 1.23_1 to CPAN. This release contains:

makefile.PL changes:
  some formatting changes to output
  warn that the unixodbc-dev package is not installed if the unixODBC
  headers are not found
  use $arext instead of "a"
  pattern match for pulling libodbc.* changed
  warn if DBI_DSN etc not defined
  change odbc_config output for stderr to /dev/null
  missing / on /usr/local when finding find_dm_hdr_files()

New FAQ entries from Oystein Torget for bind parameter bugs in SQL Server.

rt_46597.rt - update on wrong table

Copied dbivport.h from the latest DBI distribution into DBD::ODBC.

Added if_you_are_taking_over_this_code.txt.

Add latest Devel::PPPort ppport.h to DBD::ODBC and followed all
recommendations for changes to dbdimp.c.

Added change to Makefile.PL provided by Shawn Zong to make
Windows/Cygwin work again.

Minor change to Makefile.PL to output env vars to help in debugging
peoples build failures.

Added odbc_utf8_on attribute to dbh and sth handles to mark all
strings coming from the database as utf8.  This is for Aster (based on
PostgreSQL) which returns all strings as UTF-8 encoded unicode.
Thanks to Noel Burton-Krahn.
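Enabling the new attribute is a one-liner at connect time (a sketch - the DSN and credentials are invented):

```perl
use DBI;

# mark all strings fetched from the database as UTF-8
# (intended for servers such as Aster that always return UTF-8)
my $dbh = DBI->connect('dbi:ODBC:DSN=aster', $user, $pass,
                       { odbc_utf8_on => 1 });
```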

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Oracle, Support binding of integers so they are returned as IVs

2009-11-10 Thread Martin Evans
Thread is getting a bit long so I've snipped a lot of previous code.

Tim Bunce wrote:
> On Mon, Nov 09, 2009 at 05:05:11PM +, Martin Evans wrote:
>> Martin Evans wrote:
>>>>  



> 
>> There was an omission in my addition to Tim's example as I forgot to
>> change DBISTATE_VERSION.
> 
> Thanks. Though that's less important than it was now there's also
> DBIXS_REVISION (in dbixs_rev.h) that automatically tracks the svn
> revsion number.
> 
>> I've implemented this as it stands in DBD::Oracle and it seems to work
>> out ok and certainly where I was wanting to go (and further).
> 
> Ok.
> 
>> My own feeling is that if someone asks for something to be bound as an
>> SQL_INTEGER and it cannot due to over/under flow this should be an error
>> and that is how I've implemented it.
> 
> The return value of post_fetch_sv() is meant to allow drivers to
> report an error.
> 
> I thought about making post_fetch_sv() itself call DBIh_SET_ERR_* to
> report an error but opted to avoid that because, to generate a good
> error more info would need to be passed, like the column number.

I agree and had already output an error containing the column number.

> On the other hand, if post_fetch_sv() doesn't do it then there's a
> greater risk of inconsistency between the drivers.

I think we already have a level of inconsistency as some drivers already
return IVs without being asked for them. Also, number handling in each
database tends to differ quite a bit so I suspect the default may want
to differ per DBD.

>> Perhaps it could have been one of those informationals as the sv is
>> unchanged and still usable but it is not in the requested format so
>> I'd class that an error.
> 
> Perhaps we should have $sth->bind_col(..., { LooselyTyped => 1 });
> to allow for those who don't want an error if the type doesn't fit.

I'm happy with that.

> That certainly feels better than overloading SQL_INTEGER vs SQL_NUMERIC
> to achieve the same effect!

agreed.

>> However, I have
>> a very small concern for people who might have been binding columns with
>> a type but no destination SV but their DBD did nothing about it (which I
>> believe is all DBDs up to now). For me, I didn't leave that code in and
>> just documented it as:
>>
>>  # I was hoping the following would work (according to DBI, it
>>  # might) to ensure the a, b and c
>>  # columns are returned as integers instead of strings saving
>>  # us from having to add 0 to them below. It does not with
>>  # DBD::Oracle.
>>  # NOTE: you don't have to pass a var into bind_col to receive
>>  # the column data as it works on the underlying column and not
>>  # just a particular bound variable.
>>  #$cursor->bind_col(4, undef, { TYPE => SQL_INTEGER });
>>  #$cursor->bind_col(5, undef, { TYPE => SQL_INTEGER });
>>  #$cursor->bind_col(10, undef, { TYPE => SQL_INTEGER });
>>
>> but if those last 3 lines were left uncommented they would have ended up
>> a noop before but not now. However, I'd be surprised if anyone was
>> really doing that as it did nothing.
> 
> Does anyone know of any drivers that pay any attention to the type param
> of bind_column?

I did not find one when I was looking a few months ago.

> We could make it default to issuing a warning on overflow, and have
> attributes to specify either an error or ignore.

We could but I think most people would be happy with error or specifying
LooselyTyped. You either know you need LooselyTyped or not; if not, you
can leave it off, and if it then errors your data was not as you thought
and you have to decide whether your data is wrong or you need
LooselyTyped. I'd be concerned a warning might just get in the way.
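For reference, the interface proposed in this thread would read something like the following (hypothetical - LooselyTyped was only a proposal at this point, and the column number is invented):

```perl
use DBI qw(:sql_types);

# ask for an IV, but fall back silently rather than raising an
# error when the value over/underflows (proposed LooselyTyped flag)
$sth->bind_col(1, undef, { TYPE => SQL_INTEGER, LooselyTyped => 1 });
```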

>> I think a MinMemory attribute would be ok but I'd use it as in most of
>> my cases I am retrieving the whole result-set in one go and it can be
>> very large. How would post_fetch_sv know this attribute?
> 
> Via the flags argument.

As it turns out I /need/ MinMemory or SvPOKp(sv) returns true and that
ends up being a string again in JSON::XS. i.e., I needed the equivalent
of adding 0 to the sv which does this:

 perl -le 'use Devel::Peek;my $a = "5"; Dump($a); $a = $a + 0; Dump($a);'
SV = PV(0x8154b00) at 0x815469c
  REFCNT = 1
  FLAGS = (PADBUSY,PADMY,POK,pPOK)
  PV = 0x816fb48 "5"\0
  CUR = 1
  LEN = 4
SV = PVIV(0x8155b10) at 0x815469c
  REFCNT = 1
  FLAGS = (PADBUSY,PADMY,IOK,pIOK)
  IV = 5
  PV = 0x816fb48 "5"\0
  CUR = 1
  LEN = 4

as JSON::XS does:

if (SvPOKp (sv))
{
   .
}
else if (SvNOKp (sv))
{
   .
}
else if (SvIOKp (sv))
{
   I want this case.

Of course

Re: DBD::Oracle, Support binding of integers so they are returned as IVs

2009-11-09 Thread Martin Evans
Martin Evans wrote:
> Tim Bunce wrote:
>> On Tue, Oct 27, 2009 at 02:54:43PM +0000, Martin Evans wrote:
>>>> The next question is whether overflowing to an NV should be an error.
>>>> I'm thinking we could adopt these semantics for bind_col types:
>>>>
>>>>   SQL_INTEGER  IV or UV via sv_2iv(sv) with error on overflow
>>> this would be ideal.
>>>
>>>>   SQL_DOUBLE   NV via sv_2nv(sv)
>>>>   SQL_NUMERIC  IV else UV else NV via grok_number() with no error
>>>>
>>>> I could sketch out the logic for those cases if you'd be happy to polish
>>>> up and test.
>>> I would be happy to do that.
>> I finally got around to working on this. Here's a first rough draft
>> (which a bunch of issues) I thought I'd post here for discussion.
>>
>> I've implemented it as a hook in the DBIS structure so drivers can call
>> it directly.
>>
>> I've added the idea of optionally discarding the string buffer (which
>> would save space when storing many rows but waste time if just working
>> row-at-a-time). For now I've triggered that based on the sql_type but
>> that feels like a hack we'd regret later. A better approach might be an
>> attribute to bind_col:
>>
>> $sth->bind_col(..., { MinMemory => 1 });
>> $sth->fetchall_...();
>>
>> This code doesn't raise any errors or produce any warnings (directly),
>> it just returns a status that the driver should check if it wants to
>> implement the SQL_INTEGER == error on overflow semantics, which it
>> should if we agree that's what we're going to adopt.
>>
>> Tim.
>>
>>
>> Index: DBI.xs
>> ===
>> --- DBI.xs   (revision 13466)
>> +++ DBI.xs   (working copy)
>> @@ -78,6 +78,7 @@
>>  static int  set_err_char _((SV *h, imp_xxh_t *imp_xxh, const char 
>> *err_c, IV err_i, const char *errstr, const char *state, const char 
>> *method));
>>  static int  set_err_sv   _((SV *h, imp_xxh_t *imp_xxh, SV *err, SV 
>> *errstr, SV *state, SV *method));
>>  static int  quote_type _((int sql_type, int p, int s, int *base_type, 
>> void *v));
>> +static int  post_fetch_sv _((pTHX_ SV *h, imp_xxh_t *imp_xxh, SV *sv, 
>> int sql_type, U32 flags, void *v));
>>  static I32  dbi_hash _((const char *string, long i));
>>  static void dbih_dumphandle _((pTHX_ SV *h, const char *msg, int 
>> level));
>>  static int  dbih_dumpcom _((pTHX_ imp_xxh_t *imp_xxh, const char *msg, 
>> int level));
>> @@ -439,6 +440,7 @@
>>  DBIS->set_err_sv  = set_err_sv;
>>  DBIS->set_err_char= set_err_char;
>>  DBIS->bind_col= dbih_sth_bind_col;
>> +DBIS->post_fetch_sv = post_fetch_sv;
>>  
>>  
>>  /* Remember the last handle used. BEWARE! Sneaky stuff here!*/
>> @@ -1714,6 +1718,94 @@
>>  }
>>  
>>  
>> +/* Convert a simple string representation of a value into a more specific
>> + * perl type based on an sql_type value.
>> + * The semantics of SQL standard TYPE values are interpreted _very_ loosely
>> + * on the basis of "be liberal in what you accept and let's throw in some
>> + * extra semantics while we're here" :)
>> + * Returns:
>> + *  -1: sv is undef or doesn't
>> + *   0: sv couldn't be converted to requested (strict) type
>> + *   1: sv was handled without a problem
>> + */
>> +int
>> +post_fetch_sv(pTHX_ SV *h, imp_xxh_t *imp_xxh, SV *sv, int sql_type, U32 
>> flags, void *v)
>> +{
>> +int discard_pv = 0;
>> +
>> +/* do nothing for undef (NULL) or non-string values */
>> +if (!sv || !SvPOK(sv))
>> +return -1;
>> +
>> +switch(sql_type) {
>> +
>> +/* caller would like IV (but may get UV or NV) */
>> +/* will warn if not numeric. return 0 on overflow */
>> +case SQL_SMALLINT:
>> +discard_pv = 1;
>> +case SQL_INTEGER:
>> +sv_2iv(sv); /* is liberal, may return SvIV, SvUV, or SvNV */
>> +if (SvNOK(sv)) { /* suspicious */
>> +NV nv = SvNV(sv);
>> +/* ignore NV set just to preserve digits after the decimal 
>> place */
>> +/* just complain if the value won't fit in an IV or NV  */
>> +if (nv > UV_MAX || nv < IV_MIN) 
>> +return 0;
>> +}
>> +break;
>> +
>>

Re: DBD::Oracle, Support binding of integers so they are returned as IVs

2009-11-09 Thread Martin Evans
Tim Bunce wrote:
> On Tue, Oct 27, 2009 at 02:54:43PM +0000, Martin Evans wrote:
>>> The next question is whether overflowing to an NV should be an error.
>>> I'm thinking we could adopt these semantics for bind_col types:
>>>
>>>   SQL_INTEGER  IV or UV via sv_2iv(sv) with error on overflow
>> this would be ideal.
>>
>>>   SQL_DOUBLE   NV via sv_2nv(sv)
>>>   SQL_NUMERIC  IV else UV else NV via grok_number() with no error
>>>
>>> I could sketch out the logic for those cases if you'd be happy to polish
>>> up and test.
>> I would be happy to do that.
> 
> I finally got around to working on this. Here's a first rough draft
> (which a bunch of issues) I thought I'd post here for discussion.
> 
> I've implemented it as a hook in the DBIS structure so drivers can call
> it directly.
> 
> I've added the idea of optionally discarding the string buffer (which
> would save space when storing many rows but waste time if just working
> row-at-a-time). For now I've triggered that based on the sql_type but
> that feels like a hack we'd regret later. A better approach might be an
> attribute to bind_col:
> 
> $sth->bind_col(..., { MinMemory => 1 });
> $sth->fetchall_...();
> 
> This code doesn't raise any errors or produce any warnings (directly),
> it just returns a status that the driver should check if it wants to
> implement the SQL_INTEGER == error on overflow semantics, which it
> should if we agree that's what we're going to adopt.
> 
> Tim.
> 
> 
> Index: DBI.xs
> ===
> --- DBI.xs(revision 13466)
> +++ DBI.xs(working copy)
> @@ -78,6 +78,7 @@
>  static int  set_err_char _((SV *h, imp_xxh_t *imp_xxh, const char 
> *err_c, IV err_i, const char *errstr, const char *state, const char *method));
>  static int  set_err_sv   _((SV *h, imp_xxh_t *imp_xxh, SV *err, SV 
> *errstr, SV *state, SV *method));
>  static int  quote_type _((int sql_type, int p, int s, int *base_type, 
> void *v));
> +static int  post_fetch_sv _((pTHX_ SV *h, imp_xxh_t *imp_xxh, SV *sv, 
> int sql_type, U32 flags, void *v));
>  static I32  dbi_hash _((const char *string, long i));
>  static void dbih_dumphandle _((pTHX_ SV *h, const char *msg, int level));
>  static int  dbih_dumpcom _((pTHX_ imp_xxh_t *imp_xxh, const char *msg, 
> int level));
> @@ -439,6 +440,7 @@
>  DBIS->set_err_sv  = set_err_sv;
>  DBIS->set_err_char= set_err_char;
>  DBIS->bind_col= dbih_sth_bind_col;
> +DBIS->post_fetch_sv = post_fetch_sv;
>  
>  
>  /* Remember the last handle used. BEWARE! Sneaky stuff here!*/
> @@ -1714,6 +1718,94 @@
>  }
>  
>  
> +/* Convert a simple string representation of a value into a more specific
> + * perl type based on an sql_type value.
> + * The semantics of SQL standard TYPE values are interpreted _very_ loosely
> + * on the basis of "be liberal in what you accept and let's throw in some
> + * extra semantics while we're here" :)
> + * Returns:
> + *  -1: sv is undef or doesn't
> + *   0: sv couldn't be converted to requested (strict) type
> + *   1: sv was handled without a problem
> + */
> +int
> +post_fetch_sv(pTHX_ SV *h, imp_xxh_t *imp_xxh, SV *sv, int sql_type, U32 
> flags, void *v)
> +{
> +int discard_pv = 0;
> +
> +/* do nothing for undef (NULL) or non-string values */
> +if (!sv || !SvPOK(sv))
> +return -1;
> +
> +switch(sql_type) {
> +
> +/* caller would like IV (but may get UV or NV) */
> +/* will warn if not numeric. return 0 on overflow */
> +case SQL_SMALLINT:
> +discard_pv = 1;
> +case SQL_INTEGER:
> +sv_2iv(sv); /* is liberal, may return SvIV, SvUV, or SvNV */
> +if (SvNOK(sv)) { /* suspicious */
> +NV nv = SvNV(sv);
> +/* ignore NV set just to preserve digits after the decimal place 
> */
> +/* just complain if the value won't fit in an IV or NV  */
> +if (nv > UV_MAX || nv < IV_MIN) 
> +return 0;
> +}
> +break;
> +
> +/* caller would like SvNOK/SvIOK true if the value is a number */
> +/* will warn if not numeric */
> +case SQL_FLOAT:
> +discard_pv = 1;
> +case SQL_DOUBLE:
> +sv_2nv(sv);
> +break;
> +
> +/* caller would like IV else UV else NV */
> +/* else no error and sv is untouched */
> +case SQL_NUMERIC:
> +discard_pv = 1;
> +case SQL

Re: DBD::Oracle, Support binding of integers so they are returned as IVs

2009-10-27 Thread Martin Evans
Thanks Tim for the help on this.

Tim Bunce wrote:
> On Mon, Oct 26, 2009 at 05:29:21PM +0000, Martin Evans wrote:
>> What follows is a very rough patch (definitely not finished) which
>> proves you can do what I wanted to do. However, there are no checks on
>> the column being bound existing and I'm not sure how to save the TYPE
>> attribute when bind_col is called before execute (that is, when the
>> result-set has not been described yet). Basically, I think more is required in
>> dbd_st_bind_col but I'm not sure as yet what that is and it is possible
>> returning 1 is a total hack. I'd appreciate any advice to complete this.
>>
>> Index: oci8.c
>> ===
>> --- oci8.c   (revision 13427)
>> +++ oci8.c   (working copy)
>> @@ -3279,10 +3279,31 @@
>> +
>> +if ((fbh->req_type == 3) &&
>> +((fbh->dbtype == 2) || (fbh->dbtype == 3))){
> 
> Best to avoid 'magic numbers'.

As I said - very rough. I'd already changed those to SQLT_NUM and
SQLT_INT as ORA types but I guessed they would need to be SQL_INTEGER,
SQL_NUMERIC, SQL_DOUBLE when finished, i.e. you use the DBI types, not
the Oracle types, here since the data is coming back into Perl.

>> +char *e;
>> +char zval[32];
>> +long val;
>> +
>> +memcpy(zval, p, datalen);
>> +zval[datalen] = '\0';
>> +val = strtol(zval, &e, 10);
>> +
>> +if ((val == LONG_MAX) || (val == LONG_MIN) ||
>> +(e && (*e != '\0'))) {
>> +oci_error(sth, imp_sth->errhp, OCI_ERROR,
>> +  "invalid number or over/under flow");
>> +return Nullav;
>> +}
>> +sv_setiv(sv, val);
>> +} else {
>> +sv_setpvn(sv, p, (STRLEN)datalen);
>> +if (CSFORM_IMPLIES_UTF8(fbh->csform) ){
>> +SvUTF8_on(sv);
>> +}
>> +}

I tried your suggestion of grok_number but it does not work well for
negative numbers since it returns the absolute value and puts the result
in a UV, which may not fit signed into an IV. Anyway, you seem to have
had other ideas so I'll not worry about that too much.

> A simpler safer and more portable approach may be to just let the
> existing code store the value in an sv and then add these lines:
> 
> if (fbh->req_type == 3)
> sv_2iv(sv);
> 
> If the number is too large for an IV (or UV) you'll get an NV (float).
> The original string of digits is preserved in all cases. That's all very
> natural and predictable perlish behaviour.

Ok, I get that except you keep saying "(or UV)". Are you suggesting
there is some other logic to decide whether you create an IV or a UV?

I tried out various values with sv_2iv(sv) and what was returned looked
ok - I get a string when the number has decimal places or is too big and
an IV when it is an integer and fits.

> The next question is whether overflowing to an NV should be an error.
> I'm thinking we could adopt these semantics for bind_col types:
> 
>   SQL_INTEGER  IV or UV via sv_2iv(sv) with error on overflow

this would be ideal.

>   SQL_DOUBLE   NV via sv_2nv(sv)
>   SQL_NUMERIC  IV else UV else NV via grok_number() with no error
> 
> I could sketch out the logic for those cases if you'd be happy to polish
> up and test.

I would be happy to do that.

BTW, did you look over the possible hackery I did in dbd_st_bind_col - I
wasn't sure if simply storing the requested type and returning 1 was
acceptable. My current dbd_st_bind_col is:

int dbd_st_bind_col(SV *sth, imp_sth_t *imp_sth, SV *col, SV *ref, IV
type, SV *attribs) {
dTHX;

int field = SvIV(col);

if (field <= DBIc_NUM_FIELDS(imp_sth)) {
imp_sth->fbh[field-1].req_type = type;
}

return 1;
}

This means if someone attempts to bind a non-existent column it falls
back into DBI's bind_col and signals the error but it also means
dbd_st_bind_col in DBD::Oracle is only there to capture the requested
bind type.

Thanks

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Oracle, Support binding of integers so they are returned as IVs

2009-10-26 Thread Martin Evans
Martin J. Evans wrote:
> Hi,
> 
> With reference to the rt I created "Support binding of integers so they
> are returned as IVs" at http://rt.cpan.org/Public/Bug/Display.html?id=49818
> 
> I am now at the point where being unable to bind columns to results-sets
> in DBD::Oracle with a bind type of SQL_INTEGER (or whatever) so they
> look like integers in Perl is slowing some code of ours down
> dramatically. We convert a fetchall_arrayref returned structure into
> JSON with JSON::XS and JSON::XS converts strings to "string" and numbers
> to a plain number. Our select returns a number of columns which are
> really integer columns and as the result-set is very large the extra
> space we use encoding integers as "number" is more than just annoying.
> JSON::XS appears to know a perl scalar has been used in the context of a
> number as if we add 0 to the integer columns returned in
> fetchall_arrayref it encodes them as plain numbers instead of strings
> like "number" (see the rt for the snippet from JSON::XS which does
> this). As a result, a workaround we are using now is to loop through the
> rows adding 0 to all integer columns. I believe DBI allows bind_col to
> be called without a destination scalar so you can use it to specify the
> type of the bind e.g., SQL_INTEGER but still call fetchall_arrayref.
> 
> Does anyone know if it is feasible to make this work with DBD::Oracle
> and if so do you have some pointers as to how it may be achieved. I am
> not looking for anyone else to do the work but would like to sound
> people out about the possibility before I launch into it.
> 
> Thanks
> 
> Martin
> 
> 
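The add-0 workaround described in the quoted message above can be sketched like this (the data and column index are invented):

```perl
use JSON::XS;

# pretend this came from fetchall_arrayref: column 0 is an integer
# column that the driver returned as strings
my $rows = [ [ "1", "abc" ], [ "42", "def" ] ];

# numify column 0 so the scalar loses its POK flag and
# JSON::XS encodes it unquoted
$_->[0] += 0 for @$rows;

print encode_json($rows);   # emits [[1,"abc"],[42,"def"]]
```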

What follows is a very rough patch (definitely not finished) which
proves you can do what I wanted to do. However, there are no checks on
the column being bound existing and I'm not sure how to save the TYPE
attribute when bind_col is called before execute (that is, when the
result-set has not been described yet). Basically, I think more is
required in dbd_st_bind_col but I'm not sure as yet what that is and it
is possible returning 1 is a total hack. I'd appreciate any advice to
complete this.

Index: oci8.c
===
--- oci8.c  (revision 13427)
+++ oci8.c  (working copy)
@@ -3279,10 +3279,31 @@
 	while(datalen && p[datalen - 1]==' ')
 		--datalen;
 	}
-   sv_setpvn(sv, p, (STRLEN)datalen);
-   if (CSFORM_IMPLIES_UTF8(fbh->csform) ){
-   SvUTF8_on(sv);
-   }
+
+if ((fbh->req_type == 3) &&
+((fbh->dbtype == 2) || (fbh->dbtype == 3))){
+char *e;
+char zval[32];
+long val;
+
+memcpy(zval, p, datalen);
+zval[datalen] = '\0';
+val = strtol(zval, &e, 10);
+
+if ((val == LONG_MAX) || (val == LONG_MIN) ||
+(e && (*e != '\0'))) {
+oci_error(sth, imp_sth->errhp, OCI_ERROR,
+  "invalid number or over/under flow");
+return Nullav;
+}
+
+sv_setiv(sv, val);
+} else {
+sv_setpvn(sv, p, (STRLEN)datalen);
+if (CSFORM_IMPLIES_UTF8(fbh->csform) ){
+SvUTF8_on(sv);
+}
+}
}
}

Index: dbdimp.c
===
--- dbdimp.c(revision 13427)
+++ dbdimp.c(working copy)
@@ -869,7 +869,16 @@
return 1;
 }

int dbd_st_bind_col(SV *sth, imp_sth_t *imp_sth, SV *col, SV *ref, IV type, SV *attribs) {
+dTHX;

+int field = SvIV(col);
+
+imp_sth->fbh[field-1].req_type = type;
+
+return 1;
+}
+
 int
 dbd_db_disconnect(SV *dbh, imp_dbh_t *imp_dbh)
 {
Index: dbdimp.h
===
--- dbdimp.h(revision 13427)
+++ dbdimp.h(working copy)
@@ -191,6 +191,8 @@
int piece_lob;  /*use piecewise fetch for lobs*/
/* Our storage space for the field data as it's fetched */
sword   ftype;  /* external datatype we wish to get */
+	IV	req_type;	/* type passed to bind_col */
+
+
 	fb_ary_t	*fb_ary;	/* field buffer array */
/* if this is an embedded object we use this */
fbh_obj_t   *obj;
@@ -371,6 +373,7 @@
 #define dbd_st_FETCH_attrib	ora_st_FETCH_attrib
 #define

Re: DBD::Oracle, Support binding of integers so they are returned as IVs

2009-10-26 Thread Martin Evans
Greg Sabino Mullane wrote:
> 
>> I could do what DBD::Pg does here (and have to verify it works) but
>> Oracle integers can be very large - too big to fit in an IV in some
>> cases. 
> 
> Ah yes, I forgot that Oracle doesn't really have an integer type.
> 
>> I think the only person who knows if an integer is small enough
>> to fit in an IV is the person calling bind_col and in any case, my
>> situation is in fact that some of the database integers I need back as
>> strings and some I want as integers. As a result, I think it is
>> necessary to support the TYPE attribute to bind_col.
> 
> Sounds like a foot gun. What happens when the type is declared as int, but
> they send back ? Wouldn't the value get silently changed
> to 2147483647? A way around that is to have the driver check the size
> to see if it will fit in as IV, but at that point, you don't need the
> user-specified casting anymore, perhaps just a separate flag, e.g.
>
> $dbh->{ora_return_iv_when_possible} = 1;

I was never suggesting any integer would be silently changed - why would
anyone do that.

Doing things automatically won't help me as I only want integers back
for some of the columns. Also, I would not want an IV back for a small
integer in row N and then a string back for a large integer in row N+1
(I'd rather have an error). DBI already defines a TYPE on bind_col and
if it was implemented it could be generally useful as well as useful to me.

> Frankly, it sounds like we're doing a lot of smashing of square pegs into
> round holes to make JSON::XS happy: maybe it's better to look for a solution
> on that end at this point?

I cannot say why you changed DBD::Pg for JSON::XS but I can say why I
want integers back from Oracle instead of strings when I ask for
integers and the column is an integer. I don't think it is square pegs
into round holes.
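For context, the JSON::XS behaviour in question can be seen without any
database at all (a minimal sketch, assuming JSON::XS is installed; the
values are made up):

```perl
use strict;
use warnings;
use JSON::XS;

my $v = "42";                        # column values arrive from the driver as strings (PVs)
my $as_string = encode_json([$v]);   # encodes as ["42"]
$v += 0;                             # the workaround: force numeric context (IV)
my $as_number = encode_json([$v]);   # now encodes as [42]
print "$as_string $as_number\n";
```

JSON::XS decides string-versus-number purely from the scalar's internal
flags, which is why the "+ 0" loop over every fetched row works at all.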

I don't maintain DBD::Oracle but would have been happy to submit any
patch back for consideration. I was only looking for a push in the right
direction before implementing and even if the consensus is that it is a
waste of time it won't stop me implementing it myself as I need it.
However, I'd rather not do that as I am already maintaining 4 other
changes to DBD::Oracle until at least the next DBD::Oracle is released.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Oracle, Support binding of integers so they are returned as IVs

2009-10-26 Thread Martin Evans
Greg Sabino Mullane wrote:
> 
>> With reference to the rt I created "Support binding of integers so they
>> are returned as IVs" at http://rt.cpan.org/Public/Bug/Display.html?id=49818
> 
> If I'm understanding you correctly, this was recently 'fixed' in DBD::Pg,
> to accommodate JSON::XS as well. For the relevant code, see:
> 
> http://svn.perl.org/modules/DBD-Pg/trunk/dbdimp.c
> 
> and grep for "cast"
> 

I could do what DBD::Pg does here (and have to verify it works) but
Oracle integers can be very large - too big to fit in an IV in some
cases. I think the only person who knows if an integer is small enough
to fit in an IV is the person calling bind_col and in any case, my
situation is in fact that some of the database integers I need back as
strings and some I want as integers. As a result, I think it is
necessary to support the TYPE attribute to bind_col.

1. Does any DBD support TYPE in bind_col (I can't find any so far)?

2. Presumably if a DBD wanted to store the TYPE would it need to
implement the bind_col (dbd_st_bind_col) method itself?

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Oracle, Support binding of integers so they are returned as IVs

2009-10-23 Thread Martin Evans
Greg Sabino Mullane wrote:
> 
>> With reference to the rt I created "Support binding of integers so they
>> are returned as IVs" at http://rt.cpan.org/Public/Bug/Display.html?id=49818
> 
> If I'm understanding you correctly, this was recently 'fixed' in DBD::Pg,
> to accommodate JSON::XS as well. For the relevant code, see:
> 
> http://svn.perl.org/modules/DBD-Pg/trunk/dbdimp.c
> 
> and grep for "cast"
> 

Thanks Greg, that is exactly what I am talking about.

I'll take a look at that.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Oracle, Support binding of integers so they are returned as IVs

2009-10-23 Thread Martin Evans
Thanks for the reply John.

John Scoles wrote:
> Sounds like an easy patch to DBD::Oracle (off the top of my head) I am
> not sure how it would fix into the DBI spec though.
> 
> If I am reading the question right you want to be able to tell
> DBI/DBD::Oracle that col X of a return is an int?

Yes

> something like
> 
> $SQL='select my_id,my_name from my_table'
> 
> my $C=$DBH->prepare($SQL,{row1=int})
> 
> for lack of a better example

example from DBI docs below.

> cheers
> John Scoles

DBI for bind_col says:

=
The \%attr parameter can be used to hint at the data type formatting the
column should have. For example, you can use:

$sth->bind_col(1, undef, { TYPE => SQL_DATETIME });

to specify that you'd like the column (which presumably is some kind of
datetime type) to be returned in the standard format for SQL_DATETIME,
which is 'YYYY-MM-DD HH:MM:SS', rather than the native formatting the
database would normally use.

There's no $var_to_bind in that example to emphasize the point that
bind_col() works on the underlying column and not just a particular
bound variable.
=

so I think what I want to do is already in the spec but not implemented
by DBD::Oracle. I want to call bind_col saying SQL_INTEGER, then call
fetchall_arrayref and get IVs back for those columns instead of PVs
(strings).
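A minimal sketch of that calling pattern (using DBD::SQLite purely as a
stand-in driver, since the TYPE hint only matters where the driver
implements it; the table and data are made up):

```perl
use strict;
use warnings;
use DBI qw(:sql_types);

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });
$dbh->do('create table t (id integer, name char(10))');
$dbh->do(q{insert into t values (1, 'one')});

my $sth = $dbh->prepare('select id, name from t');
$sth->execute;

# no target scalar: this only attaches a type hint to the underlying column
$sth->bind_col(1, undef, { TYPE => SQL_INTEGER });

my $rows = $sth->fetchall_arrayref;  # one row: (1, 'one')
print "$rows->[0][0] $rows->[0][1]\n";
```

Note bind_col is called after execute, as the DBI recommends for
portability between drivers.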

So DBD::Oracle would need to know bind_col was called with a type and
save the type then at fetch time create an IV instead of an SV.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com

> Martin J. Evans wrote:
>> Hi,
>>
>> With reference to the rt I created "Support binding of integers so they
>> are returned as IVs" at
>> http://rt.cpan.org/Public/Bug/Display.html?id=49818
>>
>> I am now at the point where being unable to bind columns to results-sets
>> in DBD::Oracle with a bind type of SQL_INTEGER (or whatever) so they
>> look like integers in Perl is slowing some code of ours down
>> dramatically. We convert a fetchall_arrayref returned structure into
>> JSON with JSON::XS and JSON::XS converts strings to "string" and numbers
>> to a plain number. Our select returns a number of columns which are
>> really integer columns and as the result-set is very large the extra
>> space we use encoding integers as "number" is more than just annoying.
>> JSON::XS appears to know a perl scalar has been used in the context of a
>> number as if we add 0 to the integer columns returned in
>> fetchall_arrayref it encodes them as plain numbers instead of strings
>> like "number" (see the rt for the snippet from JSON::XS which does
>> this). As a result, a workaround we are using now is to loop through the
>> rows adding 0 to all integer columns. I believe DBI allows bind_col to
>> be called without a destination scalar so you can use it to specify the
>> type of the bind e.g., SQL_INTEGER but still call fetchall_arrayref.
>>
>> Does anyone know if it is feasible to make this work with DBD::Oracle
>> and if so do you have some pointers as to how it may be achieved. I am
>> not looking for anyone else to do the work but would like to sound
>> people out about the possibility before I launch into it.
>>
>> Thanks
>>
>> Martin
>>   
> 
> 


Re: [svn:dbi] r13334 - dbi/trunk

2009-09-14 Thread Martin Evans
Tim Bunce wrote:
> On Mon, Sep 14, 2009 at 02:12:16AM -0700, hmbr...@cvs.perl.org wrote:
>> Author: hmbrand
>> New Revision: 13334
>>
>> Modified:
>>dbi/trunk/Changes
>>dbi/trunk/DBI.xs
>>dbi/trunk/DBIXS.h
>>dbi/trunk/Driver_xst.h
>>dbi/trunk/Perl.xs
>>dbi/trunk/dbipport.h
>>
>> Log:
>> Updated dbipport.h to Devel::PPPort 3.19
> 
> Authors of compiled drivers: please test this by building the curent svn
> head version, *installing it*, and then doing a fresh build of their
> drivers against the newly installed DBI.
> 
> Please let us know about either failures or successes, including the
> output of perl -V.
> 
> Thanks! (And thanks to H.Merijn Brand for the update.)
> 
> Tim.
> 
> 

Works fine for me with latest DBD::ODBC, Linux and Perl 5.8.8. I will
try 5.10 later.
Should driver maintainers be doing something similar with Devel::PPPort?

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


DBD::ODBC 1.23 released to CPAN

2009-09-11 Thread Martin Evans
I have just uploaded DBD::ODBC 1.23 to CPAN. It is a full release of all
the 1.22_x development series releases. The changes since 1.22 are:

=head2 Changes in DBD::ODBC 1.23 September 11, 2009

Only a readme change and version bumped to 1.23. This is a full
release of all the 1.22_x development releases.

=head2 Changes in DBD::ODBC 1.22_3 August 19, 2009

Fix skip count in rt_38977.t and typo in ok call.

Workaround a bug in unixODBC 2.2.11 which can write off the end of the
string buffer passed to SQLColAttributes.

Fix skip count in rt_null_nvarchar.t test for non SQL Server drivers.

Fix test in 02simple.t which reported a fail if you have no ODBC
datasources.

In 99_yaml.t pick up the yaml spec version from the meta file instead
of specifying it.

Change calls to SQLPrepare which passed in the string length of the SQL
to use SQL_NTS because a) they are null terminated and more
importantly b) unixODBC contains a bug in versions up to 2.2.16 which
can overwrite the stack by 1 byte if the string length is specified
and not built with iconv support and converting the SQL from ASCII to
Unicode.

Fixed bug in ping method reported by Lee Anne Lester where it dies if
used after the connection is closed.

A great deal of changes to Makefile.PL to improve the automatic
detection and configuration for ODBC driver managers - especially on
64bit platforms. See rt47650 from Marten Lehmann which started it all
off.

Add changes from Chris Clark for detecting IngresCLI.

Fix for rt 48304. If you are using a Microsoft SQL Server database and
nvarchar(max) you could not insert values between 4001 and 8000
(inclusive) in size. A test was added to the existing rt_38977.t test.
Thanks to Michael Thomas for spotting this.

Added FAQ on UTF-8 encoding and IBM iSeries ODBC driver.

Add support for not passing usernames and passwords in call to
connect.  Previously DBD::ODBC would set an unspecified
username/password to '' in ODBC.pm before calling one of the login_xxx
functions.  This allows the driver to pull the username/password from
elsewhere e.g., like the odbc.ini file.

=head2 Changes in DBD::ODBC 1.22_1 June 16, 2009

Applied a slightly modified version of patch from Jens Rehsack to
improve support for finding the iODBC driver manager.

A UNICODE enabled DBD::ODBC (the default on Windows) did not handle
UNICODE usernames and passwords in the connect call properly.

Updated "Attribution" in ODBC.pm.

Unicode support is no longer experimental hence warning and prompt
removed from the Makefile.PL.

old_ping method removed.

Fixed bug in 02simple.t test which is supposed to check you have at
least one data source defined. Unfortunately, it was checking you had
more than 1 data source defined.

rt_null_varchar had the wrong skip count, meaning non-SQL-Server drivers,
or SQL Server drivers that were too old, skipped 2 more tests than planned.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: How to disable PrintError and RaiseError in DBD::ODBC::ping

2009-08-05 Thread Martin Evans
Tim Bunce wrote:
> On Tue, Aug 04, 2009 at 04:13:19PM +0100, Martin Evans wrote:
>> Tim Bunce wrote:
>>> On Tue, Aug 04, 2009 at 01:19:43PM +0100, Martin Evans wrote:
>>>> This is my second attempt to try and get some insight into how to do
>>>> this. The DBD::ODBC::ping method calls DBI::_new_sth to obtain a new
>>>> statement handle but DBD::ODBC prevents creating a new statement handle
>>>> when not connected (you cannot actually get one even if you wanted). The
>>>> problem is once DBD::ODBC discovers we are not connected it does:
>>>>
>>>> DBIh_SET_ERR_CHAR(
>>>>   h, imp_xxh, Nullch, 1,
>>>>   "Cannot allocate statement when disconnected from the database",
>>>> "08003", Nullch);
>>>>
>>>> and because PrintError is on the error is output and because RaiseError
>>>> is on the error handler is called. Most people are calling ping when not
>>>> connected and do not want this error and I wanted to mask it by
>>>> temporarily disabling PrintError and RaiseError but it does not seem to
>>>> work for me.
>>>>
>>>> Initially I tried using local $dbh->{PrintError} = 0 but this did not
>>>> work. Then I remembered I needed to call STORE from inside the driver so
>>>> changed to $dbh->STORE('PrintError', 0); but that does not work either.
>>> That should work. Try calling $dbh->dump_handle after (and after doing
>>> the same for RaiseError) that to see what the state of the handle is.
>>>
>>> Tim.
>>>
>>> p.s. You could also try using the HandleError attribute, but the above 
>>> should
>>> work so let's find out what's happening there first.
>> Thanks for the help Tim.
>>
>> Ignoring RaiseError for now as I'm imagining it will work (once it
>> works) the same as PrintError. The ping sub is currently:
>>
>> sub ping {
>> my $dbh = shift;
>> my $state = undef;
>>
>> my ($catalog, $schema, $table, $type);
>>
>> $catalog = q{};
>> $schema = q{};
>> $table = 'NOXXTABLE';
>> $type = q{};
>>
>> print "PrintError=", $dbh->FETCH('PrintError'), "\n";
>> my $pe = $dbh->FETCH('PrintError');
>> $dbh->STORE('PrintError', 0);
>> $dbh->dump_handle;
>> my $evalret = eval {
>># create a "blank" statement handle
>> my $sth = DBI::_new_sth($dbh, { 'Statement' => "SQLTables_PING" })
>> or return 1;
>>
>> DBD::ODBC::st::_tables($dbh,$sth, $catalog, $schema, $table, $type)
>>   or return 1;
>> $sth->finish;
>> return 0;
>> };
>> $dbh->STORE('PrintError', $pe);
>> if ($evalret == 0) {
>> return 1;
>> } else {
>> return 0;
>> }
>> }
>>
>> and the following perl and output are observed:
>>
>>  perl -Iblib/lib -Iblib/arch -le 'use DBI;$h = DBI->connect();print
>> "ping=",$h->ping(),"\n";$h->disconnect; print "ping=",$h->ping(), "\n";'
>> PrintError=1
>>
>> DBI::dump_handle (dbh 0x82c382c, com 0x82c4560, imp DBD::ODBC::db):
>>FLAGS 0x100217: COMSET IMPSET Active Warn PrintWarn AutoCommit
>>ERR ''
>>ERRSTR '[unixODBC][Easysoft][SQL Server Driver][SQL
>> Server]Changed language setting to us_english. (SQL-01000)
>> [unixODBC][Easysoft][SQL Server Driver][SQL Server]Changed database
>> context to 'master'. (SQL-01000)'
>>PARENT DBI::dr=HASH(0x82bc5e0)
>>KIDS 0 (0 Active)
>>Name 'baugi'
>> ping=1
>>
>> PrintError=1
>>
>> DBI::dump_handle (dbh 0x82c382c, com 0x82c4560, imp DBD::ODBC::db):
>>FLAGS 0x100213: COMSET IMPSET Warn PrintWarn AutoCommit
>>PARENT DBI::dr=HASH(0x82bc5e0)
>>KIDS 0 (0 Active)
>>Name 'baugi'
>> DBD::ODBC::db ping failed: Cannot allocate statement when disconnected
>> from the database at -e line 1.
>> ping=0
>>
>> I don't know if it is anything of a clue but if I fail to restore
>> PrintError with STORE at the end of ping, the error is not seen in the
>> second call to ping).
> 
> Ah! The DBI is reporting the error information that's stored in the
> handle when the ping method *returns*.
> 
> You need to use $h->set_err(...) to clear the error state before
> returning.
> 
> Tim.
> 
> 

When I add $dbh->set_err(undef,'',''); it does clear the error. BTW, I
saw the following in Gofer.pm at the start of ping:

sub ping {
  my $dbh = shift;
  return $dbh->set_err(0, "can't ping while not connected") # warning
unless $dbh->SUPER::FETCH('Active');

That did not work for me to generate a warning, even when I changed the
0 to "0" which is what I thought you did for a warning
(http://search.cpan.org/~timb/DBI-1.609/DBI.pm#set_err).
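The state-recording side of set_err can be exercised against
DBD::ExampleP (which ships with the DBI), so no real database is needed;
note this only shows what err() holds afterwards, not whether a warning
is actually printed:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:ExampleP:', '', '',
                       { PrintError => 0, PrintWarn => 0 });

# an err of "0" records a warning state (false but defined)
$dbh->set_err("0", "can't ping while not connected");
print defined $dbh->err ? 'err=' . $dbh->err : 'no err', "\n";

# an undef err clears the recorded error state
$dbh->set_err(undef, '', '');
print defined $dbh->err ? 'err=' . $dbh->err : 'cleared', "\n";
```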

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: How to disable PrintError and RaiseError in DBD::ODBC::ping

2009-08-04 Thread Martin Evans
Tim Bunce wrote:
> On Tue, Aug 04, 2009 at 01:19:43PM +0100, Martin Evans wrote:
>> This is my second attempt to try and get some insight into how to do
>> this. The DBD::ODBC::ping method calls DBI::_new_sth to obtain a new
>> statement handle but DBD::ODBC prevents creating a new statement handle
>> when not connected (you cannot actually get one even if you wanted). The
>> problem is once DBD::ODBC discovers we are not connected it does:
>>
>> DBIh_SET_ERR_CHAR(
>>   h, imp_xxh, Nullch, 1,
>>   "Cannot allocate statement when disconnected from the database",
>> "08003", Nullch);
>>
>> and because PrintError is on the error is output and because RaiseError
>> is on the error handler is called. Most people are calling ping when not
>> connected and do not want this error and I wanted to mask it by
>> temporarily disabling PrintError and RaiseError but it does not seem to
>> work for me.
>>
>> Initially I tried using local $dbh->{PrintError} = 0 but this did not
>> work. Then I remembered I needed to call STORE from inside the driver so
>> changed to $dbh->STORE('PrintError', 0); but that does not work either.
> 
> That should work. Try calling $dbh->dump_handle after (and after doing
> the same for RaiseError) that to see what the state of the handle is.
> 
> Tim.
> 
> p.s. You could also try using the HandleError attribute, but the above should
> work so let's find out what's happening there first.
> 
> 

Thanks for the help Tim.

Ignoring RaiseError for now as I'm imagining it will work (once it
works) the same as PrintError. The ping sub is currently:

sub ping {
my $dbh = shift;
my $state = undef;

my ($catalog, $schema, $table, $type);

$catalog = q{};
$schema = q{};
$table = 'NOXXTABLE';
$type = q{};

print "PrintError=", $dbh->FETCH('PrintError'), "\n";
my $pe = $dbh->FETCH('PrintError');
$dbh->STORE('PrintError', 0);
$dbh->dump_handle;
my $evalret = eval {
   # create a "blank" statement handle
my $sth = DBI::_new_sth($dbh, { 'Statement' => "SQLTables_PING" })
or return 1;

DBD::ODBC::st::_tables($dbh,$sth, $catalog, $schema, $table, $type)
  or return 1;
$sth->finish;
return 0;
};
$dbh->STORE('PrintError', $pe);
if ($evalret == 0) {
return 1;
} else {
return 0;
}
}

and the following perl and output are observed:

 perl -Iblib/lib -Iblib/arch -le 'use DBI;$h = DBI->connect();print
"ping=",$h->ping(),"\n";$h->disconnect; print "ping=",$h->ping(), "\n";'
PrintError=1

DBI::dump_handle (dbh 0x82c382c, com 0x82c4560, imp DBD::ODBC::db):
   FLAGS 0x100217: COMSET IMPSET Active Warn PrintWarn AutoCommit
   ERR ''
   ERRSTR '[unixODBC][Easysoft][SQL Server Driver][SQL
Server]Changed language setting to us_english. (SQL-01000)
[unixODBC][Easysoft][SQL Server Driver][SQL Server]Changed database
context to 'master'. (SQL-01000)'
   PARENT DBI::dr=HASH(0x82bc5e0)
   KIDS 0 (0 Active)
   Name 'baugi'
ping=1

PrintError=1

DBI::dump_handle (dbh 0x82c382c, com 0x82c4560, imp DBD::ODBC::db):
   FLAGS 0x100213: COMSET IMPSET Warn PrintWarn AutoCommit
   PARENT DBI::dr=HASH(0x82bc5e0)
   KIDS 0 (0 Active)
   Name 'baugi'
DBD::ODBC::db ping failed: Cannot allocate statement when disconnected
from the database at -e line 1.
ping=0

I don't know if it is anything of a clue but if I fail to restore
PrintError with STORE at the end of ping, the error is not seen in the
second call to ping).

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


How to disable PrintError and RaiseError in DBD::ODBC::ping

2009-08-04 Thread Martin Evans
This is my second attempt to try and get some insight into how to do
this. The DBD::ODBC::ping method calls DBI::_new_sth to obtain a new
statement handle but DBD::ODBC prevents creating a new statement handle
when not connected (you cannot actually get one even if you wanted). The
problem is once DBD::ODBC discovers we are not connected it does:

DBIh_SET_ERR_CHAR(
  h, imp_xxh, Nullch, 1,
  "Cannot allocate statement when disconnected from the database",
"08003", Nullch);

and because PrintError is on the error is output and because RaiseError
is on the error handler is called. Most people are calling ping when not
connected and do not want this error and I wanted to mask it by
temporarily disabling PrintError and RaiseError but it does not seem to
work for me.

Initially I tried using local $dbh->{PrintError} = 0 but this did not
work. Then I remembered I needed to call STORE from inside the driver so
changed to $dbh->STORE('PrintError', 0); but that does not work either.

Any ideas how to disable PrintError/RaiseError from inside DBD::ODBC::ping?

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Why doesn't "local" work in this case?

2009-07-28 Thread Martin Evans
Hi,

I've just received a bug report for a regression in the way ping works
(took over 3 years for someone to notice) in DBD::ODBC. If the
connection handle is not connected when a ping is done you get:

DBD::ODBC::db ping failed: Cannot allocate statement when disconnected
from the database at -e line 1.

I've identified why and it is because DBD::ODBC now checks the
connection handle is active before allocating a statement handle (it
didn't before). I wanted to change the ping method in ODBC.pm to wrap
the test in an eval like this:

sub ping {
my $dbh = shift;
my $state = undef;

my ($catalog, $schema, $table, $type);

$catalog = "";
$schema = "";
$table = "NOXXTABLE";
$type = "";

my $evalret = eval {
local $dbh->{RaiseError} = 0;
local $dbh->{PrintError} = 0;

# create a "blank" statement handle
# the following is what fails if $dbh is not connected
my $sth = DBI::_new_sth($dbh, { 'Statement' => "SQLTables_PING" });
return 1 if !$sth;

DBD::ODBC::st::_tables($dbh,$sth, $catalog, $schema, $table, $type)
  or return 1;
$sth->finish;
return 0;
};
if ($evalret == 0) {
return 1;
} else {
return 0;
}
}

as DBD::ODBC's dbdimp.c does the following if the connection handle is
not active when allocating a statement handle:

if (!DBIc_ACTIVE(imp_dbh)) {
DBIh_SET_ERR_CHAR(
  h, imp_xxh, Nullch, 1,
  "Cannot allocate statement when disconnected from the database",
  "08003", Nullch);
return 0;
}

However, the local does not work and the error is still printed.
Removing local from $dbh->{PrintError} works.

Any idea what I'm doing wrong here?

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Important bug fix release for DBD::ODBC (1.22)

2009-06-10 Thread Martin Evans
Hi,

I am sorry to have to admit to being the author of a very very silly bug
in unicode handling in DBD::ODBC. The length of UTF16 encoded data was
stored in an unsigned short and this can overflow. An unfortunate side
effect of this bug can be corruption in your perl application. I only
found this issue this morning and no one has reported it to me as yet.
If you are using a unicode enabled DBD::ODBC (the default on Windows)
and use strings larger than 64K I strongly suggest you upgrade. The 1.22
release should appear on CPAN mirrors soon.

If you package DBD::ODBC up (e.g., a PPM for ActiveState) I would be
pleased if you could expedite repackaging version 1.22.

Since 1.21 the changes are:

=head2 Changes in DBD::ODBC 1.22 June 10, 2009

Fixed bug which led to "Use of uninitialized value in subroutine
entry" warnings when writing a NULL into a NVARCHAR with a
unicode-enabled DBD::ODBC. Thanks to Jirka Novak and Pavel Richter who
found, reported and patched a fix.

Fixed serious bug in unicode_helper.c for utf16_len which I'm ashamed to say
was using an unsigned short to return the length. This meant you could
never have UTF16 strings of more than ~64K without risking serious
problems. The DBD::ODBC test code actually got a

*** glibc detected *** /usr/bin/perl: double free or corruption
(out): 0x406dd008 ***

If you use a UNICODE enabled DBD::ODBC (the default on Windows) and
unicode strings larger than 64K you should definitely upgrade now.

=head2 Changes in DBD::ODBC 1.21_1 June 2, 2009

Fixed bug referred to in rt 46597 reported by taioba and identified by
Tim Bunce. In Calls to bind_param for a given statement handle if you
specify a SQL type to bind as, this should be "sticky" for that
parameter.  That means if you do:

$sth->bind_param(1, $param, DBI::SQL_LONGVARCHAR)

and follow it up with execute calls that also specify the parameter:

$sth->execute("a param");

then the parameter should stick with the SQL_LONGVARCHAR type and not
revert to the default parameter type. The DBI docs (from 1.609)
make it clear the parameter type is sticky for the duration of the
statement but some DBDs allow the parameter to be rebound with a
different type - DBD::ODBC is one of those drivers.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Unify warning codes

2009-06-05 Thread Martin Evans
H.Merijn Brand wrote:
> On Thu, 4 Jun 2009 11:44:51 -0700 (GMT-07:00), Todd Zervas
>  wrote:
> 
>> Do you have any thoughts on how DBD::Unify ought to return warnings?
>> Specifically I need to be able to detect "dirty reads" (SQLWARN =
>> -2022).  Is the right DBI way to do this to have DBD::Unify update
>> SQLSTATE even when there is no error via the h->state method?
> 
> Did you read DBI::DBD? I have no ideas about this one. If it is not
> documented in DBI::DBD, the best (and only) place to ask is the devel
> mailing list, which I Cc'd
> 

Look at DBIh_SET_ERR_CHAR. In a quick glance at DBI::DBD I cannot find
the relevant information but the following is pulled from DBD::ODBC:

if (SQL_SUCCEEDED(err_rc)) {
 DBIh_SET_ERR_CHAR(h, imp_xxh, "", 1, ErrorMsg, sqlstate,
Nullch);
} else {
 DBIh_SET_ERR_CHAR(h, imp_xxh, Nullch, 1, ErrorMsg,
   sqlstate, Nullch);
}

I believe it is the "" argument. From DBI docs:

A driver may return 0 from err() to indicate a warning condition after a
method call. Similarly, a driver may return an empty string to indicate
a 'success with information' condition. In both these cases the value is
false but not undef. The errstr() and state() methods may be used to
retrieve extra information in these cases.
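The convention quoted above can be captured in a small helper (a sketch
only; err_kind is not a DBI API):

```perl
use strict;
use warnings;

# classify the value returned by a handle's err() method, per the
# DBI convention: undef = success, '' = success with information,
# "0" = warning, anything else true = error
sub err_kind {
    my ($err) = @_;
    return 'success'                  unless defined $err;
    return 'success with information' if $err eq '';
    return 'warning'                  if $err eq '0';
    return 'error';
}

print err_kind(undef), "\n";   # success
print err_kind(''),    "\n";   # success with information
print err_kind('0'),   "\n";   # warning
print err_kind(7),     "\n";   # error
```

The order of the checks matters: '' and '0' are both numerically zero,
so they must be tested with string comparison before falling through.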

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


New development release DBD::ODBC 1.21_1

2009-06-03 Thread Martin Evans
I have uploaded a new development release of DBD::ODBC to CPAN.

This fixes the issue reported in
https://rt.cpan.org/Ticket/Display.html?id=46597 and represents a change
in the behavior for binding parameters as DBD::ODBC was not following
the DBI specification.

If you bind parameters and then also pass the parameter data into
execute, DBD::ODBC would use either the default parameter type or what
the ODBC driver described the parameter as with SQLDescribeParam instead
of the type specified in the previous bind_param call.

See the rt for an example.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: column_info()'s COLUMN_DEF values - literal vs function

2009-05-28 Thread Martin Evans
Tim Bunce wrote:
> On Wed, May 27, 2009 at 04:16:48PM +0100, Martin Evans wrote:
>> Tim Bunce wrote:
>>> How does your DBI driver represent a default column value in the results
>>> returned by the column_info() method?
>>> Specifically, does it distinguish between default literal strings and
>>> default functions/expressions?
>> I ran the following code to SQL Server via the Easysoft SQL Server ODBC
>> driver and DBD::ODBC:
>>
>> create table martin (a int default NULL,
>>  b int default 1,
>>  c char(20) default 'fred',
>>  d char(30) default current_user);
>> DBI::dump_results($h->column_info(undef, undef, 'martin', undef));
>>
>> The results are below. If you would like more types of defaults let me know.
> 
>> 'master', 'dbo', 'martin', 'a', '4', 'int', '10', '4', '0', '10', '1',
>> undef, '(NULL)',
>> 'master', 'dbo', 'martin', 'b', '4', 'int', '10', '4', '0', '10', '1',
>> undef, '((1))',
>> 'master', 'dbo', 'martin', 'c', '1', 'char', '20', '20', undef, undef,
>> '1', undef, '('fred')',
>> 'master', 'dbo', 'martin', 'd', '1', 'char', '30', '30', undef, undef,
>> '1', undef, '(user_name())',
> 
> So ODBC matches the spec and, like Postgres, is reporting an expression
> rather than the original literal text. (And wrapping it in parens,
> presumably to avoid precedence issues if used in an expression.)

Of course, this is one particular ODBC driver. DBD::ODBC has the
"luxury" in this one case of not having to do anything but leave it up
to the ODBC driver to produce the result-set. We wrote this ODBC driver
so obviously it conforms with the spec to the best we can make it.

You should also be aware that although the Microsoft SQL Server driver
(or at least one of the many, many versions of their driver) returns the
same as above for those columns, it returns more columns than shown
above - another 6 IIRC.

> 
> For now I've a draft patch to the DBI docs that looks like:
> 
> -B<COLUMN_DEF>: The default value of the column.
> +B<COLUMN_DEF>: The default value of the column, in a format that can be used
> +directly in an SQL statement.
> +
> +Note that this may be an expression and not simply the text used for the
> +default value in the original CREATE TABLE statement. For example, given:
> +
> +col1 char(30) default current_user
> +col2 char(30) default 'string'
> +
> +where "current_user" is the name of a function, the corresponding C<COLUMN_DEF>
> +values would be:
> +
> +Database    col1              col2
> +Postgres:   "current_user"()  'string'::text
> +MS SQL:     (user_name())     ('string')
> +
> 
>> I could in theory run this to around 10 databases via 4 or 5 DBDs
>> but I'd really need a lot of persuasion that I was helping out big time
>> to do that.
> 
> I'd be interested in the COLUMN_DEF values for other databases and DBDs
> but it's not urgent. Hopefully others can fill in the gaps.
> (Oracle and mysql are two big missing databases at the moment.).
> 
> Tim.
> 
> 

DBD::Oracle to one of our databases with:

use DBI;
use strict;
use warnings;

my $h = DBI->connect;

eval {$h->do(q{drop table martin})};

my $table = << 'EOT';
create table martin (a int default NULL,
 b int default 1,
 c char(20) default 'fred',
 d varchar2(30) default user,
 e int)
EOT

$h->do($table);

DBI::dump_results($h->column_info(undef, 'XXX', 'MARTIN', undef));

shows:

undef, "XXX", "MARTIN", "A", "3", "NUMBER", "38", "40", "0", "10", "1",
undef, "NULL", "3", undef, undef, "1", "YES"
undef, "XXX", "MARTIN", "B", "3", "NUMBER", "38", "40", "0", "10", "1",
undef, "1", "3", undef, undef, "2", "YES"
undef, "XXX", "MARTIN", "C", "1", "CHAR", "20&

Re: column_info()'s COLUMN_DEF values - literal vs function

2009-05-27 Thread Martin Evans
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Tim Bunce wrote:
> How does your DBI driver represent a default column value in the results
> returned by the column_info() method?
> 
> Specifically, does it distinguish between default literal strings and
> default functions/expressions?
> 
> Consider the difference between these two column definitions
> 
> bar1 timestamp not null default 'current_timestamp',
> bar2 timestamp not null default  current_timestamp,
> 
> or, more generally:
> 
> foo1 varchar(20) not null default 'current_user',
> foo2 varchar(20) not null default  current_user,
> 
> This issue has cropped up in relation to a bug Jos has filed against
> DBIx::Class::Schema::Loader: https://rt.cpan.org/Ticket/Display.html?id=46412
> 
> The ODBC 3.0 spec says
> http://web.archive.org/web/20070513203826/http://msdn.microsoft.com/library/en-us/odbc/htm/odbcsqlcolumns.asp
> 
> The default value of the column. The value in this column should be
> interpreted as a string *if it is enclosed in quotation marks*.
> 
> If NULL was specified as the default value, then this column is the word
> NULL, not enclosed in quotation marks. If the default value cannot be
> represented without truncation, then this column contains TRUNCATED,
> with no enclosing single quotation marks. If no default value was
> specified, then this column is NULL.
> 
> *The value of COLUMN_DEF can be used in generating a new column
> definition*, except when it contains the value TRUNCATED.
> 
> (The *emphasis* is mine.)
> 
> So, people, what does your database driver do for these cases?
> Are COLUMN_DEF values for literal defaults returned by column_info()
> enclosed in quotation marks?
> 
> Tim.
> 
> 

I ran the following code to SQL Server via the Easysoft SQL Server ODBC
driver and DBD::ODBC:

use DBI;
use strict;
use warnings;

my $h = DBI->connect;

eval {$h->do(q{drop table martin})};

my $table = << 'EOT';
create table martin (a int default NULL,
 b int default 1,
 c char(20) default 'fred',
 d char(30) default current_user);
EOT

$h->do($table);

DBI::dump_results($h->column_info(undef, undef, 'martin', undef));

The results are below. If you would like more types of defaults let me
know. I could in theory run this to around 10 databases via 4 or 5 DBDs
but I'd really need a lot of persuasion that I was helping out big time
to do that.

'master', 'dbo', 'martin', 'a', '4', 'int', '10', '4', '0', '10', '1',
undef, '(NULL)', '4', undef, undef, '1', 'YES', undef, undef, undef,
undef, undef, undef, '38'
'master', 'dbo', 'martin', 'b', '4', 'int', '10', '4', '0', '10', '1',
undef, '((1))', '4', undef, undef, '2', 'YES', undef, undef, undef,
undef, undef, undef, '38'
'master', 'dbo', 'martin', 'c', '1', 'char', '20', '20', undef, undef,
'1', undef, '('fred')', '1', undef, '20', '3', 'YES', undef, undef,
undef, undef, undef, undef, '39'
'master', 'dbo', 'martin', 'd', '1', 'char', '30', '30', undef, undef,
'1', undef, '(user_name())', '1', undef, '30', '4', 'YES', undef, undef,
undef, undef, undef, undef, '39'
4 rows

Martin
- --
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


DBD::ODBC 1.20 released to CPAN

2009-04-21 Thread Martin Evans
I released DBD::ODBC 1.20 yesterday. I am happy to accept any make test
results mailed directly to me whether they are success or failure. If
you mail me success results I'll compile a listing of working ODBC drivers.

The changes are listed below the most significant of which are the first
and last ones. There is currently a problem with SQL Server Native
Client version 10.00.1600 which I am still working on reported via cpan
testers. If you have this version of SQL Server Native Client and are
prepared to help me out running some test code I would like to hear from
you as I'm having problems getting it myself.

=head2 Changes in DBD::ODBC 1.20 April 20, 2009

Fix bug in handling of SQL_WLONGVARCHAR when not built with unicode
support.  The column was not identified as a long column and hence the
size of the column was not restricted to LongReadLen. Can cause
DBD::ODBC to attempt to allocate a huge amount of memory.
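For anyone bitten by this before upgrading, the usual application-side guard is to bound the long-column allocation explicitly. A minimal sketch, assuming $dbh is an already connected DBD::ODBC handle:

```perl
# Sketch: cap the per-column allocation for long (e.g. SQL_WLONGVARCHAR)
# columns explicitly rather than relying on driver defaults.
$dbh->{LongReadLen} = 512 * 1024;  # fetch at most 512KB per long column
$dbh->{LongTruncOk} = 0;           # raise an error on truncation rather
                                   # than silently clipping the data
```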

Minor changes to Makefile.PL to help diagnose how it decided which
driver manager to use and where it was found.

Offer suggestion to debian-based systems when some of unixODBC is
found (the bin part) but the development part is missing.

In 20SqlServer.t attempt to drop any procedures we created if they
still exist at the end of the test. Reported by Michael Higgins.

In 12blob.t separate the code to delete the test table into a sub and
call it at the beginning and end; handle failures from prepare; there
were two ENDs.

In ODBCTEST.pm when no acceptable test column type is found output all
the found types and BAIL_OUT the entire test.

Skip rt_39841.t unless actually using the SQL Server ODBC driver or
native client.

Handle drivers which return 0 for SQL_MAX_COLUMN_NAME_LEN.

Double the buffer size used for column names if built with unicode.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


New release 1.19 of DBD::ODBC

2009-04-02 Thread Martin Evans
I have just uploaded DBD::ODBC 1.19 to CPAN where it should start
appearing later today. 1.19 is the result of a lot of hard work in the
previous 4/5 development releases to work around various ODBC driver
bugs. Thank you to everyone who sent me test output and by all means
continue to do so.

If you have FreeTDS then you should note that as a result of working
around a bug in FreeTDS (which ignored all connection attributes after a
double ;;) the Multiple Active Statement test is now run and currently
hangs (it is in 20SqlServer.t) and there are other failures. I've done
my best to workaround what I can but a make test for some versions of
freeTDS (at least 0.82 and 0.6*) will hang - sorry. If you work on
freeTDS and want to sort these issues out by all means get in contact
with me.

Below I've listed all the changes since the last full release.

=head2 Changes in DBD::ODBC 1.19 April 2, 2009

Some minor diagnostic output during tests when running against freeTDS
to show we know of issues in freeTDS.

Fixed issue in 20SqlServer.t where the connection string got set with
two consecutive semi-colons. Most drivers don't mind this but freeTDS
ignores everything after that point in the connection string.

Quieten some delete table output during tests.

Handle connect failures in 20SqlServer.t in the multiple active
statement tests.

In 02simple.t cope with ODBC drivers or databases that do not need a
username or password (MS Access).

In 20SqlServer.t fix skip count and an erroneous assignment for
driver_name.

Change some if tests to Test::More->is tests in 02simple.t.

Fix "invalid precision" error during tests with the new ACEODBC.DLL MS
Access driver. Same workaround applied for the old MS Access driver
(ODBCJT32.DLL) some time ago.

Fix out of memory error during tests against the new MS Access driver
(ACEODBC.DLL). The problem appears to be that the new Access driver
reports ridiculously large parameter sizes for "select ?" queries and
there are some of these in the unicode round trip test.

Fixed minor typo in Makefile.PL - diagnostic message mentioned "ODBC
HOME" instead of ODBCHOME.

12blob.t test somehow got lost from MANIFEST - replaced. Also changed
algorithm to get a long char type column as some MS Access drivers
only show SQL_WLONGVARCHAR type in unicode.

Added diagnostic output to 02simple.t to show the state of
odbc_has_unicode.

=head2 Changes in DBD::ODBC 1.18_4 March 13, 2009

A mistake in the MANIFEST led to the rt_43384.t test being omitted.

Brian Becker reported the tables PERL_DBD_39897 and PERL_DBD_TEST are
left behind after testing. I've fixed the former but not the latter
yet.

Yet another variation on the changes for rt 43384. If the parameter is
bound specifically as SQL_VARCHAR, you got invalid precision
error. Thanks to Øystein Torget for finding this and helping me verify
the fix.

If you attempt to insert large amounts of data into MS Access (which
does not have SQLDescribeParam) you can get an invalid precision error
which can be worked around by setting the bind type to
SQL_LONGVARCHAR. This version does that for you.

08bind2.t had a wrong skip count.

12blob.t had strict commented out and GetTypeInfo was not quoted. Also
introduced a skip if the execute fails as it just leads to more
obvious failures.

In dbdimp.c/rebind_ph there was a specific workaround for SQL Server
which was not done after testing if we are using SQL Server - this
was breaking tests to MS Access.

=head2 Changes in DBD::ODBC 1.18_2 March 9, 2009

Added yet another workaround for the SQL Server Native Client driver
version 2007.100.1600.22 and 2005.90.1399.00 (driver version
09.00.1399) which leads to HY104, "Invalid precision value" in the
rt_39841.t test.

=head2 Changes in DBD::ODBC 1.18_1 March 6, 2009

Fixed bug reported by Toni Salomäki leading to a describe failed error
when calling procedures with no results. Test cases added to
20SqlServer.t.

Fixed bug rt 43384 reported by Øystein Torget where you cannot insert
more than 127 characters into a Microsoft Access text(255) column when
DBD::ODBC is built in unicode mode.

Martin
-- 
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


New development release (1.18_2) of DBD::ODBC - please read if you use DBD::ODBC

2009-03-09 Thread Martin Evans

Hi,

I have released a new development release of DBD::ODBC 1.18_2 which you 
can find on CPAN. For a list of changes see the end of this email.


I would like to make a plea to all DBD::ODBC users to download and at 
least run the make test and send the output to me, even if you are not 
currently planning on upgrading.


Increasingly I am finding it very difficult to keep on top of specific 
workarounds for drivers and driver managers. At this time there are at 
least 4 ODBC driver managers and well over 50 ODBC drivers used 
regularly (just for SQL Server on Windows there are over 10 drivers and 
versions which can be used for SQL Server). I cannot possibly have and 
test them all and fixing an issue in one driver/driver_manager is 
increasingly likely to break another sending me into a loop.


The latest test code for DBD::ODBC outputs details of the database and 
driver which I can use to create a matrix of working versions. Please, 
please at least download the latest DBD::ODBC, run make test and send it 
to me.


Issues addressed in 1.18_1 and 1.18_2:

=head2 Changes in DBD::ODBC 1.18_2 March 9, 2009

Added yet another workaround for the SQL Server Native Client driver
version 2007.100.1600.22 and 2005.90.1399.00 (driver version
09.00.1399) which leads to HY104, "Invalid precision value" in the
rt_39841.t test.

=head2 Changes in DBD::ODBC 1.18_1 March 6, 2009

Fixed bug reported by Toni Salomäki leading to a describe failed error
when calling procedures with no results. Test cases added to
20SqlServer.t.

Fixed bug rt 43384 reported by oyse where you cannot insert more than
127 characters into a Microsoft Access text(255) column when DBD::ODBC
is built in unicode mode.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Warning about rt.cpan emailings not arriving

2009-02-26 Thread Martin Evans
I just noticed the thread http://www.perlmonks.org/?node_id=746522 on 
perl monks and when I looked at DBD::ODBC, there were 3 issues posted in 
the last few days none of which I got an email from. It would appear I 
am not the only one.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Reg: Installing Bugzilla with MySQL database

2009-02-24 Thread Martin Evans

Raghavachary, Sundeep wrote:

Hello All,

When I am running the DBD-mysql-4.001 with DBI-1.58 on Red Hat 5 EL
(x86_64), it fails at "make test" with the following error code 255.

Can you help me resolving the error?

(In theory transactions could be supported when using a transport that
maintains a connection, like stream does. If you're interested in this
please get in touch via dbi-dev@perl.org)

Thanks
Sundeep


Please don't "get in touch via dbi-dev@perl.org" because it is not the 
appropriate place for this posting. If you are having problems 
installing DBD::mysql you could use the dbi-users list or one of the 
mysql/perl lists (p...@lists.mysql.com). This list is for DBI development.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Error Message: on "make test" with DBD-mysql-4.001

PERL_DL_NONLAZY=1 /usr/local/bin/perl "-MExtUtils::Command::MM" "-e"
"test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00base.ok
t/10dsnlist..*** stack smashing detected ***: /usr/local/bin/perl terminated
t/10dsnlist..dubious
Test returned status 0 (wstat 6, 0x6)
t/20createdrop...ok
t/30insertfetch..ok
t/35limitok
t/35prepare..ok
t/40bindparamok
t/40bindparam2...ok
t/40blobsok
t/40listfields...ok
t/40nullsok
t/40numrows..ok
t/41bindparamok
t/41blobs_prepareok
t/42bindparamok
t/50chopblanks...ok
t/50commit...Transactions not supported by database at t/50commit.t line 101.
t/50commit...dubious
Test returned status 255 (wstat 65280, 0xff00)
DIED. FAILED tests 5-30
Failed 26/30 tests, 13.33% okay
t/60leaksskipped
all skipped: $ENV{SLOW_TESTS} is not set
t/70takeimp..skipped
all skipped: test feature not implemented
t/75supported_sqlok
t/80procsDBD::mysql::st execute failed: Failed to CREATE PROCEDURE testproc at t/80procs.t line 105.
t/80procsdubious
Test returned status 255 (wstat 65280, 0xff00)
DIED. FAILED tests 9-29
Failed 21/29 tests, 27.59% okay
t/insertid...ok
t/param_values...ok
t/prepare_noerrorok
t/texecute...ok
t/utf8...Wide character in print at t/lib.pl line 258.
t/utf8...FAILED test 12
Failed 1/15 tests, 93.33% okay
Failed Test   Stat Wstat Total Fail  List of Failed

---
t/10dsnlist.t0 6??   ??  ??
t/50commit.t   255 6528030   52  5-30
t/80procs.t255 6528029   42  9-29
t/utf8.t151  12
2 tests skipped.
Failed 4/26 test scripts. 48/489 subtests failed.
Files=26, Tests=489,  1 wallclock secs ( 1.06 cusr +  0.25 csys =  1.31
CPU)
Failed 4/26 test programs. 48/489 subtests failed.
make: *** [test_dynamic] Error 255













Re: ANNOUNCE: DBD::Oracle 1.23 Release Candidate 3

2009-02-23 Thread Martin Evans

John Scoles wrote:

Ok guys and gals how about a little testing with this RC.

This time round I have fixed a bug in the connection function where the
environment handle was lost if one tried to connect with a bad 
user/password combination.


You can find it at the same old spot

http://svn.perl.org/modules/dbd-oracle/trunk/dbd_Oracle_123_RC3.tar

Cheers
John Scoles




Builds and tests for me on:

This is perl, v5.8.8 built for i386-linux-thread-multi

I have not checked it with our test system yet but I will as soon as I can.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Taking trace () to the next level ...

2008-12-05 Thread Martin Evans

[EMAIL PROTECTED] wrote:

Merijn and I have implemented it in DBD::Oracle and DBD::Unify
respectively.  I am about to review the thread and see if I have to
make any further changes to DBD::Oracle.

I think the key is getting Tim B on board.

I myself have found it a great help when debugging, as most of the bugs
that come up with DBD::Oracle have to do with OCI and not DBI.

Just my 2c for today

Cheers
John Scoles


Yes I saw the changes to Unify and Oracle but Unify included stuff like:

sub parse_trace_flag
{
my ($dbh, $name) = @_;
return 0x7F00 if $name eq 'DBD';

which seemed to me moving past the current spec in ways I was a little 
uncomfortable about.


and Oracle:

o seemed to be using a global integer dbd_oracle flag
o had a lot of "if (DBIS->debug >= 2 || dbd_verbose >= 2 )"

and I was rather hoping it had ended up fitting into the existing 
DBIc_TRACE macros somehow as I converted to use that last time the 
tracing changed.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


H.Merijn Brand wrote:

I've had a series of discussions with several DBD authors and others
because I wanted to get better support for Driver-side trace message
handling in a portable way.

My DBD::Unify supports $dbh->{dbd_verbose} since a *long* time, and Tim
never saw the generic value of that, but now that John Scoles has used
it for himself in DBD::Oracle and others showed interest too, it might
be well worth to take the discussion to a wider board.

Here's a summary, '»' marked lines are proposed additions/changes. I
hope I interpreted all opinions correctly.

DBI

$h->trace ($trace_setting [, $output ]);

   $h may be 'DBI'

   $trace_setting is a (combined) flag, which supports

1..15   Simple debug level
"SQL" All SQL statements executed (not yet implemented)
"ALL" All DBI *and* Driver flags
»  "DBD" All Driver flags

» $h->trace ("DBD=7") could be an alternative to set the level

"1|pglib" Combinations of a DBI debug level and driver
flags, which have to be parsed by
parse_trace_flags ()
»  "1|4" Combination of a DBI debug level and a
driver debug level
»  "|4"  Driver-side only debug level, will NOT
alter the current DBI debug level

$trace_setting can be stored in $ENV{DBI_TRACE}
»  Driver side $trace_settings can be stored in $ENV{DBD_TRACE}
(note that this /was/ DBD_VERBOSE in previous discussions, but
DBD_TRACE is more in line with the above)

#define DBIc_TRACE_LEVEL_MASK   0x000F
#define DBIc_TRACE_FLAGS_MASK   0xFF00

Is 0xF0 supposed to be the Driver *level* mask?

   $output can be a filename or an output handle. The latter is very
   nice in combination with perlIO

my $trace = "";
open my $th, ">", \$trace;
$dbh->trace (4, $th);
...
$dbh->trace (0);
# Trace log now in $trace;

   Handles can get their own (inherited) $trace_setting using the

$h->{TraceLevel}  = "2|14";
»  $h->{dbd_verbose} = 4;

»  the dbd_verbose attribute is internally implemented as uni_verbose,
»  pg_verbose, ora_verbose, ... but offers a generic alias to
»  dbd_verbose, so that writing cross-database portable scripts will be
»  much easier.

There is no policy regarding DBD tracing. DBD::Unify uses a trace
*level*, which is by default inherited from dbis->debug (XS), but
DBD::Pg uses flags

DBI - Uses trace *level* (flags SQL or ALL set all)
  0 - Trace disabled.
  1 - Trace top-level DBI method calls returning with results or errors.
  2 - As above, adding tracing of top-level method entry with
parameters.
  3 - As above, adding some high-level information from the driver
  and some internal information from the DBI.
  4 - As above, adding more detailed information from the driver.
  This is the first level to trace all the rows being fetched.
  5 to 15 - As above but with more and more internal information.

DBD::Unify - Uses trace *level*
  0 - Trace disabled
  1 - No messages defined (yet)
  2 - Level 1 plus main method entry and exit points:
  3 - Level 2 plus errors and additional return codes and field types
  and values:
  4 - Level 3 plus some content info:
  5 - Level 4 plus internal coding for exchanges and low(er) level
  return codes:
  6 - Level 5 plus destroy/cleanup states:
  7 - No messages (yet) set to level 7 and up.

DBD::Oracle - Uses trace *level*
  Uses level 1..6, but does not document what the output is

DBD::Pg - uses trace *flags* AND *level* from DBI. Level 4 or higher
  sets all flags
  pglibpq   - Outputs the name of each libpq function (without
  arguments) immediately before running it.
  pgstart   - Outputs the name of each internal DBD::Pg function,
  and other information such as the function a

Re: Help sought with definition and implementation of ParamTypes attribute

2008-10-13 Thread Martin Evans
Thanks to all who have responded with clarification. I have implemented 
ParamTypes in DBD::ODBC as a hash reference with parameter number as key 
and each value is a hash reference with keys of 'TYPE' and values of SQL 
type number.


This will be in 1.17_2.
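Under that structure, reading the attribute back looks roughly like the sketch below. The table name and SQL are made up for illustration, and it assumes a connected $dbh with a driver (such as DBD::ODBC 1.17_2 or later) implementing ParamTypes this way.

```perl
use DBI qw(:sql_types);

# Hypothetical statement; parameter numbers key the ParamTypes hash.
my $sth = $dbh->prepare('select * from mytable where a = ? and b = ?');
$sth->bind_param(1, 42,    { TYPE => SQL_INTEGER });
$sth->bind_param(2, 'xyz', { TYPE => SQL_VARCHAR });

# Each value is itself a hashref with a TYPE key holding the SQL type number,
# e.g. { 1 => { TYPE => 4 }, 2 => { TYPE => 12 } }
my $types = $sth->{ParamTypes};
for my $p (sort keys %$types) {
    printf "param %s => SQL type %s\n", $p, $types->{$p}{TYPE};
}
```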

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Help sought with definition and implementation of ParamTypes attribute

2008-10-10 Thread Martin Evans

H.Merijn Brand wrote:

On Fri, 10 Oct 2008 16:19:20 +0100, Martin Evans
<[EMAIL PROTECTED]> wrote:


Hi,

The DBI specification for ParamTypes taken from the DBI pod says the 
following for ParamTypes:


Returns a reference to a hash containing the type information currently 
bound to placeholders.  The keys of the hash are the ’names’ of the 
placeholders: either integers starting at 1, or, for drivers that 
support named placeholders, the actual parameter name string. The hash 
values are hashrefs of type information in the same form as that 
provided to the various bind_param() methods (See "bind_param" for the 
format and values), plus anything else that was passed as the third 
argument to bind_param().  Returns undef if not supported by the driver.


I'm not sure why the values of the keys are hash references unless


because if only the numeric values were supported, it would have been a
list, not a hash, but when placeholder names come in sight, a list
would not do

$sth = $dbh->prepare ("select * from xx where xs between ? and ? or xc = ?");
$sth->execute (4, 7, "0");

ParamValues   => { 1 => 4, 2 => 7, 3 => "0" },
ParamTypes=> { 1 => 5, 2 => 5, 3 => 1   },

If param names are supported, that might look like

ParamValues   => { foo => 4, bar => 7, baz => "0" },
ParamTypes=> { foo => 5, bar => 5, baz => 1   },


but here the keys 1, 2, 3 or foo, bar and baz do not have hash 
references as values as per "the hash values are hashrefs" they have 
scalars as values. What I understood "the hash values are hashrefs" to 
mean was:


ParamTypes => {1 => {something => x, somethingelse => y},
   2 => {something => x, somethingelse => y}}

and I was questioning what the "something" and "somethingelse" were 
since I am only aware of a "type".



multiple values are to be stored. If multiple values per key are stored 
what are they typically? I can only find one DBD which implements 
ParamTypes (DBD::Pg) and unless I am mistaken it sets the values of the 


I implemented it in DBD::Unify as of 0.75 in a bigger patch to
implement as much as possible of the DBI definition:


thanks for that pointer - I missed DBD::Unify


*** Release 0.75 - Tue 23 Sep 2008 <[EMAIL PROTECTED]>

- Three-level dbd_verbose and documentation
- $ENV{DBD_TRACE} sets $dbh->{dbd_verbose} on/before connect
- New tests for $h->trace (...) and $h->{dbd_verbose}
- Added type_info_all (), get_info (), and parse_trace_flag ()
- Note that identifiers are now quoted
- Override quote_identifier () (UNIFY has no CATALOGS)
- Accept 2-arg and 3-arg ->do ()
- Accept %attr to ->prepare ()
- Raised all verbose levels by 1. 1 and 2 are now DBI only
- Removed 05-reauth.t
- NULLABLE now always 2, as it doesn't work
- Implemented CursorName  sth attribute
- Implemented ParamValues sth attribute
- Implemented ParamTypes  sth attribute
- Implemented RowsInCache sth attribute (always 0)
- Tested with Unify 6.3AB on HP-UX 10.20 with perl 5.8.8
- Tested with Unify 8.2BC on HP-UX 11.00 with perl 5.8.8
- Tested with Unify 8.3I  on HP-UX 11.23 with perl 5.10.0
- Tested with Unify 8.3K  on AIX 5.2.0.0 with perl 5.8.8
  Tests will fail on older perls, as the test cases use scalarIO


keys to a scalar value - the type of the parameter.


in dbd_st_FETCH_attrib ()

if (kl == 10 && strEQ (key, "ParamTypes")) {
HV *hv = newHV ();
retsv  = newRV (sv_2mortal ((SV *)hv));
while (--p >= 0) {
char key[8];
sprintf (key, "%d", p + 1);
hv_store (hv, key, strlen (key), newSViv (imp_sth->prm[p].ftp), 0);
}
}


So you seem to have implemented it like DBD::Pg but that does not seem 
to agree with how ParamTypes is documented.





Reason I'm asking is it is on my to do list for DBD::ODBC.


The test case in t/20-uni-basic.t now looks like

ok ($sth = $dbh->prepare ("select * from xx where xs between ? and ? or xc = ?"), 
"sel prepare");
ok ($sth->execute (4, 7, "0"), "execute");
ok (1, "-- Check the internals");
{   my %attr = (# $sth attributes as documented in DBI-1.607
NAME  => [qw( xs xl xc xf xr xa xh xT xd xe )],
NAME_lc   => [qw( xs xl xc xf xr xa xh xt xd xe )],
NAME_uc   => [qw( XS XL XC XF XR XA XH XT XD XE )],
NAME_hash => {qw( xs 0 xl 1 xc 2 xf 3 xr 4 xa 5 xh 6 xT 7 xd 8 xe 9 
)},
NAME_lc_hash  => {qw( xs 0 xl 1 xc 2 xf 3 xr 4 xa 5 xh 6 xt 7 xd 8 xe 9 
)},
NAME_uc_hash  => {qw( XS 0 XL 1 XC 2 XF 3 XR 4 XA 5 XH 6 XT 7 XD 8 XE 9 
)},
uni_types =&

Help sought with definition and implementation of ParamTypes attribute

2008-10-10 Thread Martin Evans

Hi,

The DBI specification for ParamTypes taken from the DBI pod says the 
following for ParamTypes:


Returns a reference to a hash containing the type information currently 
bound to placeholders.  The keys of the hash are the ’names’ of the 
placeholders: either integers starting at 1, or, for drivers that 
support named placeholders, the actual parameter name string. The hash 
values are hashrefs of type information in the same form as that 
provided to the various bind_param() methods (See "bind_param" for the 
format and values), plus anything else that was passed as the third 
argument to bind_param().  Returns undef if not supported by the driver.


I'm not sure why the values of the keys are hash references unless 
multiple values are to be stored. If multiple values per key are stored 
what are they typically? I can only find one DBD which implements 
ParamTypes (DBD::Pg) and unless I am mistaken it sets the values of the 
keys to a scalar value - the type of the parameter.


Reason I'm asking is it is on my to do list for DBD::ODBC.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Passing unicode strings to prepare method and other unicode questions

2008-09-01 Thread Martin Evans

Tim Bunce wrote:

On Sun, Aug 31, 2008 at 11:47:38PM +0100, Martin J. Evans wrote:

Tim Bunce wrote:

On Fri, Aug 29, 2008 at 12:37:48PM +0100, Martin Evans wrote:
  

Martin Evans wrote:

dbd_st_prepare and dbd_db_login6 both take char* and not the original SV 
so how can I tell if the strings are utf8 encoded or not?


What I'd like to be able to do (in ODBC terms is):

In dbd_db_login6
  test if connection string has utf8 set on it
  if (utf8on)
convert utf8 to utf16 (that is what ODBC wide functions use)
call SQLDriverConnectW
  else
call SQLDriverConnect (this is the ANSI version)

Similarly in prepare where a number of people have unicode column or 
table names and hence want to do "select unicode_column_name from 
table".


Is this what dbd_st_prepare_sv (as opposed to dbd_st_prepare) is for? 
and should there be a dbd_db_login6_sv?

Yes, and yes.
  
Thanks Tim. So how do I get a login6_sv? (I've got an awful feeling you are 
going to say send a patch).


Your feeling is spot on. Should be trivial though. There are several
similar cases in Driver.xst already.

Thanks for working on this Martin!

Tim.


Ok, as you say the change is trivial for a unicode username and password:

#ifdef dbd_db_login6_sv
ST(0) = dbd_db_login6(dbh, imp_dbh, dbname, username, password, 
attribs) ? &sv_yes : &sv_no;

#elif defined(dbd_db_login6)
ST(0) = dbd_db_login6(dbh, imp_dbh, dbname, u, p, attribs) ? 
&sv_yes : &sv_no;

#else
ST(0) = dbd_db_login( dbh, imp_dbh, dbname, u, p) ? &sv_yes : &sv_no;
#endif

and an additional line in dbd_xsh.h I presume but not "dbname" which is 
the one I'd really like as it is currently char *. How do I get dbname 
to be an SV - as you may have guessed by now I'm not an XS expert. Is it 
just a case of changing my DBD::ODBC::db::_login to say it is an SV*?


Thanks for the pointers.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com

Also, to use dbd_st_prepare_sv am I supposed to add something like the 
following to ODBC.xs:


#include "ODBC.h"
# the following line added:
#define dbd_st_prepare_sv dbd_st_prepare_sv

Each driver should have a .h file that contains a bunch of line like

#define dbd_db_do   ora_db_do
#define dbd_db_commit   ora_db_commit
#define dbd_db_rollback ora_db_rollback
...etc...

They indicate which methods have implementations in C, which
implementation should be used, i.e. dbd_db_login vs dbd_db_login6,
and they ensure that the C function names are unique so multiple drivers
can be statically linked into the same executable. (Though few people
care about static linking these days.)

The "Implementation header dbdimp.h" in the DBI::DBD docs talks about this.

So, to answer your question, alongside your existing set of #defines
you'd add #define dbd_st_prepare_sv odbc_st_prepare_sv.

(It's probably a personal preference if you name the actual C function
odbc_st_prepare_sv, or name it dbd_st_prepare_sv and let the macro
rename it for you.)

Tim.

Sorted, thank you.

I've got a load of unicode changes for DBD::ODBC but I'm still not 100% 
about some of them. I keep getting emails from people tapping in stuff like 
Japanese (JIS) strings into their SQL and expecting it to just work across 
different platforms. To add to the confusion, ODBC (as far as Microsoft is 
concerned) already defines SQLxxxW functions, which are wide (i.e., UCS-2) 
versions of the normal ANSI functions. The current changes would 
allow connection to a unicode data source name (if I've got login6_sv), 
preparing of unicode SQL and support for unicode column and table names - 
all decode_utf8'ed to perl.


As an aside, if anyone reading this has wanted any kind of unicode support 
in DBD::ODBC (which is not already there) please get in contact with me.


Martin


Re: Passing unicode strings to prepare method and other unicode questions

2008-08-29 Thread Martin Evans

Martin Evans wrote:

Hi,

Increasingly I am getting asked unicode questions and being presented 
with unicode issues that currently don't work in DBD::ODBC. Currently 
DBD::ODBC supports the binding of unicode parameters and the returning 
of unicode result-set data.


I would like to change DBD::ODBC to support:

a) unicode column names (from NAME attribute, column_info etc)
b) unicode connection strings
c) unicode SQL
d) unicode table names (table_info etc)

Although I don't specifically need unicode connection strings I at least 
need to turn connection strings usually passed to SQLDriverConnect into 
ODBC wide characters and call SQLDriverConnectW because without this 
call other calls to SQLXXXW (wide functions) are mapped to ANSI 
functions by the ODBC driver manager. Since I need to do this it seemed 
reasonable to just go the whole way and support unicode connection strings.


(a) I have implemented and appears to be ok and (d) should be fairly 
easy too but (b) and (c) are a little trickier with the existing DBI 
interface (unless I'm mistaken).


dbd_st_prepare and dbd_db_login6 both take char* and not the original SV 
so how can I tell if the strings are utf8 encoded or not?


What I'd like to be able to do (in ODBC terms is):

In dbd_db_login6
  test if connection string has utf8 set on it
  if (utf8on)
convert utf8 to utf16 (that is what ODBC wide functions use)
call SQLDriverConnectW
  else
call SQLDriverConnect (this is the ANSI version)

Similarly in prepare where a number of people have unicode column or 
table names and hence want to do "select unicode_column_name from table".


Is this what dbd_st_prepare_sv (as opposed to dbd_st_prepare) is for? 
and should there be a dbd_db_login6_sv?


Any help would be appreciated.

Martin


Also, to use dbd_st_prepare_sv am I supposed to add something like the 
following to ODBC.xs:


#include "ODBC.h"
# the following line added:
#define dbd_st_prepare_sv dbd_st_prepare_sv

DBISTATE_DECLARE;

MODULE = DBD::ODBCPACKAGE = DBD::ODBC

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Passing unicode strings to prepare method and other unicode questions

2008-08-29 Thread Martin Evans

Hi,

Increasingly I am getting asked unicode questions and being presented 
with unicode issues that currently don't work in DBD::ODBC. Currently 
DBD::ODBC supports the binding of unicode parameters and the returning 
of unicode result-set data.


I would like to change DBD::ODBC to support:

a) unicode column names (from NAME attribute, column_info etc)
b) unicode connection strings
c) unicode SQL
d) unicode table names (table_info etc)

Although I don't specifically need unicode connection strings I at least 
need to turn connection strings usually passed to SQLDriverConnect into 
ODBC wide characters and call SQLDriverConnectW because without this 
call other calls to SQLXXXW (wide functions) are mapped to ANSI 
functions by the ODBC driver manager. Since I need to do this it seemed 
reasonable to just go the whole way and support unicode connection strings.


(a) I have implemented and appears to be ok and (d) should be fairly 
easy too but (b) and (c) are a little trickier with the existing DBI 
interface (unless I'm mistaken).


dbd_st_prepare and dbd_db_login6 both take char* and not the original SV 
so how can I tell if the strings are utf8 encoded or not?


What I'd like to be able to do (in ODBC terms is):

In dbd_db_login6
  test if connection string has utf8 set on it
  if (utf8on)
convert utf8 to utf16 (that is what ODBC wide functions use)
call SQLDriverConnectW
  else
call SQLDriverConnect (this is the ANSI version)

Similarly in prepare where a number of people have unicode column or 
table names and hence want to do "select unicode_column_name from table".


Is this what dbd_st_prepare_sv (as opposed to dbd_st_prepare) is for? 
and should there be a dbd_db_login6_sv?


Any help would be appreciated.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.22 Release Candidate 3

2008-07-31 Thread Martin Evans

John Scoles wrote:

Ok hot off the press  RC3

A few more minor fixes plus a last-minute patch for the
ora_lob_chunk_size function


It can be found in the usual spot

http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.22-RC3.tar


Cheers and thanks for all the help guys

John Scoles




Builds and tests fine for me with instant client 11g on linux against 
Oracle 10.2 XE and with Oracle 10.2 XE (client and server). Only these 
observations:


Oracle.xs:252:43: warning: "/*" within comment
Oracle.xs:253:5: warning: "/*" within comment
  due to closing comment being the wrong way around

Oracle.pm:3203: Unmatched =back
  due to missing =over underneath "Below are the limitations of Remote 
LOBs;"


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.22 Release Candidate 2

2008-07-29 Thread Martin Evans

Martin Evans wrote:

Tim Bunce wrote:

On Mon, Jul 28, 2008 at 05:35:14PM -0400, John Scoles wrote:

Gisle Aas wrote:

On Jul 28, 2008, at 18:06, John Scoles wrote:


Ok hot off the press  RC2

I have fixed as much as I can of the different compiler warnings 
hopefully this will be a little better.


You can find the RC here

http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.22-RC2.tar
I tried it on our Linux build box and got test failures in 
t/31lob_extended.t.  Any advice on what I need to do?

--Gisle

 perl -Mblib t/31lob_extended.t
DBD::Oracle::db do failed: ORA-01031: insufficient privileges (DBD 
ERROR: OCIStmtExecute) [for Statement "CREATE OR REPLACE PROCEDURE 
p_DBD_Oracle_drop_me(pc OUT
Ok that is an easy one. This is not a DBD::Oracle or DBI error it 
simply means the user you are running the test as does not have the 
privileges to create a stored procedure.  Grant your user some more 
rights and the test should pass.


Sure, but tests shouldn't fail just because of a lack of privs.
The test needs to detect that and do a skip().

Tim.




That is my fault - I wrote that test and I didn't think about 
privileges. I would attach a patch but it will probably be deleted from 
this list so I've sent it to John and if anyone else specifically wants 
it let me know.


BTW, there are a lot of other tests that will fail for users with few 
privileges - the worst being unable to create a table but there are 
others like selects from v$session. I have not patched all of those either.


Martin


For anyone interested the tests which fail for limited privileges (i.e., 
create table privilege, connect, unlimited table space) are:


28array_bind
  attempts to create a sequence - not handled
t/31lob
  attempt to select from v$session - I think this one is handled
t/50cursor
  Can't determine open_cursors from v$parameter - handled
t/56embbeded
  tries to create a type - not handled

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.22 Release Candidate 2

2008-07-29 Thread Martin Evans

Tim Bunce wrote:

On Mon, Jul 28, 2008 at 05:35:14PM -0400, John Scoles wrote:

Gisle Aas wrote:

On Jul 28, 2008, at 18:06, John Scoles wrote:


Ok hot off the press  RC2

I have fixed as much as I can of the different compiler warnings 
hopefully this will be a little better.


You can find the RC here

http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.22-RC2.tar
I tried it on our Linux build box and got test failures in 
t/31lob_extended.t.  Any advice on what I need to do?

--Gisle

 perl -Mblib t/31lob_extended.t
DBD::Oracle::db do failed: ORA-01031: insufficient privileges (DBD ERROR: 
OCIStmtExecute) [for Statement "CREATE OR REPLACE PROCEDURE 
p_DBD_Oracle_drop_me(pc OUT
Ok that is an easy one. This is not a DBD::Oracle or DBI error it simply 
means the user you are running the test as does not have the privileges to 
create a stored procedure.  Grant your user some more rights and the test 
should pass.


Sure, but tests shouldn't fail just because of a lack of privs.
The test needs to detect that and do a skip().

Tim.




That is my fault - I wrote that test and I didn't think about 
privileges. I would attach a patch but it will probably be deleted from 
this list so I've sent it to John and if anyone else specifically wants 
it let me know.


BTW, there are a lot of other tests that will fail for users with few 
privileges - the worst being unable to create a table but there are 
others like selects from v$session. I have not patched all of those either.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Clarification sought on deleting connection attributes

2008-07-28 Thread Martin Evans

Tim Bunce wrote:

On Mon, Jul 28, 2008 at 01:13:50PM +0100, Martin Evans wrote:

Hi,

From the DBI::DBD docs in "The dbd_db_login6 method" I read:

=
Here’s how you fetch them; as an example we use hostname attribute,
which can be up to 12 characters long excluding null terminator:

SV** svp;
STRLEN len;
char* hostname;

if ( (svp = DBD_ATTRIB_GET_SVP(attr, "drv_hostname", 12)) && SvTRUE(*svp)) {
   hostname = SvPV(*svp, len);
   DBD__ATTRIB_DELETE(attr, "drv_hostname", 12); /* avoid later STORE */
} else {
hostname = "localhost";
}
=

My question concerns the comment saying "avoid later STORE". If I have a 
DBD::ODBC specific attribute which a) may be specified on the connect call 
and b) is copied to any statement handles when they are created and c) may 
also be on a statement handle, should I be calling DBD__ATTRIB_DELETE? and 
what does that "avoid later STORE" really mean?


After $drh->connect(..., $attr) returns a handle DBI->connect(...)
effectively does $dbh->STORE($_, $attr->{$_}) for keys %$attr;

If the handle has already dealt with the attribute during the drivers
connect/login processing then the later STORE by the DBI is at best
redundant and could, at worse, cause problems/errors/whatever.

So the driver can delete from %$attr any attributes it doesn't want the
DBI to call STORE on later.

Tim.

p.s. patch to DBI::DBD docs most welcome!




Thanks for the clarification.

First patch appears to be:

DBD__ATTRIB_DELETE => DBD_ATTRIB_DELETE

as I can find the latter and not the former. Unless I've got something 
else wrong, I get a segfault the minute I use DBD_ATTRIB_DELETE and I 
notice DBD::Oracle does not use it. Is anyone using DBD_ATTRIB_DELETE 
and can confirm it works? I am only doing:


/* odbc_putdata_start */
{
SV **svp;
IV putdata_start_value;

DBD_ATTRIB_GET_IV(
attr, "odbc_putdata_start", strlen("odbc_putdata_start"),
svp, putdata_start_value);
if (svp) {
imp_dbh->odbc_putdata_start = putdata_start_value;
if (DBIc_TRACE(imp_dbh, 0, 0, 3))
TRACE1(imp_dbh, "Setting DBH putdata_start to %d\n",
   (int)putdata_start_value);
#ifdef THE_FOLLOWING_SEG_FAULTS
/* avoid later STORE */
DBD_ATTRIB_DELETE(attr, "odbc_putdata_start",
  strlen("odbc_putdata_start"));
#endif
}
}

Thanks

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Clarification sought on deleting connection attributes

2008-07-28 Thread Martin Evans

Hi,

From the DBI::DBD docs in "The dbd_db_login6 method" I read:

=
Here’s how you fetch them; as an example we use hostname attribute,
which can be up to 12 characters long excluding null terminator:

SV** svp;
STRLEN len;
char* hostname;

if ( (svp = DBD_ATTRIB_GET_SVP(attr, "drv_hostname", 12)) && SvTRUE(*svp)) {
   hostname = SvPV(*svp, len);
   DBD__ATTRIB_DELETE(attr, "drv_hostname", 12); /* avoid later STORE */
} else {
hostname = "localhost";
}
=

My question concerns the comment saying "avoid later STORE". If I have a 
DBD::ODBC specific attribute which a) may be specified on the connect 
call and b) is copied to any statement handles when they are created and 
c) may also be on a statement handle, should I be calling 
DBD__ATTRIB_DELETE? and what does that "avoid later STORE" really mean?


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.22 Release Candidate 1

2008-07-25 Thread Martin Evans

Martin Evans wrote:

Fails to compile for me with one instant client:

oci8.c: In function 'oci_mode':
oci8.c:242: error: 'OCI_SUPPRESS_NLS_VALIDATION' undeclared (first use 
in this function)

oci8.c:242: error: (Each undeclared identifier is reported only once
oci8.c:242: error: for each function it appears in.)
make: *** [oci8.o] Error 1

Release 10.2.0.1.0 for Linux.

Shame, because this is what my application is using. I've commented the 
offending line out for now and will get back to you on Monday as to how 
it goes.


Also, I introduced an error in my last patch:

line 25 of t/10general.t uses BAILOUT and it should be BAIL_OUT

Also, t/20select fails on this instant client - still looking into this.


This is down to:

data_diff saying:
a: UTF8 on, ASCII, 10 characters 10 bytes
b: UTF8 off, ASCII, 10 characters 10 bytes
Strings contain the same sequence of characters

I think this may be because my database is utf8 and although the data 
inserted is plain ascii ("1234567890" and "2bcdefabcd") when they are 
retrieved they come back with the utf8 flag on. I think it needs the 
following change:


--- 20select.t  (revision 11588)
+++ 20select.t  (working copy)
@@ -135,15 +135,15 @@
   $sth->{ChopBlanks} = 1;
   ok($tmp = $sth->fetchall_arrayref, 'fetchall');
   my $dif;
-  $dif = DBI::data_diff($tmp->[0][1], $data0);
-  ok(!$dif, 'first row matches');
-  diag($dif) if $dif;
-  $dif = DBI::data_diff($tmp->[1][1], $data1);
-  ok(!$dif, 'second row matches');
-  diag($dif) if $dif;
-  $dif = DBI::data_diff($tmp->[2][1], $data2);
-  ok(!$dif, 'third row matches');
-  diag($dif) if $dif;
+  if ($utf8_test) {
+  $dif = DBI::data_diff($tmp->[0][1], $data0);
+  ok(!defined($dif) || $dif eq '', 'first row matches');
+  diag($dif) if $dif;
+  } else {
+  is($tmp->[0][1], $data0, 'first row matches');
+  }
+  is($tmp->[1][1], $data1, 'second row matches');
+  is($tmp->[2][1], $data2, 'third row matches');
   }
 } # end of run_select_tests

This was my fault - sorry about that.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.22 Release Candidate 1

2008-07-25 Thread Martin Evans

Fails to compile for me with one instant client:

oci8.c: In function 'oci_mode':
oci8.c:242: error: 'OCI_SUPPRESS_NLS_VALIDATION' undeclared (first use 
in this function)

oci8.c:242: error: (Each undeclared identifier is reported only once
oci8.c:242: error: for each function it appears in.)
make: *** [oci8.o] Error 1

Release 10.2.0.1.0 for Linux.

Shame, because this is what my application is using. I've commented the 
offending line out for now and will get back to you on Monday as to how 
it goes.


Also, I introduced an error in my last patch:

line 25 of t/10general.t uses BAILOUT and it should be BAIL_OUT

Also, t/20select fails on this instant client - still looking into this.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com

John Scoles wrote:

Well here it is, a very large maintenance release of DBD::Oracle

You can find it here

http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.22-RC1.tar

Any and all testing would be greatly appreciated, but especially
testing of building against Oracle Instant Client on a range of platforms.

Looks like I got Makefile to work on most clients and platforms 
including 64 bit sun and others


As well don't bother testing this against an 8 DB or Client as  I am 
dropping support  for 8 in this version. See the POD for more details.


Here is a quick look at what has been fixed in this version

 Update to connection part of POB from  John Scoles
 Fix to test suite to bring it up to standard from Martin Evans
 Fix for memory hemorrhage in bind_param_inout_array found by Ricky 
Egeland, Fix by John Scoles

 Fix for a typo in oracle.xs from Milo van der Leij
 Fix for bugs on SPs with Lobs reported by Martin Evans, Fix by J Scoles
 Changed the way Ping works rather than using prepare and execute it now 
makes a single round trip call to DB by John Scoles
 Fix for rt.cpan.org Ticket #=37501 fail HP-UX Itanium 11.31 makefile 
also added the OS and version to the output of the Makefile.PL for 
easier debugging. from John Scoles and Rich Roemer
 Added a number of internal functions for decoding OCI debug values from 
John Scoles
 Fix for  hpux 11.23 linker error unrecognized argument on the Makefile 
from someone on CPAN forum
 Added fetch by piece for lobs, fixed persistent lobs and expanded their 
usage for LONG and LONG RAW and changed the pod to reflect the changes 
from John Scoles
 Added comment to POD on case sensitivity of ORACLE environment 
variables suggested by Gerhard Lausser
 Added patch to fix a number of harmless, but annoying, GCC warnings 
from Eric Simon
 Added (finally) ora_verbose for DBD only tracking from John Scoles and 
thanks to H.Merijn Brand

 Fix for rt.cpan.org Ticket #=32396 from John Scoles
 Fix for memory leak that snuck into 1.21 from John Scoles
 Fix for rt.cpan.org Ticket #=36069: Problem with synonym from John Scoles
 Fix for rt.cpan.org Ticket #=28811 ORA_CHAR(s) not returning correct 
length in functions and procedures from John Scoles
 Makefile.PL now working without flags for Linux 11.1.0.6 instant client 
and regular client from John Scoles, Andy Sautins, H.Merijn Brand, 
Nathan Vonnahme and Karun Dutt
 Fixed how persistent lob fetch works now uses callback correctly, from 
John Scoles & Darren Kipp





Re: ANNOUNCE: DBD::Oracle 1.22 Release Candidate 1

2008-07-25 Thread Martin Evans

Tim Bunce wrote:

On Fri, Jul 25, 2008 at 11:00:10AM +0100, Martin Evans wrote:
Thanks for all your hard work on this John. Here are some observations and 
results:


o META.yml is missing - you get the following during Makefile.PL processing:

Warning: the following files are missing in your kit:
META.yml


John, using "make dist" to make the distribution should look after that
for you. Just run "make dist" then rename the DBD-Oracle-X.YY.tar.gz
file to add in the _RC1.

Tim.




Although it creates a META.yml, it is probably not the one you want to 
use for DBD modules because it does not create a yml file containing:


build_requires:
  DBI: 1.21
configure_requires:
  DBI: 1.21

and if you don't include those, cpan testers may generate failures now 
when they don't have DBI.


BTW, as an aside, just spent the last hour looking at Devel::NYTProf 
output for one process in our application and it produced some 
interesting output highlighting some areas to look at for optimisation 
that was not so obvious via DProf. Thanks to Tim and Adam Kaplan for this.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.22 Release Candidate 1

2008-07-25 Thread Martin Evans
Thanks for all your hard work on this John. Here are some observations 
and results:


o META.yml is missing - you get the following during Makefile.PL processing:

Warning: the following files are missing in your kit:
META.yml

o there is a =back4 in the pod that should be just "=back"
Oracle.pm:2918: Unknown command paragraph "=back4"

o when you are using instant client the Makefile.PL still says:

WARNING: The tests will probably fail unless you set ORACLE_HOME yourself!

I do not think this is correct. You do not need ORACLE_HOME set with 
instant client and Oracle actually tells you not to set ORACLE_HOME with 
instant client on unix.


o there is a typo in the changes file:

connection part of POB => POD

o the segfault I reported with fetching lobs via a procedure returned 
cursor is fixed.


o ora_auto_lob confirmed as working from a statement handle created for 
a procedure returned cursor.


o all tests pass using instant client 11.1 on Linux Ubuntu 8.04 
(completely up to date with all fixes).


o all tests pass using instant client 10.2 on Linux Ubuntu 8.04 
(completely up to date with all fixes)


Attached is a new set of tests which check that bugs have been fixed. It 
includes tests for issues I found above.


I have not as yet run this release candidate with our application but I 
hope to do this before the end of the day.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com

John Scoles wrote:

Well here it is, a very large maintenance release of DBD::Oracle

You can find it here

http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.22-RC1.tar

Any and all testing would be greatly appreciated, but especially
testing of building against Oracle Instant Client on a range of platforms.

Looks like I got Makefile to work on most clients and platforms 
including 64 bit sun and others


As well don't bother testing this against an 8 DB or Client as  I am 
dropping support  for 8 in this version. See the POD for more details.


Here is a quick look at what has been fixed in this version

 Update to connection part of POB from  John Scoles
 Fix to test suite to bring it up to standard from Martin Evans
 Fix for memory hemorrhage in bind_param_inout_array found by Ricky 
Egeland, Fix by John Scoles

 Fix for a typo in oracle.xs from Milo van der Leij
 Fix for bugs on SPs with Lobs reported by Martin Evans, Fix by J Scoles
 Changed the way Ping works rather than using prepare and execute it now 
makes a single round trip call to DB by John Scoles
 Fix for rt.cpan.org Ticket #=37501 fail HP-UX Itanium 11.31 makefile 
also added the OS and version to the output of the Makefile.PL for 
easier debugging. from John Scoles and Rich Roemer
 Added a number of internal functions for decoding OCI debug values from 
John Scoles
 Fix for  hpux 11.23 linker error unrecognized argument on the Makefile 
from someone on CPAN forum
 Added fetch by piece for lobs, fixed persistent lobs and expanded their 
usage for LONG and LONG RAW and changed the pod to reflect the changes 
from John Scoles
 Added comment to POD on case sensitivity of ORACLE environment 
variables suggested by Gerhard Lausser
 Added patch to fix a number of harmless, but annoying, GCC warnings 
from Eric Simon
 Added (finally) ora_verbose for DBD only tracking from John Scoles and 
thanks to H.Merijn Brand

 Fix for rt.cpan.org Ticket #=32396 from John Scoles
 Fix for memory leak that snuck into 1.21 from John Scoles
 Fix for rt.cpan.org Ticket #=36069: Problem with synonym from John Scoles
 Fix for rt.cpan.org Ticket #=28811 ORA_CHAR(s) not returning correct 
length in functions and procedures from John Scoles
 Makefile.PL now working without flags for Linux 11.1.0.6 instant client 
and regular client from John Scoles, Andy Sautins, H.Merijn Brand, 
Nathan Vonnahme and Karun Dutt
 Fixed how persistent lob fetch works now uses callback correctly, from 
John Scoles & Darren Kipp





Re: Problems building DBD on strawberry Perl

2008-05-21 Thread Martin Evans

H.Merijn Brand wrote:

On Wed, 21 May 2008 09:21:57 +0100, Martin Evans
<[EMAIL PROTECTED]> wrote:


Hi,

I'm hoping someone here may be able to help with an outstanding ticket I 
have for DBD::ODBC.


http://rt.cpan.org/Dist/Display.html?Status=Active&Queue=DBD-ODBC
which started at http://rt.cpan.org//Ticket/Display.html?id=32789 (for 
strawberry perl)


The problem is people using cpan -i on strawberry perl as cpan sets INC 
on the command line and this overrides any changes made to INC in the 
Makefile.PL. As a result the change all DBDs make to add the path for 
DBI's header files is lost and the compile fails. This has been reported 
to me via mail 4 times in the last fortnight :-(


Anyone have any ideas if there is a way around this?


http://win32.perl.org/wiki/index.php?search=ODBC&go=Go
http://win32.perl.org/wiki/index.php?title=Vanilla_Perl_Problem_Modules
http://win32.perl.org/wiki/index.php?title=Install_DBD::Oracle_on_Strawberry_Perl

http://rt.cpan.org/Ticket/Display.html?id=32811



Unless I've missed something, there is nothing in the above except 
editing the makefile to add the DBI path. What I was really looking for 
was an automatic solution to the problem.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Problems building DBD on strawberry Perl

2008-05-21 Thread Martin Evans

Hi,

I'm hoping someone here may be able to help with an outstanding ticket I 
have for DBD::ODBC.


http://rt.cpan.org/Dist/Display.html?Status=Active&Queue=DBD-ODBC
which started at http://rt.cpan.org//Ticket/Display.html?id=32789 (for 
strawberry perl)


The problem is people using cpan -i on strawberry perl as cpan sets INC 
on the command line and this overrides any changes made to INC in the 
Makefile.PL. As a result the change all DBDs make to add the path for 
DBI's header files is lost and the compile fails. This has been reported 
to me via mail 4 times in the last fortnight :-(


Anyone have any ideas if there is a way around this?

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Function Calling Methods

2008-05-14 Thread Martin Evans

David E. Wheeler wrote:

Howdy dbi-devers,

More and more lately, I'm writing database functions in PL/pgSQL (in 
PostgreSQL) or SQL (in MySQL and others) to do the heavy lifting of 
interacting with database tables. I've been thinking that I'd really 
love a DBI method to call these functions without having to do the usual 
prepare / execute / fetch drill. Even using do() or fetchrow_array() 
seems a bit silly in this context:


my ($source_id) = $dbh->selectrow_array(
'SELECT get_source_id(?)',
undef,
$source,
);

What I'd love is a couple of DBI methods to do this for me. I recognize 
that this is currently not defined by the DBI, but I'm wondering whether 
it might be time. I've no idea whether JDBC implements such an 
interface, but I was thinking of something like this for function calls:


sub call {
my $dbh = shift;
my $func = shift;
my $places = join ', ', ('?') x @_;
return $dbh->selectrow_array(
"SELECT $func( $places )",
undef,
@_
);
}

This would allow me to call a function like so:

  my $val = $dbh->call('get_source_id', $source );

Which is a much nicer syntax. Drivers might have to modify it, of 
course; for MySQL, it should use CALL rather than SELECT.


For functions or procedures that happen to return sets or a cursor, 
perhaps we could have a separate method that just returns a statement 
handle that's ready to be fetched from?


That is slightly more complicated than it looks. DBD::Oracle already 
magics a sth into existence for reference cursors but some databases can 
return more than one result-set from a procedure - e.g., SQL Server and 
the SQLMoreResults call to move to the next one.



sub cursor {
my $dbh = shift;
my $func = shift;
my $places = join ', ', ('?') x @_;
my $sth = $dbh->prepare( "SELECT $func( $places )" );
$sth->execute(@_);
return $sth;
}

Just some ideas. I'm sure that there are more complications than this, 
but even if we could just have something that handles simple functions 
(think last_insert_id() -- eliminate this special case!), I think it'd 
go a long way toward not only simplifying the use of database functions 
in the DBI, but also toward encouraging DBI users to actually make more 
use of database functions.


Thoughts?

Thanks,

David




I have hundreds of functions and procedures in various packages in 
Oracle we use via DBD::Oracle. We have no SQL at all outside database 
functions/procedures/packages i.e., our Perl does not know anything at 
all about the tables or columns in the database and the only SQL 
executed is to prepare/execute procedures and functions. We wrap calls 
to functions and procedures like this:


$h->callPkgFunc(\%options, $pkg, $func_name, \$ret, @args);
$h->callPkgProc(\%options, $pkg, $proc_name, @parameters);

$pkg is the package name or synonym for the package.
$func_name and $proc_name are the function or procedure name.
$ret is the return value from a function - which may be a reference 
cursor for Oracle.

@args is the list of scalar args for the function.
@parameters is the list of parameters for the procedure and if any is a
reference to a scalar it is assumed to be an output parameter.

There are various %options for whether to die etc and ways of handling 
error output.


The wrapper handles creating the SQL, preparing it, binding the 
parameters, executing the func/proc and returning the output bound 
parameters.


This works well for us. We were using the same wrapper for MySQL and DB2 
but have since dropped use of MySQL and DB2. Of course, the innards of 
the wrapper were significantly different between DB2, MySQL and Oracle. 
For Oracle you end up with:


begin :1 := pkg_name.function_name(:2,:3,:4...); end;

begin pkg_name.proc_name(:1,:2,:3...); end;

The code to do this is fairly straightforward; the complexities lie in 
the differences between DBDs and databases.


A call-like method in DBI would save a little programming but for some 
DBDs it would be difficult - I'm of course thinking of DBD::ODBC. 
Although ODBC defines a {call xxx} syntax what actually happens when you 
you use it is very database dependent and I even know of ODBC drivers 
that expect you to ignore output bound reference cursors in the 
parameter list.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


DBD::ODBC 1.16 uploaded to CPAN

2008-05-14 Thread Martin Evans

Hi,

I have just uploaded DBD::ODBC 1.16 to CPAN. This release contains the 
following changes:


=head1 CHANGES

=head2 Changes in DBD::ODBC 1.16 May 13, 2008

=head3 Test Changes

Small change to the last test in 10handler.t to cope with the prepare
failing instead of the execute failing - spotted by Andrei Kovalevski
with the ODBCng Postgres driver.

Changed the 20SqlServer.t test to specifically disable MARS for the
test to check multiple active statements and added a new test to check
that when MARS_Connection is enabled multiple active statements are
allowed.

Changed the 09multi.t test to use ; as a SQL statement separator
instead of a newline.

A few minor "use of uninitialised" fixes in tests when a test fails.

In 02simple.t Output DBMS_NAME/VER, DRIVER_NAME/VER as useful
debugging aid when cpan testers report a fail.

2 new tests for odbc_query_timeout added to 03dbatt.t.

Changed 02simple.t test which did not work for Oracle due to a "select
1" in the test. Test changed to do "select 1 from dual" for Oracle.

New tests for numbered and named placeholders.

=head3 Documentation Changes

Added references to DBD::ODBC ohloh listing and markmail archives.

Added Tracing sections.

Added "Deviations from the DBI specification" section.

Moved the FAQ entries from ODBC.pm to new FAQ document. You can view
the FAQ with perldoc DBD::ODBC::FAQ.

Added provisional README.windows document.

Rewrote pod for odbc_query_timeout.

Added a README.osx.

=head3 Internal Changes

More tracing in dbdimp.c for named parameters.

#ifdeffed out odbc_get_primary_keys in dbdimp.c as it is no longer
used.  $h->func($catalog, $schema, $table, 'GetPrimaryKeys') ends up
in dbdimp.c/dbd_st_primary_keys now.

Reformatted dbdimp.c to avoid going over 80 columns.

Tracing changed. Levels reviewed and changed in many cases avoiding levels 1
and 2 which are reserved for DBI. Now using DBIc_TRACE macro internally.

=head3 Build Changes

Changes to Makefile.PL to fix a newly introduced bug with 'tr', remove
easysoft OOB detection and to try and use odbc_config and odbcinst if
we find them to aid automatic configuration. This latter change also
adds "odbc_config --cflags" to the CC line when building DBD::ODBC.

Avoid warning when testing ExtUtils::MakeMaker version and it is a
test release with an underscore in the version.

=head3 Functionality Changes

Added support for parse_trace_flag and parse_trace_flags methods and
defined a DBD::ODBC private flag 'odbcdev' as a test case.

Add support for the 'SQL' trace type. Added private trace type odbcdev
as an experimental start.

Change odbc_query_timeout attribute handling so if it is set to 0
after having set it to a non-zero value the default of no time out is
restored.

Added support for DBI's statistics_info method.

=head3 Bug Fixes

Fix bug in support for named placeholders leading to error "Can't
rebind placeholder" when there is more than one named placeholder.

Guard against scripts attempting to use a named placeholder more than
once in a single SQL statement.

If you called some methods after disconnecting (e.g., prepare/do and
any of the DBD::ODBC specific methods via "func") then no error was
generated.

Fixed issue with use of true/false as field names in a structure on Mac 
OS X 10.5 (Leopard) thanks to Hayden Stainsby.

Remove tracing of bound wide characters as it relies on
null-terminated strings that don't exist.

Fix issue causing a problem with repeatedly executing a stored
procedure which returns no result-set. SQLMoreResults was only called
on the first execute and some drivers (SQL Server) insist a procedure
is not finished until SQLMoreResults returns SQL_NO_DATA.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


bind_param for named parameters - clarification sought

2008-05-13 Thread Martin Evans

Tim,

In the thread "problem with DBD::ODBC and placeholders 
[SEC=UNCLASSIFIED]" on dbi-users recently you said:


> Drivers that support named placeholders like ":N" where N is an
> integer, could support both forms of binding: bind_param(":1",$v) and
> execute($v)
> It's not dis-allowed. Driver docs should clarify this issue.

Is it really your intention that to bind named parameter "fred" as in 
the SQL "insert into xxx values(:fred)" you call bind_param(":fred",$v)?


As it happens DBD::ODBC has a bug in its support of named parameters 
(other than :1, :2 etc) which means it did not work at all but it ALSO 
expects the name parameter ":fred" above to be passed to bind_param as 
"fred" i.e. the leading ':' is treated as an introducer and not part of 
the parameter name. I believe, but am prepared to be put right, other 
non-perl database drivers also drop the ':' when binding.


As it didn't work before I can change it to be either - just let me know.

Thanks.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Question on 'SQL' tracing

2008-05-13 Thread Martin Evans

Tim Bunce wrote:

On Mon, May 12, 2008 at 05:15:34PM +0100, Martin Evans wrote:

Martin Evans wrote:
After seeing the recent posting by Greg Sabino Mullane in subject "Log DBI 
query and values with placeholders" I realised there was something else 
DBD::ODBC was out of date with - tracing.


DBD::ODBC does not implement its own trace flags but neither does it react 
to 'SQL' tracing. If I set trace('SQL') I can see the TraceLevel is set to 
256 and I can find:


sub parse_trace_flag {
my ($h, $name) = @_;
#  0xddrL (driver, DBI, reserved, Level)
return 0x0100 if $name eq 'SQL';
return;
}

in DBI. So to test if 'SQL' is enabled we:

if (DBIc_TRACE_LEVEL(imp_xxh) & 256) {blah;}

and also, how do I set this in Perl:

$dbh->{TraceLevel} = 'SQL|3';

DBI::db=HASH(0x82ba9d4) trace level set to 0x100/3 (DBI @ 0x0/0) in DBI 
1.601-ithread (pid 4574)


$dbh->prepare()
  here when I examine DBIc_TRACE_LEVEL(imp_dbh) it is 3!

Similarly if I use:

$h->trace($h->parse_trace_flags('SQL'))

What am I doing wrong? I'm a little confused by this since DBD::Pg seems to 
expect 'SQL' to work.


I presume you've read http://search.cpan.org/~timb/DBI/DBI.pm#TRACING

What's missing is an equivalent section in the DBI::DBD docs.
At the moment there's just a passing mention at the end of an unrelated section:
http://search.cpan.org/~timb/DBI/lib/DBI/DBD.pm#The_dbd_drv_error_method

The short answer is that DBIc_TRACE_LEVEL only gives you the 'trace
level' not the 'trace flags' (which you can get via DBIc_TRACE_FLAGS).

You probably want to use the fancy DBIc_TRACE macros though...

Hopefully this chunk of DBIXS.h with help:

#define DBIc_TRACE_LEVEL_MASK   0x0000000F
#define DBIc_TRACE_FLAGS_MASK   0xFFFFFF00
#define DBIc_TRACE_SETTINGS(imp) (DBIc_DBISTATE(imp)->debug)
#define DBIc_TRACE_LEVEL(imp)   (DBIc_TRACE_SETTINGS(imp) & 
DBIc_TRACE_LEVEL_MASK)
#define DBIc_TRACE_FLAGS(imp)   (DBIc_TRACE_SETTINGS(imp) & 
DBIc_TRACE_FLAGS_MASK)
/* DBIc_TRACE_MATCHES(this, crnt): true if this 'matches' (is within) crnt
   DBIc_TRACE_MATCHES(foo, DBIc_TRACE_SETTINGS(imp))
*/
#define DBIc_TRACE_MATCHES(this, crnt)  \
(  ((crnt & DBIc_TRACE_LEVEL_MASK) >= (this & DBIc_TRACE_LEVEL_MASK)) \
|| ((crnt & DBIc_TRACE_FLAGS_MASK)  & (this & DBIc_TRACE_FLAGS_MASK)) )
/* DBIc_TRACE: true if flags match & DBI level>=flaglevel, or if DBI level>level
   This is the main trace testing macro to be used by drivers.
   (Drivers should define their own DBDtf_* macros for the top 8 bits: 
0xFF000000)
   DBIc_TRACE(imp, 0, 0, 4) = if level >= 4
   DBIc_TRACE(imp, DBDtf_FOO, 2, 4) = if tracing DBDtf_FOO & level>=2 or 
level>=4
   DBIc_TRACE(imp, DBDtf_FOO, 2, 0) = as above but never trace just due to level
*/
#define DBIc_TRACE(imp, flags, flaglevel, level)\
(  (flags && (DBIc_TRACE_FLAGS(imp) & flags) && (DBIc_TRACE_LEVEL(imp) 
>= flaglevel)) \
|| (level && DBIc_TRACE_LEVEL(imp) >= level) )

Patches to DBI::DBD very welcome.

Tim.




Thanks Tim,

I've got the gist of that now and implemented in DBD::ODBC.
Patch for DBI::DBD on its way in the next few days.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Question on 'SQL' tracing

2008-05-12 Thread Martin Evans

Martin Evans wrote:
After seeing the recent posting by Greg Sabino Mullane in subject "Log 
DBI query and values with placeholders" I realised there was something 
else DBD::ODBC was out of date with - tracing.


DBD::ODBC does not implement its own trace flags but neither does it 
react to 'SQL' tracing. If I set trace('SQL') I can see the TraceLevel 
is set to 256 and I can find:


sub parse_trace_flag {
my ($h, $name) = @_;
#  0xddrL (driver, DBI, reserved, Level)
return 0x0100 if $name eq 'SQL';
return;
}

in DBI. So to test if 'SQL' is enabled we:

if (DBIc_TRACE_LEVEL(imp_xxh) & 256) {blah;}

Is that correct? No constant I am missing?

Martin


and also, how do I set this in Perl:

$dbh->{TraceLevel} = 'SQL|3';

DBI::db=HASH(0x82ba9d4) trace level set to 0x100/3 (DBI @ 0x0/0) in 
DBI 1.601-ithread (pid 4574)


$dbh->prepare()
  here when I examine DBIc_TRACE_LEVEL(imp_dbh) it is 3!

Similarly if I use:

$h->trace($h->parse_trace_flags('SQL'))

What am I doing wrong? I'm a little confused by this since DBD::Pg seems 
to expect 'SQL' to work.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Question on 'SQL' tracing

2008-05-12 Thread Martin Evans
After seeing the recent posting by Greg Sabino Mullane in subject "Log 
DBI query and values with placeholders" I realised there was something 
else DBD::ODBC was out of date with - tracing.


DBD::ODBC does not implement its own trace flags but neither does it 
react to 'SQL' tracing. If I set trace('SQL') I can see the TraceLevel 
is set to 256 and I can find:


sub parse_trace_flag {
my ($h, $name) = @_;
#  0xddrL (driver, DBI, reserved, Level)
return 0x0100 if $name eq 'SQL';
return;
}

in DBI. So to test if 'SQL' is enabled we:

if (DBIc_TRACE_LEVEL(imp_xxh) & 256) {blah;}

Is that correct? No constant I am missing?

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.21 Release Candidate 3

2008-04-02 Thread Martin Evans

John Scoles wrote:


Seems this version is mostly problem-free. Here is RC3 for your 
enjoyment.


http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.21-RC3.tar


Hopefully the last of the C warnings are gone. Fixed some typos and made 
sure the .yml and manifest files are in the tar.


Still would like to see some more testing of this RC.

Cheers
John Scoles




I tested the following with success:

Common to all:
  database: Oracle XE 10.2.0
running on Linux Fedora Core release 5 (Bordeaux) 32-bit
  client platform: Linux Fedora Core release 5 (Bordeaux) 32-bit
  DBI: 1.604 (upgraded just to see those thread tests run)
  Perl: v5.8.8 built for i386-linux-thread-multi

Client side:
  Instant Client 10.1 - success
  Oracle 10.2.0 XE- success

I have upgraded to RC3 on my test platform which is currently running a 
lot of DBD::Oracle code and will let you know if I see anything.


Thanks for all your work on this John.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.21 Release Candidate 2

2008-04-01 Thread Martin Evans

John Scoles wrote:



Martin Evans wrote:

Martin Evans wrote:

John Scoles wrote:

Ok how about try #2

You can find it here

http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.21-RC2.tar

Got rid of the 'OCIXMLTypeCreateFromSrc' warning; it seems Oracle has 
released this in the API but not provided a prototype for it.


Also added a patch to provide faster fetch from REF CURSORs


Looking forward to testing this as we use ref cursors a lot.


Thanks!

John Scoles
[EMAIL PROTECTED]


 From perl Makefile.PL:

Warning: the following files are missing in your kit:
META.yml
Please inform the author
META.yml is nothing to worry about; this is generated from CPAN, so one 
does not need to include it in the tars when uploading.


Really? Your old META.yml contained auto generated content from 
ExtUtils::MakeMaker but I am not aware of CPAN generating anything. The 
META.YML in DBD::ODBC could not be generated and contains a lot of stuff 
I specifically put in like build_requires etc.



Previous compiler warnings mostly fixed but I've now got 2 new ones:

oci8.c:1390: warning: comparison is always false due to limited range 
of data type
oci8.c:1390: warning: comparison is always true due to limited range 
of data type


which are down to:

ub1 tz_hour;
if (  (tz_hour<0) && (tz_hour>-10) )



OK, I will fix that one. I guess it must be a signed int, not unsigned.


IIRC it came from some OCI API.


Make test succeeds for all tests executed which specifically excludes:

t/12impdata.skipped
all skipped: DBI version 1.59 does not supprt iThreads use 
version 1.602 or later

t/14threads.skipped
all skipped: DBI version 1.59 does not supprt iThreads use 
version 1.602 or later


Note minor typo "supprt".


d'oh!
Looking at the timezone compiler warning I don't think this will 
impact on us much so I'll go ahead and install this RC on my test 
system and get back to you.


Martin


Sorry, I forgot to provide platform information:

This is perl, v5.8.8 built for i386-linux-thread-multi
DBI 1.59.
Linux Fedora Core release 5 (Bordeaux)
Intel Xeon (32bit)
Oracle Database XE 10.1

I also tried to the same database as above but with instant client 
10.1.0.4-20050525 with the same result.


Martin


Ok will do another RC later today or tomorrow depending on what comes in.

Thanks Martin




Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.21 Release Candidate 2

2008-04-01 Thread Martin Evans

Martin Evans wrote:

John Scoles wrote:

Ok how about try #2

You can find it here

http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.21-RC2.tar

Got rid of the 'OCIXMLTypeCreateFromSrc' warning; it seems Oracle has 
released this in the API but not provided a prototype for it.


Also added a patch to provide faster fetch from REF CURSORs


Looking forward to testing this as we use ref cursors a lot.


Thanks!

John Scoles



 From perl Makefile.PL:

Warning: the following files are missing in your kit:
META.yml
Please inform the author

Previous compiler warnings mostly fixed but I've now got 2 new ones:

oci8.c:1390: warning: comparison is always false due to limited range of 
data type
oci8.c:1390: warning: comparison is always true due to limited range of 
data type


which are down to:

ub1 tz_hour;
if (  (tz_hour<0) && (tz_hour>-10) )

Make test succeeds for all tests executed which specifically excludes:

t/12impdata.skipped
all skipped: DBI version 1.59 does not supprt iThreads use 
version 1.602 or later

t/14threads.skipped
all skipped: DBI version 1.59 does not supprt iThreads use 
version 1.602 or later


Note minor typo "supprt".

Looking at the timezone compiler warning I don't think this will impact 
on us much so I'll go ahead and install this RC on my test system and 
get back to you.


Martin


Sorry, I forgot to provide platform information:

This is perl, v5.8.8 built for i386-linux-thread-multi
DBI 1.59.
Linux Fedora Core release 5 (Bordeaux)
Intel Xeon (32bit)
Oracle Database XE 10.1

I also tried to the same database as above but with instant client 
10.1.0.4-20050525 with the same result.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.21 Release Candidate 2

2008-04-01 Thread Martin Evans

John Scoles wrote:

Ok how about try #2

You can find it here

http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.21-RC2.tar

Got rid of the 'OCIXMLTypeCreateFromSrc' warning; it seems Oracle has 
released this in the API but not provided a prototype for it.


Also added a patch to provide faster fetch from REF CURSORs


Looking forward to testing this as we use ref cursors a lot.


Thanks!

John Scoles



From perl Makefile.PL:

Warning: the following files are missing in your kit:
META.yml
Please inform the author

Previous compiler warnings mostly fixed but I've now got 2 new ones:

oci8.c:1390: warning: comparison is always false due to limited range of 
data type
oci8.c:1390: warning: comparison is always true due to limited range of 
data type


which are down to:

ub1 tz_hour;
if (  (tz_hour<0) && (tz_hour>-10) )

Make test succeeds for all tests executed which specifically excludes:

t/12impdata.skipped
all skipped: DBI version 1.59 does not supprt iThreads use 
version 1.602 or later

t/14threads.skipped
all skipped: DBI version 1.59 does not supprt iThreads use 
version 1.602 or later


Note minor typo "supprt".

Looking at the timezone compiler warning I don't think this will impact 
on us much so I'll go ahead and install this RC on my test system and 
get back to you.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: ANNOUNCE: DBD::Oracle 1.21 Release Candidate 1

2008-03-31 Thread Martin Evans

John Scoles wrote:
The first Release Candidate for DBD::Oracle 1.21 is now ready for your 
enjoyment.


You can find it here

http://svn.perl.org/modules/dbd-oracle/trunk/DBD-Oracle-1.21-RC1.tar

Any and all testing would be greatly appreciated, but especially
testing of building against Oracle Instant Client on a range of platforms.

So far in the Windows environment there is still one warning: 
'OCIXMLTypeCreateFromSrc' as 'undefined'. While annoying it does not 
seem to cause any problems.


there are more warnings for me on Linux/Oracle 10.2.0.1.

This is a another 'Big' release with a number of new features in no 
particular order


1) Support for the Oracle 10.2 Data Interface for Persistent LOBs
   (no more LOB Locators, hooray!!)
2) Support for Native Oracle Scrollable cursors
3) Support for bind_param_inout_array for use with execute_array


thanks especially for that - looking forward to using it.


4) Support for Lobs in 'select' of Oracle Embedded Objects
5) support for direct insert of large XML character data into XMLType
   fields

See the pod for details on these new features and of course a number of 
bug fixes and a few pod enhancements just for fun.


Thanks!

John Scoles




This RC seems to fail during make for me whereas 1.20 on the same 
machine works fine:


[EMAIL PROTECTED] DBD-Oracle-1.21-RC1]$ perl Makefile.PL
Using DBI 1.59 (for perl 5.008008 on i386-linux-thread-multi) installed 
in /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/auto/DBI/


Configuring DBD::Oracle for perl 5.008008 on linux (i386-linux-thread-multi)

Remember to actually *READ* the README file! Especially if you have any 
problems.


Using Oracle in /usr/lib/oracle/xe/app/oracle/product/10.2.0/server
DEFINE _SQLPLUS_RELEASE = "1002000100" (CHAR)
Oracle version 10.2.0.1 (10.2)
Found 
/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/rdbms/demo/demo_xe.mk
Using 
/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/rdbms/demo/demo_xe.mk
Looks like Oracle XE 
(/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/rdbms/demo/demo_xe.mk)
Reading 
/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/rdbms/demo/demo_xe.mk
Your LD_LIBRARY_PATH env var is set to 
'/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/lib:/home/db2inst1/sqllib/lib'




System: perl5.008008 linux hs20-bc2-4.build.redhat.com 2.6.9-34.elsmp #1 
smp fri feb 24 16:56:28 est 2006 i686 i686 i386 gnulinux
Compiler:   gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 
-mtune=generic -fasynchronous-unwind-tables -D_REENTRANT -D_GNU_SOURCE 
-fno-strict-aliasing -pipe -Wdeclaration-after-statement 
-I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 
-I/usr/include/gdbm

Linker: /usr/bin/ld
Sysliblist: -ldl -lm -lpthread -lnsl -lirc
Oracle makefiles would have used these definitions but we override them:
  CC:   /usr/bin/gcc
  LDFLAGS:  -g
   [-g]
Linking with -L/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/lib/ 
-lclntsh -lpthread


LD_RUN_PATH=/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/lib
Using DBD::Oracle 1.21.
Using DBD::Oracle 1.21.
Using DBI 1.59 (for perl 5.008008 on i386-linux-thread-multi) installed 
in /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/auto/DBI/

Writing Makefile for DBD::Oracle

***  If you have problems...
 read all the log printed above, and the README and README.help.txt 
files.

 (Of course, you have read README by now anyway, haven't you?)

[EMAIL PROTECTED] DBD-Oracle-1.21-RC1]$ make
Skip blib/lib/DBD/Oracle.pm (unchanged)
Skip blib/lib/DBD/mkta.pl (unchanged)
Skip blib/lib/oraperl.ph (unchanged)
Skip blib/arch/auto/DBD/Oracle/dbdimp.h (unchanged)
Skip blib/arch/auto/DBD/Oracle/ocitrace.h (unchanged)
Skip blib/lib/Oraperl.pm (unchanged)
Skip blib/arch/auto/DBD/Oracle/Oracle.h (unchanged)
Skip blib/arch/auto/DBD/Oracle/mk.pm (unchanged)
Skip blib/lib/DBD/Oracle/GetInfo.pm (unchanged)
/usr/bin/perl /usr/lib/perl5/5.8.8/ExtUtils/xsubpp  -typemap 
/usr/lib/perl5/5.8.8/ExtUtils/typemap  Oracle.xs > Oracle.xsc && mv 
Oracle.xsc Oracle.c

Error: 'OCILobLocator *' not in typemap in Oracle.xs, line 249
Error: 'OCILobLocator *' not in typemap in Oracle.xs, line 303
Error: 'OCILobLocator *' not in typemap in Oracle.xs, line 383
Error: 'OCILobLocator *' not in typemap in Oracle.xs, line 431
Error: 'OCILobLocator *' not in typemap in Oracle.xs, line 449
make: *** [Oracle.c] Error 1

I believe this is because you have omitted the typemap file which xsubpp 
needs. Also the MANIFEST file is missing. If you copy typemap from 
DBD::Oracle 1.20 you get considerably further:


[EMAIL PROTECTED] DBD-Oracle-1.21-RC1]$ cp ../DBD-Oracle-1.20/typemap .
[EMAIL PROTECTED] DBD-Oracle-1.21-RC1]$ make
/usr/bin/perl /usr/lib/perl5/5.8.8/ExtUtils/xsubpp  -typemap 
/usr/lib/perl5/5.8.8/ExtUtils/typemap  Oracle.xs > Oracle.xsc && mv 
Oracle.xsc Oracle.c
gcc -c 
-I/usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-

Re: Using database handles after disconnect

2008-03-27 Thread Martin Evans

Tim Bunce wrote:

On Wed, Mar 26, 2008 at 03:01:54PM +, Martin Evans wrote:

Hi,

I was attempting to tidy up some code in DBD::ODBC and noticed quite a 
number of tests using DBIc_ACTIVE which attempt to signal an error when the 
database is not active (not connected). Having never seen these errors I 
wrote a quick test script and got no errors when calling prepare, do etc 
after disconnecting. Investigating, it appears there is a bug in DBD::ODBC 
which fails to report these errors correctly but this led me to a few 
questions:


dbd_st_prepare, dbd_db_execdirect, dbd_st_tables, dbd_st_primary_keys and a 
host of odbc private functions in dbdimp.c receive combinations of an sth, 
dbh or both and do something like:


if (!DBIc_ACTIVE(imp_dbh)) {
  error code
}

I would like to commonise some of this. I know I can get my private sth 
from an sth (D_imp_sth) and similarly for database handle (D_imp_dbh) and I 
can get my private dbh from an sth with D_imp_dbh_from_sth but the  
DBIh_SET_ERR_CHAR macros needs a handle - how do you get SV *dbh from 
imp_dbh? (DBIc_PARENT_COM perhaps?)


You can use newRV_noinc((SV*)DBIc_MY_H(imp_dbh)) but there are some caveats.
I'd recommend just passing both the h and the imp_xxh to all functions.


Thanks. The problem was that some methods have a dbh, some have an sth 
and some have both and the code to check for the connection and report 
the error was predominantly the same in case so shouted out to be a 
single function. I ended up doing:


int dbd_db_execdirect( SV *dbh,
   char *statement )
{

   if ((dbh_active = check_connection_active(dbh)) == 0) return 0;
   .
   .
}
int
   dbd_st_prepare(SV *sth, imp_sth_t *imp_sth, char *statement, SV 
*attribs)

{
   if ((dbh_active = check_connection_active(sth)) == 0) return 0;
   .
   .
}

etc for other methods.

static int check_connection_active(SV *h)
{
D_imp_xxh(h);
struct imp_dbh_st *imp_dbh = NULL;
struct imp_sth_st *imp_sth = NULL;

switch(DBIc_TYPE(imp_xxh)) {
  case DBIt_ST:
imp_sth = (struct imp_sth_st *)imp_xxh;
imp_dbh = (struct imp_dbh_st *)(DBIc_PARENT_COM(imp_sth));
break;
  case DBIt_DB:
imp_dbh = (struct imp_dbh_st *)imp_xxh;
break;
  default:
croak("panic: check_connection_active bad handle type");
}

if (!DBIc_ACTIVE(imp_dbh)) {
DBIh_SET_ERR_CHAR(
h, imp_xxh, Nullch, 1,
"Cannot allocate statement when disconnected"
" from the database", "08003", Nullch);
return 0;
}
return 1;
}

which seems to work and I hope is ok.

Also, who does DBI allow prepare in a driver to be called when the dbh was 
disconnected?


s/who/why/? The presumption is that the underlying database API will
return an error in that situation (with a suitable native error message)
so the DBI needn't waste time being pedantic and second guessing what's
allowable. (Why just disallow prepare, for example?)

Tim.


I just picked prepare as an example but yes it applies to other methods 
as well. Just thought it seemed reasonable that DBI could do this and 
save each driver from doing it but that does not work if they want to 
report different states.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Using database handles after disconnect

2008-03-26 Thread Martin Evans

Hi,

I was attempting to tidy up some code in DBD::ODBC and noticed quite a 
number of tests using DBIc_ACTIVE which attempt to signal an error when 
the database is not active (not connected). Having never seen these 
errors I wrote a quick test script and got no errors when calling 
prepare, do etc after disconnecting. Investigating, it appears there is 
a bug in DBD::ODBC which fails to report these errors correctly but this 
led me to a few questions:


dbd_st_prepare, dbd_db_execdirect, dbd_st_tables, dbd_st_primary_keys 
and a host of odbc private functions in dbdimp.c receive combinations of 
an sth, dbh or both and do something like:


if (!DBIc_ACTIVE(imp_dbh)) {
  error code
}

I would like to commonise some of this. I know I can get my private sth 
from an sth (D_imp_sth) and similarly for database handle (D_imp_dbh) 
and I can get my private dbh from an sth with D_imp_dbh_from_sth but the 
 DBIh_SET_ERR_CHAR macros needs a handle - how do you get SV *dbh from 
imp_dbh? (DBIc_PARENT_COM perhaps?)


Also, who does DBI allow prepare in a driver to be called when the dbh 
was disconnected?


Thanks

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: How to avoid bug reports from CPAN Testers who don't have DBI installed?

2008-03-10 Thread Martin Evans

Jonathan Leffler wrote:

Good Evening (well, it was evening when I was writing this),

Having just released a new version of DBD::Informix, I've gotten a couple of
bug reports from CPAN Testers about not being able to install it - which is
no vast surprise since those people typically do not have the relevant
software installed (which, in this case, means IBM Informix ESQL/C).
However, some of them don't have DBI installed either, and that absence
causes the Makefile.PL with DBD::Informix to fail.

Questions for you:

   1. Do you have a neat way of avoiding problems with DBI not being
   available, or do you just ignore the reports from CPAN Testers?


I had a long thread on cpan-testers 
(http://www.mail-archive.com/cpan-testers-discuss%40perl.org/msg00076.html) 
about failures they were reporting for DBD::ODBC. It appears the 
cpan-testers mechanism is going through a lot of changes right now, so I 
summed up their advice here:


http://www.nntp.perl.org/group/perl.dbi.dev/2007/11/msg5180.html

(although there was a slight bug in the "use" of DBI which should have 
been "require").


I amended the DBI::DBD docs slightly at:

http://search.cpan.org/~timb/DBI-1.602/lib/DBI/DBD.pm#Pure_Perl_version_of_Makefile.PL


   2. Is it me or is it silly that the CPAN Testers requirement for
   'cannot install the module because the pre-requisites are missing' is to
   exit with a 0 (success) status?  It grates horribly on my sense of what is
   appropriate to report a failure as success.


This should go away soon - once configure_requires/build_requires is 
supported and in use across all cpan-testers.



   3. Does anyone use ExtUtils::AutoInstall to assist?  I've had it in
   DBD::Informix for a while (read several years) but have just done the basic
   testing with a Perl without DBI installed, and ExtUtils::AutoInstall doesn't
   seem to help because I 'use DBI::DBD' and 'DBI' in various places.


not me.


I'm quite willing to look at your source code so if you have a mechanism in
place and working -- just tell me which module to download.  Or you can
explain in email with illustrations.

At the moment, to satisfy the CPAN Testers crowd, I think I'd have to have a
dummy Makefile.PL that (a) arranged to install DBI and then (b) ran more or
less the current Makefile.PL.  I'm sure that isn't kosher, but I'm not sure
how else to do it, and I am therefore inclined to ignore the reports, but
I'd really rather not waste their time, or my time, poking around looking at
their bogus (but automatically generated) problem reports.


I would set configure_requires and build_requires in your META.yml and 
wait for things to change. You can just exit with 0 and write no 
Makefile.PL now and the error reports will go away.



(Does anyone else get spam in [EMAIL PROTECTED] messages?  If so,
do you just reject them or do you do anything more fanciful with them?)



not me.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com

