Re: perl is not working after migrating the server to AIX 7.2
On Thu, 17 Feb 2022 09:42:38 + "MK, Karthick" via dbi-users wrote:

> Hello Brian,
>
> Thanks for your reply. The server was migrated from 6.1 TL9 SP12
> to 7.2 TL5 SP3. The DB2 version is 10.5, and the Perl version was
> 5.8.8 before the migration; it is 5.28.1 now.

Did you ever get this working? If not, one place to start is whatever
.cpan directory the build was done in. You can run "make clean all" and
then "make -wk test" to see whether the tests turn up anything odd.

--
Steven Lembark
Workhorse Computing
lemb...@wrkhors.com
+1 888 359 3508
Re: Perl script excessively executing statement
On Thu, 13 Feb 2020 06:58:37 + JohnD Blackburn wrote:

> The DBA pulled info from some cache that showed the SQL statement
> from the script was executed 12610 times.
>
> So if I were to add an "or die $!" statement after the
> fetchrow_array(), that might fix things? (or prevent it from trying
> to continue if there are errors?) Or catch errors after the execute
> statement?

Q: What are you really trying to accomplish?

Basic rules: if you need to run SQL more than once, prepare it. If you
are not sure how well it runs, you can evaluate it beforehand -- which
also spots errors and can help track down unused indexes.

As noted, generating a single query, preparing and running it once, then
extracting the rows (preferably as arrays rather than hashes) will be
the most efficient way. The database is going to do a much better job
filtering anything it can locally, in native format, before you convert
it to ASCII char strings and pull the same content across a LAN.

--
Steven Lembark                     3646 Flora Place
Workhorse Computing                St. Louis, MO 63110
lemb...@wrkhors.com                +1 888 359 3508
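[Editor's sketch] The prepare-once, execute-many, fetch-arrays pattern above looks roughly like this; the sketch assumes DBD::SQLite is available, and the table and column names are made up:

```perl
use strict;
use warnings;
use DBI;

# In-memory database so the sketch is self-contained.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 1 } );

$dbh->do( 'create table users ( id integer, name text )' );

# Prepare once, execute many: the statement is compiled a single time.
my $ins = $dbh->prepare( 'insert into users ( id, name ) values ( ?, ? )' );
$ins->execute( $_, "user$_" ) for 1 .. 3;

# One query, rows fetched as array refs (cheaper than per-row hashes),
# with the filtering done by the database, not in Perl.
my $sth = $dbh->prepare( 'select id, name from users where id >= ? order by id' );
$sth->execute( 2 );

my @rows;
while ( my $row = $sth->fetchrow_arrayref )
{
    push @rows, [ @$row ];   # copy: fetchrow_arrayref reuses its buffer
}

print "@$_\n" for @rows;
```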
Re: Fork DBD::mysql
> It is not as easy as it could appear. And also in some cases migration
> from MySQL/MariaDB to Pg could be problematic from a performance point
> of view. One Pg developer already told us that for our use case MySQL
> really is better than Pg.

Q: What about your use case is better adapted to MySQL?

There is no part of SQL that MySQL supports and Pg does not, and there
should not be any serious performance issues with Pg that leave it
slower than MySQL. There are a variety of ways that Pg can be faster
(e.g., partial indexes, exclusion constraints vs. triggers) and it will
usually be less error-prone. You may have to refactor some sloppy
design that MySQL allowed but Pg does not, but that is also in your
favor.

I really am curious to see any example of something in your database
that can be handled more gracefully in MySQL than in well-designed Pg.

--
Steven Lembark                     3646 Flora Pl
Workhorse Computing                St Louis, MO 63110
lemb...@wrkhors.com                +1 888 359 3508
Re: Perl crashing, citing libmsql__.dll as culprit(?)
On Mon, 7 Dec 2015 10:21:38 +0200 Meir Guttman <mguttm...@gmail.com> wrote:

> Dear DBI users,
>
> I have used Perl, DBI and DBD::mysql for years without a problem. But
> working on my latest project I encounter Perl often, but *not
> always(!)*, crashing with the Windows message "Perl interpreter has
> stopped working". As far as I can tell the script runs to its natural
> end. I must add that together with this phenomenon ("Post hoc ergo
> propter hoc...") I get, again sometimes but not always, an
> out-of-the-blue Devel::Peek report, and sometimes two or rarely three
> of those. These I also don't know where they came from.
>
> - Platform: Win-7 Pro
> - Perl: Strawberry Perl ver. 5.18.2.1
> - DBI: 1.63
> - DBD::mysql: 4.025
>
> I extracted the crash report from the Windows Event log, below. If I
> understand it correctly, the culprit is libmysql__.dll.
> Can somebody give me directions as to how to narrow it down to the
> package and line?

One approach is running the thing with "perl -d". The debugger may give
you a bit more information on where in the perl code you are failing.
That, or start dumping progress statements to stderr before and after
each block of DBI statements that access MySQL. This is a rather nice
example of someplace Log4perl works nicely: put "debug" logs with
statements and args all over the place. As a last resort you can use
"perl -d" and set $DB::trace = 1.

The problem is that if you're croaking in MySQL's dll then there may
not be much you can do about it from Perl's side. Might be worth
checking (or re-installing) the MySQL libs and DBD::mysql to make sure
they are in sync.

--
Steven Lembark                     3646 Flora Pl
Workhorse Computing                St Louis, MO 63110
lemb...@wrkhors.com                +1 888 359 3508
Re: Custom Configuration Best Practice
On Sun, 1 Mar 2015 17:05:59 -0800 David E. Wheeler da...@justatheory.com wrote:

> On Mar 1, 2015, at 5:00 PM, David E. Wheeler da...@justatheory.com wrote:
>
> > Fellow mod_perlers,
>
> Oy vey, major autocomplete fail. Sorry about that, DBI folks, please
> ignore.

At least it was not an insult...

If foo is called with your package followed by the original stack then
maybe you could just do the entire job in foo, leaving the handler to
return only OK:

    sub foo     { ... }  # do all of the work here
    sub handler { OK }

Short of that, I think you are stuck. The only way to constrain $foo
would be something like:

    {
        my $foo = '';

        sub foo     { ... }
        sub handler { }
    }

and at least $foo is scoped reasonably.

--
Steven Lembark                     3646 Flora Pl
Workhorse Computing                St Louis, MO 63110
lemb...@wrkhors.com                +1 888 359 3508
Re: Help with using Triggers MySql/Innodb
On Thu, 26 Feb 2015 15:31:56 + Martin J. Evans boh...@ntlworld.com wrote:

> I would actually suggest you don't use a trigger for this and also
> that you don't store the count of people in a house in a column in
> the house table. You can always calculate how many people there are
> in a house, but if you use triggers or some other mechanism to keep
> the count you run the risk of the count not actually agreeing with
> the rows with that house id. In other words, I think this is
> generally bad design.

You could get a one-step summary using a view that does a select with
count(1) as people from a join of houses and people, grouped by the
house. That leaves the houses and people separate and still gives you
one-stop shopping for the houses with accurate counts.

--
Steven Lembark                     3646 Flora Pl
Workhorse Computing                St Louis, MO 63110
lemb...@wrkhors.com                +1 888 359 3508
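[Editor's sketch] The view approach can be written out as follows, assuming hypothetical tables houses and people with a house_id join column:

```sql
-- One-stop shopping: houses with accurate, always-current counts.
-- Table and column names here are invented for illustration.
CREATE VIEW house_occupancy AS
SELECT h.house_id,
       COUNT(p.person_id) AS people
  FROM houses h
  LEFT JOIN people p ON p.house_id = h.house_id
 GROUP BY h.house_id;
```

The LEFT JOIN keeps houses with zero occupants in the result, which a trigger-maintained counter column would also have to handle.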
Re: Can't locate object method connect via package DBI
On Fri, 10 Jul 2009 11:42:58 -0500 Sanford Stein sanford.st...@cybertools.biz wrote:

> I am attempting to install a shareware library administration system
> called koha which is written in perl. I have used cpan to install a
> large number of perl modules, including DBI. But when I try to
> connect to DBI I always get the message in the above subject line. I
> wrote a very simple perl script as a test, and always get the same
> message:
>
>     use DBI;
>
>     my $dsn  = 'DBI:mysql:koha:localhost:3306';
>     my $user = 'kohaadmin';
>     my $pass = 'whatever';
>
>     $dbh = DBI->connect( $dsn, $user, $pass )
>         || die "Couldn't connect to database! $DBI::errstr\n";
>
> I have tried all the obvious things:
>
> 1. Assured that DBI is installed, with all prerequisites (based on
>    the following site:
>    http://deps.cpantesters.org/?module=DBI;perl=latest)
> 2. Assured that my perl @INC includes all the DBI stuff
> 3. Assured that I could connect to mysql from a command prompt, using
>    the same username and password. After connecting, I am able to do
>    all the standard database commands (create table, drop table,
>    insert, select) so I know that the account is valid and has the
>    proper permissions.
>
> Any other ideas?

If you have a locally-installed copy of perl (vs. /usr/bin/perl) then
you may have installed the modules into another perl's ./lib. If the
modules were installed via O/S package manager (vs. CPAN) then they may
also be in the lib dir for /usr/bin/perl, not yours.

Convince yourself that the perl you are using can see DBI:

    $ /absolute/path/to/perldoc -l DBI
    /opt/perl/5.10.0/lib/site_perl/5.10.0/i686-linux/DBI.pm

Convince yourself that DBI is really being loaded:

    $ /absolute/path/to/perl -d -e 0;

    main::(-e:1):   0
      DB<1> use DBI;

      DB<2> x DBI->VERSION
    0  1.607

      DB<3> x DBI->can( 'connect' )
    0  CODE(0x8382780)
       -> &DBI::connect in /opt/perl/5.10.0/lib/site_perl/5.10.0/i686-linux/DBI.pm:551-711

Q: Is the path what you expected it to be?

--
Steven Lembark                     85-09 90th St.
Workhorse Computing                Woodhaven, NY, 11421
lemb...@wrkhors.com                +1 888 359 3508
Re: ANNOUNCE: DBI 1.602
> Enjoy!

We try.

--
Steven Lembark                     +1 888 359 3508
Workhorse Computing                85-09 90th St
[EMAIL PROTECTED]                  Woodhaven, NY 11421
Re: Proposal for new $h->{ReadOnly} attribute
> Any thoughts?

Amen: most of the perl I use only does reads, and the read-only locking
could be a big savings.

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  +1 888 359 3508
Re: Is Perl-DBI Slow?
-- LLC [EMAIL PROTECTED]

> I've heard that perl-DBI is slow. Is this true? If so, is it because
> of Perl being an interpreted language?

Perl is compiled; it always has been. DBI is no slower than any other
application that has to talk to a database. That involves disk and/or
network I/O, transaction processing, and any latency involved in using
a time-shared system (just about all of them these days).

I have worked on large datasets with Perl using DBI for years; in most
cases my Perl runs no slower than the C that real programmers come up
with, and frequently runs faster.

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
Re: Performance issue
-- Alexander Foken [EMAIL PROTECTED]

> You generate a big number of open statement handles, each costing
> some memory. Put the SQL statements (*NOT* the statement handles)
> into a hash, like this:
>
>     use DBI;
>
>     my %statements =
>     (
>         find_users        => 'select foo,bar from users where baz=?',
>         find_documents    => 'select * from documents where id=? and type=?',
>         increment_counter => 'update counter set n=n+',
>     );

The original posting doesn't say anything about whether the statements
are used more than once. If they are all one-time use then preparing
them in advance will be an expensive waste. If they are re-used then
there might be some gain to keeping the prepared handles around.

Thing is, in situations like this you frequently do not use all of the
statements in each execution. If so then you might get some improvement
by caching the statements that actually get used via prepare_cached:

    my $sth = $dbh->prepare_cached( $statements{ $name } );

This avoids inundating the server with unnecessary statement handle
creation without adding overhead to re-prepare the statements.

You can use two hashes, one with prepared statements, the other with
one-time use, if the balance isn't skewed heavily one way or the other:

    my %reuse   = ( foo => 'select ...', ... );
    my %onetime = ( bar => 'select ...', ... );

    while( my $line = <$infile> )
    {
        my( $name, @argz ) = split; # whatever...

        # assuming the caller does an eval.

        my $sth
        = $reuse{ $name }
        ? $dbh->prepare_cached( $reuse{ $name } )
        : $dbh->prepare( $onetime{ $name } )
        or die "Bogus query: unknown '$name'";

        $sth->execute( @argz );

        ...
    }

This'll keep the cached ones around for you while still allowing the
server to quickly give up resources for one-shot queries.

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
Re: Reg: auto_increment issue.
-- Dilly raja [EMAIL PROTECTED]

> I created a table in mysql in which I set a field id in that table to
> auto_increment, and this field is the primary key of the table, so
> each time a new record is added the id increments by 1. When I delete
> the last record and insert the next record it increments the value
> from the deleted record instead of the last existing record. Is there
> any solution in which I can set these values, or reset the
> auto_increment so that it starts from the first when I delete all the
> records?

This is not a DBI issue at all, but one for MySQL. You'll probably get
better answers on a MySQL mailing list than this one for database
issues.

Auto-increment fields are really intended for use as surrogate keys. As
such, they are not intended for use on tables that will have records
deleted from them. You might do better to create a single table, call
it "sequence", with a single unsigned integer, and select its value for
update, increment it, then store it using DBI when you add records. If
you are adding multiple records and have the count in advance then
increment the sequence by that number instead of one to get a bulk
list.

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
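[Editor's sketch] The single-row sequence table described above, with invented names; the SELECT ... FOR UPDATE keeps concurrent writers from handing out the same block of ids:

```sql
-- Hypothetical single-row "sequence" table; names are invented.
CREATE TABLE sequence ( next_id INT UNSIGNED NOT NULL );
INSERT INTO sequence VALUES ( 1 );

-- Allocate a block of ids (here 10) atomically:
START TRANSACTION;
SELECT next_id FROM sequence FOR UPDATE;     -- lock the row
UPDATE sequence SET next_id = next_id + 10;  -- reserve the block
COMMIT;
```

The value read before the update is the first id of the reserved block; ids 1..10 belong to this caller, and the next caller starts at 11.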
Re: Statement handle side effects of swap_inner_handle?
> Umm, bind_columns is probably do-able if the DBI gave you a way to
> access the current row array. Then you could just bind the new
> columns to the old ones. Any real odds?

Odds? "Probably do-able":

Q1: How likely?
Q2: Any timeframe?

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
Re: Statement handle side effects of swap_inner_handle?
> > Artistic opinion: would reconnecting the damaged handle and keeping
> > it usable with an appropriate error, then losing the transaction
> > (and letting the caller deal with it as an exception), seem
> > reasonable?
>
> Yes. That would possibly suffice if all code gets its statement
> handles via prepare_cached(). But then if all code used
> prepare_cached() and connect_cached() then you wouldn't need to go
> down this road at all :)

The trick is to have the caller avoid having to use connect_cached for
themselves each time they access the database handle. Ideally they
should be able to:

    {
        my $dbh = DBIx::Priaprism( @blah );

        sub foo
        {
            do
            {
                eval { $dbh->foo( ... ) };
            }
            while( $@ =~ /Reconnected/ );
        }
    }

and have the connection up forever.

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
Re: Error handling for auto-reconnect?
Is there any way to restart the failed transaction from within the
HandleError sub? I cannot find anything in the DBI docs that allows me
to tell what the last op on the dbh was.

thanx

--
Steven Lembark                     [EMAIL PROTECTED]
Systems Management                 Phone: 888.IPSOFT8
IPsoft, Inc.                       Direct: 212.440.5529
http://www.ip-soft.net             Fax: 212.353.2454
Re: Error handling for auto-reconnect?
-- Henri Asseily [EMAIL PROTECTED]

> Are you talking about $dbh->{Statement}? If you want full
> transactional integrity and you've been sending multiple statements
> within the transaction, you could overload execute() and track every
> time there's an open tran or commit in the statement. Anything in
> between, you store in an array of statements so when there's an error
> you rerun all of them.

Overloading execute doesn't deal gracefully with selectall_*,
fetchall_*...

The best odds I can find at this point are to add a reconnect feature
that uses swap_inner_handle to re-establish the dbh and sth connections
and (hopefully) re-bind the parameters. That would allow the error
handler to effectively do something like the pseudo-DBI below:

    my $handle = $_[1];

    if( $err =~ /$connection_lost_error/o )
    {
        $_[0] = 'Connection failure';

        if( $handle->isa( 'sth' ) )
        {
            $_[0] .= ' Reconnected'
            if $handle->reconnect_statement;
        }
        elsif( $handle->isa( 'dbh' ) )
        {
            $_[0] .= ' Reconnected'
            if $handle->reconnect_database;
        }
        else
        {
            # give up

            undef
        }
    }

The point is cataloging how much can be done using the DBI interface to
rebuild what was damaged. For statement handles the current issue is
figuring out how to re-bind the parameters. Transactions will probably
be lost (for now at least), but at least a downed database connection
will leave all prepared statements still alive and re-usable.

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
Re: Statement handle side effects of swap_inner_handle?
> > Q: If the cached kids are stored in $new_dbh, wouldn't I overwrite
> > the statement handles when I performed the
> > $old_dbh->swap_inner_handle( $new_dbh )?
>
> True. If you swap_inner_handle for all ChildHandles then you'll
> automatically deal with CachedKids as well. Simple.

Some good news on that front...

> > The only thing this doesn't pull across that I can think of is the
> > current row state of each handle
>
> You could use $sth->rows and fetch to the same row count (but there's
> no guarantee you'll be getting the same values).

Artistic opinion: would reconnecting the damaged handle and keeping it
usable with an appropriate error, then losing the transaction (and
letting the caller deal with it as an exception), seem reasonable?
i.e., $dbh->rows is their problem if they know that the transaction has
failed.

> ParamValues and ParamTypes give you a way to redo bind_param(). But
> there's no equivalent for bind_param_inout(), nor for bind_columns.

Umm, bind_columns is probably do-able if the DBI gave you a way to
access the current row array. Then you could just bind the new columns
to the old ones. Any real odds?

> > Q: From what I can see, $sth->{Type} doesn't give me enough
> > information to tell if a particular sth is cached or not. Am I
> > missing something?
>
> No.
>
>     $is_cached = grep { $_ == $sth } values %{ $dbh->{CachedKids} };

thanks

> It's not restarting the query-from-hell that worries me, it's the
> fact that rows may have been added or deleted since the first select
> (assuming it is a select statement) so the application may process
> some rows twice and/or miss some rows. But if you proceed with this
> then I'd expect your nice CPAN module will give people lots of hooks
> to express what kinds of madness they're comfortable with :)

Q: Is there any chance of getting a 'seek' to handle this?

For the moment it seems as though I'd want to just abandon the
transaction and let the user restart it themselves. There is enough
context in it that a "transaction failed + reconnected" exception would
give them enough data to deal with it appropriately.

> > Aside: How likely to change is $dbh->clone as of 1.50?
>
> The clone method was added in DBI 1.33. It is very new and likely to
> change.

The more widely it's used the less likely it is to change :)

> The 'big issue' is how to deal with attribute values that have
> changed since the $dbh was created.

Q: Which ones does clone use now?

thanks for the input.

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
Statement handle side effects of swap_inner_handle?
I'm using swap_inner_handle in a HandleError sub to re-connect the
existing database handle. The trick is to re-connect if the error
warrants it, then return false to the caller, which allows the caller
to re-try the failed operation and keep going.

Q: Do I need to use $if_active = 3 to re-prepare the cached statement
handles if I use swap_inner_handle? e.g.,

    my $dbh = whatever;

    my %queryz =
    (
        foobar => q{select foo from bar where baz = ?},
        ...
    );

    ...

    my $runquery
    = sub
    {
        my $name = shift
        or croak "...";

        my $sth = $dbh->prepare_cached( $queryz{$name}, $queryargz )
        or croak "...";

        my $data
        = do
        {
            my $d = '';

            for( 1 .. $maxtries )
            {
                last if $d = $dbh->selectall_arrayref( $sth, undef, @_ );
            }

            $d
        };
    };

The selectall_* calls normally obviate the need for $if_active;
however, swapping the database handles might require re-preparing all
of the statement handles using

    my $sth = $dbh->prepare_cached( $queryz{$name}, $queryargz, 3 )

after etching the database handle's brains with swap_inner_handle, or
setting some magic "this needs to be re-prepared the next pass" flag on
all of the statement handles. Or is the best way to just

    my $kidz = $dbh->{CachedKids};
    delete @{ $kidz }{ keys %$kidz };

and be done with it? Or is some similar maintenance done automatically
by swap_inner_handle?

thanx

Reference:

    swap_inner_handle

        $rc = $h1->swap_inner_handle( $h2 );
        $rc = $h1->swap_inner_handle( $h2, $allow_reparent );

    Brain transplants for handles. You don't need to know about this
    unless you want to become a handle surgeon.

    A DBI handle is a reference to a tied hash. A tied hash has an
    *inner* hash that actually holds the contents. The
    swap_inner_handle() method swaps the inner hashes between two
    handles. The $h1 and $h2 handles still point to the same tied
    hashes, but what those hashes are tied to has been swapped. In
    effect $h1 *becomes* $h2 and vice-versa. This is powerful stuff.
    Use with care.
    As a small safety measure, the two handles, $h1 and $h2, have to
    share the same parent unless $allow_reparent is true.

    The swap_inner_handle() method was added in DBI 1.44.

    prepare_cached

        $sth = $dbh->prepare_cached($statement)
        $sth = $dbh->prepare_cached($statement, \%attr)
        $sth = $dbh->prepare_cached($statement, \%attr, $if_active)

    Like prepare except that the statement handle returned will be
    stored in a hash associated with the $dbh. If another call is made
    to prepare_cached with the same $statement and %attr parameter
    values, then the corresponding cached $sth will be returned without
    contacting the database server.

    The $if_active parameter lets you adjust the behaviour if an
    already cached statement handle is still Active. There are several
    alternatives:

    0: A warning will be generated, and finish() will be called on the
       statement handle before it is returned. This is the default
       behaviour if $if_active is not passed.
    1: finish() will be called on the statement handle, but the
       warning is suppressed.
    2: Disables any checking.
    3: The existing active statement handle will be removed from the
       cache and a new statement handle prepared and cached in its
       place. This is the safest option because it doesn't affect the
       state of the old handle, it just removes it from the cache.
       [Added in DBI 1.40]

    CachedKids (hash ref)

    For a database handle, CachedKids returns a reference to the cache
    (hash) of statement handles created by the prepare_cached method.
    For a driver handle, returns a reference to the cache (hash) of
    database handles created by the connect_cached method.

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
Re: large updates and deletes
-- Robert Hicks [EMAIL PROTECTED]

> Steven Lembark wrote:
> > -- Robert Hicks [EMAIL PROTECTED]
> >
> > > I have been handed a couple of requests to change some data. One
> > > is an update on 3.6 million records and the other is two
> > > different deletions of over 16 million rows of data. I need to be
> > > able to do these in batches as I am told that Oracle will blow up
> > > if I do them in one shot.
> >
> > The issue is rollback/redo space. If you have too large a
> > transaction pending you can run out of space and it aborts the
> > transaction. You may have sufficient space to blow off this many
> > records, but it'll be a True Pain to find out the hard way...
>
> The limit...10,000 and he had to increase the rollback space or some
> such to get it to work right.

One approach is to select the keys to be updated into a temporary
table, then update the scratch table with new values and use a join to
update the records with a count x or rowid x on the scratch space.
After the update succeeds you update a finished flag. This also has the
advantage of bookkeeping what was done.

Depending on how important the data is you might want to select the
existing records into scratch space, update the scratch records with
'new value' fields, and use a nested select to update the new table
records. The net result is a running history of what was updated when,
with the old and new values selectable.

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
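[Editor's sketch] The batching itself can be written as a delete repeated until it affects zero rows, so no single transaction outgrows the rollback/undo space; the table and column names below are invented and the ROWNUM predicate is Oracle-specific:

```sql
-- Hypothetical batched delete; run repeatedly, committing each pass,
-- until the statement reports zero rows deleted.
DELETE FROM audit_rows
 WHERE created < DATE '2000-01-01'
   AND ROWNUM <= 10000;
COMMIT;
```

Each pass holds at most 10,000 rows of undo, trading total elapsed time for a bounded transaction size.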
Re: Possible extension for DBI error handler (fwd)
-- Tim Bunce [EMAIL PROTECTED]

> >     $dbh->connect_cached( @{ $meta{ $dbh } } );
>
> If the connection has failed then connect_cached() will return a new,
> different $dbh which would be lost with the code above. To change the
> $dbh in the application to be this new $dbh you'd need to use
> swap_internal_handle(). If I was mad enough to try this I'd probably
> do:
>
>     $new_dbh = $dbh->clone;
>     $dbh->swap_internal_handle($new_dbh);
>
> So much for hacking error handlers in my mail editor :-)

Why "mad enough"? The point is that if I have a long-running (months at
a time) daemon then it'd be nice not to have to restart it each time
the database is restarted: just let HandleError deal with it and keep
on truckin'...

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
Re: Statement handle side effects of swap_inner_handle?
> The inner handle carries all the DBI info, including CachedKids. So
> swap_inner_handle is a simple and total brain transplant.

Q: Any chance of getting $sth->clone? From what I can see in the docs,
the clone method is specific to database handles (the statement's clone
would account for cached queries). Matching up the cached vs. un-cached
values is doable, but getting the bound variables (especially lexicals)
correct is going to be seriously difficult.

> > Or is some similar maintenance done automatically by
> > swap_inner_handle?
>
> swap_inner_handle just does what it says - swaps handles. That's it.
> I suspect what you'll need to do (for maximum transparency) is
> perform a swap_inner_handle on each of the old dbh's kids to replace
> the now defunct sth's with new ones freshly prepared using the new
> dbh.

The trick will be matching the values of CachedKids to those in
ChildHandles to cache the proper set of handles:

      DB<1> x $dbh->{CachedKids}
    0  HASH(0x86900a4)
       ' select now() ' => DBI::st=HASH(0x868ff24)
            empty hash

Otherwise someone using prepare and prepare_cached on the same sql
string would end up with mismatched statements after the reconnect.

Q: If the cached kids are stored in $new_dbh, wouldn't I overwrite the
statement handles when I performed the
$old_dbh->swap_inner_handle( $new_dbh )? i.e., wouldn't I get a
wholesale duplication via:

    # remake all of the old statement handles
    # using the new database handle.

    $new_dbh->prepare( $_->{Statement} )
    for grep { defined } @{ $old_dbh->{ChildHandles} };

    # replace the database handle wholesale.

    $old_dbh->swap_inner_handle( $new_dbh );

(ignoring issues with prepare_cached and bound variables for a moment)?
Or is it necessary to do something like:

    # save the existing statement handles (assuming
    # they don't survive the $dbh->swap_inner_handle).

    my @old_sth = grep { defined } @{ $old_dbh->{ChildHandles} };

    my $old_kidz = $old_dbh->{ CachedKids };

    my %cached = reverse %$old_kidz;

    # generate a new dbh and install it. after the swap,
    # the new kids are empty (based on $new_dbh having
    # no prepared statements yet).

    my $new_dbh = $old_dbh->clone;

    $old_dbh->swap_inner_handle( $new_dbh );

    my $new_kidz = $old_dbh->{ CachedKids };

    # at this point the swap has wiped out the ChildHandles
    # and CachedKids of $old_dbh; now to regenerate them.

    for my $old_sth ( @old_sth )
    {
        my $sql = $old_sth->{ Statement };

        my $new_sth = $new_dbh->prepare( $sql );

        $old_sth->swap_inner_handle( $new_sth );

        # thankfully, this thing is writeable...

        $new_kidz->{ $sql } = $new_sth
        if $cached{ $old_sth };
    }

The only thing this doesn't pull across that I can think of is the
current row state of each handle and the bound variables.

Regenerating the cached queries only would simply require saving the
keys and running

    $old_dbh->prepare_cached( $_ ) for keys %$saved_kidz_hash;

The messy part is going to be making sure I have the un-cached ones
handled properly.

Q: From what I can see, $sth->{Type} doesn't give me enough information
to tell if a particular sth is cached or not. Am I missing something?

> If so then it's kind'a handy that the DBI now has a
> $h->{ChildHandles} attribute.
>
> If you really want to get fancy you could check each $old_sth->rows
> and then fetch that many rows from the new $sth to leave it in the
> same 'place' (hopefully!) as the original. Just how mad do you want
> to be?

The only risk there is accidentally restarting the query-from-hell, but
that could be avoided with a configuration parameter to DBIx::Viagra.

Q: Is there any chance of getting a 'seek' to handle this? It would
look something like:

    $sth->seek( $sth->rows );

and save me from retrieving the entire list of rows into core on the
local side just to discard them.

I'll leave bound variables until after I've had my morning coffee.
Thankfully I'm doing this in Perl; Heaven help the poor slob who tries
this in Java...

> > Aside: How likely to change is $dbh->clone as of 1.50?
>
> The clone method was added in DBI 1.33. It is very new and likely to
> change.

thanx

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
Re: Possible extension for DBI error handler (fwd)
From DBI's pod:

> This only works for methods which return a single value and is hard
> to make reliable (avoiding infinite loops, for example) and so isn't
> recommended for general use! If you find a good use for it then
> please let me know.

The use of it would be returning a defined-but-false value (probably
'') on the way through:

    my $err_handler
    = sub
    {
        # boilerplate
        ...

        if( $err =~ /$regex_for_connection_errors/o )
        {
            log_error 'Failed database connection:', $err;

            $dbh->connect_cached( @{ $meta{ $dbh } } );

            $_[-1] = '';

            # short circuit the handler logic and let the
            # caller deal with the false return value.

            1
        }
        else
        {
            # nothing I can do about it... let the exception
            # take its course...

            0
        }
    };

This allows a polling daemon to use code like:

    POLL:
    for(;;)
    {
        if( my $now = $get_dbtime_t->() )
        {
            sleep $poll_int - ( $now % $poll_int );

            my $rowz = $dbh->selectall_arrayref( $sth, undef, @valuz )
            or next POLL;

            ...
        }

        # at this point the connection error was rendered non-fatal
        # by the handler and the database handle [hopefully] re-
        # connected to the database. in any case, we will find out
        # about it on the next polling pass...
    }

The thing about this approach is that I don't need an explicit check
for the error in my main code.

What'd really be sexy -- and seems doable from what I've seen inside
DBI -- is to use tristate logic to allow restarting the failed
operation:

    sub some_dbi_method
    {
        my $result
        = do
        {
            if( my $handler = $dbh->{meta}{err_handler} )
            {
                $handler->( $errstr, $dbh, $ret );
            }
            else
            {
                undef
            }
        };

        if( $result )
        {
            # caller dealt with it.

            return $ret;
        }
        elsif( defined $result )
        {
            # retry the failed operation by jumping back into the
            # current subroutine.

            goto foo;
        }
        else
        {
            # error handler set result to undef, which means
            # to let RaiseError, etc, follow its normal course.

            $dbh->_raise_error;
        }
    }

The user would have to manage their own timeouts in a place accessible
to the error handler (kwikhak, without boilerplate):

    my %meta = ();

    my $handle_error
    = sub
    {
        my( $err, $dbh, $cruft ) = @_;

        if( $err =~ /$regex/o )
        {
            my( $time, $trys, $argz )
            = @{ $meta{ $dbh } }{ qw( timeout retry_count connect_args ) };

            for( 1 .. $trys )
            {
                # carp would help the poor slob debugging
                # this figure out WHICH dbh is causing
                # the pain.

                carp "Retrying connection ($_)...";

                $_[0] = DBI->connect( @$argz )
                and last;

                sleep $time;
            }

            # return false-but-defined if the connection was
            # remade, otherwise give up.

            $_[0] ? '' : undef;
        }
        else
        {
            undef
        }
    };

    # override the DBI constructors with one that
    # saves the arguments for future use and then
    # redispatches them to DBI.

    for my $name ( qw( connect connect_cached connect_whatever ) )
    {
        my $dbisub = DBI->can( $name );

        my $ref = qualify_to_ref $name, __PACKAGE__;

        *$ref
        = sub
        {
            my $proto = shift;

            $_[ -1 ]->{ HandleError } ||= $handle_error;

            my $argz = [ @_ ];

            unshift @_, 'DBI';

            my $dbh = &$dbisub;

            $meta{ $dbh } = $argz;

            $dbh
        };
    }

This allows a derived class to reconnect and retry the operation
without having to wrap the DBI class at all. This also minimizes the
overhead since it does not require an AUTOLOAD or re-dispatching every
call: only the constructors are overloaded and they all pass the result
off to DBI as-is.

I could dodge overloading the DBI constructors by calling a separate
metadata storage routine. The only way to avoid wrapping all of the DBI
calls in their own evals seems to be some way for HandleError to retry
the failed operation if it thinks the underlying issue has been
resolved.

enjoi

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
Re: Memoizing non-place-holding sql statements to place-holding
> This coding may not be so difficult, but not easy for me.

The issue will be converting:

    insert into foo ( field, fields ) values ( ... );

into

    insert into foo ( field, fields ) values ( ?, ?, ? )

After that you can use prepare_cached on the sql to get a re-usable
statement handle and reduce the preparation. If the values being
inserted are not too convoluted (e.g., don't involve newlines, commas,
or quotes as data) then a few splits and regexen should allow you to
standardize the queries and use prepare_cached.

> I don't think Memoize.pm can be used, :)

No, because you don't want to short-circuit the actual call.

--
Steven Lembark                     85-09 90th Street
Workhorse Computing                Woodhaven, NY 11421
[EMAIL PROTECTED]                  1 888 359 3508
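[Editor's sketch] One way the split-and-regex normalization might look; normalize_insert is a hypothetical helper, and as noted above it will mis-parse values containing commas, quotes, or newlines:

```perl
use strict;
use warnings;

# Turn a literal INSERT into a placeholder form plus its value list.
sub normalize_insert
{
    my $sql = shift;
    my @vals;

    $sql =~ s{values\s*\(([^)]*)\)}{
        my $inner = $1;
        $inner =~ s/^\s+|\s+$//g;          # trim the values list
        @vals = split /\s*,\s*/, $inner;   # naive: breaks on embedded commas
        'values ( ' . join( ', ', ('?') x @vals ) . ' )'
    }ei;

    return ( $sql, \@vals );
}

my( $sql, $vals ) =
    normalize_insert( q{insert into foo ( a, b ) values ( 1, 'x' )} );

print "$sql\n";   # insert into foo ( a, b ) values ( ?, ? )
```

The normalized string is then a stable hash key for prepare_cached, with @$vals passed to execute.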
RE: Memoizing non-place-holding sql statements to place-holding
I've read the DBI doc but I can't understand prepare_cached. Any difference between prepare and prepare_cached? prepare_cached is useful if you are going to re-use a statement handle. It stores the statement handle in a hash keyed by the SQL statement itself. This saves re-preparing the statement if it is reused. For example:

    my $sth = $dbh->prepare_cached( 'select foo from bar' );

calls code like:

    my $dbh = shift;
    my $sql = shift;
    ...
    $cached_statements{ $sql } ||= $dbh->prepare( $sql );

This only does the prepare once (when the ||= finds a false value in %cached_statements). These are useful when you are going to re-run the same query any number of times from different parts of the code. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: DBI v2 - The Plan and How You Can Help
Which is why major DBMSs are increasingly relying on SSO-based solutions (e.g., Kerberos/LDAP authentication). Not certain if DBI is the proper level to implement that (probably needs to be down at the DBD-to-DBMS level), and doing it in a standard way may still be wishful thinking. Also, I'm not sold on the idea that a ~/.dbi file is particularly secure in that regard. Not necessarily opposed, just not convinced it's the right solution. (I don't like cleartext passwords either, but due to the variance in DBMSs' authentication methods, I don't know if DBI can solve that problem). Could use an approach like LWP: a stock authentication hook gets called if the remote side asks for authn data, something like:

    {
        RaiseError => 1,
        AuthnHook  => $coderef,
    }

would let you cover it however you like. And/or an automatic redirect via tunnel (basically something like ssh -L with an ssh module handling the redirect) might also work. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
RE: DBI-1.48 bind_param_inout produces ORA-01461 colliding with a CLOB
-- Drozdowski, Catharine [EMAIL PROTECTED] Steve, There is an Oracle SQL function TO_LOB which can be used to convert the input to a LOB. You might also check out the Oracle 9i Application Thanks. I'll check it out in 9.2.0.4. If it works that will be exactly what I need. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
DBI-1.48 bind_param_inout produces ORA-01461 colliding with a CLOB
/lembark/lib/Plugin/Language/DML/DBI.pm line 209 eval {...} called at /sandbox/lembark/lib/Plugin/Language/DML/DBI.pm line 208 Plugin::Language::DML::DBI::__ANON__[/sandbox/lembark/lib/Plugin/Language/DML/DBI.pm:213] called at /sandbox/lembark/lib/Plugin/Language/DML.pm line 146 Plugin::Language::DML::__ANON__[/sandbox/lembark/lib/Plugin/Language/DML.pm:160]('dest=HASH(0x861a30c)', 'Activins are dimeric growth and differentiation factors which...', 'LocusLink', '2005.06.16-00:00:00', 'undef', '2005.06.16-09:16:41', 'G00462083') called at /sandbox/lembark/lib/Plugin/Language/DML/MultiToken.pm line 171 eval {...} called at /sandbox/lembark/lib/Plugin/Language/DML/MultiToken.pm line 171 Plugin::Language::DML::MultiToken::__ANON__[/sandbox/lembark/lib/Plugin/Language/DML/MultiToken.pm:174] called at /sandbox/lembark/lib/Plugin/Language/DML.pm line 146 Plugin::Language::DML::__ANON__[/sandbox/lembark/lib/Plugin/Language/DML.pm:160]('dest=HASH(0x861a30c)', 'Activins are dimeric growth and differentiation factors which...', 'LocusLink', '2005.06.16-00:00:00', 'undef', '2005.06.16-09:16:41', 'G00462083') called at /sandbox/lembark/lib/Plugin/Language/DML/MultiToken.pm line 93 Plugin::Language::DML::MultiToken::__ANON__[/sandbox/lembark/lib/Plugin/Language/DML/MultiToken.pm:101] called at /sandbox/lembark/lib/Plugin/Language/DML.pm line 146 Plugin::Language::DML::__ANON__[/sandbox/lembark/lib/Plugin/Language/DML.pm:160]('ARRAY(0x8ad2380)', 'ARRAY(0x8ad1e70)', 'ARRAY(0x8b004dc)', 'ARRAY(0x8b00590)', 'ARRAY(0x8b000e0)', 'ARRAY(0x8b00188)', 'ARRAY(0x8b00230)', 'ARRAY(0x8b126e4)', 'ARRAY(0x8b1263c)', ...) called at copy-ccm-curation line 656 dest::entry_comment('dest=HASH(0x861a30c)', 'ARRAY(0x8b12c48)') called at copy-ccm-curation line 131 eval {...} called at copy-ccm-curation line 131 -- Steven Lembark 85-09 90th Street Workhorse ComputingWoodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: DBI-1.48 bind_param_inout produces ORA-01461 colliding with a CLOB
perl-5.8.7, DBI-1.48, Oracle-9.2.0.4. I'm an idiot: DBD::Oracle-1.16 -- Steven Lembark 85-09 90th Street Workhorse ComputingWoodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
RE: DBI-1.48 bind_param_inout produces ORA-01461 colliding with a CLOB
-- Reidy, Ron [EMAIL PROTECTED] Steve, See the DBD::Oracle docs, section Handling LOBs. Basically, you need to insert an EMPTY_CLOB(), return the locator, and then call ora_write_lob() to push the data into the locator. That's what I'm trying to avoid. The problem is that doing so requires pre-checking every incoming SQL statement to see if there is a CLOB in it. If there were some internal Oracle function for "insert the following string as a CLOB" then I could avoid the issue in the SQL rather than dealing with it in the closure. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
RE: DBI-1.48 bind_param_inout produces ORA-01461 colliding with a CLOB
-- Reidy, Ron [EMAIL PROTECTED] Steve, See the DBD::Oracle docs, section Handling LOBs. Basically, you need to insert an EMPTY_CLOB(), return the locator, and then call ora_write_lob() to push the data into the locator. This is a generic query handler. In order to do that I have to investigate the data type of each '?' in the input SQL, determine if it is a LOB, and handle that field specially. If there were something like CONVERT for a CLOB then I could push the knowledge of clob-ness up into the SQL. This code is also used for other databases (e.g., MySQL, which handles text fields more gracefully). -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: Oracle schema comparison
-- Steve Sapovits [EMAIL PROTECTED] Does anybody know of any good tools (free or otherwise) for comparing two Oracle schemas? I'd want to be able to compare tables, indices, sequences, etc. to see what one has and another doesn't. In an ideal world I'd also be able to compare the actual data in the tables, but the primary need is to be able to see if tables and indices match. - Alzabo. - Query user_tab_cols. -- Steven Lembark 85-09 90th Street Workhorse ComputingWoodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
RE: How can I...
-- Reidy, Ron [EMAIL PROTECTED] The data dictionary is your friend ...

    SELECT 'x'
    FROM   all_tables
    WHERE  table_name = UPPER(:name);

Depending on access you might be better off selecting from user_tables: less likely to fail and the table name is more likely to be useful in the current user's context (i.e., fewer false hits). -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: /usr/bin/ls: 0403-027 The parameter list is too long
-- Vamsi_Doddapaneni [EMAIL PROTECTED] Hi all, Thanks for your help. I am facing a new problem. Here is the code part:

    foreach $name ( `ls $opt_i/*.xml` ) {
        chomp $name;
        push @f, $name;
        print "pushing elements=$name\n";
    }
    [EMAIL PROTECTED];

Now in the directory $opt_i if there are some 10, 20 or even 100 it's working well. But now I have some 305 odd xmls and the code is EXITING WITH sh: /usr/bin/ls: 0403-027 The parameter list is too long. At the unix prompt ls *.xml is working (giving out those 305 xmls). Could anybody help me out? This has nothing to do with DBI. Try File::Find and iterate them. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
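For what it's worth, a perl-only replacement for that loop might look like the sketch below. The helper name xml_files is mine, standing in for the original script's inline loop; the point is that reading the directory directly never goes through the shell, so the kernel's argument-length limit never applies.

```perl
use strict;
use warnings;

# xml_files($dir): sorted list of *.xml basenames in $dir.
# opendir/readdir bypasses the shell entirely, so the result is
# limited only by memory, not by the ARG_MAX limit that
# `ls $dir/*.xml` trips over at a few hundred files.
sub xml_files {
    my $dir = shift;

    opendir my $dh, $dir or die "$dir: $!";
    my @f = sort grep { /\.xml$/ } readdir $dh;
    closedir $dh;

    return @f;
}

# usage, standing in for the original loop:
# my @f = xml_files( $opt_i );
# print "pushing elements=$_\n" for @f;
```

File::Find does the same job when the XML files are scattered through a directory tree rather than sitting in one directory.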
Re: db or file access?
It is interesting to me: how can storing files on disk be faster than storing them in the database? If I have many pictures in the database, is overall db performance slower? Blob access tends to be less efficient due to buffering and data transfer issues; huge tables with images require more space and tend to be slower for scanning (when necessary) due to larger record overhead (depending on whether the blobs are stored inline). Grabbing a relative path (or URI) from the database as a string doesn't require special buffering or conversions, and leaves the table smaller. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: Describe table
I just want the info describing the table that I can see in sqlplus.

    my $typz =
    q{
        select column_name, data_type
        from user_tab_columns
        where table_name = ?
    };

    my $sth = $dbh->prepare( $typz );
    ...
    my $table = ... ;

    my %name2type = do
    {
        eval
        {
            $sth->execute( $table );
            map { @$_ } @{ $sth->fetchall_arrayref }
        }
    }
    or die "Sorry, no data available for '$table'";

    # at this point you either have a name-to-type map in
    # %name2type or died trying (e.g., due to a bogus
    # table name or database failure).

-- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: inserting data coming from a hash
-- Robert [EMAIL PROTECTED] This is my hash structure:

    'Veterans Day' =>
    {
        date    => '2005',
        type    => 'US',
        federal => 'true',
        active  => 'true',
    },

Would I just use a placeholder (?) in my statement and pass it in via that? It will be in a loop as I have quite a few. Use a hash slice to extract the values in order:

    # @fieldz is the collection of keys from your
    # data that match the order of your query.
    #
    # Nice thing about this trick is that you
    # can have more keys in the data than you
    # actually insert (e.g., the data may require
    # more than one query to insert).

    my @fieldz = qw( date type federal active );

    my $sth = $dbh->prepare
    (
        q{ insert blah blah values ( ?, ?, ?, ? ) }
    );

    ...

    # sometime later you set up a hash[ref] of
    # stuff to insert and just hash-slice it
    # to get what you need out:

    my $data =
    {
        date    => ... ,
        type    => ... ,
        federal => ... ,
        active  => ... ,
        foo     => ... , # these don't hurt anything
        bar     => ... , # since @fieldz doesn't include them.
    };

    $sth->execute( @{$data}{@fieldz} );

-- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: inserting data coming from a hash
-- Robert [EMAIL PROTECTED] This is my hash structure:

    'Veterans Day' =>
    {
        date    => '2005',
        type    => 'US',
        federal => 'true',
        active  => 'true',
    },

Would I just use a placeholder (?) in my statement and pass it in via that? It will be in a loop as I have quite a few. Thoughts and suggestions would be appreciated. Aside: If you can live with re-populating the same hash you could also try binding specific hash values to the placeholders via bind_param_inout and re-cycling the hash itself. This may prove more trouble than just slicing out the values, however. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
RE: Insert help...
-- NIPP, SCOTT V \\(SBCSI\\) [EMAIL PROTECTED] OK... I have been able to solve this problem, and it was no big deal. However, the problem I am currently facing is really hosing me up... Basically, a NULL field is getting converted to a 0 in the database. This causes a comparison of the same file and the database to yield a lot of differences. Please help. I think this has got to be related to the INSERT. The table column is setup as 'smallint' allowing NULLS with the default value set to NULL. This column however is getting a '0' if the input file is empty for that field. Here is the pertinent code... Simple fix: Don't store NULL's in your database. If you pick an appropriate non-NULL value then the entire problem goes away and you can tell what the outcome of joins really is. -- Steven Lembark 85-09 90th Street Workhorse ComputingWoodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: Perl error in fetching data from Clob datatype.
-- Rishi Bansal, Japan IT [EMAIL PROTECTED] Hi, I am trying to fetch data from a CLOB datatype in the database. I am getting an error: ORA-03127: no new operations allowed until the active operation ends (DBD ERROR: OCISessionEnd). I guess I am not handling the fetched data properly. The code snippet is pasted here:

    $dbh->{LongReadLen} = 1024 * 1024 * 1024;

Q: Do you really need a 1GiB buffer? That alone may be causing you problems. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: Unable to connect to Oracle on another Unix host in a perl programming using DBI
-- Kairam, Raj [EMAIL PROTECTED] To those who could help me with a problem connecting to an Oracle database on HP-UX from within a perl script that uses DBI: I have a perl program on a unix (HP-UX) host (A) running Oracle 8.1.6. In the program I am trying to connect to another unix (HP-UX) host (B) running Oracle 8.1.7 holding a table c1dwg. The perl script that runs on host A contains these lines:

    $ENV{'ORACLE_HOME'} = '/u01/app/oracle/product/8.1.6';
    $ENV{'ORACLE_SID'}  = 'CAD';
    use DBI;
    $dbh = DBI->connect('dbi:Oracle:Mycad4prod', 'user', 'password') ||

Just for the fun of it try:

    my $dbh = DBI->connect
    (
        "dbi:Oracle:host=$ip_address;sid=$sid",
        $user,
        $pass
    );

The point here is to bypass any other issues and see if you can reach a tnslsnr on the IP address with an explicit ORACLE_SID value. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
RE: A good Perl Book
-- Reidy, Ron [EMAIL PROTECTED] 1. Programming Perl 2. Perl Cookbook 3. Object Oriented Perl 4. Extending and Embedding Perl 5. Writing CGI Applications with Perl Watch O'Reilly's list for Perl Best Practices by Damian Conway. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: Should prepare_cached() manage its total size or memory usage?
-- Mark Stosberg [EMAIL PROTECTED] Hello, I have a database application that selects about 50,000 rows one by one, does some processing in Perl, and then executes a SELECT statement for each one, with slight variations in the SQL and parameters. I was using prepare_cached() on this repeatedly called SELECT statement. Depends on how you're using the caching. If it looks something like:

    my $sth = $dbh->prepare_cached( 'select foo from bar where ( bletch = ? )' );

then this should generate a single query and re-cycle it for later use. If you are using ANYthing hardcoded specific to the query then caching will just leave a huge number of row-specific queries lying around, which is probably not what you want. Q: Are the placeholders in the existing query(s) or do they use hard-coded values? If you have a small number of placeholder queries then this is odd behavior; if not then all you've done is generate the non-recycled-cache-from-hell and the behavior is normal. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
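A toy model of why that matters, using a plain hash rather than DBI itself (toy_prepare_cached is my stand-in name): prepare_cached keys its cache on the SQL text, so a placeholder query hits the same entry forever while interpolated values mint a fresh entry per row.

```perl
use strict;
use warnings;

# Stand-in for the statement cache prepare_cached keeps per handle:
# keyed on the SQL text, "prepared" only on a cache miss.
my %cache;

sub toy_prepare_cached {
    my $sql = shift;
    $cache{ $sql } //= { sql => $sql };
    return $cache{ $sql };
}

# placeholder style: every call reuses the one shared entry.
toy_prepare_cached( 'select foo from bar where bletch = ?' ) for 1 .. 3;

# interpolated style: one row-specific entry per distinct value,
# none of which will ever be reused.
toy_prepare_cached( "select foo from bar where bletch = $_" ) for 1 .. 3;

# %cache now holds 4 entries: one shared plus three single-use ones.
```

Scale the interpolated branch up to 50,000 rows and the cache, and the memory behind it, grows by 50,000 entries instead of one.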
Re: [DBI] Re: What is wrong with select?
Basically $StateProvince is a string value in your SQL statement, so you either single-quote it yourself, or let DBI do it. Or use a placeholder and save yourself the pain of figuring it out:

    select ... where name_short = ?

will do the deed without your having to even think about quoting. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: elegent way to handle pluggable backend servers?
Any opinions? Is there another approach that I should think about? Has anyone encountered this situation before? For most selects it isn't that hard to turn metadata about the query into vendor SQL. The simplest fix may be hiding the queries behind another object that takes in generic selects and spits out SQL appropriate for the particular DBD::Foobar. I should have an alpha version of Plugin::Installer and Plugin::Language::DML available in a week. The plugins could be various collections of metadata indexed by common names (a.k.a. hash keys). You then use something like:

    my $dblink = $dbhandler_class->construct;
    ...
    $dblink->frobnicate( @argz );

with the various handler classes defining the metadata-to-query handler for that particular database. You only store the metadata once, and the if-logic is isolated into a single place. That can easily be shoved into a single factory class:

    sub construct
    {
        shift;
        my $connect_meta = shift;

        my $handler_class = do
        {
            if( ... )
            {
                'Oracle::Query'
            }
            elsif( ... )
            {
                'MySQL::Query'
            }
            else
            {
                croak "Bogus query: unable to determine class from: @$connect_meta";
            }
        };

        # caller gets back whatever the appropriate
        # class constructs with the connection data.

        $handler_class->construct( @$connect_meta );
    }

The handler classes all implement a few hooks like run_query and commit:

    my $handler = Factory->construct;
    ...
    $handler->run_query( query_name => [ query argz ] );
    $handler->commit;

The run_query could either construct SQL from scratch or munge ANSI to handle special cases. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: bind_param_inout
I wonder ... how about adding a method bind_params() like this:

    sub bind_params
    {
        my $sth = shift;
        my $pos = ( ref($_[0]) ? 1 : shift );

        for my $param ( @{ $_[0] } )
        {
            $sth->bind_param( $pos++, $param );
        }
    }

So that this could be shortened to:

    $sth->bind_params( \@execute_args );   # default position is 1
    $sth->bind_param_inout( $#execute_args + 1, \$new_id, 38 );

I know I can have the loop in the script, but this looks kinda more readable to me. Sorry if this was already rejected ;-) I've already posted a working version of this to the list; if you like I can send you a copy to test. -- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
How hard would it be for DBI to export a hash of data def's?
For example neat (see below for perldoc) 'knows' if a string is numeric or char in the database. If I could get a hash keyed by $sth->{NAME} that had, say, DBD-specific metadata it'd help quite a bit in handling some error situations. One use is returning a copy of the query with placeholders filled in for cut+paste troubleshooting; another is pretty-printing results where numeric sequences stored as strings need to be quoted for further use (i.e., /^\d+$/ is not a good enough test for quotishness when Perl gets all the data back as strings). If whatever neat uses were exported -- perhaps in DBD-specific fashion to accommodate varying levels of metadata available in the different databases -- it'd sure help...

    neat
        $str = DBI::neat($value);
        $str = DBI::neat($value, $maxlen);

    Return a string containing a neat (and tidy) representation of the
    supplied value. Strings will be quoted, although internal quotes
    will not be escaped. Values known to be numeric will be unquoted.
    Undefined (NULL) values will be shown as undef (without quotes).

    If the string is flagged internally as utf8 then double quotes will
    be used, otherwise single quotes are used and unprintable characters
    will be replaced by dot (.).

    For result strings longer than $maxlen the result string will be
    truncated to $maxlen-4 and '...' will be appended. If $maxlen is 0
    or undef, it defaults to $DBI::neat_maxlen which, in turn, defaults
    to 400.

    This function is designed to format values for human consumption.
    It is used internally by the DBI for trace output. It should
    typically not be used for formatting values for database use. (See
    also quote.)

-- Steven Lembark 85-09 90th Street Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Automatic handling of insert ... returning querys.
After noticing the syntax I've written a module that handles the process. Before putting it on CPAN I'd like to get opinions on the interface. The point is to avoid all of the bind_param_inout setup by creating some per-handle metadata with the bound array and fields to copy in from @_, out to the caller. After that the caller only sees a prep step and the execute calls. One approach uses classes derived from DBI and DBI::st to add prepare_bound to DBI and execute to DBI::st. The resulting code looks something like:

    my $dbh = blah;

    my $sql =
    q{
        insert into table ( ... )
        values ( ?, ?, ?, ? )
        returning idfield into ?
    };

    my $sth = $dbh->prepare_bound( $sql );
    ...
    my $id = $sth->execute( @insert_values );

or

    my @id = $sth->execute( @insert_values );

This seems nice in that the syntax for prepare (and prepare_bound_cached) looks rather DBI-ish. Catch is that this makes deriving other classes and using statement handles from oddly-derived classes somewhat tricky. Another approach is simply adding a post-processing step to the statement handle:

    my $dbh = blah;
    my $sth = $dbh->prepare( $sql );
    $sth->binderize( $sql );

    my $id = $sth->execute_bound( @insert_values );

The downside here is extra steps to binderize the handle and a separate execute command -- which might interfere with the statement handle if accidentally mixed with $sth->execute. Personally, I'm leaning towards the first technique even if the internals are a bit messier: NEXT makes re-dispatching the execute and prepare steps manageable and the syntax seems cleaner. Any suggestions? -- Steven Lembark 85-09 90th St. Workhorse Computing Woodhaven, NY 11421 [EMAIL PROTECTED] 1 888 359 3508
Re: Oddity using placeholders w/ Oracle
I'm guessing it's a CHAR field. Thank blank-padded-comparison-semantics. It's an FAQ. Use bind_param(..., SQL_CHAR); d'oh... -- Steven Lembark 9 Music Square South, Box 344 Workhorse Computing Nashville, TN 37203 [EMAIL PROTECTED] 1 888 359 3508
Oddity using placeholders w/ Oracle
Maybe I've just missed a known bug, but this seems kinda odd... The code below is running against Oracle-9.2.0.4 on RHE-3.0. Called with the same variable interpolated and as a placeholder it works or doesn't. I have any amount of code floating around here that uses placeholders and DOES work... Any suggestions?

    $ perl example;

        DBI Version: 1.43

        DBD Version: 1.15

        Value is: 'Q8R2H7'

        Result A: '133387'

        Result B: ''

    $ cat example;

    use DBI;
    use DBD::Oracle;
    use Data::Dumper;

    print "\n\tDBI Version: $DBI::VERSION\n";
    print "\n\tDBD Version: $DBD::Oracle::VERSION\n";

    my @connect_args =
    (
        "dbi:Oracle:host=$ENV{ORACLE_HOST};sid=$ENV{ORACLE_SID}",
        $ENV{ORACLE_USER},
        $ENV{ORACLE_PASS},
        { RaiseError => 1 }
    );

    my $dbh = DBI->connect( @connect_args );

    my $value = 'Q8R2H7';

    print "\n\tValue is: '$value'\n";

    my $a = qq{ select entry_id from entry where entry_accno = '$value' };
    my $b =  q{ select entry_id from entry where entry_accno = ? };

    {
        # this works...

        my $handle_a = $dbh->prepare( $a );
        $handle_a->execute;
        my @row = $handle_a->fetchrow_array;

        print "\n\tResult A: '@row'\n";
    }

    {
        # this doesn't

        my $handle_b = $dbh->prepare( $b );
        $handle_b->execute( $value );
        my @row = $handle_b->fetchrow_array;

        print "\n\tResult B: '@row'\n";
    }

-- Steven Lembark 9 Music Square South, Box 344 Workhorse Computing Nashville, TN 37203 [EMAIL PROTECTED] 1 888 359 3508
Fix for problems installing DBD::mysql-2.9004 with mysql-4.0.20
Perl-5.8.5 compiled happily, now updating DBD::mysql with: mysql-standard-4.0.20-pc-linux-i686 DBD-mysql-2.9004 Catch is that newer mysql distro's use ./lib and ./include for their files not ./lib/mysql or ./include/mysql. Fix is to symlink '.' to mysql in the lib directory and use an explicit '-I' with the cflags. For example: $ perl Makefile.PL I will use the following settings for compiling and testing: cflags(mysql_config) = -I/usr/local/mysql/include/mysql -mcpu=pentiumpro libs (mysql_config) = -L/usr/local/mysql/lib/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -lc -lnss_files -lnss_dns -lresolv -lc -lnss_files -lnss_dns -lresolv nocatchstderr (default ) = 0 nofoundrows (default ) = 0 ssl (guessed ) = 0 testdb(default ) = test testhost (default ) = testpassword (default ) = testuser (default ) = To change these settings, see 'perl Makefile.PL --help' and 'perldoc INSTALL'. Note (probably harmless): No library found for -lmysqlclient Using DBI 1.43 (for perl 5.008005 on i686-linux-thread-multi) installed in /opt/perl/5.8/lib/site_perl/5.8.4/i686-linux-thread-multi/auto/DBI Writing Makefile for DBD::mysql EH??? No libmysqlclient? $ find /usr/local/mysql/ -name 'libmysqlclient*' -follow /usr/local/mysql/lib/libmysqlclient.a /usr/local/mysql/lib/libmysqlclient_r.a One hak fixes it: cd /usr/local/mysql/lib; ln -fs . 
mysql; Now I get: cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm cp lib/Mysql.pm blib/lib/Mysql.pm cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod cp lib/Mysql/Statement.pm blib/lib/Mysql/Statement.pm cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm /opt/gcc/bin/gcc -c -I/opt/perl/5.8/lib/site_perl/5.8.4/i686-linux-thread-multi/auto/DBI -I/usr/local/mysql/include/mys ql -mcpu=pentiumpro -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -fno-strict-aliasing -pipe -D_LARGEFILE_SOURCE -D_FIL E_OFFSET_BITS=64 -I/usr/include/gdbm -O3 -march=pentium4 -DVERSION=\2.9004\ -DXS_VERSION=\2.9004\ -fpic -I/opt/pe rl/5.8/lib/5.8.5/i686-linux-thread-multi/CORE dbdimp.c `-mcpu=' is deprecated. Use `-mtune=' or '-march=' instead. In file included from dbdimp.c:19: dbdimp.h:21:49: mysql.h: No such file or directory dbdimp.h:22:49: errmsg.h: No such file or directory In file included from dbdimp.c:19: dbdimp.h:106: error: parse error before MYSQL dbdimp.h:106: warning: no semicolon at end of struct or union dbdimp.h:117: error: parse error before '}' token dbdimp.h:146: error: parse error before MYSQL_RES dbdimp.h:146: warning: no semicolon at end of struct or union dbdimp.h:159: error: parse error before '}' token In file included from dbdimp.c:19: snip Which was fixed via: perl Makefile.PL --cflags='-O3 -march=i686 -I/usr/local/mysql/include'; -- Steven Lembark 9 Music Square South, Box 344 Workhorse ComputingNashville, TN 37203 [EMAIL PROTECTED] 1 888 359 3508
Help in testing new releases
Perhaps you could discuss it on dbi-users in the context of DBI. Ever wish you could test a new module or version without installing it in the default perl libs? Or set up different sandboxes for various module releases to check them? The main problem at most sites is control over the default @INC dir's: they are reserved for the vendor distribution modules. Modifying one of these, especially for regression testing, requires an act of Congress (or worse a SysAdmin). Here's a fix: FindBin::libs. It walks the file tree from $FindBin::bin to '/' looking for lib directories, abs_paths them, and does a use lib of the distinct paths found. This allows you to create multiple sandboxes with different versions of code found, or have a lib directory for homegrown code outside of the vendor @INC. You can happily install new modules into, say, /usr/local/lib and run the code out of /usr/local/bin without having to use lib in every #! or module you write. This can also be used for quicker regression testing. Say a new version of DBD::FooBar comes out and you use two versions of DBI on site. If the two versions are installed into /sandbox/DBD-test1 and /sandbox/DBD-test2 you can symlink a bin directory into each of them after installing the code (the module's POD has examples of this). Because lib's closer to $Bin have higher priority you can also test a module under development by moving it up the /sandbox path to test it on a wider range of code. This also helps if you have multiple versions under development:

    ./sandbox/version-X/lib
    ./sandbox/version-X/bin -> ../bin
    ./sandbox/version-Y/lib
    ./sandbox/version-Y/bin -> ../bin

you can now test anything in bin with version X or Y. The module can be configured to search for different dir's (e.g., mylib) or return an array of the paths found (for explicit searching). The POD describes a few ways to set up sandboxes for development and testing. The module is currently on CPAN as FindBin::libs. -- Steven Lembark 2930 W. 
Palmer Workhorse Computing Chicago, IL 60647 +1 888 359 3508
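The search described above is simple enough to sketch. The real FindBin::libs also resolves symlinks with abs_path and issues the use lib for you; this hypothetical find_libs just returns the candidate directories, nearest-first.

```perl
use strict;
use warnings;
use File::Spec::Functions qw( catdir splitdir );

# find_libs($start): walk from $start up to the filesystem root,
# collecting every ./lib directory found along the way.  Directories
# closer to $start come first, matching the priority described above
# (a sandbox's own lib shadows anything higher up the tree).
sub find_libs {
    my @partz = splitdir shift;
    my @found;

    while ( @partz ) {
        my $lib = catdir @partz, 'lib';
        push @found, $lib if -d $lib;
        pop @partz;
    }

    return @found;
}

# e.g. find_libs('/sandbox/version-X/bin') would return
# /sandbox/version-X/lib before any lib higher up the tree.
```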
Re: Problem with rename function call in perl
Actually the rename function call in perl cannot span across drives. Wanted to know is there any solution for this. Not necessarily the fastest way, but this should work for most situations:

    sub copy
    {
        local $\;
        local $/;

        print STDERR "copy $_[0] -> $_[1]\n" if $verbose;

        open my $src, '<', $_[0] or die "$_[0]: $!";
        open my $dst, '>', $_[1] or die "$_[1]: $!";

        # caller gets back the result of printing to
        # the destination file handle, which is a
        # reasonable way to check for roadkill on
        # the output.

        print $dst scalar <$src>
    }

    ...

    copy qw( /foo/bletch /bar/bletch );

For really large stuff use Uri's slurp module (which uses lower-level functions to perform the writes). -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 888 359 3508
RE: Using Perl to Source variables
-- Srikanth Madani [EMAIL PROTECTED] When the script exits, this variable will no longer be available. The variables are part of the currently running program:

    int main( int argc, char **argv, char **env )

The third parameter holds the process-local environment var's. You can update them within the current process, but they are data local to the process. Updating the environment via

    $ENV{FOO} = 'bar';

has the same effect as

    export FOO='bar';

at the shell: it sets the variable for the duration of a single execution. If you want to modify the contents of ORACLE_SID or ORACLE_ASK and then have oracle run with them you need to set the environment up first then run the job:

    export ORACLE_SID='foobar';
    sqlplus scot/tiger;

or

    $ENV{ORACLE_SID} = 'foobar';
    system( 'sqlplus scot/tiger' );

will do the same thing. You could also fork/exec the sqlplus instead of letting system do it for you. -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 888 359 3508
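The inheritance (and the lack of back-propagation) is easy to demonstrate without oracle; in this sketch a perl one-liner stands in for sqlplus, and DEMO_SID is just an illustrative variable name. It assumes a POSIX shell with perl on the PATH.

```perl
use strict;
use warnings;

# Set the variable in this (parent) process's environment...
$ENV{DEMO_SID} = 'foobar';

# ...and any child started afterwards inherits it.  The backslash
# stops the parent from interpolating $ENV itself inside the
# backticks: the child process does the lookup.
my $out = `perl -e 'print \$ENV{DEMO_SID}'`;

# The reverse never works: a child's changes to its own environment
# die with the child, which is why a script cannot "export" a
# variable back into the shell that started it.
```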
Re: Simple Perl
-- Michael Higgins [EMAIL PROTECTED] Khamneian, Mahtaj wrote: I have a query that returns the following data from the db: ... operid emplid course --- [snip] ... Noting that different courses are associated with the same oprid and emplid, I need to format the output so that a single line is printed for each oprid, e.g.:

    SSPT490 12050971 37290 38380 38540 52320

I have used both fetchall_arrayref() and fetchrow_array() methods to fetch the data. I can think of a quick and dirty way of doing this, but was looking for an efficient/elegant way. Any/all help is appreciated. Maybe push them onto a hash key? (untested)

    my %results;

    for ( @$fetched )
    {
        push @{ $results{ $_->[0] . ' ' . $_->[1] } }, $_->[2];
    }

    print "$_ @{ $results{$_} }\n" for sort keys %results;

If the data can be sorted you can also just walk down the list, printing items until the first field changes:

    local $\;
    local $,;

    my $last = '';

    for my $row ( @$rowz )
    {
        if( $last ne $row->[0] )
        {
            print "\n" if $last;
            print $last = $row->[0];
        }

        print join "\t", '', @{$row}[1,2];
    }

    print "\n";

i.e., print a newline if the last record has a different $row->[0] than this one, then the new row header; for each record print the fields pasted together with tabs and a leading tab separator. The hash trick is necessary if your data is not sorted or if you have to accumulate it in one place to manipulate it before printing the stuff. Aside: You could modify the loop above for a speed gain by using fetchrow w/ bound values. That would leave you with something like while( fetch... ){ ... }. If the list is non-trivial then sorting it and returning the rows singly may be less overhead (see the DBI book, DBI pod, and numerous postings of Tim's for examples of how to use the bound values). enjoi -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 888 359 3508
Re: Query formatting problem
replacing ? for '500' replacing ? for '500' replacing ? for '12-Jul-2003' SELECT foo, bar FROM some_table WHERE name='500', AND value='500', AND date='12-Jul-2003' Sorry, but this fails in our database. See, the first use of 500 is in a varchar, the second in an integer. That's the problem: I cannot blindly replace literals in this. The issue is finding a way to replace literal '?' with properly [un]quoted values that are appropriate for the underlying database. With DBI and bind params I don't have to worry about quoting. If value is an int then the 500 gets dealt with internally. As Tim pointed out earlier, this is not trivial. One thing that might work:

    $sth->quotish;

or

    $dbh->quotish( $sql_with_placeholders );

returns an array of true/false values, one per '?' in the SQL string. If the array entry is true then the bound value will be handled as a stringy value and the literal '?' will have to be replaced with a quoted copy of the bind parameter; false means unquoted. This would be enough for generating a quotified string with something like:

    sub quotificate
    {
        my $sth = shift or croak "missing statement handle";
        my $sql = shift or croak "missing sql statement";

        @_ or carp "Odd, no bind parameters...";

        # no attempt is made here to validate that the
        # bind parameters passed on @_ are valid for
        # the types (e.g., "hello, world!" in an int
        # field) since the point of this is generating
        # SQL that matches the original bind parameters.

        if( my @quotz = $sth->quotish )
        {
            for( 0..$#quotz )
            {
                # $a is either a quoted copy of the bind
                # parameter or the original.
                # could also use shift here if anyone's
                # squeamish about $_[$_] :-)

                my $a = $quotz[$_] ? qq{'$_[$_]'} : $_[$_];

                $sql =~ s/\?/$a/;
            }
        }
        else
        {
            # no bind parm's in the original, nothing
            # more to do with it.

            log_message "No bind parameters to replace";
        }

        # caller gets back the original sql string
        # with '?' hacked into properly quoted values --
        # which may be the same string if there were no
        # '?' in it.

        $sql
    }

Question: where in the guts of DBI is the mapping of bind parameters to quotish behavior done? -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 888 359 3508
Query formatting problem
I'm trying to find a way of using DBI's internal knowledge of how bind vars are managed to format a working query for error messages. I normally use placeholders with execute or selectall*. This works wonderfully and saves quite a bit of hassle trying to interpolate the queries. Catch is that if the query fails our sysadmins don't want a placeholder-ized string with some values but a real query they can cut+paste into the system to see what happened.

A truncated example (multiple sub-queries removed) is:

    select foo, bar from some_table where name = ? and value = ? and date = ?

run as:

    $sth->execute( $a, $a, $today );

Yes, $a is used twice; in one case it is compared to a number, in the other it is used as a string. Using the placeholders makes my life simpler since the name and value are taken from the same variable but DBI handles the stringy/numeric issues for itself. The problem starts when admins have to check why something failed at 3am and don't know that the '?' are replaced as '500' followed by a naked 500 (for $a) and then the date in quotes.

What I need is something like:

    my $string = $sth->interpolated( $sql, @bindlist );

which called as:

    my $a    = 500;
    my $date = '11-Jul-1999';

    $string = $sth->interpolated( $sql, $a, $a, $date );

gives me back:

    select foo, bar from some_table where name = '500' and value = 500 and date = '11-Jul-1999'

The main issue is being able to walk the bind param list and check if the columns are numeric (naked copy of $a inserted) or not (quoted copy of $a). The alternative is having to sprintf every query I use for each combination of values and $dbh->do() them for large datasets in case any one of them fails (ugh!).

Looking through the DBI-1.38 pod, the Catalog Methods don't have anything quite like this since there is no way to query what DBI thinks of the bound parameters (vs. things like column_info which are about the returned data set).
An ideal would be some sort of $dbh->blah that returned the stringified version of whatever query was run last:

    die join "\n",
        'Bad news, boss:',
        $dbh->errstr,
        $dbh->last_query
        ;

If there is someplace w/in the SQL modules that has this please warn me; so far wandering through CPAN hasn't gotten me anywhere.

thanx

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 888 359 3508
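A minimal sketch of such an `interpolated` helper (editor's illustration: `interpolated` and its numeric-vs-stringy guess are hypothetical, not a DBI API). It guesses quoting from the Perl value, which is exactly the limitation discussed in this thread -- only the driver knows the real column types, so a proper version would need DBI's type info or $dbh->quote:

```perl
#!/usr/bin/perl
# Hypothetical interpolated(): replace each '?' with the matching bind
# value, quoting anything that doesn't look numeric. Guessing from the
# Perl value (rather than the column type) is the known weakness here.
use strict;
use warnings;
use Scalar::Util qw( looks_like_number );

sub interpolated
{
    my ( $sql, @bind ) = @_;

    for my $value ( @bind )
    {
        my $lit = looks_like_number( $value ) ? $value : "'$value'";

        $sql =~ s/\?/$lit/;     # one placeholder per bind value, in order
    }

    $sql
}

my $sql = 'select foo from some_table where name = ? and value = ? and date = ?';

print interpolated( $sql, 500, 500, '11-Jul-1999' ), "\n";
# select foo from some_table where name = 500 and value = 500 and date = '11-Jul-1999'
```

Note that both 500's come out unquoted: the Perl value alone cannot tell you that `name` is a varchar.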
Re: Query formatting problem
In Oracle, value and date are reserved words. Name might be too.

snip

> > The main issue is being able to walk the bind param list and check if the columns are numeric (naked copy of $a inserted) or not (quoted copy of $a). The alternative is having to sprintf every query I use for each combination of values and $dbh->do() them for large datasets in case any one of them fails (ugh!).
>
> If you don't know what the data types you are comparing to are, you have bigger problems than the query format.

Actually, one of the nice things about bind parameters is that I don't have to know much about the underlying database fields. If 500 is used stringily in one place and numerically in another, perl just Does The Right Thing and I don't really care about it.

However, this has nothing at all to do with running the queries. I can already run the queries perfectly with placeholders. The placeholders are a pain for admins, however, since they cannot simply paste the queries into the database client shell for later testing if I report errors. It is about reporting a cut+paste-able error string that the admins can use to test why the query failed. Having to hand-edit out the '?' and then properly quote them by hand is rather error prone.

What I need is something that will replace the original input string's '?' characters with properly [un-]quoted values from the bind parameter list so that queries which begin life as:

    select * from foo where field = ?

get turned into:

    select * from foo where field = '500'

(if 500 is being used for a stringy comparison in the SQL based on the underlying field types).

The problem is that admins who are trying to deal with a problem at 3am are in no mood to stumble around finding field types so that they can put quotes around things when they cut+paste my bind values to generate a valid query string that they can use in the client shell to test why the database spat out a particular error.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 888 359 3508
Re: Row Count failure
-- Ailes, Kevin [EMAIL PROTECTED]

> I get an error when I attempt a query for a specific CUST_ACCT using code like this:
>
>     my $sth = $dbh->prepare(
>         "select * from CHSTX_WEB_INVOICE where CUST_ACCT = $quotedcust_no ORDER BY SASH_VAL DESC"
>     ) or die "Can't prepare SQL statement: $DBI::errstr\n";
>
> The error looks like this in the apache web server log file:
>
>     [Fri Nov 15 11:12:52 2002] [error] [client 10.10.0.66] DBD::ODBC::st fetchrow_array failed: [Ardent][SQL Client][RPC] error code = 81001 (SQL-IM985)(DBD: finish/SQLFreeStmt(SQL_CLOSE) err=-1) at F:/Perl/lib/webflosub.pl line 1108.
>
> This particular CUST_ACCT number happens to have ~17,000 rows total in the file. All other customers are below 10,000 rows. I am pretty new to this type of programming, but this appears to be erroring at the datasource level. Has anyone seen this before? Is there something I can set in DBI to overcome this apparent limit? Thanks for any replies.

Does the query work in your database w/o DBI?

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
RE: Sharing a DBH via IPC::Shareable.
> Guess it just caught me off guard. I have been able to place references to objects and such without trouble. What part of DBH makes it special?

You can store objects whose behavior is intended to handle threads; otherwise the behavior becomes self-destructive. Making something an object does not immediately make it shareable. DBH stores a handle to the Oracle connection, which is not intended for threading (or forking for that matter, although it can be worked around). You have to be careful that the handle is closed only once, for example, even if multiple proc's/threads exit.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Error Handling in DBI Perl
-- arun arora [EMAIL PROTECTED]

> Hi All, I have seen some standard SybPerl programs that are using ct_callback(CS_CLIENTMSG_CB, msg_cb) and ct_callback(CS_SERVERMSG_CB, srv_cb) for the error handling. This is, I suppose, the concept of completion callbacks. I am supposed to change these programs to work with Oracle 9i on Solaris. I want to use Perl DBI. I am not able to decide what would be the best way to handle errors when coding with Perl DBI. As per my current knowledge of DBI there is no concept of completion callbacks in DBI. If someone has handled this sort of problem please help me out here.

Errors are most easily handled with block eval's:

    eval
    {
        ...     # code here dies if there is an error
    };

    if( $@ )
    {
        ...     # code here can check $@ for the message
    }

Whatever message the die passed out of the eval will be in $@, which is guaranteed to be empty if nothing died. This is the simplest way to deal with things since the eval-ed code doesn't need to check for errors itself. The DBI book and online doc's describe using this.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
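A runnable sketch of the block-eval pattern (editor's illustration: `risky` stands in for the DBI calls, which would die on their own with RaiseError set on the handle):

```perl
#!/usr/bin/perl
# Block-eval error handling: the eval traps any die from the code
# inside it; $@ carries the message, and is empty if nothing died.
use strict;
use warnings;

sub risky
{
    my $fail = shift;

    die "bad thing happened\n" if $fail;

    'ok'
}

my $result = eval { risky(0) };
print "result: $result\n";              # result: ok

eval { risky(1) };
if( $@ )
{
    print "caught: $@";                 # caught: bad thing happened
}
```

With DBI, connecting with `RaiseError => 1` makes every failed call die, so one eval around a whole transaction catches them all in one place.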
Re: Query on DBI Perl
-- arun arora [EMAIL PROTECTED] 1. Is there some other better way to do this Probably not. 2. If i use Perl DBI which is the current most stable version of PERL DBI. The current one on CPAN. 3. Are these drivers used by DBI already installed on all SUN SOLARIS machines. If not then how do i get these drivers You will have to install DBD::Oracle. See the README for DBI and DBD::Oracle for information. -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Off topic question on data modeling tools
http://search.cpan.org/author/DROLSKY/Alzabo-0.68/lib/Alzabo.pm http://www.alzabo.org/ is a better place to start. -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Backslashes
-- [EMAIL PROTECTED] Does anybody have a way to deal with user input containing a '\' from a cgi form into a mysql database? Or do I have to do regex to escape user input, escape the database input and escape the database out for every db query or insert ? Not a complaint but just looking for a better way. Given that MySQL has to interpret the strings, there isn't any way to insulate yourself at the Perl level (e.g., by using single-quotes). Short of treating the data as blobs, you're stuck escaping them for MySQL. -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: CPAN Upload: T/TI/TIMB/DBI-1.24.tar.gz
file: $CPAN/authors/id/T/TI/TIMB/DBI-1.24.tar.gz You're getting nearly as bad as I am :-) Any idea when you are going to update the DBI book (I'm going to push hard on my company to buy copies when it comes out )? Chicago just hit summer: we traded cold rainy for heat and thunderstorms. Watching the puddles reminds me of London :-) How is your place working out? -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Quick Question... I hope ;-)
-- Herbold, John W. [EMAIL PROTECTED]

> Help! I have a chance for Perl to take yet another deep foot hold in my company! However I need it to be able to hit IMS on the mainframe. We have it hitting DB2 on the mainframe already, but for a new project, we need it to be able to read IMS tables. Any Ideas?

Look up IMS on CPAN:

    http://search.cpan.org/search?mode=module&query=IMS

You'll find (among other things):

    IMS::ReleaseMgr -- Perl extension for managing IMS Release Manager packages

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Unix command in Perl
>     chdir(/oracle)    -- for changing the dir to Oracle
>     source ora_INST   -- for pointing the OPS$accnt to database INST
>
> how can I run these unix commands in a perl program? Any kind of help is Appreciated!

You don't want to run an external command for the chdir: perldoc -f chdir. For the rest, either (a) dot the shell setup before starting perl, or (b) set the environment up using %ENV, or (c) use

    system( ". ora_INST; other commands here" );

to set the environment up for the commands.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
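Option (b) can be sketched like this (editor's illustration; the ORACLE_SID and ORACLE_HOME values are placeholders for whatever the ora_INST script would set):

```perl
#!/usr/bin/perl
# Set the environment from %ENV before running anything that needs it.
# Child processes started afterwards (system, backticks) inherit it.
use strict;
use warnings;

$ENV{ORACLE_SID}  = 'INST';
$ENV{ORACLE_HOME} = '/oracle';

# a shell child sees the same environment:
chomp( my $sid = qx(echo \$ORACLE_SID) );
print "$sid\n";     # INST
```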
Re: Problem with DBD(Oracle)
>     install_driver(Oracle) failed: Can't load '/opt/perl5/lib/site_perl/5.6.1/PA-RISC2.0/auto/DBD/Oracle/Oracle.sl' for module DBD::Oracle: No such file or directory at /opt/perl5/lib/5.6.1/PA-RISC2.0/DynaLoader.pm line 206.
>     at (eval 6)[/opt/perl5/lib/site_perl/5.6.1/PA-RISC2.0/DBI.pm:565] line 3
>     Compilation failed in require at (eval 6)[/opt/perl5/lib/site_perl/5.6.1/PA-RISC2.0/DBI.pm:565] line 3.
>     Perhaps a required shared library or dll isn't installed where expected
>
> The file Oracle.sh is there in path /opt/perl5/lib/site_perl/5.6.1/PA-RISC2.0/auto/DBD/Oracle.

Check that perl -V shows it looking in /opt/perl5/lib/. You may have used another copy of perl to install the module -- under that perl's ./lib. Also check that make test succeeds in the original build directory and watch the output of make install. It may have blown up and you missed the message. If you used -MCPAN to install the module try doing it from the command line this time.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Concurreny
what the DBI policy on concurrency is. It's a very good thing when it works would probably sum it up nicely. The mechanism, however, is left up to the underlying database. MySQL implements row locking (I think?). At that point you can get standard-issue read/write lock level concurrency. -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
RE: How to call stored procedure?
-- Stephen Keller [EMAIL PROTECTED]

> Ed, you asked about stored procedures in DBD::DB2 land... According to the IBM CLI documentation, you can only call a stored procedure with parameter place holders, which implies that you must use bind_param() after you've prepared the statement instead of passing the values in directly (as you show). I've not tried this with Perl, but I would expect it to work something like the following (DBI numbers placeholders from 1):
>
>     $sth = $dbh->prepare( 'CALL MYPROC(?,?)' );
>     $sth->bind_param( 1, 'test' );
>     $sth->bind_param( 2, 'missing' );
>     $sth->execute();
>
> In the CLI, stored procedures use the same SQLExecuteDirect or SQLPrepare/SQLExecute C calls that regular queries use. The Perl driver also uses these C calls, so using CALL in a prepare statement should work in the DBD::DB2 driver just fine. One caveat, however. IBM does not document any stored procedure call examples in its driver docs. This suggests the possibility that the IBM driver does not support such calls. For confirmation or an example, you might try a search on the IBM DB2 website: http://www.ibm.com/db2.

If you prepare the query with placeholders and use exec-with-arguments, I thought DBI/DBD::Oracle dealt with binding the parameters on the fly as the query was passed through to Oracle (i.e., the bind_param's, above, are implicit). Might be worth checking the code if it saves a lot of bind_param calls throughout the original perl code.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: What's the best book for learning DBI and DBD::Oracle?
> What about this one: Programming the Perl DBI (O'Reilly - by Alligator Descartes, Tim Bunce)? Did someone read it? Is it a good book?

This is certainly the best place to start. The 'eagle' (mod_perl) book is another good thing to deal with.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: What's the best book for learning DBI and DBD::Oracle?
> What about this one: Programming the Perl DBI (O'Reilly - by Alligator Descartes, Tim Bunce)? Did someone read it? Is it a good book?

Yes. Yes. Yes. The DBI FAQ and the dbi-users mail archive also answer quite a few questions. The DBI and DBD::Oracle POD are also useful (hey: documentation that works!). Examples that come with the DBI and DBD distributions also might help.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: DBI :Installation problem
-- sheby [EMAIL PROTECTED]

> Hi, when I am trying to install DBI-1.15 on Solaris 2.8 having Perl 5.005, during the execution of the command I am getting the following error messages:
>
>     # make
>     cc -c -xO3 -xdepend -DVERSION=\"1.15\" -DXS_VERSION=\"1.15\" -KPIC -I/usr/perl5/5.00503/sun4-solaris/CORE -DDBI_NO_THREADS Perl.c
>     sh: cc: not found
>     *** Error code 1
>     make: Fatal error: Command failed for target `Perl.o'

"cc: not found" is caused by make being unable to find the compiler in your path. You probably need to update your path to include the compiler's directory. One way is to export CC=/path/to/your/compiler before running the ./Configure code.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Insert dates into oracle from perl
-- Jay Strauss [EMAIL PROTECTED]

With so little info provided, it's anyone's guess, so mine is: you aren't casting a perl string to an oracle date with to_date, i.e.:

    $date = "01/01/02";
    $dbh->do( "insert into table (datefield) values (to_date('$date','mm/dd/yy'))" );

That or just set the NLS_DATE_FORMAT and make sure the strings agree.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
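The same insert can keep quoting out of Perl entirely by letting a placeholder carry the date string (editor's sketch; `some_table` is a stand-in name and `$dbh` is assumed to be a connected handle, so the do() call is shown commented):

```perl
#!/usr/bin/perl
# A placeholder carries the date string and to_date converts it
# server-side -- no hand-quoting of $date needed in the SQL.
use strict;
use warnings;

my $date = '01/01/02';
my $sql  = q{insert into some_table (datefield) values (to_date(?, 'mm/dd/yy'))};

# with a connected handle:
# $dbh->do( $sql, undef, $date );

print "$sql\n";
```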
RE: how to install DBD-Oracle on linux
-- Sterin, Ilya [EMAIL PROTECTED] You can't do it without having the Oracle client installed. You must have the libs in order to compile and link with them. The good news is that you don't need *all* of a runnable oracle server, just the client libs. These take quite a bit less luck + time to get working. -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Anyone know of specific problems with the malloc in Sun's libc?
W/ current apache and mod_perl, perl-5.6.1, Solaris-2.8. Been having problems with DBI dropping connections. One suggestion was to override the default of no and use the malloc that comes with perl-5.6.1. Anyone know of problems or any obvious symptoms? thanx. -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: empty row sometimes?
-- Ronald J Kimball [EMAIL PROTECTED]

> if ($row1 ne "") {

You're using the wrong test. It should be:

    if( defined $row1 )
    {
        # deal with the row
        ...
    }
    else
    {
        print "Your row is empty.";
    }

If the operation may return undef at one level then you have to check that level before checking further down to avoid autovivification:

    $referent->[0][1] ||= '';

automatically creates the [0] referent so that it has someplace to store the [1] entry that's getting '' stored in it. This means that if a caller wants to know if something was stored at [0][1] it'll have to use something like:

    if( defined $ref->[0] )
    {
        print "You have a row";
    }

or

    if( defined $ref->[0][1] )
    {
        print "The value at row 0, field 1 is $ref->[0][1]";
    }
    else
    {
        print "Sorry, boss, nothing there.";
    }

Point is that checking for a false value misses some things which would be present but false ('0', '') and won't check properly for nested levels of array that have defined values in them which are present to hold undef or false.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
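Both points are easy to demonstrate without a database (editor's sketch):

```perl
#!/usr/bin/perl
# defined-ness vs. truth, and autovivification of intermediate levels.
use strict;
use warnings;

my $ref = [];

# merely *reading* $ref->[0][1] autovivifies $ref->[0] ...
my $saw = defined $ref->[0][1];     # false: nothing stored there

# ... so [0] now exists, holding an empty arrayref:
print defined $ref->[0] ? "row 0 exists\n" : "no row 0\n";      # row 0 exists

# '0' and '' are defined but false -- test defined, not truth:
my $row = '0';
print defined $row ? "row is defined\n" : "row is empty\n";     # row is defined
```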
Seems that IBM likes DBI as well...
Under Highlights of DB2: http://www-4.ibm.com/software/data/db2/perl/index.html -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Paradox
-- Marco Schlünß [EMAIL PROTECTED]

> if you have a windows-environment around (BDE needs this) and are still willing to do a DBD::BDE, i would supply you with all the resources and knowledge i could provide.

Don't have windows here (have a DOS-3.3 floppy somewhere I think, if it's still good...). I used the Borland Engine when it ran on DOS and the thing was fairly usable w/ Paradox. Guess that Marco has answered it :-)

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Paradox
-- Adam Gomes [EMAIL PROTECTED] Sorry, should have specified, under *nix. My bad. I don't think ODBC or ADO will help me much ... If they still have an API available I'd be willing to try and whip up a DBD::Paradox using it. -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Installation DBI-package and DBD package : Need ur Help
-- anandha prassanna [EMAIL PROTECTED] DBD::Pg -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: LINUX = MSSQL
-- Casper van der Harst [EMAIL PROTECTED] i'm running a Linux-server and need to get information from an Microsoft SQL-server. Is there a cheap possibility to do this. dbi w/ odbc should do it. -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
RE: Perl Questions
-- [EMAIL PROTECTED]

> Hi, while no one seems to want to give any detail, rule of thumb for me :-)
>
>     make clean
>     perl Makefile.PL
>     make
>     make test
>     make install

Save yourself some typing: make test install; dependencies build the thing and the build short-circuits unless the test succeeds.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: strange perl problems....
-- Rajeev Nalluri [EMAIL PROTECTED]

> Hi all, I am having a strange problem. I used to have a few perl scripts working fine until recently I moved to AIX 4.3.3 from 4.3.1.

Make sure to re-compile anything with C code in it. You may have run into a library mismatch.

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Keeping a database connection across a fork
-- Stephen Clouse [EMAIL PROTECTED]

> On Wed, Aug 29, 2001 at 03:23:39PM -0500, Jay Strauss wrote:
>
> > I'm trying to write a daemon that accesses my database. I thought I could create the connection in the parent, and use it in the child. But that doesn't seem to work. Below is the code. Any help would be appreciated
>
> Initiate the database connect in the child. You generally can't share database connections across processes. You *definitely* can't share Oracle connections across processes.

Actually you can. Trick is to turn on the no-destroy option (forgot the exact keyword) and let the children destroy things when they exit. It's actually fairly workable to share a database handle w/ the sub-proc's preparing their own statement handles. The main problem is that it is a True Pain (tm) to debug forks via the perl debugger and DBI isn't normally the sort of thing you want to debug with printf's :-)

-- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: Keeping a connection open accross a fork
-- Jay Strauss [EMAIL PROTECTED]

> Hi, I'm trying to write a daemon that accesses my database. I thought I could create the connection in the parent, and use it in the child. But that doesn't seem to work. Below is the code. Any help would be appreciated

Don't pass DBI objects across forks unless you're into wizardry. Pass info instead and have the child proc make its own connections.

Problem is that when things go out of scope they are DESTROY-ed. Value goes out of scope in parent and poof, it's gone. Leaves child with a copy of a dead object. There are ways around this in DBI but they require careful manglement. Mucho simpler to just create at least the statement handles in your children -- just make sure that the parent never destroys the database handle (e.g., via wait). dbh's aren't all that expensive so you can probably just make the connection in the child proc -- also insulates the child proc's from one another.

sl
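The half-remembered attribute in the post above is DBI's InactiveDestroy: set it true on the child's copy of a handle so that copy's DESTROY doesn't tear down the shared connection. The scope/DESTROY mechanics themselves can be shown with a plain object (editor's sketch; `Handle` is a stand-in, not DBI):

```perl
#!/usr/bin/perl
# DESTROY runs whenever *any* copy of an object leaves scope. The
# 'active' flag here plays the role DBI's InactiveDestroy attribute
# plays for real handles ($dbh->{InactiveDestroy} = 1 in the child).
use strict;
use warnings;

{
    package Handle;

    my $closed = 0;

    sub new          { bless { active => 1 }, shift }
    sub DESTROY      { $closed++ if $_[0]{active} }
    sub times_closed { $closed }
}

{
    my $h = Handle->new;
}   # $h leaves scope here: DESTROY "closes the connection"

print Handle::times_closed(), "\n";     # 1

{
    my $h = Handle->new;
    $h->{active} = 0;   # cf. InactiveDestroy on a child's copy
}   # this copy's DESTROY is a no-op

print Handle::times_closed(), "\n";     # still 1
```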
Re: SQL efficiency
- Stacy Mader [EMAIL PROTECTED] on 07/29/01 23:03:10 +1000: With our report/response database, fault_no's can have one, multiple or null response_no's. The SQL below returns distinct fault_no's regardless if it has one, multiple or null response's. The SQL does the job, but can you figure out it uses up a large amount of CPU (60% on an Ultra 1)? I'm only dealing with ~ 1400 rows. Is there a better method? SELECT s.fault_no, s.date_occurred, s.one_line_summary, s.category FROM stacy s WHERE (s.response_no = ( SELECT max(response_no) FROM stacy WHERE fault_no = s.fault_no ) OR response_no is null ) BTW: This is operating on an Oracle VIEW. I'm using Oracle 7.3.3 via perl5.6.0/DBI1.14 NULL's are evil in most cases -- C.J. Date explains why better than I can. You would be better off using a default value for the field and simply selecting what you want. After that the query breaks down to a simple join on -- hopefully -- indexed fields. sl
Re: DBI and HTML Display
- Kevin Diffily [EMAIL PROTECTED] on 07/15/01 19:52:15 -0400:

> Can anyone offer a simple example of retrieving information with DBI and displaying it in HTML? I am simply looking for the basic syntax. I am new to DBI but have pretty extensive Perl and Website experience.

HTML::Table

At its most basic:

    my @html = ( '<table>' );

    push @html, '<tr>' . join( '', map { "<th>$_</th>" } qw( table header strings ) ) . '</tr>';

    push @html, map { '<tr>' . join( '', map { "<td>$_</td>" } @$_ ) . '</tr>' } @{ $sth->fetchall_arrayref };

    push @html, '</table>';

    my $result = join "\n", @html;

The array makes it a bit simpler to isolate changes between the sections. Joining the result on newlines won't affect the final display but makes reading / debugging things simpler.

sl
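A self-contained version of that sketch (editor's illustration; the rows are inlined in the shape fetchall_arrayref returns -- an arrayref of row arrayrefs -- so no live handle is needed):

```perl
#!/usr/bin/perl
# Build an HTML table from rows shaped like fetchall_arrayref output.
use strict;
use warnings;

my $rows = [ [ 1, 'foo' ], [ 2, 'bar' ] ];

my @html = ( '<table>' );

push @html, '<tr>' . join( '', map { "<th>$_</th>" } qw( id name ) ) . '</tr>';

push @html, map { '<tr>' . join( '', map { "<td>$_</td>" } @$_ ) . '</tr>' } @$rows;

push @html, '</table>';

my $result = join "\n", @html;

print "$result\n";
```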
Re: (Fwd) .MDX
- Bodo Eing [EMAIL PROTECTED] on 07/09/01 09:43:41 +0200:

> Date sent: Sat, 7 Jul 2001 20:28:27 +0100
> From: Tim Bunce [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Copies to: [EMAIL PROTECTED]
> Subject: (Fwd) .MDX
> Organization: Paul Ingram Group, Software Systems, +44 1 483 862800
>
> > Greg, Apologies for unsolicited email, but I have been searching for any information that will tell me what database I'm using - with .mdx files in there - and found a message from you to a message board asking a similar question
>
> looks like index files being part of a DBMS using files in Xbase format. Check if there are .dbf and (perhaps) .dbt files around. Unfortunately, this does not tell you anything about the DBMS using these files (dBASE, MS Access, FoxPro and many more I don't know about)

Thing is that most of the Xbase formats are pretty interchangeable. If all you want to do is read the stuff just pretend it's all from DBase and see what happens; you stand a reasonable chance of reading the beasties.

sl
RE: Checking for the existence of a certain row.
- Steve Howard [EMAIL PROTECTED] on 07/07/01 17:54:18 -0500:

> do a SELECT COUNT(*) FROM tablename WHERE ... A count is almost always faster than actually returning that row, and requires even less network bandwidth to return the result. Of course, if a result of 1 is returned, the row exists.

Don't try this in Oracle; it immediately degenerates into a table scan. MySQL may handle this differently, but selecting the key (or the first field with a restriction on the PK) will be equally fast with fewer risks.

sl
Re: (Fwd) Just direct me where to go
> Our company is developing a lot of Perl scripts using DBI and DBD::Informix. We want to send these scripts to our customers, along with compiled Perl+DBI+DBD::Informix. What is the best way to do this?

If all of your customers run the same versions of the O/S, Informix, Perl, DBI, DBD and system lib's then you can just send the bin's squished in an archive on CD or over the net. Problem is that the first customer who uses a different version of the external lib's (e.g., Solaris vs HP-UX, Solaris 2.7 vs. Solaris 2.5) will be unable to load your stock binary. Your only recourse at that point is to have a copy of every system setup your customers do in order to generate bin's, which can get painful if there are a variety of hard- and O/S-alternatives out there. The installation process is not that difficult and you may find it simpler to just help them install the Perly and DBI portions from scratch.

sl
Re: fat slow program DBI?
> I'm making a small system using perl and PostgreSQL, but have some problems when clients access tables with about 2000 rows. Well, I have a piece of code where I list all the data that comes from one table (the one with 2000 rows) and the perl process becomes fat (13Mgs in RAM) and slow (can take about 7 minutes) to show the results. The program is not complicated; it's as simple as:
>
>     $query = "select * from table";
>     $sth   = $dbh->prepare($query);
>     $sth->execute();
>
>     while( @data = $sth->fetchrow_array )
>     {
>         print "<tr><td>$data[0]</td><td>$data[1]</td></tr>";
>     }
>
> I'm running RedHat 7.1 with a 2.4.2 SMP kernel on a dual Pentium III, 256M RAM pc. The clients are windows, but I have monitored the processes on a terminal and saw them fat, using quite some CPU but slow when giving data to the clients. The network is not loaded; I have transmitted a 3Mg file in less than a second.

You might get better results -- especially with a list this small -- by sucking the results out in one pass:

    my $qry = $dbh->prepare( 'select * from blah' );

    print map { ... } @{ $qry->fetchall_arrayref };

I don't know enough about postgres internals, but you might get some improvement with something more like:

    my $qry = $dbh->prepare( $blah );

    sub handler
    {
        ...

        print map { ... } @{ $qry->fetchall_arrayref };
    }

i.e., prepare the thing once and just re-use it to get the data. If you want to see where it's really sucking up the 7 minutes you can use Benchmark to track the time and CPU use during each stage and see where the majority of time lies.

sl
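Benchmark ships with perl, so timing one stage in isolation looks like this (editor's sketch; the rows are fabricated, and in the real script you would wrap each stage -- execute, fetch, format, print -- the same way):

```perl
#!/usr/bin/perl
# Time the row-formatting stage by itself with the core Benchmark
# module; comparing per-stage timings shows where the minutes go.
use strict;
use warnings;
use Benchmark qw( timeit timestr );

# 2000 fake rows standing in for the fetched data
my @rows = map { [ $_, "name$_" ] } 1 .. 2000;

my $t = timeit( 50, sub
{
    my $html = join '',
        map { "<tr><td>$_->[0]</td><td>$_->[1]</td></tr>" } @rows;
});

print timestr( $t ), "\n";
```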
RE: Shorthand for $dbh-prepare and $sth-execute?
> Have you ever tried selecting about a thousand or so rows? I just wonder if it still would work, since the db driver really can't know how many rows it fetched until they are all fetched. As I mentioned, some drivers fetch so many rows into a cache for better efficiency, but I don't know how DBD::mysql does it since I rarely use it; I wonder if that's the case for you.

I have code that regularly has to select 1000's of rows from oracle -- though I normally use fetchall_arrayref w/ prepared queries. There are quite a few cases where preparing queries isn't all that helpful, however (e.g., when the table must be chosen at runtime). Probably need to benchmark what the system and particular config of mysql can handle to find out for sure...