Maintaining simultaneous support for two Oracle versions in DBI
My workplace is transitioning from Oracle version 9 to version 11. I would like to build the DBD driver to support connecting to the Oracle 11 database. However, until all the data are migrated from the old database to the new one, I need to maintain connectivity to the old Oracle 9 database. Since the driver and shared objects built for the new database would be incompatible, I would need to build separate objects for the Oracle 11 connection.

Some of the items I can identify are:

- The module should be Oracle11.pm rather than Oracle.pm (called as DBI->connect("dbi:Oracle11:", ...) rather than DBI->connect("dbi:Oracle:", ...)).
- The shared object and bootstrap files become Oracle11.so and Oracle11.bs respectively. (Should these go in the same directory, or in a directory named Oracle11, and does the package name in Oracle11.pm have to be changed to DBD::Oracle11 in order to find things there?)

I have some idea of what I need to change in Makefile.PL to achieve at least part of this. What I have so far is changing references to Oracle.pm to Oracle11.pm, and $opts{NAME} from DBD::Oracle to Oracle 11. Am I right about the second? Are there others I should have included?

Thanks for any help,
Scott Smith
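A sketch of how that Makefile.PL change might look, hedged heavily: the option names below come from standard ExtUtils::MakeMaker usage, not from DBD::Oracle's actual (much larger) Makefile.PL. Note that NAME takes the full package name, and MakeMaker derives the install path for the shared object and bootstrap files from it, so the package statement in Oracle11.pm would indeed need to become DBD::Oracle11:

```perl
# Hypothetical fragment only; the real DBD::Oracle Makefile.PL also sets
# LIBS, INC, the .mk file handling, and much more.
use ExtUtils::MakeMaker;

WriteMakefile(
    # Full package name (not 'Oracle 11'); with this, MakeMaker installs
    # the shared object and bootstrap as auto/DBD/Oracle11/Oracle11.so
    # and auto/DBD/Oracle11/Oracle11.bs.
    NAME         => 'DBD::Oracle11',
    # Version is read from the renamed module file.
    VERSION_FROM => 'Oracle11.pm',
);
```

Oracle11.pm would then start with `package DBD::Oracle11;`, and its bootstrap call would use the same name, so DynaLoader can locate the renamed .so.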
Re: Looking for clever solution: Different versions of Oracle .so files within single Perl build
Try RMI::Client::ForkedPipes. Have the child process use one lib, and the parent use the other. The child can give back handles built with one DBI/DBD, and the parent can produce handles from the other. This is effectively the proxy solution, but you're just forking and proxying to a private sub-process you created. (Full disclosure: I wrote the RMI modules. If they break for you, please email me.)

Scott

jeff wrote:

You make an incorrect assumption :-( Not trying to fool anyone or hide my identity, just hoping for further input. If you don't want to answer, then don't - but there is no need to be nasty about it. You shouldn't view people so negatively. As it turns out, the proxy server is not a solution, and database links are not an option - I'm not allowed to create them.

On Mon, 2009-12-14 at 20:54 +0100, Alexander Foken wrote:

OK, while you were busy creating new threads, I remembered that this problem was discussed some time ago on this mailing list. I was about to mail you a link to that discussion for further information, like http://www.mail-archive.com/dbi-users@perl.org/msg32448.html, but then I saw that that thread was also started by you. We gave you several pointers on how to solve this problem, from using a specific Oracle client, to proxy solutions, to database links. We told you that it is impossible to include two different Oracle client libraries in one process. And we told you that you do not need to include two different Oracle client libraries in one process. Now you ask basically the same question again, hoping that nobody remembered the thread started two months ago. Well, I did, and it seems you are resistant to consulting, both from Oracle experts and from experienced DBI users. So, why do you waste our time?

Alexander

On 14.12.2009 13:18, jeff wrote:

I need to connect simultaneously to both Oracle 8 and Oracle 10 from the same script using 'external' connections (i.e., no username or password).
The Oracle libraries, from my experimentation, are not compatible for external connections: Oracle 8 uses the system ID and Oracle 10 uses a wallet, and they do not overlap. I need to stress that this is NOT negotiable - I have no say in this, regardless of how insecure it may sound. Anyway, since the Oracle DBD can only be built against one client at a time, this presents a problem. I have been trying to use DBI Proxy in a PAR standalone for Oracle 8 connections, and the usual setup for Oracle 10 connections, with some limited success. What would be ideal would be to have two different Oracle DBD connections from within the same Perl build. Any suggestions or ideas for potential solutions out there? Thanks.

Jeff
Re: Success ! Different versions of Oracle .so files within single Perl build
Great. Glad it works for you!

jeff wrote:

Thank you, Scott! In case anyone else needs this sort of setup, I wanted to share this quick, dirty, successful test using Scott's RMI to connect to both Oracle 10 with a wallet and Oracle 8 with external authentication - no usernames or passwords provided in the script. ora8_lib contains the Oracle DBD built for Oracle 8, and only the forked server sees it. LD_LIBRARY_PATH is also set to pick up both Oracle clients. And prepare returns the query info as well - which is why the proxy server didn't work for me. Looking very good so far!

    use RMI::Client::ForkedPipes;
    use DBI;

    $c = RMI::Client::ForkedPipes->new();
    $c->call_use('DBI');

    my $paths = $c->call_eval('@main::x = @INC; return \@main::x;');
    foreach my $path (@{$paths}) { print "$path\n"; }
    print "\n";

    # A build of the Oracle DBD using the Oracle 8 client
    $c->call_use_lib('/home/owuser1/perl58/perl_ora10/ora8_lib');

    my $paths = $c->call_eval('@main::x = @INC; return \@main::x;');
    foreach my $path (@{$paths}) { print "$path\n"; }
    print "\n";

    $remote_dbh = $c->call_class_method('DBI', 'connect', 'dbi:Oracle:OWSERV1', '', '');
    print "$remote_dbh\n";
    if (defined $remote_dbh) {
        my $sth = $remote_dbh->prepare("select sys_context('userenv','session_user') from dual");
        foreach my $name (@{$sth->{NAME}}) { print "\t$name\n"; }
        $sth->execute;
        print $sth->fetch;
        print "OK :) Oracle8\n";
    } else {
        print "No Connection Oracle8\n";
    }
    $remote_dbh->disconnect;

    my $db3 = DBI->connect('dbi:Oracle:OWSERV2', '', '');
    print "$db3\n";
    if (defined $db3) {
        my $sth = $db3->prepare("select sys_context('userenv','session_user') from dual");
        foreach my $name (@{$sth->{NAME}}) { print "\t$name\n"; }
        $sth->execute;
        print $sth->fetch;
        print "OK :) Oracle10\n";
    } else {
        print "No Connection Oracle10\n";
    }
    $db3->disconnect;

    $remote_dbh = $c->call_class_method('DBI', 'connect', 'dbi:Oracle:OWSERV1', '', '');
    print "$remote_dbh\n";
    if (defined $remote_dbh) {
        my $sth = $remote_dbh->prepare("select sys_context('userenv','session_user') from dual");
        foreach my $name (@{$sth->{NAME}}) { print "\t$name\n"; }
        $sth->execute;
        print $sth->fetch;
        print "OK :) Oracle8\n";
    } else {
        print "No Connection Oracle8\n";
    }
    $remote_dbh->disconnect;

On Mon, 2009-12-14 at 20:09 -0600, Scott Smith wrote:

Try RMI::Client::ForkedPipes. Have the child process use one lib, and the parent use the other. The child can give back handles built with one DBI/DBD, and the parent can produce handles from the other. This is effectively the proxy solution, but you're just forking and proxying to a private sub-process you created. (Full disclosure: I wrote the RMI modules. If they break for you, please email me.)

Scott
Re: NULL = undef, even in Perl list?
Hi Brian,

What was the return value of execute()? Was there a DBI error message set afterward? ($sth_upd[$_]->errstr) You're right that passing undef as one of the execute params should work just fine and should translate into a NULL on the database side. Yeah, as long as it's in the 1st or 2nd position (part of the update, not the where) you should be fine. Of course x = NULL is always false in SQL, but you know that won't work. I'd lay money that you have an error message explaining things, there isn't a row with that value at that point, or one of your triggers is at play.

Best,
Scott

Brian H. Oak wrote:

I *think* this used to work, but recently stopped working. I have a program that is designed to work on MS SQL Server. My program prepares an array of statement handle references by repeatedly interpolating several scalar variables with placeholders:

    for ( 0..$#uptabcol ) {
        $sqlcommand  = "update $uptab set $upcol = ?";
        $sqlcommand .= ", $paqcol = ?";
        $sqlcommand .= " where $lucol = ?";
        $sth_upd[ $_ ] = $dbh->prepare( $sqlcommand );
    }

Later, it executes the update statement handles using calculated and looked-up placeholder values (which are sometimes NULL):

    for ( 0..$#uptabcol ) {
        my @exargs = ( $linkedname );
        my ( $paqstatus ) = $sth_paq[ $_ ]->fetchrow_array;
        push( @exargs, $paqstatus eq '' ? undef : $paqstatus );
        push( @exargs, $luval );
        $sth_upd[ $_ ]->execute( @exargs );
    }

Please note that for reasons of brevity I have not shown preparation of the $sth_paq statement handle, but it is correctly defined and prepared. Also, the where clause is never NULL, so that is not a problem here. I'm simply trying to update the values of $upcol and $paqcol. The $upcol value is never NULL, but the $paqcol value is frequently NULL. I'm pretty sure that passing a list, one of the values of which is undef, used to work for setting the desired column value to NULL.
But I installed my program on a new server last week and ran it over the weekend, only to find that it skipped updating the $paqcol any time the update value was NULL. It didn't drop $exargs[2] down into the undefined position -- that would have messed the whole thing up. But it's definitely not updating the value of $paqcol, because a trigger that I'm trying to avoid setting off by updating $paqcol with *any* value is getting tripped every time $paqcol is supposed to be NULL. Any ideas? Has something changed in the handling of undef/NULL? Have I been doing something wrong since the first release of my program, but dumb luck has blinded me? -Brian
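For anyone wanting to verify the expected behavior in isolation, here is a minimal, self-contained demonstration that binding undef produces SQL NULL. It uses an in-memory SQLite database via DBD::SQLite purely for convenience (an assumption on my part; Brian's target is MS SQL Server, but the placeholder behavior exercised here is generic DBI):

```perl
use strict;
use warnings;
use DBI;

# In-memory database; nothing touches disk.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do('CREATE TABLE t (id INTEGER, paq TEXT)');
$dbh->do('INSERT INTO t (id, paq) VALUES (1, ?)', undef, 'old');

# Binding undef in the SET position should write SQL NULL.
my $sth = $dbh->prepare('UPDATE t SET paq = ? WHERE id = ?');
$sth->execute(undef, 1);

my ($paq) = $dbh->selectrow_array('SELECT paq FROM t WHERE id = 1');
print defined $paq ? "paq=$paq\n" : "paq is NULL\n";   # prints "paq is NULL"
```

If the column keeps its old value instead, checking the return value of execute() and $sth->errstr, as suggested above, is the first diagnostic step.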
Re: How to iterate through database tables
I think his issue is getting the list to the server side without concatenating strings. You can't use bind variables as table names or column names. You may be able to write a stored procedure which takes a table name as a variable, does the string concatenation on the server side, and uses Oracle's EXECUTE IMMEDIATE to effectively eval the SQL there.

Scott

Dale wrote:

On Tue, Feb 3, 2009 at 8:47 AM, Deviloper wrote:

Hello there! Let's assume that one has a list of tables, @db_tables (for example, one per month), and that one wants to find some data in those tables (for example, sold items):

    select sales from $db_tables[0];  # gives all sold items in January

You could use a UNION in the database to combine the common data (you can create a view and use that for your select):

    CREATE VIEW v_combined_months AS
    SELECT 'January' AS Month, sales FROM jan_sales
    UNION ALL
    SELECT 'February' AS Month, sales FROM feb_sales
    UNION ALL
    SELECT 'March' AS Month, sales FROM march_sales
    [...]
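To make the client-side alternative concrete: since table names can't be bound, the next best thing is interpolating them through DBI's quote_identifier() while keeping the values in ordinary placeholders. A small sketch, using an in-memory SQLite database as a stand-in for the real server (the table names and sales value are illustrative):

```perl
use strict;
use warnings;
use DBI;

# Demonstration database (assumes DBD::SQLite); the same pattern
# applies to Oracle monthly tables like jan_sales, feb_sales, ...
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });

my @db_tables = qw(jan_sales feb_sales);
for my $t (@db_tables) {
    # Identifiers go through quote_identifier(); values go through '?'.
    $dbh->do('CREATE TABLE ' . $dbh->quote_identifier($t)
             . ' (sales INTEGER)');
    $dbh->do('INSERT INTO ' . $dbh->quote_identifier($t)
             . ' (sales) VALUES (?)', undef, 42);
}

# One statement per table, built safely on the client side.
for my $t (@db_tables) {
    my $sql = 'SELECT sales FROM ' . $dbh->quote_identifier($t);
    my ($sales) = $dbh->selectrow_array($sql);
    print "$t: $sales\n";
}
```

quote_identifier() quotes and escapes the name according to the driver's rules, which closes the usual injection route through concatenated identifiers.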
Re: Proposal for new $h->{ReadOnly} attribute
Doing set transaction read only on Oracle has additional side-effects besides making the connection unable to write: it switches the read consistency level from per-statement (the default) to per-transaction. This effectively freezes the connection in time, allowing multiple queries executed against changing data to match up. It might surprise someone who thought they were just getting a read-only connection, especially if they're polling the database for changes and wanted to be lighter weight. It consumes more resources, not less, since Oracle must reverse out all changes made since your set trans.. statement with each query, and it might result in a snapshot too old error. I don't know if/how other databases implement this feature, and give it such a possibly misleading name. Does anyone else?

Scott Smith

Tim Bunce wrote:

I've just added this to the DBI docs:

    =item C<ReadOnly> (boolean, inherited)

    An application can set the C<ReadOnly> attribute of a handle to a true
    value to indicate that it will not be attempting to make any changes
    (insert, delete, update etc) using that handle or any children of it.

    If the driver can make the handle truly read-only (by issuing a
    statement like C<set transaction read only> as needed, for example)
    then it should. Otherwise the attribute is simply advisory.

    A driver can set the C<ReadOnly> attribute itself to indicate that the
    data it is connected to cannot be changed for some reason.

    Library modules and proxy drivers can use the attribute to influence
    their behavior. For example, the DBD::Gofer driver considers the
    C<ReadOnly> attribute when making a decision about whether to retry an
    operation that failed.

    =cut

Any thoughts?

Tim.
Re: Proposal for new $h->{ReadOnly} attribute
Tim Bunce wrote:

On Fri, May 04, 2007 at 08:15:53AM -0500, Scott Smith wrote:

Doing set transaction read only on Oracle has additional side-effects besides making the connection unable to write: it switches the read consistency level from per-statement (the default) to per-transaction. This effectively freezes the connection in time, allowing multiple queries executed against changing data to match up.

s/freezes the connection in time/freezes the transaction in time/ ?

Good point, speaking of small differences which make a difference.

So effectively no change if AutoCommit is on? If AutoCommit is off, then the effect lasts until the next commit/rollback, but an app that's read-only might not be doing any commits.

Wow. I hadn't thought of using commit/rollback to intentionally select when to bump the time point forward. I agree that's an issue.

It might surprise someone who thought they were just getting a read-only connection, especially if they're polling the database for changes and wanted to be lighter weight. It consumes more resources, not less, since Oracle must reverse out all changes made since your set trans.. statement with each query, and might result in a snapshot too old error.

That's certainly a good argument for the ReadOnly attribute not doing a set transaction read only on Oracle. I'll tweak the docs. It's also an argument in favor of ReadOnly not being a simple boolean. Thanks.

Tim.

p.s. You're *just* in time for 1.55.

Yeah. Sorry to reply four days late...

Scott

I don't know if/how other databases implement this feature, and give it such a possibly misleading name. Does anyone else?

Scott Smith

Tim Bunce wrote:

I've just added this to the DBI docs:

    =item C<ReadOnly> (boolean, inherited)

    An application can set the C<ReadOnly> attribute of a handle to a true
    value to indicate that it will not be attempting to make any changes
    (insert, delete, update etc) using that handle or any children of it.

    If the driver can make the handle truly read-only (by issuing a
    statement like C<set transaction read only> as needed, for example)
    then it should. Otherwise the attribute is simply advisory.

    A driver can set the C<ReadOnly> attribute itself to indicate that the
    data it is connected to cannot be changed for some reason.

    Library modules and proxy drivers can use the attribute to influence
    their behavior. For example, the DBD::Gofer driver considers the
    C<ReadOnly> attribute when making a decision about whether to retry an
    operation that failed.

    =cut

Any thoughts?

Tim.
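For completeness, using the attribute as it ended up documented is just a connect-time flag, advisory unless the driver chooses to enforce it. A tiny sketch (in-memory SQLite chosen only as a convenient driver; for it the flag is purely advisory):

```perl
use strict;
use warnings;
use DBI;

# ReadOnly is an inherited handle attribute (DBI 1.55+); set it when
# connecting and consult it later.  No 'set transaction read only' is
# implied by DBI itself.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, ReadOnly => 1 });

print $dbh->{ReadOnly} ? "handle marked read-only\n"
                       : "handle not marked\n";
```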
Re: environment variable
Reidy, Ron wrote:

Oscar,

Short answer - you cannot (sort of). This is because your shell script will execute in a sub-shell of your Perl program. However, you can do something like this:

    # untested
    system("export VAR=val; /path/to/your/shell/script.sh");

The shell also takes a series of zero or more key=value pairs at the start of any command:

    system("VAR=val VAR2=anotherval /path/to/any/program");

This has the same effect as exporting when you're doing it with Perl's system call, but it's good to keep in mind when you don't want to keep the variable setting or override in your general shell session, e.g.:

    LD_PRELOAD=/usr/lib/special_old_library.so /usr/bin/oldprogram

Scott Smith
Genome Sequencing Center
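Both forms above, plus Perl's own equivalent of localizing the setting, can be seen in a short runnable sketch (the variable name FOO and the /bin/sh child are illustrative):

```perl
use strict;
use warnings;

# 1. Shell-style prefix assignment: FOO=bar applies to this one command
# only, not to the parent's environment.
my $out1 = `FOO=bar sh -c 'echo \$FOO'`;
chomp $out1;

# 2. local-izing %ENV: the child process inherits the change, and the
# parent's environment is restored when the block exits.
my $out2;
{
    local $ENV{FOO} = 'bar';
    $out2 = `sh -c 'echo \$FOO'`;
    chomp $out2;
}

print "$out1 $out2\n";   # prints "bar bar"
```

The local %ENV form is handy when the child is not a shell command at all, e.g. a fork/exec or another Perl program.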
Re: [EMAIL PROTECTED]: elegant way to handle pluggable backend servers?]
Hi Jon,

Instead of asking "where should I put the SQL?", you might ask "should I write the raw final SQL?" If you have 50 queries and 10 platforms, you probably don't want to maintain 500 text strings in your program. Ideally you want to write 50 queries in some generic format, getting the benefits of your solution #1, and then have 10 pieces of logic to translate a general query into a platform's specific SQL, to get the benefits of solution #2.

What we use internally here represents each query (basically) as a table and a series of key-value pairs. One layer expands the key-value pairs to do things like add required joins, and a second takes that and writes an actual SQL string. Writing a good layer like that can be complicated, depending on how hairy your SQL gets. There are a ton of modules on CPAN which let you work with a database without writing the raw SQL, like Class::DBI. There are a few which try to re-write SQL from one platform to another, which might give you what you need with minimal code changes.

If you do write your own, the DBI module has a lot of catalog methods to let you examine your database. You could have your core subroutine check the data types of the fields passed in and handle boolean values specially, for instance, in a generic way. We use these, plus actual queries to the data dictionary of the given database, to make the SQL writer smart.

Scott Smith
Informatics
Genome Sequencing Center
Washington University School of Medicine

David Dooling wrote:

Care to respond?

- Forwarded message from Jon Lapham [EMAIL PROTECTED] -
Date: Fri, 18 Feb 2005 16:37:16 -0200
From: Jon Lapham [EMAIL PROTECTED]
Subject: elegant way to handle pluggable backend servers?
To: dbi-users@perl.org
Reply-To: [EMAIL PROTECTED]

My DBI Perl application needs to support many database servers and versions, but will only use one database server per installation. I.e., one customer may be using MySQL v3.xx, while another has PgSQL v7.4, while another has Oracle... etc.
My application needs to work for them all, with minimal fuss. As an example of why this is necessary, MySQL v4 does not support boolean variables, and date handling between databases (and between different versions of the same database) is often different. I would like to ask your opinion(s) on which approach to take for coding this backend server pluggability. It seems to me that there are a number of approaches.

1) The first idea would be to simply wrap all SQL in some conditional in the main body of the code [or pseudocode in this case :)]:

    if ($dbserver eq 'MySQL' and $dbversion == 3.23) {
        $SQL = 'SELECT * FROM blah WHERE active=1';
    } elsif ($dbserver eq 'PgSQL' and $dbversion == 7.4) {
        $SQL = 'SELECT * FROM blah WHERE active';
    } elsif (etc...) {
        ...
    }
    $result = $conn->prepare($SQL);
    $result->execute();

Okay, this has the advantage that all the SQL for all the supported backends sits easy to see in the main code body. The disadvantage is that it is ugly as sin, making your main body code harder to follow - imagine if you wanted to support 20 database backend-version combinations.

2) Put the SQL into a subroutine, held in an external module - one module for each database-version combo you want to support. Then your main body code would look like:

    $SQL = db_some_query();
    $result = $conn->prepare($SQL);
    $result->execute();

...where the subroutine db_some_query() exists in a bunch of modules (e.g. SQLpgsql_7_4.pm, SQLmysql_3_23.pm, SQLoracle_x_xx, etc.) with only the appropriate one loaded earlier in the app. The advantage here is that it is extensible: supporting more database-versions simply means creating more SQL* modules. The disadvantage is that these modules could easily become out of sync... a nightmare in terms of maintainability. Another disadvantage is that since the main body code contains a subroutine call rather than the actual SQL code, it is hard to see what is happening in the main body code - you would continually need to refer to the appropriate module.

Any opinions?
Is there another approach that I should think about? Has anyone encountered this situation before? Thanks -Jon
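One middle ground between Jon's two approaches, roughly in the spirit of the generic-query layer Scott describes, is a dispatch table: one generic query name, with a small per-platform SQL generator behind it. The platform keys and the query below are made up purely for illustration:

```perl
use strict;
use warnings;

# Generic query names map to per-platform generators; adding a platform
# means adding one hash entry, not touching the main body code.
my %sql_for = (
    'mysql-3.23' => {
        active_rows => sub { "SELECT * FROM $_[0] WHERE active = 1" },
    },
    'pgsql-7.4' => {
        active_rows => sub { "SELECT * FROM $_[0] WHERE active" },
    },
);

# Look up the generator for this platform and build the final SQL.
sub build_sql {
    my ($platform, $query, @args) = @_;
    my $gen = $sql_for{$platform}{$query}
        or die "no '$query' generator for $platform";
    return $gen->(@args);
}

print build_sql('mysql-3.23', 'active_rows', 'blah'), "\n";
# prints "SELECT * FROM blah WHERE active = 1"
print build_sql('pgsql-7.4', 'active_rows', 'blah'), "\n";
# prints "SELECT * FROM blah WHERE active"
```

The main body then calls build_sql($platform, 'active_rows', $table) and passes the result to prepare(), so all platform differences live in one data structure rather than scattered conditionals or parallel modules.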
Identifying .mk files, building
I'm trying to build DBD-Oracle and am not sure which .mk file to use. I used find to identify the available .mk files and got this output:

    ~ find /oracle -name \*.mk
    /oracle/network/lib/ins_net_client.mk
    /oracle/network/lib/env_network.mk
    /oracle/sqlplus/lib/env_sqlplus.mk
    /oracle/sqlplus/lib/ins_sqlplus.mk
    /oracle/precomp/lib/ins_precomp.mk
    /oracle/precomp/lib/env_precomp.mk
    /oracle/rdbms/lib/ins_rdbms.mk
    /oracle/rdbms/lib/env_rdbms.mk
    /oracle/plsql/spnc_makefile.mk

where /oracle is the value of ORACLE_HOME. Are any of these usable? I tried using /oracle/rdbms/lib/ins_rdbms.mk earlier, but make didn't work and there were some strange things in the makefile. Please reply personally.

Scott Smith
NYS Task Force on Reapportionment
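If memory serves, none of the .mk files in that find output is the one DBD::Oracle's Makefile.PL normally wants: the usual choice for a client-side build is $ORACLE_HOME/rdbms/demo/demo_rdbms.mk, which is only present when the demo files are installed, and Makefile.PL's -m switch names the file explicitly. A hedged sketch of the session (paths assume ORACLE_HOME=/oracle, as above; availability depends on your install):

```shell
# Illustrative only; exact paths depend on how the Oracle client
# software was installed.
export ORACLE_HOME=/oracle

# demo_rdbms.mk lives under rdbms/demo when the demo install is present:
ls "$ORACLE_HOME"/rdbms/demo/*.mk

# Tell Makefile.PL which .mk file to use rather than letting it guess:
perl Makefile.PL -m "$ORACLE_HOME/rdbms/demo/demo_rdbms.mk"
make
make test
make install
```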