Possible to do Connect Caching? connect, pull, wait 30secs, pull again...
My current script does this in sequence: connect, pull data based on a SELECT, COPY/import into PG, wait 30 secs, connect again. I've turned on connection logging in PG and I keep seeing "LOG: connection received" entries, which means the script is connecting to the server every 30s. What I don't know is:

1. Is the script using the same connection, or is it reconnecting afresh each time? (How can I tell?)
2. Would connect_cached help here, to reduce the number of connects and reconnects to the server?

Any help/comments appreciated.
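For reference, DBI's connect_cached returns the same underlying handle when it is called again with identical connection arguments and the cached handle is still alive, so a polling loop like the one described can avoid opening a fresh server connection every 30 seconds. A minimal sketch, using an in-memory SQLite database so it is self-contained; for PostgreSQL the DSN would be something along the lines of dbi:Pg:dbname=mydb (the DSN and the pull_data name here are illustrative, not from the original script):

```perl
use strict;
use warnings;
use DBI;

sub pull_data {
    # connect_cached returns the cached handle if one already exists for
    # these exact arguments and it still pings; otherwise it reconnects.
    my $dbh = DBI->connect_cached('dbi:SQLite:dbname=:memory:', '', '',
                                  { RaiseError => 1, AutoCommit => 1 });
    return $dbh;
}

my $dbh1 = pull_data();
my $dbh2 = pull_data();    # second "poll": no new connection is made

# The same object back means the connection was reused, not re-opened.
print "reused\n" if "$dbh1" eq "$dbh2";
```

To answer question 1 directly: a plain DBI->connect inside the loop opens a brand-new server connection on every iteration, which is exactly what the repeated "connection received" log lines suggest; either connect_cached or simply connecting once before the loop would make them stop.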
Looking for a good explanation of select function in perl dbi
Well, I've figured out how to put data into an SQLite database. I can extract data from the command line; however, for the life of me, I can't find any good documentation on using SELECT from DBI. What I'm trying to do:

SELECT name, size FROM filelist WHERE uid = ?

This works from the command line:

sqlite3 data.dbl "SELECT name, uid from filelist where UID=\"1000\""

I'd like to pop the results into an array, where I could manipulate them, and where each element of the array would contain name,size. Are there any good web pages on using SELECT? I took a look at http://souptonuts.sourceforge.net/readme_sqlite_tutorial.html but it kind of lost me.

Scott Mohnkern
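In DBI, SELECT isn't a function: it's an SQL statement you prepare and execute, or run through a convenience method. selectall_arrayref does exactly what is described above — each element of the returned array is one row. A self-contained sketch; the filelist table and columns follow the example in the post, but the sample rows are made up:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

# Set up a tiny stand-in for the existing data.dbl database.
$dbh->do('CREATE TABLE filelist (name TEXT, size INTEGER, uid INTEGER)');
$dbh->do('INSERT INTO filelist VALUES (?, ?, ?)', undef, 'a.txt', 100, 1000);
$dbh->do('INSERT INTO filelist VALUES (?, ?, ?)', undef, 'b.txt', 200, 1000);
$dbh->do('INSERT INTO filelist VALUES (?, ?, ?)', undef, 'c.txt', 300, 42);

# Each element of @$rows is an arrayref: [ name, size ].
my $rows = $dbh->selectall_arrayref(
    'SELECT name, size FROM filelist WHERE uid = ? ORDER BY name',
    undef, 1000);

for my $row (@$rows) {
    my ($name, $size) = @$row;
    print "$name,$size\n";
}
$dbh->disconnect;
```

The undef argument is the (unused) statement-attributes slot, and the values after it fill the ? placeholders, so the uid never needs to be interpolated into the SQL string.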
Problem with selecting rows from a database link
Hey guys, I am trying to query a table over a database link, using the following code:

use strict;
use DBI;

my $col_value = 'test';
my $query = 'select col_1, col_2, col_3 from [EMAIL PROTECTED] where col_1 = ?';

my $dbh = DBI->connect('dbi:Oracle:dbs', 'username', 'password',
    { RaiseError => 1, AutoCommit => 0 })
    || die "Database connection not made: $DBI::errstr";

my $sth = $dbh->prepare($query) || die "Unable to prepare\n";
$sth->execute($col_value)
    or die "Can't execute SQL statement: $DBI::errstr Stopped\n";

my ($col_1, $col_2, $col_3);
$sth->bind_columns(\($col_1, $col_2, $col_3)) || die "Unable to bind\n";
print "$col_1: $col_2 :$col_3\n" while $sth->fetch;

$dbh->disconnect;

Now what happens is that the first time this program is run, it returns the correct result. However, after the first successful run it fails to run again. It gives NO error; it just fails to return the rows. If I create a new link and use the new link in the query, it then works for the first run, and fails after that. The same thing happens if I query a view that was created using the link. The link is connected to a 9i database.

Perl version: 5.8.6
DBI version: 1.59
DBD::Oracle: 1.19

Regards, Ray
Re: Question that may not have an answer
Scott Mohnkern wrote:
> I recall talking to someone about perl having an internal database file

Hi Scott

Your Perl should have come with: AnyDBM_File.html

--
Ron Savage
[EMAIL PROTECTED]
http://savage.net.au/index.html
RE: Question that may not have an answer
You can keep the data in CSV format and use DBD::CSV to access the data.
http://search.cpan.org/~jzucker/DBD-CSV-0.22/lib/DBD/CSV.pm

Anna Q. Fong, Chief                    Phone: (916) 574-2632
Flood Operations, Decision Support     Fax:   (916) 574-2767
Division of Flood Management           Pager: (916) 762-2669
CA Dept of Water Resources

-----Original Message-----
From: Scott Mohnkern [mailto:[EMAIL PROTECTED]
Sent: Monday, September 17, 2007 1:04 PM
To: dbi-users@perl.org
Subject: Question that may not have an answer

I'm working on a project where we have data stored in arrays that we need to put into a database. We'd started with CSV (it was easiest), but it's become unmanageable. However, I don't want to go through the pain of getting a MySQL database or other database server set up to put the data into. I recall talking to someone about Perl having an internal database file where you could store the data in a file, and access it relatively easily, without having to set up an interface to an external database server. Or am I completely wrong, and there isn't such a thing?

Scott Mohnkern
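DBD::CSV lets existing CSV files be queried with SQL through the normal DBI interface, which can be a useful stepping stone before moving to a real database. A hedged sketch (the directory, table name, and columns are invented for illustration); it requires DBD::CSV and its dependencies from CPAN:

```perl
use strict;
use warnings;
use DBI;
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);

# f_dir tells DBD::CSV which directory holds the CSV "tables";
# each table is just a CSV file with a header line.
my $dbh = DBI->connect("dbi:CSV:f_dir=$dir", undef, undef,
                       { RaiseError => 1 });

$dbh->do('CREATE TABLE readings (station CHAR(16), level REAL)');
$dbh->do('INSERT INTO readings VALUES (?, ?)', undef, 'sacramento', 12.5);
$dbh->do('INSERT INTO readings VALUES (?, ?)', undef, 'fresno', 8.1);

my $rows = $dbh->selectall_arrayref(
    'SELECT station, level FROM readings WHERE level > ?', undef, 10);
print "$_->[0]\n" for @$rows;   # stations above the threshold
$dbh->disconnect;
```

Because the tables are plain CSV files on disk, existing data can often be pointed at directly; only the header line and consistent quoting need to match what DBD::CSV expects.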
Re: Question that may not have an answer
Scott Mohnkern wrote:
> I'm working on a project where we have data stored in arrays that we
> need to put into a database. We'd started with CSV (it was easiest),
> but it's become unmanageable. However, I don't want to go through the
> pain of getting a MySQL database or other database server set up to put
> the data into. I recall talking to someone about Perl having an internal
> database file where you could store the data in a file, and access it
> relatively easily, without having to set up an interface to an external
> database server. Or am I completely wrong, and there isn't such a thing?
>
> Scott Mohnkern

You're probably thinking of SQLite:
http://search.cpan.org/~msergeant/DBD-SQLite-1.13/lib/DBD/SQLite.pm
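Since the original problem is moving data out of Perl arrays, the usual DBD::SQLite pattern is a single prepared INSERT re-executed in a loop, wrapped in one transaction. A minimal sketch, assuming the data is a list of [name, size] pairs (the table layout and sample data are invented for illustration; :memory: keeps the demo self-contained, while a real script would use something like dbname=data.db):

```perl
use strict;
use warnings;
use DBI;

my @records = ( [ 'alpha', 10 ], [ 'beta', 20 ], [ 'gamma', 30 ] );

# The whole database is a single ordinary file (or here, in memory);
# no server setup is needed.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do('CREATE TABLE items (name TEXT, size INTEGER)');

# One prepared statement, re-executed per row; the surrounding
# transaction makes bulk inserts far faster than autocommitting each row.
$dbh->begin_work;
my $sth = $dbh->prepare('INSERT INTO items (name, size) VALUES (?, ?)');
$sth->execute(@$_) for @records;
$dbh->commit;

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM items');
print "$count rows\n";
```

The transaction detail matters in practice: SQLite syncs to disk on every commit, so inserting thousands of rows one autocommitted statement at a time is dramatically slower than batching them.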
Question that may not have an answer
I'm working on a project where we have data stored in arrays that we need to put into a database. We'd started with CSV (it was easiest), but it's become unmanageable. However, I don't want to go through the pain of getting a MySQL database or other database server set up to put the data into. I recall talking to someone about Perl having an internal database file where you could store the data in a file, and access it relatively easily, without having to set up an interface to an external database server. Or am I completely wrong, and there isn't such a thing?

Scott Mohnkern
Re: Question that may not have an answer
On Mon, 17 Sep 2007 16:04:09 -0400, Scott Mohnkern wrote
> I'm working on a project where we have data stored in arrays that we
> need to put into a database. We'd started with CSV (it was easiest),
> but it's become unmanagable.
>
> However, I don't want to go through the pain of getting a mysql
> database or other database server set up to put the data into.
>
> I recall talking to someone about perl having an internal database file
> where you could store the data in a file, and access it relatively
> easily, without having to set up an interface to an external
> database server.
>
> Or am I completely wrong, and there isn't such a thing?
>
> Scott Mohnkern

You're looking for SQLite:
http://www.sqlite.org/ and
http://search.cpan.org/~msergeant/DBD-SQLite-1.13/lib/DBD/SQLite.pm

Alex
2nd Patch for SQL::Statement
Hi Jeff,

since you didn't answer my last reply, I think it's better to send the 2nd bug fix (the patch includes the fixes sent last time, too) via the dbi-users@ list again. If you're short on time, maybe others who are involved can take a look at it. Furthermore, SQL::Statement and a lot of DBD modules seem to rely on each other when both are installed.

I made a fix in join_2_tables for a bug that prevents the shared columns from being detected as soon as more than 2 tables are to be joined. To be honest, I can't see any reason for checking $isunqualA{$c} or $isunqualB{$c} in lines 663 and 666. Because 2 tables can have identically named columns, the check of k1/k2 in %iscolA/%iscolB is more significant. That's also why I can't understand lines 659-661: a check as done in 663/666 is enough, isn't it? At first glance (without thinking it over deeply) it looks like a forgotten relict from the first steps of joining into MemTables. But maybe it's important for NATURAL joins, whatever that means; I'm not an SQL expert like you.

Other problems, which I didn't fix because I don't know where, concern the behaviour of SQL::Statement/SQL::Parser on the following queries:

1) select A, B from tA, tB where tA.ID=tB.A_ID and tB.PK="PATTERN"
2) select A, B from tA, tB where tA.ID=tB.A_ID and tB.PK='PATTERN1' or tB.PK='PATTERN2'

Both statements print a Perl warning like:

1) Use of uninitialized value in substitution iterator at /usr/lib/perl5/vendor_perl/5.8.5/SQL/Parser.pm line 1806.
2) Use of uninitialized value in substitution iterator at /usr/lib/perl5/vendor_perl/5.8.5/SQL/Parser.pm line 1552. #ERROR: error during query: 'SQL ERROR: No equijoin condition in WHERE or ON clause

The 1st situation causes SQL::Parser to bail out when it hits the "PATTERN" argument, without raising any error. I think better error checking could work wonders xD

Let me ask the question from my last mail again: what do you think about allowing indexed table access?
It's very likely that a physical data structure knows more performant ways to search its data pool (XPath in XML, B-trees in Berkeley DB tables; we use reverse-lookup hash tables).

If I'm going to invest time adding one or another bug fix or feature as suggested, I'd ask to be allowed to reformat the source first. It's very painful to edit: sometimes TABs are used, sometimes blanks, with no consistent indentation, etc. `perltidy -gnu` or `perltidy -toc` would let me stop wasting time reformatting the source whenever I edit something, and formatting it back when finished, just to keep the diff free of changes made only for beautification.

Freundliche Grüße / Best Regards

Jens Rehsack
Fa. Manß & Partner
Phone: +49 - 214 - 30 - 46 193
Fax: +49 - 214 - 30 - 31 625
E-mail: [EMAIL PROTECTED]
Web: http://www.BayerBBS.com
Re: Very slow executes with utf8 integer parameters in DBD::Oracle
Peter J. Holzer wrote:
> On 2007-09-14 18:03:15 +0100, Martin Evans wrote:
>> I have NLS_LANG=AMERICAN_AMERICA.AL32UTF8 as recommended in DBD::Oracle
>> when using utf8, and I need to as my data is utf8 in Perl. Grossly
>> simplified, my code does:
>>
>> o select integer_primary_key_field from table
>> o prepare(select from another_table where field = ?)
>> o execute($integer_primary_key_value_retrieved_from_select)
>>
>> (The query is vastly more complex than this really.)
>>
>> Even though the field retrieved from the first table is an integer when
>> I look at it, Perl has the utf8 flag set. When these utf8-encoded
>> integers are then passed into the execute for a select on another
>> table, the execute takes 0.7s. Now that may not sound like a lot to
>> you, but this query gets run a lot. If I downgrade the integer
>> parameter with utf8::downgrade before passing it to execute, the
>> execute takes 0.01s.
>>
>> When I look at the QEP, the filter contains a TO_NUMBER(:p1) and I
>> think Oracle has decided it cannot use an index on the column. I tried
>> binding the parameter as ora_number but that does not help. The only
>> thing which seems to work is to downgrade the parameter from utf8.
>> Any ideas?
>
> This may be the same problem I ran into a few weeks ago. See
> http://www.mail-archive.com/dbi-users@perl.org/msg30138.html
>
> I have a patch for this but I still haven't gotten around to testing
> it, so even though it's only a few lines I don't want to post it yet.
> Feel free to contact me off-list if you want to try it.
>
> The workaround which I actually use in production code is to set
> NLS_NCHAR=US7ASCII. Of course this isn't a good idea if you have
> nvarchar2 or nclob columns. Explicitly binding with the correct
> cs_form also works:
>
> $sth->bind_param(1, $name, { ora_csform => SQLCS_IMPLICIT });
>
> hp

Thank you Peter. Rather stupidly, I had marked the post you refer to as particularly noteworthy but forgot I'd seen it. My time for the query has come down from, at best, 0.7s (some runs were a lot worse) to 0.035s: a huge difference.
All I've done to achieve this is:

1) bound the SELECT columns which are integers as ORA_NUMBER (which I don't think is having any real effect, as the numbers I get back are still marked utf-8).
2) added SQLCS_IMPLICIT to the bound parameters which are numbers (keys in my case).

Like you found, when I look at the QEP I find Oracle is doing a lot of different things now, including the creation of a view and the use of an index it was not using before. All my data is utf8, so this problem probably exists elsewhere as well.

Does anyone know what the disadvantage of changing DBD::Oracle to default to SQLCS_IMPLICIT instead of SQLCS_NCHAR would be?

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com
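The utf8-flag behaviour at the root of this can be seen without Oracle at all: a "number" read back through a driver may come out as a UTF8-flagged string, and utf8::downgrade rewrites it in place to the single-byte representation without changing its value, which is what lets the server treat it as a plain bind again. A small self-contained illustration (the flagged string is manufactured via Encode here, since no database is involved):

```perl
use strict;
use warnings;
use utf8 ();
use Encode qw(decode);

# Manufacture a UTF8-flagged "integer", much as a driver might return one.
my $id = decode('UTF-8', '12345');
print "flagged\n" if utf8::is_utf8($id);

# Downgrade in place: same value, but now stored as plain bytes, so no
# character-set conversion is needed when it is bound as a parameter.
utf8::downgrade($id);
print "plain\n" unless utf8::is_utf8($id);
print 'value still ', $id + 0, "\n";   # numeric value unchanged
```

utf8::downgrade dies if the string contains characters above 255, but that can't happen for a decimal integer, so it is safe for key values like these.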
Re: Very slow executes with utf8 integer parameters in DBD::Oracle
On 2007-09-14 18:03:15 +0100, Martin Evans wrote:
> I have NLS_LANG=AMERICAN_AMERICA.AL32UTF8 as recommended in DBD::Oracle when
> using utf8 and I need to as my data is utf8 in Perl.
>
> Grossly simplified my code does:
>
> o select integer_primary_key_field from table
> o prepare(select from another_table where field = ?)
> o execute($integer_primary_key_value_retrieved_from_select)
> This query is vastly more complex than this really
>
> Even though the field retrieved from the first table is an integer when I look
> at it, Perl has utf8 flag set. When these utf8 encoded integers are then passed
> into the execute for a select on another table the execute takes 0.7s. Now that
> may not sound a lot to you but this query gets run a lot. If I downgrade the
> integer parameter with utf8::downgrade before passing it to execute the execute
> takes 0.01s.
>
> When I look at the QEP the filter contains a TO_NUMBER(:p1) and I think Oracle
> has decided it cannot use an index on the column.
>
> I tried binding the parameter as ora_number but that does not help. The only
> thing which seems to work is to downgrade the parameter from utf8.
>
> Any ideas?

This may be the same problem I ran into a few weeks ago. See
http://www.mail-archive.com/dbi-users@perl.org/msg30138.html

I have a patch for this but I still haven't gotten around to testing it, so even though it's only a few lines I don't want to post it yet. Feel free to contact me off-list if you want to try it.

The workaround which I actually use in production code is to set NLS_NCHAR=US7ASCII. Of course this isn't a good idea if you have nvarchar2 or nclob columns. Explicitly binding with the correct cs_form also works:

$sth->bind_param(1, $name, { ora_csform => SQLCS_IMPLICIT });

hp
--
   _  | Peter J. Holzer    | If I wanted to be "academically correct",
|_|_) | Sysadmin WSR       | I'd be programming in Java.
| |   | [EMAIL PROTECTED]  | I don't, and I'm not.
__/   | http://www.hjp.at/ | -- Jesse Erlbaum on dbi-users