Re: Possible to get field names and types in a table without executing a query?

2006-06-27 Thread JupiterHost.Net



[EMAIL PROTECTED] wrote:

Hi all. My goal is to get a list of all field names and data types for
those fields in any given table in a mysql database. From reading the
DBI documentation, the only way I've been able to do this is by
preparing and executing a query against a table first. Then I would use



Instead of trying to hack together the data based on an assumed behavior 
and a pseudo query, do a query that actually asks for what you want.


For instance with MySQL:

 print Dumper $dbh->selectall_arrayref("SHOW TABLE STATUS FROM db LIKE 'table'");


 # same as SHOW COLUMNS FROM db.table
 print Dumper $dbh->selectall_arrayref('DESCRIBE db.table');

HTH :)


$sth->{NAME} and $sth->{TYPE} to get the field names and their
respective data types. It looks like this:

$sth = $dbh->prepare("SELECT * FROM $table") or die "Can't prepare
statement!\n";
$sth->execute or die "Can not execute statement!\n";
@types = @{$sth->{TYPE}};
@cols  = @{$sth->{NAME}};


PS: use strict and warnings or you're going to have headaches when 
reusing $sth, @types, @cols, etc...
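
To make the DESCRIBE route concrete, here is a minimal, untested sketch 
(MySQL-flavoured; db.table is the placeholder used above) that folds the 
output into a name => type hash:

 # untested sketch: map column names to types via DESCRIBE (MySQL only)
 my %type_of;
 for my $col ( @{ $dbh->selectall_arrayref('DESCRIBE db.table') } ) {
     # DESCRIBE rows are: Field, Type, Null, Key, Default, Extra
     $type_of{ $col->[0] } = $col->[1];
 }
 print "$_ => $type_of{$_}\n" for sort keys %type_of;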


Re: Possible to get field names and types in a table without executing a query?

2006-06-27 Thread JupiterHost.Net

It is much more easily portable.


Suppose you have a real query:



select a.foo, b.bar, c.baz
from
a, b, c
where .

The 0 = 1 method works for that too. Contrast that with parsing the
from clause and the where clause to create a table catalog query. Yuk.


How can you guarantee all DB engines will return the column names with 
no value on an empty query?


For instance I'd expect fetchall_hashref() to return {} on a query with 
no results...


It'd be better to use the DB engine's built in tools, perhaps 
abstracting the engine specific guts into a method that does what it 
needs for the handle's engine and they all return the same hash...


*If* your app needs to support every DB, that is...
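
For example, a rough sketch of that kind of abstraction (the dispatch on 
driver name and the MySQL branch are illustrative assumptions, not a 
drop-in solution):

 sub column_types {
     my ( $dbh, $table ) = @_;
     my %types;
     if ( $dbh->{Driver}{Name} eq 'mysql' ) {
         # MySQL: ask the engine directly
         for my $row ( @{ $dbh->selectall_arrayref("DESCRIBE $table") } ) {
             $types{ $row->[0] } = $row->[1];
         }
     }
     else {
         # elsewhere, fall back to DBI's column_info() where the driver supports it
         my $sth = $dbh->column_info( undef, undef, $table, '%' );
         while ( my $col = $sth->fetchrow_hashref ) {
             $types{ $col->{COLUMN_NAME} } = $col->{TYPE_NAME};
         }
     }
     return \%types;
 }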


Re: Possible to get field names and types in a table without executing a query?

2006-06-27 Thread JupiterHost.Net



Matthew Persico wrote:


So now I need one for every database?


For every database you need to support, yes. Not all engines will return 
the exact same data for a query with no results, and not necessarily in 
the same format either.


But each database is 100% guaranteed to support what it documents it 
supports and to return it in the format it has documented :)


Re: Identify PID for remote database handle--CLARIFICATION...

2006-05-23 Thread JupiterHost.Net



Reidy, Ron wrote:


No, I do not know you are a DBA; maybe YOU cannot describe the problem
well enough.

-Original Message-
From: Drozdowski, Catharine [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 23, 2006 10:11 AM

To: Reidy, Ron; dbi-users@perl.org
Subject: RE: Identify PID for remote database handle--CLARIFICATION...

You know I AM a DBA and will withdraw the post as you guys seem to not
be able to grasp the concept. 


Ok kids, be nice :)

The question is (assuming I understand it)

Is there a way to get (or have DBI set, so it can be gotten) the PID of the 
$dbh process on the DB server (local or remote)?


Correct?
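
If so, a hedged, MySQL-only sketch of one way to get it (other engines need 
their own query, and note this is the server-side connection/thread id 
rather than an OS PID as such):

 # MySQL-specific: CONNECTION_ID() is the server-side thread id
 # for the connection behind this $dbh
 my ($server_side_id) = $dbh->selectrow_array('SELECT CONNECTION_ID()');
 print "server-side connection id for this handle: $server_side_id\n";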


Re: Checking if a table exist

2006-04-28 Thread JupiterHost.Net



Reidy, Ron wrote:

1.  Look in the data dictionary
2.  Select from the table and trap the appropriate error code

-Original Message-
From: Loo, Peter # PHX [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 27, 2006 4:33 PM

To: List - DBI users
Subject: Checking if a table exist

Hi All,
 
Does anyone know of a good way to check if a table exists, disregarding

whether the table has data or not?


put the output of 'SHOW TABLES' into a hash and
if(exists $tables{'user'}) {

...
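
Roughly, as an untested MySQL-flavoured sketch:

 my %tables = map { $_ => 1 } @{ $dbh->selectcol_arrayref('SHOW TABLES') };
 if ( exists $tables{'user'} ) {
     # the table is there, data or not
 }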


Re: Running DBI, ODBC in the crontab

2006-04-13 Thread JupiterHost.Net



Tim Bunce wrote:

On Wed, Apr 12, 2006 at 04:33:39PM +0200, Dr.Ruud wrote:


Jeffrey Seger schreef:



perl -MData::Dumper -e' print Dumper @INC'


Alternative:

perl -MData::Dumper -e' print Data::Dumper->Dump([\@INC], [qw(*INC)])'



Oi, why not much cleaner (code and output):

 perl -MData::Dumper -e' print Dumper \@INC, \%INC;'


Alternative:

   perl -V

Tim.



DBD::Sybase + freetds from Linux to Microsoft SQL Server 2000 contest

2006-02-04 Thread JupiterHost.Net

Hello list,

I'm about to go nuts trying to connect to Microsoft SQL Server 2000 from 
a linux machine.


I'll pay $20 to the first person who can get me over this last hump, 
seriously I'll paypal it to you, maybe more if the solution is had 
quickly. No joke, I will pay :)


Here's what I have:

As root I:

1) Downloaded and untarred freetds (v0.63) and went into the dir:
./configure --prefix=/opt/freetds
make
make install

2) Downloaded and untarred DBD-Sybase (v1.07) and went into the dir:
export SYBASE=/opt/freetds
perl Makefile.PL
make
make install

This connects:
 # /opt/freetds/bin/tsql -H 1.2.3.4 -p 1433 -U howdy -Pdoody
 locale is en_US.UTF-8
 locale charset is UTF-8
 1> select convert( varchar(30), getdate(), 120 ) as No
 2> go
 No
 2006-02-04 21:40:56
 1

Now what do I need to do to
 my $dbh = DBI->connect(?) or die DBI->errstr();

and how can I do a simple test query (version or date, whatever) and 
print the results?


If I need to install something and do it another way please point me in 
the right direction; freetds.org seems a bit vague to me, a super 
MSSQL/ODBC newbie :) I'm used to MySQL so this all seems backwards to me :)
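
(For anyone reading this in the archives: an untested sketch of what the 
connect typically looks like with DBD::Sybase over FreeTDS, assuming 
freetds.conf gains a server entry named "mssql" pointing at 1.2.3.4:1433 
and SYBASE is still set to /opt/freetds at run time:)

 my $dbh = DBI->connect( 'dbi:Sybase:server=mssql', 'howdy', 'doody',
     { RaiseError => 1, PrintError => 0 } )
     or die DBI->errstr;

 # simple test query, printing the server's idea of "now"
 my ($now) = $dbh->selectrow_array(
     q{select convert( varchar(30), getdate(), 120 )} );
 print "server time: $now\n";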


TIA!


Re: Extracting files using DBI

2006-01-31 Thread JupiterHost.Net

Sham Prasad wrote:

Hi all,


Hello,


I have a bugzilla database running on mysql. If you are aware of
bugzilla, it has a table called attachments. What I am trying to do
is get the attachments of all the bugs having attachment/s into a
directory.
For example, there is a bug with an attachment named abc.zip. Can I
extract this file to a directory on my filesystem? As you have
mentioned, the files in the mysql table are in BLOB fields. I have written
a script but am not able to copy the file to the filesystem; instead I
am only able to copy the contents, which is not readable.
Contents of the script

#!/usr/bin/perl


use strict;
use warnings;


use DBI;
my $dbh = DBI->connect( 'DBI:mysql:database=Pbugs;host=appletest',
    'root', 'deltazia', { 'RaiseError' => 1 } );
my $sth = $dbh->prepare('select thedata from attachments where attach_id=143')
    or die "can't prepare statement";
$sth->execute or die "can't execute statement";
print "Query Results\n";
while ( my @row = $sth->fetchrow_array() ) {
    print "@row\n";
}
$dbh->disconnect;
Note - thedata in the query represents the contents of the file.
The above script prints the contents of the attachment in a binary
form which is non-readable.
Actually I want the file to be copied completely to the file system.
How can I do it using DBI?


write @row to a file

perldoc -f open
perldoc -f binmode
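
For example, a minimal untested sketch (the output filename is made up):

 my ($thedata) = $dbh->selectrow_array(
     'SELECT thedata FROM attachments WHERE attach_id = ?', undef, 143 );
 open my $fh, '>', '/tmp/abc.zip' or die "open: $!";
 binmode $fh;    # the BLOB is binary, so turn off any newline translation
 print {$fh} $thedata;
 close $fh or die "close: $!";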


Re: searching database on emailaddress field

2006-01-31 Thread JupiterHost.Net



[EMAIL PROTECTED] wrote:


Hi.


Hello,


Solved after I posted.
Surround $email with 's.
As in '$email'.


Close, but this is a gun pointed at your head :)

You either need to $dbh->quote() it or do the bind value/placeholder 
(see `perldoc DBI` for details of both)
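
For instance, a minimal sketch of the placeholder form (table and column 
names are made up):

 my $rows = $dbh->selectall_arrayref(
     'SELECT id FROM users WHERE emailaddress = ?',
     undef, $email,    # the driver quotes $email for you
 );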


Re: Fwd: how to detect that we're running under CPAN::Testers?

2006-01-11 Thread JupiterHost.Net



Is there any way to tell if my package is being tested automatically
under CPAN::Testers? Here's the situation:


I've never used that module but this should work:

if(exists $INC{'CPAN/Testers.pm'}) {
    print "I am probably running under CPAN::Testers\n";
}
else {
    print "I am probably *NOT* running under CPAN::Testers\n";
}

That should do what you want if I'm understanding your goal right 
*unless*  CPAN::Testers has been used but is not actually running the 
test that includes the above logic.


Just my .02 about 'detecting' it. I'd say the best thing to do is just 
write good tests that pass regardless:


eval 'use Foo::Bar';

if($@) {
    print "Skipping Foo::Bar tests since it's apparently not installed";
    return 1;
}
else {
    # run Foo::Bar tests
}

then it doesn't matter who's using what, where, and when, because you have 
tests that are run if necessary and skipped (but not failed) otherwise
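
If the tests use Test::More, the same idea can be wrapped in a SKIP block; 
a rough sketch (the module name and the test itself are illustrative):

 use Test::More tests => 1;

 SKIP: {
     eval { require Foo::Bar };
     skip 'Foo::Bar not installed', 1 if $@;
     ok( Foo::Bar->can('new'), 'Foo::Bar loads and has a constructor' );
 }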


Re: Convenience function selectall_hasharrayref

2006-01-10 Thread JupiterHost.Net



Peter J. Holzer wrote:

or selectall_arrayhashref? 


Anyway, I rather frequently find that the most natural way to
represent a query result is an array of hashes: Each row is a hashref, but
the rows are in an array(ref) so that the order is preserved, and
the columns can be accessed by name.

Proposed use:

my $emp = $dbh->selectall_hasharrayref("select * from emp order by ename");

for (@$emp) {
    print "$_->{ename} ($_->{empno}): $_->{job}\n";
}

or

for ($first .. $last) {
    print $emp->[$_]{ename}, "\n";
}

or something like that.

What do you think?



The names are too vague and not accurate.

selectall_arrayref_of_hashrefs() is what it is and anyone knows that 
immediately by looking at its name instead of having to look up its 
documentation
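
For what it's worth, reasonably recent DBI versions can already return this 
shape through selectall_arrayref's Slice attribute; a minimal sketch:

 my $emp = $dbh->selectall_arrayref(
     'select * from emp order by ename',
     { Slice => {} },    # each row comes back as a hashref keyed by column name
 );
 for my $row ( @{$emp} ) {
     print "$row->{ename} ($row->{empno}): $row->{job}\n";
 }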


Re: (Fwd) dbi-users@perl.org

2006-01-09 Thread JupiterHost.Net



   Can't load '/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBD/Oracle/Oracle.so'
   for module DBD::Oracle: libclntsh.so.10.1: cannot open shared object file:
   No such file or directory


I imagine your web server does not have permissions to the directory the 
missing .so is in, or to the file itself.




Re: trying to subclass DBI

2005-12-30 Thread JupiterHost.Net



wernerus sebastien wrote:

 Tim, I made the changes you suggested. Here is the code:

---
package MySubDBI;


...



package Main;


try:

package main; # IE lowercase m, not Main...

Also be sure to:
 use strict;
 use warnings;
in each package, as that will likely tell you what is wrong (assuming it's 
a programming error and not a subclassing paradigm error, which using 
strict and warnings will assist in catching/ruling out)
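
For reference, a bare-bones sketch of the subclassing layout the DBI docs 
describe (the DSN and credentials here are placeholders, not the original 
poster's code):

 package MySubDBI;
 use strict;
 use warnings;
 use base 'DBI';

 package MySubDBI::db;
 use base 'DBI::db';

 package MySubDBI::st;
 use base 'DBI::st';

 package main;    # lowercase, as noted above
 use strict;
 use warnings;

 my $dbh = MySubDBI->connect( 'dbi:mysql:test', 'user', 'pass',
     { RaiseError => 1 } );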


Re: anyway to determine # rows before fetch loop ends and without seperate count(*)

2005-11-18 Thread JupiterHost.Net



listmail wrote:
Well I'm not seeing why a number of arrays that each point to arrays 
could not be considered a matrix of arrays, considering one definition 
of the word matrix: "Something resembling such an array, as in the 
regular formation of elements into columns and rows."  I dunno, I'm not 



Because the data type of $results is an array reference.

Each item in it is also an array reference, so instead of confusing 
and overwhelming yourself with the complex and ominous-sounding "matrix",


Just think:

 Ok, $results contains all of my $records, each $record has all of the 
columns I SELECTed.


very simple and intuitive, and it doesn't sound like you have to be 
Einstein or Neo to understand and manipulate it.


trying to argue with you of course.  It is apparent that I truly am 
confused with Perl references again.  I believe my main mistake could be 


because you're making it too hard on yourself, thinking in such abstract, 
apocalyptic terms as "matrix", which relate to computing theory in 
general instead of a well defined paradigm and implementation of a 
specific component of the given language.


Sure, *technically* an array ref that contains other array refs can be 
considered a "matrix", but "an array ref of array refs" not only tells you 
what it is but *exactly* what each part of it is, which in turn instantly 
tells you how it needs to be accessed, which ultimately makes the 50 or 
60 line script you sent originally about 10 lines or less (IE by using 
the array-of-arrays setup instead of some convoluted matrix that is 
structured god knows how).


That means in six months when you (or heaven forbid, me) have to maintain 
your code, we don't have to figure out what you were smoking to get what you 
were after, what you personally define as a matrix, and how you'd 
structure and access the data in said matrix, etc etc


using foreach my $record (@{ $results }) instead of what you've shown 
for my $record (@{ $results }).  I'll test this later when I get a 


for and foreach are the same thing, foreach just takes up 4 more 
characters, so I always use for(); it's cleaner IMHO but do what you like :)


Re: anyway to determine # rows before fetch loop ends and without seperate count(*)

2005-11-17 Thread JupiterHost.Net

Also very convoluted, all of that can be done with:

my $results = $dbh->selectall_arrayref($sql); # if you only want to 
process a certain amount just LIMIT in your $sql...




I appreciate the response.  I tested selectall_arrayref and, as I 
expected, regardless of the number of rows returned, $results will 
always point to a matrix.  So from what I am seeing, $record->[0] as 


Actually it's an array reference, and each element of the array is an 
array reference that is the data returned by the select.


you have written below would have to be written as $record[0]->[0].  At 


nope. $record is one element of the $results array in the for loop; look 
again:


this point I've come to the conclusion that my requirements are causing 
unnecessary complications.  If it wasn't clear, previously I was 
wanting the data from a sql statement with one row returned to be 
stored into an array of columns, otherwise make it an array of columns 
and rows.


Sounds like you want selectall_arrayref() still... did you read its 
documentation?



my $results = $dbh->selectall_arrayref("SELECT id, foo, bar FROM baz 
WHERE $where");


my $count = @{ $results }; # the number of elements in $results (IE the 
number of rows returned)


for my $record (@{ $results }) { # go through each $record in your $results
    print "Id $record->[0] has a foo of $record->[1]\n";
    print "Id $record->[0] has a bar of $record->[2]\n";
}


I'll simply go with a matrix always and be done with it.


There's no matrix :) you're making it too complex on yourself :)

You have an array ref that you can get the number of rows from *and* 
each record from, as an array ref itself; it's not nearly as complicated 
or obscure as a "matrix".



my $count = @{ $results };

$dbh->disconnect;

if($count > 1000) { # or whatever you wanted the count for...
   for my $record(@{ $results }) {
       # now use the data:
       # $record->[0]
       # $record->[1]
   }
}









Re: anyway to determine # rows before fetch loop ends and without seperate count(*)

2005-11-16 Thread JupiterHost.Net



[EMAIL PROTECTED] wrote:

# Here's an example which shows what I am trying to accomplish.  If I
# can determine the number of rows before pushing the data, this can
# simplify things for me when processing the data through my scripts.
#

use warnings;
use strict;


Good good :)


use DBI;
use DBD::Oracle;

my $sql=q{  select name, location
from mytable
};

my $dbh;

eval {
$dbh = DBI->connect("dbi:Oracle:MYDB",
'dbuser', 'dbpass',
  {
   RaiseError => 1,
   AutoCommit => 0,
   ora_session_mode => 0
  }
);
};

if ( $@ ) {
    outprint('end', "$DBI::errstr\n");
}


Hmm, perhaps the Oracle-specific stuff needs it, but why are you evaling 
that?


my $dbh = DBI->connect(@DBI_CONNECT_ARGS) or outprint('end', 
$DBI::errstr); # assuming it's die()ing or exit()ing




my $sth = $dbh->prepare($sql) or die "Couldn't prepare statement: " . DBI->errstr;

$sth->execute or die "Couldn't execute statement: " . DBI->errstr;

my $ary;

while ($ary = $sth->fetchrow_array()) {
    # I need to determine number of rows as this will affect
    # whether a matrix is used or not


Also very convoluted, all of that can be done with:

my $results = $dbh->selectall_arrayref($sql); # if you only want to 
process a certain amount just LIMIT in your $sql...


my $count = @{ $results };

$dbh->disconnect;

if($count > 1000) { # or whatever you wanted the count for...
    for my $record(@{ $results }) {
        # now use the data:
        # $record->[0]
        # $record->[1]
    }
}


Re: can't create tables with a $

2005-09-29 Thread JupiterHost.Net



moma wrote:

Hi,


Hello,


I am using postgresql 7.4.7 on an ubuntu box, perl 5.8.4, and DBI version
1.46 with DBD::Pg version 1.32.
I can create tables from psql with a $ in the middle of the name, e.g. create
table foo$bar (id integer);


That seems like a bad bad bad bad idea, because how can you tell how Perl 
will take the $ at any point in the flow? Or what about later, when any 
language/system that uses $ for special purposes sees your name? Will it 
take it as a literal $ or as whatever it uses $ for? I'd rethink your 
db naming schema.



But if I try to do this
$dbh->do("create table foo\$bar (id integer);");


$dbh->do('create table foo$bar (id integer)');


in a script, then the following error is emitted:
DBD::Pg::db do failed: Execute called with an unbound placeholder


google "Execute called with an unbound placeholder"


Re: MySQL 4.1+ Password Incompatibility

2005-08-25 Thread JupiterHost.Net



Shawn Iwinski (siwinski) wrote:


I'm having trouble getting DBI connect to a MySQL 4.1+ server.  I have
to use MySQL's OLD_PASSWORD to set the password on the server in order


This isn't a DBI issue; the problem is that the passwords in your 
privilege tables (IE from 3.x or 4.0) are not compatible with MySQL 
4.1's authentication protocol.


Your options are:
 1) just use old_passwords (IE it has to be in ~/.my.cnf for CLI use = 
MySQL issue not Perl)
 2) update your privileges to the 4.1 auth schema
 3) go back to 4.0 or whatever

You may find other options from the mysql folks :)

HTH :) Lee.M - JupiterHost.Net


Re: SQL Server and 'set dateformat'

2005-07-07 Thread JupiterHost.Net



Ron Savage wrote:

On Thu, 07 Jul 2005 11:38:06 +1000, Daniel Kasak wrote:

Hi Daniel



my $sth = $dbh->prepate( "set dateformat dmy" );
What am I doing wrong?


probably because prepate is not a function; I think you mean prepare()

Does that exact query work via the CLI interface (IE is the query bad)?


Re: Perl Performance Help.

2005-05-12 Thread JupiterHost.Net

Divya wrote:
 

Hi All,
 

I am in a Perl based project, where we fire a query which returns 20K
records.
On the Perl side, we use the following snippet to get the query result into an
array.
 

while ( @each_record = $stmt_handle->fetchrow ) {
    push @records, [ @each_record ];
}
So you end up with an array that has an array ref of each result.
First off, is your SQL efficient (IE are you selecting columns you never 
use, for instance)?

Have you tried not doing the prepare, execute, fetchrow route?
I believe selectall_arrayref() still does all of that internally, but perhaps 
it's worth looking into:

my @records = @{ $dbh->selectall_arrayref($query) };

Or don't dereference it and use the ref instead. So instead of:

my @records = @{ $dbh->selectall_arrayref($query) };
handle($records[1]);

do:

my $records = $dbh->selectall_arrayref($query);
handle($records->[1]);

That may help with the speed also...
HTH :)
Lee.M JupiterHost.Net


Re: Perl Performance Help.

2005-05-12 Thread JupiterHost.Net
a) We are using Oracle 8i DB. 
 The query (or the DBMS) takes only 6 seconds
to return data to Perl.
How did you determine this? I think you're just assuming that.
c) So we conclude that it is mainly an issue with fetchrow especially when
the number of records are high. We tried other possible options like 
As others have said, compare the CLI version of the query to the Perl version.
(Not an Oracle person, so if it looks like MySQL, well, you'll live; it's 
just an example to illustrate what to do.)

oracle> SELECT TIME(); # your DB's equivalent obviously
oracle> YOUR QUERY;
oracle> SELECT TIME();
vs.
perl -Mstrict -MDBI -we 'my $dbh = DBI->connect("YOURDSNHERE") or die 
DBI->errstr; print time(), "\n"; $dbh->selectall_arrayref("YOUR 
QUERY"); print time(), "\n";'
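
If you want finer-grained numbers than whole seconds on the Perl side, 
something like this narrows down where the time goes (a hedged sketch; 
$dbh and $query stand in for your own connect and query):

 use Time::HiRes qw(gettimeofday tv_interval);

 my $t0   = [gettimeofday];
 my $rows = $dbh->selectall_arrayref($query);
 printf "fetched %d rows in %.3fs\n", scalar @{$rows}, tv_interval($t0);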


Re: cPanel / DBI / mySQL / Exim issue

2005-04-27 Thread JupiterHost.Net

After that did not work, they directed me to your list, in the hopes you could
give me some assistance with how to resolve this. Any help you can offer is
much appreciated.
Er, you may want to not show your passwords on a public list...
They were right, it's not a cPanel issue. Have you upgraded MySQL lately?
You might need to downgrade, or figure out the old_password=1 thing for 
your config.

Have you tried changing your privilege tables to the new format that the 
version of MySQL uses?


Re: Efficient select/insert

2005-04-26 Thread JupiterHost.Net

   $dbh->do("INSERT INTO t (col1, ...) SELECT col1, ... FROM t2 WHERE x 
= '$element'");
Oi, SQL injection warning! Red lights and sirens - don't do that!!
'$element' is super dangerous, evil evil evil.
Either $dbh->quote() it or use ? in your prepare/execute dance:
$dbh->do( 'INSERT INTO t (col1,col2) SELECT col1,col2 FROM t2 WHERE x = ' 
. $dbh->quote($element) );
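
Or, equivalently, the placeholder version (a minimal sketch):

 $dbh->do(
     'INSERT INTO t (col1,col2) SELECT col1,col2 FROM t2 WHERE x = ?',
     undef,
     $element,    # bound, so the driver quotes it for you
 );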

Lee.M


Re: Paged Querys

2005-01-26 Thread JupiterHost.Net

Thilo Planz wrote:

I make a page that returns the result of a select statement in Perl
and DBI, but a lot of rows are returned. Is there a way to page the
result?

Yes, there are several modules for this.
The best one (easy to use, all the functions you need in one place) 
I've used is Data::Paginate (http://search.cpan.org/~dmuey/)
Paging the data in Perl is the easiest way.
However, it should improve performance if you do this on the database 
server (in SQL)
and only return the rows you need.

For this, you need to rewrite your SQL.

Agreed! which is why we chose this module also :)
It sends the page's range so you can:
 query all records if it's the initial query
 (or COUNT(*) the number of records and query 1-n)
 then
 query the range (like with LIMIT) for the given page after you have 
the total number of records returned originally

We've been able to create with it a search feature that only returns 
the records for, say, page 2 while keeping the paging info intact.

I believe the example the author gave us will be in the docs for it.
I'm telling you, that module handles *everything* you'd want to do in 
regard to paginating data.
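
If you end up doing the range query by hand instead, the SQL side of that 
idea is roughly this (an untested MySQL-flavoured sketch; table, columns, 
and page size are made up):

 my $per_page = 25;
 my $page     = 2;                            # the page being viewed
 my $offset   = ( $page - 1 ) * $per_page;

 my ($total) = $dbh->selectrow_array('SELECT COUNT(*) FROM records');
 my $sql = sprintf 'SELECT id, name FROM records ORDER BY id LIMIT %d OFFSET %d',
     $per_page, $offset;
 my $rows = $dbh->selectall_arrayref($sql);   # just this page's rows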

Hope it's up soon! Next time I work with the author I'll mention this 
thread to him ;p

HTH :)


Re: Paged Querys

2005-01-25 Thread JupiterHost.Net

Hernan Arredondo wrote:
Hi all, 
Hello,
I make a page that returns the result of a select statement in Perl
and DBI, but a lot of rows are returned. Is there a way to page the
result?
Yes, there are several modules for this.
The best one (easy to use, all the functions you need in one place) I've 
used is Data::Paginate (http://search.cpan.org/~dmuey/)

The author hasn't uploaded it yet, but he let us use it to test it for 
him and it is awesome!

Hopefully he'll upload it for the public soon ;p
HTH
Lee.M


Re: Double quotes in select statement throw an error

2005-01-18 Thread JupiterHost.Net

Moosmann, James wrote:
Nope, same results, Here is a simple example:
Is the syntax invalid?
use DBI; 
my $dbh = DBI->connect('dbi:ODBC:somedb','',''); 
my $qs = $dbh->quote( "SELECT \"Rows returned: \"" );
Why are you quoting the entire query as a string?
$dbh->do($qs);
Use a valid query:
$dbh->do("SELECT \"valid column\" FROM \"valid table\"") ...
print $dbh->errstr;
---RESULT---
DBD::ODBC::st execute failed: [Microsoft][ODBC SQL Server Driver][SQL
Server]Invalid column name 'Rows returned: '. (SQL-42S22)
err, at the risk of sounding pedantic or sarcastic, it would appear 
there is no column named "Rows returned: ", so I'd say yes, the syntax 
is invalid...

I'm not a Microsoft user (thank the Lord!!!) but even Bill Gates should 
be able to understand that error :)

Lee.M


Re: Double quotes in select statement throw an error

2005-01-18 Thread JupiterHost.Net

Moosmann, James wrote:
Lee,
Hello,
The select statement is very valid and so is: 

SELECT 'Hello World!' as 'My first SQL Statement'
-or-
SELECT answer = 2+3
Really, try it.
ok, but if you $dbh->quote() it, it becomes something like:
 'SELECT \'Hello World!\' as \'My first SQL Statement\''
literally; but as far as validity, if the statement in question was 
valid it would return something instead of an error...

Anyway... I was sent the correct answer and was very surprised in that the
syntax is indeed valid on some servers, however the ANSI standard is single
I imagine it may be but what does:
 Invalid column name 'Rows returned: '
mean then?
quotes for all string literals, and that these other drivers/servers handled
string literals with either single or double quotes on a somewhat random
basis... bizarre.  Perl DBI thinks any double quoted string must be a column
or table name ( [ and ] are preferred, but double quotes are still
really? where did you see that in the docs?
I can:
 SELECT foo FROM bar WHERE ID="1";
without "1" having to be a column or table name...
acceptable).  The problem is that these other apps let the coder use that
non-standard syntax and it works.  I guess we will have to retrain the folks
who generated this stuff.
Yes, Microsoft users are the lamest ;p


Re: (Fwd) perl future

2004-11-08 Thread JupiterHost.Net

All the applications of my company work with Apache, Perl, and Oracle. I want to know
what the future of Perl in the world is and whether we are going the correct way,
because the directors of the company think not.
Also, how widely is it used in the world?
I can't speak for others but look at the history and community and use...
The 3 companies I am closely associated with use exclusively 
apache/perl/mysql; we do support other languages for our customers/users, 
but mostly we make money supporting them 'cause they're crap ;p

We don't use Oracle because the cost/benefit ratio is not good, so I 
can't help out there ;p

HTH :)


Re: (Fwd) perl future

2004-11-08 Thread JupiterHost.Net

Oscar Gomez wrote:
All the applications of my company work with Apache, Perl, and Oracle. I 
want to know what the future of Perl in the world is and whether we 
Those will be developed, supported, and industry standard mission 
critical safe.

are going the correct way, because the directors of the company think 
not, and how widely it is used in the world.
Not only does Perl do what they need as well as or better than other 
things, the cost/benefit ratio is extremely good. Not only is it 
open source (IE free), the cost of development and support is 
exponentially better than most scripting languages in our experience.

I think from what I and everyone else have said here, you can be assured 
that Perl is the right choice now and will be for probably longer than your 
company will be in business or you and I will be breathing.

:)


Re: SQL question: Find next unused number...

2004-09-14 Thread JupiterHost.Net

NIPP, SCOTT V (SBCSI) wrote:
I have a table that has a list of users with numeric user IDs.
The user IDs are not sequential.  There are large gaps in the list of
numeric IDs.  I want to pick the next unused number beginning at 3000.
I'd use an autoincrement column for ID so you don't have to know what 
the next available one is.

For instance:
If you use SQL to find the last ID and then add 1 to it, say it's 3001, 
what happens if someone else inserts a record for 3001 right before you 
do? You'll overwrite her last INSERT statement.

This isn't really a Perl question though...
I'd see your DB vendor's website about auto increment columns and, like 
MySQL has, a LAST_INSERT_ID() function, so you can do an insert and then find 
out what its ID was.

Now if you really need to find it you'd need to do a query like:
 SELECT uid FROM Users ORDER BY uid LIMIT 1;
That will give you the first uid; you can add DESC to the ORDER BY to get 
the last one instead. But you'd need to see your DB vendor's website 
for more details.
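
With DBI on MySQL, the autoincrement route looks roughly like this (a 
hedged sketch; table and column names are made up):

 $dbh->do( 'INSERT INTO Users (name) VALUES (?)', undef, $name );
 my $new_uid = $dbh->{mysql_insertid};
 # or, on reasonably recent DBI:
 # my $new_uid = $dbh->last_insert_id( undef, undef, 'Users', 'uid' );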

HTH :)
Lee.M - JupiterHost.Net


Re: DBI Module.

2004-06-30 Thread JupiterHost.Net

Licensing may not be an issue with mysql but it is an issue with commercial databases.
I use Informix with hundreds of users accessing the database for extremely short queries from a GUI. I cannot afford to have them connected constantly, I would have to spent $100,000 per server in licensing. Having them connect and disconnect for every query has cost me about $3,000 per server using concurrent user licensing.
Oi, Why would anyone want to pay thousands for that when there are so 
many free ones that work the same or better?

Tell you what, I'll build you an SQL server for the low low price of 
$10,000 including the hardware, OS, and database system.

I'd make about $9000 on the deal for doing hardly anything at all, which 
is what the Informix people seem to be doing :)

Or I'll license you the same server so that you can have persistent 
connections and unlimited queries for $3000 a year.

That way your apps will be faster running persistently, and you don't 
have to worry about too many queries costing you too much.

Does it really charge based on the number of queries or connection time?
I'm sure everyone has reasons for tossing cash into the fire, but I'm not 
real clear on why...

Just .02 from a MySQL guy (IE mostly ignorant of the others by choice) :)
Sorry for being sort of OT, won't happen again :)


Re: DBI Module.

2004-06-30 Thread JupiterHost.Net

Sterin, Ilya (I.) wrote:
Commercial databases have commercial support behind them, that's what is driving it.  
Most non-IT companies, do not want to invest in internal departments that are 
responsible for supporting an open source software product.  Until a giant like IBM, 
HP, or similar puts their support behind an open source db product, you will not see 
wide enterprise adoption of them.  And I totally agree with the CIOs, CEOs, etc...  
Money is not necessarily an issue, at least it's not when it comes to business 
continuity.
Here is a good example.  We sold a large company on SuSE (before Novell bought them and put their support behind them).  My sales guy was great, but 7 months later they ran into a support issue and SuSE at the time offered 24/7 enteprise support from Germany.  No problem right, well they called to get an answering machine, send an email that came back with questions about the specifics.  In the case of a large corporation, they would either send someone on site within 4 hours, or help by phone right away.  
 There is also the risk of a small company going under, etc...
Big companies can too, and if the source is secret then where will ya be :)
Granted it's not likely to happen, but then it only takes one time for it all 
to come crashing down.

I feel the likelihood of the economy changing and causing major issues 
for big companies is way higher than that of the open source community 
stopping supporting and developing.

To me all the enterprise level nonsense is just a bunch of big words 
that make it sound important and sell it to people who are clueless.
(Oh yeah, I have some special pills to save your life when we go through 
Halley's tail if you want, half price. They defabulate the spilkcik on 
the gazoinkcik so your body doesn't get depolicated when it's frondilized 
by Halley's interaction with the earth's zaltoid particles ;p Suckers!)

Not many companies are willing to take the risk these days, that's why other benefits 
are important.  Before Linux became enterprise ready, with Novell, IBM, HP support 
behind it, you rarely saw any public company adapt it in mission critical 
environments, that has now changed.  Though this is a case for open source DBs right 
now and because IBM has their own db product, they won't put support behind MySQL and 
PostgreSQL.
Ilya Sterin
I guess I can see that. Still, I've gotten much better support for open 
source products than I have for, say, the Microsoft based networks I've 
admined. And they were for the mission critical (and whatever other 
smart sounding and completely meaningless terms anyone feels like 
tossing around) systems. 1-800-Microsoft wasn't much help, but a 
Microsoft Certified Lackey was available for even more $$.

Oh well, waddayado? :)
Thanks for your input Ilya, I appreciate it; I'm not going off on you, 
just the evil corporate mongers :)

Since I am now waaay OT I'll stop 100% for sure...
[ end rant - have a nice day :) ]


Re: DBI Module.

2004-06-30 Thread JupiterHost.Net

Sterin, Ilya (I.) wrote:
I know Tim will kill me for this OT stuff:-)
Tim's a pretty cool guy it seems, but maybe we should let the horse die :)

I feel the likelihood of the economy changing and causing major issues 
for big companies is way higher than that of the open source community 
stopping supporting and developing.

It's not that the open source community doesn't support it, it's that it's an 
unstructured/unguaranteed support structure.  Do you guarantee me that if my 
Linux kernel came crashing down in some specific environment that can't be 
easily debugged on a home computer, that Linus and the team will come 
running to the rescue?  Within 4 hours?

More likely than Bill :)
Most are, but support and some other legal arrangements are very valid issues.
I hear ya, it just bugs me sometimes :)
I'm happy if you're happy


Re: DBI CGI help

2004-05-16 Thread JupiterHost.Net
leegold wrote:
I have been looking for *clear, simple, up to date,
working code examples* of using DBI and CGI.
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use CGI qw(header param);
Requesting some action to be taken on a form
from user and then returning the user's db select
to them via the browser.
# WARNING : untested code ahead :) #
my $foo = param('foo') || '';
print header();
if($foo) {
    my $c = 0;
    my $dbh = DBI->connect('your connect info here') or die $DBI::errstr;
    my $fooQ = $dbh->quote($foo);
    my $records = $dbh->selectall_arrayref("SELECT Id, Bar FROM Stuff WHERE Foo = $fooQ");
    print "Your Foo will be <ul>\n";
    for(@{ $records }) {
        $c++;
        my($id, $bar) = @{$_};
        print "<li>$id $bar</li>\n";
    }
    print "</ul><br />I found $c Foo(s)<br />\n";
    $dbh->disconnect();
}

print <<"FORM";
<form action="this.pl" method="post">
  Enter Your Foo <input type="text" name="foo" value="$foo" />
  <input type="submit" value="Find Foo" />
</form>
FORM
I've been trying all day on the web and I have 
a lot of fragmented stuff but nothing definitive.

Does anyone know of anything? I'd prefer something
simple and recent. Thanks.
I know there are plenty of things I could improve in the code above but
that should get you started :)
perldoc DBI
perldoc CGI
HTH
Lee.M - JupiterHost.Net


Re: do() with bind_values

2004-05-14 Thread JupiterHost.Net
I'm actually looking for the source code for execute() (IE sub execute { 
... }); do() and quote() are in DBI.pm, prepare() is in DBD/mysql.pm, but 
execute() is in neither.


The source code for execute is in dbd_st_execute() in dbdimp.c, and if you want
to know how the quoting is done, you will need to download the source code
for the mysql client libraries, specifically the function mysql_real_escape_string(),
because that is how DBD::mysql handles the quoting.
HTH, 

Rudy
Thanks Rudy , I'd have never found it! :)


do() with bind_values

2004-05-12 Thread JupiterHost.Net
Howdy group!

perldoc DBI
has this:
   $rv = $dbh->do($statement, \%attr, @bind_values);
So would this be a proper use of it:

 $dbh->do(
   'INSERT INTO Stuff (Id,Foo) VALUES (NULL,?)',
   undef,
   qw('foo' 'bar' 'baz')
 ) or die ...
# IE undef for \%attr and include quoted data
That would essentially run:
 INSERT INTO Stuff (Id,Foo) VALUES (NULL,'foo');
 INSERT INTO Stuff (Id,Foo) VALUES (NULL,'bar');
 INSERT INTO Stuff (Id,Foo) VALUES (NULL,'baz');
since do() does the prepare and execute for you.
correct?
TIA

Lee.M - JupiterHost.Net


Re: do() with bind_values

2004-05-12 Thread JupiterHost.Net
Thanks for all the replies! It's much clearer now :)

Just to clear up the qw() issue: I included quotes so that the data 
would be quote()ed for the query; I wasn't sure if that was necessary or not, 
since do() didn't do quote()...

IE I was passing 'data' (single quotes as part of the string) and not 
data.
That's why I added the note
 # IE undef for \%attr and include quoted data

So do I need to send $dbh->quote($string) -quoted- or $string -unquoted- 
data in @bind_values?

IE - would it be
  $dbh->do(
    'INSERT INTO Stuff (Id,Foo,Bar,Baz) VALUES (NULL,?,?,?)',
    undef,
    $dbh->quote($foo), $dbh->quote($bar), $dbh->quote($baz)
  ) or die ...
or
  $dbh->do(
    'INSERT INTO Stuff (Id,Foo,Bar,Baz) VALUES (NULL,?,?,?)',
    undef,
    $foo, $bar, $baz
  ) or die ...
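
(For the record: placeholders expect raw, unquoted values, because the 
driver quotes them for you, so the second form is the one to use. A minimal 
sketch:)

 $dbh->do(
     'INSERT INTO Stuff (Id,Foo,Bar,Baz) VALUES (NULL,?,?,?)',
     undef,
     $foo, $bar, $baz,    # raw values; no $dbh->quote() needed
 ) or die $dbh->errstr;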
Thanks!

Lee.M - JupiterHost.Net


Re: do() with bind_values

2004-05-12 Thread JupiterHost.Net


Ronald J Kimball wrote:
JupiterHost.Net [mailto:[EMAIL PROTECTED] wrote:


I found do() and quote() in DBI.pm, prepare() in DBD::mysql, but I
couldn't find execute() - I wanted to see how it does the quoting
exactly (for binary data) -
Anyone know where execute() is?


All of these are covered in the DBI documentation.  execute() is in the
section on statement handle methods, rather than database handle methods.
Thanks Ronald,

I'm actually looking for the source code for execute() (IE sub execute { 
... }); do() and quote() are in DBI.pm, prepare() is in DBD/mysql.pm, but 
execute() is in neither.

Any ideas?
Thx
Lee.M - JupiterHost.Net

Ronald







Re: using binary data in variable in an INSERT/UPDATE statement

2004-05-10 Thread JupiterHost.Net
Anyone ever do this before?

JupiterHost.Net wrote:

Hello DBI folks!

If I have a column that is binary and data that is binary, do I simply 
$dbh->quote() it on an insert?

IE

my $binary_content = get_binary_stuff();

my $binary_content_Q = $dbh->quote($binary_content);

$dbh->do("UPDATE MyStuff SET BinaryGoodies=$binary_content_Q WHERE ID=1");

...

Or do I need to convert it to hex and then $dbh->quote() the hex 
version (or not quote it)?

If I do need to make it hex first do I need to unhex it, after I 
SELECT it into a variable, to get it into the original binary format?

(w/ pack()/unpack()??? which is a bit OT for this list...)

TIA

Lee.M - JupiterHost.Net





Re: using binary data in variable in an INSERT/UPDATE statement

2004-05-10 Thread JupiterHost.Net


Scott T. Hildreth wrote:

I guess it would depend on which database you are using.  I would try
Good point :)

I'm thinking MySQL, since this specific app will use MySQL exclusively...

it with a bind column and let the DBD driver handle the quoting if need
be, 

I'm really hoping MySQL will work by doing:

my $binary_quoted = $dbh->quote($binary_data);
$dbh->do("UPDATE MyStuff SET BinaryGuts=$binary_quoted");

I'll have to try that and the bind way and see how it goes...

I've read you have to make it hex, and I've read you can just quote it, so 
I just wanted to make sure :)

$dbh->do(q{
    update mystuff set binaryfld=?
    where id=1
}, undef, $binary_data
);

..although you may have to tell the driver that this is binary data,
i.e. binding a blob when using Oracle (ora_types).
Probably not too much help, but maybe a start,

  STH
 
Thanks a bunch! I really appreciate the input :)

On Mon, 2004-05-10 at 14:18, JupiterHost.Net wrote:

Anyone ever do this before?

JupiterHost.Net wrote:


Hello DBI folks!

If I have a column that is binary and data that is binary, do I simply 
$dbh->quote() it on an insert?

IE

my $binary_content = get_binary_stuff();

my $binary_content_Q = $dbh->quote($binary_content);

$dbh->do("UPDATE MyStuff SET BinaryGoodies=$binary_content_Q WHERE ID=1");

...

Or do I need to convert it to hex and then $dbh->quote() the hex 
version (or not quote it)?

If I do need to make it hex first do I need to unhex it, after I 
SELECT it into a variable, to get it into the original binary format?

(w/ pack()/unpack()??? which is a bit OT for this list...)

TIA

Lee.M - JupiterHost.Net









Re: using binary data in variable in an INSERT/UPDATE statement

2004-05-10 Thread JupiterHost.Net


Andy Hassall wrote:

 The general approach is to always use placeholders, and never directly
interpolate values into SQL statements. This should insulate you from
whatever encoding's needed. Constants in SQL are OK, but if you're putting
"encoding" as in quote()ing, or "encoding" as in prepare/execute will handle 
making the binary data OK to use in the statement?
 (do() does prepare/execute internally, so isn't that the same thing???)

Perl variables in a statement, in most cases this is not the best choice.

e.g.

   my $sth = $dbh->prepare("UPDATE MyStuff SET BinaryGoodies=? WHERE ID=1");
   $sth->execute($binary_content_Q);
So would this do it properly then?
  $rv = $dbh->do('UPDATE MyStuff SET BinaryGoodies=? WHERE ID=1', 
undef, $binary_content_Q);


See:
http://search.cpan.org/~timb/DBI/DBI.pm#Handling_BLOB_/_LONG_/_Memo_Field
Thanks for the url, you'd think I would've looked there already :)


Re: DBI and my.cnf socket file different

2004-04-11 Thread JupiterHost.Net


Rudy Lippan wrote:
On Sat, 10 Apr 2004, JupiterHost.Net wrote:


Via the command line I can connect to MySQL and do what I want fine.
Via DBI I get the classic "Can't connect to local MySQL server through 
socket '/tmp/MySQL.sock' (2)"

So MySQL seems to not be running, but it is! The socket is 
'/usr/local/MySQL/run/MySQL_socket' and is set up that way in my.cnf, so 
if I `ln -s /usr/local/MySQL/run/MySQL_socket /tmp/MySQL.sock` then DBI 
can connect().

So the question is, where does DBI get /tmp/MySQL.sock from and can/how 


It gets it from the libraries that you linked against.

So there is no way to edit a file and change it once and permanently?

can/where would I change it to /usr/local/MySQL/run/MySQL_socket or


Add the connect option ';mysql_socket=/path/to/socket.sock'
 

would it be better if I change my.cnf to use /tmp/MySQL.sock (if so, why)?


Or you can add a ';mysql_read_default_group=client' to your connect string, 
which will cause mysql to read the client group out of my.cnf
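
Concretely, the first option ends up looking something like this (an 
untested sketch; the database name and credentials are placeholders):

 my $dbh = DBI->connect(
     'dbi:mysql:database=mydb;mysql_socket=/usr/local/MySQL/run/MySQL_socket',
     'user', 'pass',
     { RaiseError => 1 },
 );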
Thanks for those 2 ideas, I'll benchmark them and use the fastest one, 
unless I can fix it so I don't have to have a special connect entry or 
create a symlink every time I restart mysql.

Rudy
Thanks for your insights Rudy, I very much appreciate it :)

HAGO

Lee.M - JupiterHost.Net


DBI and my.cnf socket file different

2004-04-10 Thread JupiterHost.Net
Hello Group,

Via the command line I can connect to MySQL and do what I want fine.
Via DBI I get the classic "Can't connect to local MySQL server through 
socket '/tmp/MySQL.sock' (2)"

So MySQL seems to not be running, but it is! The socket is 
'/usr/local/MySQL/run/MySQL_socket' and is set up that way in my.cnf, so 
if I `ln -s /usr/local/MySQL/run/MySQL_socket /tmp/MySQL.sock` then DBI 
can connect().

So the question is, where does DBI get /tmp/MySQL.sock from and can/how 
can/where would I change it to /usr/local/MySQL/run/MySQL_socket or 
would it be better if I change my.cnf to use /tmp/MySQL.sock (if so, why)?

Tia :)

Lee.M - JupiterHost.Net