Re: Column Names

2001-11-02 Thread Bart Lateur

On Thu, 01 Nov 2001 18:56:18 -0800, Venkataramana Mokkapati wrote:

How do I get column names and order of column names
for a select * from ... query.

If you have

$sth = $dbh->prepare("select * from ...");

then try

@column_names = @{$sth->{NAME}};

You may have to do an execute first, for this to return anything of
value.

It's in the DBI docs under the heading Statement Handle Attributes, in
the DBI POD formatted as text around line 2284.
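Putting it together, a minimal self-contained sketch (the DSN, credentials,
and table name below are made up; substitute your own):

    use strict;
    use DBI;

    my $dbh = DBI->connect("dbi:mysql:testdb", "user", "password",
                           { RaiseError => 1 });

    my $sth = $dbh->prepare("select * from some_table");
    $sth->execute;    # some drivers only populate NAME after execute

    my @column_names = @{ $sth->{NAME} };   # names, in SELECT order
    print join(", ", @column_names), "\n";

    $sth->finish;
    $dbh->disconnect;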

-- 
Bart.



Re: Column Names

2001-11-02 Thread Scott R. Godin

In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Bart Lateur) wrote:

 On Thu, 01 Nov 2001 18:56:18 -0800, Venkataramana Mokkapati wrote:
 
 How do I get column names and order of column names
 for a select * from ... query.
 
 If you have
 
   $sth = $dbh->prepare("select * from ...");
 
 then try
 
   @column_names = @{$sth->{NAME}};
 
 You may have to do an execute first, for this to return anything of
 value.
 
 It's in the DBI docs under the heading Statement Handle Attributes, in
 the DBI POD formatted as text around line 2284.

the absolute neatest trick I've seen with this is so totally 
perlish it defies description.. you stare at it for a bit and suddenly 
it all becomes clear. 
 
    $sth->execute
        or die("Cannot Execute SQL Statement: ", $sth->errstr(), "\n");

    my $rows = $sth->rows;
    # only expecting one row for a unique ID ... this should NEVER happen.
    safe_error("invalid number of rows returned from database ($rows) for ID $id")
        if $rows > 1;
    # although this might...
    safe_error("no match in database for ID $id")
        if $rows < 1;

    my %db;
    $sth->bind_columns( \( @db{ @{ $sth->{NAME} } } ) );   # magic

    while ($sth->fetch)
    {
        # ... and no worries about which order the columns get returned in
        # ... since you access them via the $db{ColumnName} method :)
    }

-- 
Scott R. Godin| e-mail : [EMAIL PROTECTED]
Laughing Dragon Services  |web : http://www.webdragon.net/



Re: DBI 1.15+ establishes multiple connections under parent mod_perl process

2001-11-02 Thread Scott R. Godin

In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Eric Kolve) wrote:

 I think I have found a curious bug in DBI.  It seems that since DBI 1.15
 - 1.20, when you bring up apache/mod_perl and execute queries against
 the database handle in the parent process (startup.pl), multiple
 connections result against the database.  If I switch to DBI 1.14, no
 such problem occurs.  I have found this problem occurs with:
 
 DBI 1.20 + DBD::Oracle 1.12
 DBI 1.15 + DBD::Oracle 1.07
 DBI 1.16 + DBD::Oracle 1.07
 
 
 I have turned on Apache::DBI::DEBUG and trace(2) in DBI.  Could someone
 tell me what I should be looking for or can someone else shed any light
 on this? I am not sure if this is necessarily a mod_perl issue or if
 mod_perl is just eliciting a bug in  DBI.
 
 thanks,
 
 --eric

I've noticed this too, and it has *seriously* damaged any credibility I 
might have gained with the admin I'm up against, who is a major PHP 
proponent and who refused to even think about installing mod_perl to 
help the script along after he saw this. :/

-- 
Scott R. Godin| e-mail : [EMAIL PROTECTED]
Laughing Dragon Services  |web : http://www.webdragon.net/



Copying image data from Sybase to Mssql or vice versa

2001-11-02 Thread Veera P. Nallamilli

I am using DBD::Sybase to interact with a Sybase database
and DBD::ODBC for an MS SQL Server database. I tried to copy
image data either way, but unfortunately failed. Could
anybody please suggest the best way to transfer the data
from Sybase to MS SQL Server or vice versa?

Thanks
prasad




Re: Column Names

2001-11-02 Thread Bart Lateur

On Fri, 02 Nov 2001 07:27:49 -0500, Scott R. Godin wrote:

my %db;
$sth->bind_columns( \( @db{ @{ $sth->{NAME} } } ) );   # magic

while ($sth->fetch)
{
#... and no worries about which order the columns get returned in
#... since you access them via the $db{ColumnName} method :)

What's the advantage of this approach over

while (my $db = $sth->fetchrow_hashref) {
...
}

and accessing the data through $db->{ColumnName}?

I can see nothing but an alternative, yet essentially equivalent, method.

-- 
Bart.



Re: Looping through recordset twice

2001-11-02 Thread Don Seiler

Perhaps I wasn't supposed to take it so literally, and my hash handling
skillz aren't quite what they should be.

When I do this:

while (my $hashref = $csr->fetchrow_hashref())
{
  print "Adding record from group " . $hashref->{group_code} . ".\n";
  push (@{$groups{$hashref->{group_code}}}, $hashref);
}

($csr is my $sth var)

The print statement prints nothing for the $hashref->{group_code}
reference.  This seems to be an indication of the start of a domino effect
of things not being seen.

What am I doing wrong?

Don.

On Thu, 1 Nov 2001, David Marshall wrote:

 I'll second the suggestion made a few messages back about storing your
 results in some other data structure (as opposed to re-traversing the data
 structure that DBI gives you).

 In your circumstances, I'd probably put the retrieved rows in a hash of
 arrays (keyed by group number) before doing anything else.  Then I'd
 examine each array of records in the group and delete it from the hash if
 it didn't qualify.  Then the HoA that is left over can be traversed for
 whatever the final output is.

 The code might look something like this:

 my %groups;
 while (my $hashref = $sth->fetchrow_hashref) {
 push @{$groups{$hashref->{group_num}}}, $hashref;
 }

 then later...

 foreach my $group_number (keys %groups) {
 delete $groups{$group_number} unless group_is_OK($groups{$group_number});
 }

 then finally...

 spew_group($_) foreach values %groups;

 YMMV on exact implementation.  In similar implementations, I will often
 have other stuff in the data structure beyond that which I got directly out
 of the database.  It all depends.

 At 01:26 PM 11/1/01 -0600, Don Seiler wrote:
 Perhaps I'm missing it, then.
 
 basically my query is this:
 
 select cust_no, acct_type, acct_status, group_num
 from cust,acct
 where cust.cust_no=acct.cust_no
 order by group_num
 
 the values of acct_type and acct_status for all of the records in
 a group determine if I want that group or not.  I don't think I can make
 that determination until I've gone through the recordset though.
 
 --
 Don Seiler [EMAIL PROTECTED]
 Database Administrator / Sr Software Engineer
 NSightTel Billing LLCPhone:  920.617.7501
 1580 Mid Valley DriveFax:920.617.7493
 De Pere, WI  54115   Cell:   920.606.3240
 Pager: [EMAIL PROTECTED] / 920.613.2000
 
 
 On Thu, 1 Nov 2001, Marcelo Guelfi wrote:
 
  
   Are you sure that you can't use the GROUP BY clause?
  
   Saludos,
 Marcelo.
  
  
  
  
  
  
  
  
  
  
  
   Don Seiler wrote on 01/11/2001 16:13:
  
  
  
  
  
   Basically, when I get to a new group number.  The record set is ordered by
   group number, so all records in a group are together.  As I'm looping
   through records in a group, I do some evaluation and add values to
   variables.  When I get to a new group number, I look at the values.  If
   they meet my criteria I add the last group number to an array.
  
   Then when I'm done I planned to loop again through the record set and if
   the group number matches one in the array I'd print it.
  
   This is probably horribly inefficient and I'm leaning towards saving the
   records to a tmp array and if they qualify saving that to master array for
   later printing.
  
   Don.
  
   On Thu, 1 Nov 2001, Michael Peppler wrote:
  
Don Seiler writes:
  Actually the nature of the problem is what stopped me from doing this.
 
  I won't know which records I want until I look at the group of them.
 
  Example:  I have a table of records.  There is a groupnum column.
   Many
  records have the same groupnum, i.e. they are in the same group.
   I'm
  only interested in selecting the group as a whole.  I will only know
   if I
  want this group based on examining all of the records for that group.
   
  Hmmm - what condition determines that a group is complete?
   
  
  
  
  
  
  






Re: Looping through recordset twice

2001-11-02 Thread Ronald J Kimball

On Fri, Nov 02, 2001 at 08:42:42AM -0600, Don Seiler wrote:
 Perhaps I wasn't supposed to take it so literally, and my hash handling
 skillz aren't quite what they should be.
 
 When I do this:
 
 while (my $hashref = $csr->fetchrow_hashref())
 {
   print "Adding record from group " . $hashref->{group_code} . ".\n";
   push (@{$groups{$hashref->{group_code}}}, $hashref);
 }
 
 ($csr is my $sth var)
 
 The print statement prints nothing for the $hashref->{group_code}
 reference.  This seems to be an indication of the start of a domino effect
 of things not being seen.

Make sure that the keys in %$hashref are in the case you think they are.  I
believe uppercase is the default.  Try $csr->fetchrow_hashref('NAME_lc');
if you want lowercase.
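
For example (a sketch based on the loop from your post; %groups as before):

    while (my $hashref = $csr->fetchrow_hashref('NAME_lc')) {
        # keys are now guaranteed lowercase, e.g. $hashref->{group_code}
        push @{ $groups{ $hashref->{group_code} } }, $hashref;
    }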

Ronald



Re: Looping through recordset twice

2001-11-02 Thread Don Seiler

GENIUS!  That did the trick.

Thanks much,
Don.

On Fri, 2 Nov 2001, Ronald J Kimball wrote:

 On Fri, Nov 02, 2001 at 08:42:42AM -0600, Don Seiler wrote:
  Perhaps I wasn't supposed to take it so literally, and my hash handling
  skillz aren't quite what they should be.
 
  When I do this:
 
  while (my $hashref = $csr->fetchrow_hashref())
  {
    print "Adding record from group " . $hashref->{group_code} . ".\n";
    push (@{$groups{$hashref->{group_code}}}, $hashref);
  }
 
  ($csr is my $sth var)
 
  The print statement prints nothing for the $hashref->{group_code}
  reference.  This seems to be an indication of the start of a domino effect
  of things not being seen.

 Make sure that the keys in %$hashref are in the case you think they are.  I
 believe uppercase is the default.  Try $csr->fetchrow_hashref('NAME_lc');
 if you want lowercase.

 Ronald





Re: DBI 1.15+ establishes multiple connections under parent mod_perl process

2001-11-02 Thread Eric Kolve

I have traced it back to prepare_cached() (at least that is what I
notice).  

Scott, try replacing your calls on startup with prepare() instead of
prepare_cached().

I was also able to eliminate the problem if I commented out the
following line in DBI.pm

 # $dbh->STORE('CachedKids', $cache = {}) unless $cache;   # line 1021,
 # in sub prepare_cached


Of course this is not a solution, but it may give someone else with more
knowledge enough to fix the problem. I will keep digging for answers.

--eric


Scott R. Godin wrote:
 
 In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] (Eric Kolve) wrote:
 
  I think I have found a curious bug in DBI.  It seems that since DBI 1.15
  - 1.20, when you bring up apache/mod_perl and execute queries against
  the database handle in the parent process (startup.pl), multiple
  connections result against the database.  If I switch to DBI 1.14, no
  such problem occurs.  I have found this problem occurs with:
 
  DBI 1.20 + DBD::Oracle 1.12
  DBI 1.15 + DBD::Oracle 1.07
  DBI 1.16 + DBD::Oracle 1.07
 
 
  I have turned on Apache::DBI::DEBUG and trace(2) in DBI.  Could someone
  tell me what I should be looking for or can someone else shed any light
  on this? I am not sure if this is necessarily a mod_perl issue or if
  mod_perl is just eliciting a bug in  DBI.
 
  thanks,
 
  --eric
 
 I've noticed this too, and it has *seriously* damaged any credibility I
 might have gained with the admin I'm up against who is a major PHP
 proponent, and who refused to even think about installing mod_perl to
 help the script along after he saw this. :/
 
 --
 Scott R. Godin| e-mail : [EMAIL PROTECTED]
 Laughing Dragon Services  |web : http://www.webdragon.net/



Re: Column Names

2001-11-02 Thread Michael Peppler

Scott R. Godin writes:
  the absolute neatest trick I've seen with this, that is so totally 
  perlish it defies description.. you stare at it for a bit and suddenly 
  all becomes clear. 
   
  $sth->execute
      or die("Cannot Execute SQL Statement: ", $sth->errstr(), "\n");
  
  my $rows = $sth->rows;
  # only expecting one row for a unique ID ... this should NEVER happen.
  safe_error("invalid number of rows returned from database ($rows) for ID $id")
      if $rows > 1;
  # although this might...
  safe_error("no match in database for ID $id")
      if $rows < 1;

Be careful here!

Most DBI drivers will return -1 for $sth->rows() for a SELECT query.

In the case of DBD::Sybase, $h->rows() will return the correct number
of rows only *after* all the rows have been fetched.

See the DBI docs for details.
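
A portable alternative is to count while you fetch rather than trusting
rows() for a SELECT. A rough sketch, reusing Scott's safe_error() and a
hypothetical $id / query:

    $sth->execute($id) or die $sth->errstr;
    my $count = 0;
    while (my $row = $sth->fetchrow_arrayref) {
        $count++;
        # ... process $row ...
    }
    safe_error("expected exactly one row for ID $id, got $count")
        if $count != 1;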

Michael
-- 
Michael Peppler - Data Migrations Inc. - http://www.mbay.net/~mpeppler
[EMAIL PROTECTED] - [EMAIL PROTECTED]
International Sybase User Group - http://www.isug.com



Re: Looping through recordset twice

2001-11-02 Thread Don Seiler

When I pass in a hash to a function like below:

group_is_OK($groups{$group_number});

I'm confused as to how to handle that argument within the function,
knowing that there are multiple values for that key.

I start out with this:

sub group_is_OK
{
  my $hash = $_;

  # insert a lot of confused and hopeless code here

}

I want to loop through the collection of records passed in and perform my
calculations.  I've tried for and foreach loops, but probably don't know
what kind of data structure I'm dealing with.

the %groups hash is defined as below.

Any hints?

On Thu, 1 Nov 2001, David Marshall wrote:

 I'll second the suggestion made a few messages back about storing your
 results in some other data structure (as opposed to re-traversing the data
 structure that DBI gives you).

 In your circumstances, I'd probably put the retrieved rows in a hash of
 arrays (keyed by group number) before doing anything else.  Then I'd
 examine each array of records in the group and delete it from the hash if
 it didn't qualify.  Then the HoA that is left over can be traversed for
 whatever the final output is.

 The code might look something like this:

 my %groups;
 while (my $hashref = $sth->fetchrow_hashref) {
 push @{$groups{$hashref->{group_num}}}, $hashref;
 }

 then later...

 foreach my $group_number (keys %groups) {
 delete $groups{$group_number} unless group_is_OK($groups{$group_number});
 }

 then finally...

 spew_group($_) foreach values %groups;

 YMMV on exact implementation.  In similar implementations, I will often
 have other stuff in the data structure beyond that which I got directly out
 of the database.  It all depends.

 At 01:26 PM 11/1/01 -0600, Don Seiler wrote:
 Perhaps I'm missing it, then.
 
 basically my query is this:
 
 select cust_no, acct_type, acct_status, group_num
 from cust,acct
 where cust.cust_no=acct.cust_no
 order by group_num
 
 the values of acct_type and acct_status for all of the records in
 a group determine if I want that group or not.  I don't think I can make
 that determination until I've gone through the recordset though.
 
 --
 Don Seiler [EMAIL PROTECTED]
 Database Administrator / Sr Software Engineer
 NSightTel Billing LLCPhone:  920.617.7501
 1580 Mid Valley DriveFax:920.617.7493
 De Pere, WI  54115   Cell:   920.606.3240
 Pager: [EMAIL PROTECTED] / 920.613.2000
 
 
 On Thu, 1 Nov 2001, Marcelo Guelfi wrote:
 
  
   Are you sure that you can't use the GROUP BY clause?
  
   Saludos,
 Marcelo.
  
  
  
  
  
  
  
  
  
  
  
    Don Seiler wrote on 01/11/2001 16:13:
  
  
  
  
  
   Basically, when I get to a new group number.  The record set is ordered by
   group number, so all records in a group are together.  As I'm looping
   through records in a group, I do some evaluation and add values to
   variables.  When I get to a new group number, I look at the values.  If
   they meet my criteria I add the last group number to an array.
  
   Then when I'm done I planned to loop again through the record set and if
   the group number matches one in the array I'd print it.
  
   This is probably horribly inefficient and I'm leaning towards saving the
   records to a tmp array and if they qualify saving that to master array for
   later printing.
  
   Don.
  
   On Thu, 1 Nov 2001, Michael Peppler wrote:
  
Don Seiler writes:
  Actually the nature of the problem is what stopped me from doing this.
 
  I won't know which records I want until I look at the group of them.
 
  Example:  I have a table of records.  There is a groupnum column.
   Many
  records have the same groupnum, i.e. they are in the same group.
   I'm
  only interested in selecting the group as a whole.  I will only know
   if I
  want this group based on examining all of the records for that group.
   
 Hmmm - what condition determines that a group is complete?
   
  
  
  
  
  
  






Re: Looping through recordset twice

2001-11-02 Thread Etienne Marcotte

 my %groups;
  while (my $hashref = $sth->fetchrow_hashref) {
  push @{$groups{$hashref->{group_num}}}, $hashref;
 }

 then later...

 foreach my $group_number (keys %groups) {
 delete $groups{$group_number} unless group_is_OK($groups{$group_number});
 }

I don't understand why you're calling group_is_OK each time..
you can send your whole %groups hash to a sub that checks each group and deletes it
if it's not good. You would then do one function call instead of calling it for each group!

Or, if group_is_OK is a function used elsewhere where you don't want to delete the
not-OK groups, you could have group_is_OK return an array of the keys where groups
are not OK (or of the keys where groups are OK), and then the deleting part would
just delete each record whose key is in the array returned by the group_is_OK
function; a short sketch of this is below. (Hmm, I'm not very clear, hope you understand!)
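
Something like this, roughly (find_bad_groups and the test inside it are
made-up names, not from the thread):

    # return the keys of the groups that fail the check
    sub find_bad_groups {
        my $groups = shift;     # hashref of group_num => arrayref of row hashrefs
        return grep { !group_is_OK($groups->{$_}) } keys %$groups;
    }

    my @bad_keys = find_bad_groups(\%groups);
    delete @groups{@bad_keys};  # hash-slice delete of every failing group at once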

my 0.02$

Etienne


Don Seiler wrote:

 When I pass in a hash to a function like below:

 group_is_OK($groups{$group_number});

 I'm confused as to how to handle that argument within the function,
 knowing that there are multiple values for that key.

 I start out with this:

 sub group_is_OK
 {
   my $hash = $_;

   # insert a lot of confused and hopeless code here

 }

 I want to loop through the collection of records passed in and perform my
 calculations.  I've tried for and foreach loops, but probably don't know
 what kind of data structure I'm dealing with.

 the %groups hash is defined as below.

 Any hints?

 On Thu, 1 Nov 2001, David Marshall wrote:

  I'll second the suggestion made a few messages back about storing your
  results in some other data structure (as opposed to re-traversing the data
  structure that DBI gives you).
 
  In your circumstances, I'd probably put the retrieved rows in a hash of
  arrays (keyed by group number) before doing anything else.  Then I'd
  examine each array of records in the group and delete it from the hash if
  it didn't qualify.  Then the HoA that is left over can be traversed for
  whatever the final output is.
 
  The code might look something like this:
 
  my %groups;
   while (my $hashref = $sth->fetchrow_hashref) {
   push @{$groups{$hashref->{group_num}}}, $hashref;
  }
 
  then later...
 
  foreach my $group_number (keys %groups) {
  delete $groups{$group_number} unless group_is_OK($groups{$group_number});
  }
 
  then finally...
 
  spew_group($_) foreach values %groups;
 
  YMMV on exact implementation.  In similar implementations, I will often
  have other stuff in the data structure beyond that which I got directly out
  of the database.  It all depends.
 
  At 01:26 PM 11/1/01 -0600, Don Seiler wrote:
  Perhaps I'm missing it, then.
  
  basically my query is this:
  
  select cust_no, acct_type, acct_status, group_num
  from cust,acct
  where cust.cust_no=acct.cust_no
  order by group_num
  
  the values of acct_type and acct_status for all of the records in
  a group determine if I want that group or not.  I don't think I can make
  that determination until I've gone through the recordset though.
  
  --
  Don Seiler [EMAIL PROTECTED]
  Database Administrator / Sr Software Engineer
  NSightTel Billing LLCPhone:  920.617.7501
  1580 Mid Valley DriveFax:920.617.7493
  De Pere, WI  54115   Cell:   920.606.3240
  Pager: [EMAIL PROTECTED] / 920.613.2000
  
  
  On Thu, 1 Nov 2001, Marcelo Guelfi wrote:
  
   
Are you sure that you can't use the GROUP BY clause?
   
Saludos,
  Marcelo.
   
   
   
   
   
   
   
   
   
   
   
 Don Seiler wrote on 01/11/2001 16:13:
   
   
   
   
   
Basically, when I get to a new group number.  The record set is ordered by
group number, so all records in a group are together.  As I'm looping
through records in a group, I do some evaluation and add values to
variables.  When I get to a new group number, I look at the values.  If
they meet my criteria I add the last group number to an array.
   
Then when I'm done I planned to loop again through the record set and if
the group number matches one in the array I'd print it.
   
This is probably horribly inefficient and I'm leaning towards saving the
records to a tmp array and if they qualify saving that to master array for
later printing.
   
Don.
   
On Thu, 1 Nov 2001, Michael Peppler wrote:
   
 Don Seiler writes:
   Actually the nature of the problem is what stopped me from doing this.
  
   I won't know which records I want until 

Re: Looping through recordset twice

2001-11-02 Thread David Marshall

The breakout of where a subroutine ends and begins is largely a matter of 
personal preference.  You'll have to iterate over all the groups somewhere; 
I would tend to do it where my rough example showed, so that subroutine 
group_is_OK wouldn't need to know whether the group being examined is part 
of a hash, array, or something else.  One could easily write another 
subroutine to do the hash traversal. YMMV.

For Don:

%groups is a hash of arrays, so the values of the hash are array 
references.  The elements of the array are themselves hashes, seeing as my 
example used fetchrow_hashref.

group_is_OK should begin like this, to continue my example.

sub group_is_OK {
my $aryref = shift; # ref($aryref) will be eq 'ARRAY'
foreach my $group_member (@$aryref) {
   # ref($group_member) eq 'HASH'
}
# some value here that reflects whether the group is OK
}
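
As a concrete (and entirely hypothetical) example, if a group only qualifies
when every record in it has acct_status 'ACTIVE', the body could be:

    sub group_is_OK {
        my $aryref = shift;                  # arrayref of row hashrefs for one group
        foreach my $group_member (@$aryref) {
            return 0 unless $group_member->{acct_status} eq 'ACTIVE';
        }
        return 1;                            # every record passed the test
    }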

I apologize for having presented an incorrect amount of information 
(whether too little or too much, I don't know, heh).  I was mainly trying 
to illustrate why I think there are better alternatives to finding some way 
to loop through a recordset twice.


At 11:24 AM 11/2/01 -0500, Etienne Marcotte wrote:
  my %groups;
  while (my $hashref = $sth->fetchrow_hashref) {
  push @{$groups{$hashref->{group_num}}}, $hashref;
  }
 
  then later...
 
  foreach my $group_number (keys %groups) {
  delete $groups{$group_number} unless 
 group_is_OK($groups{$group_number});
  }

I don't understand why you're calling group_is_OK each time..
you can send all your %group to a sub that checks the group and deletes it 
if not
good? You would have to do one function call instead of calling it for each 
group!

Or if group_is_Ok is a function used elsewhere where you don't want to 
delete the
not_OK groups, you could have group_is_OK return an array of the keys 
where groups
are not ok (or of the keys where groups are OK).. and then the deleting 
part would
just delete each record with the key in the array returned by the group_is_ok
function. (humm I'm not very clear, hope you understand!)

my 0.02$

Etienne


Don Seiler wrote:

  When I pass in a hash to a function like below:
 
  group_is_OK($groups{$group_number});
 
  I'm confused as to how to handle that argument within the function,
  knowing that there are multiple values for that key.
 
  I start out with this:
 
  sub group_is_OK
  {
my $hash = $_;
 
# insert a lot of confused and hopeless code here
 
  }
 
  I want to loop through the collection of records passed in and perform my
  calculations.  I've tried for and foreach loops, but probably don't know
  what kind of data structure I'm dealing with.
 
  the %groups hash is defined as below.
 
  Any hints?
 
  On Thu, 1 Nov 2001, David Marshall wrote:
 
   I'll second the suggestion made a few messages back about storing your
   results in some other data structure (as opposed to re-traversing the 
 data
   structure that DBI gives you).
  
   In your circumstances, I'd probably put the retrieved rows in a hash of
   arrays (keyed by group number) before doing anything else.  Then I'd
   examine each array of records in the group and delete it from the hash if
   it didn't qualify.  Then the HoA that is left over can be traversed for
   whatever the final output is.
  
   The code might look something like this:
  
   my %groups;
   while (my $hashref = $sth->fetchrow_hashref) {
   push @{$groups{$hashref->{group_num}}}, $hashref;
   }
  
   then later...
  
   foreach my $group_number (keys %groups) {
   delete $groups{$group_number} unless 
 group_is_OK($groups{$group_number});
   }
  
   then finally...
  
   spew_group($_) foreach values %groups;
  
   YMMV on exact implementation.  In similar implementations, I will often
   have other stuff in the data structure beyond that which I got 
 directly out
   of the database.  It all depends.
  
   At 01:26 PM 11/1/01 -0600, Don Seiler wrote:
   Perhaps I'm missing it, then.
   
   basically my query is this:
   
   select cust_no, acct_type, acct_status, group_num
   from cust,acct
   where cust.cust_no=acct.cust_no
   order by group_num
   
   the values of acct_type and acct_status for all of the records in
   a group determine if I want that group or not.  I don't think I can make
   that determination until I've gone through the recordset though.
   
   --
   Don Seiler [EMAIL PROTECTED]
   Database Administrator / Sr Software Engineer
   NSightTel Billing LLCPhone:  920.617.7501
   1580 Mid Valley DriveFax:920.617.7493
   De Pere, WI  54115   Cell:   920.606.3240
   Pager: [EMAIL PROTECTED] / 920.613.2000
   
   
   On Thu, 1 Nov 2001, Marcelo Guelfi wrote:
   

 Are you sure that you can't use the GROUP BY clause?

 Saludos,
   Marcelo.












Re: DBI 1.15+ establishes multiple connections under parent mod_perl process

2001-11-02 Thread Tim Bunce

But prepare_cached/prepare has nothing to do with multiple connections.

There were connect() changes made between DBI 1.14 and 1.15 but I'd need
people to look into it for me. Should be trivial to debug by enabling
DBI tracing and Apache::DBI debug.
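
For instance, in startup.pl before any connects (the trace file name is
arbitrary):

    use Apache::DBI;                       # load before other DBI-using modules
    use DBI;
    $Apache::DBI::DEBUG = 2;               # log Apache::DBI connect handling
    DBI->trace(2, "/tmp/dbitrace.log");    # write a level-2 DBI trace to a file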

Tim.

On Fri, Nov 02, 2001 at 07:51:33AM -0800, Eric Kolve wrote:
 I have traced it back to prepare_cached() (at least that is what I
 notice).  
 
 Scott, try replacing your calls on startup with prepare() instead of
 prepare_cached().
 
 I was also able to eliminate the problem if I commented out the
 following line in DBI.pm
 
  # $dbh->STORE('CachedKids', $cache = {}) unless $cache;   # line 1021,
  # in sub prepare_cached
 
 
 Of course this is not a solution, but it may give someone else with more
 knowledge enough to fix the problem. I will keep digging for answers.
 
 --eric
 
 
 Scott R. Godin wrote:
  
  In article [EMAIL PROTECTED],
   [EMAIL PROTECTED] (Eric Kolve) wrote:
  
   I think I have found a curious bug in DBI.  It seems that since DBI 1.15
   - 1.20, when you bring up apache/mod_perl and execute queries against
   the database handle in the parent process (startup.pl), multiple
   connections result against the database.  If I switch to DBI 1.14, no
   such problem occurs.  I have found this problem occurs with:
  
   DBI 1.20 + DBD::Oracle 1.12
   DBI 1.15 + DBD::Oracle 1.07
   DBI 1.16 + DBD::Oracle 1.07
  
  
   I have turned on Apache::DBI::DEBUG and trace(2) in DBI.  Could someone
   tell me what I should be looking for or can someone else shed any light
   on this? I am not sure if this is necessarily a mod_perl issue or if
   mod_perl is just eliciting a bug in  DBI.
  
   thanks,
  
   --eric
  
  I've noticed this too, and it has *seriously* damaged any credibility I
  might have gained with the admin I'm up against who is a major PHP
  proponent, and who refused to even think about installing mod_perl to
  help the script along after he saw this. :/
  
  --
  Scott R. Godin| e-mail : [EMAIL PROTECTED]
  Laughing Dragon Services  |web : http://www.webdragon.net/



Re: Column Names

2001-11-02 Thread Tim Bunce

On Fri, Nov 02, 2001 at 02:18:15PM +0100, Bart Lateur wrote:
 On Fri, 02 Nov 2001 07:27:49 -0500, Scott R. Godin wrote:
 
 my %db;
 $sth->bind_columns( \( @db{ @{ $sth->{NAME} } } ) );   # magic
 
 while ($sth->fetch)
 {
 #... and no worries about which order the columns get returned in
 #... since you access them via the $db{ColumnName} method :)
 
 What's the advantage of this approach over
 
   while (my $db = $sth->fetchrow_hashref) {
   ...
   }
 
 and accessing the data through $db->{ColumnName}?

Speed! It's many times faster (assuming the loop is empty :)

(But use $sth->{NAME_lc} or $sth->{NAME_uc} for portability.)
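
A rough way to see the difference for yourself (assumes an already connected
$dbh and some table; both subs just drain the result set):

    use Benchmark qw(cmpthese);

    cmpthese(-3, {
        hashref => sub {
            my $sth = $dbh->prepare("select * from some_table");
            $sth->execute;
            while (my $row = $sth->fetchrow_hashref) { }
        },
        bind_cols => sub {
            my $sth = $dbh->prepare("select * from some_table");
            $sth->execute;
            my %db;
            $sth->bind_columns( \( @db{ @{ $sth->{NAME_lc} } } ) );
            while ($sth->fetch) { }
        },
    });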

Tim.



RE: Looping through recordset twice

2001-11-02 Thread Don Seiler

I guess I'm not that concerned with that level of detail.  There are a
million different ways to do that part.  It was just the part of finding
the groups that I needed help with.

Thanks though.

Don.

On Fri, 2 Nov 2001, Wilson, Doug wrote:


  From: Don Seiler [mailto:[EMAIL PROTECTED]]
 
  I was thinking of this, since I was about to plagiarize the example of
  using a hash with multiple values per key.
 

 What I want to know is, once you find out a group is 'OK', and
 you go back to print it, is there that much more complex stuff
 to do in the actual printing of it, or would it be easier to just
 print it to a temp location as you're going through it the
 first time, and then once you figure
 out the group is 'OK', copy the temp file to its final
 destination?

 Cheers,
 Douglas Wilson





::massive sql query using dbi - please help::

2001-11-02 Thread Hastie, Christa

Hello to all!
This is my first time posting to this group!  But I'm in desperate need of
any help!  
(BTW, thanks for all the emails from contributors to this list..I learn a
lot from you guys every day!)

I have two tables in a mySQL db, named users_old and users_new, both with
UserId and Email columns, no primary keys and no auto-increment columns.
The users_old table has numeric values for the UserId while the users_new
have NULL values.
The users_old table has 120,000 rows and the users_new has 910,000 rows.
I'm trying to find a simple, painless way of querying these two tables so I
can store the UserId and Email from the users_old table if the Email exists
in both tables.

Everything I try just continues to run without ever producing any results -
it just hangs at the command line when running the standalone query.
Perhaps there are just too many rows to compare.
I tried writing a simple script using the Perl DBI to just log the results
of this massive query in a simple tab delimited flat file so I can load the
data into the live database after it finishes... but no luck.

Anybody have any suggestions on a better approach?
My simple code looks like this:

#!/usr/local/bin/perl

use DBI;
use strict;

my($dbh);
my($sth);
my($exclusive_lock);

eval { $dbh = DBI->connect("DBI:mysql:dbname;host=localhost", "dbuser",
"dbpassword", {'RaiseError' => 1}); };
if($@) { 
    my($error) = "Error opening Database: $@\n"; 
    print "$error\n";
}


my $sth = $dbh->prepare("SELECT users_old.UserId, users_old.Email FROM
users_new, users_old WHERE users_old.Email = users_new.Email"); 
$sth->execute or die "Unable to execute query: " . $dbh->errstr . "\n"; 
my ($row);

while($row = $sth->fetchrow_arrayref) {
    my($data_log) = "/home/chastie/sony_showbiz.txt";
    open (DATATEMP, $data_log);
    flock (DATATEMP, $exclusive_lock);
    print LOG "$row->[0]\t$row->[1]\n";
    close (LOG);
}

$sth-finish; 
$dbh-disconnect; 
exit;

\/ 
   (o o)
ooO-(_)-Ooo
christa hastie
programmer
sonypicturesdigitalentertainment
www.sonypictures.com
-




Performance of Oracle BY Ref cursors

2001-11-02 Thread Matt Heilman

Is there any performance issues (or advantages) to using an Oracle Stored
Procedure and receiving a BY REF cursor, compared to a straight SQL
statement?

Thanks,

Matt



Re: DBI 1.15+ establishes multiple connections under parent mod_perl process

2001-11-02 Thread Scott R. Godin

In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Eric Kolve) wrote:

 I have traced it back to prepare_cached() (at least that is what I
 notice).  
 
 Scott, try replacing your calls on startup with prepare() instead of
 prepare_cached().

no, I'm using prepare(). an earlier post thread of mine (php vs perl) 
has a copy of the script I'm running in it.

 I was also able to eliminate the problem if I commented out the
 following line in DBI.pm
 
  # $dbh->STORE('CachedKids', $cache = {}) unless $cache;   # line 1021,
  # in sub prepare_cached

I haven't tried this though

 Of course this is not a solution, but it may give someone else with more
 knowledge enough to fix the problem. I will keep digging for answers.

-- 
Scott R. Godin| e-mail : [EMAIL PROTECTED]
Laughing Dragon Services  |web : http://www.webdragon.net/



Re: DBI 1.15+ establishes multiple connections under parent mod_perl process

2001-11-02 Thread Scott R. Godin

In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Tim Bunce) wrote:

 
 There were connect() changes made between DBI 1.14 and 1.15 but I'd need
 people to look into it for me. Should be trivial to debug by enabling
 DBI tracing and Apache::DBi debug.

Unless your admin refuses to run any of the mod_perl and Apache::DBI 
stuff compiled in, because he's a php freak and thinks mod_perl is a 
resource pig. :\

-- 
Scott R. Godin| e-mail : [EMAIL PROTECTED]
Laughing Dragon Services  |web : http://www.webdragon.net/



Re: Column Names

2001-11-02 Thread Scott R. Godin

In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Tim Bunce) wrote:

 On Fri, Nov 02, 2001 at 02:18:15PM +0100, Bart Lateur wrote:
  On Fri, 02 Nov 2001 07:27:49 -0500, Scott R. Godin wrote:
  
  my %db;
  $sth->bind_columns( \( @db{ @{ $sth->{NAME} } } ) );   # magic
  
  while ($sth->fetch)
  {
  #... and no worries about which order the columns get returned in
  #... since you access them via the $db{ColumnName} method :)
  
  What's the advantage of this approach over
  
  while(my $db = fetchrow_hashref) {
  ...
  }
  
  and accessing the data through $db->{ColumnName}?
 
 Speed! It's many times faster (assuming the loop is empty :)
 
  (But use $sth->{NAME_lc} or $sth->{NAME_uc} for portability.)
 
 Tim.

with the exception of my case where neither mod_perl nor Apache::DBI is 
compiled in.. 

in the "php vs perl" thread earlier this (last?) month, I posted some 
benchmarks done by the site admin on a search of 5100 rows for c by 
their php script and my perl script.. the results were staggeringly 
different, even with the help of this (see the script I posted in that 
thread for details on what I was doing). The thread issues are posted 
here: [EMAIL PROTECTED]

-- 
Scott R. Godin| e-mail : [EMAIL PROTECTED]
Laughing Dragon Services  |web : http://www.webdragon.net/



Re: Column Names

2001-11-02 Thread Scott R. Godin

In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Michael Peppler) wrote:

   my $rows = $sth->rows;
   # only expecting one row for a unique ID ... this should NEVER happen.
   safe_error("invalid number of rows returned from database ($rows) for ID $id")
       if $rows > 1;
   # although this might...
   safe_error("no match in database for ID $id")
       if $rows < 1;
 
 Be careful here!
 
 Most DBI drivers will return -1 for $sth->rows() for a SELECT query.

=:o

 In the case of DBD::Sybase, $h->rows() will return the correct number
 of rows only *after* all the rows have been fetched.

I guess I'm fortunate that DBD::CSV, DBD::AnyData, and DBD::mysql all work this 
way.

-- 
Scott R. Godin| e-mail : [EMAIL PROTECTED]
Laughing Dragon Services  |web : http://www.webdragon.net/



RE: ::massive sql query using dbi - please help::

2001-11-02 Thread Hastie, Christa

Thanks Marcelo for all three of those points.
I tried performing just the query from mysql, but it just hangs there also,
even if I leave it for an hour!
I've modified the code to stop opening and closing the file in the loop, and
added the lock values, but it's still just running and running and still
produces no results:

#!/usr/local/bin/perl

use DBI;
use strict;

my($dbh);
my($sth);
my($exclusive_lock) = 2;
my($unlock_lock) = 8;

#eval { $dbh = DBI->connect("DBI:mysql:dbname;host=localhost", "dbuser",
"dbpassword", {'RaiseError' => 1}); };
if($@) { 
    print "Content-type: text/html\n\n";
    my($error) = "Error opening Database: $@\n"; 
    print "<html><body>$error</body></html>";
}


my $sth = $dbh->prepare("SELECT users_old.UserId, users_old.Email FROM
users_new, users_old WHERE users_old.Email = users_new.Email"); 
$sth->execute or die "Unable to execute query: " . $dbh->errstr . "\n"; 
my ($row);

my($data_log) = "/home/chastie/sony_showbiz.txt";
open (DATATEMP, $data_log);
flock (DATATEMP, $exclusive_lock);

while($row = $sth->fetchrow_arrayref) {
    print LOG "$row->[0]\t$row->[1]\n" or die "Error: $!\n";
}

flock (DATATEMP, $unlock_lock); 
close (LOG);

$sth->finish; 
$dbh->disconnect; 
exit;


-Original Message-
From: Marcelo Guelfi [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 02, 2001 11:34 AM
To: [EMAIL PROTECTED]
Subject: Re: ::massive sql query using dbi - please help::



1) Did you try to perform that query from mysql instead of a perl program
to measure the time?
2) Is this code correct? :
   open (DATATEMP, $data_log);
   flock (DATATEMP, $exclusive_lock);
   print LOG "$row->[0]\t$row->[1]\n";
   close (LOG);

You never opened the LOG filehandle, so you can't write to it. Add a warn or
die call to capture the error:
  print LOG ... or die "Error: $!\n";

3) Why are you opening,locking and closing the file inside the loop?

Saludos,
  Marcelo.






 

Hastie, Christa wrote on 02/11/2001 15:57:

 

 




Hello to all!
This is my first time posting to this group!  But I'm in desperate need of
any help!
(BTW, thanks for all the emails from contributors to this list..I learn a
lot from you guys every day!)

I have two tables in a mySQL db, named users_old and users_new, both with
UserId and Email columns, no primary keys and no auto-increment columns.
The users_old table has numeric values for the UserId while the users_new
have NULL values.
The users_old table has 120,000 rows and the users_new has 910,000 rows.
I'm trying to find a simple, painless way of querying these two tables so I
can store the UserId and Email from the users_old table if the Email exists
in both tables.

Everything I try just continues to run without ever producing any results -
it just hangs at the command line when running the standalone query.
Perhaps there are just too many rows to compare.
I tried writing a simple script using the Perl DBI to just log the results
of this massive query in a simple tab delimited flat file so I can load the
data into the live database after it finishesbut no luck.

Anybody have any suggestions on a better approach?
My simple code looks like this:

#!/usr/local/bin/perl

use DBI;
use strict;

my($dbh);
my($sth);
my($exclusive_lock);

eval { $dbh = DBI->connect("DBI:mysql:dbname;host=localhost", "dbuser",
"dbpassword", {'RaiseError' => 1}); };
if($@) {
   my($error) = "Error opening Database: $@\n";
   print "$error\n";
}


my $sth = $dbh->prepare("SELECT users_old.UserId, users_old.Email FROM
users_new, users_old WHERE users_old.Email = users_new.Email");
$sth->execute or die "Unable to execute query: " . $dbh->errstr . "\n";
my ($row);

while($row = $sth->fetchrow_arrayref) {
   my($data_log) = "/home/chastie/sony_showbiz.txt";
   open (DATATEMP, $data_log);
   flock (DATATEMP, $exclusive_lock);
   print LOG "$row->[0]\t$row->[1]\n";
   close (LOG);
}

$sth-finish;
$dbh-disconnect;
exit;

\/
   (o o)
ooO-(_)-Ooo
christa hastie
programmer
sonypicturesdigitalentertainment
www.sonypictures.com
-







Re: [netlabs #64] Re: How to install DBI for Oracle

2001-11-02 Thread Ask Bjoern Hansen

On Thu, 1 Nov 2001, Ivan Kavuma wrote:

 I am developing an Intranet for my company.  We have an oracle
 database version 8i running on another Linux machine.  which I
 want to use as a backborn.

Please don't cc the list-owner on mails to the mailinglists.


 - ask

-- 
ask bjoern hansen, http://ask.netcetera.dk/   !try; do();





Script for Directory Permissions on NT Filesystem

2001-11-02 Thread Nigel Gall

Hi!  I'm a new user of Perl and I'm trying to write a script to give me a
condensed list of file permissions of an NTFS file system.  For example, if
I have a folder named E:\, with a hundred subfolders (and so on), I'd like
to get a list of subfolder names with their associated permissions (ACLs).
If possible, I'd like to leave out duplicates.  For example, if E:\temp has
subfolders going down five levels, but each of the subfolders of E:\temp has
the same permissions as E:\temp, then I want the script to just give me the
permissions on the E:\temp folder.  I tried using the find2perl -exec cacls
{} \; command to generate a script for me to modify, but I'm getting
problems with it (and I'm running out of time).

I've searched CPAN's Scripts section but couldn't find anything.  If someone
can point me to alternative archives where I could search I'd appreciate it.
Or if you have a script that does this, even better.  Thanks a lot in
advance!

Best regards,
-- Making IT happen! --
Edmund Nigel Gall
Senior Information Systems Specialist
Industrial Plant Services Limited
Atlantic Avenue, Point Lisas Industrial Estate
Point Lisas, Couva, Trinidad  Tobago, W.I.
Tel: (868) 679-2435 x 274  Fax: (868) 636-2538, 679-3770




Re: ::massive sql query using dbi - please help::

2001-11-02 Thread Jeff Seger

If the query just hangs there in mysql, putting it into DBI is not going to fix
that.  Are the columns indexed in the tables?  If not, you've probably got it
doing nested full table scans.  Try adding indexes to the tables on the join
columns.

Get the query to run in mysql before you ever try it in perl again; otherwise, if there
is anything wrong with your perl code, you'll never know it.
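
For example, from the mysql prompt (index names are arbitrary; if Email is a
TEXT column rather than a VARCHAR you'll need to give a prefix length):

    CREATE INDEX idx_users_old_email ON users_old (Email);
    CREATE INDEX idx_users_new_email ON users_new (Email);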

jeff


Hastie, Christa wrote:

 Thanks Marcelo for all three of those points.
 I tried performing just the query from mysql, but it just hangs there also,
 even if I leave it for an hour!
 I've modified the code to stop opening and closing the file in the loop, and
 added the lock values, but it's still just running and running and still
 produces no resultsh:

 #!/usr/local/bin/perl

 use DBI;
 use strict;

 my($dbh);
 my($sth);
 my($exclusive_lock) = 2;
 my($unlock_lock) = 8;

 #eval { $dbh = DBI->connect("DBI:mysql:dbname;host=localhost", "dbuser",
 "dbpassword", {'RaiseError' => 1}); };
 if($@) {
 print "Content-type: text/html\n\n";
 my($error) = "Error opening Database: $@\n";
 print "<html><body>$error</body></html>";
 }

 my $sth = $dbh->prepare("SELECT users_old.UserId, users_old.Email FROM
 users_new, users_old WHERE users_old.Email = users_new.Email");
 $sth->execute or die "Unable to execute query: " . $dbh->errstr . "\n";
 my ($row);

 my($data_log) = "/home/chastie/sony_showbiz.txt";
 open (DATATEMP, $data_log);
 flock (DATATEMP, $exclusive_lock);

 while($row = $sth->fetchrow_arrayref) {
 print LOG "$row->[0]\t$row->[1]\n" or die "Error: $!\n";
 }

 flock (DATATEMP, $unlock_lock);
 close (LOG);

 $sth->finish;
 $dbh->disconnect;
 exit;

 -Original Message-
 From: Marcelo Guelfi [mailto:[EMAIL PROTECTED]]
 Sent: Friday, November 02, 2001 11:34 AM
 To: [EMAIL PROTECTED]
 Subject: Re: ::massive sql query using dbi - please help::

 1) Did you try to perform that query from mysql instead of a perl program
 to measure the time?
 2) Is this code correct? :
open (DATATEMP, $data_log);
flock (DATATEMP, $exclusive_lock);
print LOG "$row->[0]\t$row->[1]\n";
close (LOG);

 You never opened the LOG filehandle, so you can't write to it. Add a warn or
 die call to capture the error:
   print LOG ... or die "Error: $!\n";

 3) Why are you opening,locking and closing the file inside the loop?

 Saludos,
   Marcelo.

 Saludos,
   Marcelo.



 Hastie, Christa wrote on 02/11/2001 15:57:





 Hello to all!
 This is my first time posting to this group!  But I'm in desperate need of
 any help!
 (BTW, thanks for all the emails from contributors to this list..I learn a
 lot from you guys every day!)

 I have two tables in a mySQL db, named users_old and users_new, both with
 UserId and Email columns, no primary keys and no auto-increment columns.
 The users_old table has numeric values for the UserId while the users_new
 have NULL values.
 The users_old table has 120,000 rows and the users_new has 910,000 rows.
 I'm trying to find a simple, painless way of querying these two tables so I
 can store the UserId and Email from the users_old table if the Email exists
 in both tables.

 Everything I try just continues to run without ever producing any results -
 it just hangs at the command line when running the standalone query.
 Perhaps there are just too many rows to compare.
 I tried writing a simple script using the Perl DBI to just log the results
 of this massive query in a simple tab delimited flat file so I can load the
 data into the live database after it finishesbut no luck.

 Anybody have any suggestions on a better approach?
 My simple code looks like this:

 #!/usr/local/bin/perl

 use DBI;
 use strict;

 my($dbh);
 my($sth);
 my($exclusive_lock);

 eval { $dbh = DBI->connect("DBI:mysql:dbname;host=localhost", "dbuser",
 "dbpassword", {'RaiseError' => 1}); };
 if($@) {
    my($error) = "Error opening Database: $@\n";
    print "$error\n";
 }

 my $sth = $dbh->prepare("SELECT users_old.UserId, users_old.Email FROM
 users_new, users_old WHERE users_old.Email = users_new.Email");
 $sth->execute or die "Unable to execute query: " . $dbh->errstr . "\n";
 my ($row);

 while($row = $sth->fetchrow_arrayref) {
    my($data_log) = "/home/chastie/sony_showbiz.txt";
    open (DATATEMP, $data_log);
    flock (DATATEMP, $exclusive_lock);
    print LOG "$row->[0]\t$row->[1]\n";
    close (LOG);
 }

 $sth-finish;
 $dbh-disconnect;
 exit;

 \/
(o o)
 ooO-(_)-Ooo
 christa hastie
 programmer
 sonypicturesdigitalentertainment
 www.sonypictures.com

Speed up return of result set..

2001-11-02 Thread Adam Frielink

Hi all,

I am connecting to a DB2 database on an AS/400 using the Client Access
Express ODBC driver (version 6).  At times, I will have need to return some
rather large datasets (around 10Mb total).  Unfortunately, the CAE driver
only allows the information to be sent in 8Kb segments before it pauses to
prepare the next 8Kbs of data.  Approx. every 2 seconds it will send the
next batch of data, you can see that it will take roughly 40 minutes to
retrieve the data.

Is there a way to configure the information to be returned in a compressed
format so the transmission is quicker?

How about removing trailing spaces so I can get more record sets?

How about increasing the buffer send size (It might be called the Blocking
Size)?


Much thanks,

Adam Frielink






Re: Script for Directory Permissions on NT Filesystem

2001-11-02 Thread Ian Summers


Hi

You wrote:
Hi!  I'm a new user of Perl and I'm trying to write a script to give me a
condensed list of file permissions of an NTFS file system.  For example, if
I have a folder named E:\, with a hundred subfolders (and so on), I'd like
to get a list of subfolder names with their associated permissions (ACLs).
If possible, I'd like to leave out duplicates.  For example, if E:\temp has
subfolders going down five levels, but each of the subfolders of E:\temp has
the same permissions as E:\temp, then I want the script to just give me the
permissions on the E:\temp folder.  I tried using the find2perl -exec cacls
{} \; command to generate a script for me to modify, but I'm getting
problems with it (and I'm running out of time).

I've searched CPAN's Scripts section but couldn't find anything.  If someone
can point me to alternative archives where I could search I'd appreciate it.
Or if you have a script that does this, even better.  Thanks a lot in
advance!


You may be able to modify this script which goes down all the sub-folders 
looking for stuff: Good luck :-)

--
#!/usr/bin/perl -w
use strict;

my $help = <<'__';

#
#Path Directory
#
#Utility to list all the files and their size in a folder
# and its sub folders.
#
#
#
#To install:
#--
#Place the files PathDirectory.pl and pdir.bat in a folder
#that is included in the machine's PATH. You'll also need perl from
#http://www.activestate.com/Products/ActivePerl/Download.html
#
#To use:
#--
#To run from command prompt:
#
#pdir folder minsize clustersize age savetofile
#
#where:
#
#folder   is the folder name (eg, c:\)
#minsize  restricts reported files to those of this size or more
#clustersize  calculates the actual amount of disc space used
#age  any value (eg, 1) will include age of file in days
#savetofile   saves the summary to file with file name savetofile
#
#Note: Only folders with files are declared.
#
#For example:
#---
#
#pdir c:\documents and settings
#pdir c: 100
#pdir c: 0 0 1 c:\log.txt
#pdir c: 0 512 0 c:\log.txt
#
#To print this message enter ? as the first parameter
#
#Author: Ian Summers [EMAIL PROTECTED]
#Date:   27 January 2001
#
#

__

my $folder = $ARGV[0];

if ($folder =~ /\?/)
{
 $help =~ s/#//g;
 print $help;
 exit;
}

while (!$folder)
{
 print 'Folder: ';
 $folder = <STDIN>;
 chomp $folder;
}

if (!($folder =~ /\\$/))
{
 $folder .= '\\';
}

my $minsize = $ARGV[1];
my $clustersize = $ARGV[2];
my $datemod = $ARGV[3];
my $savedata = $ARGV[4];

my $append = '';
my $error = '';

if (!$minsize)
{
 $minsize = 0;
}

printit ("\nSummary of the files in the chosen path (including its sub-folders)");
if ($minsize)
{
 printit ("\nif their size is >= " . filesize($minsize,1) . ' bytes');
}
printit ("\nOn " . filedate(time) . "\n");
if ($clustersize)
{
 printit ("\n'Used in Cluster' based on a cluster size of $clustersize bytes.\n");
}
if ($datemod)
{
 printit ("\nThe age of each file as the number of days since modified is given.\n");
}
printit ("\n\nFile size bytes");
if ($clustersize)
{
 printit ("  Used in Cluster");
}
if ($datemod)
{
 printit ('Age');
}
printit ("  File Name\n---");
if ($clustersize)
{
 printit ("  ---");
}
if ($datemod)
{
 printit ('  -');
}
printit ("  --\n");

my $totalfilesize = 0;
my $totalclusterfilesize = 0;
my $grandtotalfilesize = 0;
my $grandtotalclusterfilesize = 0;
my $delcarefolder;
my $somefiles = 0;

look ($folder);


printtotal ($totalfilesize, $totalclusterfilesize, '');
printtotal ($grandtotalfilesize, $grandtotalclusterfilesize, "\n\nGrand total:\n\n");


# -

sub look
{
 my $currentfolder = $_[0];
 if ($somefiles)
 {
 $delcarefolder .= "\n\n$currentfolder\n\n";
 } else {
 }
 $delcarefolder = "\n\n$currentfolder\n\n";
 my $first = 1;
 if (opendir DIR, $currentfolder)
 {
 my @folderfiles = grep !/^\.\.?$/, readdir DIR;
 closedir DIR;

 # loop through files
 foreach (sort {lc($a) cmp lc($b)} @folderfiles)
 {
 chomp;
 my $filename = $_;
 my $folderfilename = 

Error handling with DBD::Oracle

2001-11-02 Thread Satyanarayana, KJ

Hello,

We are migrating our systems from Informix to Oracle. We have a lot of code
written using DBI and DBD::Informix. DBD::Informix has something like
ix_sqlcode to return the sqlcode for the operation. Does Oracle have
anything like this, or is the status of a database operation just returned in
$DBI::err? And does DBD::Oracle set the err value after each database
operation?
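
For reference, the generic, driver-independent DBI error interface looks like
this (a sketch with RaiseError off; whether DBD::Oracle's err maps onto the
old ix_sqlcode values the way you need is worth checking in the DBD::Oracle
docs):

    my $rv = $dbh->do($sql);
    unless (defined $rv) {
        my $code = $dbh->err;       # driver-specific numeric error code
        my $msg  = $dbh->errstr;    # error message text
        # $DBI::err / $DBI::errstr mirror the last handle used
        warn "operation failed: $code: $msg\n";
    }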

Thanks.





Re: Script for Directory Permissions on NT Filesystem

2001-11-02 Thread aaron

this isn't on-topic for the dbi list; try perl friends or something like
that.

Aaron

On Fri, 2 Nov 2001, Nigel Gall wrote:

 Hi!  I'm a new user of Perl and I'm trying to write a script to give me a
 condensed list of file permissions of an NTFS file system.  For example, if
 I have a folder named E:\, with a hundred subfolders (and so on), I'd like
 to get a list of subfolder names with their associated permissions (ACLs).
 If possible, I'd like to leave out duplicates.  For example, if E:\temp has
 subfolders going down five levels, but each of the subfolders of E:\temp has
 the same permissions as E:\temp, then I want the script to just give me the
 permissions on the E:\temp folder.  I tried using the find2perl -exec cacls
 {} \; command to generate a script for me to modify, but I'm getting
 problems with it (and I'm running out of time).

 I've searched CPAN's Scripts section but couldn't find anything.  If someone
 can point me to alternative archives where I could search I'd appreciate it.
 Or if you have a script that does this, even better.  Thanks a lot in
 advance!

 Best regards,
 -- Making IT happen! --
 Edmund Nigel Gall
 Senior Information Systems Specialist
 Industrial Plant Services Limited
 Atlantic Avenue, Point Lisas Industrial Estate
 Point Lisas, Couva, Trinidad  Tobago, W.I.
 Tel: (868) 679-2435 x 274  Fax: (868) 636-2538, 679-3770





Re: Column Names

2001-11-02 Thread Tim Bunce

On Fri, Nov 02, 2001 at 02:41:05PM -0500, Scott R. Godin wrote:
 In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] (Tim Bunce) wrote:
 
  On Fri, Nov 02, 2001 at 02:18:15PM +0100, Bart Lateur wrote:
   On Fri, 02 Nov 2001 07:27:49 -0500, Scott R. Godin wrote:
   
   my %db;
    $sth->bind_columns( \( @db{ @{ $sth->{NAME} } } ) );   # magic
    
    while ($sth->fetch)
   {
   #... and no worries about which order the columns get returned in
   #... since you access them via the $db{ColumnName} method :)
   
   What's the advantage of this approach over
   
 while(my $db = fetchrow_hashref) {
 ...
 }
   
    and accessing the data through $db->{ColumnName}?
  
  Speed! It's many times faster (assuming the loop is empty :)
  
   (But use $sth->{NAME_lc} or $sth->{NAME_uc} for portability.)
  
  Tim.
 
 with the exception of my case where neither mod_perl nor Apache::DBI is 
 compiled in.. 

The performance of bind_columns + fetchrow_arrayref vs fetchrow_hashref
is not related to mod_perl or Apache::DBI.

 in the php vs perl thread earlier this (last?) month, I posted some 
 benchmarks done by the site admin on a search of 5100 rows for c by 
 their ph script and my perl script.. the results were staggeringly 
 different, even with the help of this (see the script I posted in that 
 thread for details on what I was doing) the thread issues are posted 
 here: [EMAIL PROTECTED]

Can you give me a url?

Tim.



TYPE integer values and their respective data types

2001-11-02 Thread Dieter

Hello,

I am looking for a complete list of the integer values and their respective data types 
returned from the TYPE statement handle attribute.
I followed both links in the 'Programming the Perl DBI' book, but they did not lead to 
any results. 
I am interested in all MySQL data types, especially in all text data types (there are 
no values for text data types in the book).

Any help (link) would be appreciated.

Thanks,
Dieter



Quoting with placeholders

2001-11-02 Thread Stacy Mader


Hi all,

I have the following:

  $reported_by  = $dbh->quote($reported_by);
  $project_no   = $dbh->quote($project);
  $project_comments = $dbh->quote('NULL');
  $one_line_summary = $dbh->quote($one_line_summary);
  $issue= $dbh->quote($issue);
  $new_report   = $dbh->quote('Y');
  $allocated_to = $dbh->quote('NULL');
  $fix_status   = $dbh->quote('NOT YET ALLOCATED');
  
  $dbh->do(qq{
INSERT INTO $fault_db 
  VALUES ($fault_no, 
  $reported_by,
  $project_no, 
  $project_comments,
  $date_occurred, 
  $date_reported,
  $time_lost,
  $one_line_summary, 
  $issue,
  $new_report,
  $allocated_to,
  $fix_status
 )
});
  
Can the $dbh->do quote my values automatically? I.e., do I
specifically need to call $dbh->quote on every value before
I perform the do() statement? I'm using DBI v1.20 with Oracle
7.3.3.

Thanks in advance,

Regards,
Stacy Mader.



Does perl has DBD for MS Access?

2001-11-02 Thread Linda Xu


Hi,
Does Perl have a DBD for MS Access? Where can I download it?

Linda



RE: Does perl has DBD for MS Access?

2001-11-02 Thread Kokarski, Anton

Hi Linda,

try the ODBC option; that might work.
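
Roughly (the DSN name is whatever you define for your .mdb file in the
Windows ODBC control panel):

    use DBI;
    my $dbh = DBI->connect("dbi:ODBC:MyAccessDSN", "", "",
                           { RaiseError => 1 })
        or die $DBI::errstr;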

Anton

-Original Message-
From: Linda Xu [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 02, 2001 5:42 PM
To: DBI Users
Subject: Does perl has DBD for MS Access?



Hi,
Does Perl have a DBD for MS Access? Where can I download it?

Linda



Re: Quoting with placeholders

2001-11-02 Thread Jeff Zucker

Stacy Mader wrote:
 
   $allocated_to = $dbh->quote('NULL');

That's wrong for two reasons: don't use quote() on something that
already has quotes around it unless you want the literal quotes in the
string; and if you mean an actual SQL NULL, it should not be quoted by
either method.

 Can the $dbh->do quote my values automatically? 

Yes, with placeholders:

  $dbh->do(
      "INSERT INTO $fault_db VALUES (?,?,?,?,?,?,?,?)", undef,
      $reported_by, $project, undef, $one_line_summary, $issue,
      'Y', undef, 'NOT YET ALLOCATED'
  );

Note the use of the first undef, which stands in for the \%attr argument you
don't need.  The following undefs (with no quotes around them) are for
SQL NULLs.
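
If the same statement will be run more than once, the usual shape is to
prepare it once and pass the values to execute(); a sketch along the same
lines, using the same placeholder values as above:

    my $sth = $dbh->prepare(
        "INSERT INTO $fault_db VALUES (?,?,?,?,?,?,?,?)"
    );
    $sth->execute(
        $reported_by, $project, undef, $one_line_summary, $issue,
        'Y', undef, 'NOT YET ALLOCATED'
    );

Note that execute() takes only the bind values; the undef stand-in for \%attr
is a do()-ism.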

-- 
Jeff



Re: Quoting with placeholders

2001-11-02 Thread Stacy Mader

Thanks for that Jeff.

Now that I have:

  $dbh->do("INSERT INTO $fault_db VALUES (?,?,?,?,?,?,?,?,?,?,?,?)",
           undef,
           $fault_no,
           $reported_by,
           $project_no,
           undef,
           $date_occurred,
           $date_reported,
           $time_lost,
           $one_line_summary,
           $issue,
           'Y',
           undef,
           'NOT YET ALLOCATED'
  );
  

I get the error:

DBD::Oracle::db do failed: ORA-01858: a non-numeric character was found
where a numeric was expected (DBD: oexec error) at ff_report.cgi line
450. 

From my Oracle books, the error code states:

The input data to be converted using a date format model was incorrect;
the format model expected a number but found a non-numeric character.

My variables for date_occurred and date_reported are:

 $date_occurred = "to_date(\'$odate_str $otime_str\',\'DD-MON- HH24:MI\')";
 $date_reported = "to_date(\'$rdate_str $rtime_str\',\'DD-MON- HH24:MI\')";

Does this seem right?


Stacy.




Jeff Zucker wrote:
 
 Stacy Mader wrote:
 
$allocated_to = $dbh-quote('NULL');
 
 That's wrong for two reasons: don't use quote() on something that
 already has quotes around it unless you want the literal quotes in the
 string; and if you mean an actual SQL NULL, it should not be quoted by
 either method.
 
  Can the $dbh-do quote my values automatically?
 
 Yes, with placeholders:
 
   $dbh-do(
INSERT INTO $fault_db VALUES (?,?,?,?,?,?,?,?) , undef,
   $reported_by, $project, undef, $one_line_summary, $issue,
   'Y', undef, 'NOT YET ALLOCATED'
   );
 
 Note the use of the first undef which is a stand-in for \%attr which you
 don't need.  The following undefs (with no quotes around them) are for
 SQL NULLs.
 
 --
 Jeff



Re: Quoting with placeholders

2001-11-02 Thread Terrence Brannon


On Friday, November 2, 2001, at 06:30 PM, Stacy Mader wrote:

 Thanks for that Jeff.

 Now that I have:

   $dbh-do(INSERT INTO $fault_db VALUES (?,?,?,?,?,?,?,?,?,?,?,?) ,
 undef,
  $fault_no,
  $reported_by,
  $project_no,
  undef,
  $date_occurred,
  $date_reported,
  $time_lost,
  $one_line_summary,
  $issue,
  'Y',
  undef,
  'NOT YET ALLOCATED'
 );


I don't think it is ever advisable to do inserts without explicitly
listing the columns which will be inserted...
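
For example, a sketch with made-up column names, just to show the shape:

    $dbh->do(
        "INSERT INTO some_table (col_a, col_b, col_c) VALUES (?, ?, ?)",
        undef,
        $a_value, $b_value, $c_value,
    );

That way adding a column to the table later doesn't silently shift every value
one position to the right.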



 I get the error:

 DBD::Oracle::db do failed: ORA-01858: a non-numeric character was found
 where a numeric was expected (DBD: oexec error) at ff_report.cgi line
 450.

 From my Oracle books, the error code states:

 The input data to be converted using a date format model was incorrect;
 the formal model expected a number but found a non-numeric character.

 My variables for date_occurred and date_reported are:

  $date_occurred = "to_date(\'$odate_str $otime_str\',\'DD-MON- HH24:MI\')";
  $date_reported = "to_date(\'$rdate_str $rtime_str\',\'DD-MON- HH24:MI\')";


You don't need the backslashes in this code. Make it:

$date_occurred = "to_date('$odate_str $otime_str', 'DD-MON- HH24:MI')";

 Does this seem right?


Finally, always start your program with use strict. Always.


 Stacy.




 Jeff Zucker wrote:

 Stacy Mader wrote:

   $allocated_to = $dbh-quote('NULL');

 That's wrong for two reasons: don't use quote() on something that
 already has quotes around it unless you want the literal quotes in the
 string; and if you mean an actual SQL NULL, it should not be quoted by
 either method.

 Can the $dbh-do quote my values automatically?

 Yes, with placeholders:

   $dbh-do(
INSERT INTO $fault_db VALUES (?,?,?,?,?,?,?,?) , undef,
   $reported_by, $project, undef, $one_line_summary, $issue,
   'Y', undef, 'NOT YET ALLOCATED'
   );

 Note the use of the first undef which is a stand-in for \%attr which 
 you
 don't need.  The following undefs (with no quotes around them) are for
 SQL NULLs.

 --
 Jeff





RE: Does Perl have a DBD for MS Access?

2001-11-02 Thread Steve Howard

You need DBD::ODBC to access MS Access using Perl and DBI. It can be
downloaded from CPAN, or, if you are using ActivePerl, PPM can search for,
locate, download, and install the module for you.
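
For example, once a DSN has been set up in the Windows ODBC control panel
(the DSN and table names here are made up), connecting looks like this:

    use strict;
    use DBI;

    # 'MyAccessDB' is a hypothetical DSN pointing at the .mdb file
    my $dbh = DBI->connect('dbi:ODBC:MyAccessDB', '', '',
                           { RaiseError => 1, AutoCommit => 1 })
        or die "Cannot connect: $DBI::errstr\n";

    my $sth = $dbh->prepare('SELECT * FROM SomeTable');
    $sth->execute;
    while (my @row = $sth->fetchrow_array) {
        print join("\t", @row), "\n";
    }
    $dbh->disconnect;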

Steve H.

-Original Message-
From: Linda Xu [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 02, 2001 7:42 PM
To: DBI Users
Subject: Does Perl have a DBD for MS Access?



Hi,
Does Perl have a DBD for MS Access? Where can I download it?

Linda




RE: ::massive sql query using dbi - please help::

2001-11-02 Thread Steve Howard

<yours>
The users_old table has 120,000 rows and the users_new has 910,000 rows.
</yours>

If you have no indexes, I'm not at all surprised it takes that long or even
longer to get results from a join on MySQL on two tables with this many
rows. The join must be completed before results are returned, and that is a
long, processor intensive process without indexes.

Can the e-mail address be a primary key on either table? If it can, it should
be. If not, it should at least be indexed.

The only suggestion I have beyond the indexes is: don't use legacy join syntax.
That won't speed up the execution, but it is a good habit to have for when you
want to do more complex queries. Use this type of join syntax:

SELECT users_old.UserId, users_old.Email FROM
users_new INNER JOIN users_old ON users_old.Email = users_new.Email

Aliasing will then save you a bit of typing, but that's secondary: indexing
your tables is what you need to improve the performance in this case.
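
For example, a sketch of adding the indexes through DBI against MySQL
(assuming a connected $dbh; the index names are invented):

    # one-off schema change; lets MySQL resolve the join on Email
    # without scanning every combination of rows
    $dbh->do('ALTER TABLE users_old ADD INDEX idx_users_old_email (Email)');
    $dbh->do('ALTER TABLE users_new ADD INDEX idx_users_new_email (Email)');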

Steve H.


-Original Message-
From: Hastie, Christa [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 02, 2001 12:58 PM
To: [EMAIL PROTECTED]
Subject: ::massive sql query using dbi - please help::


Hello to all!
This is my first time posting to this group!  But I'm in desperate need of
any help!
(BTW, thanks for all the emails from contributors to this list..I learn a
lot from you guys every day!)

I have two tables in a mySQL db, named users_old and users_new, both with
UserId and Email columns, no primary keys and no auto-increment columns.
The users_old table has numeric values for the UserId while the users_new
have NULL values.
The users_old table has 120,000 rows and the users_new has 910,000 rows.
I'm trying to find a simple, painless way of querying these two tables so I
can store the UserId and Email from the users_old table if the Email exists
in both tables.

Everything I try just continues to run without ever producing any results -
it just hangs at the command line when running the standalone query.
Perhaps there are just too many rows to compare.
I tried writing a simple script using the Perl DBI to just log the results
of this massive query in a simple tab delimited flat file so I can load the
data into the live database after it finishes, but no luck.

Anybody have any suggestions on a better approach?
My simple code looks like this:

#!/usr/local/bin/perl

use DBI;
use strict;

my($dbh);
my($sth);
my($exclusive_lock);

eval { $dbh = DBI->connect("DBI:mysql:dbname;host=localhost", "dbuser",
"dbpassword", {'RaiseError' => 1}); };
if ($@) {
    my($error) = "Error opening Database: $@\n";
    print "$error\n";
}


my $sth = $dbh->prepare("SELECT users_old.UserId, users_old.Email FROM
users_new, users_old WHERE users_old.Email = users_new.Email");
$sth->execute or die "Unable to execute query: " . $dbh->errstr . "\n";
my ($row);

while ($row = $sth->fetchrow_arrayref) {
    my($data_log) = "/home/chastie/sony_showbiz.txt";
    open (DATATEMP, ">>$data_log");
    flock (DATATEMP, $exclusive_lock);
    print LOG "$row->[0]\t$row->[1]\n";
    close (LOG);
}

$sth->finish;
$dbh->disconnect;
exit;

\/
   (o o)
ooO-(_)-Ooo
christa hastie
programmer
sonypicturesdigitalentertainment
www.sonypictures.com
-




RE: Copying image data from Sybase to Mssql or vice versa

2001-11-02 Thread Steve Howard

The 'Best way' may not be Perl. Is this a one time shot, or something where
the two servers need to interact constantly?

If this is one time, or something that needs to happen only periodically, I
would recommend Data Transformation Services (DTS). That is part of the MS
SQL installation if it is version 7.0 or higher. It is actually quite good
for transferring data between any two data sources where you can connect to
each either by ODBC or some OLE-DB-compliant interface; neither one of them
has to be MS SQL.

It can be very simple, or you can put together pretty complex transformation
packages that can be stored and executed periodically. I'd recommend
starting out in the MS SQL Server Books Online for details on its use.
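
If the copy does have to stay in Perl/DBI, the general shape is to raise
LongReadLen on the source handle and bind the image column as a long binary
on the insert. A rough sketch only: the DSNs, table and column names are made
up, and how well long binds work varies by driver and version:

    use strict;
    use DBI qw(:sql_types);

    my $src = DBI->connect('dbi:Sybase:server=SYB', 'user', 'pass',
                           { RaiseError  => 1,
                             LongReadLen => 4_000_000,   # big enough for the largest image
                             LongTruncOk => 0 });
    my $dst = DBI->connect('dbi:ODBC:mssql_dsn', 'user', 'pass',
                           { RaiseError => 1 });

    my $sel = $src->prepare('SELECT id, img FROM pictures');
    my $ins = $dst->prepare('INSERT INTO pictures (id, img) VALUES (?, ?)');

    $sel->execute;
    while (my ($id, $img) = $sel->fetchrow_array) {
        $ins->bind_param(1, $id);
        $ins->bind_param(2, $img, SQL_LONGVARBINARY);  # tell the driver it's binary
        $ins->execute;
    }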

Hope this helps.

Steve H.

-Original Message-
From: Veera P. Nallamilli [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 02, 2001 9:53 AM
To: [EMAIL PROTECTED]
Subject: Copying image data from Sybase to Mssql or vice versa


I am using DBD:Sybase to interact with Sybase database
and using DBD:ODBC for Mssql database. I tried to copy
image data either way , but unfortunately failed. Could
anybody please suggest me like what is the best way
to transfer the data from sybase to Mssql or vice versa.

Thanks
prasad




RE: Column Names

2001-11-02 Thread Steve Howard

<yours>
my %db;
$sth->bind_columns( \( @db{ @{ $sth->{NAME} } } ));    # magic

while ($sth->fetch)
{
    #... and no worries about which order the columns get returned in
    #... since you access them via the $db{ColumnName} method :)
</yours>

Right, no worries, but a good point to make is that the hash keys come from an
array, so the order in which the slice returns them is always the order in
which the query returned the columns. That makes it perfectly safe to do
something like this when the two tables have the same columns:

my $select = qq{SELECT * FROM Sometable};
my $selecth = $dbh1->prepare($select)
    || die "Can't prepare\n$select\n$DBI::errstr\n";
$selecth->execute() || die "Can't execute\n$select\n$DBI::errstr\n";

my %col;
$selecth->bind_columns(undef, \( @col{ @{$selecth->{NAME}} } ));

my $columnlist = join(',', @{$selecth->{NAME}});
my $insert = qq{INSERT INTO SomeOtherTable ($columnlist) VALUES (}
    . join(',', ('?') x @{$selecth->{NAME}}) . ')';
my $inserth = $dbh2->prepare($insert)
    || die "Can't prepare\n$insert\n$DBI::errstr\n";

while ($selecth->fetch) {
    # do some manipulation if necessary
    $inserth->execute(@col{ @{$selecth->{NAME}} })
        || die "Can't execute $insert: $DBI::errstr\n";
}

# or if you are making a pipe delimited file instead of inserting elsewhere:

while ($selecth->fetch) {
    # do some manipulation if necessary
    print OUTFILE join('|', @col{ @{$selecth->{NAME}} }), "\n";
}

Syntax untested in that example, but I use the principle sometimes. It gives
the advantage of the speed of bind_columns instead of fetchrow_hashref, plus
the ability to access the columns by name, and it keeps all the columns in
order for use in execute or print or whatever else might be useful. I
find it very slick when I need column names.

Steve H.


-Original Message-
From: Scott R. Godin [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 02, 2001 6:28 AM
To: [EMAIL PROTECTED]
Subject: Re: Column Names


In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Bart Lateur) wrote:

 On Thu, 01 Nov 2001 18:56:18 -0800, Venkataramana Mokkapati wrote:

 How do I get column names and order of column names
 for a select * from ... query.

 If you have

   $sth = $dbh-prepare(select * from ...);

 then try

   @column names = @{$sth-{NAME}};

 You may have to do an execute first, for this to return anything of
 value.

 It's in the DBI docs under the heading Statement Handle Attributes, in
 the DBI POD formatted as text around line 2284.

the absolute neatest trick I've seen with this, that is so totally
perlish it defies description.. you stare at it for a bit and suddenly
all becomes clear.

   $sth-execute
or die(Cannot Execute SQL Statement: , $sth-errstr(), \n);

my $rows = $sth-rows;
 # only expecting one row for a unique ID . this should NEVER happen.
   safe_error(invalid number of rows returned from database ($rows) for
ID $id)
if $rows  1;
# although this might...
safe_error(no match in database for ID $id)
if $rows  1;

my %db;
$sth-bind_columns( \( @db{ @{ $sth-{NAME} } } ));# magic

while ($sth-fetch)
{
#... and no worries about which order the columns get returned in
#... since you access them via the $db{ColumnName} method :)

--
Scott R. Godin| e-mail : [EMAIL PROTECTED]
Laughing Dragon Services  |web : http://www.webdragon.net/




RE: Quoting with placeholders

2001-11-02 Thread Sterin, Ilya

No, the variable can't contain the whole conversion string, since it will be
bound as a value.  Rather do this...

$dbh->do("INSERT INTO $fault_db VALUES (?,?,?,?,to_date(?,'DD-MON- HH24:MI'),to_date(?,'DD-MON- HH24:MI'),?,?,?,?,?,?)",
    undef,
    $fault_no,
    $reported_by,
    $project_no,
    undef,
    $date_occurred,
    $date_reported,
    $time_lost,
    $one_line_summary,
    $issue,
    'Y',
    undef,
    'NOT YET ALLOCATED'
);

Ilya

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
 Behalf Of Stacy Mader
 Sent: Friday, November 02, 2001 9:31 PM
 To: Jeff Zucker
 Cc: dbi-users
 Subject: Re: Quoting with placeholders


 Thanks for that Jeff.

 Now that I have:

   $dbh-do(INSERT INTO $fault_db VALUES (?,?,?,?,?,to_date(?,'DD-MON-
HH24:MI'),to_date(?,'DD-MON- HH24:MI'),?,?,?,?,?) ,
 undef,
  $fault_no,
  $reported_by,
  $project_no,
  undef,
  $date_occurred,
  $date_reported,
  $time_lost,
  $one_line_summary,
  $issue,
  'Y',
  undef,
  'NOT YET ALLOCATED'
 );


 I get the error:

 DBD::Oracle::db do failed: ORA-01858: a non-numeric character was found
 where a numeric was expected (DBD: oexec error) at ff_report.cgi line
 450.

 From my Oracle books, the error code states:

 The input data to be converted using a date format model was incorrect;
 the formal model expected a number but found a non-numeric character.

 My variables for date_occurred and date_reported are:

  $date_occurred = to_date(\'$odate_str $otime_str\',\'DD-MON-
 HH24:MI\');
  $date_reported = to_date(\'$rdate_str $rtime_str\',\'DD-MON-
 HH24:MI\');

 Does this seem right?


 Stacy.




 Jeff Zucker wrote:
 
  Stacy Mader wrote:
  
 $allocated_to = $dbh-quote('NULL');
 
  That's wrong for two reasons: don't use quote() on something that
  already has quotes around it unless you want the literal quotes in the
  string; and if you mean an actual SQL NULL, it should not be quoted by
  either method.
 
   Can the $dbh-do quote my values automatically?
 
  Yes, with placeholders:
 
$dbh-do(
 INSERT INTO $fault_db VALUES (?,?,?,?,?,?,?,?) , undef,
$reported_by, $project, undef, $one_line_summary, $issue,
'Y', undef, 'NOT YET ALLOCATED'
);
 
  Note the use of the first undef which is a stand-in for \%attr which you
  don't need.  The following undefs (with no quotes around them) are for
  SQL NULLs.
 
  --
  Jeff



Re: Quoting with placeholders

2001-11-02 Thread Stacy Mader

Okay,

  $dbh->do(qq{
INSERT INTO $fault_db (
   FAULT_NO,
   REPORTED_BY,
   PROJECT_NO,
   DATE_OCCURRED,
   DATE_REPORTED,
   TIME_LOST,
   ONE_LINE_SUMMARY,
   ISSUE,
   NEW_REPORT,
   ALLOCATED_TO
  )
  VALUES (?,?,?,?,?,?,?,?,?,?)
}, undef,
   $fault_no,
   $reported_by,
   $project_no, 
   $date_occurred, 
   $date_reported,
   $time_lost,
   $one_line_summary, 
   $issue,
   'Y',
   'NOT YET ALLOCATED'
  );
  
Even after defining all columns, I still get the error:

DBD::Oracle::db do failed: ORA-01858: a non-numeric character
was found where a numeric was expected (DBD: oexec error) at
ff_report.cgi line 488. 

The types of each column in the database are:

NAME,TYPE,NULLABLE
'FAULT_NO','NUMBER(38)','Y'
'REPORTED_BY','VARCHAR2(100)','Y'
'PROJECT_NO','VARCHAR2(10)','Y'
'PROJECT_COMMENTS','VARCHAR2(200)','Y'
'DATE_OCCURRED','undef 9(0)','Y'
'DATE_REPORTED','undef 9(0)','Y'
'TIME_LOST','NUMBER(38)','Y'
'ONE_LINE_SUMMARY','VARCHAR2(200)','Y'
'ISSUE','VARCHAR2(2000)','Y'
'NEW_REPORT','VARCHAR2(1)','Y'
'ALLOCATED_TO','VARCHAR2(100)','Y'
'FIX_STATUS','VARCHAR2(29)','Y'

The trace output is included below:



Many thanks,

Stacy.

2   - prepare for DBD::Oracle::db (DBI::db=HASH(0x2dab64)~INNER '
INSERT INTO fault.report (
   FAULT_NO,
   REPORTED_BY,
   PROJECT_NO,
   DATE_OCCURRED,
   DATE_REPORTED,
   TIME_LOST,
   ONE_LINE_SUMMARY,
   ISSUE,
   NEW_REPORT,
   ALLOCATED_TO
  )
  VALUES (?,?,?,?,?,?,?,?,?,?)
' undef)
dbd_preparse scanned 10 distinct placeholders
2   - prepare= DBI::st=HASH(0x3255dc) at DBI.pm line 950
- execute for DBD::Oracle::st (DBI::st=HASH(0x3255dc)~0x3257a4
'1440' 'S MADER' 'p000' 'to_date('03-NOV-2001 13:34', 'DD-MON-
HH24:MI')' 'to_date('03-NOV-2001 16:05', 'DD-MON- HH24:MI')' '10'
'test.' 'test. please ignore.' 'Y' 'NOT YET ALLOCATED')
   bind :p1 == '1440' (type 0)
   bind :p1 == '1440' (size 4/5/0, ptype 4, otype 1)
   bind :p2 == 'S MADER' (type 0)
   bind :p2 == 'S MADER' (size 7/8/0, ptype 4, otype 1)
   bind :p3 == 'p000' (type 0)
   bind :p3 == 'p000' (size 4/5/0, ptype 4, otype 1)
   bind :p4 == 'to_date('03-NOV-2001 13:34', 'DD-MON-
HH24:MI')' (type 0)
   bind :p4 == 'to_date('03-NOV-2001 13:34', 'DD-MON-
HH24:MI')' (size 51/52/0, ptype 4, otype 1)
   bind :p5 == 'to_date('03-NOV-2001 16:05', 'DD-MON-
HH24:MI')' (type 0)
   bind :p5 == 'to_date('03-NOV-2001 16:05', 'DD-MON-
HH24:MI')' (size 51/52/0, ptype 4, otype 1)
   bind :p6 == '10' (type 0)
   bind :p6 == '10' (size 2/3/0, ptype 7, otype 1)
   bind :p7 == 'test.' (type 0)
   bind :p7 == 'test.' (size 5/6/0, ptype 4, otype 1)
   bind :p8 == 'test. please ignore.' (type 0)
   bind :p8 == 'test. please ignore.' (size 20/21/0, ptype 4, otype
1)
   bind :p9 == 'Y' (type 0)
   bind :p9 == 'Y' (size 1/2/0, ptype 4, otype 1)
   bind :p10 == 'NOT YET ALLOCATED' (type 0)
   bind :p10 == 'NOT YET ALLOCATED' (size 17/18/0, ptype 4, otype
1)
dbd_st_execute (for sql f3 after oci f62, out0)...
!! ERROR: 1858 'ORA-01858: a non-numeric character was found where a
numeric was expected (DBD: oexec error)'
- execute= undef at DBI.pm line 951
!! ERROR: 1858 'ORA-01858: a non-numeric character was found where a
numeric was expected (DBD: oexec error)'
- do= undef at ff_report.cgi line 488
- DESTROY for DBD::Oracle::st (DBI::st=HASH(0x325474)~INNER)
- DESTROY= undef at ff_report.cgi line 488
- DESTROY for DBD::Oracle::st (DBI::st=HASH(0x3257a4)~INNER)
- DESTROY= undef at ff_report.cgi line 488
- DESTROY for DBD::Oracle::db (DBI::db=HASH(0x2dab64)~INNER)
- DESTROY= undef at ff_report.cgi line 488



Re: TYPE integer values and their respective data types

2001-11-02 Thread Dodger

www.mysql.com/doc

Dodger
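
At the DBI level (not MySQL-specific), one way to see which integer goes with
which type name is to ask the driver itself via type_info(). A rough sketch,
assuming a connected $dbh and a made-up table name:

    my $sth = $dbh->prepare('SELECT * FROM some_table');
    $sth->execute;

    my @names = @{ $sth->{NAME} };
    my @types = @{ $sth->{TYPE} };

    for my $i (0 .. $#names) {
        my $info = $dbh->type_info( $types[$i] );   # may be undef for odd types
        printf "%-20s %5d  %s\n",
            $names[$i], $types[$i],
            $info ? $info->{TYPE_NAME} : '(no type_info)';
    }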
- Original Message -
From: Dieter [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, November 02, 2001 7:16 PM
Subject: TYPE integer values and their respective data types


Hello,

i am looking for a complete list of the integer values and their respective
data types returned from the TYPE statement handle attribute.
I followed both links in the 'Programming the Perl DBI book, but they did
not lead to any results.
I am interested in all MySQL data types, especially in all text data types
(There are no values for text data types in the book).

Any help (link) would be appreciated.

Thanks,
Dieter





Can't get multiple handles to stay alive

2001-11-02 Thread Rob McMillin

I am having some severe difficulties with multiple DBI objects. It seems
that it is not possible to keep two connections alive and functional at
the same time if both use Oracle but with different userids. Ex:

$dbh1 = DBI->connect("dbi:Oracle:host=foo:sid=bar", "snark",  "snarkpw");
$dbh2 = DBI->connect("dbi:Oracle:host=foo:sid=bar", "jubjub", "jubjubpw");

$sth1 = $dbh1->prepare("select zizzer, foo from blargh where zizzer > 1");
$sth1->execute;
while (@f = $sth1->fetchrow_array)
{
  my ($zizzer, $foo) = @f;
  $dbh2->do("update yawp set zizzer=?, foo=?", undef, $zizzer, $foo);
}

I'm perpetually getting ORA-03113 end-of-file on communication channel
on the do when I try to use this. Any hints on how to approach this
problem, other than rebuilding a shiny new connection each time I need
the other access? The reason for it is that I have two levels of
security, and one user should really only be used for the update.
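
That won't explain the ORA-03113 by itself, but for what it's worth the usual
DBI shape for a loop like this prepares the update once on the second handle
and executes it per row (statement text copied from above; a sketch only):

    my $sth1 = $dbh1->prepare('select zizzer, foo from blargh where zizzer > 1');
    my $upd  = $dbh2->prepare('update yawp set zizzer = ?, foo = ?');  # add a WHERE clause as needed

    $sth1->execute;
    while (my ($zizzer, $foo) = $sth1->fetchrow_array) {
        $upd->execute($zizzer, $foo);   # re-uses the prepared update on $dbh2
    }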

I have also seen trouble when mixing MySQL db handles with Oracle db
handles (queries complete too early sometimes).

Any ideas?

--
  http://www.pricegrabber.com | Dog is my co-pilot.






Re: mod_perl and dbi

2001-11-02 Thread MADI REDDY GARI SUBBA REDDY

Then how does it compare with CGI::FastCGI, and
what are the differences between FastCGI and the Embperl package?

Which is better?
In what circumstances are FastCGI and Embperl suitable?

Please give clear info on what to choose.

Thank you in advance.

-SubbaReddy

 
--

On Tue, 30 Oct 2001 02:19:31  
 Andy Duncan wrote:
Hi Dan,

 Can someone give me some examples of high-volume, high-profile sites using
 mod_perl and dbi?

A lot of people use Embperl (HTML::Embperl), often with mod_perl and DBI.  You
can see some of these sites here:

=> http://perl.apache.org/embperl/Sites.pod.1.html

A lot of other people also use Mason (HTML::Mason), another Perl HTML component
templating engine, again mostly used with mod_perl and often DBI riding shotgun
:)

=> http://www.masonhq.com/about/sites.html

You may also want to check:

=> http://perl.apache.org/sites.html
=> http://www.apache-asp.org/sites.html
=> http://www.apacheweek.com/features/mod_perl01#sect12
=> http://perl.apache.org/netcraft/

HTH a bit! :-)

Rgds,
AndyD


=
Make Someone Happy.  Buy a Copy of:
=> http://www.oreilly.com/catalog/oracleopen/
-BEGIN GEEK CODE BLOCK-
GO/SS/TW d- s+:+ a C++$ U++$ P$ L++$ !E W+ N+ K- W O- 
M+ V-- PS+ PE++ Y+ PGP t+@ 5 X- R* tv- b+++ DI++ D G e++ 
h r+++ y 
--END GEEK CODE BLOCK--



