Re: DBD::CSV attempt to free unreferenced scalar during global destruction

2017-01-16 Thread Ron Savage

Hi Roy

I ran your code - unsuccessfully - and added the output as a comment to
your gist.


On 16/01/17 20:54, Roy Storey wrote:

Hello,

I'm here looking for help with an issue I'm having with DBD::CSV.

Specifically, I'm attempting to use the 'after_parse' callback to handle
a csv file with a data-defined, variable number of columns, and I hit a
warning about an attempt to free an unreferenced scalar during global
destruction.

I've prepared a minimal example at
https://gist.github.com/kiwiroy/fa0c737ff3f298cb064e554505bc4495 to show
the issue. The two test scripts process the 'input.csv' with Text::CSV
and DBD::CSV respectively, and only the latter exhibits the behaviour.

It appears to be 5.24.0 specific. I'm using default perlbrew built perls.

Is there something I'm missing or doing wrong?
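
For reference, this is the sort of wiring I mean - a minimal sketch, not
the code from my gist; the target column count of 5 and the file name are
assumptions, and it needs a Text::CSV_XS new enough to support the
callbacks attribute:

use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new ({
    binary    => 1,
    auto_diag => 1,
    callbacks => {
        # pad short rows so every row has the same column count
        after_parse => sub {
            my ($csv, $row) = @_;
            push @$row, "" while @$row < 5;
            },
        },
    });

open my $fh, "<", "input.csv" or die "input.csv: $!";
while (my $row = $csv->getline ($fh)) {
    print join ("|", @$row), "\n";
    }
close $fh;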

Thanks in advance.

Roy




--
Ron Savage - savage.net.au


Re: DBD::CSV

2015-02-26 Thread Furst, Carl
What system, version of Excel?
I’m using Mac Office on OS 10.9.5


Carl Furst


From: Matthew Musgrove <mr.musk...@gmail.com>
Date: Wednesday, February 25, 2015 at 2:31 PM
Cc: Perl DBI mailing list <dbi-users@perl.org>
Subject: Re: DBD::CSV


MLB.com: Where Baseball is Always On


Re: DBD::CSV

2015-02-26 Thread Matthew Musgrove
Perhaps this is specific to Mac Office or OS X? I'm running MS Office Pro
Plus 2010 on Windows 7 Pro SP1.


On Thu, Feb 26, 2015 at 11:05 AM, Furst, Carl <carl.fu...@mlb.com> wrote:

   What system, version of Excel?
 I’m using Mac Office on OS 10.9.5


  Carl Furst




Re: DBD::CSV

2015-02-25 Thread Matthew Musgrove
Carl,
When I tried it just now (first time using DBD::CSV) it just worked as I
would expect.

First, here's my script that I named test.pl:
#!/bin/env perl
use strict;
use warnings;

use Data::Dumper;
use DBI;
use Try::Tiny qw( try catch );

my $dbh = DBI->connect( 'DBI:CSV:', '', '', {
    f_schema   => undef,
    f_dir      => '.',
    f_ext      => '.csv/r',
    RaiseError => 1,
    PrintError => 0,
}) or die "Cannot connect: $DBI::errstr";

my $test = try {
    return $dbh->selectall_arrayref( 'select * from test' );
}
catch {
    my $err = $_;
    if ( $err =~ m!^.*DBD::CSV::db.*?Execution ERROR: (.*?) at
.*?DBI/DBD.*?line \d+.*?called from (.*? at \d+)\..*?$!s )
    {
        my ( $msg, $where ) = ( $1, $2 );
        warn "$msg -- $where\n";
    }
    else
    {
        warn "$err\n";
    }
    return {};
};

print Dumper( $test );
__END__

Here is my test.csv file:
id,value
a,1
b,2
c,3

When I run it with test.csv (whether or not it is open in Excel) I get this
output:
$VAR1 = [
  [
'a',
'1'
  ],
  [
'b',
'2'
  ],
  [
'c',
'3'
  ]
];

When I run it from a network drive I get this output:
Cannot obtain shared lock on /export/home/mmusgrove/test.csv: No locks
available -- ./test.pl at 18
$VAR1 = {};
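
As for the locking question itself: DBD::File's f_lock attribute picks
the flock mode (0 = no locking, 1 = shared, 2 = exclusive), and as far as
I can tell there is no built-in lock timeout. A minimal sketch with
illustrative values, not code from these posts:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'DBI:CSV:', '', '', {
    f_dir      => '.',
    f_ext      => '.csv/r',
    f_lock     => 0,   # 0 = skip flock entirely, so Excel's lock is ignored
    RaiseError => 1,
    PrintError => 0,
}) or die "Cannot connect: $DBI::errstr";

my $rows = $dbh->selectall_arrayref( 'select * from test' );
print scalar @$rows, " row(s)\n";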

HTH,
Matt


On Wed, Feb 25, 2015 at 11:25 AM, Furst, Carl <carl.fu...@mlb.com> wrote:

 I think someone wrote about this before, but I just wanted to bring it up
 again.

 Is there a way to have DBD::CSV produce a warning or a message (like when
 PrintError and RaiseError are set) to say that it’s waiting for a file
 lock??

 For example if I’m using my oh so beloved MS Excel and it locks the csv
 file and at the same time I’m trying to also run my even more beloved perl
 script which is trying to open the file for read. The script of course
 hangs when trying to execute any statements because it’s waiting for the
 flock put there by MS Excel. Is there a lock timeout? Is that what f_lock
 => 2 is for??

 At least I’m pretty sure that’s what’s happening, because when I open it in
 things that put the whole file into memory (like my other beloved text
 editor) I have no problem, but Excel likes to lock things, it seems. When
 I close the file in Excel, of course, the perl runs like a charm.

 Thanks,
 Carl Furst

 **

 MLB.com: Where Baseball is Always On



Re: DBD::CSV on MacOSX Lion using cpanm fails to $r->fetch in t/50_chopblanks.t line 41

2012-12-04 Thread Jens Rehsack

On 30.11.12 09:49, Allen van der Ross wrote:

Hi Jens,


Hi Allen,

am I allowed to ask why you're writing to my NetBSD.org address instead
of my CPAN address? At first I looked for a mistake I made in the last
DBI/DBD::* update for pkgsrc ;)



I've installed all the prerequisites,


Congrats ;)


but using cpanm DBD::CSV fails on the tests:

simba:db allen$ cpanm -v DBD::CSV
cpanm (App::cpanminus) 1.5008 on perl 5.014002 built for darwin-2level
Work directory is /Users/allen/.cpanm/work/1354261809.36053
You have make /usr/bin/make
You have LWP 6.04
You have /usr/bin/tar: bsdtar 2.8.3 - libarchive 2.8.3
You have /usr/bin/unzip
Searching DBD::CSV on cpanmetadb ...
--> Working on DBD::CSV
Fetching http://search.cpan.org/CPAN/authors/id/H/HM/HMBRAND/DBD-CSV-0.36.tgz ... OK
Unpacking DBD-CSV-0.36.tgz

skipped lots of stuff here

t/50_chopblanks.t ... 2/?
#   Failed test 'fetch'
#   at t/50_chopblanks.t line 41.

#   Failed test 'content'
#   at t/50_chopblanks.t line 42.
# Structures begin differing at:
#  $got = undef
# $expected = ARRAY(0x7fc2b4385290)

#   Failed test 'fetch'
#   at t/50_chopblanks.t line 41.

#   Failed test 'content'
#   at t/50_chopblanks.t line 42.
# Structures begin differing at:
#  $got = undef
# $expected = ARRAY(0x7fc2b41ab938)
# Looks like you failed 4 tests of 65.
t/50_chopblanks.t ... Dubious, test returned 4 (wstat 1024, 0x400)
Failed 4/65 subtests


Looks as if you should join RT#81523
(https://rt.cpan.org/Ticket/Display.html?id=81523). It would help if you
could try out the current repository HEAD revision from
http://repo.or.cz/w/DBD-CSV.git and send us feedback (we enhanced the
test because we can't reproduce this on Mountain Lion).



Here are the versions of Perl and the related modules…

simba:db allen$ perl -v

This is perl 5, version 14, subversion 2 (v5.14.2) built for darwin-2level

Copyright 1987-2011, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl".  If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.

simba:db allen$ perl -MSQL::Statement -E 'say $SQL::Statement::VERSION'
1.401
simba:db allen$ perl -MText::CSV_XS -E 'say $Text::CSV_XS::VERSION'
0.93
simba:db allen$ perl -MDBI -E 'say $DBI::VERSION'
1.618
simba:db allen$


Looks fine, even if DBI could be newer - but that's not the reason.


The Mac OSX Lion version is 10.7.5

How do I resolve this problem or is there something I can do to help you 
resolve this problem?


The best thing you can do is help us figure out what really goes wrong.
We don't know what fails and why (Merijn, please correct me if I'm
wrong), but we're willing to investigate.



Regards,
Allen.


Cheers,
Jens



Re: DBD::CSV and skip_first_line

2012-11-26 Thread H.Merijn Brand
On Mon, 26 Nov 2012 11:49:49 -0500, Scott R. Godin scot...@mhg2.com
wrote:

 
 On 11/25/2012 04:16 AM, Jens Rehsack wrote:
  On 25.11.12 10:00, H.Merijn Brand wrote:
  On Fri, 23 Nov 2012 17:43:50 -0500, Scott R. Godin scot...@mhg2.com
  wrote:
 
  I've run into an issue where I need both col_names set and
  skip_first_line still set to TRUE, because of malformed colnames in the
  original dumpfiles that conflict with SQL Reserved Words (such as
  'key')
  that I am unable to find any other acceptable workaround short of
 
  Why not automate the hacking using Text::CSV_XS and rewrite the header
  before using DBD::CSV?
 
  Or simply quote the column names in your SQL statement?
 
 I tried various quoting mechanisms up to and including attempting to use
 backticks, but all result in errors of one kind or another

Can you attach the first 4 lines of your csv datafile?

 $dbh->prepare(q{Select 'key', PHM_ID, DAW_CD, GENBRND_CD from clms limit
 10})
 results in every record having the literal value 'key' for the column `key`;
 same if I try select 'key' as PKEY
 
 if I switch to double-quotes rather than single quotes around key in
 the above, I get the following error:
 Execution ERROR: No such column 'key' called from clms_test.pl at 23.
 
 I'll look into playing with Text::CSV_XS, and see what I can come up with.
 
 I still think it would be easier if skip_first_line were not presumed
 (forced to) false if col_names is set, but rather presumed false only if
 not explicitly set true.

We agree; we are investigating what is actually required (and what should
be documented).

-- 
H.Merijn Brand  http://tux.nl   Perl Monger  http://amsterdam.pm.org/
using perl5.00307 .. 5.17   porting perl5 on HP-UX, AIX, and openSUSE
http://mirrors.develooper.com/hpux/http://www.test-smoke.org/
http://qa.perl.org   http://www.goldmark.org/jeff/stupid-disclaimers/


Re: DBD::CSV and skip_first_line

2012-11-26 Thread H.Merijn Brand
On Mon, 26 Nov 2012 12:29:25 -0500, Scott R. Godin scot...@mhg2.com
wrote:

 On 11/26/2012 11:56 AM, H.Merijn Brand wrote:
  On Mon, 26 Nov 2012 11:49:49 -0500, Scott R. Godin scot...@mhg2.com
  wrote:
 
  On 11/25/2012 04:16 AM, Jens Rehsack wrote:
  On 25.11.12 10:00, H.Merijn Brand wrote:
  On Fri, 23 Nov 2012 17:43:50 -0500, Scott R. Godin scot...@mhg2.com
  wrote:
  I've run into an issue where I need both col_names set and
  skip_first_line still set to TRUE, because of malformed colnames in the
  original dumpfiles that conflict with SQL Reserved Words (such as
  'key')
  that I am unable to find any other acceptable workaround short of
  Why not automate the hacking using Text::CSV_XS and rewrite the header
  before using DBD::CSV?
  Or simply quote the column names in your SQL statement?
  I tried various quoting mechanisms up to and including attempting to use
  backticks, but all result in errors of one kind or another
  Can you attach the first 4 lines of your csv datafile?
 Unfortunately, no, as I am under HIPAA restrictions.
 
 key consists of seemingly random alphanumeric [A-Z0-9] sequences that
 may or may not contain one dash (at about position 11), of 16-char length
 PHM_ID consists of P\d{7} and may repeat across records
  of the other two fields DAW_CD is numeric(1) and GENBRND_CD is boolean
 all records are pipe-delimited

Well, in that case, I'd use Text::CSV_XS to rewrite the data before
using DBD::CSV on it. I do sorta the same on a regular basis, as my
customers make typos in the headers and change them all the time.

In any case, 'key' quoted/unquoted has no magical value at all. The
key name is what you pass in col_names or in the first line. That
the first line should be skippable when specifying your own column
names is already agreed on.

 The actual csv contains 44 columns; in the interest of brevity I limited
 the sample to the below four. :)
  $dbh->prepare(q{Select 'key', PHM_ID, DAW_CD, GENBRND_CD from clms limit
  10})
  results in every record having the literal value 'key' for the column `key`;
  same if I try select 'key' as PKEY

--8<--- example of CSV rewrite, written without checking, from the top of my head
use Text::CSV_XS;

my $csv_o = Text::CSV_XS->new ({
    binary    => 1,
    auto_diag => 1,
    eol       => "\n",
    });
my $csv_i = Text::CSV_XS->new ({
    binary    => 1,
    auto_diag => 1,
    sep_char  => "|", # | often requires allow_whitespace => 1
    });

# translation table for known-bad header names, read from DATA below
my %hdr;
while (<DATA>) {
    m/^#/ and next;
    my ($fld, $text) = m/^(\S+)\s+(.*)/ or next;
    $hdr{$text} = $fld;
    }

# $fh is the input handle, $out the output handle (opened elsewhere)
my @hdr;
for (@{$csv_i->getline ($fh)}) {
    if (exists $hdr{$_}) {
        push @hdr, $hdr{$_};
        next;
        }
    # special hardcoded fields
    if (m/^[A-Za-z]\w{0,7}-?\w{1,12}$/) {
        push @hdr, "key"; # This is a key?
        next;
        }
    die "I do not know how to translate '$_' to something useful";
    }
my %rec;
$csv_i->bind_columns (\@rec{@hdr});
$csv_o->print ($out, \@hdr);
while ($csv_i->getline ($fh)) {
    $csv_o->print ($out, \@rec{@hdr});
    }

__END__
# Wanted   Seen in their shit
c_foo      code foo
c_foo      kode foo
c_foo      kode-foo
c_foo      kode_foo
foo        description of foo
foo        omschrijving foo
foo        omschr. foo
--8<---

-- 
H.Merijn Brand  http://tux.nl   Perl Monger  http://amsterdam.pm.org/
using perl5.00307 .. 5.17   porting perl5 on HP-UX, AIX, and openSUSE
http://mirrors.develooper.com/hpux/http://www.test-smoke.org/
http://qa.perl.org   http://www.goldmark.org/jeff/stupid-disclaimers/


Re: DBD::CSV and skip_first_line

2012-11-26 Thread Scott R. Godin
On 11/26/2012 11:56 AM, H.Merijn Brand wrote:
 On Mon, 26 Nov 2012 11:49:49 -0500, Scott R. Godin scot...@mhg2.com
 wrote:

 On 11/25/2012 04:16 AM, Jens Rehsack wrote:
 On 25.11.12 10:00, H.Merijn Brand wrote:
 On Fri, 23 Nov 2012 17:43:50 -0500, Scott R. Godin scot...@mhg2.com
 wrote:
 I've run into an issue where I need both col_names set and
 skip_first_line still set to TRUE, because of malformed colnames in the
 original dumpfiles that conflict with SQL Reserved Words (such as
 'key')
 that I am unable to find any other acceptable workaround short of
 Why not automate the hacking using Text::CSV_XS and rewrite the header
 before using DBD::CSV?
 Or simply quote the column names in your SQL statement?
 I tried various quoting mechanisms up to and including attempting to use
 backticks, but all result in errors of one kind or another
 Can you attach the first 4 lines of your csv datafile?
Unfortunately, no, as I am under HIPAA restrictions.

key consists of seemingly random alphanumeric [A-Z0-9] sequences that
may or may not contain one dash (at about position 11), of 16-char length
PHM_ID consists of P\d{7} and may repeat across records
 of the other two fields DAW_CD is numeric(1) and GENBRND_CD is boolean
all records are pipe-delimited

The actual csv contains 44 columns; in the interest of brevity I limited
the sample to the below four. :)
  $dbh->prepare(q{Select 'key', PHM_ID, DAW_CD, GENBRND_CD from clms limit
  10})
  results in every record having the literal value 'key' for the column `key`;
  same if I try select 'key' as PKEY

  if I switch to double-quotes rather than single quotes around key in
  the above, I get the following error:
  Execution ERROR: No such column 'key' called from clms_test.pl at 23.

 I'll look into playing with Text::CSV_XS, and see what I can come up with.

 I still think it would be easier if skip_first_line were not presumed
 (forced to) false if col_names is set, but rather presumed false only if
  not explicitly set true.
 We agree; we are investigating what is actually required (and what should
 be documented).


-- 
Scott R. Godin, Senior Programmer
MAD House Graphics 
 302.468.6230 - main office
 302.722.5623 - home office



Re: DBD::CSV and skip_first_line

2012-11-26 Thread Scott R. Godin

On 11/26/2012 11:56 AM, H.Merijn Brand wrote:
 Can you attach the first 4 lines of your csv datafile?
Here is some randomized data that closely resembles the data in the csv,
if this is any help in working with variations on

$dbh->prepare(q{Select key, PHM_ID, DAW_CD, GENBRND_CD from clms limit 10});

(bearing in mind the csv contains 44 columns, not 4, and this is just a sample)


key|PHM_ID|DAW_CD|GENBRND_CD
667291120KNM4728|P1951532|2|0
858525298EEA3248|P8697017|5|0
286424010HTG2644|P8607393|3|1
344987842DYH2950|P8662248|3|0
225509049XEU3393|P1222508|1|0
061473729SFZ1183|P2785408|6|0
370501125YPF2594|P1534462|2|0
620354050CRF3119|P4438944|3|1
901228431AUF5822|P5315769|1|0
969358370QPO9757|P1523687|8|0
543692286WTA5861|P5993819|1|0
591327753QVR5452|P1013462|4|0
159204117LXL0308|P5358769|8|1
352853355KYT5615|P2810873|3|1
195099617GNE7056|P1306424|6|0


-- 
Scott R. Godin, Senior Programmer
MAD House Graphics 
 302.468.6230 - main office
 302.722.5623 - home office



Re: DBD::CSV and skip_first_line

2012-11-25 Thread H.Merijn Brand
On Fri, 23 Nov 2012 17:43:50 -0500, Scott R. Godin scot...@mhg2.com
wrote:

 I've run into an issue where I need both col_names set and
 skip_first_line still set to TRUE, because of malformed colnames in the
 original dumpfiles that conflict with SQL Reserved Words (such as 'key')
 that I am unable to find any other acceptable workaround short of

Why not automate the hacking using Text::CSV_XS and rewrite the header
before using DBD::CSV?

 hacking the dumpfiles prior to import for unit-testing and validation
 prior to splitting the dumpfiles out into a normalized sql database.
 (I'll need to hand this off to someone else for future dump-and-import
 work, so it's just got to WORK with these ACCESS database dump files
 as-is, plus HIPAA rules about not changing data complicates matters)
 
 Is there any way to ensure that despite col_names being set, I can still
 force skip_first_line => 1? Or should I report this as a possible
 edge-case bug?

There should be, and it is likely that this case is already fixed in
the current development state of DBD::CSV by the valuable work of Jens
Rehsack, but that state is not (yet) releasable as it depends on
changes in SQL::Statement and DBI (DBD::File).

 to sum up,
 
 col names present in csv but bad (sql reserved words)
 can use col_names => [ @ary ], but this sets skip_first_line to FALSE as
 it *assumes* that colnames are NOT present in original dump
 
 what do ? :)

Rewrite the headers with Text::CSV_XS before using DBD::CSV
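
For instance, a minimal sketch of that rewrite (the file names and the
rename map are made up; only the header line is touched, the data rows
pass through untouched):

use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1, eol => "\n" });

open my $in,  "<", "dump.csv"  or die "dump.csv: $!";
open my $out, ">", "clean.csv" or die "clean.csv: $!";

my $hdr = $csv->getline ($in);            # original (malformed) header
my %fix = (key => "rec_key");             # rename SQL reserved words
$csv->print ($out, [ map { $fix{$_} // $_ } @$hdr ]);

while (my $row = $csv->getline ($in)) {   # data rows pass through
    $csv->print ($out, $row);
    }
close $out;

Then point DBD::CSV at clean.csv and select rec_key instead of the
reserved word.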

-- 
H.Merijn Brand  http://tux.nl   Perl Monger  http://amsterdam.pm.org/
using perl5.00307 .. 5.17   porting perl5 on HP-UX, AIX, and openSUSE
http://mirrors.develooper.com/hpux/http://www.test-smoke.org/
http://qa.perl.org   http://www.goldmark.org/jeff/stupid-disclaimers/


Re: DBD::CSV and skip_first_line

2012-11-25 Thread Jens Rehsack

On 25.11.12 10:00, H.Merijn Brand wrote:

On Fri, 23 Nov 2012 17:43:50 -0500, Scott R. Godin scot...@mhg2.com
wrote:


I've run into an issue where I need both col_names set and
skip_first_line still set to TRUE, because of malformed colnames in the
original dumpfiles that conflict with SQL Reserved Words (such as 'key')
that I am unable to find any other acceptable workaround short of


Why not automate the hacking using Text::CSV_XS and rewrite the header
before using DBD::CSV?


Or simply quote the column names in your SQL statement?


hacking the dumpfiles prior to import for unit-testing and validation
prior to splitting the dumpfiles out into a normalized sql database.
 (I'll need to hand this off to someone else for future dump-and-import
work, so it's just got to WORK with these ACCESS database dump files
as-is, plus HIPAA rules about not changing data complicates matters)

Is there any way to ensure that despite col_names being set, I can still
force skip_first_line => 1? Or should I report this as a possible
edge-case bug?


There should be, and it is likely that this case is already fixed in
the current development state of DBD::CSV by the valuable work of Jens
Rehsack, but that state is not (yet) releasable as it depends on
changes in SQL::Statement and DBI (DBD::File).


Well, since we're both busy - we need to find an hour or so to
integrate. I do not expect a general issue with DBI-1.622; I expect
some edge cases we need to tidy up.


to sum up,

col names present in csv but bad (sql reserved words)
can use col_names => [ @ary ], but this sets skip_first_line to FALSE as
it *assumes* that colnames are NOT present in original dump

what do ? :)


Rewrite the headers with Text::CSV_XS before using DBD::CSV


Or simply quote the reserved cols :)

Cheers
--
Jens Rehsack


Re: DBD::CSV, Select Numeric data

2012-02-15 Thread Michael Markusch
Hi,

the problem was the spaces in the column headers of the csv-file.

Thanks,
Michael


On Wednesday, 15.02.2012, at 21:50 +0100, Michael Markusch wrote:

  Hi,
  
  I have tried using DBD::CSV to query a csv-file. But I have a problem with 
  handling numeric data. How can I modify the code below?
  
  Thanks,
  Michael
  
  use DBI;
  use Data::Dumper;
  
  my $dbh_csv = DBI->connect ("dbi:CSV:", "", "", {
      f_dir           => "csv",
      f_ext           => ".csv/r",
      f_encoding      => "utf8",
  
      csv_sep_char    => ";",
      csv_eol         => "\n",
      csv_quote_char  => "'",
      csv_escape_char => "'",
      csv_class       => "Text::CSV_XS",
      csv_null        => 1,
      RaiseError      => 1,
      });
  
  $dbh_csv->{csv_tables}->{table_1} = {
      'file' => 'mmm.csv',
      'eol'  => "\n",
      };
  $dbh_csv->{csv_tables}->{table_1}->{types} = [Text::CSV_XS::PV (),
      Text::CSV_XS::NV (), Text::CSV_XS::NV ()];
  
  my $csv_select = "Select * From table_1 Where af1 > 1";
  
  my $sth_csv = $dbh_csv->prepare($csv_select);
  $sth_csv->execute;
  my $rowxx = $sth_csv->fetchall_arrayref();
  print Dumper $rowxx;
  
  content table_1:
  
  date;af1;vf1
  2010-10-02;1,2;16,4
  2010-10-03;1,4;18,4
  2010-10-04;2,2;23,4
  2010-10-02;0,2;34,7
  ...
 







Re: DBD-CSV taint mode

2011-03-29 Thread Karl Oakes
I have tried using the native IO::File open with multiple file modes and the
writing methods, and they work without error using exactly the same params. I'm
confused now, as I thought the file open in write mode would fail; obviously
not... I know I could probably change to use the native IO::File, but it is much
easier to use SQL, as no sequential access is needed then. So why is the DBD-CSV
failing? Any suggestions?



Re: DBD-CSV taint mode

2011-03-28 Thread Karl Oakes
Thanks Robert, I'll try the open. I have already ruled out all the params to
the DBD-CSV statements, including the file name, as I have even tried using
literal values for every param.



Re: DBD-CSV taint mode

2011-03-27 Thread Robert Roggenbuck

Does an open(FILE, '>', $csvfile) work right before a CUD-operation?

If not, move this open()-call step by step to the top of Your script to catch 
the point where the tainted data enters.


BTW: if Your filename is composed from tainted bits, it is tainted too.
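
A minimal sketch of the kind of explicit untainting that usually settles
this (the names and the filename pattern are mine, not Karl's code):

#!/usr/bin/perl -T
use strict;
use warnings;
use DBI;

my $file = $ENV{CSV_FILE} // 'table.csv';   # %ENV is tainted under -T

# untaint via an explicit whitelist match; die on anything suspicious
my ($safe) = $file =~ m{\A([\w./-]+)\z}
    or die "refusing tainted/odd filename: $file";

my $dbh = DBI->connect ('DBI:CSV:', undef, undef,
    { f_dir => '.', RaiseError => 1 });
$dbh->{csv_tables}{t} = { file => $safe };
$dbh->do ("INSERT INTO t (id, name) VALUES (1, 'x')");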

greetings

Robert

--

On 25.03.2011 10:04, Karl Oakes wrote:

I am trying to perform SQL create, update and delete (CUD) operations using 
DBD-CSV with taint mode and I am getting the following:
Execution ERROR: Insecure dependency in open while running with -T switch at 
C:/Perl/lib/IO/File.pm line 185.
I have untainted all the inputs and it looks like it's not happy with the 
filename, which is not user input and therefore not tainted. File.pm is used 
by the CSV module, and therefore I think File.pm is seeing the input parameters 
to its open method from the DBD-CSV module as tainted. This only happens when I 
try to perform SQL CUD operations, as DBD-CSV / DBI will change the file mode 
flag to write on the File::open call. In taint mode, a file open with the write 
mode flag will fail. Any suggestions?






Re: DBD::CSV data types

2009-10-09 Thread Robert Roggenbuck

You will find the supported data types in SQL::Dialects::CSV of the
SQL::Statement package. There is no POD, but the section [VALID DATA TYPES]
lists all possible types. DATE is unfortunately not there (DBD::CSV 0.22).
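
A quick way to dump that section (a sketch; it assumes the get_config
accessor provided by the SQL::Dialects modules):

use SQL::Dialects::CSV;
print SQL::Dialects::CSV->get_config;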

Hope that helps

Robert

--

larry s schrieb:
Does anyone know why the following fails with
"SQL ERROR: 'DATE' is not a recognized data type!"?


I cannot seem to find valid data types.

use DBD::CSV;
use DBI;
my $table = "csvtest";
my $dbh = DBI->connect("DBI:CSV:f_dir=/home/lsturtz/perl/")
    or die "Cannot connect: " . $DBI::errstr;
my $sth = $dbh->prepare("CREATE TABLE $table (symname CHAR(10), level REAL(10,2),
    obvdate DATE)")
    or die "Cannot prepare: " . $dbh->errstr();

$sth->execute() or die "Cannot execute: " . $sth->errstr();
.
.
.






Re: DBD::CSV - UPDATE corrupts data!

2009-08-20 Thread Robert Roggenbuck

Hi all,

unfortunately I must continue this thread. I managed to update DBI on the 
Web-Server, where my test-script corrupts data while updating - and still it 
does not work. I checked it on another computer where it works fine:


OK 1:

DBI 1.607-ithread
DBD::CSV version 0.22
perl 5.10.0
Linux 2.6.27.15-170.2.24.fc10.x86_64 #1 SMP Wed Feb 11 23:14:31 EST 2009 x86_64 
x86_64 x86_64 GNU/Linux


OK 2:

DBI 1.52-ithread
DBD::CSV version 0.22
perl 5.8.8
Linux 2.6.18.8-0.7-default #1 SMP Tue Oct 2 17:21:08 UTC 2007 i686 i686 i386 
GNU/Linux


NOT OK:

DBI 1.607-ithread
DBD::CSV version 0.22
perl 5.8.8
SunOS 5.10 Generic_118822-30 sun4u sparc SUNW,Ultra-250


Now it seems to me that the difference is the OS, or a speciality in a strange 
setup of Perl which I can not see. Even trace(15) shows no differences in the 
execute-part (trace(9), as I set in the script, is for the execute-part the same 
as 15).


What's going on? Where should I look for the cause of the problem?

Greetings

Robert

PS: Here again my test-script. The 1st execution creates the table 'Projects' in 
/tmp. The 2nd execution should update the data (in fact, if everything went fine, 
nothing changes, because the UPDATE works with the same data as the INSERT).


###
use strict;
use warnings;
use DBI;

my %projects = (
    'ID001' => {
        'begin' => '20040101',
        'end'   => '20080630',
    },
    'ID002' => {
        'begin' => '20050301',
        'end'   => '20091231',
    },
    'ID003' => {
        'begin' => '20050701',
        'end'   => '20100430',
    },
);

DBI->trace(9);

my $dbh = DBI->connect("dbi:CSV:f_dir=/tmp;csv_eol=\n", '', '',
    { AutoCommit => 1, RaiseError => 1 });

my $sql = "CREATE TABLE Projects (
    project_id  VARCHAR(32) PRIMARY KEY,
    begin       CHAR(8) NOT NULL,
    end         CHAR(8) NOT NULL
)";
$dbh->do($sql) unless -e '/tmp/Projects';

warn "will fill/actualise table 'Projects'\n";

my $sql_up = "UPDATE Projects SET begin=?, end=? WHERE project_id LIKE ?";
my $sth_up = $dbh->prepare($sql_up);

my $sql_in = "INSERT INTO Projects (project_id, begin, end) VALUES (?, ?, ?)";
my $sth_in = $dbh->prepare($sql_in);

foreach my $id (keys %projects) {
    my $begin = $projects{$id}->{begin};
    my $end   = $projects{$id}->{end};

    warn "will try UPDATE Projects\n";

    my $result = $sth_up->execute($begin, $end, $id);

    if ($result == 0) {
        warn "will INSERT INTO Projects\n";

        $result = $sth_in->execute($id, $begin, $end);

        if ($result == 0) {
            warn "Could not update table 'Projects' project $id\n";
        }
    }
}
$sth_up->finish();
$sth_in->finish();
$dbh->disconnect();
warn "finished\n";
###


Re: DBD::CSV - UPDATE corrupts data!

2009-08-20 Thread Robert Roggenbuck

One correction to my last mail:

The OK 1 is NOT OK! I do not know what happened. Likely I had a look at the 
wrong file while checking the results. So I can not blame Sun for the strange 
things ;-)


Another observation: While comparing the trace-output from the Linux-tests I 
discovered that there is a difference in executing the UPDATE-statement (3 times):


OK:
    -> execute in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0x860b3b4)~0x8607948 '20040101' '20080630' 'ID001') thr#8167008
    -> execute in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0x860b3b4)~0x8607948 '20050701' '20100430' 'ID003') thr#8167008
    -> execute for DBD::CSV::st (DBI::st=HASH(0x860b3b4)~0x8607948 '20050301' 
'20091231' 'ID002') thr#8167008


NOT OK:
    -> execute in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0x18e7698)~0x18e75c0 '20040101' '20080630' 'ID001') thr#1880010
    -> execute for DBD::CSV::st (DBI::st=HASH(0x18e7698)~0x18e75c0 '20050701' 
'20100430' 'ID003') thr#1880010
    -> execute for DBD::CSV::st (DBI::st=HASH(0x18e7698)~0x18e75c0 '20050301' 
'20091231' 'ID002') thr#1880010


But maybe this is only a matter of reporting...

Greetings

Robert







Re: DBD::CSV - UPDATE corrupts data!

2009-07-06 Thread Robert Roggenbuck

Thank You for Your reply.

The most important thing first: I tried the script on a slightly newer 
environment (but still a very old one) and it works:


Perl, v5.8.8 built for i586-linux-thread-multi (SuSe-Linux, kernel 2.6.18)
DBI 1.52
DBD::CSV 0.22

Interesting to see that the DBD::CSV is the same.

Other things inlined ...

Alexander Foken wrote:

Hello,

On 30.06.2009 14:41, Robert Roggenbuck wrote:

Hi all,


[snip]
Running the code below copied and pasted on Linux 2.6.26.5, Perl 5.8.8, 
DBI 1.607, DBD::CSV 0.20, both runs deliver the same result from your 
first run. Even several further runs don't change the result.


I conclude from my successful run that there was something wrong in the 
interaction between DBD::CSV and DBI, because a newer DBI banishes the phantom.


[snip]


Here is the script:

It has some parts that look very strange to me.


You have keen eyes ;-)

[snip]

my $dbh = DBI->connect("dbi:CSV:f_dir=/tmp;csv_eol=\n",'','',
  { AutoCommit => 1, PrintError => 1, RaiseError => 1 });
Enabling RaiseError and PrintError is redundant; RaiseError should be 
sufficient.


Yes. This (and other things You detected) are remnants from the shortening of 
the original program to generate a minimal test script. Usually I set RaiseError 
and PrintError to false and make my own error handling.




my $sql = "CREATE TABLE Projects (
project_id  VARCHAR(32) PRIMARY KEY,
begin   CHAR(8) NOT NULL,
end CHAR(8) NOT NULL
)";
You store a date in a CHAR? OK, with CSV, this makes no difference, but 
still it is strange.


The dates I use are always eight-character-strings. I don't do any fancy things 
with them besides comparing them (see %projects). But of course usually dates 
should be dates and not CHARs or INTEGERs.


[snip]
my $sql_up = "UPDATE Projects SET begin=?, end=? WHERE project_id LIKE 
?";

my $sth_up = $dbh->prepare($sql_up);

my $sql_in = "INSERT INTO Projects (project_id, begin, end) VALUES (?, 
?, ?)";

my $sth_in = $dbh->prepare($sql_in);
Two parallel prepares. DBD::CSV seems to be ok with that; MS SQL via 
ODBC does not like that.


Really? I did it very often, to move several preparations of SQL-statements as 
far as possible outside loops. As I understood it, this is the main intention for 
preparing a statement: speed up the execution by avoiding repeated prepares.


[snip]

if ($sth_up) {
$sth_up is always true (prepare dies on error due to RaiseError => 1), 
why do you test it here?


You are right. In the original script was RaiseError = 0.


warn "will try UPDATE Projects\n"; # DEBUG
$result = $sth_up->execute($begin, $end, $id);
}
if (not $result or $result eq '0E0' and $sth_in) {

$sth_in is always true for the same reason. Why do you test it here?

$result will always be true, again due to RaiseError => 1.

$result may be -1 if DBI does not know the number of rows affected.

'0E0' is a special representation of 0: "zero but true".

So, the real condition would better be written as ($result == 0).


Yes and No. You are right, for the same reason. But because in the context of my 
program even a '0E0' should not happen, I treat it as an error.




warn "will INSERT INTO Projects\n"; # DEBUG

my $result = $sth_in->execute($id, $begin, $end);

if (not $result or $result eq '0E0') {
Again, $result is always true, for the same reasons. Again, you'd better 
write ($result == 0).

warn "Could not update table 'Projects' project $id\n";
}
}


You never call finish for your statement handles. This short script has 
AutoCommit enabled and the script terminates very fast, so the DESTROY 
methods of the statement handles should call finish. I would not bet on 
that behaviour.


I did not call finish() because there is no more code after the loop and the 
allocated space will be freed by the termination of the program - as You 
mentioned. Does finish() do more things? Is it recommended to call finish() in 
any case?



}
warn "finished\n";
#


As You see in the script, I turned tracing on to see what happens with 
the parameters. But I can not see anything wrong (my script's name is 
debugSetup.pl):


[snip]
will try UPDATE Projects
    -> execute in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0x4cee84)~0xd27e8 '20040101' '20080630' 'ID001') thr#22ea0
1   -> finish in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0xd27e8)~INNER) thr#22ea0

1   <- finish= 1 at File.pm line 439

I don't get that finish line with your code.


There is none. This must be an implicit finish() by DBD::File or so.


    <- execute= 1 at debugSetup.pl line 48
will try UPDATE Projects
    -> execute for DBD::CSV::st (DBI::st=HASH(0x4cee84)~0xd27e8 
'20050701' '20100430' 'ID003') thr#22ea0

1   -> finish for DBD::CSV::st (DBI::st=HASH(0xd27e8)~INNER) thr#22ea0
1   <- finish= 1 at File.pm line 439

Neither this one.

    <- execute= 1 at debugSetup.pl line 48
will try UPDATE Projects
    -> execute for 

Re: DBD::CSV: perl script to read CSV file does not work

2008-06-02 Thread Prakash Prabhakar
$dbh->do("UPDATE ...") is not working as per the recommended syntax.
The query returns a 0 (undef?) for the line: $dbh->do("UPDATE info SET Owner
= 'me' WHERE SerialNumber = '1234'") or die "update: " . $dbh->errstr();

Is there anything that I am missing?

Full code:
use DBI;
 $dbh = DBI->connect("DBI:CSV:")
  or die "Cannot connect: " . $DBI::errstr;
 $dbh->{'csv_tables'}->{'info'} = { 'file' => 'test.csv', 'eol' => "\n"};
 $sth = $dbh->prepare("SELECT * FROM info WHERE SerialNumber =
'$serialnum'")
  or die "prepare: " . $dbh->errstr();
 $sth->execute()
  or die "execute: " . $dbh->errstr();

 while (my $row = $sth->fetchrow_hashref)
 {
  print("Found result row: Equipment = ", $row->{'Equipment'}, ", Serial
Number = ", $row->{'SerialNumber'});
 }

 $sth->finish();
 $ret = $dbh->do("UPDATE info SET Owner = 'me' WHERE SerialNumber = '1234'")
   or die "update: " . $dbh->errstr();
 print "update returned $ret";
--

$ret always has value 0E0.
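
Note: '0E0' is DBI's "zero but true" - do() succeeded, but zero rows
matched the WHERE clause. A minimal sketch of telling the cases apart
($dbh as in the code above):

my $ret = $dbh->do("UPDATE info SET Owner = 'me' WHERE SerialNumber = '1234'");
defined $ret or die "update failed: " . $dbh->errstr;
print $ret == 0
    ? "no rows matched SerialNumber = '1234'\n"
    : "updated $ret row(s)\n";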




On 6/2/08, Prakash Prabhakar [EMAIL PROTECTED] wrote:

 Jeff, All,

 Thank you very much for that information. My code's working now.

 Regards,
 Prakash









Re: DBD::CSV: perl script to read CSV file does not work

2008-06-01 Thread ReneeB

Hi Jeff,

Jeff Zucker wrote:

Prakash Prabhakar wrote:
I am using the following code in my .cgi program (compiling with
perl-5.8.6, UNIX OS). It doesn't work. Could you please let me know what
could be wrong? I do not get any warnings/error messages/expected output.
The .cgi and the .csv are in the same directory.

My guess is that the line endings are not set explicitly (they default 
to windows line endings).


Why do you use the Windows line endings as the default? Why not the 
value of $/?


See also https://rt.cpan.org/Public/Bug/Display.html?id=20340


Try adding this:

   $dbh->{'csv_tables'}->{'info'} = {
       'file' => 'testtable.csv',
       'eol'  => "\n"
   };

Be sure to use double quotes around the \n. If that doesn't work, try 
a line ending specific to your OS (e.g. '\012' for *nix).





Cheers,
Renee


Re: DBD::CSV: perl script to read CSV file does not work

2008-06-01 Thread Jeff Zucker

ReneeB wrote:


Why do you use the Windows line endings as the default? Why not the 
value of $/?


Because 1) the module was already several years old and widely used when 
I inherited it and I didn't want to break backward compatibility; 2) 
because I think using $/ would be a disaster, since you'd never be able 
to depend on consistency within the program - you wouldn't know 
where or by what it had been set; and 3) if we have to make either 
people who access *nix files or people who access windows files do 
something that takes a tiny bit of smarts (like knowing what the line 
endings are in a file), better depend on the *nix people to figure it 
out :-).  Now that Ubuntu etc. have opened up the field, that's less true 
than it used to be, but eight or ten years ago, when I made the decision, 
it seemed like the right one.  I'm afraid I still don't see a reason to 
change it.


--
Jeff


See also https://rt.cpan.org/Public/Bug/Display.html?id=20340


Try adding this:

   $dbh->{'csv_tables'}->{'info'} = {
       'file' => 'testtable.csv',
       'eol'  => "\n"
   };

Be sure to use double quotes around the \n. If that doesn't work, try 
a line ending specific to your OS (e.g. '\012' for *nix).





Cheers,
Renee






Re: DBD::CSV: perl script to read CSV file does not work

2008-05-31 Thread Andon Tschauschev
Hi Prakash,

executing your script at the command line, I get an error:
/tmp/perl $ perl csv.pl 
DBD::CSV::st execute failed: Error while reading file ./testtable.csv: Bad file 
descriptor at /usr/lib/perl5/site_perl/5.8.8/DBD/CSV.pm line 210, <GEN0> chunk 
1.
 [for Statement SELECT * FROM info] at csv.pl line 11.
execute: Error while reading file ./testtable.csv: Bad file descriptor at 
/usr/lib/perl5/site_perl/5.8.8/DBD/CSV.pm line 210, <GEN0> chunk 1.

It seems that there is a bug in DBD::CSV v0.22; consider the following posting:
http://www.perlmonks.org/?node_id=673399
and this bug ticket:
https://rt.cpan.org/Public/Bug/Display.html?id=33764 

Regards

Andon

--- On Sat, 5/31/08, Prakash Prabhakar [EMAIL PROTECTED] wrote:

 From: Prakash Prabhakar [EMAIL PROTECTED]
 Subject: DBD::CSV: perl script to read CSV file does not work
 To: dbi-users@perl.org
 Date: Saturday, May 31, 2008, 3:41 AM
 I am using the following code in my .cgi program (compiling with perl-5.8.6,
 UNIX OS). It doesn't work. Could you please let me know what could be wrong?
 I do not get any warnings/error messages/expected output. The .cgi and the
 .csv are in the same directory.
 
 Thanks,
 Prakash
 
 use warnings;
 use DBI;
  $dbh = DBI->connect("DBI:CSV:")
   or die "Cannot connect: " . $DBI::errstr;
  $dbh->{'csv_tables'}->{'info'} = { 'file' => 'testtable.csv' };
  $sth = $dbh->prepare("SELECT * FROM info")
   or die "prepare: " . $dbh->errstr();
  $sth->execute()
   or die "execute: " . $dbh->errstr();
  while (my $row = $sth->fetchrow_hashref)
  {
   print("Found result row: id = ", $row->{'id'}, ", name = ", $row->{'name'});
  }
 $sth->finish();
 
 to work with testtable.csv, a file that contains just this:
 name,id
 DSA,123
 
 I even modified the .csv file to have:
 name,id
 DS,123


  


Re: DBD::CSV: perl script to read CSV file does not work

2008-05-31 Thread Jeff Zucker

Prakash Prabhakar wrote:

I am using the following code in my .cgi program (compiling with perl-5.8.6,
UNIX OS). It doesn't work. Could you please let me know what could be wrong?
I do not get any warnings/error messages/expected output. The .cgi and the
.csv are in the same directory.

My guess is that the line endings are not set explicitly (they default 
to windows line endings).  Try adding this:


   $dbh->{'csv_tables'}->{'info'} = {
       'file' => 'testtable.csv',
       'eol'  => "\n"
   };

Be sure to use double quotes around the \n. If that doesn't work, try a 
line ending specific to your OS (e.g. '\012' for *nix).


--
Jeff


Thanks,
Prakash

use warnings;
use DBI;
 $dbh = DBI->connect("DBI:CSV:")
  or die "Cannot connect: " . $DBI::errstr;
 $dbh->{'csv_tables'}->{'info'} = { 'file' => 'testtable.csv' };
 $sth = $dbh->prepare("SELECT * FROM info")
  or die "prepare: " . $dbh->errstr();
 $sth->execute()
  or die "execute: " . $dbh->errstr();
 while (my $row = $sth->fetchrow_hashref)
 {
  print("Found result row: id = ", $row->{'id'}, ", name = ",
$row->{'name'});
 }
$sth->finish();

to work with testtable.csv, a file that contains just this:
name,id
DSA,123

I even modified the .csv file to have:
name,id
DS,123

  




Re: DBD::CSV: perl script to read CSV file does not work

2008-05-31 Thread Andon Tschauschev
 My guess is that the line endings are not set explicitly (they default
 to windows line endings).  Try adding this:
 
 $dbh->{'csv_tables'}->{'info'} = {
     'file' => 'testtable.csv',
     'eol'  => "\n"
 };

yes, it works fine on linux...




Re: DBD::CSV: perl script to read CSV file does not work

2008-05-31 Thread Jeff Zucker

Andon Tschauschev wrote:

$dbh->{'csv_tables'}->{'info'} = {
    'file' => 'testtable.csv',
    'eol'  => "\n"
};

yes, it works fine on linux...

Yes, if the file was created on linux.  If the file was created on 
Windows or Mac or with something other than '\012' as the line ending, 
using \n on linux will not work.


--
Jeff




Re: dbd::csv with col_names but insert / delete / update fail

2007-12-31 Thread Jeff Zucker

aclhkaclhk wrote:

I have a csv file. I defined the column names inside the perl script
but sql insert does not work. 
That's because you defined the column names for a table called "passwd" 
and then tried to do an insert on a table called "b".  You need to 
define the column names for the table you are trying to insert into.
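
A minimal sketch of that fix (the DSN is shortened and the empty passwd
value is an assumption):

use DBI;
my $dbh = DBI->connect("DBI:CSV:f_dir=/root/csv/db")
    or die "Cannot connect: " . $DBI::errstr;
# attach col_names to 'b', the table actually being inserted into
$dbh->{'csv_tables'}->{'b'} = {'col_names' => ["id", "passwd"]};
$dbh->do("insert into b (id, passwd) values (10, '')");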


--
Jeff


If the csv has column names, the
script (without col names defined) works.

sql select works on both csv with or without column names.

* not working 
/root/csv/db/b
1,peter
2,wilson

insert.pl
   use DBI;
$dbh = DBI->connect("DBI:CSV:f_dir=/root/csv/db;csv_sep_char=,;csv_quote_char=\";csv_eol=\n")
    or die "Cannot connect: " . $DBI::errstr;
$dbh->{'csv_tables'}->{'passwd'} = {'col_names' => ["id","passwd"]};
$dbh->do("insert into b (id, passwd) values (10, '')");

[EMAIL PROTECTED] csv]# perl insert.pl

Execution ERROR: No such column 'B.ID' called from insert.pl at 6.

* working without error***
id,passwd
1,peter
2,wilson

insert.pl
use DBI;
$dbh = DBI->connect("DBI:CSV:f_dir=/root/csv/db;csv_sep_char=,;csv_quote_char=\";csv_eol=\n")
    or die "Cannot connect: " . $DBI::errstr;
$dbh->do("insert into b (id, passwd) values (10, '')");



  




Re: dbd::csv with col_names but insert / delete / update fail

2007-12-31 Thread Ron Savage
On Mon, 2007-12-31 at 00:52 -0800, aclhkaclhk wrote:

Hi Peter

 insert.pl
   use DBI;
 $dbh = DBI->connect("DBI:CSV:f_dir=/root/csv/db;csv_sep_char=,;csv_quote_char=\";csv_eol=\n")
 or die "Cannot connect: " . $DBI::errstr;

I suggest you check the double-quotes used in the connect() call.
-- 
Ron Savage
[EMAIL PROTECTED]
http://savage.net.au/index.html




Re: DBD::CSV: make test fails

2007-10-19 Thread Robert Roggenbuck

Jeff Zucker wrote:
 My guess is that you are either missing some prerequisites or that the
 older linux perl has some old copies of them.  Try to first install the
 latest DBD-File, SQL::Statement, and Text::CSV_XS.  If you still get
 errors, please let me know what versions of those modules you have.

I had some forwarding and mail folder filter problems with one of my 
mailboxes, so I read this mail just yesterday. Even if my problem seems 
solved, I won't leave Your response without an answer. Indeed there is a 
very old perl installation besides my private one. But there is no DBI, 
nor SQL::Statement or Text::CSV_XS, in its @INC.


$ /usr/bin/perl -V

Summary of my perl5 (5.0 patchlevel 5 subversion 3) configuration:
  Platform:
osname=linux, osvers=2.2.14, archname=i586-linux
uname='linux apollonius 2.2.14 #1 mon nov 8 15:51:29 cet 1999 i686 
unknown '

hint=recommended, useposix=true, d_sigaction=define
usethreads=undef useperlio=undef d_sfio=undef
  Compiler:
cc='cc', optimize='-O2 -pipe', gccversion=2.95.2 19991024 (release)
cppflags='-Dbool=char -DHAS_BOOL -I/usr/local/include'
ccflags ='-Dbool=char -DHAS_BOOL -I/usr/local/include'
stdchar='char', d_stdstdio=undef, usevfork=false
intsize=4, longsize=4, ptrsize=4, doublesize=8
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
alignbytes=4, usemymalloc=n, prototype=define
  Linker and Libraries:
ld='cc', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib
libs=-lnsl -lndbm -lgdbm -ldb -ldl -lm -lc -lposix -lcrypt
libc=, so=so, useshrplib=false, libperl=libperl.a
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic'
cccdlflags='-fpic', lddlflags='-shared -L/usr/local/lib'


Characteristics of this binary (from libperl):
  Built under linux
  Compiled at Mar 11 2000 08:03:12
  @INC:
/usr/lib/perl5/5.00503/i586-linux
/usr/lib/perl5/5.00503
/usr/lib/perl5/site_perl/5.005/i586-linux
/usr/lib/perl5/site_perl/5.005
.

And in my private installation I have the most recent versions:

$ perl -e 'use SQL::Statement; print "$SQL::Statement::VERSION\n"'
1.15
$ perl -e 'use Text::CSV_XS; print "$Text::CSV_XS::VERSION\n"'
0.31
$ perl -e 'use DBI; print "$DBI::VERSION\n"'
1.59
$ perl -e 'use DBD::CSV; print "$DBD::CSV::VERSION\n"'
0.22

Best regards

Robert


Re: DBD::CSV: make test fails

2007-10-17 Thread Ron Savage

Robert Roggenbuck wrote:

Hi Robert

(lib.pl lines 28-34). If I unset the DBI_* variables, DBD::Adabas is not 
used and all DBD::CSV-tests are passing - and finally 'make install' 
succeeds.


I did say in one email that I was not setting these. I was wondering if 
you were. That would explain things.



Maybe this DBI_* usage should be removed from lib.pl?


I guess it's a problem of experience 'v' documented or undocumented 
defaults... Of course, this makes it hard for beginners :-(.


I hope your problem is now solved.
--
Ron Savage
[EMAIL PROTECTED]
http://savage.net.au/index.html


Re: DBD::CSV: make test fails

2007-10-17 Thread Robert Roggenbuck

Hi Jeff,

While looking more closely to the origin of the error message

Can't locate DBI object method "list_tables" via package
"DBD::Adabas::db" at CSV.dbtest line 94.


I detected that there is nothing wrong with DBD::Adabas; it's DBD::CSV.
At first glance I thought 'list_tables' was a DBI function - but it is a
DBD::CSV private feature. So again DBD::CSV is wrong in using DBD::Adabas
and applying its own feature to a foreign driver.
Cleaning the test code of foreign-driver usage will also solve this
issue.


Best regards

Robert


Robert Roggenbuck schrieb:
I looked in the code of t/20createdrop.t and t/lib.pl and found that in 
lib.pl a lookup in the environment for DBI_DSN, DBI_PASS and DBI_USER is 
made. If they are found these settings are used for further testing (for 
getting data etc.) - else files for the use as tables are created 
(lib.pl lines 28-34). If I unset the DBI_* variables, DBD::Adabas is not 
used and all DBD::CSV-tests are passing - and finally 'make install' 
succeeds.


Maybe this DBI_* usage should be removed from lib.pl?

For the DBD::Adabas problem I will open a new thread.

Thanks for all the comments :-)

Robert




Re: DBD::CSV: make test fails

2007-10-17 Thread Robert Roggenbuck
I looked in the code of t/20createdrop.t and t/lib.pl and found that in 
lib.pl a lookup in the environment for DBI_DSN, DBI_PASS and DBI_USER is 
made. If they are found these settings are used for further testing (for 
getting data etc.) - else files for the use as tables are created 
(lib.pl lines 28-34). If I unset the DBI_* variables, DBD::Adabas is not 
used and all DBD::CSV-tests are passing - and finally 'make install' 
succeeds.
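
(A sketch of doing that from a Perl wrapper instead of the shell - the
variable names are the standard DBI ones:

# clear any DBI_* overrides so lib.pl creates its own CSV test tables
delete $ENV{$_} for qw(DBI_DSN DBI_USER DBI_PASS);
system('make', 'test') == 0 or die "make test failed\n";

)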


Maybe this DBI_* usage should be removed from lib.pl?

For the DBD::Adabas problem I will open a new thread.

Thanks for all the comments :-)

Robert



Jeff Zucker schrieb:

Robert Roggenbuck wrote:


The DBD::Adabas-error comes during the tests t/20createdrop, 
t/30insertfetch, t/40bindparam, and then I stopped going though the 
others. The message is exactly the same in every test. If these are 
Jeff Zucker's private tests, there is something wrong with the package...


The Adabas and My/Msql stuff in the test directory is left over from 
Jochen Wiedmann's original version of the tests which were meant to be a 
blueprint for other kinds of tests.  I suppose I should clean them out 
of the distro, but I've never heard of them being triggered during a 
make test of DBD::CSV and there are literally hundreds of CPAN tester 
pass reports that successfully ignored the Adabas stuff over the past 
years.




So the DBD::CSV problem turns into a DBD::Adabas issue...
But why do I need another DB to test DBD::CSV? This seems unnecessary
to me.



No, you absolutely should not need DBD::Adabas to pass the DBD::CSV tests.




Re: DBD::CSV: make test fails

2007-10-16 Thread Robert Roggenbuck

Hi Ron,


Ron Savage schrieb:

Robert Roggenbuck wrote:

Hi Robert

Looking at the code and the first error msg you got:
YOU ARE MISSING REQUIRED MODULES: [ ]
makes me suspect the method you are using to test the module.
Are you using 'The Mantra' of standard commands? Something like:
As I said in my first message, I did the downloading, unzipping etc. via 
'perl -MCPAN...' - which should include the correct Mantra. But let's 
have a look at what's going on if I do it manually:




shell> gunzip DBD-CSV-0.22.tar.gz
shell> tar -xvf DBD-CSV-0.22.tar

DBD-CSV-0.22/
DBD-CSV-0.22/t/
DBD-CSV-0.22/t/40numrows.t
DBD-CSV-0.22/t/mSQL.dbtest
DBD-CSV-0.22/t/dbdadmin.t
DBD-CSV-0.22/t/README
DBD-CSV-0.22/t/pNET.mtest
DBD-CSV-0.22/t/50commit.t
DBD-CSV-0.22/t/CSV.dbtest
DBD-CSV-0.22/t/50chopblanks.t
DBD-CSV-0.22/t/mSQL.mtest
DBD-CSV-0.22/t/Adabas.dbtest
DBD-CSV-0.22/t/mysql.dbtest
DBD-CSV-0.22/t/40bindparam.t
DBD-CSV-0.22/t/csv.t
DBD-CSV-0.22/t/40nulls.t
DBD-CSV-0.22/t/mSQL1.dbtest
DBD-CSV-0.22/t/Adabas.mtest
DBD-CSV-0.22/t/pNET.dbtest
DBD-CSV-0.22/t/mSQL1.mtest
DBD-CSV-0.22/t/30insertfetch.t
DBD-CSV-0.22/t/40listfields.t
DBD-CSV-0.22/t/mysql.mtest
DBD-CSV-0.22/t/00base.t
DBD-CSV-0.22/t/CSV.mtest
DBD-CSV-0.22/t/ak-dbd.t
DBD-CSV-0.22/t/lib.pl
DBD-CSV-0.22/t/10dsnlist.t
DBD-CSV-0.22/t/40blobs.t
DBD-CSV-0.22/t/skeleton.test
DBD-CSV-0.22/t/20createdrop.t
DBD-CSV-0.22/MANIFEST
DBD-CSV-0.22/lib/
DBD-CSV-0.22/lib/DBD/
DBD-CSV-0.22/lib/DBD/CSV.pm
DBD-CSV-0.22/lib/Bundle/
DBD-CSV-0.22/lib/Bundle/DBD/
DBD-CSV-0.22/lib/Bundle/DBD/CSV.pm
DBD-CSV-0.22/META.yml
DBD-CSV-0.22/ChangeLog
DBD-CSV-0.22/MANIFEST.SKIP
DBD-CSV-0.22/Makefile.PL
DBD-CSV-0.22/README

shell> cd DBD-CSV-0.22
shell> perl Makefile.PL

Checking if your kit is complete...
Looks good
Writing Makefile for DBD::CSV

shell> make

cp lib/Bundle/DBD/CSV.pm blib/lib/Bundle/DBD/CSV.pm
cp lib/DBD/CSV.pm blib/lib/DBD/CSV.pm
Manifying blib/man3/Bundle::DBD::CSV.3
Manifying blib/man3/DBD::CSV.3

shell> perl -I lib t/20createdrop.t

1..5
ok 1
Can't locate DBI object method "list_tables" via package
"DBD::Adabas::db" at t/CSV.dbtest line 94.



Show us the output of all these commands...
There they are. I get the message 'YOU ARE MISSING REQUIRED MODULES'
only if I say just 'perl t/20createdrop.t' - I did not know that I needed
to include 'lib' before executing the tests manually.


So the question remains: Why are the 'blueprint tests' triggered in my
case? Or better (looking at Your test result, where 20createdrop.t
succeeds and is not skipped): Why do they make trouble in my special
case? Could it be that they are looking for an Adabas installation and
behave differently if there is one?


Best regards

Robert



Re: DBD::CSV: make test fails

2007-10-16 Thread Ron Savage

Robert Roggenbuck wrote:

Hi Robert

There they are. I get the message 'YOU ARE MISSING REQUIRED MODULES'
only if I say just 'perl t/20createdrop.t' - I did not know that I needed
to include 'lib' before executing the tests manually.


That's just to get Perl to look in lib/ in the unpacked distro to find 
DBD::CSV. Of course, if the module is already installed then not doing 
that is testing the previously installed version.


Maybe try deleting any old version and rerunning the tests?

It's 8:30 pm here now. More tomorrow.

--
Ron Savage
[EMAIL PROTECTED]
http://savage.net.au/index.html


Re: DBD::CSV: make test fails

2007-10-15 Thread rroggenb
[sorry for the missing subject in previous posting]

Hi Ron,

thanks for the hint in the right direction: executing the tests
individually shows the needed details. The message I get is:

YOU ARE MISSING REQUIRED MODULES: [ ]

This is not much (missing an unnamed module?), but I detected in lib.pl
(located in the same test directory) the code which throws the error
message. The prerequisites are tested there (lines 40-49). But this list
includes DBD::CSV itself! How can this ever work? Besides, DBD::CSV is
not part of the error reporting (which explains the empty brackets above).
After commenting out the DBD::CSV test, I come to the next problem:

Can't locate DBI object method "list_tables" via package "DBD::Adabas::db"
at CSV.dbtest line 94.

So the DBD::CSV problem turns into a DBD::Adabas issue...

But why do I need another DB to test DBD::CSV? This seems unnecessary
to me.

Greetings Robert

---

Hi Robert


t/20createdrop.dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-5
Failed 4/5 tests, 20.00% okay

 Perhaps it's a permissions problem. Try running this test by itself, and
find if there is a specific statement (Perl, SQL) which fails.

 --
 Ron Savage





Re: DBD::CSV: make test fails

2007-10-15 Thread Ron Savage

[EMAIL PROTECTED] wrote:

Hi Robert


This is not much (missing an unnamed module?), but I detected in lib.pl
(located in the same test directory) the code which throws the error
message. The prerequisites are tested there (lines 40-49). But this list
includes DBD::CSV itself! How can this ever work? Besides, DBD::CSV is


How can this ever work, you ask.

Well, look. The '[d]make test' command contains 'blib\lib', which means 
the version of DBD::CSV shipped but not yet installed is looked for in 
'blib\lib', and that makes sense since it is precisely that code which 
is being tested!


C:\perl-modules\DBD-CSV-0.22> dmake test
C:\strawberry-perl\perl\bin\perl.exe -MExtUtils::Command::MM -e
"test_harness(0, 'blib\lib', 'blib\arch')" t/*.t

t/00base...1.15 at t/00base.t line 15.
t/00base...ok
t/10dsnlistok
t/20createdrop.ok
t/30insertfetchok
t/40bindparam..ok
t/40blobs..ok
t/40listfields.ok
t/40nulls..ok
t/40numrowsok
t/50chopblanks.ok
t/50commit.ok
t/ak-dbd...ok
t/csv..ok
t/dbdadmin.ok
All tests successful.
Files=14, Tests=243,  9 wallclock secs ( 0.00 cusr +  0.00 csys =  0.00 CPU)

Also, for this I did /not/ define the env vars DBI_DSN, DBI_USER and 
DBI_PASS.



Can't locate DBI object method "list_tables" via package "DBD::Adabas::db"
at CSV.dbtest line 94.
So the DBD::CSV problem turns into a DBD::Adabas issue...
But why do I need another DB to test DBD::CSV? This seems unnecessary
to me.


Hmmm. Odd. I think you have done something which triggered Jeff Zucker's 
private tests. Don't do that!


--
Ron Savage
[EMAIL PROTECTED]
http://savage.net.au/index.html


Re: DBD::CSV: make test fails

2007-10-15 Thread Ron Savage

Robert Roggenbuck wrote:

Hi Robert

The DBD::Adabas-error comes during the tests t/20createdrop, 
t/30insertfetch, t/40bindparam, and then I stopped going though the 
others. The message is exactly the same in every test. If these are Jeff 
Zucker's private tests, there is something wrong with the package...


I don't think so. You saw the output of my 'dmake test'.

BTW I'm installing V 0.22 here.

I think you need to unpack the distro again, and trace through the test 
code to see what triggers the reference to Adabas on your system. Obviously 
it's not happening here, so there must be something in your set-up which 
is triggering this effect.


Keep us informed. It may need to be reported back to Jeff.

Good luck!

--
Ron Savage
[EMAIL PROTECTED]
http://savage.net.au/index.html


Re: DBD::CSV: make test fails

2007-10-15 Thread Robert Roggenbuck
Ok, it makes sense. I was sure it made sense, but I could not see it. 
Thanks for the explanation. But then: what's the way to get the tests to 
pass without manipulating the test code?


The DBD::Adabas-error comes during the tests t/20createdrop, 
t/30insertfetch, t/40bindparam, and then I stopped going though the 
others. The message is exactly the same in every test. If these are Jeff 
Zucker's private tests, there is something wrong with the package...


Best regards

Robert

Ron Savage schrieb:

[EMAIL PROTECTED] wrote:

Hi Robert


This is not much (missing an unnamed module?), but I detected in lib.pl
(located in the same test directory) the code which throws the error
message. The prerequisites are tested there (lines 40-49). But this list
includes DBD::CSV itself! How can this ever work? Besides, DBD::CSV is


How can this ever work, you ask.

Well, look. The '[d]make test' command contains 'blib\lib', which means 
the version of DBD::CSV shipped but not yet installed is looked for in 
'blib\lib', and that makes sense since it is precisely that code which 
is being tested!


C:\perl-modules\DBD-CSV-0.22> dmake test
C:\strawberry-perl\perl\bin\perl.exe -MExtUtils::Command::MM -e
"test_harness(0, 'blib\lib', 'blib\arch')" t/*.t

t/00base...1.15 at t/00base.t line 15.
t/00base...ok
t/10dsnlistok
t/20createdrop.ok
t/30insertfetchok
t/40bindparam..ok
t/40blobs..ok
t/40listfields.ok
t/40nulls..ok
t/40numrowsok
t/50chopblanks.ok
t/50commit.ok
t/ak-dbd...ok
t/csv..ok
t/dbdadmin.ok
All tests successful.
Files=14, Tests=243,  9 wallclock secs ( 0.00 cusr +  0.00 csys =  0.00 
CPU)


Also, for this I did /not/ define the env vars DBI_DSN, DBI_USER and 
DBI_PASS.


Can't locate DBI object method "list_tables" via package "DBD::Adabas::db"
at CSV.dbtest line 94.
So the DBD::CSV problem turns into a DBD::Adabas issue...
But why do I need another DB to test DBD::CSV? This seems unnecessary
to me.


Hmmm. Odd. I think you have done something which triggered Jeff Zucker's 
private tests. Don't do that!




--

===
Robert Roggenbuck
Universitaetsbibliothek Osnabrueck
Alte Muenze 16
D-49074 Osnabrueck
Germany
Tel ++49/541/969-4344  Fax -4482
[EMAIL PROTECTED]

Postbox:
Postfach 4496
D-49034 Osnabrueck
===


Re: DBD::CSV: make test fails

2007-10-15 Thread Ron Savage

Robert Roggenbuck wrote:

Hi Robert

Looking at the code and the first error msg you got:
YOU ARE MISSING REQUIRED MODULES: [ ]
makes me suspect the method you are using to test the module.
Are you using 'The Mantra' of standard commands? Something like:

shell> gunzip DBD-CSV-0.22.tar.gz
shell> tar -xvf DBD-CSV-0.22.tar
shell> cd DBD-CSV-0.22
shell> perl Makefile.PL
shell> make (or dmake or nmake)
shell> perl -I lib t/20createdrop.t

Show us the output of all these commands...
--
Ron Savage
[EMAIL PROTECTED]
http://savage.net.au/index.html


Re: DBD::CSV: make test fails

2007-10-13 Thread Ron Savage

Robert Roggenbuck wrote:

Hi Robert


t/20createdrop.dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-5
Failed 4/5 tests, 20.00% okay


Perhaps it's a permissions problem. Try running this test by itself, and 
find if there is a specific statement (Perl, SQL) which fails.

--
Ron Savage
[EMAIL PROTECTED]
http://savage.net.au/index.html


Re: DBD::CSV: make test fails

2007-10-12 Thread Jeff Zucker
My guess is that you are either missing some prerequisites or that the 
older linux perl has some old copies of them.  Try to first install the 
latest DBD-File, SQL::Statement, and Text::CSV_XS.  If you still get 
errors, please let me know what versions of those modules you have.  
Good luck!


--
Jeff

[EMAIL PROTECTED] wrote:

Hello,

[sorry for eventual double posting]

since a longer time I am using DBD::CSV for several purposes and were
quite happy with it. But now I try to install it on an old Linux Server
(SuSE) and the 'make test' fails in nearly every case. Can someone give
some hints?

Here is the 'make test' part from the automated installation from CPAN
(perl -MCPAN...):

---
  /usr/bin/make -- OK
Running make test
PERL_DL_NONLAZY=1 /home/rroggenb/perl/bin/perl -MExtUtils::Command::MM
-e "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00base...1.15 at t/00base.t line 15.
t/00base...ok
t/10dsnlistok
t/20createdrop.dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-5
Failed 4/5 tests, 20.00% okay
t/30insertfetchdubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-17
Failed 16/17 tests, 5.88% okay
t/40bindparam..dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-28
Failed 27/28 tests, 3.57% okay
t/40blobs..dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-11
Failed 10/11 tests, 9.09% okay
t/40listfields.dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-13
Failed 12/13 tests, 7.69% okay
t/40nulls..dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-11
Failed 10/11 tests, 9.09% okay
t/40numrowsdubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-25
Failed 24/25 tests, 4.00% okay
t/50chopblanks.dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-35
Failed 34/35 tests, 2.86% okay
t/50commit.dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-16
Failed 15/16 tests, 6.25% okay
t/ak-dbd...dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-47
Failed 46/47 tests, 2.13% okay
t/csv..dubious
Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-23
Failed 22/23 tests, 4.35% okay
t/dbdadmin.dubious
Test returned status 0 (wstat 11, 0xb)
DIED. FAILED test 4
Failed 1/4 tests, 75.00% okay
Failed Test   Stat Wstat Total Fail  List of Failed
---
t/20createdrop.t22  5632 58  2-5
t/30insertfetch.t   22  563217   32  2-17
t/40bindparam.t 22  563228   54  2-28
t/40blobs.t 22  563211   20  2-11
t/40listfields.t22  563213   24  2-13
t/40nulls.t 22  563211   20  2-11
t/40numrows.t   22  563225   48  2-25
t/50chopblanks.t22  563235   68  2-35
t/50commit.t22  563216   30  2-16
t/ak-dbd.t  22  563247   92  2-47
t/csv.t 22  563223   44  2-23
t/dbdadmin.t 011 41  4
Failed 12/14 test scripts. 221/243 subtests failed.
Files=14, Tests=243,  7 wallclock secs ( 6.35 cusr +  0.59 csys =  6.94 CPU)
Failed 12/14 test programs. 221/243 subtests failed.
make: *** [test_dynamic] Error 255
  JZUCKER/DBD-CSV-0.22.tar.gz
  /usr/bin/make test -- NOT OK
---

My system identifies itself as follows (uname -a):

Linux serapis 2.2.14-2GB-SMP #1 SMP Mon May 8 10:24:19 MEST 2000 i686 unknown

And 'perl -V' results in the following:

---
Summary of my perl5 (revision 5 version 8 subversion 8) configuration:
  Platform:
osname=linux, osvers=2.2.14-2gb-smp, archname=i686-linux-ld
uname='linux serapis 2.2.14-2gb-smp #1 smp mon may 8 10:24:19 mest
2000 i686 unknown '
config_args='-Dcc=gcc -Dprefix=/home/rroggenb'
hint=recommended, useposix=true, d_sigaction=define
usethreads=undef use5005threads=undef useithreads=undef
usemultiplicity=undef
useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
use64bitint=undef use64bitall=undef uselongdouble=define
usemymalloc=n, bincompat5005=undef
  Compiler:
cc='gcc', ccflags ='-fno-strict-aliasing -pipe -I/usr/local/include
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
optimize='-O2',
cppflags='-fno-strict-aliasing -pipe -I/usr/local/include'
ccversion='', gccversion='2.95.2 19991024 (release)', gccosandvers=''
intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
ivtype='long', ivsize=4, nvtype='long double', nvsize=12,
Off_t='off_t', 

Re: DBD::CSV and multi character separator

2007-04-27 Thread Robert Roggenbuck

Hi,

can You show us a relevant code snippet (connecting and querying) and a snippet 
of Your CSV-Data? Which kind of line separator are You using?


Best Regards

Robert



Santosh Pathak schrieb:

Hi,

I read on following site that DBD::CSV supports multi-character separator.
But I am not able to use it. my separator is |++|
http://www.annocpan.org/~JZUCKER/DBD-CSV-0.22/lib/DBD/CSV.pm



Even after setting sep_char to |++|, it doesn't work.
Am I missing something?

Thanks
- Santosh



--

===
Robert Roggenbuck, M.A.
Konrad Zuse Zentrum fuer Informationstechnik Berlin
Takustr. 7  D-14195 Berlin
[EMAIL PROTECTED]
http://www.mathematik-21.de/
http://www.zib.de/

Buero:
Universitaet Osnabrueck
Fachbereich Mathematik / Informatik
Albrechtstr. 28aFon: ++49 (0)541/969-2735
D-49069 Osnabrueck  Fax: ++49 (0)541/969-2770
http://www.mathematik.uni-osnabrueck.de/
===


Re: DBD::CSV and multi character separator

2007-04-27 Thread Robert Roggenbuck
This looks clean to me - and Philip has already pointed to the main reason
why Your code fails.


BTW: Did You get an error message while connecting / querying, or did You
just get no results?


Greetings

Robert

-


Santosh Pathak schrieb:

Here is my code,

use DBI;

$dbh = DBI->connect(qq{DBI:CSV:});
$dbh->{'csv_tables'}->{'host'} = {
    'col_names' => ["Id", "Name", "IP", "OS", "VERSION", "ARCH"],
    'sep_char'  => "|++|",
    'eol'       => "\n",
    'file'      => './foo'
};
my $sth = $dbh->prepare("SELECT * FROM host");
$sth->execute() or die "Cannot execute: " . $sth->errstr();
while (my $row = $sth->fetchrow_hashref) {
    print("Id = ", $row->{'Id'});
    print("\nName=", $row->{'Name'});
    print("\nIP=", $row->{'IP'});
    print("\nOS=", $row->{'OS'});
    print("\nVersion=", $row->{'VERSION'});
    print("\nARCH=", $row->{'ARCH'});
}
$sth->finish();
$dbh->disconnect();

and my data,

100|++|abc|++|1.2.3.4|++|SunOS|++|5.10|++|sparc
101|++|abd|++|1.2.3.5|++|SunOS|++|5.10|++|sparc
102|++|abe|++|1.2.3.6|++|SunOS|++|5.10|++|sparc
103|++|abf|++|1.2.3.7|++|SunOS|++|5.10|++|sparc

Column separator is (|++|) and Line separator is newline (\n)

Thanks
- Santosh

On 4/27/07, Robert Roggenbuck [EMAIL PROTECTED] 
wrote:


Hi,

can You show us a relevant code snippet (connecting and querying) and a
snippet
of Your CSV-Data? Which kind of line separator are You using?

Best Regards

Robert



Santosh Pathak schrieb:
 Hi,

 I read on following site that DBD::CSV supports multi-character
separator.
 But I am not able to use it. my separator is |++|
 http://www.annocpan.org/~JZUCKER/DBD-CSV-0.22/lib/DBD/CSV.pm


 Even after setting sep_char to |++|, it doesn't work.
 Am I missing something?

 Thanks
 - Santosh


--

===
Robert Roggenbuck, M.A.
Konrad Zuse Zentrum fuer Informationstechnik Berlin
Takustr. 7  D-14195 Berlin
[EMAIL PROTECTED]
http://www.mathematik-21.de/
http://www.zib.de/

Buero:
Universitaet Osnabrueck
Fachbereich Mathematik / Informatik
Albrechtstr. 28aFon: ++49 (0)541/969-2735
D-49069 Osnabrueck  Fax: ++49 (0)541/969-2770
http://www.mathematik.uni-osnabrueck.de/
===





--

===
Robert Roggenbuck, M.A.
Konrad Zuse Zentrum fuer Informationstechnik Berlin
Takustr. 7  D-14195 Berlin
[EMAIL PROTECTED]
http://www.mathematik-21.de/
http://www.zib.de/

Buero:
Universitaet Osnabrueck
Fachbereich Mathematik / Informatik
Albrechtstr. 28aFon: ++49 (0)541/969-2735
D-49069 Osnabrueck  Fax: ++49 (0)541/969-2770
http://www.mathematik.uni-osnabrueck.de/
===


Re: DBD::CSV and multi character separator

2007-04-27 Thread Santosh Pathak

Here is my code,

use DBI;

$dbh = DBI->connect(qq{DBI:CSV:});
$dbh->{'csv_tables'}->{'host'} = {
    'col_names' => ["Id", "Name", "IP", "OS", "VERSION", "ARCH"],
    'sep_char'  => "|++|",
    'eol'       => "\n",
    'file'      => './foo'
};
my $sth = $dbh->prepare("SELECT * FROM host");
$sth->execute() or die "Cannot execute: " . $sth->errstr();
while (my $row = $sth->fetchrow_hashref) {
    print("Id = ", $row->{'Id'});
    print("\nName=", $row->{'Name'});
    print("\nIP=", $row->{'IP'});
    print("\nOS=", $row->{'OS'});
    print("\nVersion=", $row->{'VERSION'});
    print("\nARCH=", $row->{'ARCH'});
}
$sth->finish();
$dbh->disconnect();

and my data,

100|++|abc|++|1.2.3.4|++|SunOS|++|5.10|++|sparc
101|++|abd|++|1.2.3.5|++|SunOS|++|5.10|++|sparc
102|++|abe|++|1.2.3.6|++|SunOS|++|5.10|++|sparc
103|++|abf|++|1.2.3.7|++|SunOS|++|5.10|++|sparc

Column separator is (|++|) and Line separator is newline (\n)

Thanks
- Santosh
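
(One workaround, not from this thread but sketched here on the
assumption that Text::CSV_XS's sep_char must be a single character:
split on the literal multi-character separator with plain Perl before
handing the data to anything CSV-shaped:

open my $fh, '<', './foo' or die "open: $!";
while (my $line = <$fh>) {
    chomp $line;
    my @fields = split /\Q|++|\E/, $line;   # \Q...\E escapes the | and + metacharacters
    # @fields now holds Id, Name, IP, OS, VERSION, ARCH
}
close $fh;

)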

On 4/27/07, Robert Roggenbuck [EMAIL PROTECTED] wrote:


Hi,

can You show us a relevant code snippet (connecting and querying) and a
snippet
of Your CSV-Data? Which kind of line separator are You using?

Best Regards

Robert



Santosh Pathak schrieb:
 Hi,

 I read on following site that DBD::CSV supports multi-character
separator.
 But I am not able to use it. my separator is |++|
 http://www.annocpan.org/~JZUCKER/DBD-CSV-0.22/lib/DBD/CSV.pm


 Even after setting sep_char to |++|, it doesn't work.
 Am I missing something?

 Thanks
 - Santosh


--

===
Robert Roggenbuck, M.A.
Konrad Zuse Zentrum fuer Informationstechnik Berlin
Takustr. 7  D-14195 Berlin
[EMAIL PROTECTED]
http://www.mathematik-21.de/
http://www.zib.de/

Buero:
Universitaet Osnabrueck
Fachbereich Mathematik / Informatik
Albrechtstr. 28aFon: ++49 (0)541/969-2735
D-49069 Osnabrueck  Fax: ++49 (0)541/969-2770
http://www.mathematik.uni-osnabrueck.de/
===



Re: DBD::CSV much slower on osX ?

2005-04-01 Thread Charles Plessy
On Wed, Feb 16, 2005 at 09:44:08AM -0800, Jeff Zucker wrote :
 [EMAIL PROTECTED] wrote:
 
 Total Elapsed Time = 63.19465 Seconds
  User+System Time = 43.46465 Seconds
 Exclusive Times
 %Time ExclSec CumulS #Calls sec/call Csec/c  Name
 66.5   28.91 36.463 109445   0.0003 0.0003  SQL::Statement::eval_where
  
 
 I'm the maintainer of DBD::CSV and while I don't have time this week to 
 get into this, I am following the conversation and will certainly 
 eventually revise the module if any useful information is discovered.  

Sorry for wasting your time with my problems, but since I
changed the i386 computer, I can not reproduce the speed difference
anymore. And I no longer have access to the computer where the
query was running faster, to double-check whether everything is exactly
the same as on the new one.

As it was better to spend my time analysing some data with
my script rather than analysing the script itself, I eventually
switched to DBD::SQLite, which solved my problem by performing the
query in a few seconds.

The query was something like :

$query = "SELECT * FROM $db_table WHERE SYMBOL1 LIKE '%${Search}%' OR SYMBOL2 LIKE '%${Search}%'";
my $sth = $dbh->prepare($query);
$sth->execute;

(actually CLIKE, before the transition)


You may wonder the purpose of this mail, as there is no new
crucial information. The reason is that I do not like not knowing the
end of the story when I browse archived threads.

Anyway, many thanks to all that tried to help me.

-- 
Charles


Re: DBD::CSV much slower on osX ?

2005-04-01 Thread Tim Bunce
On Fri, Apr 01, 2005 at 03:09:12PM +0900, Charles Plessy wrote:
 
   You may wonder the purpose of this mail, as there is no new
 crucial information. The reason is that I do not like not knowing the
 end of the story when I browse archived threads.

Thanks Charles.

Tim.


Re: DBD::CSV much slower on osX ?

2005-04-01 Thread Jeff Zucker
Charles Plessy wrote:
As it was better to spend my time analysing some data with
my script rather than analysing the script itself, I eventually
switched to DBD::SQLite, which solved my problem by performing the
query in a few seconds.
There's no question, SQLite is faster than DBD::CSV for most things.  If
someone asks me for a recommendation for a database to use and they have
large and/or complex data, don't care about the format of the data, and
speed is an issue, I never recommend my own modules :-).  I hope you're
using mod_perl or something, because connecting to the database is one
area where SQLite is slower.  It's also slower for inserts.  I'd like to
point out also that SQL::Statement, the underlying engine for DBD::CSV,
is undergoing some major changes and is becoming faster on each
benchmark, and that the purpose of the pure perl DBDs like DBD::CSV is
to provide access to human-readable data and unconventional datasources,
and to provide support for platforms and contexts where compilation is
not an option - not to try to rival the speed of RDBMSs written in C.
The query was something like :
$query = "SELECT * FROM $db_table WHERE SYMBOL1 LIKE '%${Search}%' OR SYMBOL2 LIKE '%${Search}%'";
 

LIKE and CLIKE with wildcards are full text searches and are always 
going to be slow relative to other kinds of searches.

	You may wonder the purpose of this mail, as there is no new
crucial information. The reason is that I do not like not knowing the
end of the story when I browse archived threads.
 

Thanks, I appreciate hearing back. Good luck!
--
Jeff


Re: DBD::CSV much slower on osX ?

2005-02-16 Thread charles-perl
On Thu, Feb 03, 2005 at 10:18:43AM +, Tim Bunce wrote :
 On Thu, Feb 03, 2005 at 06:27:05PM +0900, [EMAIL PROTECTED] wrote:
  Dear list,
  
  I wrote a simple CGI script using DBD::CSV on a linux
  computer, and then installed it on an iMac G5. Its execution time is
  now almost 10 times slower. Using print instructions, I traced the
  bottleneck to the following instruction :
  
  $sth-execute;
  
  Now I am a bit stuck, as I do not know how to investigate in
  the DBD::CSV module to find where the slow instruction is.
 
 The Devel::DProf module (and/or other code profiling modules) may help.
 

Thank you for pointing out this module. I have used it to analyse my script:

Total Elapsed Time = 63.19465 Seconds
  User+System Time = 43.46465 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c  Name
 66.5   28.91 36.463 109445   0.0003 0.0003  SQL::Statement::eval_where
 11.1   4.848  6.132 109447   0. 0.0001  Text::CSV_XS::getline
 7.96   3.458  3.458 218890   0. 0.  SQL::Statement::get_row_value
 6.90   2.998  9.129 109447   0. 0.0001  DBD::CSV::Table::fetch_row
 5.80   2.520  7.545 109445   0. 0.0001  SQL::Statement::process_predicate
 4.36   1.894  1.894 109445   0. 0.  SQL::Statement::is_matched
 2.95   1.284  1.284 109447   0. 0.  IO::Handle::getline
 2.29   0.997 46.596  1   0.9965 46.595  SQL::Statement::SELECT
 0.16   0.069  0.076 27   0.0026 0.0028  CGI::_compile
 0.16   0.068  0.123  4   0.0170 0.0307  main::BEGIN
 0.09   0.040  0.079  4   0.0099 0.0198  DBI::SQL::Nano::BEGIN
 0.09   0.040  0.039  7   0.0057 0.0056  SQL::Statement::BEGIN
 0.07   0.029  0.164  1   0.0295 0.1645  DBI::install_driver
 0.02   0.010  0.010  1   0.0100 0.0100  Fcntl::bootstrap
 0.02   0.010  0.010  1   0.0100 0.0100  SQL::Parser::dialect


It seems that there is no bottleneck. I ran the script on
a 3GHz linux box, and it took 19 seconds to complete. So I do not
understand how I managed to run it in 7 seconds on a 1.8 GHz Athlon
laptop (which unfortunately I do not own anymore).

As somebody kindly pointed out to me, hardware differences
could have a strong impact on the results, so I will suppose that this is
the reason why I have seen such speed inconsistencies, unless somebody
finds something insightful in the dprofpp output extract in this mail.

Thanks to those who answered me and offered their help.

-- 
Charles


Re: DBD::CSV much slower on osX ?

2005-02-16 Thread Charles Plessy
On Thu, Feb 03, 2005 at 11:33:10AM -0800, Henri Asseily wrote :

 Also note that DBD::CSV is significantly impacted by I/O speed. If your 
 IMac G5 has a 4200 rpm drive and your linux box has a 10k rpm one, that 
 makes quite a large difference.


Would that mean that running the script twice should
dramatically accelerate the execution because the file would then be
cached in memory? (which does not work, I just tried).

Best,

-- 
Charles


Re: DBD::CSV much slower on osX ?

2005-02-16 Thread Peter J. Holzer
On 2005-02-16 19:44:49 +0900, [EMAIL PROTECTED] wrote:
 On Thu, Feb 03, 2005 at 10:18:43AM +, Tim Bunce wrote :
  On Thu, Feb 03, 2005 at 06:27:05PM +0900, [EMAIL PROTECTED] wrote:
   Dear list,
   
 I wrote a simple CGI script using DBD::CSV on a linux
   computer, and then installed it on an iMac G5. Its execution time is
   now almost 10 times slower.
[...]
  The Devel::DProf module (and/or other code profiling modules) may help.
  
 
 Thank you for pointing out this module. I have used it to analyse my script:
 
 Total Elapsed Time = 63.19465 Seconds
   User+System Time = 43.46465 Seconds

That's strange. Unless your computer is busy doing something else, it is
waiting almost 20 seconds for I/O. Unless the lines of your CSV file are
really long, I cannot imagine that simply reading a file with 109447
lines can take 20 seconds. (Even less if that is a join over several
files)

 Exclusive Times
 %Time ExclSec CumulS #Calls sec/call Csec/c  Name
  66.5   28.91 36.463 109445   0.0003 0.0003  SQL::Statement::eval_where
[...]
  2.95   1.284  1.284 109447   0. 0.  IO::Handle::getline
  2.29   0.997 46.596  1   0.9965 46.595  SQL::Statement::SELECT
[...]
   It seems that there is no bottleneck.

Most of the CPU time is spent in SQL::Statement::eval_where. Unless you
can change your query, you probably can't make it much faster (except
maybe by moving from CSV to a real RDBMS or maybe SQLite).

hp

-- 
   _  | Peter J. Holzer  | If the code is old but the problem is new
|_|_) | Sysadmin WSR / LUGA  | then the code probably isn't the problem.
| |   | [EMAIL PROTECTED]|
__/   | http://www.hjp.at/   | -- Tim Bunce on dbi-users, 2004-11-05




Re: Re: DBD::CSV much slower on osX ?

2005-02-16 Thread amonotod
 From: Charles Plessy [EMAIL PROTECTED]
 Date: 2005/02/16 Wed AM 04:58:36 CST
 
   Would that mean that running the script twice should
 dramatically accelerate the execution because the file would then be
 cached in memory? (which does not work, I just tried).

Keep in mind that it's not just your script that must be cached, if indeed that 
did work.  There's also Perl, the modules, and your data (if in CSV form).

If you want to test to see if it is just I/O, find out if your O/S supports 
RAM-disk, copy all files involved there, and run a test.  If your run time is 
significantly reduced, it may indeed point to I/O issues...

 Best,
 Charles

HTH,
amonotod


--

`\|||/ amonotod@| sun|perl|windows
  (@@) charter.net  | sysadmin|dba
  ooO_(_)_Ooo
  _|_|_|_|_|_|_|_|



Re: DBD::CSV much slower on osX ?

2005-02-16 Thread Henri Asseily
On Feb 16, 2005, at 2:58 AM, Charles Plessy wrote:
On Thu, Feb 03, 2005 at 11:33:10AM -0800, Henri Asseily wrote :
Also note that DBD::CSV is significantly impacted by I/O speed. If 
your
IMac G5 has a 4200 rpm drive and your linux box has a 10k rpm one, 
that
makes quite a large difference.

Would that mean that running the script twice should
dramatically accelerate the execution because the file would then be
cached in memory? (which does not work, I just tried).
Looks like everything is normal from your dprof output. It just takes a
while to access the file.
Assuming your IMac G5 has enough available RAM to use as file buffers 
when you load the file, try the following command just before running 
the script:

time cat /path/to/csv/file > /dev/null
This should load the file into memory buffers and tell you the time it 
took.
Then run the same command again. It should be almost instantaneous, 
telling you that the file is in RAM.
Only then, run your perl script and see it fly.

FYI on my powerbook, a 'cat' to /dev/null of a 230 meg file takes 11 
seconds. The next run takes 0.8 seconds.

H.


Re: DBD::CSV much slower on osX ?

2005-02-16 Thread Jeff Zucker
[EMAIL PROTECTED] wrote:
Total Elapsed Time = 63.19465 Seconds
 User+System Time = 43.46465 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c  Name
66.5   28.91 36.463 109445   0.0003 0.0003  SQL::Statement::eval_where
 

I'm the maintainer of DBD::CSV and while I don't have time this week to 
get into this, I am following the conversation and will certainly 
eventually revise the module if any useful information is discovered.  
My only comment so far is that if, as the above shows, the eval_where is 
what is taking the time, then I'd like to know what SQL query you are 
running.  Are you doing full text searches with LIKE? Or many
comparisons?  Do you use a prepare-once, execute-many approach?  This
may be a red herring, but I'd be curious.
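
(For reference, a prepare-once, execute-many pattern looks roughly like
this - the table and column names are made up:

my $sth = $dbh->prepare("SELECT * FROM big_table WHERE symbol LIKE ?");
for my $pattern (@patterns) {
    $sth->execute("%$pattern%");              # re-run the already-parsed query
    while (my @row = $sth->fetchrow_array) {
        # process one row at a time
    }
}

so the SQL is parsed once instead of once per query.)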

--
Jeff


Re: DBD::CSV much slower on osX ?

2005-02-16 Thread Tim Bunce
On Wed, Feb 16, 2005 at 07:44:49PM +0900, [EMAIL PROTECTED] wrote:
 
   As somebody kindly pointed out to me, hardware differences
 could have a strong impact on the results, so I will suppose that this is
 the reason why I have seen such speed inconsistencies, unless somebody
 finds something insightful in the dprofpp output extract in this mail.

Peter had some insightful comments.

And you've not addressed the issues I raised a few days ago:

: A couple of quick guesses:
:   - perl config differences - eg configured for threads or not
:   - perhaps on OS X you're using unicode data

You could start by posting the perl -V output for the two perls.

Tim.


Re: DBD::CSV much slower on osX ?

2005-02-03 Thread Tim Bunce
On Thu, Feb 03, 2005 at 06:27:05PM +0900, [EMAIL PROTECTED] wrote:
 Dear list,
 
   I wrote a simple CGI script using DBD::CSV on a linux
 computer, and then installed it on a iMac G5. Its execution time is
 now alomst 10 times slower. Using print instructions, I traced the
 bottleneck to the following instruction :
 
 $sth-execute;
 
   Now I am a bit stuck, as I do not know how to investigate in
 the DBD::CSV module to find where the slow instruction is.

The Devel::DProf module (and/or other code profiling modules) may help.
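
(For reference, a typical Devel::DProf run looks like this - the script
name is a stand-in:

shell> perl -d:DProf yourscript.pl
shell> dprofpp tmon.out

dprofpp reads the tmon.out file that the profiler writes.)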

A couple of quick guesses:
  - perl config differences - eg configured for threads or not
  - perhaps on OS X you're using unicode data

Tim.


Re: DBD::CSV much slower on osX ?

2005-02-03 Thread Henri Asseily
On Feb 3, 2005, at 2:18 AM, Tim Bunce wrote:
On Thu, Feb 03, 2005 at 06:27:05PM +0900, [EMAIL PROTECTED] 
wrote:
Dear list,
I wrote a simple CGI script using DBD::CSV on a linux
computer, and then installed it on an iMac G5. Its execution time is
now almost 10 times slower. Using print instructions, I traced the
bottleneck to the following instruction :
$sth-execute;
Now I am a bit stuck, as I do not know how to investigate in
the DBD::CSV module to find where the slow instruction is.
The Devel::DProf module (and/or other code profiling modules) may help.
A couple of quick guesses:
  - perl config differences - eg configured for threads or not
  - perhaps on OS X you're using unicode data
Tim.
We need to know a little more about your config:
OS X version?
What Perl version?
I know that Perl by default is threaded on OS X.
What type of linux box are you benchmarking against?
Also note that DBD::CSV is significantly impacted by I/O speed. If your 
IMac G5 has a 4200 rpm drive and your linux box has a 10k rpm one, that 
makes quite a large difference.

Send me off-list the code and data and I'll test it.


Re: DBD::CSV and joins

2005-01-24 Thread Jeff Benton
Any traction on this?

On Fri, 2005-01-14 at 10:17, Jeff Zucker wrote:
 Jeff Benton wrote:
 
 Sorry if I posted this twice - I did not see my first post go through.
 
I am trying to do a join across two csv files but have been unsuccessful
 up to this point.
   
 
 ...
 
 I get the following:
 Use of uninitialized value in concatenation (.) or string at
/usr/local/share/perl/5.8.4/SQL/Statement.pm line 553, <GEN1> line 1.
   
 
 It appears to be a bug.  I'll let you know when I've tracked it down.
-- 
Jeffrey Benton
QA Lead/Interim Project Manager
[EMAIL PROTECTED]
x-113



Re: DBD::CSV and joins

2005-01-14 Thread Jeff Zucker
Jeff Benton wrote:
Sorry if I posted this twice - I did not see my first post go through.
I am trying to do a join across two csv files but have been unsuccessful
up to this point.
 

...
I get the following:
Use of uninitialized value in concatenation (.) or string at
/usr/local/share/perl/5.8.4/SQL/Statement.pm line 553, <GEN1> line 1.
 

It appears to be a bug.  I'll let you know when I've tracked it down.
--
Jeff


Re: DBD::CSV and large files...

2004-09-20 Thread Jeff Zucker
amonotod wrote:
  I am wondering if there is any kind of switch that can be passed
 to DBD::CSV that will enable it to parse only one EOL at a time?  
No, but Text::CSV_XS, which underlies DBD::CSV, can easily parse a line
at a time.
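
For example, a minimal line-at-a-time loop with Text::CSV_XS (the file
name here is a stand-in) keeps memory flat no matter how large the file
grows:

use Text::CSV_XS;
my $csv = Text::CSV_XS->new({ binary => 1, eol => $/ });
open my $fh, '<', 'big.csv' or die "open: $!";
while (my $row = $csv->getline($fh)) {
    # $row is an array ref holding the fields of one record;
    # process or insert it here, then move on to the next line
}
close $fh;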

I think there's discussion of adding something along the lines of 
execute for fetch into DBI which would allow this kind of cursor 
oriented operation.

   We are using DBD::CSV to parse files into databases
Check to see if your RDBMS has a load function, that is usually the 
fastest way to get CSV data into a database.

--
Jeff


Re: DBD::CSV and large files...

2004-09-20 Thread amonotod
 From: Jeff Zucker [EMAIL PROTECTED]
 Date: 2004/09/20 Mon PM 05:40:06 GMT

 No, but Text::CSV_XS, which underlies DBD::CSV can easily parse a line 
 at a time.

Thanks, I'll take a look at it...
 
 I think there's discussion of adding something along the lines of 
 execute for fetch into DBI which would allow this kind of cursor 
 oriented operation.

Sounds like VB's recordset crud...
 
 Check to see if your RDBMS has a load function, that is usually the 
 fastest way to get CSV data into a database.

The point of this script is to avoid BCP, SQL*Loader, etc., and maintain an agnostic 
load routine.  Is it as efficient?  At this time, no.  But I'll see what I can do...
 
 Jeff

Thanks for the response, always appreciated...
amonotod


--

`\|||/ amonotod@| sun|perl|windows
  (@@) charter.net  | sysadmin|dba
  ooO_(_)_Ooo
  _|_|_|_|_|_|_|_|



RE: DBD::CSV optimization (hack)

2004-09-20 Thread Ofer Nave

[NOTE: Moved to [EMAIL PROTECTED] from a private email thread]

Allow me to interrupt with some background:

We have thousands of tables and views across dozens of databases on a score
of servers.  We have many hundreds of gigabytes of data in those
databases.  For some reason, our databases are painfully slow (Sybase, and I
don't know why they're slow - maybe our DBA isn't on top of things ;), so we try
to do as much work outside the database as possible.  In the past, people
have written dozens of custom scripts that pull data from dozens of tables
and dumped them to random locations across our filesystem, using random
naming conventions, in custom undocumented formats.  Everyone who wanted to
pull data from these table dumps instead of the database had to figure out
if one even existed, ask around to find out where it is, `head` the file,
guess the format rules, and write custom code in their script to read it.

Needless to say, this is all very, very ugly.  Being the new guy, I quickly
drank a glass water to overcome my gag reflex and set about eliminating this
mess and solving the problem once and for all.  That's when I started poking
around CPAN (I'm a long time perl programmer, but somehow never got around
to exploring CPAN till just recently - good god there's some amazing stuff
there) and found DBD::CSV and Text::CSV_XS.

I thought: perfect!  I'll use Text::CSV_XS to dump tables, pick a standard
location and filename scheme
(/netapp/dbcache/<server>/<database>/<table|view>), and whip up a quick
module with two functions: cache_table, and query_cached_table, the second
of which would take a SQL query (using DBD::CSV), so that people could
painlessly change code to use either the live tables (DBI + DBD::Sybase) or
the cached table (DBI + DBD::CSV), as the situation dictates.

However, most of the time the user simply wants to read the whole file in
(or a subset of columns) and do something special (non-SQLish) with the
data, and in those cases, loading the data myself with Text::CSV_XS is far
faster than going through DBD::CSV.  I even wrote a little iterator object
that implements a simple subset of DBI::st, like fetchrow_array and
fetchrow_arrayref (trivial, but cute), which I return if I decide to use
Text::CSV_XS directly and not DBD::CSV.
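
(A rough sketch of what such an iterator might look like - the package
name and details are made up, not the poster's actual code:

package TinyIter;
sub new {
    my ($class, $rows) = @_;            # $rows: array ref of array refs
    return bless { rows => $rows, i => 0 }, $class;
}
sub fetchrow_arrayref {
    my $self = shift;
    return undef if $self->{i} >= @{ $self->{rows} };
    return $self->{rows}[ $self->{i}++ ];
}
sub fetchrow_array {
    my $self = shift;
    my $r = $self->fetchrow_arrayref or return ();
    return @$r;
}
1;

)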

But I am currently deciding which strategy to use in the background with a
regex to determine if the query meets my definition of simple, and that's
kinda ugly.  I will probably mess around with SQL::Parser and use that
instead when I get a moment, but I wanted to bring this whole situation to
your attention in case you were willing/able to optimize DBD::CSV for the
special case of just give me some columns from this one table.

Of course, maybe this whole situation has already been solved by an existing
generic table caching module, and I just haven't had the good fortune to
come across it.  If not, however, perhaps it would be worthwhile polishing
up what I've worked on and putting it on CPAN.

For completeness, responses to individual points from the previous email can
be found below:

  I recently ran a test of Text::CSV_XS vs. DBD::CSV in which I simply
  wanted to load all data (or a subset of columns) from a CSV file
  (essentially a SELECT * FROM table or SELECT fieldlist FROM table
  query).  The raw Text::CSV_XS code ran significantly faster, sometimes
  twice as fast.

 If you don't have a WHERE clause, or only a single WHERE comparison, try
 using DBI::SQL::Nano as the DBD::CSV parser instead of SQL::Statement.
 Just put BEGIN { $ENV{DBI_SQL_NANO}=1 } at the top of your script and
 the DBD will use Nano rather than the usual SQL::Statement.  This can
 significantly speed things up for simple cases.

I can't know ahead of time how complex the query will be.  My goal is to
seamlessly support as much of SQL as I can on top of our database caching
system, while using shortcuts when possible if the performance difference
justifies it (as it does for straight table slurps).

  That's not surprising, since DBD::CSV provides an entire SQL
  parsing/execution layer that enables it to do far cooler things than a
  simple SELECT, but that means that if I want to optimize the table
  caching system I'm building for my company, I need to either:

 If your goal is an in-memory table, try DBD::AnyData, which handles CSV
 and creates in-memory tables by default.

Only casually familiar with DBD::AnyData (I skimmed the docs - looks neat),
but in-memory table isn't exactly what I'm shooting for.  The working
assumption here is that the table was already dumped to disk in CSV format,
say once a day at 3am or something.

  2) parse the query myself (perhaps with SQL::Parser), and use
  Text::CSV_XS directly if it's a simple select, or DBD::CSV for
  anything else

 Um, but that is just what DBD::CSV already does - it uses SQL::Parser
 to parse the query and Text::CSV_XS to parse the CSV.

I know DBD::CSV uses SQL::Parser (in fact, the docs imply that's what it was

Re: DBD::CSV and large files...

2004-09-20 Thread Ian Harisay
Text::CSV_XS will handle what you want to do just fine.  You could do:

while (my $rec = $sth->fetchrow_arrayref()) {
    $csv->combine(@{$rec});            # combine() builds the line and returns a status
    print OUTFILE $csv->string(), $/;  # string() returns the combined CSV line
}
 
If you are pulling large amounts of data across your network, look at
doing some optimization by setting RowCacheSize in the DBI to a higher
number.  I have found 1200 to be optimal for my stuff.  Record size does
make a difference with this number. 
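
(A sketch of that tuning - $dbh stands for your connected handle:

$dbh->{RowCacheSize} = 1200;   # hint: fetch/cache ~1200 rows per round trip
my $sth = $dbh->prepare($sql);

RowCacheSize is only a hint; drivers that don't support it silently
ignore it.)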
 
amonotod [EMAIL PROTECTED] 09/20 11:23 am  
 
Hello, 
 
 I am wondering if there is any kind of switch that can be passed to
DBD::CSV that will enable it to parse only one EOL at a time?  
 
 
  We are using DBD::CSV to parse files into databases, and it is working
beautifully.  Unfortunately, some of these files are in excess of 100MB,
and none are them are slated to stop growing. 
 
 
 When parsing files under ~10MB, it goes fairly quickly, but the parse
time on very large files can be up to 30 minutes on a P4 1.7GHz with 1GB
RAM.  I am currently parsing a 122MB file, and perl's memory use went to
over 300MB, with a parse time of 32 minutes for this file.
 
 
 What I'd like to see is DBD::CSV create the handle, allow me to select
* from table, but then wait on actually running the select statement,
while allowing me to call $sth->fetchrow_array against it.  In the
background, after the initial statement and after each fetchrow_array or
fetchrow, DBD::CSV would read the next line of data, parsing to the next
EOL...
 
 
 So, I know I'm dreaming, but is this possible with the present
DBD::CSV?  Yes, I could manually open() the file, parse to an EOL, and
then call DBD::CSV against the in-memory values, but I'd rather not. 
DBD::CSV is very good about finding the next EOL, and making sure it is
not part of a quoted field, and I'd much rather rely on that... 
 
 
 Maybe this needs a separate module, like DBD::CSV::Loader or something
like that, but that would be beyond my l33t skillz... :-( 
 
 
Ideas?  Tips?  Flames? 
 
 
Thanks, 
 
amonotod 
 
 
 
--
`\|||/ amonotod@| sun|perl|windows
  (@@) charter.net  | sysadmin|dba
  ooO_(_)_Ooo
  _|_|_|_|_|_|_|_|
 



Re: DBD::CSV

2004-08-18 Thread Peter L. Berghold
On Wed, 2004-08-18 at 13:51, Robert wrote:
[snip]
 my $sel1 = $dbh1->prepare("SELECT * FROM table1");
 my $sel1->execute();
[snip!]
 "my" variable $sel1 masks earlier declaration in same scope at test4.pl line 21.
 Can't call method "execute" on an undefined value at test4.pl line 21.
  

You've declared the variable $sel1 as scope my twice in as many lines.
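
A sketch of the fix - declare the variable once, then call execute on it:

my $sel1 = $dbh1->prepare("SELECT * FROM table1");
$sel1->execute();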

-- 

Peter L. Berghold[EMAIL PROTECTED]
Dog event enthusiast, brewer of Belgian (style) Ales.  Happiness is
having your contented dog at your side and a Belgian Ale in your glass.





Re: DBD::CSV doesn't do Unicode?

2004-07-09 Thread Jeff Zucker
amonotod wrote:
One set of them has Kanji characters in it
...
I would prefer to use the CSV driver to do so
What happened when you tried?  If it didn't work as expected, what 
happened instead?  Sending the data is good, but sending a small test 
script that illustrates the problem (if there is a problem) is better. 
Also your version of perl is very relevant since different perls handle 
Unicode differently.

If you look at the archives of this mailing list, you'll find a whole
discussion of using DBD::CSV with Unicode in the past month, including
reports of people in Japan who have it working fine.  (Search for
"MultiByte Character Sets and False Matches" and read the whole thread,
especially the last couple of posts.)

BTW, how did your issue "Re: perl DBD::CSV and non-printing ASCII" turn
out?  I think the test script I wrote conclusively proved the issue
wasn't with DBD::CSV, was that your conclusion also?

--
Jeff


Re: DBD::CSV doesn't do Unicode?

2004-07-09 Thread amonotod
 From: Jeff Zucker [EMAIL PROTECTED]
 Date: 2004/07/09 Fri PM 04:18:15 GMT

 What happened when you tried?  If it didn't work as expected, what 
 happened instead?

I'm capturing and printing the number of rows selected with
  print "Loading data from $datafile into table $tablemap...\n";
  my $Text_dbh = DBI->connect("DBI:CSV:f_dir=$arg_data_location");
  $Text_dbh->{'csv_tables'}->{$tablemap} = {
      'eol'         => "\n",
      'sep_char'    => "\|",
      'quote_char'  => "\"",
      'escape_char' => "\"",
      'file'        => $datafile,
      'col_names'   => [EMAIL PROTECTED]  # scrubbed by the list archive; originally an array ref of column names
  };
  my $Text_sth = $Text_dbh->prepare("SELECT * FROM $tablemap");
  my $row_count = $Text_sth->execute() || die "Could not get the data...\n";
  print "$row_count rows in $tablemap!\n";

And on the Unicode files it prints:
Loading data from consum.dat into table CONSUM...
0E0 rows in CONSUM!
Done!

 If you look at the archives of this mailing list, you'll find a whole
 discussion of using DBD::CSV with Unicode in the past month, including
 reports of people in Japan who have it working fine.  (Search for
 "MultiByte Character Sets and False Matches" and read the whole thread,
 especially the last couple of posts.)

I will go digging through them...
 
 BTW, how did your issue "Re: perl DBD::CSV and non-printing ASCII" turn
 out?  I think the test script I wrote conclusively proved the issue
 wasn't with DBD::CSV, was that your conclusion also?

Yes, you were indeed correct; I apologize for not getting back to you on it.
The issue had nothing to do with DBD::CSV and everything to do with how I was
handling the data... :-(
 
 Jeff

Thank you,
amonotod

-- 
`\|||/ amonotod@| subject line: 
  (@@) charter.net  | no perl, no read...
  ooO_(_)_Ooo
  _|_|_|_|_|_|_|_|



Re: DBD::CSV and SQL::Statement issues

2004-05-14 Thread Jeff Zucker
Hi Christopher,

Christopher Huhn wrote:

I'm missing the following features using DBD::CSV:

  * Selecting constant values isn't working (i.e. SELECT 'xyz' FROM 
table). AFAIK that's plain SQL.
Yes, it's plain SQL but not yet supported by SQL::Statement.  The syntax 
that is supported is listed in the SQL::Parser docs.  I plan to revamp 
the entire SELECT columns parsing to allow column aliases, functions, 
constants, etc. but currently those aren't supported in the 
select_columns clause even if they are supported in the WHERE clause.

  * There's no case insensitive IN operator (Yes, I know about CLIKE ...)
Sorry, that's not something I'll probably ever add, but this should work
now:  ... WHERE UPPER(foo) IN (UPPER(bar), UPPER(baz)).

$sql = 'SELECT wb,from_address FROM whitelist WHERE user_id = ? ' .
       'AND from_address IN (?,?,?)';
$sth = $dbh->prepare($sql) or die "Cannot prepare: " . $dbh->errstr();
$sth->bind_param(1, '1001');
$sth->bind_param(2, '[EMAIL PROTECTED]');
$sth->bind_param(3, '[EMAIL PROTECTED]');
$sth->bind_param(4, '[EMAIL PROTECTED]');
$sth->execute() or die "Cannot execute: " . $sth->errstr();
DBI::dump_results($sth);
leads to

  Use of uninitialized value in substitution iterator at
/usr/share/perl5/SQL/Parser.pm line 974.
  SQL ERROR: Bad predicate: ''!

ii  libsql-statement-perl 1.005-1   SQL parsing
There was a bug in the IN parsing in 1.005, sorry.  The current version 
of SQL::Statement 1.09, works fine with that syntax.

--
Jeff


Re: DBD::CSV and embedded newlines

2003-07-29 Thread Jeff Zucker
R. Steven Rainwater wrote:

But with DBD::CSV, it seems to be converting each newline
character into the literal characters backslash and n.
Hmm, I wonder why I never noticed that or no one reported it.  I'll 
investigate for the next release.  Two quick fixes: change the code to 
use placeholders, which are better anyway; or if you don't want to 
change the code at all, edit the file .../DBD/File.pm and comment out 
line 340 and DBD::CSV should then work as you expect.
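
(A sketch of the placeholder version - the table and column names are
made up here:

my $sth = $dbh->prepare("INSERT INTO notes (id, body) VALUES (?, ?)");
$sth->execute(1, "line one\nline two");   # the value is passed as data

so nothing gets a chance to rewrite the embedded newline.)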

--
Jeff


Re: DBD::CSV::db prepare failed when using IN with where clause

2003-03-12 Thread Jeff Zucker
[EMAIL PROTECTED] wrote:

I am getting an error when I am trying to use an IN with DBD-CSV.  It used
to work before, so I don't know what happened.
Ever since DBD-CSV was installed on a new machine, the statements no
longer work.
I currently have version 0.1021 of SQL::Statement installed on Solaris.



That version of SQL::Statement doesn't support the IN predicate.

Below is the output of my test program when I set the trace to 2.

Should I reword this select statement using ORs instead of the IN clause?

Is this a version problem?  Should I use SQL::Statement 1.005 or is the XS
version okay with DBD::CSV?


The XS version is okay but has a different feature set.  The pure perl 
version supports perhaps twice as many SQL features like some joins, IN, 
BETWEEN, MIN MAX, etc.  See this page for a description of differences:

   http://www.vpservices.com/jeff/programs/sql-compare.html

--
Jeff


Re: DBD::CSV

2003-03-04 Thread Ron Savage
On Mon, 3 Mar 2003 17:56:33 +0100, Peter Schuberth wrote:
Hello,

Hi Peter

I would also like to stick to MySQL, but I have to change it, since
there is
no DB-Server available.

What do you mean? I use MySQL under Windows.
--
Cheers
Ron Savage, [EMAIL PROTECTED] on 04/03/2003
http://savage.net.au/index.html




Re: DBD::CSV

2003-03-04 Thread Peter Schuberth
Hello Jeff,
below is the sample query; the one which is commented out is working under
Linux (this is the one I need), but gives an empty result under Windows. Then
I removed the field artnummer, which is also a string field; then I got some
results, but the result looks like a simple OR between (artbez_' . $isLang .
' LIKE ?) OR (artkurz_' . $isLang . ' LIKE ?).
I am not accessing at the same time under Linux and Windows; I am about to
move the program from Linux to Windows.
The version of SQL::Statement under Windows is 1.005.

# my $searchsqlstring = 'SELECT * from artikel WHERE ((artbez_' . $isLang .
' LIKE ?) OR (artkurz_' . $isLang .' LIKE ?) OR (artnummer LIKE ?)) AND ' .

# '((artbez_' . $isLang . ' LIKE ?) OR (artkurz_' . $isLang .' LIKE ?) OR
(artnummer LIKE ?)) AND ' .

# '((artbez_' . $isLang . ' LIKE ?) OR (artkurz_' . $isLang .' LIKE ?) OR
(artnummer LIKE ?))';

my $searchsqlstring = 'SELECT * from artikel WHERE ((artbez_' . $isLang . '
LIKE ?) OR (artkurz_' . $isLang .' LIKE ?)) AND ' .

'((artbez_' . $isLang . ' LIKE ?) OR (artkurz_' . $isLang .' LIKE ?)) AND '
.

'((artbez_' . $isLang . ' LIKE ?) OR (artkurz_' . $isLang .' LIKE ?))';

$sth = $DBH->prepare($searchsqlstring) or die "SELECT von artikel nicht
moeglich.";



$search4 = $search1;

$search5 = $search2;

$search6 = $search3;

$search1 = '%' . $search1 . '%';

$search2 = '%' . $search2 . '%';

$search3 = '%' . $search3 . '%';

$search4 = $search4 . '%';

$search5 = $search5 . '%';

$search6 = $search6 . '%';



# $sth->execute($search1, $search1, $search4, $search2, $search2,
# $search5, $search3, $search3, $search6) or die "SELECT von artikel nicht
# moeglich.";

$sth->execute($search1, $search1, $search2, $search2, $search3, $search3)
or die "SELECT von artikel nicht moeglich.";



- Original Message -
From: Jeff Zucker [EMAIL PROTECTED]
To: Peter Schuberth [EMAIL PROTECTED]; dbi-users [EMAIL PROTECTED]
Sent: Monday, March 03, 2003 7:07 PM
Subject: Re: DBD::CSV


 Peter Schuberth wrote:

 On the same hardware system
 the same select for the same table
 
 Please check your version of SQL::Statement which is the module that
 determines the speed and capabilities of DBD::CSV.  Put these two lines
 at the top of the scripts:

 use SQL::Statement;
 print $SQL::Statement::VERSION;

 It sounds to me like you may have the XS version of the module on one
 install and the pure perl version on the other.  You can read about the
 difference between the two at

http://www.vpservices.com/jeff/programs/sql-compare.html

 If you have a moderately large data set and speed is an issue, then the
 XS version will be better, but MySQL, PostgreSQL, or SQLite will be even
 faster. DBD::CSV is really meant for relatively small data sets and
 relatively uncomplicated queries.

 Another issue:  if you are trying to access the same data file from both
 windows and linux, you'll need to specifically set the csv_eol for the
 table.  It doesn't really  matter which you choose, either \015\012 or
 \012.  If the script sets it to either one of those and the data
 matches, then the same file can be accessed from both windows and linux
 without changing the file or the script.  That sort of multi-platform
 accessibility can be achieved with the other DBDs, but probably not as
 simply.

 You may be running into some bugs I am working on relative to
 parentheses in WHERE clauses.  I'd appreciate it if you could send me
 any sample queries that don't work for you.

 --
 Jeff





Re: DBD::CSV

2003-03-04 Thread Peter Schuberth
Hello Jeff,

I believe the problem of the low speed is due to using SQL-Statement 1.005. I
think I should change to an XS version like 0.1021.

But from where can I get such a package? I loaded it initially from
ActivePerl. But the package they provide there for the 6xx build (5.6.1) has
only SQL-Statement 1.005. The 5xx build uses SQL-Statement 0.1021,
but I cannot install this package on my system.

Any info where I can get this package would be of great help.

Peter

- Original Message -
From: Jeff Zucker [EMAIL PROTECTED]
To: Peter Schuberth [EMAIL PROTECTED]; dbi-users [EMAIL PROTECTED]
Sent: Monday, March 03, 2003 7:07 PM
Subject: Re: DBD::CSV


 Peter Schuberth wrote:

 On the same hardware system
 the same select for the same table
 
 Please check your version of SQL::Statement which is the module that
 determines the speed and capabilities of DBD::CSV.  Put these two lines
 at the top of the scripts:

 use SQL::Statement;
 print $SQL::Statement::VERSION;

 It sounds to me like you may have the XS version of the module on one
 install and the pure perl version on the other.  You can read about the
 difference between the two at

http://www.vpservices.com/jeff/programs/sql-compare.html

 If you have a moderately large data set and speed is an issue, then the
 XS version will be better, but MySQL, PostgreSQL, or SQLite will be even
 faster. DBD::CSV is really meant for relatively small data sets and
 relatively uncomplicated queries.

 Another issue:  if you are trying to access the same data file from both
 windows and linux, you'll need to specifically set the csv_eol for the
 table.  It doesn't really  matter which you choose, either \015\012 or
 \012.  If the script sets it to either one of those and the data
 matches, then the same file can be accessed from both windows and linux
 without changing the file or the script.  That sort of multi-platform
 accessibility can be achieved with the other DBDs, but probably not as
 simply.

 You may be running into some bugs I am working on relative to
 parentheses in WHERE clauses.  I'd appreciate it if you could send me
 any sample queries that don't work for you.

 --
 Jeff





Re: DBD::CSV

2003-03-04 Thread Jeff Zucker
Peter Schuberth wrote:

Hello Jeff,

I believe the problem of the low speed is due to using SQL-Statement 1.005. I
think I should change to an XS version like 0.1021.
Maybe it will help, though as others have pointed out, MySQL and 
PostgreSQL and SQLite may be better for you.  All of them work fine on 
Windows.  Also, how about DBD::ODBC with its CSV driver, which of course 
works on Windows?

Also I'm still confused about your initial time difference between Linux 
and Windows.  Are you saying that you were using the same version of 
SQL::Statement and the same version of DBI and the same version of perl 
on both platforms?  I really can't believe that is the case -- the speed 
might be different between platforms, but I can't imagine how the query 
processing could vary unless the SQL::Statement versions are different 
or you are making a mistake with the eol.

But from where can I get such a package? I loaded it initially from
ActivePerl. But the package they provide there for the 6xx build (5.6.1) has
only SQL-Statement 1.005. The 5xx build uses SQL-Statement 0.1021,
but I cannot install this package on my system.
Hmm, you're right.  Anyone on the list care to compile SQL::Statement 
0.1021 for ActiveState perl 5.8 and put it in a public repository?

--
Jeff


Re: DBD::CSV

2003-03-04 Thread Jeff Zucker
Tim Bunce wrote:

On Mon, Mar 03, 2003 at 10:07:29AM -0800, Jeff Zucker wrote:

use SQL::Statement;
  print $SQL::Statement::VERSION;
Or run this command

	perl -MSQL::Statement=

Hmm, what am I missing?  That doesn't work for me with SQL::Statement. 
It also doesn't work for me with DBD::ODBC, DBD::ADO, DBD::CSV although 
it does work with DBI and with DBD::Pg.  What do I need to do in the 
module to get this to work?

--
Jeff


Re: DBD::CSV

2003-03-04 Thread Scott R. Godin
Jeff Zucker wrote:

 Tim Bunce wrote:
 
On Mon, Mar 03, 2003 at 10:07:29AM -0800, Jeff Zucker wrote:

use SQL::Statement;
   print $SQL::Statement::VERSION;


Or run this command

perl -MSQL::Statement=

 
 Hmm, what am I missing?  That doesn't work for me with SQL::Statement.
  It also doesn't work for me with DBD::ODBC, DBD::ADO, DBD::CSV although
 it does work with DBI and with DBD::Pg.  What do I need to do in the
 module to get this to work?
 

I also occasionally use the syntax

perl -MModule::Name -le 'print Module::Name->VERSION'

2:35pm {20} pcp02404936pcs:/home/webdragon$ perl -MCGI -MSQL::Statement -le 
'print CGI->VERSION; print SQL::Statement->VERSION'
2.91
1.005

usually this works very well.


Re: DBD::CSV

2003-03-04 Thread Tim Bunce
On Tue, Mar 04, 2003 at 09:32:27AM -0800, Jeff Zucker wrote:
 Tim Bunce wrote:
 
 On Mon, Mar 03, 2003 at 10:07:29AM -0800, Jeff Zucker wrote:
 
 use SQL::Statement;
   print $SQL::Statement::VERSION;
 
 
 Or run this command
 
  perl -MSQL::Statement=
 
 
 Hmm, what am I missing?  That doesn't work for me with SQL::Statement. 
 It also doesn't work for me with DBD::ODBC, DBD::ADO, DBD::CSV although 
 it does work with DBI and with DBD::Pg.  What do I need to do in the 
 module to get this to work?

From memory... it needs a non-lexical $VERSION and (I think) to be
a subclass of Exporter.

Tim.


RE: DBD::CSV

2003-03-03 Thread Dan Muey


 Hello,
 
 I have changed from 
 MySQL DB to CSV File (DBD::CSV).
 And also from Linux to Windows

There's the first two mistakes! :)

 
 A)
 But now I discovered few problems:
 On the same hardware system
 the same select for the same table
 
 1) Linux the select takes 0.4 seconds
 
 2) Windows the select takes 2.1 seconds
 
 Using Apache 1.3.27 and under Windows ActivePerl.
 
 Is it a problem of Apache, Perl or the OS?

THE OS !! WINDOWS is an expensive resource cow, designed to stop working every 2.5 
years
and make you think you can't live without it. 



 
 B)
 I have a select like this:
 ...WHERE ((field1 like ?) OR (field2 like ?) OR (field3 like 
 ?)) AND ((field1 like ?) OR (field2 like ?) OR (field3 like 
 ?)) AND ((field1 like ?) OR (field2 like ?) OR (field3 like ?))
 
 values1,values1,values1,values2,values2,values2,values3,values
 3,values3
 
 with Linux this is not a problem, but under Windows I get no 
 data and also no error.
 
 I removed the field3 only then I got data with Windows, but 
 the result looks like WHERE ((field1 like ?) OR (field2 like 
 ?)) has been used for the search.

On this one are the tables *exactly* the same?

 
 C)
 There is a difference in the order of a table when I use it 
 under Linux or Windows. Under Windows the order is fine but 
 under Linux it looks mixed up.

That depends again on if the tables are *exactly* the same and *exactly* what query 
you use and *exactly* how the software is installed on the OS, and how you assemble 
the results. Windows and unix behave differently. If you can hack it I'd highly 
recommend sticking to a flavor of unix.
Too many variables to figure out why they are coming back different.

 
 Thanks for any help in advance!
 
 Peter
 
 
 
 
 


RE: DBD::CSV

2003-03-03 Thread Jenda Krynicky
From: Dan Muey [EMAIL PROTECTED]
  A)
  But now I discovered few problems:
  On the same hardware system
  the same select for the same table
  
  1) Linux the select takes 0.4 seconds
  
  2) Windows the select takes 2.1 seconds
  
  Using Apache 1.3.27 and under Windows ActivePerl.
  
  Is it a problem of Apache, Perl or the OS?
 
 THE OS !! WINDOWS is an expensive resource cow, designed to stop
 working every 2.5 years and make you think you can't live without it.

Please stop flaming based on your personal tastes. It is true that  
Windows uses more resources than Linux in the default setup, but the 
difference Dan sees has nothing to do with this.

It's caused by DBD::CSV. If he installed MySQL and kept using 
DBD::mysql he'd see roughly the same results.

Dan, you may want to take a look at DBD::SQLite if you do not want to 
install mysql.

Jenda
P.S.: Dan, could you try to run the script using DBD::CSV, using the 
same data, under Linux and let us know how long it takes?
= [EMAIL PROTECTED] === http://Jenda.Krynicky.cz =
When it comes to wine, women and song, wizards are allowed 
to get drunk and croon as much as they like.
-- Terry Pratchett in Sourcery



RE: DBD::CSV

2003-03-03 Thread Dan Muey

 
 Hello,
 thanks for your answers.
 
 I would also like to stick to MySQL, but I have to change it, 
 since there is no DB-Server available.

Use a remote one, your ISP's perhaps. Anywho...

 
 Dan, yes the data of the tables are *exactly* the same!!!

How about posting the first few lines from the Linux one and from the Windows one?
When FTPing the CSV files up, did you do it as binary?

( I've found once when using CSV that they look identical if you FTP them in ASCII and 
binary, but if I didn't do them in binary everything was screwy )

 
 The query I use for C) is like:
 
 where (field1=?) and (field2=?) order by field3
 again the same data and the same query!
 For showing the data I just use a while loop and display 
 record by record. Again using the same script here and there.
 
 Regards
 
 - Original Message -
 From: Dan Muey [EMAIL PROTECTED]
 To: Peter Schuberth [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Sent: Monday, March 03, 2003 3:35 PM
 Subject: RE: DBD::CSV
 
 
 
 
  Hello,
 
  I have changed from
  MySQL DB to CSV File (DBD::CSV).
  And also from Linux to Windows
 
 There's the first two mistakes! :)
 
 
  A)
  But now I discovered few problems:
  On the same hardware system
  the same select for the same table
 
  1) Linux the select takes 0.4 seconds
 
  2) Windows the select takes 2.1 seconds
 
  Using Apache 1.3.27 and under Windows ActivePerl.
 
  Is it a problem of Apache, Perl or the OS?
 
 THE OS !! WINDOWS is an expensive resource cow, designed 
 to stop working every 2.5 years and make you think you can't 
 live without it.
 
 
 
 
  B)
  I have a select like this:
  ...WHERE ((field1 like ?) OR (field2 like ?) OR (field3 like
  ?)) AND ((field1 like ?) OR (field2 like ?) OR (field3 like
  ?)) AND ((field1 like ?) OR (field2 like ?) OR (field3 like ?))
 
  values1,values1,values1,values2,values2,values2,values3,values
  3,values3
 
  with Linux this is not a problem, but under Windows I get 
 no data and 
  also no error.
 
  I removed the field3 only then I got data with Windows, but the 
  result looks like WHERE ((field1 like ?) OR (field2 like
  ?)) has been used for the search.
 
 On this one are the tables *exactly* the same?
 
 
  C)
  There is a difference in the order of a table when I use it 
 under Linux 
  or Windows. Under Windows the order is fine but under Linux 
 it looks 
  mixed up.
 
 That depends again on if the tables are *exactly* the same 
 and *exactly* what query you use and *exactly* how the 
 software is installed on the OS, and how you assemble the 
 results. Windows and unix behave differently. If you can hack 
 it I'd highly recommend sticking to a flavor of unix. Too 
 many variables to figure out why they are coming back different.
 
 
  Thanks for any help in advance!
 
  Peter
 
 
 
 
 
 
 
 
 


Re: DBD::CSV

2003-03-03 Thread Jeff Zucker
Peter Schuberth wrote:

On the same hardware system
the same select for the same table
Please check your version of SQL::Statement which is the module that 
determines the speed and capabilities of DBD::CSV.  Put these two lines 
at the top of the scripts:

   use SQL::Statement;
   print $SQL::Statement::VERSION;
It sounds to me like you may have the XS version of the module on one 
install and the pure perl version on the other.  You can read about the 
difference between the two at

  http://www.vpservices.com/jeff/programs/sql-compare.html

If you have a moderately large data set and speed is an issue, then the 
XS version will be better, but MySQL, PostgreSQL, or SQLite will be even 
faster. DBD::CSV is really meant for relatively small data sets and 
relatively uncomplicated queries.

Another issue:  if you are trying to access the same data file from both 
windows and linux, you'll need to specifically set the csv_eol for the 
table.  It doesn't really  matter which you choose, either \015\012 or 
\012.  If the script sets it to either one of those and the data 
matches, then the same file can be accessed from both windows and linux 
without changing the file or the script.  That sort of multi-platform 
accessibility can be achieved with the other DBDs, but probably not as 
simply.
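For illustration, one way to pin the line ending is the csv_eol handle
attribute; a minimal sketch, assuming the data files live in the current
directory:

  use DBI;

  my $dbh = DBI->connect('DBI:CSV:f_dir=.', undef, undef, { RaiseError => 1 });

  # Fix the end-of-line to CRLF; with the same setting in the script on
  # both platforms, the same data file is readable from Linux and Windows.
  $dbh->{csv_eol} = "\015\012";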

You may be running into some bugs I am working on relative to 
parentheses in WHERE clauses.  I'd appreciate it if you could send me 
any sample queries that don't work for you.

--
Jeff


Re: DBD::CSV--Execution ERROR: Couldn't find column names!.

2003-03-03 Thread Jeff Zucker
Snethen, Jeff wrote:

I've worked with the DBI some, but I'm now starting to experiment with DBD::CSV.  I'm 
trying to read a table without
column headers, letting CSV create the column names for me.  If I understand the 
documentation correctly, an empty array
reference should cause the driver to name the columns for me.
Thanks for pointing out this bug.  I have fixed it and will be uploading 
a corrected version of the module (1.006) shortly.
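For anyone hitting the same thing, a minimal sketch of the usage the poster
describes, per his reading of the docs; the table and file names are
hypothetical:

  use DBI;

  my $dbh = DBI->connect('DBI:CSV:f_dir=.', undef, undef, { RaiseError => 1 });
  $dbh->{csv_tables}{headerless} = {
      file      => 'headerless.csv',  # a file with no header row
      col_names => [],                # empty list: let the driver name the columns
  };
  my $rows = $dbh->selectall_arrayref('SELECT * FROM headerless');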

--
Jeff


Re: DBD::CSV

2003-03-03 Thread Tim Bunce
On Mon, Mar 03, 2003 at 10:07:29AM -0800, Jeff Zucker wrote:
 Peter Schuberth wrote:
 
 On the same hardware system
 the same select for the same table
 
 Please check your version of SQL::Statement which is the module that 
 determines the speed and capabilities of DBD::CSV.  Put these two lines 
 at the top of the scripts:
 
use SQL::Statement;
print $SQL::Statement::VERSION;

Or run this command

perl -MSQL::Statement=

and it'll say something like

SQL::Statement  required--this is only version X.YY

Tim.


Re: DBD::CSV

2003-03-03 Thread Peter J. Holzer
On 2003-03-03 13:35:36 +0100, Peter Schuberth wrote:
 I have changed from 
 MySQL DB to CSV File (DBD::CSV).
 And also from Linux to Windows

Neither seems like a smart move to me.

Changing two things at once definitely isn't.

 A)
 But now I discovered few problems:
 On the same hardware system
 the same select for the same table
 
 1) Linux the select takes 0.4 seconds
 
 2) Windows the select takes 2.1 seconds
 
 Using Apache 1.3.27 and under Windows ActivePerl.
 
 Is it a problem of Apache, Perl or the OS?

or of DBD::CSV? Depending on the size of the tables and the complexity
of the query, DBD::CSV can be a lot slower than MySQL. 

 B)
 I have a select like this:
 ...WHERE ((field1 like ?) OR (field2 like ?) OR (field3 like ?)) AND
 ((field1 like ?) OR (field2 like ?) OR (field3 like ?)) AND
 ((field1 like ?) OR (field2 like ?) OR (field3 like ?))
 
 values1,values1,values1,values2,values2,values2,values3,values3,values3
 
 with Linux this is not a problem, but under Windows I get no data and also no error.
 
 I removed the field3 only then I got data with Windows, but the
 result looks like WHERE ((field1 like ?) OR (field2 like ?)) has been
 used for the search.

Have you tried the same query with DBD::CSV on Linux? Or with DBD::MySQL
on Windows?


 C)
 There is a difference in the order of a table when I use it under Linux
 or Windows. Under Windows the order is fine but under Linux it looks
 mixed up.

If you want the result of a select to be ordered you have to use an
ORDER BY clause. Tables have NO defined order in SQL.

hp

-- 
   _  | Peter J. Holzer  | Our universe would be sadly
|_|_) | Sysadmin WSR / LUGA  | insignificant, had it not new problems
| |   | [EMAIL PROTECTED]| in store for every generation.
__/   | http://www.hjp.at/   |  -- Seneca, naturales quaestiones




Re: DBD::CSV

2003-03-03 Thread Peter Schuberth
Hello,
thanks for your answers.

I would also like to stick to MySQL, but I have to change it, since there is
no DB-Server available.

Dan, yes the data of the tables are *exactly* the same!!!

The query I use for C) is like:

where (field1=?) and (field2=?) order by field3
again the same data and the same query!
For showing the data I just use a while loop and display record by record.
Again using the same script here and there.

Regards

- Original Message -
From: Dan Muey [EMAIL PROTECTED]
To: Peter Schuberth [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Monday, March 03, 2003 3:35 PM
Subject: RE: DBD::CSV




 Hello,

 I have changed from
 MySQL DB to CSV File (DBD::CSV).
 And also from Linux to Windows

There's the first two mistakes! :)


 A)
 But now I discovered few problems:
 On the same hardware system
 the same select for the same table

 1) Linux the select takes 0.4 seconds

 2) Windows the select takes 2.1 seconds

 Using Apache 1.3.27 and under Windows ActivePerl.

 Is it a problem of Apache, Perl or the OS?

THE OS !! WINDOWS is an expensive resource cow, designed to stop working
every 2.5 years
and make you think you can't live without it.




 B)
 I have a select like this:
 ...WHERE ((field1 like ?) OR (field2 like ?) OR (field3 like
 ?)) AND ((field1 like ?) OR (field2 like ?) OR (field3 like
 ?)) AND ((field1 like ?) OR (field2 like ?) OR (field3 like ?))

 values1,values1,values1,values2,values2,values2,values3,values
 3,values3

 with Linux this is not a problem, but under Windows I get no
 data and also no error.

 I removed the field3 only then I got data with Windows, but
 the result looks like WHERE ((field1 like ?) OR (field2 like
 ?)) has been used for the search.

On this one are the tables *exactly* the same?


 C)
 There is a difference in the order of a table when I use it
 under Linux or Windows. Under Windows the order is fine but
 under Linux it looks mixed up.

That depends again on if the tables are *exactly* the same and *exactly*
what query you use and *exactly* how the software is installed on the OS,
and how you assemble the results. Windows and unix behave differently. If
you can hack it I'd highly recommend sticking to a flavor of unix.
Too many variables to figure out why they are coming back different.


 Thanks for any help in advance!

 Peter










Re: DBD::CSV

2003-03-03 Thread Peter Schuberth
Hello Jenda,
I have used the same data under Linux and Windows, and also the same scripts.
The times I gave in A) are only for the select on the CSV table, not for the
script. But comparing the select query with the execution of the rest of the
script, the select is taking most of the time. Under Windows: script total 2.5
sec, search 2.0 sec.

I have now also changed to Apache 2.0.44; the query is then 0.15 seconds
faster under Windows.

Peter


- Original Message -
From: Jenda Krynicky [EMAIL PROTECTED]
To: Peter Schuberth [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Monday, March 03, 2003 4:22 PM
Subject: RE: DBD::CSV


 From: Dan Muey [EMAIL PROTECTED]
   A)
   But now I discovered few problems:
   On the same hardware system
   the same select for the same table
  
   1) Linux the select takes 0.4 seconds
  
   2) Windows the select takes 2.1 seconds
  
   Using Apache 1.3.27 and under Windows ActivePerl.
  
   Is it a problem of Apache, Perl or the OS?
 
  THE OS !! WINDOWS is an expensive resource cow, designed to stop
  working every 2.5 years and make you think you can't live without it.

 Please stop flaming based on your personal tastes. It is true that
  Windows uses more resources than Linux in the default setup, but the
 difference Dan sees has nothing to do with this.

  It's caused by DBD::CSV. If he installed MySQL and kept using
 DBD::mysql he'd see roughly the same results.

 Dan, you may want to take a look at DBD::SQLite if you do not want to
 install mysql.

 Jenda
 P.S.: Dan, could you try to run the script using DBD::CSV, using the
  same data, under Linux and let us know how long it takes?
 = [EMAIL PROTECTED] === http://Jenda.Krynicky.cz =
 When it comes to wine, women and song, wizards are allowed
 to get drunk and croon as much as they like.
 -- Terry Pratchett in Sourcery





Re: DBD::CSV and multiple table queries

2002-10-26 Thread Scott McGee
Jeff Zucker [EMAIL PROTECTED] wrote in message
news:3DB98694.8080701@vpservices.com...
 Yes, there is a bug in SQL::Statement related to table names combined with
a column name (e.g. megaliths.id).

 I will be releasing a fix later today.

Thank you oh great and wise one! I appreciate the help and the speed of it
too!

Scott





Re: DBD::CSV and multiple table queries

2002-10-25 Thread nhendler
It appears from the error message that it cannot locate the CSV file
at all.  Are you sure:

$dbd = "DBI:CSV:f_dir=magalith_db";

Is correct?  You've spelled 'megalith' as 'magalith'.  Just have to
check.  I assume you've got whatever dir on the same level as your
script and that you've got a file named megaliths.csv in that dir.


 
Nick Hendler
[EMAIL PROTECTED]

_

From:Scott McGee [EMAIL PROTECTED]
Date:Friday, October 25, 2002 
Time:9:01:08 AM
Subject: DBD::CSV and multiple table queries

I am trying to get comfortable with DBI, and due to stupid circumstances
beyond my control at the moment, have to try to make do without an actual
database server. This means using something like DBD::CSV. I am just trying
to go through the O'Reilly Perl DBI book, so don't need anything too complex.

I have found, however, that I cannot do a SELECT over multiple tables. The
documentation seems to indicate that I should be able to.

I have checked all the software, and it is up to date:
  perl v5.8.0
  DBI v1.30
  DBD::CSV v0.2002
  SQL::Parser v1.004
  SQL::Statement v1.004
  (I did find an older version of CSV, but got rid of it)
I am running under Solaris 8 on a SPARCstation 5. (Yes, there are DB servers
for it, but I don't have disk space!)

The error I get is:

Execution Error: No such column 'MEGALITHS.ID' called from query2 at 23.

The program (query2) looks like this:

-- begin included file query2 --
#!/usr/bin/perl -w

use DBI;    # Load the DBI module

$dbd = "DBI:CSV:f_dir=magalith_db";
$login = undef;
$passwd = undef;

### Perform the connection using the Oracle driver
my $dbh = DBI->connect( $dbd, $login, $passwd );
$dbh->{'RaiseError'} = 1; # turn on die() on error
$dbh->{'PrintError'} = 1; # turn on warn() on error

### create the megaliths table
my $query = "SELECT megaliths.id, megaliths.name, site_types.site_type
 FROM megaliths, site_types
 WHERE megaliths.site_type_id = site_types.id
 ";
$sth = $dbh->prepare($query)
|| die "Dude! This sucker failed! : " . $dbh->errstr() . "\n";
$sth->execute();
#while (my $row = $sth->fetchrow_hashref) {
#print("Found result row: id = ", $row->{'id'},
#      ", name = ", $row->{'name'}, ", site_type = ",
#      $row->{'site_type'}, "\n");
#}

while ( my @row = $sth->fetchrow_array() ) {
    print "row: @row\n";
}


### Disconnect from the database
$dbh->disconnect;

exit;
-- end included file query2 --

If anyone can help, I would greatly appreciate it.

Scott




Re: DBD::CSV and multiple table queries

2002-10-25 Thread Scott McGee
[EMAIL PROTECTED] wrote in message
news:1842904766.20021025094128@comcast.net...
 It appears from the error message that it cannot locate the CSV file
 at all.  Are you sure:

 $dbd = "DBI:CSV:f_dir=magalith_db";

 Is correct?  You've spelled 'megalith' as 'magalith'.  Just have to
 check.  I assume you've got whatever dir on the same level as your
 script and that you've got a file named megaliths.csv in that dir.

No, the above is a typo. It should read megalith not magalith. In truth, I
had it as undef in the script I ran and had an environment variable
(DBI_DSN) set to handle it. I used the same script successfully by just
changing the SELECT statement so I am positive that the problem is in the
modules being used somewhere.

Scott





Re: DBD::CSV and multiple table queries

2002-10-25 Thread Jeff Zucker
Scott McGee wrote:


  SQL::Parser v1.004
  SQL::Statement v1.004


 ...


my $query = SELECT megaliths.id, megaliths.name, site_types.site_type



Yes, there is a bug in SQL::Statement related to table names combined with a column name (e.g. megaliths.id).


I will be releasing a fix later today.


--
Jeff






Re: DBD::CSV and multiple table queries

2002-10-25 Thread John

On Friday, October 25, 2002 11:01 PM [GMT + 10:00 AEST],
Scott McGee [EMAIL PROTECTED] wrote:

 I am trying to get comfortable with DBI, and due to stupid
 circumstances beyond my control at the moment, have to try to make do
 without an actual database server. This means using something like
 DBD::CSV. I am just trying to go through the O'Reilly Perl DBI book,
 so don't need anything too complex.

As an alternative to DBD::CSV, have you considered DBD::SQLite for your
situation? SQLite is public domain, and DBD::SQLite includes the entire
thing in the distribution.
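For anyone following along, a minimal sketch of the switch, assuming
DBD::SQLite is installed; the database file name is hypothetical:

  use DBI;

  # SQLite keeps everything in a single file and needs no server process.
  my $dbh = DBI->connect('dbi:SQLite:dbname=megaliths.db', '', '',
                         { RaiseError => 1 });
  my $rows = $dbh->selectall_arrayref(
      'SELECT megaliths.id, megaliths.name, site_types.site_type
         FROM megaliths, site_types
        WHERE megaliths.site_type_id = site_types.id'
  );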


--
Regards
   John McMahon  (mailto:jmcm@bendigo.net.au)








Re: DBD::CSV test suite dumps core in SQL::Parser

2002-07-08 Thread Bart Lateur

On Sun, 7 Jul 2002 21:50:44 -0500 (CDT), Randy Kobes wrote:

What seems to be happening is that we use the same index files
that CPAN.pm uses to associate module names to distributions. And
in these Jochen's version of DBD::File shows up, as can be seen
from the CPAN.pm shell. What I think has to happen is Jochen has
to transfer, via the PAUSE web site, ownership of DBD::File to
you. If he's already done that, I'm not sure what's going on, but
then it's also a problem within the CPAN.pm shell ...

I'd blame the immediate symptoms, as seen with CPAN.pm, on
http://www.cpan.org/modules/02packages.details.txt.gz, which, AFAIK,
counts as the reference database for CPAN.pm, and similar scripts. The
entry for DBD::File there still reads:

  DBD::File 0.1023  J/JW/JWIED/DBD-CSV-0.1030.tar.gz

which is not quite right, is it?

-- 
Bart.



Re: DBD::CSV test suite dumps core in SQL::Parser

2002-07-08 Thread Jeff Zucker

Bart Lateur wrote:

 On Sun, 7 Jul 2002 21:50:44 -0500 (CDT), Randy Kobes wrote:
 
 
What seems to be happening is that we use the same index files
that CPAN.pm uses to associate module names to distributions. And
in these Jochen's version of DBD::File shows up, as can be seen

from the CPAN.pm shell. What I think has to happen is Jochen has
 
to transfer, via the PAUSE web site, ownership of DBD::File to
you. If he's already done that, I'm not sure what's going on, but
then it's also a problem within the CPAN.pm shell ...

 
 I'd blame the immediate symptoms, as seen with CPAN.pm, on
 http://www.cpan.org/modules/02packages.details.txt.gz, which, AFAIK,
 counts as the reference database for CPAN.pm, and similar scripts. The
 entry for DBD::File there still reads:
 
   DBD::File   0.1023  J/JW/JWIED/DBD-CSV-0.1030.tar.gz
 
 which is not quite right, is it?


Right, that isn't right.  And the same for Bundle::DBD::CSV.  Jochen and 
I are getting it straightened out now.

--
Jeff


 
 






Re: DBD::CSV test suite dumps core in SQL::Parser

2002-07-07 Thread Jeff Zucker

Steve Piner wrote:

 I'm using ... DBD::CSV 0.1030,


Please upgrade to the current version of DBD::CSV -- 0.2002.  If you 
continue to have problems, let me know.

-- 
Jeff






Re: DBD::CSV test suite dumps core in SQL::Parser

2002-07-07 Thread Jeff Zucker

Steve Piner wrote:

 
 Jochen Wiedmann wrote:
 
I suggest you try the Perl-only version of SQL::Statement.

Good luck,

Jochen

 
 I'm using SQL-Statement 1.004, the version maintained by Jeff Zucker.
 That is the perl-only version, isn't it? 


Yes.

Please read

   http://www.vpservices.com/jeff/programs/sql-compare.html

Especially the section that says:

If you are trying to install version 0.1x of DBD::CSV with version 1.x 
of SQL::Statement, you may get test failure messages on blobs, 
chopblanks, and ak-dbd tests. If those are the only errors, you can 
safely ignore them and do a make install. To eliminate the problem 
completely, upgrade to version 0.2x of DBD::CSV.

-- 
Jeff







Re: DBD::CSV test suite dumps core in SQL::Parser

2002-07-07 Thread Steve Piner



Jeff Zucker wrote:
 
 Steve Piner wrote:
 
  I'm using ... DBD::CSV 0.1030,
 
 Please upgrade to the current version of DBD::CSV -- 0.2002.  If you
 continue to have problems, let me know.

D'oh! I thought I had grabbed the latest version - obviously not! What
happened was I searched kobesearch.cpan.org for DBD::File, which only
returned Jochen's CPAN directory. Basically there was no indication that
Jochen was no longer maintaining it, or that there were two versions.

Upgrading to 0.2002 solved it. Thank you both for your time.


Steve

-- 
Steve Piner
Web Applications Developer
Marketview Limited
http://www.marketview.co.nz



Re: DBD::CSV test suite dumps core in SQL::Parser

2002-07-07 Thread Jeff Zucker

[with cc to Randy Kobes]

Steve Piner wrote:

 
 Jeff Zucker wrote:
 
Steve Piner wrote:


I'm using ... DBD::CSV 0.1030,

Please upgrade to the current version of DBD::CSV -- 0.2002.  If you
continue to have problems, let me know.

 
 D'oh! I thought I had grabbed the latest version - obviously not! What
 happened was I searched kobesearch.cpan.org for DBD::File, which only
 returned Jochen's CPAN directory. Basically there was no indication that
 Jochen was no longer maintaining it, or that there were two versions.
 
 Upgrading to 0.2002 solved it. Thank you both for your time.


Hmm, I'm the registered maintainer of DBD::File, and 
http://search.cpan.org/ correctly points to the latest version in my 
directory (and has for at least four months), so I'm not quite sure 
what's going on with kobesearch.cpan.org.  It finds both Jochen's and my 
version of DBD::CSV (which is also somewhat odd since my version 
supersedes Jochen's) but it only finds Jochen's version of DBD::File, 
whereas search.cpan points to my directory for both DBDs.  Randy, with 
many thanks for your great site, could you let me know what's up with 
this or whether it's some setting or registration I should look into?

-- 
Jeff




Re: DBD::CSV test suite dumps core in SQL::Parser

2002-07-07 Thread Randy Kobes

On 7 Jul 2002, Jeff Zucker wrote:

 [with cc to Randy Kobes]

 Steve Piner wrote:

 
  Jeff Zucker wrote:
 
 Steve Piner wrote:
 
 I'm using ... DBD::CSV 0.1030,
 
 Please upgrade to the current version of DBD::CSV -- 0.2002.  If you
 continue to have problems, let me know.
 
 
  D'oh! I thought I had grabbed the latest version - obviously not! What
  happened was I searched kobesearch.cpan.org for DBD::File, which only
  returned Jochen's CPAN directory. Basically there was no indication that
  Jochen was no longer maintaining it, or that there were two versions.
 
  Upgrading to 0.2002 solved it. Thank you both for your time.


 Hmm I'm the registered maintainer of the DBD::File and
 http://search.cpan.org/ correctly points to the latest version in my
 directory (and has for at least four months) so I'm not quite sure
 what's going on with kobesearch.cpan.org.  It finds both Jochen's and my
 version of DBD::CSV (which is also somewhat odd since my version
  supersedes Jochen's) but it only finds Jochen's version of DBD::File,
 whereas search.cpan points to my directory for both DBDs.  Randy, with
 many thanks for your great site, could you let me know what's up with
  this or whether it's some setting or registration I should look into?

What seems to be happening is that we use the same index files
that CPAN.pm uses to associate module names to distributions. And
in these Jochen's version of DBD::File shows up, as can be seen
from the CPAN.pm shell. What I think has to happen is Jochen has
to transfer, via the PAUSE web site, ownership of DBD::File to
you. If he's already done that, I'm not sure what's going on, but
then it's also a problem within the CPAN.pm shell ...

best regards,
randy




Re: DBD::CSV from Windows to Unix?

2002-06-26 Thread Felix Geerinckx

on Wed, 26 Jun 2002 12:26:17 GMT, [EMAIL PROTECTED] (Jonathan)
wrote: 

 How do I go about getting hold of the Unix version?

http://search.cpan.org/search?mode=module&query=DBD%3A%3ACSV

-- 
felix



Re: DBD::CSV join

2002-05-31 Thread Jeff Zucker

Helmut Wittneben wrote:


 my($query) = "SELECT * FROM a,b where a.id=b.bid";

Sorry, there appears to be a problem with SQL::Statement and WHERE 
clause specification of joined columns.  Try renaming b.bid to b.id and 
using explicit join (e.g. a NATURAL JOIN b) instead of a where clause. 
I hope to have a release out soon fixing this.
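Applied to the query above, the workaround looks roughly like this (a
sketch; it assumes the bid column in b's CSV file has been renamed to id so
the join column names match):

  # With b.bid renamed to b.id, the explicit join avoids the WHERE-clause bug:
  my($query) = "SELECT * FROM a NATURAL JOIN b";
  my $sth = $dbh->prepare($query);
  $sth->execute();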

-- 
Jeff




Re: DBD::CSV table joins ...

2002-05-17 Thread Sumit_Babu


Hello Jeff,

 I can't help unless I see the code that caused the error.  Please show
 me the prepare statement (including the full SQL) for the line that
 caused the error.

 Thanks for your time. Sorry, I was using a previous version of
SQL::Statement. Actually I had installed all the required modules through
the CPAN, and the latest there is still pointing to an older version of
SQL::Statement. Therefore I manually installed the latest version (1.004)
of SQL::Statement.

The latest module versions I am working with are:
perl   = 5.6.0
DBI= 1.21
DBD::CSV   = 0.2002
DBD::File  = 0.2001
SQL::Statement = 1.004

 After this I am not even able to get output for simple SQL statements. The
following script gives me no results. Please correct me if anything is
wrong:

=[Start Script]
==

use DBI;

DBI->trace(1, "DBI_trace.log");
my $dbh = DBI->connect(qq{DBI:CSV:csv_sep_char=\\|}, { PrintError => 0,
RaiseError => 0 });
$dbh->{'csv_tables'}->{'tab1'} = { 'file' => 'info.csv' };
$dbh->{'csv_tables'}->{'tab2'} = { 'file' => 'info2.csv' };

sub executeSQL( $ ) {
    my $sqlStat = shift(@_);
    my $sth = $dbh->prepare($sqlStat) or die("Unable to prepare SQL: "
        . $dbh->errstr());
    $sth->execute() or die("Unable to execute SQL: " . $dbh->errstr());
    my $count;
    while ( my $Fields = $sth->fetchrow_arrayref() ) {
        $count++;
        if ( $DBI::err ) {
            die("Unable to fetch row: " . $dbh->errstr());
        }
        print "$count = ", join(", ", @{$Fields}), "\n";
    }
    print "\n\n";
}

executeSQL("SELECT * FROM tab1");

#executeSQL("SELECT tab1.* FROM tab1, tab2 WHERE tab1.col1 = tab2.col2");

=[End Script]
=

And these are the two csv files I am using:

=[Start info.csv]
==

col1|col2|col3|
test1|test12|test13|
test2|test22|test23|
test3|test32|test33|
test4|test42|test43|
test5|test52|test53|
test6|test62|test63|
test7|test72|test73|
test8|test82|test83|

=[End info.csv]


=[Start info2.csv]
==

col1|col2|col3|
test1|test12|test13|
test2|test22|test23|
test3|test32|test33|
test4|test42|test43|
test5|test52|test53|
test6|test62|test63|
test7|test72|test73|
test8|test82|test83|

=[End info2.csv]
===

And the following is the DBI trace output for trace level 1:

=[Start DBI Trace]
==

DBI 1.21-nothread dispatch trace level set to 1
    -> DBI->connect(DBI:CSV:csv_sep_char=\|, HASH(0x80ea6a0), )
    -> DBI->install_driver(CSV) for linux perl=5.006 pid=28729 ruid=711
euid=711
       install_driver: DBD::CSV version 0.2002 loaded from
/usr/local/perl/5.6.0/site_lib/DBD/CSV.pm
    <- install_driver= DBI::dr=HASH(0x8172408)
    <- default_user= ( HASH(0x80ea6a0)2keys undef ) [2 items] at DBI.pm
line 468
    <- STORE('f_dir' '.' ...)= 1 at File.pm line 94
    <- STORE('csv_sep_char' '|' ...)= 1 at File.pm line 105
    <- FETCH= undef at CSV.pm line 77
    <- STORE('csv_tables' HASH(0x834e068) ...)= 1 at CSV.pm line 77
    <- connect= DBI::db=HASH(0x81466e4) at DBI.pm line 471
    <- STORE('PrintError' 1 ...)= 1 at DBI.pm line 513
    <- STORE('AutoCommit' 1 ...)= 1 at DBI.pm line 513
    <- connect= DBI::db=HASH(0x81466e4)
    <- FETCH= HASH(0x834e068)0keys ('csv_tables' from cache) at CSV.pl line
6
    <- FETCH= HASH(0x834e068)1keys ('csv_tables' from cache) at CSV.pl line
7
    <- FETCH= 'DBD::CSV::st' ('ImplementorClass' from cache) at File.pm
line 161
3   <- FETCH= ( '' ) [1 items] at CSV.pm line 91
3   <- FETCH= ( 1 ) [1 items] at CSV.pm line 91
3   <- FETCH= undef at CSV.pm line 96
2   <- csv_cache_sql_parser_object= SQL::Parser=HASH(0x81d744c) at File.pm
line 170
    <- STORE('f_stmt' DBD::CSV::Statement=HASH(0x8142ef4) ...)= 1 at
File.pm line 182
    <- STORE('f_params' ARRAY(0x837f6a0) ...)= 1 at File.pm line 183
    <- STORE('NUM_OF_PARAMS' 0 ...)= 1 at File.pm line 184
    <- prepare('SELECT * FROM tab1' CODE)= DBI::st=HASH(0x834e038) at
CSV.pl line 11
    <- FETCH= HASH(0x834e068)2keys ('csv_tables' from cache) at CSV.pm line
119
    <- FETCH= undef at CSV.pm line 124
    <- FETCH= undef at CSV.pm line 126
    <- FETCH= undef at CSV.pm line 129
2   <- FETCH= '|' ('csv_sep_char' from cache) at DBI.pm line 1015
    <- EXISTS= 1 at CSV.pm line 130
    <- FETCH= '|' ('csv_sep_char' from cache) at CSV.pm line 130
2   <- FETCH= undef at DBI.pm line 1015
    <- EXISTS= '' at CSV.pm line 133
2   <- FETCH= undef at DBI.pm line 1015
    <- EXISTS= '' at CSV.pm line 137
    <- FETCH= '.' ('f_dir' from cache) at Unix.pm line 57
2  
