File opening problem

2008-05-12 Thread Tatiana Lloret Iglesias
Hi all!

I'm running the following dummy program, which just opens a file, and I get an
error (the die message):

#!/usr/bin/perl


if (  @ARGV[0] eq '' )
{
print "\nUSAGE:\n\t genenames.pl genes.txt \n\n";
exit;
}

my $file = $ARGV[0];
open(FICH,"$file") or die "cannot open $file";

I've tried passing the input parameter ARGV[0] with /, with \, and with a relative
path ... but nothing works.

any idea?

Thanks a lot!
T
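For the archives: two small changes usually resolve this kind of failure. A hedged sketch of the corrected opening follows; the temp file here is only a stand-in for the real genes.txt so the snippet is self-contained.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Stand-in for a real genes.txt so this sketch runs on its own.
my ($tmp_fh, $tmp_name) = tempfile();
print {$tmp_fh} "BRCA1\n";
close $tmp_fh;

# $ARGV[0] is the scalar element; @ARGV[0] is a one-element slice and
# draws a warning under "use warnings".
my $file = @ARGV ? $ARGV[0] : $tmp_name;

# Include $! in the die message: it tells you *why* the open failed,
# e.g. "No such file or directory" when the path is wrong.
open my $fich, '<', $file or die "cannot open $file: $!";
my $first_line = <$fich>;
close $fich;
```

The same $! trick works with the original two-argument open(FICH,"$file"); the message it produces usually reveals a wrong working directory or a mistyped path.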


Re: Beginner Reg.Expression Question

2007-10-10 Thread Tatiana Lloret Iglesias
Thanks a lot !!
T



On 10/10/07, Rob Dixon <[EMAIL PROTECTED]> wrote:
>
> Tatiana Lloret Iglesias wrote:
> >
> > Hi all!
> >
> > What regular expression do I need to convert  Version: 1.2.3   to
> Version:
> > 1.2.4  ?
> >
> > I.e.  my pattern is Version: number.number.number and from that i need
> > Version: number.number.number+1
> > After the : i can have a space or not...
>
> Hello Tatiana
>
> This will do what you need:
>
> $string =~ s/(Version:\s+\d+\.\d+\.)(\d+)/$1.($2+1)/e;
>
> Rob
>

Thanks
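Rob's substitution, runnable end to end. The only change from his reply is \s* instead of \s+, since Tatiana noted the colon may or may not be followed by a space:

```perl
use strict;
use warnings;

# /e evaluates the replacement as Perl code: $1 keeps "Version: 1.2."
# and $2 + 1 bumps the final component.
my $string = 'Version: 1.2.3';
$string =~ s/(Version:\s*\d+\.\d+\.)(\d+)/$1 . ($2 + 1)/e;
print "$string\n";   # Version: 1.2.4

# \s* also covers the no-space form:
my $tight = 'Version:1.2.3';
$tight =~ s/(Version:\s*\d+\.\d+\.)(\d+)/$1 . ($2 + 1)/e;
print "$tight\n";    # Version:1.2.4
```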


Beginner Reg.Expression Question

2007-10-10 Thread Tatiana Lloret Iglesias
Hi all!

What regular expression do I need to convert  Version: 1.2.3   to Version:
1.2.4  ?

I.e. my pattern is Version: number.number.number, and from that I need
Version: number.number.number+1
After the colon I can have a space or not...

Thanks!
T


Accessing to a HTML Frame

2007-07-25 Thread Tatiana Lloret Iglesias

Hi all! I need to save locally the lower frame of:

http://www.wipo.int/patentscopedb/en/wads.jsp?IA=CA2006001738&LANGUAGE=EN&ID=id0005063181&VOL=70&DOC=011820&WO=07/045101&WEEK=17/2007&TYPE=A2&DOC_TYPE=PAMPH&PAGE=HTML

When I browse this page from my Perl script I only get the .html where the
frames are defined:

[frameset HTML not preserved in the archive]

Is it possible to access a particular frame?



Thanks!

T


Re: exit code

2007-06-29 Thread Tatiana Lloret Iglesias

And why, on Windows, do I get exit value 777?

On 6/29/07, Martin Barth <[EMAIL PROTECTED]> wrote:


Exit codes are stored in 1 byte, so you can only exit with
2^8 == 256 different values.
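In other words: on Unix the status reported to the parent is the low byte of the value passed to exit, so exit(777) shows up as 9. Windows keeps a 32-bit exit code, which is likely why the full 777 survives there. The arithmetic:

```perl
use strict;
use warnings;

# 777 = 3 * 256 + 9, and only the low 8 bits survive on Unix.
my $reported = 777 % 256;
print "exit(777) is reported as exit($reported)\n";   # exit(9)

# Portable fix: keep exit codes in the 0..255 range.
```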


--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
http://learn.perl.org/





Re: exit code

2007-06-29 Thread Tatiana Lloret Iglesias

Sorry, but I don't understand what you mean ...



On 6/29/07, Paul Johnson <[EMAIL PROTECTED]> wrote:


On Fri, Jun 29, 2007 at 11:08:19AM +0200, Tatiana Lloret Iglesias wrote:

> Hi all,
>
> when I execute my perl script in my local machine and I get to a
controlled
> error situation i've got as exit value 777
>
> if(!$test){
>$failed_times =$failed_times +1;
>
>if($failed_times ==3)
>{
> exit(777);
>}
>sleep 15;
>   }# if $test
>
> but when i execute the same script with the same controlled error
situation
> I've got as exit value 9 which seems a generic error code
>
> Any idea ?

Exit statuses are stored in eight bits.

> Thanks!

You're welcome.

--
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net



exit code

2007-06-29 Thread Tatiana Lloret Iglesias

Hi all,

When I execute my Perl script on my local machine and reach a controlled
error situation, I get 777 as the exit value:

if (!$test) {
    $failed_times = $failed_times + 1;

    if ($failed_times == 3) {
        exit(777);
    }
    sleep 15;
}    # if $test

But when I execute the same script with the same controlled error situation,
I get 9 as the exit value, which seems like a generic error code.

Any idea ?

Thanks!
T


delete from java temporary file generated from PERL

2007-06-29 Thread Tatiana Lloret Iglesias

Hi all!


From my Java application I call a PERL process which generates some files.


When I come back to the Java part and process these files, I try to delete
them; some of them are deleted correctly but others are not...

Any idea?

I've checked that I call close every time I create a file in Perl.

Hint: I've noticed that big files are deleted correctly, perhaps because I
spend more time processing them before trying to delete them ...

Thanks!!
T


Re: call system command

2007-05-14 Thread Tatiana Lloret Iglesias

Thanks a lot!

Another related question: can the system command also be used on Linux?
Regards
T


On 5/14/07, Xavier Noria <[EMAIL PROTECTED]> wrote:


On May 14, 2007, at 3:44 PM, Tatiana Lloret Iglesias wrote:
> Hi all,
>
> I have to execute this command from perl:
>
> my $status = system("d:\\blast\\bin\\blastall -p blastn -i $file -d
> $patDB
> -o $workdir\\blast_$blast_file_id.txt");
>
>
> but the problem is that $workdir contains spaces  how can I
> make it
> work?

Break that into a list of arguments:

  system("d:\\blast\\bin\\blastall", "-p", "blastn", "-i", $file, ...);

-- fxn
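A self-contained check of why the list form helps (and yes, system works the same way on Linux; only the program path changes). The shell never sees the arguments, so a directory name with spaces arrives at the child program as one @ARGV entry. Here perl itself ($^X) stands in for blastall, and the path is hypothetical:

```perl
use strict;
use warnings;

# For the real call the shape would be:
#   system('d:\\blast\\bin\\blastall', '-p', 'blastn', '-i', $file,
#          '-d', $patDB, '-o', "$workdir\\blast_$blast_file_id.txt");
my $workdir = 'd:\\my work\\blast output';    # hypothetical path with spaces

# The child exits 0 only if it received exactly one argument.
my $status = system($^X, '-e', 'exit(@ARGV == 1 ? 0 : 1)', $workdir);
print $status == 0
    ? "one argument, spaces preserved\n"
    : "argument was split\n";
```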











call system command

2007-05-14 Thread Tatiana Lloret Iglesias

Hi all,

I have to execute this command from perl:

my $status = system("d:\\blast\\bin\\blastall -p blastn -i $file -d $patDB
-o $workdir\\blast_$blast_file_id.txt");


but the problem is that $workdir contains spaces  how can I make it
work?

Thanks!
Regards
T


Re: scape . character

2007-04-27 Thread Tatiana Lloret Iglesias

thanks a lot!!

And how can I locate the version string itself in the file?

bla bla bla
bla bla bla 1.2.0  bla bla
bla bla bla

my pattern is number.number.number

Thanks!
T


On 4/27/07, Jeff Pang <[EMAIL PROTECTED]> wrote:


2007/4/27, Tatiana Lloret Iglesias <[EMAIL PROTECTED]>:
> Hi all!
>
> how can I create a regular expression to find a software version pattern
in
> a file (e.g.  1.2.0) and return the last number , i.e. 0

Hi,

What's the form of your version string?
Given the case of $version_str = '1.2.0',you may write:

my ($lastnum) = $version_str =~ /.*\.(\d+)/;

Good luck.

--
Chinese Practical Mod_perl book online
http://home.arcor.de/jeffpang/mod_perl/
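Putting both halves of the thread together, the subject line's point (the dots must be escaped) plus locating the version inside surrounding text, using the sample lines from the question:

```perl
use strict;
use warnings;

# The dots must be escaped as \. -- an unescaped . matches any
# character, so a string like 1x2y0 would match too.
my $text = "bla bla bla\nbla bla bla 1.2.0  bla bla\nbla bla bla\n";

my ($version, $last) = $text =~ /(\d+\.\d+\.(\d+))/;
print "version: $version, last number: $last\n";   # version: 1.2.0, last number: 0
```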



scape . character

2007-04-27 Thread Tatiana Lloret Iglesias

Hi all!

How can I create a regular expression to find a software version pattern in
a file (e.g. 1.2.0) and return the last number, i.e. 0?

Thanks!
T


Re: URL too long

2007-03-01 Thread Tatiana Lloret Iglesias

Thanks a lot ... unfortunately the URL doesn't work from a browser either.

So finally I'm splitting the long query manually and joining the results at
the end...

Thanks!
T


On 3/1/07, Tom Phoenix <[EMAIL PROTECTED]> wrote:


On 3/1/07, Tatiana Lloret Iglesias <[EMAIL PROTECTED]> wrote:

> The query is so long because it's created "manually" in a java program
and
> passed to perl script as input parameter

Do you know whether or not the URL is valid? That is, if you use the
java-generated URL in your favorite web browser, does everything work
according to plan?

> What I dont understand is that, if the website i'm connecting has a form
GET
> how can I change this?

You can't change what the webserver will accept. (Actually, it might
accept POST, in addition to GET. But it's somewhat impolite, at least,
not to use the site as the designer expected.)

In sum: If you can use the URL from a web browser, you can use it from
Perl. If you can't get the URL to work from a browser, we can't help
you to get it to work from Perl.

Good luck with it!

--Tom Phoenix
Stonehenge Perl Training



Re: URL too long

2007-03-01 Thread Tatiana Lloret Iglesias

The query is so long because it's created "manually" in a Java program and
passed to the Perl script as an input parameter.

What I don't understand is: if the website I'm connecting to has a GET form,
how can I change that? I mean, if it's a form in my own application I can
change the HTML code without problems, but on an external website ... if the
form uses GET, how can I use POST?


Thanks!

T


On 2/28/07, Tom Smith <[EMAIL PROTECTED]> wrote:


Tatiana Lloret Iglesias wrote:
>>
>> but the problem is that i cannot modify form html code because it's a
>> public external website 
If the QUERY_STRING is too long for the URL, your only option is to use
POST.

GET will allow for "canned" queries--that is, you can create a link that
will execute the query on the server.

POST, however, will require a form and its fields to be populated. When
you click Submit, the data will be sent to the server and the query
executed. Just try it and see if it works.

I am curious to know... How did you derive the QUERY_STRING for your
URL? If it's too long to be used in the URL, then it's not likely that
you came by the string at the web site you're trying to use it at.






Re: URL too long

2007-02-28 Thread Tatiana Lloret Iglesias


But the problem is that I cannot modify the form's HTML code, because it's a
public external website ...



Thanks!
t








Re: URL too long

2007-02-28 Thread Tatiana Lloret Iglesias

Yes! That's the problem: the GET method doesn't allow very long URLs ...
How can I use POST from Perl code? Do you have an example?

Thanks!
T


On 2/28/07, Beginner <[EMAIL PROTECTED]> wrote:


I think GET requests are restricted to 256 characters; try using POST
instead.

HTH,
Dp.


On 28 Feb 2007 at 18:57, Tatiana Lloret Iglesias wrote:

> Hi all,
>
> i have to browse a very long URL from my PERL script and it fails:
>
>
http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=0&p=1&f=S&l=50&Query=%28%22NRF3%22+AND+%22Colorectal+Cancer%22%29+OR+%28%22NRF3%22+AND+%22Colorectal+Carcinoma%22%29+OR+%28%22NRF3%22+AND+%22Colorectal+Tumors%22%29+OR+%28%22NRF3%22+AND+%22Neoplasms%2C+Colorectal%22%29+OR+%28%22NRF3%22+AND+%22colorectal+cancer%22%29+OR+%28nfe2l3+AND+%22Colorectal+Cancer%22%29+OR+%28nfe2l3+AND+%22Colorectal+Carcinoma%22%29+OR+%28nfe2l3+AND+%22Colorectal+Tumors%22%29+OR+%28nfe2l3+AND+%22Neoplasms%2C+Colorectal%22%29+OR+%28nfe2l3+AND+%22colorectal+cancer%22%29+&d=PG01
>
> Do you know how to make it works??
>
> Thanks a lot!
> T
>



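To answer the "do you have any example?" question: with libwww-perl, a POST puts the form fields in the request body, so URL length limits no longer apply. A sketch built with HTTP::Request::Common; the field names are guesses patterned on the GET URL in this thread, so take the real ones from the site's own form:

```perl
use strict;
use warnings;
use HTTP::Request::Common qw(POST);

# Build the request; the long query travels in the body, not the URL.
my $req = POST 'http://appft1.uspto.gov/netacgi/nph-Parser',
    [
        Sect1 => 'PTO2',
        Sect2 => 'HITOFF',
        Query => '("NRF3" AND "Colorectal Cancer") OR ("NRF3" AND "Colorectal Carcinoma")',
    ];

print $req->method, "\n";    # POST
print $req->content, "\n";   # urlencoded body: Sect1=PTO2&Sect2=HITOFF&Query=...

# To actually send it:
#   use LWP::UserAgent;
#   my $res = LWP::UserAgent->new->request($req);
#   die $res->status_line unless $res->is_success;
```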





URL too long

2007-02-28 Thread Tatiana Lloret Iglesias

Hi all,

I have to fetch a very long URL from my Perl script, and it fails:

http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=0&p=1&f=S&l=50&Query=%28%22NRF3%22+AND+%22Colorectal+Cancer%22%29+OR+%28%22NRF3%22+AND+%22Colorectal+Carcinoma%22%29+OR+%28%22NRF3%22+AND+%22Colorectal+Tumors%22%29+OR+%28%22NRF3%22+AND+%22Neoplasms%2C+Colorectal%22%29+OR+%28%22NRF3%22+AND+%22colorectal+cancer%22%29+OR+%28nfe2l3+AND+%22Colorectal+Cancer%22%29+OR+%28nfe2l3+AND+%22Colorectal+Carcinoma%22%29+OR+%28nfe2l3+AND+%22Colorectal+Tumors%22%29+OR+%28nfe2l3+AND+%22Neoplasms%2C+Colorectal%22%29+OR+%28nfe2l3+AND+%22colorectal+cancer%22%29+&d=PG01

Do you know how to make it work?

Thanks a lot!
T


url encode

2007-01-31 Thread Tatiana Lloret Iglesias

Hi,
I'm trying to GET this URL but I get an error:

http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=%22clear+dead%22&OS=%22clear+dead%22&RS=%22clear+dead%22

my $response =  $ua->get($url);

How should I encode it? I've tried using the uri_escape function, but it
fails...

Thanks
T
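A likely cause: uri_escape applied to the whole URL also escapes the separators (:, /, ?, &, =) and produces an address no server recognizes. Escaping only the values, or letting the URI module compose the query string, avoids that:

```perl
use strict;
use warnings;
use URI;

# query_form escapes each key and value for you; the quotes and the
# space in "clear dead" come out as %22 and +.
my $uri = URI->new('http://patft.uspto.gov/netacgi/nph-Parser');
$uri->query_form(
    Sect1 => 'PTO2',
    Sect2 => 'HITOFF',
    S1    => '"clear dead"',
);
print "$uri\n";
```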


Perl X file

2007-01-30 Thread Tatiana Lloret Iglesias

Hi all,

I've got a Perl script which does the following:

1. Get the HTML page with the SEARCH FORM.
2. Fill in the QUERY content and press the Search button.
3. Several pages of results are obtained.
4. Process the first page, press the NEXT PAGE button, and continue processing
the following pages.

Problem:

When the query contains only one word, it works. With more than one word (e.g.
"colorectal cancer") it fails at step 4, although the first page obtained,
with the first 50 results, seems correct:


Html code for 1 word (I press the NextList2 button):

[HTML not preserved in the archive]
Html code for 2 words (I press the NextList2 button):

[HTML not preserved in the archive]
I get the following error:

Error #2406LINEA=== Error: Process terminated abnormally. Document may be
truncated.

But if I do it manually from the website, it works.

Any idea? I'm completely lost!
Thanks!
T.


Re: perl and network connection

2007-01-24 Thread Tatiana Lloret Iglesias

Thanks a lot ... I asked because I thought that perhaps in Perl there was
some kind of QUEUE object to manage a big number of connections, or some kind
of "homemade" solution with SLEEP or something like that ...
T


On 1/24/07, Jay Savage <[EMAIL PROTECTED]> wrote:


On 1/24/07, Tatiana Lloret Iglesias <[EMAIL PROTECTED]> wrote:
> Anyone can help me with this issue?

No.

> the situation is:
> 1. network is working perfectly
> 2. i execute my perl script which download a big number of web pages
into
> local html
> 3. the network starts to disconnect intermintently ...
>
> In my script i use Mechanize to submit forms, get web content and save
it in
> local files .. so nothing strange 
>
> Thanks!
> T.

This is a system issue, not a Perl issue, which is why you didn't get
a response the first time.  The problem here is that your system
can't, apparently, handle the type or volume of traffic your program
is generating, possibly thousands of requests per second. If you are
having network problems, consult the documentation for your system's
networking stack. You may be running into some sort of hard limit in
your system configuration, or you may just be running out of hardware
resources. Most likely you've hit the maximum number of TCP/IP
connections you can open simultaneously, or run out of memory for the
stack or one of the various networking buffers. It's also possible
that the volume of incoming traffic is interfering with the reception
of syn/ack packets from hosts you're trying to contact.

Googling "network tuning", "traffic shaping", "packet prioritization" and
"QoS" should turn up some useful information.

HTH,

-- jay
--
This email and attachment(s): [  ] blogable; [ x ] ask first; [  ]
private and confidential

daggerquill [at] gmail [dot] com
http://www.tuaw.com  http://www.downloadsquad.com  http://www.engatiki.org

values of β will give rise to dom!
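If system tuning is out of reach, the "homemade" throttle Tatiana mentions is legitimate too: pause between requests so connections are not opened faster than the stack can recycle them. A minimal sketch; the URLs and the commented-out get are stand-ins for the real Mechanize calls:

```perl
use strict;
use warnings;
use Time::HiRes qw(sleep);

my $delay   = 0.1;    # seconds between requests; tune for your network
my @urls    = map { "http://example.com/page$_" } 1 .. 5;   # hypothetical
my $fetched = 0;

foreach my $url (@urls) {
    # $browser->get($url);   # the real download would happen here
    $fetched++;
    sleep $delay;            # give the TCP stack time to release sockets
}
print "processed $fetched URLs with ${delay}s spacing\n";
```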



Re: perl and network connection

2007-01-24 Thread Tatiana Lloret Iglesias

Can anyone help me with this issue?
The situation is:
1. The network is working perfectly.
2. I execute my Perl script, which downloads a big number of web pages into
local HTML files.
3. The network starts to disconnect intermittently ...

In my script I use Mechanize to submit forms, get web content and save it in
local files ... so nothing strange ...

Thanks!
T.

http://cuandomemiras.blogspot.com





On 1/9/07, Tatiana Lloret Iglesias <[EMAIL PROTECTED]> wrote:


Hi all,
I'm developing a Perl script which uses WWW::Mechanize and performs a big
number of submit_form calls.
I've noticed that because of the execution of this script the network
connection fails intermittently ...
How can I avoid this?
Thanks
T.



Re: files download and performance

2007-01-22 Thread Tatiana Lloret Iglesias

I've executed the script using the IP number instead of the domain name, but
it takes more or less the same time in these two steps:

$browser2->get($url);
$content = $browser2->content();

Honestly I don't know if these 9 seconds can be improved ...
Regards,
T



On 1/22/07, Igor Sutton <[EMAIL PROTECTED]> wrote:


Hi Tatiana,

2007/1/22, Tatiana Lloret Iglesias <[EMAIL PROTECTED]>:
> does it work for this kind of urls?
> http://patft.uspto.gov/netacgi/nph-Parser?
> thanks!
> T.

For these kind of problem, you always have URI:


#!env perl

use strict;
use warnings;

use URI;

my $url = "http://patft.uspto.gov/netacgi/nph-Parser?";
my $uri = URI->new($url);
print $uri->host, "\n";


HTH!

--
Igor Sutton Lopes <[EMAIL PROTECTED]>



Re: files download and performance

2007-01-22 Thread Tatiana Lloret Iglesias

does it work for this kind of urls?
http://patft.uspto.gov/netacgi/nph-Parser?
thanks!
T.


On 1/22/07, Igor Sutton <[EMAIL PROTECTED]> wrote:


Hi Tatiana,

2007/1/22, Tatiana Lloret Iglesias <[EMAIL PROTECTED]>:
> i've realized that for each link, i spend most of the time in the
following
> perl script
>
> foreach my $url (@lines){ -- I READ MY 1-ROW URL FILE
> $contador=2;
> $test=0;
> while(!$test){
>
> $browser2->get($url);
> $content = $browser2->content();
>
> --IN THESE 2 STEPS I SPEND 6 SECONDS for a 86 kb html, Is it ok? Can i
> perform these 2 steps faster?
>

Are you using the domain name or the ip address in link (e.g.
http://www.google.com/ or http://1.2.3.4)? If you are using the first,
perl will first contact your DNS server or cache, and then connect and
retrieve the contents you want. If you are not using a DNS cache, you
can build it using Net::DNS and Memoize for caching.

Check the example:


#!env perl

use strict;
use warnings;

use Benchmark::Timer;
use Carp;
use Memoize;
use Net::DNS;

# used by get_ip_from_hostname
my $resolver = Net::DNS::Resolver->new;

sub get_ip_from_hostname {
    my ($hostname) = @_;
    my $query = $resolver->search($hostname);
    if ($query) {
        foreach my $rr ( $query->answer ) {
            next unless $rr->type eq 'A';
            return $rr->address;
        }
    }
    else {
        croak "Query failed: ", $resolver->errorstring;
    }
}

my $t = Benchmark::Timer->new();

for ( 1 .. 1000 ) {
    $t->start('get_ip_from_hostname without memoize');
    my $ip = get_ip_from_hostname("www.google.com");
    $t->stop('get_ip_from_hostname without memoize');
}
print $t->report();

$t->reset();

memoize('get_ip_from_hostname');
for ( 1 .. 1000 ) {
    $t->start('get_ip_from_hostname memoize');
    my $ip = get_ip_from_hostname("www.google.com");
    $t->stop('get_ip_from_hostname memoize');
}

print $t->report();



HTH!

--
Igor Sutton Lopes <[EMAIL PROTECTED]>



Re: files download and performance

2007-01-22 Thread Tatiana Lloret Iglesias

I've realized that for each link I spend most of the time in the following
part of the Perl script:

foreach my $url (@lines) {    # I read my 1-row URL file
    $contador = 2;
    $test = 0;
    while (!$test) {

        $browser2->get($url);
        $content = $browser2->content();

In these 2 steps I spend 6 seconds for an 86 KB HTML page. Is that OK? Can I
perform these 2 steps faster?

Thanks!

T



On 1/22/07, Igor Sutton <[EMAIL PROTECTED]> wrote:


Hi Tatiana,

2007/1/22, Tatiana Lloret Iglesias <[EMAIL PROTECTED]>:
> I do it from the Java and not from the Perl because I need to perform an
> insert into the database each time I process a link and also I have to
> inform via RSS about the progress of the global download process (23.343 out
> of 70.000 files have been downloaded) ...
>

You can use the awesome XML::RSS module to create RSS. Now, for
database insert you have the excellent DBI module.

I bet Java is your problem there (no, I'm not initiating a language war
here).

--
Igor Sutton Lopes <[EMAIL PROTECTED]>



Re: files download and performance

2007-01-22 Thread Tatiana Lloret Iglesias

I do it from the Java and not from the Perl because I need to perform an
insert into the database each time I process a link, and I also have to
report via RSS on the progress of the global download process (23.343 out
of 70.000 files have been downloaded) ...




On 1/22/07, Igor Sutton <[EMAIL PROTECTED]> wrote:


Hi Tatiana,

2007/1/22, Tatiana Lloret Iglesias <[EMAIL PROTECTED]>:
> Regarding the performance problem:
>
> The schema of my application is:
>
> 1. I execute perl script which performs a search in a public database.
It
> gets total results in *several pages*. Pressing "Next Page" button (with
> perl script) i get a list of all the links related to my query (70.000more
> or less) I write down all these links in a unique text file.
>
> 2. From the Java i read each of the 70.000 links and i create a new file
> containing the current i'm reading. Then i call a perl script which uses
> this link as input parameter. It browses it and get website content
saving
> it in a local html file.
>
> I'm having performance problems   i've tried to don't create a
single
> file containing url for each of the 70.000 links and pass it
automatically
> to perl script as input parameter but it fails...
>
> I've heard about LWP module? do you recomend me to use it??
> Have you ever done something similar to this? can you give me some
advice?
> Thanks
>
> T.
>

I can't see the point you are using Java for that. If your code with
WWW::Mechanize is already working, why don't you do everything in
Perl?

I would do this:

1. Read all links you want to open, it is ok to store it on a single
file IMHO. You can use Tie::File to append lines, it can make your
life easier.

open my $links_file, ">", $filename or die $!;

while (my $link = my_mechanize_get_link()) {
    print {$links_file} $link, "\n";
}

close $links_file or warn $!;

2. After that, you can read from that tied array, from beginning to
end and use LWP::UserAgent or LWP::Simple to retrieve the data you
want to store:

use LWP::Simple;

sub filename_from_url {
    # your code here: logic to compose the filename
    # from the url.
}

open my $input, "<", $filename or die $!;
while (my $url = <$input>) {
    chomp($url);
    my $content = get($url);
    if ($content) {
        open my $output, ">", filename_from_url($url) or die $!;
        print {$output} $content;
        close $output or warn $!;
    }
}


HTH!
--
Igor Sutton Lopes <[EMAIL PROTECTED]>



Re: files download and performance

2007-01-22 Thread Tatiana Lloret Iglesias

Regarding the performance problem:

The schema of my application is:

1. I execute a Perl script which performs a search in a public database. It
gets the total results in *several pages*. Pressing the "Next Page" button
(with the Perl script) I get a list of all the links related to my query
(70.000 more or less). I write all these links into a single text file.

2. From the Java I read each of the 70.000 links and create a new file
containing the current link I'm reading. Then I call a Perl script which uses
this link as an input parameter. It browses it and gets the website content,
saving it in a local HTML file.

I'm having performance problems ... I've tried to avoid creating a separate
file with the URL for each of the 70.000 links and to pass it directly to the
Perl script as an input parameter, but it fails...

I've heard about the LWP module; do you recommend I use it?
Have you ever done something similar to this? Can you give me some advice?
Thanks

T.





On 1/15/07, Rob Dixon <[EMAIL PROTECTED]> wrote:


Tatiana Lloret Iglesias wrote:
> Hi all,
> from my Java application I invoke a Perl script which downloads a huge
> quantity of files from an external database using the WWW::Mechanize
> library, and my problem is that I have big CPU performance problems ... can
> you give me any advice to avoid this?

Hi Tatiana

Do you really mean "CPU performance problems"? If you're downloading a lot
from the Internet then your problem is more likely to be limited by the speed
of the network, in which case you need to look at what you're downloading and
see if you can get the information you require without moving as much
extraneous data. Can you anticipate the URLs you need to access instead of
following links from other pages, for instance?

If your process is indeed CPU-bound then it is most likely an error in your
coding, which we would need to see before we could help.

HTH,

Rob



files download and performance

2007-01-15 Thread Tatiana Lloret Iglesias

Hi all,
From my Java application I invoke a Perl script which downloads a huge
quantity of files from an external database using the WWW::Mechanize library,
and my problem is that I have big CPU performance problems ... can you give me
any advice to avoid this?
Thanks!
T.


perl and network connection

2007-01-09 Thread Tatiana Lloret Iglesias

Hi all,
I'm developing a Perl script which uses WWW::Mechanize and performs a big
number of submit_form calls.
I've noticed that because of the execution of this script the network
connection fails intermittently ...
How can I avoid this?
Thanks
T.


Re: Fwd: click_button gives error on a existing button

2006-12-29 Thread Tatiana Lloret Iglesias

Hi again,
yes, I have this line:
my $browser2 = WWW::Mechanize->new(autocheck => 1);
where I initialize the browser2 variable.

The strange thing is that in the following while loop, when I print the
browser2 content, everything seems to be all right: the "Next 25 records"
button exists in the HTML, but it still fails when passing from page 2 to
page 3.

while ($test eq 0) {
    #$content = $browser2->submit_form("DBSELECT");
    print "justo antes de fallar";      # "just before failing"
    $browser2->click_button( value => "Next 25 records" );
    print "justo despues de 25 next";   # "just after Next 25"
    $content = $browser2->content();

    @html = split("\n", $content);
    foreach my $linea (@html) {
        if ($linea =~ /Results of searching/) {
            $test = 1;
        }
        print $linea;
    }
    if (!$test) {
        print "error in getting main query page, will try again in 30 seconds\n";
        sleep 30;
    }
}

You can see the whole script in
http://tlloreti.googlepages.com/pct3.pl

Thanks!
T.

On 12/28/06, Owen <[EMAIL PROTECTED]> wrote:


On Thu, 28 Dec 2006 09:53:05 +0100
"Tatiana Lloret Iglesias" <[EMAIL PROTECTED]> wrote:

> Thanks Owen,
> but in this case, inspecting the html page i see that the button hasn't
got
> NAME attribute but only VALUE that's why i've used value command.
> The strange thing is that to pass from page  1 to page 2 it works but to
> pass from page 2 to page 3 it fails ... although the code is the same
and
> the button exists!

>  On 12/28/06, Owen Cook <[EMAIL PROTECTED]> wrote:
> >
> > On Thu, Dec 28, 2006 at 08:46:25AM +0100, Tatiana Lloret Iglesias
wrote:
> > > Hi!
> > > I'm executing this perl and I get an error in this line_
> > > $browser2->click_button( value => "Next 25 records");
> > >
> > > but i dont understand why because Next 25 records button exists!!
> > > Can you help me with this please?
> > > You can download the script from:
> > > http://tlloreti.googlepages.com/pct3.pl

> >
> > Normally buttons have names.
> >
> > $browser2->click_button( value => "Next 25 records",
> > name  => "Whatever");
> >
> > Then you have a "Whatever" button, and when clicked, sends "Next 25
> > records" back to the server



You need to track down the origin of
$browser2->click_button( value =>"Next 25 records");

At the start of your program there should be some use statements, like
'use strict;'

What other use statements are there?

Then you will need to look a little further down the program and maybe
find something like

$browser2 = new->browser(some args perhaps);

So have you got anything like that?




Owen



Fwd: click_button gives error on a existing button

2006-12-28 Thread Tatiana Lloret Iglesias

Thanks Owen,
but in this case, inspecting the HTML page I see that the button hasn't got a
NAME attribute, only a VALUE; that's why I've used the value parameter.
The strange thing is that passing from page 1 to page 2 works, but passing
from page 2 to page 3 fails ... although the code is the same and the button
exists!
A real X-file ...

T.


On 12/28/06, Owen Cook <[EMAIL PROTECTED]> wrote:


On Thu, Dec 28, 2006 at 08:46:25AM +0100, Tatiana Lloret Iglesias wrote:
> Hi!
> I'm executing this perl and I get an error in this line_
> $browser2->click_button( value => "Next 25 records");
>
> but i dont understand why because Next 25 records button exists!!
> Can you help me with this please?
> You can download the script from:
> http://tlloreti.googlepages.com/pct3.pl



Normally buttons have names.

$browser2->click_button( value => "Next 25 records",
name  => "Whatever");

Then you have a "Whatever" button, and when clicked, sends "Next 25
records" back to the server



Owen



click_button gives error on a existing button

2006-12-27 Thread Tatiana Lloret Iglesias

Hi!
I'm executing this Perl script and I get an error on this line:
$browser2->click_button( value => "Next 25 records");

but I don't understand why, because the "Next 25 records" button exists!
Can you help me with this, please?
You can download the script from:
http://tlloreti.googlepages.com/pct3.pl

Thanks a lot !!
T.

