Re: [SLUG] Removing Toolbars etc.. from browser window

2003-11-03 Thread education
Hi Brad:

Yes you are right. My mistake. I have not tried your suggestion yet. I
will post the outcome soon.

Cheers.

> [EMAIL PROTECTED] wrote:
>
>>Hi Brad:
>>
>>[Louis] below.
>>
>>
>>
>>>[EMAIL PROTECTED] wrote:
>>>
>>>
>>>
Hi :

I've tried this in the perl script with no luck. Here is what I did.
 I have this after reading replies I got.

$javascriptwin = "
function openWindow(u, n, w, h)
{
 var win = window.open(u, n, \"toolbar=no,location=no,\"+
 \"status=no,menubar=no,scrollbars=yes\"+
  \"width=\"+w+\",height=\"+h);
}
";

In the hash, the entry that has the <a> tag is pasted below:

%hashdata = (
...
'key_data' => [">>>onclick=\"openWindow(\"http://www.domain.com/file.html\",\"win\";,
 400, 400); return false;\">?<\/a>"],

);

This does not open a new window at all. I also do not even see the
 contents of the file as well.

Have I missed something here?




>>>rather than using the onclick event try:
>>>
>>>'key_data' => [">>openWindow(\"http://www.domain.com/file.html\",\"win\";, 400,
>>>400);\">?<\/a>"]
>>>
>>>
>>
>>[Louis] Sure but I also want no toolbars, no menu bars, but only
>> scrollbars.
>>
>>So I do this :
>>
>>$option = "toolbar=no,location=no,status=no,menubar=no,scrollbars=yes";
>>
>>and add $option at the tail of your code snippet.
>>
>>So I guess I add an extra variable in your code snippet. I will try
>> again with the above snippet and see what happens.
>>
>>
>
> No, you are calling your openWindow() function, which already has those
> details in it, does it not?
>
> cheers,
> Brad



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Removing Toolbars etc.. from browser window

2003-11-03 Thread education
Hi Brad:

[Louis] below.

> [EMAIL PROTECTED] wrote:
>
>>Hi :
>>
>>I've tried this in the perl script with no luck. Here is what I did. I
>> have this after reading replies I got.
>>
>>$javascriptwin = "
>>function openWindow(u, n, w, h)
>>{
>>  var win = window.open(u, n, \"toolbar=no,location=no,\"+
>>  \"status=no,menubar=no,scrollbars=yes\"+
>>   \"width=\"+w+\",height=\"+h);
>>}
>>";
>>
>>In the hash, the entry that has the <a> tag is pasted below:
>>
>>%hashdata = (
>>...
>>'key_data' => [">onclick=\"openWindow(\"http://www.domain.com/file.html\",\"win\";, 400,
>> 400); return false;\">?<\/a>"],
>>
>>);
>>
>>This does not open a new window at all. I also do not even see the
>> contents of the file as well.
>>
>>Have I missed something here?
>>
>>
> rather than using the onclick event try:
>
> 'key_data' => [" openWindow(\"http://www.domain.com/file.html\",\"win\";, 400,
> 400);\">?<\/a>"]

[Louis] Sure but I also want no toolbars, no menu bars, but only scrollbars.

So I do this :

$option = "toolbar=no,location=no,status=no,menubar=no,scrollbars=yes";

and add $option at the tail of your code snippet.

So I guess I add an extra variable in your code snippet. I will try again
with the above snippet and see what happens.

Cheers




Re: [SLUG] Paths Dilemma on RH Linux Server [Typo]

2003-11-03 Thread education
Just a minor typo in attempt 2; the code should be

push(@INC, $cgibin);

Thanks

> Hi Sluggers:
>
> I have been looking at an efficient way to move away from having to
> constantly have paths specified in scripts repeatedly. So here is the
> full story in great detail so that I can hopefully get an informed
> decision. This is not a problem but merely looking for an efficient way
> to do this.
>
> When I write scripts (Perl) I normally specify the full paths to the
> base root directory (base html dir) and the path to cgi-bin specified.
> With these two paths I can access other scripts interfaces after pushing
> the path in @INC and use 'require', and also process text files etc 
>
> Now I was normally specifying these two paths in all scripts at the top
> in the past. But recently have been thinking about moving away from
> this. So here are two things I did but is not quite to what I want to
> achieve but close.
>
> Attempt 1.
> ===
> I basically have a text file where I have these two full paths
> specified. The file is located in the same directory as all the scripts.
> If there are scripts in another directory, then in that directory I have
> a sim link to the file that holds the actual path data. Now in all
> scripts, I just do this before anything else to set the paths:
>
> my @paths;
>
> open (FILE, "path.dat") || die "error blah blah ...";
> while (<FILE>) {
> chomp;
> push @paths, $_;
> }
> close (FILE);
>
> As I know the order of the paths in the file, I have these two below as
> globals:
>
> my $homedir = $paths[0];
> my $cgibin = $paths[1];
>
> This work greats with browser called scripts, but I hit a problem with
> scripts that runs via cron. The problem with cron scripts is that it
> cannot open the "path.dat" file despite that it's in the same directory
> as the cron script itself. I think where cron executes (don't know
> where) it's not in reference with the same directory where the script
> and file is located, so cannot see it.
>
> So I moved away from this solution and went to attempt 2.
>
> Attempt 2.
> ===
> I create a 'path.pl' script where I specify $homedir, $cgibin, and other
> other common used stuff by all scripts via a routine called
> "set_paths()". Then with "Exporter::Lite", I export these two variables
> and the others.
>
> In other scripts the problem is that I have to tell it from this
> 'path.pl' script is. So I am forced to have one path specified. i.e I
> have to define
>
> $cgibin = "/path_to_where_path.pl_is_located";
>
> Then I do this
>
> push($cgibin, @INC);
> require 'path.pl';
>
> &set_paths();
>
> This now has all common stuff accessible. But I still have to specify
> one hardcoded path in all scripts which is no way as good as attempt
> one. With attempt 2 cron scripts also works fine.
>
> I have been looking at a way to have @INC permanently have the path to
> where this 'path.pl' is located so that all I need to do is just call
> "&set_paths()". I read about this from this url
>
> http://perl.apache.org/docs/1.0/guide/porting.html#_INC_and_mod_perl
>
> However I'm not sure what configuration file they are talking about and
> also what is this startup.pl script located. It also appears that only
> the server administrator can do this. Is this right ?
>
> So for now I am with attempt 2 as I can run cron and browser called
> scripts.
>
> If anyone have some thoughts or a better solution on this please share
> them with me.
>
> Cheers.
>
>





[SLUG] Paths Dilemma on RH Linux Server

2003-11-03 Thread education
Hi Sluggers:

I have been looking for an efficient way to move away from having to
specify paths in every script repeatedly. So here is the full story in
detail, so that I can hopefully get an informed opinion. This is not a
problem as such; I am merely looking for an efficient way to do this.

When I write scripts (Perl) I normally specify the full paths to the base
root directory (base html dir) and to cgi-bin. With these two paths I can
access other scripts' interfaces after pushing the path onto @INC and
using 'require', and I can also process text files, etc.

In the past I specified these two paths at the top of every script, but
recently I have been thinking about moving away from this. Here are two
things I tried; neither is quite what I want to achieve, but they are
close.

Attempt 1.
===
I basically have a text file where these two full paths are specified.
The file is located in the same directory as all the scripts. If there are
scripts in another directory, then that directory has a symlink to the
file that holds the actual path data. Now in all scripts, I just do this
before anything else to set the paths:

my @paths;

open (FILE, "path.dat") || die "error blah blah ...";
while (<FILE>) {
    chomp;
    push @paths, $_;
}
close (FILE);

As I know the order of the paths in the file, I have these two below as
globals:

my $homedir = $paths[0];
my $cgibin = $paths[1];

This works great with browser-called scripts, but I hit a problem with
scripts that run via cron. A cron script cannot open the "path.dat" file
even though it is in the same directory as the script itself. I think
cron's working directory (I don't know what it is) is not the directory
where the script and file are located, so the relative filename cannot be
resolved.

So I moved away from this solution and went to attempt 2.

Attempt 2.
===
I create a 'path.pl' script where I specify $homedir, $cgibin, and other
commonly used stuff for all scripts via a routine called "set_paths()".
Then with "Exporter::Lite" I export these two variables and the others.

The problem is that in the other scripts I still have to say where this
'path.pl' script is. So I am forced to have one path specified, i.e. I
have to define

$cgibin = "/path_to_where_path.pl_is_located";

Then I do this

push($cgibin, @INC);
require 'path.pl';

&set_paths();

This now makes all the common stuff accessible. But I still have to
specify one hardcoded path in all scripts, which is nowhere near as good
as attempt one. With attempt 2, cron scripts also work fine.

I have been looking at a way to have @INC permanently contain the path to
where this 'path.pl' is located, so that all I need to do is call
"&set_paths()". I read about this at this url:

http://perl.apache.org/docs/1.0/guide/porting.html#_INC_and_mod_perl

However I'm not sure what configuration file they are talking about, or
where this startup.pl script is located. It also appears that only the
server administrator can do this. Is this right ?

So for now I am staying with attempt 2, as I can run both cron and browser-called scripts.

If anyone has some thoughts or a better solution, please share them with
me.

Cheers.




Re: [SLUG] Removing Toolbars etc.. from browser window

2003-11-03 Thread education
Hi :

I've tried this in the perl script with no luck. Here is what I did; this
is what I have after reading the replies I got.

$javascriptwin = "
function openWindow(u, n, w, h)
{
  var win = window.open(u, n, \"toolbar=no,location=no,\"+
  \"status=no,menubar=no,scrollbars=yes\"+
   \"width=\"+w+\",height=\"+h);
}
";

In the hash, the entry that has the <a> tag is pasted below:

%hashdata = (
...
'key_data' => ["http://www.domain.com/file.html\",\"win\";, 400,
400); return false;\">?<\/a>"],

);

This does not open a new window at all, and I do not even see the
contents of the file.

Have I missed something here?
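
For reference, here is my best guess at what the corrected pieces should
look like (untested; the helper name popupFeatures and the href="#" are
just my guesses):

```javascript
// Guessed fix: keep the feature list comma-separated -- note the comma
// before "width=", which my concatenation above drops.
function popupFeatures(w, h) {
  return 'toolbar=no,location=no,status=no,menubar=no,scrollbars=yes' +
         ',width=' + w + ',height=' + h;
}

function openWindow(u, n, w, h) {
  return window.open(u, n, popupFeatures(w, h));
}

// The link the hash should emit: single quotes inside the onclick
// attribute, since the attribute itself is delimited by double quotes:
//
//   <a href="#" onclick="openWindow('http://www.domain.com/file.html',
//       'win', 400, 400); return false;">?</a>
```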

Cheers.

> [EMAIL PROTECTED] wrote:
>> Hi Sluggers:
>>
>> Objective:
>> When someone clicks on a link, I want a new browser window to open
>> with no toolbars, menubar. I want scrollbars. I know Javascript has
>> window.open() to achieve what I want. However the new window opens up
>> from an <a> tag from a script. I tried window.open() in the <a> tag but
>> it did not work. I have also tried this in the ".html" file I want to
>> open up in the new resized browser window with no menu and toolbars.
>>
>
> Javascript:
> window.open(url, name, options)
>
> options amongst other things could be:
> "menubar=no,personalbar=no,resizable=yes,scrollbars=yes,width=300,height=300,left="
>  + l + ",top=" + t
>
> where l and t are the calculated left and top coordinates to put a
> 300x300 window in the middle of the screen.
>
>
> To call a javascript function from an <a> tag, you MUST return false so
> that the link is not activated, eg:
>
> Something here
>
> Also, with window operations, particularly sizing, be careful what
> object parameters you use - Internet Explorer uses some non-standard
> ones.
>
>> I have this sample Javascript in a <script> tag
>>
>> <script>
>> function resizeWindow(width,height){
>> if (document.layers) {
>> // resizeTo sets inner size, so use this instead
>> window.outerWidth = width;
>> window.outerHeight = height;
>> } else window.resizeTo(width,height);
>> }
>> </script>
>>
>> Now in my <a> tag I have added this
>>
>> 
>>
>> This of course just resizes the window. But it still leaves the
>> toolbars and menu bars. Any ideas on how I can extend the above script
>> with a window.open() or something similar would be greatly
>> appreciated.
>>
>> Cheers.
>>
>>
>
>
> --
> Phil Scarratt
> Draxsen Technologies
> IT Contractor
> 0403 53 12 71





Re: [SLUG] Cron Not Running Scripts

2003-11-02 Thread education
> On Sun, 2003-11-02 at 18:28, [EMAIL PROTECTED] wrote:
>> Hi Sluggers:
>>
>> For some reason if I have cron setup like this the script does not do
>> anything:
>>
>> hh mm * * * /usr/bin/perl /path_to_script/scriptname.pl > /dev/null
>> 2>&1
>
> There is a subtle difference with root crons.
>
> are you anacron or is your computer on full time?

[Louis] This is a web server that is always running.
>
>> When I remove the "> /dev/null 2>&1" cron finally executes what I want
>> the script to do.
>
> What output did it give you?  Normally this will not affect the outcome
> like this.

[Louis] The script is supposed to update some files and also send some
output to the display. I know that "> /dev/null 2>&1" will not display
anything on the screen. But looking into it also revealed that the files
that should have been updated were not.

Removing it shows the display I have in the script and also updates the
files I wanted updated.

So does this mean there is something in the script that's doing this ?

Cheers.


>
> --
> Thanks
> KenF
> OpenOffice.org developer





RE: [SLUG] Anyone used Perl's Filter package !!

2003-11-02 Thread education
Hi Angus:

My replies under [Louis].

> -Original Message-
> From: Angus Lees [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, 28 October 2003 20:37
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: [SLUG] Anyone used Perl's Filter package !!
>
>
> At Mon, 27 Oct 2003 13:17:11 +1100 (EST), education  wrote:
> > I download the package Filter-1.26 from CPAN. I'm having problems
> > understanding how to use this package to get Perl code to
> go through
> > the filter before compilation.
>
> > Also does this method clearly prevents anyone from seeing
> the source
> > code of a Perl script ?
>
> Unless it uses some form of encryption, then all you've done
> is obscured the code - anyone who can use the same filter can
> read the source code.
>
> See Embperl for an example of a module which *does* do
> encrypted source code - you compile a private key into the
> embperl module and then use it to encrypt source code files
> (see crypto/README in Embperl source).
>

[Louis] This is not installed on my server. I will get it installed to
play with. However I have just read about it from CPAN from the reference
you gave. I have some questions about this.
Here goes:

Q1. This may sound basic: if I encrypt the file and then run it on
another server, does that server just need Embperl (the same version, or
any version) installed ?

I read from the README that the "epcrypto_config.h" file needs to be
edited to enable encryption and choose the algorithm to use.

Q2. Does the setup for this part on the other server have to be the same ?
i.e. for the other server to decrypt the binary and run the perl script,
does its header file have to be edited and recompiled to use the exact
same algorithm ? Can't Embperl detect what algorithm to use when
decrypting ?

Q3. When encrypting the file with the epcrypto program, can I name the
destination file as a ".pl" script ?

Q4. As this is a Perl script I am trying to encrypt, does the path to
perl have to be in the source before encryption ? Will the script work if
I add the path to perl manually to the destination file ?

Q5. Finally, I suppose all I distribute are the encrypted destination
files, and I DO NOT give out the source files from the crypto directory ?

Once I get it installed I will play with it a little bit. If I need more
help I'll post again.

Cheers.




[SLUG] Removing Toolbars etc.. from browser window

2003-11-02 Thread education
Hi Sluggers:

Objective:
When someone clicks on a link, I want a new browser window to open with
no toolbar or menubar, but with scrollbars. I know Javascript has
window.open() to achieve what I want. However the new window opens up from
an <a> tag generated by a script. I tried window.open() in the <a> tag but
it did not work. I have also tried this in the ".html" file I want to open
up in the new resized browser window with no menu and toolbars.

I have this sample Javascript in a <script> tag

<script>
function resizeWindow(width,height){
    if (document.layers) {
        // resizeTo sets inner size, so use this instead
        window.outerWidth = width;
        window.outerHeight = height;
    } else window.resizeTo(width,height);
}
</script>

Now in my <a> tag I have added this



This of course just resizes the window. But it still leaves the toolbars
and menu bars. Any ideas on how I can extend the above script with a
window.open() or something similar would be greatly appreciated.
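
For reference, this is roughly what I think should work based on the
window.open() documentation, though my attempts so far have failed
(plainWindowFeatures is just a helper name I made up):

```javascript
// Feature string for window.open(): once any feature is listed, most
// browsers turn the unlisted ones (toolbar, menubar, location, status)
// off by default, so listing only what I want should be enough.
function plainWindowFeatures(width, height) {
  return 'scrollbars=yes,resizable=yes' +
         ',width=' + width + ',height=' + height;
}

// Intended use from a link (returning false so the href is not followed):
//   <a href="#" onclick="window.open('file.html', 'win',
//       plainWindowFeatures(400, 400)); return false;">open</a>
```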

Cheers.




[SLUG] Cron Not Running Scripts

2003-11-02 Thread education
Hi Sluggers:

For some reason, if I have cron set up like this, the script does not do
anything:

hh mm * * * /usr/bin/perl /path_to_script/scriptname.pl > /dev/null 2>&1

When I remove the "> /dev/null 2>&1", cron finally executes what I want
the script to do.

Any clues ??

Cheers.




[SLUG] Anyone used Perl's Filter package !!

2003-10-26 Thread education
Hi :

I downloaded the package Filter-1.26 from CPAN. I'm having problems
understanding how to use this package to get Perl code to go through the
filter before compilation.

If anyone has used this or has experience with it, would it be possible
to get a quick rundown on how to get a simple Perl script to pass through
this package before compilation and then execute the code ?

Also, does this method actually prevent anyone from seeing the source
code of a Perl script ?

Are there any dangers that part of the code may not execute as expected
when passed through this package ?

My objective is to get all my POD formatted scripts and main scripts to
pass through a Filter before compilation. I'm basically looking for a way
to prevent users from seeing the source code of the Perl scripts. Am I on
the right track with this package ?

Thanks in advance.

Louis.




[SLUG] Questions about AWStats Logs !!

2003-10-20 Thread education
Hi Sluggers:

I hope some people here have heard of AWStats. I have some questions
regarding the stats it provides. Please note that I have looked up the
AWStats documents but am not sure if I'm understanding everything
properly. So I will ask here. So please confirm my understanding and
queries.

Issue 1:
=
First let's start with Unique Visitors. From my understanding, a unique
visitor is a host that came to a web site. A unique visitor can have more
than one visit, and a visit can consist of more than one page viewed. Now
let's say I connect to the internet using my ISP, which uses dynamic IP
addressing. If I go to my own main url (i.e. the base index.html), will
AWStats count me as a new Unique Visitor whenever the IP address changes
between connections, despite the fact that I visited the site myself ?

For such a visit, does this unique visitor fall in the category "Direct
address / Bookmarks" from AWStats' "Connect to site from" ?

If so, does "Direct address / Bookmarks" mean I'm the one who visited the
site, i.e. if most of the hits are from hosts that resolve to my ISP's
domain name ?

Issue 2:
=
If I connect to my site via ftp or SSH, is this also logged as a Unique
Visitor ? Do uploads and downloads count as visits ?

Issue 3:
=
AWStats has a section called "Authenticated users (Top 10)". What falls in
the category "Other logins (and/or anonymous users)" ?

Issue 4:
=
Cron scripts can access web pages from the command line rather than via a
browser. Does AWStats count these as hits and log them as Unique Visitors
for the host that resolves to the domain name where the cron job is
executed ?

Cheers.




[SLUG] Hiding Perl Code - Using Morse.pm

2003-10-04 Thread education
Hi Sluggers:

Summary:
I have Perl code written in various .pl files. One main file uses
"require" so that interfaces from the other .pl files are available. The
main file has the path to perl in there.

The other files just contain interfaces and their implementations. I have
written these using the POD format.

Objective and Problem:
I tried to use Morse.pm to hide the code in the POD-formatted files only.
The code is hidden when I first run them from my PC. However the main
script file, which is not Morsed, cannot read the Morse-encoded code.

I have loaded "use Morse" at the very top of each of the interface files.

I'm running the main script on Apache on RH 7.3.

Any suggestions.

Louis.




RE: [SLUG] HTTP_REFERER not found [Repost]

2003-09-29 Thread education
>>Q1. If I click on a link to go to another HTML page say name.html, I
>> lose the "hop=some_data" part. How do I make the "hop=some_data" go
>> with the "name.html" with the  tag from an HTML document ? Is this
>> possible or a script can do this only ?
>
> there are a couple of ways -
> one way is to add the "hop=some_data" to the href itself, IF it is a
> static string... ie: <a href="http://www.domain.com/name.html?hop=some_data">
> otherwise, you can implement a simple CGI/Perl script to generate the
> index page from a template...

Well, the "hop=some_data" is dynamic. Looks like a script is required to
do this then, and to pass it dynamically to every <a> tag on the page
being viewed, so that when a new page is clicked, the "hop=some_data" is
preserved.

>
>
>>Q2. The order page calls a Perl script say "orderpage.pl", I want the
>> script to capture the "hop=some_data". I got the script to print the
>> whole %ENV, and I see no "HTTP_REFERER". I thought "HTTP_REFERER" would
>> show the url that called "orderpage.pl" with the "hop=some_data" . But
>> if I cannot see "HTTP_REFERER", then how do I get the script to capture
>> the
>>"hop=some_data" ?
>
> The ENV variable you are looking for is definitely the QUERY_STRING...
> or use STDIN to retrieve POST data...

Well from the page

http://www.yourdomain.com/index.html?hop=some_data

I clicked on a link for the order page, which has this

<a href="http://www.yourdomain.com/cgi-bin/orderpage.pl?value=some_data_the_order_script_uses">

This shows QUERY_STRING as

value=some_data_the_order_script_uses

It looks like I need to somehow pass the "hop=some_data" dynamically to
the "orderpage.pl" script, so that when clicked the link has this

href="http://www.yourdomain.com/cgi-bin/orderpage.pl?value=some_data_the_order_script_uses&hop=some_data"

Is there any other way I can do this ?

>
> If you are using perl (looks like you are using a hash there - %ENV)
> then you can
> either use the CGI module to retrieve the query strings, in which case
> you don't
> have to worry about that at all, or you can do something like this -
>
>
> my %formData;
>
> read STDIN, $_, $ENV{'CONTENT_LENGTH'};  # Read STDIN into the default variable
> %formData = split /&|=/;                 # Split keys/values into 'formData' by '&' and '='
> if (!%formData)                          # If we didn't get anything from the POST method,
> {
>    $_ = $ENV{'QUERY_STRING'};            #   try retrieving it from QUERY_STRING (GET method)
>    %formData = split /&|=/;              #   and split like before.
> }
>
>
> Then to access the data, you would go:
> my $hop = $formData{'hop'};
>
> This is a quick hack - CGI module is much better & more efficient
>  eg. what happens when you get a number of values from 
> in
> this code?
>

Thanks for this piece of code. I already have a read_input() sub doing
something similar.




Re: [SLUG] HTTP_REFERER not found [Repost]

2003-09-29 Thread education
>> Q1. If I click on a link to go to another HTML page say name.html, I
>> lose the "hop=some_data" part. How do I make the "hop=some_data" go
>> with the "name.html" with the  tag from an HTML document ? Is this
>> possible or a script can do this only ?
>
> This is not how a relative link works. Everything after the last '/' is
> lost. However with enough creativity anything can be made possible in
> Javascript.

I'll see if there are any javascript already available to do this. Or I
might just write a Perl script like the order page one I got.
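
If I end up doing it in Javascript, I imagine something along these lines
(a rough, untested sketch; hopLink is my own name for it):

```javascript
// Carry the current page's "hop" query parameter over to a link target,
// so it survives navigating between static pages.
function hopLink(href, search) {
  var m = /[?&](hop=[^&]*)/.exec(search);   // search is location.search
  if (!m) return href;                      // nothing to carry over
  return href + (href.indexOf('?') === -1 ? '?' : '&') + m[1];
}

// On page load one would loop over document.links and replace each
// link's href with hopLink(href, location.search).
```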

>
>>
>> Q2. The order page calls a Perl script say "orderpage.pl", I want the
>> script to capture the "hop=some_data". I got the script to print the
>> whole %ENV, and I see no "HTTP_REFERER". I thought "HTTP_REFERER"
>> would show the url that called "orderpage.pl" with the "hop=some_data"
>> . But if I cannot see "HTTP_REFERER", then how do I get the script to
>> capture the
>> "hop=some_data" ?
>
> You should be seeing the previouse URL in the environment variable
> HTTP_REFERER. Contrary to what someone else said it will contain the
> query data. It might not exists because the browser is not sending it.
> Look at the HTTP traffic with a sniffer to verify the browser is sending
> it. Some more information about what web server you're using will help
> fix the problem too.
>

Well, I am using Red Hat Linux 7.3 running an Apache server. Does this
help ? I'm not sure what you mean by using a sniffer (please clarify). I
just printed out the %ENV hash and saw no HTTP_REFERER there.




[SLUG] HTTP_REFERER not found [Repost]

2003-09-29 Thread education
Hi Sluggers:

I subscribed with a new email address. So I am reposting my query on this.

Setup:
Url comes to site like this

http://www.yourdomain.com/index.html?hop=some_data

Q1. If I click on a link to go to another HTML page, say name.html, I
lose the "hop=some_data" part. How do I make the "hop=some_data" go with
"name.html" using the <a> tag from an HTML document ? Is this possible, or
can only a script do this ?

Q2. The order page calls a Perl script say "orderpage.pl", I want the
script to capture the "hop=some_data". I got the script to print the whole
%ENV, and I see no "HTTP_REFERER". I thought "HTTP_REFERER" would show the
url that called "orderpage.pl" with the "hop=some_data" . But if I cannot
see "HTTP_REFERER", then how do I get the script to capture the
"hop=some_data" ?

Thank You.





[SLUG] HTTP_REFERER not found

2003-09-27 Thread education
Hi Sluggers:

Setup:
Url comes to site like this
http://www.yourdomain.com/index.html?hop=some_data

Q1. If I click on a link to go to another HTML page, say name.html, I
lose the "hop" data. How do I make the "hop=some_data" go with
"name.html" ?

Q2. The order page calls a Perl script say "orderpage.pl", I want the
script to capture the "hop=some_data". I got the script to print the whole
%ENV, and I see no "HTTP_REFERER". I thought "HTTP_REFERER" would show the
url that called "orderpage.pl" with the "hop=some_data" . But if I cannot
see "HTTP_REFERER", then how do I get the script to capture the
"hop=some_data" ?

Thank You.


[SLUG] Weird HTML Omitting code via Browser !!

2003-09-13 Thread education
Hi Sluggers:

Issue:
For some reason, when I load an html file in a browser, some of the data
is not displayed. E.g. I have an <img> tag in there, but when I do view
source in the browser, I do not see the <img> tag for the image I wanted
to see. However, when I open the html file via the command line, I see
that the <img> tag for that particular image is there.

Has anyone experienced this ? Is the web server doing this ?


Re: [SLUG] Anyone Installed AWStats on a Server

2003-09-06 Thread education
> I am not completely sure what you mean by "where is that located for
> "root" ", do you mean that you have more than one cgi-bin set up, for
> more than one domain for eg? If you only have the one cgi-bin then put
> them there. Where it is located depends on the server type and OS; the
> web server I use (Apache 1.3.28 running on FreeBSD 4.8) has its cgi-bin
> at /usr/local/www/cgi-bin, while my MandrakeLinux box has its cgi-bin at
> /var/www/cgi-bin from memory.  What are you using?

I am using Red Hat 7.3.

AWStats (http://awstats.sourceforge.net/) is a server-side program. I
believe you install it as root into its html and cgi-bin directories. Then
root sets it up so that sites on the server can use it as well.

From your suggestions above (from root):

html => /var/www/html
cgi-bin => /var/www/cgi-bin

Cheers.


[SLUG] Anyone Installed AWStats on a Server

2003-09-05 Thread education
Hi Sluggers:

I'm trying to install AWStats onto my web server. However I am stuck on
the setup/install instructions.

It asks me to install all scripts in the server's cgi-bin; where is that
located for "root" ?

It also asks me to install some other files in a directory readable by the
web server. Again, does root have a directory that I can access via the
web ?

Thanks in advance.