Re: [analog-help] how to filter by time and show all hosts
Viau, Stephane ([EMAIL PROTECTED]; Tuesday, February 04, 2003 2:53 PM):

> HOSTSORTBY BYTES is that asc or desc
> what if i want HOSTSORTBY BYTES desc, top 100?

Try it and look at the results. All reports are sorted descending.

> also, i tried FROM TO
> FROM 030202:100
> TO 030204:200
> against logs on 20030203 but it returns:
> Program started at Tue-04-Feb-2003 16:46.
> Analysed requests from Mon-03-Feb-2003 04:59 to Tue-04-Feb-2003 04:59 (1.00 days).
> ???

Presumably because that's all the data that exists in the log file 20030203.

--
Jeremy Wadsack
Wadsack-Allen Digital Group

+----------------------------------------------------------------------
| TO UNSUBSCRIBE from this list:
|    http://lists.isite.net/listgate/analog-help/unsubscribe.html
|
| Digest version: http://lists.isite.net/listgate/analog-help-digest/
| Usenet version: news://news.gmane.org/gmane.comp.web.analog.general
| List archives:  http://www.analog.cx/docs/mailing.html#listarchives
+----------------------------------------------------------------------
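[Editor's note: for reference, a minimal analog.cfg sketch combining the commands discussed in this thread. The comments and the bytes floor ("-100b") are editorial additions, not from the original messages; check them against the Analog documentation.]

```text
# Restrict analysis to a date/time range (see docs/include.html#FROMTO)
FROM 030202:100
TO   030204:200

# Host Report, sorted by bandwidth (all reports sort descending)
HOST ON
HOSTSORTBY BYTES

# Keep the top 100 hosts by bytes; "-100r" would floor on request count instead
HOSTFLOOR -100b
```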
RE: [analog-help] how to filter by time and show all hosts
HOSTSORTBY BYTES is that asc or desc?
what if i want HOSTSORTBY BYTES desc, top 100?

also, i tried FROM TO:
FROM 030202:100
TO 030204:200
against logs on 20030203 but it returns:
Program started at Tue-04-Feb-2003 16:46.
Analysed requests from Mon-03-Feb-2003 04:59 to Tue-04-Feb-2003 04:59 (1.00 days).
???

Thanks,

Stephane Viau
Systems Analyst
The Canadian Real Estate Association (CREA)
suite 1600, 344 Slater St
Ottawa, ON K1R 7Y3
work: (613) 237-7111
fax: (613) 234-2567
[EMAIL PROTECTED]

-----Original Message-----
From: Jeremy Wadsack [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 04, 2003 4:50 PM
To: [EMAIL PROTECTED]
Subject: Re: [analog-help] how to filter by time and show all hosts

[Jeremy's reply was quoted here in full; it appears as its own message below.]
Re: [analog-help] how to filter by time and show all hosts
Viau, Stephane ([EMAIL PROTECTED]; Tuesday, February 04, 2003 2:30 PM):

> i want to create a report that looks at entries from 1:30 a.m. to 1:45 a.m.,

Use FROM and TO; see http://analog.cx/docs/include.html#FROMTO.

> and show ALL hosts (no limit), and sort by the most bandwidth used, page
> views, desc
> I've read the documentation but I'm still lost. Please help. This is what
> I have now:
> HOST ON # Host Report
> HOSTFLOOR NPpBbD
> HOSTFLOOR -100r

This will only show the top 100 entries sorted by requests. To show all, use something like this:

HOSTFLOOR 1r

To sort by bandwidth used, use this:

HOSTSORTBY BYTES

> host is the IP that made the request to the site, right?

Correct.

--
Jeremy Wadsack
Wadsack-Allen Digital Group
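[Editor's note: put together, the suggestions above amount to this sketch of a Host Report config; the comments are editorial, not from the original message.]

```text
HOST ON           # turn on the Host Report
HOSTFLOOR 1r      # every host with at least 1 request, i.e. no limit
HOSTSORTBY BYTES  # sort by bandwidth used
```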
[analog-help] how to filter by time and show all hosts
I want to create a report that looks at entries from 1:30 a.m. to 1:45 a.m., and show ALL hosts (no limit), and sort by the most bandwidth used, page views, desc.

I've read the documentation but I'm still lost. Please help. This is what I have now:

HOST ON # Host Report
HOSTFLOOR NPpBbD
HOSTFLOOR -100r

host is the IP that made the request to the site, right?

Thanks,

Stephane Viau
Systems Analyst
The Canadian Real Estate Association (CREA)
suite 1600, 344 Slater St
Ottawa, ON K1R 7Y3
work: (613) 237-7111
fax: (613) 234-2567
[EMAIL PROTECTED]
Re: [analog-help] rotating cache files with progressively less granularity
Andrew Houghton ([EMAIL PROTECTED]; Tuesday, February 04, 2003 12:59 PM):

> I have ~ 100MB of daily logs that I need to process. I need to provide
> historical information in the same file as I provide recent daily
> information, but I want to throw away the fine-grained detail as it
> becomes older.
> Ideally, I'd have 15 days worth of fine-grained log analysis, 30 days of
> medium-grained analysis (for days -16 -> -45), followed by all
> historical data at a low-grained level.
> I *think* the following will work, but I wanted to run it by this group
> to get some feedback before I start writing scripts.
> 1. Process nightly logs into distinct nightly cache files with fine-grained info.
> 2. Keep nightly cache files for 45 days.
> 3. Every night, create a new history cache file out of the previous history cache + oldest nightly (low-grained info).
> 4. Delete the oldest nightly.
> 5. Every night, create a brand new cache file of medium-grained info out of the oldest 30 nightly cache files.
> 6. Every night, create a brand new cache file of fine-grained info out of the newest 15 nightly cache files.
> 7. Every night, create a single fine-grained run of the history cache file, the medium-grained cache file, and the fine-grained cache file.
> This seems like it should work, and it seems like it should provide the
> information we need while providing savings of processing time and
> memory usage.
> Is there anything in here that doesn't make sense? Anything that I need
> to consider? Will the final nightly run suffer in any way from
> including sparse historical data?

First of all, the fine-grained window should be 14 days (you just deleted the 45th day, so there are only 44 days left).

Second, in order to avoid common cache-file pitfalls, heed these points:

* As you specify, make sure you delete the 45th day before building the medium-grained cache; otherwise you will have double-counting for that day.
* Make sure you never include the daily, fine-grained cache files in your reporting step, or they will interfere with the other cache files.
* You don't really need to create the new fine-grained cache file of the most recent 14 days -- you can just include the daily cache files, which saves a step of processing.

Now, I'm not sure how you are defining 'low-grained' and 'medium-grained', and whether that will reduce the memory used by Analog. The cache files contain a complete record of each internal table Analog uses to process. In order to get accurate results you need to store every record there was. If you apply any filters or aliases to some cache files and not others, then you aren't comparing the same thing.

For example, suppose you have these records in your daily, fine-grained cache of 16 days ago:

REQUEST: /index.html, 234R
REQUEST: /index.html?show=login, 45R

And these records in your daily, fine-grained cache of yesterday:

REQUEST: /index.html, 154R
REQUEST: /index.html?show=login, 12R

Now you reduce the granularity of the older cache file by removing arguments, so that your medium-grained cache file contains:

REQUEST: /index.html, 279R

When you run the reports you'll see this, which isn't necessarily accurate:

Reqs | File
-----+------------------------
 433 | /index.html
  12 | /index.html?show=login

This happens because, while Analog will collect all the information for particular days out of the cache files, it aggregates it across all periods reported. So I'm not sure that what your reports tell you will really be what you want them to show.

--
Jeremy Wadsack
Wadsack-Allen Digital Group
[analog-help] rotating cache files with progressively less granularity
I have ~ 100MB of daily logs that I need to process. I need to provide historical information in the same file as I provide recent daily information, but I want to throw away the fine-grained detail as it becomes older.

Ideally, I'd have 15 days worth of fine-grained log analysis, 30 days of medium-grained analysis (for days -16 -> -45), followed by all historical data at a low-grained level.

I *think* the following will work, but I wanted to run it by this group to get some feedback before I start writing scripts.

1. Process nightly logs into distinct nightly cache files with fine-grained info.
2. Keep nightly cache files for 45 days.
3. Every night, create a new history cache file out of the previous history cache + oldest nightly (low-grained info).
4. Delete the oldest nightly.
5. Every night, create a brand new cache file of medium-grained info out of the oldest 30 nightly cache files.
6. Every night, create a brand new cache file of fine-grained info out of the newest 15 nightly cache files.
7. Every night, create a single fine-grained run of the history cache file, the medium-grained cache file, and the fine-grained cache file.

This seems like it should work, and it seems like it should provide the information we need while providing savings of processing time and memory usage.

Is there anything in here that doesn't make sense? Anything that I need to consider? Will the final nightly run suffer in any way from including sparse historical data?

Thanks in advance for any responses,

Andrew

p.s. I'm moving to this (sort of complicated) scheme after suddenly finding that nightly analog runs were running out of memory; I hadn't considered the size that the cache files grow to.
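[Editor's note: the nightly steps above could be scripted roughly as follows. This is a dry-run sketch that only *prints* the commands it would run; all file and config names (fine.cfg, low.cfg, daily-*.cache, etc.) are hypothetical. "+g" loads an extra configuration file and "+C" passes a single configuration command on Analog's command line.]

```shell
#!/bin/sh
# Dry-run sketch of the nightly cache rotation (steps 1-7 above).
# Prints the commands instead of executing them; names are illustrative.

rotate_night() {
    newlog=$1     # last night's log, e.g. access-20030204.log
    oldest=$2     # oldest daily cache (day -45), e.g. daily-20021221.cache

    # Step 1: process the new log into its own fine-grained daily cache.
    echo "analog +gfine.cfg +C'LOGFILE $newlog' +C'CACHEOUTFILE daily-new.cache' +C'OUTPUT NONE'"

    # Step 3: fold the oldest daily into the low-grained history cache.
    echo "analog +glow.cfg +C'CACHEFILE history.cache' +C'CACHEFILE $oldest' +C'CACHEOUTFILE history-new.cache' +C'OUTPUT NONE'"

    # Step 4: delete the oldest daily *before* rebuilding the medium cache,
    # so that day is never counted twice.
    echo "rm $oldest"

    # Step 5: rebuild the medium-grained cache from the older 30 dailies
    # (one CACHEFILE line per daily file; elided here).
    echo "analog +gmedium.cfg +C'CACHEOUTFILE medium.cache' +C'OUTPUT NONE'"

    # Step 7: the reporting run reads only the rolled-up caches plus the
    # newest 14 dailies (per the reply above, step 6 can be skipped).
    echo "analog +greport.cfg +C'CACHEFILE history-new.cache' +C'CACHEFILE medium.cache'"
}

rotate_night access-20030204.log daily-20021221.cache
```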
Re: [analog-help] Search Word and Search Query
Egon <[EMAIL PROTECTED]> wrote:

> is there a possibility to sort Search Word and Search Query by search
> engine?

No. You might be able to get some of what you want by looking at the Referrer Report, though.

Aengus
[analog-help] Search Word and Search Query
Is there a possibility to sort Search Word and Search Query by search engine?

Thank you for the help.
Re: [analog-help] Grouping domain names
>> I have a list of domains in my Referring Site Report that I would like to
>> group, eg domain.com and search.domain.com to be reported as one referring
>> site.

> Use the REFALIAS command. See http://analog.cx/docs/alias.html.

Thanks Jeremy - that's the one.
[analog-help] Help
I am unable to get into analog.cfg to configure the information for the sites I wish to monitor. It tells me on screen that there was a fault with the download, but I have done it 3 times. Hope that you can get me started. I did e-mail Mr Handfield but received a most unhelpful reply.

Rosemarie Haseldine.
Re: [analog-help] Grouping domain names
David Redfern ([EMAIL PROTECTED]; Tuesday, February 04, 2003 9:38 AM):

> Dear analog-helpers,
> I have a list of domains in my Referring Site Report that I would like to
> group, eg domain.com and search.domain.com to be reported as one referring
> site.
> I thought I had seen how to do this when I was looking for something else a
> few months ago, but now I can't see it again.
> Do I just have a bad memory, or is there a way of doing this in Analog? Any
> pointers much appreciated.

Use the REFALIAS command. See http://analog.cx/docs/alias.html.

--
Jeremy Wadsack
Wadsack-Allen Digital Group
[analog-help] Grouping domain names
Dear analog-helpers,

I have a list of domains in my Referring Site Report that I would like to group, eg domain.com and search.domain.com to be reported as one referring site.

I thought I had seen how to do this when I was looking for something else a few months ago, but now I can't see it again. Do I just have a bad memory, or is there a way of doing this in Analog? Any pointers much appreciated.

TIA,
David
Re: [analog-help] Hacking attempts log?
Dave Bender <[EMAIL PROTECTED]> wrote:

> I was just sorting through our "Failed Requests" report, trying to
> count up the numbers of hits to
> "/scripts/..%5c../winnt/system32/cmd.exe" and numerous variations to
> get a sense of how often hacking attempts (-- that's what these are,
> right? --) our site deals with daily.

They're more likely to be attempts by IIS web servers infected with Code Red or Nimda to spread.

> I was wondering if it is possible to automatically break those out in
> a separate report. It seems like it would be worthwhile to help
> raise the awareness of attempts to exploit web sites for our
> non-technical management.
>
> How difficult would it be to create such a report?

Just FILEINCLUDE all the requests you feel fall into that category, and run a report with FAILURE ON.

Aengus
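[Editor's note: a sketch of a separate run along the lines Aengus suggests. The include patterns are illustrative examples of common Code Red / Nimda probe URLs, not an exhaustive or verified list.]

```text
# Analyse only requests that look like worm/exploit probes
FILEINCLUDE /scripts/*cmd.exe*
FILEINCLUDE /*root.exe*
FILEINCLUDE /*default.ida*

# Report on the failed requests among them
FAILURE ON
```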
[analog-help] Hacking attempts log?
I was just sorting through our "Failed Requests" report, trying to count up the number of hits to "/scripts/..%5c../winnt/system32/cmd.exe" and numerous variations, to get a sense of how often hacking attempts (-- that's what these are, right? --) our site deals with daily.

I was wondering if it is possible to automatically break those out in a separate report. It seems like it would be worthwhile to help raise the awareness of attempts to exploit web sites for our non-technical management.

How difficult would it be to create such a report?
Re: [analog-help] Processing Time Report 2
On Tue, 4 Feb 2003, Roger Perttu wrote:

> Hi!
>
> In the Processing Time Report, is it possible to get a list of the 10
> files that have the longest processing time? There is a date of last
> access but AFAIK no file name or URL of last access.

See docs/faq.html#faq128

--
Stephen Turner, Cambridge, UK
http://homepage.ntlworld.com/adelie/stephen/
"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." (Edsger W. Dijkstra)
[analog-help] Processing Time Report 2
Hi!

In the Processing Time Report, is it possible to get a list of the 10 files that have the longest processing time? There is a date of last access but AFAIK no file name or URL of last access.

Thanks,
Roger P
Re: [analog-help] Count PDF as page
On Tue, 4 Feb 2003, Gaby Weigang wrote:

> Hello,
>
> who can tell me if it is possible to count a pdf-file as a page?
>
> I wrote
> PAGEINCLUDE *.pdf
>
> but the results seem to be too high.

Search for "PDF" in the FAQ.

--
Stephen Turner, Cambridge, UK
http://homepage.ntlworld.com/adelie/stephen/
"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." (Edsger W. Dijkstra)