Re: [squid-users] upload

2008-12-24 Thread john Moylan
In my experience, uploads through a reverse proxy will add some
latency, which may make them unusable. It may be advisable to upload
directly to the origin server instead.

J

2008/12/24 ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ mirz...@gmail.com:
 Can uploads be limited by Squid?
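For the record, Squid can cap upload size; a minimal squid.conf sketch (the 10 MB limit is illustrative):

```
# Reject request bodies (uploads/POSTs) larger than 10 MB
request_body_max_size 10 MB
```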



Re: [squid-users] load balancing

2008-12-24 Thread john Moylan
If you want a load balancer for Squid servers, LVS is a good
option. Red Hat even has a packaged version.

J

2008/12/23 Ken Peng kenp...@rambler.ru:



 Hi All,

 any links on how to configure load balancing of squid



 See the default squid.conf, :)
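For the squid.conf route hinted at above: when Squid itself fronts multiple parents, round-robin cache_peer lines give basic load balancing (hostnames here are hypothetical):

```
# Requests alternate between the two parent caches
cache_peer parent1.example.com parent 3128 3130 round-robin
cache_peer parent2.example.com parent 3128 3130 round-robin
```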



Re: [squid-users] performance datas for Squid

2008-12-07 Thread john Moylan
 For this 15-25Mb/s, do you mean bits or bytes? Thanks

bits

 Thanks John. For small files, why not use GDSF on both locations?

I can't remember exactly - I'll probably compare them both again soon.

J


2008/12/7 Ken DBA [EMAIL PROTECTED]:



 --- On Sun, 12/7/08, john Moylan [EMAIL PROTECTED] wrote:

 From: john Moylan [EMAIL PROTECTED]

 GDSF on disk, LRU on
 Memory.


  Thanks John. For small files, why not use GDSF on both locations?



 that's serving
 between
 15-25Mb/s of outbound traffic.



 For this 15-25MB/s, do you mean bits or bytes? Thanks.


 Ken.






Re: [squid-users] performance datas for Squid

2008-12-06 Thread john Moylan
I have a number of squid boxes behind LVS acting as reverse proxies.

They are all HP DL380/385's G4 Machines (about 3 years old) with 7GB
to 12 GB per machine, 4 unraided 15K SCSI HDD for caches on each
machine.

Memory caches are 30% of the available RAM on each box, and each disk
has a 10GB cache. I only cache small objects (96KB max): GDSF on disk,
LRU in memory.

My normal traffic is 200-400 HTTP requests per second per box (I don't
use ICP), with 5-20% CPU utilization - that's serving between
15-25Mb/s of outbound traffic.

I have peaked at 800 req/s with 40% CPU (the current origin servers
may be a bottleneck though.)

I intend to test new machines soon, using a large 64GB RAM cache and a
small 500MB disk cache.
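The setup described above maps roughly onto squid.conf directives like these; sizes are taken from the post, paths are hypothetical:

```
cache_mem 2048 MB                       # ~30% of RAM on a 7GB box
maximum_object_size_in_memory 96 KB     # only small objects are cached
memory_replacement_policy lru
cache_replacement_policy heap GDSF
cache_dir aufs /cache1 10240 16 256     # 10GB per disk, one line per HDD
```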

J



2008/12/6 Ken DBA [EMAIL PROTECTED]:
 Has anyone got Squid's best performance data on a server box with common
 hardware (e.g., a Dell 1950)? The data should include:

 1) concurrent connections;
 2) flow capacity;
 3) TPS (http transaction per second).

 Thanks.






Re: [squid-users] performance datas for Squid

2008-12-06 Thread john Moylan
I should add: hit ratios are ~90% of requests but only 20-30% of volume
with my current solution. My requirement is to reduce load on the
backend.

J

2008/12/6 john Moylan [EMAIL PROTECTED]:
 I have a number of squid boxes behind LVS acting as reverse proxies.

 They are all HP DL380/385's G4 Machines (about 3 years old) with 7GB
 to 12 GB per machine, 4 unraided 15K SCSI HDD for caches on each
 machine.

 Mem Caches are 30% of the available ram on each box and each disk has
 a 10GB cache.
 I only cache small objects (96K max). GDSF on disk, LRU on Memory.

 My normal traffic is 200-400 HTTP requests per second per box (I don't
 use ICP), with 5-20% CPU utilization - that's serving between
 15-25Mb/s of outbound traffic.

 I have peaked at 800 req/s with 40% CPU (the current origin servers
 may be a bottleneck though.)

 I intend to test new machines soon, using a large 64GB RAM cache and a
 small 500MB disk cache.

 J



 2008/12/6 Ken DBA [EMAIL PROTECTED]:
 Has anyone got Squid's best performance data on a server box with common
 hardware (e.g., a Dell 1950)? The data should include:

 1) concurrent connections;
 2) flow capacity;
 3) TPS (http transaction per second).

 Thanks.







Re: [squid-users] large memory squid

2008-11-13 Thread john Moylan
Should I still leave 30% of my RAM for the OS's cache etc?

J

2008/11/13 Amos Jeffries [EMAIL PROTECTED]:
 john Moylan wrote:

 Hi,

 I am about to take ownership of a new 2CPU, 4 core server with 32GB of
 RAM - I intend to add the server to my squid reverse proxy farm. My
 site is approximately 300GB including archives and I think 32GB of
 memory alone will suffice as cache for small, hot objects without
 necessitating any additional disk cache.

 Are there any potential bottlenecks if I set the disk cache to
 something like 500MB and cache_mem to something like 22GB. I'm using
 Centos 5's Squid 2.6.

 I have a full set of monitoring scripts as per
 http://www.squid-cache.org/~wessels/squid-rrd/ (thanks again) and of
 course I will be able to benchmark this myself once I have the box -
 but any tips in advance would be appreciated.


 Should run sweet. Just make sure it's a 64-bit OS and Squid build, or all
 that RAM goes to waste.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2



[squid-users] large memory squid

2008-11-12 Thread john Moylan
Hi,

I am about to take ownership of a new 2CPU, 4 core server with 32GB of
RAM - I intend to add the server to my squid reverse proxy farm. My
site is approximately 300GB including archives and I think 32GB of
memory alone will suffice as cache for small, hot objects without
necessitating any additional disk cache.

Are there any potential bottlenecks if I set the disk cache to
something like 500MB and cache_mem to something like 22GB. I'm using
Centos 5's Squid 2.6.

I have a full set of monitoring scripts as per
http://www.squid-cache.org/~wessels/squid-rrd/ (thanks again) and of
course I will be able to benchmark this myself once I have the box -
but any tips in advance would be appreciated.
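A sketch of the configuration being proposed, with values from the post (whether Squid 2.6 handles a cache_mem this large gracefully is exactly what the benchmarking should confirm):

```
cache_mem 22528 MB                          # ~22GB of RAM for hot objects
cache_dir aufs /var/spool/squid 500 16 256  # token 500MB disk cache
```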

Thanks,
John


Re: [squid-users] File cache squid

2007-12-13 Thread John Moylan
The OS file cache is Very important for most IO operations for most
applications - including Squid.


On Thu, 2007-12-13 at 17:49 +, Paul Cocker wrote:
 Is the OS file cache of any importance to squid? And by that I mean
 quite simply, HOW important is the OS file cache to squid?
 
 Paul Cocker
 IT Systems Administrator
 
 
 
 
 



Re: [squid-users] Handling GeoIP specific content

2007-12-03 Thread John Moylan
Some sites use dynamic DNS based on lookups to something like Maxmind or Quova.

J

On Dec 3, 2007 6:02 PM, Sascha Linn [EMAIL PROTECTED] wrote:

 Hi all,

 I'm sure someone has to have dealt with this... but I can only find
 one hit in the archives relating to GeoIP-specific
 content, and it's over a year old (a patch for 2.5 that appends a
 header to the request).

 Basically, we want to have a reverse proxy in Europe to offload some
 of our traffic (servers in the US.) However, some of the content on
 the site is GeoIP specific (ie. users in Spain see one thing, those in
 France another.) What's the best way to deal with this?

 thanx,
 sascha =)



Re: [squid-users] High CPU usage when cache full

2007-11-27 Thread John Moylan
Hi,

I don't have any scientific metrics on memory-only versus
disk-plus-memory caching, apart from metrics collected by keynote.com,
which show a slight improvement in overall site speed that may or may
not be related. I had been using both memory and disk until fairly
recently, but feel I should have enough RAM to cache frequently
accessed files in memory only and avoid any potential disk buffer
issues during flash-mob-type events, where our traffic can increase
10-fold in a matter of minutes.

I'll try LRU on one of the machines the next time I do a restart.

squidclient output below.
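The output was presumably produced with the cache manager's 'info' page, e.g. (port depends on your http_port):

```
squidclient -h localhost -p 80 mgr:info
```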

HTTP/1.0 200 OK
Server: squid
Date: Tue, 27 Nov 2007 16:33:36 GMT
Content-Type: text/plain
Expires: Tue, 27 Nov 2007 16:33:36 GMT
Last-Modified: Tue, 27 Nov 2007 16:33:36 GMT
X-Cache: MISS from www.xxx.xxx
X-Cache-Lookup: MISS from www.xxx.xxx:80
Via: 1.0 www..xxx:80 (squid)
Connection: close

Squid Object Cache: Version 2.6.STABLE6
Start Time: Mon, 26 Nov 2007 11:16:17 GMT
Current Time:   Tue, 27 Nov 2007 16:33:36 GMT
Connection information for squid:
Number of clients accessing cache:  43833
Number of HTTP requests received:   14361302
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   8172.3
Average ICP messages per minute since start:0.0
Select loop called: 175178637 times, 0.602 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 87.5%, 60min: 87.2%
Byte Hit Ratios:5min: 48.2%, 60min: 53.3%
Request Memory Hit Ratios:  5min: 44.7%, 60min: 45.0%
Request Disk Hit Ratios:5min: 0.1%, 60min: 0.2%
Storage Swap size:  0 KB
Storage Mem size:   6181676 KB
Mean Object Size:   0.00 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min60 min:
HTTP Requests (All):   0.00091  0.00179
Cache Misses:  0.00286  0.00286
Cache Hits:0.00179  0.00179
Near Hits: 0.00562  0.00678
Not-Modified Replies:  0.00091  0.00091
DNS Lookups:   0.0  0.03223
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:105439.283 seconds
CPU Time:   5072.935 seconds
CPU Usage:  4.81%
CPU Usage, 5 minute avg:8.48%
CPU Usage, 60 minute avg:   7.95%
Process Data Segment Size via sbrk(): 8582112 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
Total space in arena:  193504 KB
Ordinary blocks:   193413 KB 14 blks
Small blocks:   0 KB  0 blks
Holding blocks: 17104 KB  3 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  90 KB
Total in use:  210517 KB 100%
Total free:90 KB 0%
Total size:210608 KB
Memory accounted for:
Total accounted:   7735377 KB
memPoolAlloc calls: 1284461973
memPoolFree calls: 1253790650
File descriptor usage for squid:
Maximum number of file descriptors:   16384
Largest file desc currently in use:   4287
Number of file desc currently in use: 4080
Files queued for open:   0
Available number of file descriptors: 12304
Reserved number of file descriptors:   100
Store Disk files open:   0
IO loop method: epoll
Internal Data Structures:
1138371 StoreEntries
1138371 StoreEntries with MemObjects
1138143 Hot Object Cache Items
 0 on-disk objects

J

On Nov 26, 2007 2:55 PM, Tek Bahadur Limbu [EMAIL PROTECTED] wrote:
 Hi John,

 John Moylan wrote:
  Hi,
 
  I have three memory-only caches set up with 7GB of memory each (the
  machines have 12GB of physical memory each). Throughput is fairly high
  and this setup works well in reducing the number of requests for
  smaller files from my backend storage with lower latency than a disk
  and mem. solution.

 Do you have statistics regarding fetching from memory and disk? How much
 is the performance increment when using memory cache only?


 However, the caches on the machines fill up
  every 2-3 days and Squid's CPU usage subsequently goes up to 100%
  (These are all dual SMP machines and system load average remains
  around 0.7). FDs, the number of connections and swap are all fine
  when the CPU goes up, so the culprit is more than likely to be cache
  replacement.
 
  I am using heap GDSF as the policy. The maximum size in memory is set
  to 96 KB.

 Have you tried the LFUDA or the default LRU memory replacement policies?

   I am using squid

[squid-users] High CPU usage when cache full

2007-11-26 Thread John Moylan
Hi,

I have three memory-only caches set up with 7GB of memory each (the
machines have 12GB of physical memory each). Throughput is fairly high
and this setup works well in reducing the number of requests for
smaller files from my backend storage, with lower latency than a
disk-and-memory solution. However, the caches on the machines fill up
every 2-3 days and Squid's CPU usage subsequently goes up to 100%
(these are all dual SMP machines, and system load average remains
around 0.7). FDs, the number of connections, and swap are all fine
when the CPU goes up, so the culprit is more than likely cache
replacement.

I am using heap GDSF as the policy. The maximum size in memory is set
to 96 KB. I am using squid-2.6.STABLE6-4.el5 on Linux 2.6.

Is there anything I can do to make cache replacement less expensive,
apart from stopping and restarting Squid every day?

J
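Per the replies, the replacement policy is a one-line change in squid.conf; for a memory-only cache, only the memory policy matters (which alternative actually helps is an assumption to test):

```
# Heap-based alternatives to heap GDSF for the in-memory cache
memory_replacement_policy heap LRU    # or: heap LFUDA
```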


Re: [squid-users] Re: saving all web traffic

2007-11-15 Thread John Moylan
This might suit your requirements better:

http://gertjan.freezope.org/replicator/

I haven't tried it; I presume it's not as efficient a cache as Squid.

On Nov 15, 2007 4:33 PM, bryan rasmussen [EMAIL PROTECTED] wrote:
  Hi,

  I want to run Squid as my proxy and get everything that passes through
  squid and save it to my local filesystem as a sort of archive. Is
  there a specific tutorial that shows how to do this (I don't need
  everything like how to keep duplicates etc. from occurring). I thought
  it would be in the FAQ but it doesn't seem to be. I figured the
  following FAQ pages would be the most likely:


  /OperatingSquid:
  /ContentAdaptation
  /InnerWorkings
  /SquidRedirectors


 Cheers,
 Bryan Rasmussen



Re: [squid-users] Squid Performance (with Polygraph)

2007-11-14 Thread John Moylan
Yes, although your setup behaves better under high load for longer. I
stopped using diskd myself because of bug #761, although I must admit
that I had not experienced any issues on my servers when I was using
it.

Maybe one of the developers on the list can clarify: is it the case
that diskd crashes under high load, while other systems will have
reached high load and crashed long before diskd? ;)


J
On Nov 14, 2007 10:56 AM, Dave Raven [EMAIL PROTECTED] wrote:
 I have seen the error messages before, but not during these tests. diskd
 definitely seems to delay the time-till-crash by a lot - as I understand it,
 the problems in diskd are crashes under high load, not that it slows down,
 right?

 Thanks for the help
 Dave

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of John Moylan
 Sent: Wednesday, November 14, 2007 12:39 PM
 To: Dave Raven
 Subject: Re: [squid-users] Squid Performance (with Polygraph)


 Doesn't diskd have a bug whereby it has issues under heavy load?
 http://www.squid-cache.org/bugs/show_bug.cgi?id=761 . If so, I am
 surprised that it is behaving best under heavy load.
 http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE16-RELEASENOTES.html

 J




Re: [squid-users] Squid cluster - flat or hierarchical

2007-11-06 Thread John Moylan
Hi,

My load balancing is handled very well by LVS. My caches are using
unicast ICP with the no-proxy option for their cache_peers. I don't
think CARP or round-robin anything would help me much. My concern is
whether my caches' performance could suffer from forwarding
loops if they are all siblings of each other. Is it OK to ignore the
forwarding-loop warnings in cache.log?

J





On Nov 6, 2007 7:29 AM, Amos Jeffries [EMAIL PROTECTED] wrote:

 John Moylan wrote:
  Hi,
 
  I have 4 Squid 2.6 reverse proxy servers sitting behind an LVS
  loadbalancer with 1 public IP address. In order to improve the hit
  rate, all 4 servers are peering with each other using ICP.
 
 
  squid1 - sibling squid{2,3,4}
  squid2 - sibling squid{1,3,4}
  squid3 - sibling squid{1,2,4}
  squid4 - sibling squid{1,2,3}
 
  This works fine, apart from lots of warnings about forwarding loops in
  the cache.log
 
  I would like to ensure that the configs are optimized for an
  upcoming big traffic event.
 
  Can I disregard these forwarding loops and keep my squids in a flat
  structure, or should I break them up into parent-sibling relationships?
  Will the forwarding-loop errors I am experiencing cause issues during
  a quick surge in traffic?
 

 The CARP peering algorithm has been specially designed and added to cope
 efficiently with large arrays or clusters of Squid.

 AFAIK it's as simple as adding the 'carp' option to your cache_peer
 lines in place of others such as round-robin.

 http://www.squid-cache.org/Versions/v2/2.6/cfgman/cache_peer.html

 Amos
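A sketch of what Amos suggests, with hypothetical hostnames; each frontend would address the others as CARP parents instead of ICP siblings:

```
# on squid1: hash requests across the other caches via CARP
cache_peer squid2.example.com parent 80 0 carp no-query
cache_peer squid3.example.com parent 80 0 carp no-query
cache_peer squid4.example.com parent 80 0 carp no-query
```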



[squid-users] Squid cluster - flat or hierarchical

2007-11-05 Thread John Moylan
Hi,

I have 4 Squid 2.6 reverse proxy servers sitting behind an LVS
loadbalancer with 1 public IP address. In order to improve the hit
rate, all 4 servers are peering with each other using ICP.


squid1 - sibling squid{2,3,4}
squid2 - sibling squid{1,3,4}
squid3 - sibling squid{1,2,4}
squid4 - sibling squid{1,2,3}

This works fine, apart from lots of warnings about forwarding loops in
the cache.log

I would like to ensure that the configs are optimized for an
upcoming big traffic event.

Can I disregard these forwarding loops and keep my squids in a flat
structure, or should I break them up into parent-sibling relationships?
Will the forwarding-loop errors I am experiencing cause issues during
a quick surge in traffic?
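The flat mesh above corresponds to cache_peer lines like these on squid1 (hostnames hypothetical); the proxy-only option stops siblings from re-caching each other's objects:

```
# on squid1; squid2-4 mirror this with their own peer lists
cache_peer squid2.example.com sibling 80 3130 proxy-only
cache_peer squid3.example.com sibling 80 3130 proxy-only
cache_peer squid4.example.com sibling 80 3130 proxy-only
```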


Thanks,
John


Re: [squid-users] maximum size of cache_mem

2007-09-20 Thread John Moylan
SNMP is good 

http://www.squid-cache.org/~wessels/squid-rrd/
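Enabling Squid's SNMP agent for those graphs takes only a few squid.conf lines; a minimal sketch (the community string is illustrative, and the 'localhost' ACL is assumed from the stock config):

```
snmp_port 3401
acl snmppublic snmp_community public
snmp_access allow snmppublic localhost
snmp_access deny all
```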

On Thu, 2007-09-20 at 05:08 -0700, zulkarnain wrote:
 Hi all,
 
 I've squid running with 4GB of cache_mem, seemed my
 squid unable to use 4GB of cache_mem. I would like to
 know is there any tools to analyze cache_mem
 utilization? 
 
 My system is: Fedora 7 64bit and squid-2.6S13
 
 Thanks,
 Zul 
 
 

 
***
The information in this e-mail is confidential and may be legally privileged.
It is intended solely for the addressee. Access to this e-mail by anyone else
is unauthorised. If you are not the intended recipient, any disclosure,
copying, distribution, or any action taken or omitted to be taken in reliance
on it, is prohibited and may be unlawful.
Please note that emails to, from and within RTÉ may be subject to the Freedom
of Information Act 1997 and may be liable to disclosure.



Re: [squid-users] maximum size of cache_mem

2007-09-20 Thread John Moylan
The pages linked to provide 2 very relevant graphs generated using
rrdtool, cache manager, and SNMP: Memory Usage and Page Faults.

Memory usage is, well, memory usage, and page faults are usually a good
indicator of swapping activity.

Using both graphs will enable you to tweak your memory settings to
ensure maximum usage and help you avoid swapping.

J

On Thu, 2007-09-20 at 07:44 -0700, zulkarnain wrote:
 --- John Moylan [EMAIL PROTECTED] wrote:
  SNMP is good 
  
  http://www.squid-cache.org/~wessels/squid-rrd/
  
 
 Thanks John! But SNMP still did not provide
 the cache_mem utilization.
 
 Zul
 
 

 



Re: [squid-users] maximum size of cache_mem

2007-09-20 Thread John Moylan
Hi,

That's only referring to the amount of memory currently consumed by the
memory cache. You should be able to see the figure growing as the memory
cache fills up.

J 

On Thu, 2007-09-20 at 07:50 -0700, zulkarnain wrote:
 --- Gonzalo Arana [EMAIL PROTECTED] wrote:
  Hi,
  
  Have a look at cache manager:
  
  http://wiki.squid-cache.org/SquidFaq/CacheManager
  
 
 Here is my cache manager output:
 
 Memory usage for squid via mallinfo():
 Total space in arena:  1810628 KB
 Ordinary blocks:   1810311 KB 19 blks
 Small blocks:   0 KB  0 blks
 Holding blocks: 18800 KB  5 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks: 316 KB
 Total in use:  1829111 KB 100%
 Total free:   316 KB 0%
 Total size:1829428 KB
 Memory accounted for:
 Total accounted:   1707551 KB
 memPoolAlloc calls: 115795525
 memPoolFree calls: 112049666
 
  Squid only recognizes 1.8GB of cache_mem, but I've set 4GB of
  cache_mem in squid.conf. Does this mean the maximum size of
  cache_mem is 2GB? Any help would be great. Thanks.
 
 Zul
  
 
 
   
 



[squid-users] squid pre-pending blank line

2007-09-18 Thread John Moylan
Hi,

Pages served via our reverse-proxy Squid seem to have a blank line
prepended to them. Is this normal? We are trying to validate mobile
XHTML, and this is causing us issues.

Version 2.6.STABLE6 on Centos

Thanks,

J
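For diagnosing this sort of thing, piping the body through `od -c` makes stray leading bytes visible (e.g. `curl -s http://host/ | od -c | head`; the curl invocation is illustrative). A local sketch of what a prepended blank line looks like:

```shell
# A response body that starts with a stray blank line begins with \n
# when dumped byte-by-byte:
printf '\n<html>' | od -c | head -n 1
```

In the od output, the offending byte shows up as a `\n` before the first real character of the page.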



On Tue, 2007-09-18 at 03:23 -0700, Nadeem Semaan wrote:
 I have noticed that whenever a URL contains a port, Squid does not allow it.
 For example, the webpage http://www.sns2.dns2go.com:81/helpdesk/ -
 is there a way to allow all pages when a port is specified in the link?
 
 

 



Re: [squid-users] squid pre-pending blank line

2007-09-18 Thread John Moylan
Hi,

Please disregard; the issue was being caused by a web server module.

J

On Tue, 2007-09-18 at 11:57 +0100, John Moylan wrote:
 Hi,
 
 Pages served via our reverse-proxy Squid seem to have a blank line
 prepended to them. Is this normal? We are trying to validate mobile
 XHTML, and this is causing us issues.
 
 Version 2.6.STABLE6 on Centos
 
 Thanks,
 
 J
 
 
 
 On Tue, 2007-09-18 at 03:23 -0700, Nadeem Semaan wrote:
  I have noticed that whenever a URL contains a port, Squid does not allow
  it. For example, the webpage http://www.sns2.dns2go.com:81/helpdesk/ -
  is there a way to allow all pages when a port is specified in the link?
  
  
 
  
 