EZproxy already handles HTTPS connections for HTTPS-enabled services today, 
and on modern hardware (roughly anything built since 2005), cryptographic 
processing far outpaces the speed of most network connections, so I do not 
accept the “it’s too heavy” argument against it supporting HTTPS-to-HTTP 
functionality.  Even embedded systems with 500MHz CPUs can terminate SSL 
VPNs at over 100Mb/s these days.
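
If you want to sanity-check that claim on your own hardware, OpenSSL ships a 
benchmark mode; this is a rough illustration of raw cipher throughput, not an 
EZproxy-specific test:

  # Report bulk AES throughput; uses the CPU's AES instructions where
  # available, and on anything recent prints hundreds of MB/s or more.
  openssl speed -evp aes-128-cbc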

All I am saying is that the model where you expose HTTPS to the patron while 
continuing to use HTTP to the vendor is not possible with EZproxy today, and 
that this is a policy decision rather than a technical limitation; there is 
no technical reason it could not be done.  While HTTPS-to-HTTP translation 
would not fully address the point of the original posting, it would be a step 
in the right direction until the rest of the world catches up.
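
To make that concrete, here is a minimal sketch of the kind of translation I 
mean, using nginx as the HTTPS endpoint (the approach Cary suggests below). 
The hostname, certificate paths, and the backend address (EZproxy's usual 
plain-HTTP port 2048 on localhost) are placeholders for illustration, not a 
tested configuration:

  # Terminate HTTPS for the patron; speak plain HTTP to the proxy behind it.
  server {
      listen 443 ssl;
      server_name proxy.example.edu;

      ssl_certificate     /etc/nginx/ssl/proxy.example.edu.crt;
      ssl_certificate_key /etc/nginx/ssl/proxy.example.edu.key;

      location / {
          # Forward the decrypted request to the HTTP-only service.
          proxy_pass       http://127.0.0.1:2048;
          proxy_set_header Host              $host;
          proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto https;
      }
  }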

As an aside, the lightweight nature of EZproxy seems to be becoming its 
Achilles’ heel these days, as modern web development methods are pushing the 
boundaries of its capabilities pretty hard.  The stance that EZproxy only 
supports what it understands is going to be a problem when vendors adopt 
HTTP/2.0, SDCH encoding, WebSockets, etc., just as AJAX caused issues 
previously.  Most vendor platforms are Java-based, and once Jetty starts 
supporting these features, the performance chasm between dumbed-down proxy 
connections and direct connections is going to become even more significant 
than it is today.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Jun 18, 2014, at 11:20, Cary Gordon <listu...@chillco.com> wrote:

> One of the reasons that EZProxy is so fast and resource-efficient is that
> it is very lightweight. HTTPS to HTTP processing would require that
> EZProxy, or another proxy layer behind it, provide an HTTPS endpoint.
> Building this into EZProxy, I think, would not be a good fit for
> their model.
> 
> I think that it would be simpler to just do everything in nginx, or
> possibly node.
> 
> Cary
> 
> On Wednesday, June 18, 2014, Andrew Anderson <and...@lirn.net> wrote:
> 
>> On Jun 17, 2014, at 17:09, Stuart Yeates <stuart.yea...@vuw.ac.nz> wrote:
>> 
>>> On 06/17/2014 08:49 AM, Galen Charlton wrote:
>>>> On Sun, Jun 15, 2014 at 4:03 PM, Stuart Yeates <stuart.yea...@vuw.ac.nz> wrote:
>>>>> As I read it, 'Freedom to Read' means that we have to take active steps
>>>>> to protect the rights of our readers to read what they want and in
>>>>> private.
>>>> [snip]
>>>>> * building HTTPS Everywhere-like functionality into LMSs (such
>>>>> functionality may already exist, I'm not sure)
>>>> 
>>>> Many ILSs can be configured to require SSL to access their public
>>>> interfaces, and I think it would be worthwhile to encourage that as a
>>>> default expectation for discovery interfaces.
>>>> 
>>>> However, I think that's only part of the picture for ILSs.  Other
>>>> parts would include:
>>>> 
>>>> * staff training on handling patron and circulation data
>>>> * ensuring that the ILS has the ability to control (and let users
>>>> control) how much circulation and search history data gets retained
>>>> * ensuring that the ILS backup policy strikes the correct balance
>>>> between having enough for disaster recovery while not keeping
>>>> individually identifiable circ history forever
>>>> * ensuring that contracts with ILS hosting providers and services that
>>>> access patron data from the ILS have appropriate language concerning
>>>> data retention and notification of subpoenas.
>>> 
>>> Compared to other contributors to this thread, I appear to be (a) less
>>> worried about state actors than our commercial partners and (b) keener to
>>> see relatively straightforward technical fixes that just work 'for free'
>>> across large classes of library systems. Things like:
>>> 
>>> * An ILS module that pulls the HTTPS Everywhere ruleset from
>>> https://gitweb.torproject.org/https-everywhere.git/tree/HEAD:/src/chrome/content/rules
>>> and applies those rules as a standard data-cleanup step on all imported
>>> data (MARC, etc.).
>>> 
>>> * A plugin to the CMS that drives the library's websites / blogs /
>>> whatever and uses the same rulesets to default all links to HTTPS.
>>> 
>>> * An EZproxy plugin (or howto) on silently redirecting users to HTTPS
>>> over HTTP sites.
>>> 
>>> cheers
>>> stuart
>> 
>> This is something that I have been interested in as well, and I have been
>> asking our content providers when they will make their content available
>> via HTTPS, but so far with very little uptake.  Perhaps if enough customers
>> start asking, it will get enough exposure internally to drive adoption of
>> HTTPS for the content side.
>> 
>> I looked into what EZproxy offers for the user side, and that product does
>> not currently have the ability to do HTTPS-to-HTTP proxying, even though
>> there is no technical reason why it could not be done (look at how many
>> HTTPS sites run Apache as a reverse proxy in front of internal HTTP servers
>> for load balancing, etc.).
>> 
>> EZproxy assumes that an HTTP resource will always be accessed over HTTP,
>> and you cannot configure an HTTPS entry point to HTTP services to at least
>> secure the side of the communication channel that is going to contain more
>> identifiable information about the user, before it becomes aggregated into
>> the general proxy stream.
>> 
>> --
>> Andrew Anderson, Director of Development, Library and Information
>> Resources Network, Inc.
>> http://www.lirn.net/ | http://www.twitter.com/LIRNnotes |
>> http://www.facebook.com/LIRNnotes
>> 
> 
> 
> -- 
> Cary Gordon
> The Cherry Hill Company
> http://chillco.com
