On Jan 20, 2009, at 9:01 AM, Benn Oshrin wrote:

> At the time it was Oracle, though it has since moved to MySQL. The idea
> was to offload HA to the database, though I'm not sure how successful
> that was.

That's what I want to avoid.

> Here at Rutgers we're using repcached, which is lightweight but fits
> well.

So that means you are using 2 CAS servers? We are currently using 2,  
but it would be ideal to have 4 (2 on campus, and 2 in the data center).
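
As a rough sketch of what a repcached-backed ticket store looks like
from the CAS side, here is what writing and reading a service ticket
through the spymemcached client might look like; the host name, key,
and expiry below are made up, and in practice this logic would live
inside a TicketRegistry implementation rather than a main() method:

import java.net.InetSocketAddress;
import java.util.Collections;
import java.util.concurrent.TimeUnit;

import net.spy.memcached.MemcachedClient;

// Rough sketch only: storing and fetching a service ticket in memcached.
// With repcached, writes to the master are mirrored to the standby node,
// so a surviving CAS server can still resolve the ticket after a failure.
public class MemcachedTicketSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical master node of the repcached pair.
        MemcachedClient cache = new MemcachedClient(Collections.singletonList(
                new InetSocketAddress("cas-cache1.example.edu", 11211)));

        // Service tickets are short-lived, so a small expiry (seconds) is enough.
        String ticketId = "ST-1234-abcdef-cas1.example.edu";
        cache.set(ticketId, 300, "serialized ticket state").get(5, TimeUnit.SECONDS);

        // Any CAS node pointed at the same cache can look the ticket up.
        System.out.println("found: " + cache.get(ticketId));

        cache.shutdown();
    }
}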

> The only catch is that the protocol is clear text, so you need to have
> a secure network layer, which is all the more important if you're
> replicating across campus.

Thanks.

-lucas

> On Jan 19, 2009, at 2:07 PM, Lucas Rockwell <l...@berkeley.edu> wrote:
>
>> Hi Benn,
>>
>> Thanks for the reply!
>>
>> On Jan 19, 2009, at 10:05 AM, Benn Oshrin wrote:
>>
>>> We thought about doing something similar at Columbia a few years ago,
>>> but ultimately went with a DB-backed store. Our concerns included
>>
>> What database and clustering method are you using? I am not opposed to
>> using a DB, but I am having difficulty finding a relatively simple
>> cluster technology.
>>
>> Also, a little more background on our situation: our higher-ups want
>> us to explore moving the CAS servers around campus so that if there is
>> a problem with the data center, people can still authenticate, which
>> means moving our LDAP servers out as well (our Kerberos servers are
>> already distributed).
>>
>>> (1) Only 1/n (where n = # of servers) of validation requests would
>>>   hit the correct server and not need forwarding, meaning an
>>>   increase of (n-1)/n in validation transactions
>>
>> This is true.
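
(To put a number on that concern: with four servers and an evenly
balanced load, only a quarter of validation requests land on the server
that issued the ticket, so roughly three out of four validations
trigger a second, forwarded request.)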
>>
>>> (2) It wasn't replication, so if a server went down, we would
>>>   still lose 1/n of the login state, meaning we couldn't
>>>   really call this high availability
>>
>> This is why I suggest replicating the TGTs in something like LDAP,
>> which does secure HA replication very well (but is not super speedy
>> on writes, which is why it would be a good fit for TGTs).
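
As a rough illustration of what storing TGTs in LDAP could look like,
here is a sketch using plain JNDI; the directory suffix, object class,
and attribute names below are invented for the example, and a real
registry would also need to handle serialization and ticket expiry:

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

// Sketch only: writing a TGT entry to a replicated LDAP directory.
// The DN layout, object class, and attribute names are hypothetical.
public class LdapTgtSketch {

    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldaps://ldap1.example.edu:636");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=cas,ou=services,dc=example,dc=edu");
        env.put(Context.SECURITY_CREDENTIALS, "changeit");
        DirContext ctx = new InitialDirContext(env);

        String tgtId = "TGT-1-abcdef-cas1.example.edu";
        BasicAttributes attrs = new BasicAttributes();
        BasicAttribute objectClass = new BasicAttribute("objectClass");
        objectClass.add("top");
        objectClass.add("casTicket"); // hypothetical schema
        attrs.put(objectClass);
        attrs.put("casTicketId", tgtId);
        attrs.put("casTicketData", "serialized TGT state".getBytes("UTF-8"));

        // The directory's own (multi-)master replication then carries the
        // entry to the replica servers around campus.
        ctx.createSubcontext("casTicketId=" + tgtId + ",ou=tickets,dc=example,dc=edu",
                attrs);
        ctx.close();
    }
}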
>>
>> -lucas
>>
>>> Lucas Rockwell <l...@berkeley.edu> wrote on January 15, 2009,
>>> 1:57:12 PM -0800:
>>>
>>> ] Hello CAS developers,
>>> ]
>>> ] I have an idea about how to make CAS highly available (HA) without
>>> ] doing Service Ticket/Proxy (Granting) Ticket (ST and PT) replication,
>>> ] so I would like to propose it to you all and see what you think.
>>> ]
>>> ] First, let me start off by saying that I think it is still wise to
>>> ] replicate TGTs, especially considering replicating TGTs can withstand
>>> ] very high latencies (several seconds or more). Actually, the latency
>>> ] tolerance for TGTs is such that you could even replicate them in LDAP
>>> ] if you wanted to (which does replication very well, and securely).
>>> ]
>>> ] So, my proposal for "replicating" STs and PTs is as follows: Make
>>> ] each CAS server aware of its peers, and use the CAS protocol itself
>>> ] for validating tickets. I will try to clarify this with an example:
>>> ]
>>> ] The simplest scenario is 2 CAS servers -- CAS1 and CAS2 -- (but this
>>> ] scales to n CAS servers). Each CAS server has a list of all the CAS
>>> ] servers in the cluster (using "cluster" for lack of a better term),
>>> ] including itself. When a CAS server (CAS1) receives a ST or PT for
>>> ] validation, it simply looks at the ticket to determine where the
>>> ] ticket originated (this is done by inspecting the end of the ticket
>>> ] value, which will have the hostname of the originating CAS server
>>> ] appended to it -- just as tickets do now). If the ticket originated
>>> ] with itself (CAS1), it handles the validation like normal. If the
>>> ] ticket originated with another CAS server (CAS2), the CAS1 server
>>> ] now becomes a CAS client and asks CAS2 to do the validation (using
>>> ] the CAS protocol), and then CAS1 just passes the response right back
>>> ] to the client as if it (CAS1) had done the validation.
>>> ]
>>> ] That's it. This whole concept could probably be implemented in the
>>> ] CentralAuthenticationService class.
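
Here is a rough sketch of that forwarding path, assuming the peer list
and hostname suffix described above; the class name, peer map, and the
local-validation stub are illustrative rather than actual CAS APIs,
though /serviceValidate is the standard CAS 2.0 validation endpoint:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Map;
import java.util.Scanner;

// Illustrative sketch of the peer-forwarding idea described above.
public class PeerAwareValidator {

    private final String localHostname;            // e.g. "cas1.example.edu"
    private final Map<String, String> peerBaseUrls; // host -> "https://cas2.example.edu/cas"

    public PeerAwareValidator(String localHostname, Map<String, String> peerBaseUrls) {
        this.localHostname = localHostname;
        this.peerBaseUrls = peerBaseUrls;
    }

    // Returns the raw CAS validation response (XML) for the given ST or PT.
    public String validate(String ticketId, String service) throws Exception {
        // Tickets carry the issuing host as a suffix, e.g. "ST-42-xyz-cas2.example.edu".
        String origin = ticketId.substring(ticketId.lastIndexOf('-') + 1);

        if (localHostname.equals(origin)) {
            return validateLocally(ticketId, service); // the normal local path
        }

        // Otherwise act as a CAS client against the issuing peer and relay its answer.
        URL url = new URL(peerBaseUrls.get(origin) + "/serviceValidate?ticket="
                + URLEncoder.encode(ticketId, "UTF-8")
                + "&service=" + URLEncoder.encode(service, "UTF-8"));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        InputStream in = conn.getInputStream();
        try {
            Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A");
            return scanner.hasNext() ? scanner.next() : "";
        } finally {
            in.close();
        }
    }

    private String validateLocally(String ticketId, String service) {
        // Placeholder for the existing local validation logic.
        throw new UnsupportedOperationException("local validation not shown");
    }
}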
>>> ]
>>> ] Of course, this concept scales to n CAS servers, but I do not know
>>> ] what the throughput would be even with just 2 CAS servers. Still,
>>> ] this certainly makes CAS HA without a lot of extra baggage. The
>>> ] local ST and PT ticket registry can be implemented in memcache or
>>> ] even a DB, but the nice thing is that it does not have to be
>>> ] replicated. As I said in the beginning, the TGTs could be stored in
>>> ] something like LDAP, which does replication very well but is not
>>> ] fast enough for STs and PTs.
>>> ]
>>> ] Please let me know what you think and/or if you need more
>>> ] clarification.
>>> ]
>>> ] Thanks.
>>> ]
>>> ] -lucas
>>> ]
>>> ] -----------------------
>>> ] Lucas Rockwell
>>> ] CalNet Team
>>> ] l...@berkeley.edu
>>> ]

_______________________________________________
Yale CAS mailing list
cas@tp.its.yale.edu
http://tp.its.yale.edu/mailman/listinfo/cas
