Re: [osol-discuss] job interview help!
Rosie,

Not really an OpenSolaris/Solaris question. Also, you may want to consolidate down to an Oracle/Solaris 10 architecture, removing the MySQL/Sybase maintenance overhead.

~ Ken

--- On Mon, 3/15/10, Bayard Bell wrote:

From: Bayard Bell
Subject: Re: [osol-discuss] job interview help!
To: "Rosie"
Cc: opensolaris-discuss@opensolaris.org
Date: Monday, March 15, 2010, 6:57 PM

If you're doing a cross-building exercise, a significant question is the network failover model: is service provision dependent on providing consistent service access at the IP or network-name level? If it's currently tied to an IP address, does your network topology support address failover? What kind of concurrency model does the application allow? Is the application tied to a locally mounted filesystem? What starts and stops application processes, and what does their process model look like for the various components (does the application daemonise itself, vs. something like "(/usr/local/bin/myapp &)" for backgrounding and separation from the current process group, vs. a non-persistent process, vs. a scheduled application under something like cron or autosys)? What interfaces do you have to verify application state (e.g. "kill -0 $(cat /var/run/myapp)" vs. being able to contact a management thread in the server via an IP or domain socket)? Do application components currently run under SMF, or are you running an older kind of startup script, or with additional mechanisms like cron?

On 11 Mar 2010 at 22:14, Rosie wrote:
> hi guys,
>
> I have a job interview next week and have been asked to make a presentation
> on the following topic
>
> The computerised Library System at a university runs on a number of servers,
> two of which are essential to the service. These two standalone servers
> provide different parts of the service and are each single points of failure.
> The two servers and the applications running on them are:
>
> • Sun Fire V240 – 2x1503 MHz UltraSPARC III CPUs – 8GB memory – 4 years old –
> Solaris 9 – Oracle 10.2 – MySQL 4.1.9-standard – applications to access the library database.
>
> • Sun Fire V490 – 2x1350 MHz UltraSPARC IV CPUs – 8GB memory – 4 years old –
> Solaris 8 – Sybase 12.0 – application to access the university portal.
>
> Storage is provided on a dual-site Storage Area Network.
>
> We must introduce high availability into our increasingly important Library
> Systems, so we wish to replace these servers with new hardware and a
> configuration which will give us high availability and minimise future downtime.
>
> Suggest how this may be achieved based on the following assumptions:
>
> • The new high availability system will be hosted on Sun servers running Solaris 10.
> • We have two data centres in separate locations with fast fibre connections.
> • Data storage will continue to be provided from a two-site SAN.
>
> please, please help
> anything at all would be greatly appreciated
>
> thanks,
> Rosie
> -- This message posted from opensolaris.org

___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
Re: [osol-discuss] job interview help!
If you're doing a cross-building exercise, a significant question is the network failover model: is service provision dependent on providing consistent service access at the IP or network-name level? If it's currently tied to an IP address, does your network topology support address failover? What kind of concurrency model does the application allow? Is the application tied to a locally mounted filesystem? What starts and stops application processes, and what does their process model look like for the various components (does the application daemonise itself, vs. something like "(/usr/local/bin/myapp &)" for backgrounding and separation from the current process group, vs. a non-persistent process, vs. a scheduled application under something like cron or autosys)? What interfaces do you have to verify application state (e.g. "kill -0 $(cat /var/run/myapp)" vs. being able to contact a management thread in the server via an IP or domain socket)? Do application components currently run under SMF, or are you running an older kind of startup script, or with additional mechanisms like cron?

On 11 Mar 2010 at 22:14, Rosie wrote:

hi guys,

I have a job interview next week and have been asked to make a presentation on the following topic

The computerised Library System at a university runs on a number of servers, two of which are essential to the service. These two standalone servers provide different parts of the service and are each single points of failure. The two servers and the applications running on them are:

• Sun Fire V240 – 2x1503 MHz UltraSPARC III CPUs – 8GB memory – 4 years old – Solaris 9 – Oracle 10.2 – MySQL 4.1.9-standard – applications to access the library database.

• Sun Fire V490 – 2x1350 MHz UltraSPARC IV CPUs – 8GB memory – 4 years old – Solaris 8 – Sybase 12.0 – application to access the university portal.

Storage is provided on a dual-site Storage Area Network.
We must introduce high availability into our increasingly important Library Systems, so we wish to replace these servers with new hardware and a configuration which will give us high availability and minimise future downtime.

Suggest how this may be achieved based on the following assumptions:

• The new high availability system will be hosted on Sun servers running Solaris 10.
• We have two data centres in separate locations with fast fibre connections.
• Data storage will continue to be provided from a two-site SAN.

please, please help
anything at all would be greatly appreciated

thanks,
Rosie
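To make the liveness-check question above concrete: a pidfile check of the "kill -0 $(cat /var/run/myapp)" variety is only as reliable as the pidfile itself, since a stale file can point at a PID that no longer exists (or has been reused). A minimal sketch of such a check — the function name and pidfile path are invented for illustration:

```shell
# check_myapp PIDFILE
# Reports whether the process recorded in PIDFILE is still alive.
# kill -0 sends no signal at all; it only tests that the PID exists
# and that we are permitted to signal it.
check_myapp() {
    pidfile=$1
    if [ ! -r "$pidfile" ]; then
        echo "no pidfile, assuming not running"
        return 1
    fi
    pid=$(cat "$pidfile")
    if kill -0 "$pid" 2>/dev/null; then
        echo "running (pid $pid)"
        return 0
    else
        echo "stale pidfile (pid $pid is gone)"
        return 1
    fi
}
```

Note the caveat this sketch carries: between writing the pidfile and checking it, the PID may have been reused by an unrelated process, which is one reason contacting the service itself (or letting SMF track the contract) is a stronger health check than inspecting a pidfile.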
Re: [osol-discuss] job interview help!
I believe he said that you need to re-harden after patching. I don't disagree with that at all.

On Fri, Mar 12, 2010 at 3:22 PM, Mike DeMarco wrote:
> Yes, but to Mike Gerdts' point, if you go applying patches after you have
> hardened, then you are applying holes to your hardened system through the
> patches, which you may or may not discover.
Re: [osol-discuss] job interview help!
Yes, but to Mike Gerdts' point, if you go applying patches after you have hardened, then you are applying holes to your hardened system through the patches, which you may or may not discover.
Re: [osol-discuss] job interview help!
On Fri, Mar 12, 2010 at 9:36 AM, Mike DeMarco wrote:
>> everything else on the SAN. Don't forget to minimize and harden the
>> build as much as possible before you patch, and patch (including
>> firmware) before you let users of any type on it.
>
> Can you explain the logic behind minimize and harden before you patch? I have
> always fully patched, then run JASS, then minimized.
>
> I would think that if I minimized first it would save some time patching, but
> that something could get missed in the patch install. Something like: if one
> package is not installed, a patch to a library that is used by another
> package could get missed.

There are several reasons, not the least of which are time and storage. However, the ones to be more concerned about are issues that come up from patches still being listed as "applied" when some, if not all, of the underlying files have actually been removed through the minimization process. This leaves the system in an "unknown" state with regard to the applicability of future patches.

fpsm
Re: [osol-discuss] job interview help!
> On Fri, Mar 12, 2010 at 8:36 AM, Mike DeMarco wrote:
> >> everything else on the SAN. Don't forget to minimize and harden the
> >> build as much as possible before you patch, and patch (including
> >> firmware) before you let users of any type on it.
> >
> > Can you explain the logic behind minimize and harden before you patch? I have
> > always fully patched, then run JASS, then minimized.
>
> Hardening should always be verified after patching. Solaris patches
> quite commonly whack hardening applied to sendmail.
>
> > I would think that if I minimized first it would save some time patching, but
> > that something could get missed in the patch install. Something like: if one
> > package is not installed, a patch to a library that is used by another
> > package could get missed.
>
> If your minimization is such that patching breaks, the order doesn't
> matter. At some point x months or years in the future you will need
> to patch again. Don't minimize to the point that patching breaks.
>
> --
> Mike Gerdts
> http://mgerdts.blogspot.com/

I fully agree. Secure after patching, every time.
Re: [osol-discuss] job interview help!
On Fri, Mar 12, 2010 at 8:36 AM, Mike DeMarco wrote:
>> everything else on the SAN. Don't forget to minimize and harden the
>> build as much as possible before you patch, and patch (including
>> firmware) before you let users of any type on it.
>
> Can you explain the logic behind minimize and harden before you patch? I have
> always fully patched, then run JASS, then minimized.

Hardening should always be verified after patching. Solaris patches quite commonly whack hardening applied to sendmail.

> I would think that if I minimized first it would save some time patching, but
> that something could get missed in the patch install. Something like: if one
> package is not installed, a patch to a library that is used by another
> package could get missed.

If your minimization is such that patching breaks, the order doesn't matter. At some point x months or years in the future you will need to patch again. Don't minimize to the point that patching breaks.

--
Mike Gerdts
http://mgerdts.blogspot.com/
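One low-tech way to do the post-patch verification described above is to checksum the files your hardening run touched, then re-check after every patch cycle to catch a patch that silently replaced one of them (the sendmail case, for instance). A minimal sketch — the baseline path and file list here are invented, and you would substitute whatever your JASS/hardening profile actually modifies:

```shell
# Record and verify checksums of hardened configuration files.
# save_baseline FILE...   - record the current checksums
# check_baseline FILE...  - compare current checksums against the baseline
BASELINE=${BASELINE:-/var/tmp/harden.cksum}

save_baseline() {
    # One "checksum size filename" line per hardened file.
    cksum "$@" > "$BASELINE"
}

check_baseline() {
    # diff against "-" (stdin) so nothing is written to disk.
    if cksum "$@" | diff "$BASELINE" - >/dev/null; then
        echo "hardened files unchanged since baseline"
    else
        echo "WARNING: hardened files changed -- re-verify hardening"
        return 1
    fi
}
```

Typical use would be `save_baseline /etc/mail/sendmail.cf /etc/default/login` right after hardening, then `check_baseline` with the same list after each patch run. This only detects drift; it doesn't tell you whether the change undid the hardening, so a changed file still needs a manual (or JASS-driven) re-check.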
Re: [osol-discuss] job interview help!
> everything else on the SAN. Don't forget to minimize and harden the
> build as much as possible before you patch, and patch (including
> firmware) before you let users of any type on it.

Can you explain the logic behind minimize and harden before you patch? I have always fully patched, then run JASS, then minimized.

I would think that if I minimized first it would save some time patching, but that something could get missed in the patch install. Something like: if one package is not installed, a patch to a library that is used by another package could get missed.
Re: [osol-discuss] job interview help!
I feel like I'm doing someone's homework assignment for them. Wait, I am - and for free at that!

There are quite a few more questions that need to be asked before a proper solution can be fully fleshed out, for example:

- how far apart are the two data centers? are they both "live", or is one for DR/BC?
- how is the performance of the current hardware?
- do you just need HA, or is LB a concern as well?
- does the SAN provide the data replication behind the scenes, or does the cluster need to do that for itself?
- are there any security or performance concerns that mandate separate hardware for the two solutions?

Assuming that the data centers are in different buildings, but otherwise relatively close together, at a minimum you would need two servers (one at each site) in a clustered configuration. I would suggest building a Zone Cluster for each "application" on Solaris 10 (10/09) and Sun Cluster 3.2 (11/09), on X4450 or M3000 servers with at least 16GB of RAM, 2 redundant quad-port GigE NICs, and 2 redundant dual-port HBAs (as fast as you can get, and don't mix vendors) - make sure you separate disk and tape I/O. Put 3 small drives in each box for the root pool for the global zone and the LU disk, and put everything else on the SAN. Don't forget to minimize and harden the build as much as possible before you patch, and patch (including firmware) before you let users of any type on it.

If the data centers are farther apart, then I would do the same thing, but put both nodes at one site, then build a second clustered pair at the second site and use Geographic Edition between the sites for failover.

fpsm

On Thu, Mar 11, 2010 at 5:14 PM, Rosie wrote:
> hi guys,
>
> I have a job interview next week and have been asked to make a presentation
> on the following topic
>
> The computerised Library System at a university runs on a number of servers,
> two of which are essential to the service.
> These two standalone servers provide different parts of the service and are
> each single points of failure. The two servers and the applications running on them are:
>
> • Sun Fire V240 – 2x1503 MHz UltraSPARC III CPUs – 8GB memory – 4 years old –
> Solaris 9 – Oracle 10.2 – MySQL 4.1.9-standard – applications to access the library database.
>
> • Sun Fire V490 – 2x1350 MHz UltraSPARC IV CPUs – 8GB memory – 4 years old –
> Solaris 8 – Sybase 12.0 – application to access the university portal.
>
> Storage is provided on a dual-site Storage Area Network.
>
> We must introduce high availability into our increasingly important Library
> Systems, so we wish to replace these servers with new hardware and a
> configuration which will give us high availability and minimise future downtime.
>
> Suggest how this may be achieved based on the following assumptions:
>
> • The new high availability system will be hosted on Sun servers running Solaris 10.
> • We have two data centres in separate locations with fast fibre connections.
> • Data storage will continue to be provided from a two-site SAN.
>
> please, please help
> anything at all would be greatly appreciated
>
> thanks,
> Rosie
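For a sense of what the Zone Cluster suggestion above involves in practice, here is a rough sketch of configuring one zone cluster (say, for the library database tier) with clzonecluster(1CL) from Sun Cluster 3.2. The cluster name, zonepath, and host names are all invented, and the exact subcommand syntax should be checked against the Sun Cluster documentation for your release:

```shell
# Hypothetical two-node zone cluster 'libdb', run from one global-zone
# node of the underlying Sun Cluster; every name below is made up.
clzonecluster configure libdb
clzc:libdb> create
clzc:libdb> set zonepath=/zones/libdb
clzc:libdb> add node
clzc:libdb:node> set physical-host=clnode1
clzc:libdb:node> set hostname=libdb-z1
clzc:libdb:node> end
clzc:libdb> add node
clzc:libdb:node> set physical-host=clnode2
clzc:libdb:node> set hostname=libdb-z2
clzc:libdb:node> end
clzc:libdb> commit
clzc:libdb> exit
clzonecluster install libdb
clzonecluster boot libdb
```

The point of this layout is isolation: each "application" (library database, portal) gets its own virtual cluster with its own failover resource groups, so patching or failing over one does not disturb the other, while both share the same pair of physical nodes.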
[osol-discuss] job interview help!
hi guys,

I have a job interview next week and have been asked to make a presentation on the following topic

The computerised Library System at a university runs on a number of servers, two of which are essential to the service. These two standalone servers provide different parts of the service and are each single points of failure. The two servers and the applications running on them are:

• Sun Fire V240 – 2x1503 MHz UltraSPARC III CPUs – 8GB memory – 4 years old – Solaris 9 – Oracle 10.2 – MySQL 4.1.9-standard – applications to access the library database.

• Sun Fire V490 – 2x1350 MHz UltraSPARC IV CPUs – 8GB memory – 4 years old – Solaris 8 – Sybase 12.0 – application to access the university portal.

Storage is provided on a dual-site Storage Area Network.

We must introduce high availability into our increasingly important Library Systems, so we wish to replace these servers with new hardware and a configuration which will give us high availability and minimise future downtime.

Suggest how this may be achieved based on the following assumptions:

• The new high availability system will be hosted on Sun servers running Solaris 10.
• We have two data centres in separate locations with fast fibre connections.
• Data storage will continue to be provided from a two-site SAN.

please, please help
anything at all would be greatly appreciated

thanks,
Rosie