Yes, my basic vision for Strikeforce Blade was to 'acquire, provision, and
deploy several compute nodes functioning as a cluster of hypervisors'.

This would give us a bunch of physical compute nodes, each running a
hypervisor, all joined together into one clustered hypervisor pool.
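
To make that concrete, here is a minimal sketch of what the pool could look
like from a management script's point of view. This assumes a KVM/libvirt
stack purely for illustration (we have not picked a platform), and the node
hostnames below are made up:

import libvirt  # python-libvirt bindings; assumes KVM/libvirt, which is only one of the options

# Hypothetical node names; nothing has been bought or named yet.
NODES = ["blade01.skullspace.local",
         "blade02.skullspace.local",
         "blade03.skullspace.local"]

def cluster_capacity():
    """Sum CPU and memory across every hypervisor node in the pool."""
    total_cpus = 0
    total_mem_mb = 0
    for node in NODES:
        # Read-only connection to each hypervisor over SSH.
        conn = libvirt.openReadOnly("qemu+ssh://%s/system" % node)
        # getInfo() returns [model, memory (MB), cpus, mhz, numa nodes, sockets, cores, threads]
        info = conn.getInfo()
        total_mem_mb += info[1]
        total_cpus += info[2]
        print("%s: %d CPUs, %d MB RAM, %d VMs running"
              % (node, info[2], info[1], conn.numOfDomains()))
        conn.close()
    print("cluster total: %d CPUs, %d MB RAM" % (total_cpus, total_mem_mb))

if __name__ == "__main__":
    cluster_capacity()

A real clustered product (vSphere, Proxmox VE, oVirt, whatever we end up
picking) would do this aggregation for us; the point is just that members
would see one pool, not individual boxes.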

So, from an end user's perspective (read: any SkullSpace member's
perspective), they could request that a virtual machine be created for them.
Someone would receive that request, create the VM, allocate cluster resources
for it, and create a user account with access to that VM (or grant an
existing account rights to it). The end user would then receive an email with
the IP address of the cluster management interface, where they can log in,
adjust the settings of (only) their own virtual machine, put an ISO file into
the virtual DVD drive, and power on their VM. From there they can remotely
install the operating system, set up IP addressing and routing, and enable
remote management of the VM directly (remote desktop, SSH, VNC, whatever).
After that, they log in to their VM and do whatever they want. If they need
to troubleshoot (reboot, etc.), they can use the same account to log back
into the cluster management interface and reboot or reconfigure their VM.
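
For what it's worth, the admin-side step of "create a VM, put an ISO in the
virtual DVD, turn it on" would look roughly like the sketch below if we went
with KVM/libvirt. Again, this is only an illustration; the VM name, disk
paths, and sizes are invented, and whatever cluster management interface we
pick would hide all of this behind its own UI with per-member permissions.

import libvirt  # illustrative only; assumes KVM/libvirt, which we have not committed to

# Everything below (name, paths, sizes) is hypothetical.
DOMAIN_XML = """
<domain type='kvm'>
  <name>member-vm-example</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='cdrom'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/member-vm-example.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- the "virtual DVD", loaded with the member's install ISO -->
    <disk type='file' device='cdrom'>
      <source file='/var/lib/libvirt/images/install.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <!-- VNC console so the member can drive the OS installer remotely -->
    <graphics type='vnc' port='-1' autoport='yes'/>
  </devices>
</domain>
"""

# Assumes the qcow2 disk image was already created (e.g. with qemu-img).
conn = libvirt.open("qemu:///system")  # connect to the hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the VM definition
dom.create()                           # power it on; it boots the install ISO
print("started %s" % dom.name())
conn.close()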

Also note that the reason for "requesting" accounts, rather than just
letting people do whatever they want, is not to 'moderate' machines, but
simply to allow capacity planning and management of compute resources, as
well as the simple fact that administering a virtualization cluster is not
something even 90% of system administrators have experience with, and it's
easy to fuck up. Tons of admins are now experimenting with virtualization,
but only a chosen few have played with clustered virtualization.

On Sat, Aug 25, 2012 at 10:31 AM, Stefan penner <stefan.pen...@gmail.com> wrote:

> The original CPU. At the time I donated it was a q6600. So 64bit yea,
> virtualization extensions no. That being said the CPU may have been changed.
>
> As for the blade discussion, I strongly suggest the investment goes
> towards a nice virtualized environment of some kind. I feel this would lead
> to the most flexible and cost effective solution.
>
>
> On Saturday, August 25, 2012, chris kluka wrote:
>
>> Yes, we could. Though it should probably get a 64-bit processor (if I
>> remember correctly, it had been socketed with a 32-bit processor? Can
>> someone confirm this?)
>>
>>
>>
>> On Sat, Aug 25, 2012 at 3:01 AM, Ken DeWitt <
>> kendew...@yourfellowtechguy.com> wrote:
>>
>> Is the current VM server a blade?  If not, then we could build off the
>> current system.
>>
>> Any questions or comments, you can email or call me at any time.
>> I will get back to you as fast as I can.
>>
>> Thank you and have a nice day!!
>>
>> Ken DeWitt
>> Your Fellow Tech. Guy
>>
>> Phone # : 204-998-3218
>> Email: kendew...@yftg.ca
>>
>>
>>
>> On Sat, Aug 25, 2012 at 2:57 AM, Chris Kluka <asd...@asdlkf.net> wrote:
>>
>> If we got a blade system, we would not be able to 'fit' those drives.
>> Each blade can fit 2 drives, and they must be 2.5" SAS drives.
>>
>> Sent from my iPhone
>>
>> On Aug 25, 2012, at 2:03 AM, Ken DeWitt <kendew...@yourfellowtechguy.com>
>> wrote:
>>
>> I would like to make a suggestion: the hardware that everyone has already
>> paid for in their current servers could be added to the VM server, meaning
>> their hard drives, not the motherboards. The hard drives can be labelled
>> so the admins know whose drive is whose. I suggest this because I read in
>> an earlier email that someone could help the members convert their current
>> servers over to VMs with their current OS, scripts, and all the software
>> they worked so hard on and spent so much time setting up. This would still
>> give the members a system as close to their current setup as possible. I
>> don't know if it is possible to link the hard drives to the different VM
>> logins for each user. Something else that would help, if the members want
>> to add all their hard drives, is to sell their current hardware (minus the
>> hard drives) and put the money, or a portion of it, into a pot to add
>> hardware RAID cards, a better motherboard, and more RAM. Then all the
>> money they invested in their current servers would not all go to waste.
>>
>> Hope that makes sense.
>>
>> Any questions or comments, you can email or call me at any time.
>> I will get back to you as fast as I can.
>>
>> Thank you and have a nice day!!
>>
>> Ken DeWitt
>> Your Fellow Tech. Guy
>>
>> Phone # : 204-998-3218
>> Email: kendew...@yftg.ca
>>
>>
>>
>> On Sat, Aug 25, 2012 at 12:02 AM, Mark Jenkins <m...@parit.ca> wrote:
>>
>> On 23/08/12 11:53 PM, ayecee wrote:
>>
>> Yes, Mark runs a VM server. It runs noisy and hot even when lightly
>> loaded, and when your VM stops working, you can't go to the space and
>> kick it.
>>
>> I'm presently unable to login to my account on it, and some day I'll get
>> around to rectifying that.
>>
>>
>> Problem solved.
>>
>> But the VM service (vmsrv) itself was not to blame for ayecee's issue
>> (hence the subject line):
>> http://www.skullspace.ca/wiki/index.php/Vmsrv
>>
>> Ayecee has a MUMD account -- which is a service running under vmsrv.
>> http://www.skullspace.ca/wiki/index.php/Mumd
>>
>> One of the upsides of a mumd account is that you can use it to log into
>> the vmsrv host operating system as well.
>>
>> This is documented in the vmsrv wiki page:
>> """[vmsrv] Accounts
>>
>> Pick one of two ways to get an account:
>>
>>   *  Ask the admin team (Mark Jenkins <
>>
>>
_______________________________________________
SkullSpace Discuss Mailing List
Help: http://www.skullspace.ca/wiki/index.php/Mailing_List#Discuss
Archive: https://groups.google.com/group/skullspace-discuss-archive/
