Virtual Machine Hosting
Virtual Machine (VM) hosting provides VMs so that your organization's IT staff can run dedicated and customized Linux or Windows systems. This allows your IT staff to focus on your computing needs without the worry of purchasing and maintaining hardware resources.
Your VM will live in UFIT's secure private cloud which leverages:
- Multiple enterprise-class datacenters
- Secure enterprise-class network - public or UF private IP space available
- SAN/NAS-backed failover to prevent downtime due to hardware failure or maintenance
- VMware vSphere 6.5 environment
UFIT provides everything up to the hypervisor (virtualization layer). This includes all physical resources such as computing hardware, networking, datacenter resources, and the VMware software. UFIT also provides access so that you can connect to your VM for management purposes (console access, power on/off, etc.).
You provide IT staff to install, configure, and maintain all software on your VM, both the OS and application software. This includes maintaining proper licensing for any software installed on your VM. Your staff are also responsible for all monitoring and backups of your VM: while UFIT monitors the health of the underlying hypervisor systems, we do not provide guest OS or application monitoring for your hosted VM. Additionally, your staff are responsible for working with UFIT Network Services to maintain network ACLs pertaining to your VM's IP address(es), and for responding to and working with UFIT's Office of Information Security and Compliance.
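Since guest OS monitoring is your staff's responsibility, a minimal health check is a reasonable starting point. The sketch below checks TCP reachability of a service and local disk usage; the hostname, port, and 90% threshold are illustrative assumptions, not UFIT requirements.

```python
#!/usr/bin/env python3
"""Minimal guest-OS health check sketch. UFIT monitors only the
hypervisor layer, so checks like these run inside (or against) your VM.
Hostnames, ports, and thresholds below are illustrative assumptions."""
import shutil
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def disk_usage_pct(path: str = "/") -> float:
    """Percentage of the filesystem at `path` currently in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total


if __name__ == "__main__":
    checks = {
        "ssh reachable": port_open("localhost", 22),   # assumed service
        "disk below 90%": disk_usage_pct("/") < 90.0,  # assumed threshold
    }
    for name, ok in checks.items():
        print(f"{name}: {'OK' if ok else 'ALERT'}")
```

In practice your staff would wire checks like these into whatever alerting your unit already runs (cron plus email, Nagios, etc.), and add backup verification alongside them.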
Data Center Architecture
The University of Florida has two physical data centers: the Space Sciences Research Building (SSRB) on campus and the University of Florida Data Center (UFDC) off campus. Our infrastructure design stretches compute, storage, and network across these two physical datacenters to form two Availability Zones (AZ1, AZ2). Each AZ consists of dedicated compute, storage, and network infrastructure, providing redundancy across the physical data centers:
- Compute consists of stretched clusters that allow machine state to be migrated across datacenters at any time.
- Storage consists of two SAN arrays, one in each datacenter, synchronously replicating data to provide failover capabilities between datacenters.
- Network consists of two pairs of routers in each datacenter providing a network fabric across both datacenters.
Together, these infrastructure resources provide the capability to automatically shift workloads between datacenters as needed. In the event of a partial infrastructure failure, such as a single datacenter losing power, VMs are automatically restarted on unaffected infrastructure. In the event an entire infrastructure resource becomes unavailable, such as a routing change taking down the network fabric, all VMs in the AZ will be down with no failover capability.
For applications with a single-box architecture, you would place your VM in either of the AZs.
For applications with a highly available architecture, you would place at least one VM in each AZ. For additional redundancy, you can "pin" each VM to one of the data centers within its AZ (AZ1 to SSRB, AZ2 to UFDC). The compute clusters share physical infrastructure, and because the stretched clusters constantly migrate VMs across hosts to balance the workload, without pinning there is a chance your VMs in both AZs could end up running in one datacenter on that shared infrastructure. If that infrastructure were lost, all your VMs could go down.
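The pinning guidance above can be illustrated with a toy placement model. The AZ and datacenter names follow the text (AZ1/SSRB, AZ2/UFDC); the VM names and the placement scenarios are illustrative assumptions, not the behavior of the actual vSphere scheduler.

```python
"""Toy model of VM placement across AZs and physical datacenters,
illustrating why pinning HA pairs to different datacenters matters.
VM names and scenarios are hypothetical examples."""
from dataclasses import dataclass


@dataclass
class VM:
    name: str
    az: str           # "AZ1" or "AZ2"
    datacenter: str   # where it currently runs: "SSRB" or "UFDC"


def survivors(vms: list[VM], failed_datacenter: str) -> list[str]:
    """Names of VMs still running after losing one physical datacenter."""
    return [vm.name for vm in vms if vm.datacenter != failed_datacenter]


# Pinned HA pair: each VM stays in its AZ's home datacenter.
pinned = [VM("app-a", "AZ1", "SSRB"), VM("app-b", "AZ2", "UFDC")]

# Unpinned: load balancing happens to migrate both VMs onto SSRB hosts.
unpinned = [VM("app-a", "AZ1", "SSRB"), VM("app-b", "AZ2", "SSRB")]

print(survivors(pinned, "SSRB"))    # -> ['app-b']  (one VM survives in UFDC)
print(survivors(unpinned, "SSRB"))  # -> []         (correlated failure)
```

The model shows the design choice: AZ membership alone does not guarantee physical separation at any given moment, so pinning is what turns the two-AZ layout into a true two-datacenter failure domain for your application.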