Virtualization is becoming increasingly popular among IT organizations, and with good reason. But that popularity will demand a greater level of vigilance as well.
As virtualization continues to grow, it’s reaching into “virtually” (pun intended) every corner of your network, not just in the cloud but on your on-premises network as well. According to one CIO we spoke to, virtualization represents a tremendous advance in ease of use and in customizing machines for end users, as well as in containing costs, but it comes with plenty of caveats.
Virtual machines have been around for a long time. When they entered the mainstream some 15 years ago, they seemed like a small miracle, enabling hardware consolidation within the data center. More recently, the cost and effort required to spin up a server have fallen dramatically. But governance around virtual machines had not yet matured, and so their numbers rose dramatically.
Over time, organizations needed to manage this vast increase in server instances. In response, virtualization vendors developed various management tools, bringing both visibility and administrative simplicity to the virtual environment.
Visibility and management alone are not enough to control VM instances. A good governance model is essential to controlling cost and driving value from any server landscape, even a virtual one.
This article briefly discusses the history of virtualization, the options available to you (“in the cloud” or “on premises”), best practices for each, and how you can best manage the whole process, with an eye toward “good governance” across the network.
Virtualization: A Brief History
Virtual machines were first conceived by IBM in the late 1960s to deal with a couple of different kinds of issues.
One early use case revolved around time sharing, which allowed multiple users to use the same computer at once. Each program appeared to have exclusive access to the machine, but in reality the hardware switched between programs in “time slices”, saving and restoring each one’s state in sequence.
A second use case for early virtual machines involved creating an abstract platform for an intermediate language, which a compiler could target in order to move code to different machines with only small adjustments to the back-end architecture. This functionality still exists in the form of “containerization”, although it’s less prevalent than the other type of usage.
My own first experience with virtual machines came in the late 1990s, when Macs and PCs still didn’t interact well with each other. You had several choices. You could buy two machines. At one point, Apple offered several machines with hard drives partitioned to hold both a Mac and a PC environment. A third option was buying a “Windows emulator”. Who remembers those days?
Virtualization: Overview
Gone are the days when old Windows PC networks ruled and IT people would run from machine to machine, installing software on networks of individual machines.
Today, the configurations can easily become overwhelming. Some organizations choose to operate their networks “on premises”, essentially building “private clouds” internally. These still make sense from a security standpoint when all data must reside internally, behind perimeter defenses.
Today, some organizations have huge numbers of storage arrays in-house, serving tens of thousands of physical machines, dozens of image templates, and hundreds, if not thousands, of image packages.
In these systems, software stands in for hardware, giving IT departments far greater control, and far greater challenges, in managing the various kinds of virtualization on their networks. Other organizations have migrated most of their IT infrastructure “to the cloud”. Large providers such as Microsoft Azure and Amazon Web Services (AWS), among others, provide hosting for cloud-based virtualization.
Still more complex are those systems which operate as hybrids – part in the cloud, part on premises.
Virtual machines enable a greater level of customization, by OS and by application needs. These “packages” of applications (and other data) can be tailored to executive, IT, accounting, HR, marketing, sales, or even front-line needs.
Here are several examples of the various binary choices that IT departments can face as they think through the best ways to optimize their networks.
Your Own Private Cloud: On Premises or Public Cloud?
The most fundamental question organizations face is whether to build on premises or in the public cloud. Each approach offers advantages, and each has its own disadvantages. Consider the following questions:
- Cost: Cost calculations can be very complex. While the licensing fees themselves may be relatively simple to calculate, the many different choices from different vendors (many or all of whom you may rely on) can build huge complexity into any cost calculation. Heavy use of resources over time can also bite into your budget.
- Security: Cloud-based providers can be very secure these days, but privacy and other regulations, as well as a host of specific security concerns, can still make the case for an on-premises solution. Also, while security is built into many cloud-based solutions, the kind of redundancy you can set up on-site isn’t yet easy to achieve with cloud data. API calls to the cloud services you use should also be scrutinized.
- Flexibility: Cloud-based services can scale almost without limit through simple licensing reconfigurations and settings changes, and can be scaled back with similar ease. This may be a deciding factor for some organizations (such as those with seasonal demands).
Because of the flexibility offered by many virtualization solutions, it is possible to host solutions on premises (a private cloud), in the public cloud (e.g., AWS, Azure, Rackspace), or in a combination of both, known as a hybrid cloud.
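To make that flexibility concrete, here is a minimal sketch, in Python with AWS’s boto3 SDK, of spinning up and tearing down a cloud VM. The AMI ID, region, instance type, and tag values are hypothetical placeholders, not recommendations.

```python
# A minimal sketch of cloud elasticity using AWS's boto3 SDK.
# The AMI ID, region, instance type, and tag values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Spin up a short-lived VM, tagged so governance tooling can find it later.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "seasonal-burst"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]

# ...and remove it with similar ease once demand subsides.
ec2.terminate_instances(InstanceIds=[instance_id])
```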
From a business standpoint, tech companies developing new solutions may find there’s no need to build out their internal organizations with on-premises hardware; it’s simpler just to start in the cloud.
Most companies in non-tech environments will likely have a strong need to maintain an on-premises presence. In fact, the widespread adoption of virtualization has quickly led to feature sets that integrate with existing environments with minimal disruption.
Either way, the number of options that are available can be staggering.
Public Cloud: Microsoft Azure or Amazon Web Services (AWS)?
While a discussion of on-premises hardware solutions is beyond the scope of this article, any solution(s) you choose to work with will compete with cloud-based options. Without question, the two largest providers are Microsoft, with its Azure cloud solution, and Amazon Web Services (AWS).
While AWS is the older and better-established of the two cloud services, Microsoft has a great deal of experience playing catch-up to market leaders; combined with its broad lead in complex network solutions, there is no doubt that it is a formidable challenger.
One key challenge with Azure is that it extends real-world services (such as Microsoft’s Active Directory) into a virtual world, where Active Directory-style machine interactions don’t necessarily work.
Business.com provides a useful article comparing the two cloud-based services. You need to compare not only the systems themselves but also measure them against your own business needs.
IT departments that tend to lean toward Microsoft will probably do better in the end with Azure. Organizations that are developing new infrastructures, or that already work with open-source or other non-Microsoft solutions may benefit from considering AWS.
On a feature-by-feature comparison, there are dozens of different considerations. Both solutions offer limited free tiers, so you can “try before you buy”.
On Premises VM Network: Microsoft Hyper-V or VMware?
While Azure and AWS run on their own underlying platforms, for internal VMs you will need a “hypervisor” to operate your virtual networks. (Think of “hyper” as a designation meaning “super”, on steroids.)
There are two leading hypervisors that can function on your servers or server arrays – Microsoft’s Hyper-V, and various products by VMware. Both enable you to run extensive virtualization, both hosts and guests, on internal networks.
Each of these products has its own requirements. Microsoft’s Hyper-V is a Type 1 hypervisor that runs directly on the hardware and serves as the foundation for whatever OS you put on top of it. Microsoft also offers System Center Virtual Machine Manager (SCVMM), a toolset designed to manage Hyper-V virtual machines.
VMware most often provides Type 2 hypervisor software that must run on top of another OS that actually operates the hardware (although VMware also makes a Type 1 hypervisor, VMware ESXi, formerly ESX).
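For a sense of what working against a hypervisor’s API looks like, the following is a minimal sketch using VMware’s open-source pyVmomi SDK to inventory the guests on a vCenter server or ESXi host. The hostname and credentials are placeholders, and certificate verification is disabled purely for lab convenience.

```python
# A minimal sketch: inventorying guests on a VMware host with the
# open-source pyVmomi SDK. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    # A container view walks the inventory tree for all VirtualMachine objects.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```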
Organizations such as Oracle and Red Hat have their own offerings as well, but these are less prevalent in most shops. Red Hat’s OpenShift, for instance, is a container platform that enables the easy distribution of Linux applications.
This article provides a brief and relatively current overview of both of the major systems.
Advantages and Disadvantages of Virtualization
As mentioned above, virtualization brings its own caveats.
- Security: With virtualization, security is not always directly under your control. This can be a disadvantage for organizations such as health care or financial institutions, which are required to maintain not only data availability but also data security and (with the advent of laws such as HIPAA, CASL, and GDPR) data privacy.
- Disaster Recovery: Virtualization offers opportunities for better disaster recovery, specifically because of the opportunity for redundant systems. But one thing virtualization doesn’t do for you is back up the data (as you might on an internal network), and in fact many organizations fail to back up information hosted outside the organization.
- Business Continuity: In an era of mergers and acquisitions, there may be some disadvantages in terms of business continuity, as the virtualized systems of merging companies may not always be so easy to merge.
- Sprawl: The easy-to-add nature of virtual machines often results in indiscriminate VM sprawl, in which machines spun up for testing, for instance, remain active long after they are needed. As a result, some systems administrators end up with little or no visibility into the mapping of physical servers to virtual machines. Sprawl is also a resource hog (see the sketch after this list for one way to detect it).
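As a rough sketch of how you might detect that kind of sprawl on AWS, the snippet below uses boto3 to flag running instances tagged as test machines that were launched more than 30 days ago. The “Purpose” tag and the 30-day threshold are assumptions for illustration.

```python
# A rough sketch of sprawl detection on AWS: flag "test" instances that have
# been running longer than a cutoff. The tag name and threshold are assumptions.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[
        {"Name": "tag:Purpose", "Values": ["test"]},            # hypothetical tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            if instance["LaunchTime"] < cutoff:
                print(f"Stale test VM: {instance['InstanceId']} "
                      f"(launched {instance['LaunchTime']:%Y-%m-%d})")
```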
Governance Best Practices
As systems and options have proliferated, it has become more important to implement governance policies sooner rather than later, and these policies always tie back to the type of network management you have in place.
Good governance policies will help minimize risks across a broad spectrum of areas, including some traditionally non-IT concerns such as legal compliance with a growing number of regulations (in finance, health care, privacy/marketing, and more), internal concerns, business continuity (especially in the case of mergers or acquisitions), vendors, risk management, and more.
A good governance policy might include such things as data backups, business practices that give better visibility of policies across the organization, and even automated features such as automatic failover (which lets system administrators move data handling to a standby system in case of a failure or compromise).
You’ll want to be aware of how virtual machines, software (and licensing), and even users can proliferate as systems grow, and track and manage these patterns of growth as your company expands.
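One concrete way to track that proliferation is to audit for unowned machines. The sketch below, using Azure’s Python management SDK, flags VMs that lack an “owner” tag; the subscription ID and tag name are hypothetical.

```python
# A sketch of a governance audit on Azure: flag VMs missing an "owner" tag.
# The subscription ID and tag name are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="00000000-0000-0000-0000-000000000000",  # placeholder
)

# Walk every VM in the subscription and report those with no named owner.
for vm in client.virtual_machines.list_all():
    tags = vm.tags or {}
    if "owner" not in tags:
        print(f"Untagged VM (no owner): {vm.name}")
```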
WhatsUp Gold Starter
Most traditional management systems were built around the things you’ll find in a traditional IT world: servers and PCs have unique identities; once set, they don’t move; and they tend to be either on or off (up or down). These assumptions don’t generally hold in the virtual world.
That’s why a management tool designed specifically to handle the unique needs of virtual machines in a virtual environment can be so helpful.
WhatsUp Gold Starter is one of a number of free tools that Ipswitch makes available so you can “try before you buy” while investigating the best options for your organization’s growth.
It has an easy-to-understand interface built on familiar red, amber, and green statuses. It provides constant insight into CPU usage, memory usage, and throughput for both hosts and guests in real time, along with reporting features that let you review those figures over various periods, frequently up to a year or more in the past.
This small but powerful tool helps you keep track of the entire network inventory by visualizing and identifying all of the VMs on a particular host. You can view and control VM statuses and display up to 24 hours of performance data using a number of interactive graphs and charts. You can also see real-time resource consumption of CPU, memory, and disk per VM and host-wide.
When it comes to accuracy, Mark Amick, a Senior Product Consultant at Ipswitch, says, “We can’t misconfigure a monitor, because we get the data right from the vendor.” The tool provides active monitoring and cloud performance monitoring for both AWS and Azure. Beyond discovery, any metric those platforms collect is available through their APIs, reported directly from the vendor’s interface.
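WhatsUp Gold’s own integrations are built in, but as a generic illustration of pulling vendor-reported metrics yourself, this sketch fetches a day of hourly average CPU utilization for one EC2 instance from CloudWatch. The instance ID and region are placeholders.

```python
# A sketch of vendor-reported metrics: average CPU utilization for one EC2
# instance over the past 24 hours, via CloudWatch. Instance ID is a placeholder.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    Period=3600,                # one datapoint per hour
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(f"{point['Timestamp']:%H:%M} UTC  {point['Average']:.1f}% CPU")
```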
Of course, good governance requires much more than the name of the machine and the host on which it’s running, whether on premises or in the public cloud.