Are you coddling your mission-critical apps?

Whenever I ask state and local government IT managers if they’re using virtualization, the answer is “yes” nearly 100 percent of the time.

But when I alter that question slightly and ask if they’re virtualizing mission-critical applications, the “yes” percentage falls close to zero.

Essentially, these managers are saying: Sure, we’re happy to leverage virtualization, just not for the operations we care about.

Logically, this is a bit of an odd position to take. If system components were children, would you really buy food for your youngest daughter Storage that you wouldn’t feed to your eldest son Project Management?

Then again, government IT operations are to a large degree driven by risk tolerances. And when they run the risk-benefit equations, customers often find that the risks of critical application virtualization (reduced reliability and manageability of services) outweigh the benefits.

The trouble is: these risk-benefit equations fail to account for new technologies and strategies.

It is true that historically, we’ve had trouble gaining visibility into virtual environments—which is, naturally, essential for identifying and debugging malfunctions before they escalate.

However, over the past year and a half, we’ve seen groundbreaking solutions arise for getting better optics inside virtual machines. And using these tools, government organizations should feel a lot more comfortable moving their mission-critical applications to the virtual environment.

This is particularly important because the aggregate computing power of state and local governments is very often underutilized. (Frequently, a server dedicated to running a single application sits idle much of the time, and its excess capacity is never shared with other workloads in the environment.)

The common perception is that when adding a new application, IT must purchase a new, dedicated server to support it. But quite frequently, with a little planning, the unused CPU capacity already in the environment is more than adequate.
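
As a rough illustration of that planning exercise, here is a minimal sketch that totals spare CPU capacity across existing virtualization hosts and compares it to a new application's estimated demand. The host names, core counts, and utilization figures are made-up assumptions, not measurements from any real environment.

```python
# Rough capacity-headroom check: can existing hosts absorb a new workload?
# All host names, core counts, and utilization numbers below are illustrative.

hosts = [
    {"name": "host-01", "cores": 32, "avg_utilization": 0.35},
    {"name": "host-02", "cores": 32, "avg_utilization": 0.40},
    {"name": "host-03", "cores": 16, "avg_utilization": 0.25},
]

# Keep headroom in reserve for spikes and failover (assumed 70% ceiling).
UTILIZATION_CEILING = 0.70

spare_cores = sum(
    h["cores"] * (UTILIZATION_CEILING - h["avg_utilization"])
    for h in hosts
    if h["avg_utilization"] < UTILIZATION_CEILING
)

new_app_demand_cores = 12  # estimated peak requirement for the new application

if spare_cores >= new_app_demand_cores:
    print(f"Existing capacity suffices: {spare_cores:.1f} spare cores available.")
else:
    shortfall = new_app_demand_cores - spare_cores
    print(f"Shortfall of {shortfall:.1f} cores; new hardware may be needed.")
```

In practice the utilization figures would come from the hypervisor's own monitoring tools rather than a hand-written list, but the arithmetic is the same: measure what is idle before buying more.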

Another common fear is that servicing or upgrading CPUs will compromise the performance of the services they power. But again, we have tools today that can provide robust, real-time failover of virtualized mission-critical services and applications. With the right tools and management structures in place, there’s simply no reason for hardware swapping—or the peaks and valleys of service use—to impact the performance or resiliency of virtualized applications.
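
To make the failover idea concrete, here is a minimal, vendor-neutral sketch of a heartbeat monitor that promotes a standby instance when the primary stops responding. The health-check URL, retry counts, and timing values are assumptions for illustration only; a production deployment would rely on the failover mechanisms built into the virtualization platform itself.

```python
import time
import urllib.error
import urllib.request

# Illustrative endpoint and thresholds; not tied to any particular product.
PRIMARY_HEALTH_URL = "http://primary.example.internal/health"
FAILURES_BEFORE_FAILOVER = 3
CHECK_INTERVAL_SECONDS = 5


def primary_is_healthy() -> bool:
    """Return True if the primary's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False


def promote_standby() -> None:
    """Placeholder for redirecting traffic to the standby instance."""
    print("Primary unresponsive: promoting standby and updating routing.")


def monitor() -> None:
    failures = 0
    while True:
        if primary_is_healthy():
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                promote_standby()
                break
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    monitor()
```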

That said, it’s incumbent on IT managers to design data centers, critical services, and applications to accommodate these inevitable change operations (both planned and unplanned). It’s also important for IT managers to build data centers with maximum flexibility—so that resiliency can be preserved even when using different vendor platforms and system components.

A lot of government IT managers believe they have to standardize with a single vendor in order to achieve that resiliency. But it’s a myth—and an expensive one.

According to Gartner, organizations spend 30 percent more when locked into a single vendor. And down the road, those IT managers can't easily move to cheaper, more effective technologies from alternate vendors. So in essence, they're paying more to limit their own options.

The point is, we have the tools today to help governments reap the benefits of mission-critical app virtualization (plus lower costs and greater flexibility through vendor competition) without sacrificing one ounce of reliability or manageability.

So go ahead and run those risk-benefit equations one more time. And if you find any unnecessary coddling—now’s the time to make a change.
