A closer look at the Cloud, based on Server 2012 R2

This article was originally written as a guest post for Intense School's IT educational services. Let's continue where we left off. Windows Server 2012 R2 had been available as a tech preview download since June 2013 and was officially released on October 18, 2013, together with Windows 8.1. It's now more enterprise-class, application-focused, and cloud-oriented than ever. High-performance multi-tenant storage, software-defined networking, and multiple VDI and RDP enhancements are some of the new and improved technologies included in the R2 release.

Remember that in my last article I already highlighted "Work Folders," together with the newly introduced "Workplace Join" option and the SCCM 2012 cloud-oriented features, as part of Microsoft's renewed mobile and BYOD strategy. In this article, I'd like to zoom in on some of the new VDI/RDP features and multi-tenant storage capabilities, and see what Hyper-V has to offer as part of the new R2 release.

Even Newer

As mentioned, Microsoft released its new 2012 R2 server OS in mid-October, the 18th to be exact. Although most of what we'll cover is based on the earlier released tech preview, there is an exception: about a week ago, Microsoft announced its Remote Desktop app. Big news, or so I'm told. It's basically a mobile RDP client and will be available for multiple (mobile) platforms, including Android, iOS, Windows (RT), and a few more. It's supported as part of Server 2012 R2 and is based on the improved RDP 8 protocol (which has some cool new features of its own), also part of the R2 release. It will be available for download later this month; you'll find it in your app store of choice.

This again brings Microsoft one step closer to BYOD and mobile device support in general. But before we get too excited, note that this doesn't do anything to secure our (mobile) devices, privately or corporately owned. We still need to implement some kind of MDM and/or MAM solution to keep our devices, including the corporate data on them, as safe as possible. In one of my previous articles, I also mentioned Microsoft Intune, their 100% cloud-based MDM solution. Go to the Microsoft Intune page and sign up for a 30-day free trial, no strings attached, and see what you make of it.

VDI and Storage

VDI deployments are probably among the most complicated infrastructures to deploy, because they involve networking, application delivery, base images, profile considerations and, above all, storage configuration. We've all heard the term "IOPS" once or twice; it's a common performance measurement for storage devices, in most cases hard disks, solid-state disks, or SAN (storage area network) environments built from disk storage cabinets. IOPS stands for input/output operations per second; it tells us something about a storage solution's capabilities in terms of speed and load handling.

Every manufacturer will quote certain IOPS numbers for the storage solution they sell you. Unfortunately, theory and practice don't always go hand in hand. I won't go into further detail for now, but give it a Google, because there's plenty of information out there, and believe me, if you're the one responsible for making these IOPS-related storage decisions, you want to be prepared! And I haven't even gotten to the storage space requirements yet, which also differ depending on the type of VMs you're going to deploy: pooled, personal, or hosted shared desktops.
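To make that a bit more concrete, here's a rough back-of-envelope calculation in Python. The per-disk IOPS figures, the RAID write penalty, and the 20-IOPS-per-desktop steady-state load are illustrative assumptions of mine, not vendor-measured numbers, so treat this as a sketch, not a sizing tool:

```python
# Rough, back-of-envelope IOPS estimate for a VDI storage design.
# All figures below are illustrative assumptions, not vendor numbers.

RAW_IOPS = {"7.2k HDD": 75, "10k HDD": 125, "15k HDD": 175, "SSD": 5000}

def usable_iops(disk_type, n_disks, read_ratio=0.3, raid_write_penalty=2):
    """Effective IOPS once the RAID write penalty is taken into account."""
    raw = RAW_IOPS[disk_type] * n_disks
    write_ratio = 1 - read_ratio
    # Each logical write costs `raid_write_penalty` back-end I/Os (RAID 1/10 = 2).
    return raw / (read_ratio + write_ratio * raid_write_penalty)

def desktops_supported(disk_type, n_disks, iops_per_desktop=20):
    """How many desktops a shelf can carry at a given per-desktop load."""
    return int(usable_iops(disk_type, n_disks) // iops_per_desktop)

print(desktops_supported("15k HDD", 24))   # 24-disk 15k RAID 10 shelf -> 123
print(desktops_supported("SSD", 4))        # a small SSD tier -> 588
```

Notice how even a handful of SSDs outruns a whole shelf of 15k spindles on paper; it's exactly this kind of math that makes the storage decisions mentioned above so important.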

Since storage is such a big deal when it comes to VDI, Microsoft has given this a lot of thought. I think it's probably one of the main reasons why this technology (VDI) never really took off, at least not the way they thought and hoped just a few years ago. RDS in R2 supports different kinds of "cheaper" and "simpler" storage solutions as alternatives to the more complicated and expensive SAN solutions out there. Think of direct-attached storage (DAS), for example, or plain file shares based on the improved SMB 3.0 protocol, perhaps using Storage Spaces (storage virtualization based on the concept of so-called storage pools) as the underlying storage mechanism, and the list goes on. Storage Spaces are straightforward: they're configured from within the OS itself, using simple but effective just-a-bunch-of-disks (JBOD) configurations as the underlying physical storage. I'm not saying that these kinds of storage solutions will take away your IOPS issues overnight, but it certainly helps to have such a wide variety of options to choose from.

Building on Storage Spaces, Microsoft introduces storage tiering, a technology I already mentioned in my previous article and one they brought in through their acquisition of a company called StorSimple about a year ago. The idea is simple: frequently used data is stored on fast storage like SSDs, while less frequently used data is stored on low-cost, "simple" hard disks. HDDs and SSDs can coexist in the same storage pool when using Storage Spaces, and the "system" will take care of the rest. That's just awesome! Storage deduplication for VDI isn't completely new, but it's now supported on live, running virtual machines as well. Simply stated, this means deduplication can be applied to VHD/VHDX files in use by running VMs.
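To illustrate the tiering idea, here's a toy sketch: track how often each chunk of data ("slab") is touched, and keep the hottest slabs on the SSD tier. The real Storage Spaces implementation works on fixed-size slabs and moves data on a schedule; the class, names, and thresholds here are all invented for illustration:

```python
# Toy model of storage tiering: hot slabs live on SSD, cold slabs on HDD.
from collections import Counter

class TieredPool:
    def __init__(self, ssd_slots):
        self.ssd_slots = ssd_slots   # how many slabs fit on the SSD tier
        self.heat = Counter()        # access count per slab

    def access(self, slab):
        self.heat[slab] += 1

    def optimize(self):
        """Decide which slabs should live on SSD vs HDD after a tiering pass."""
        hot = [slab for slab, _ in self.heat.most_common(self.ssd_slots)]
        cold = [s for s in self.heat if s not in hot]
        return {"ssd": hot, "hdd": cold}

pool = TieredPool(ssd_slots=2)
for slab in ["boot", "boot", "boot", "profile", "profile", "archive"]:
    pool.access(slab)
placement = pool.optimize()
print(placement["ssd"])   # -> ['boot', 'profile']: the hottest slabs end up on SSD
```

In a VDI context this maps nicely onto reality: the shared parts of a golden image get hammered by every desktop and float up to SSD, while rarely touched data sinks down to cheap spindles.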

Azure Backup and QoS

If your Internet connection is up to it, you don't have too many machines, and we're not talking terabytes of data, then perhaps Windows Azure Backup might do the trick. It supports all workloads, SQL, Exchange, file servers, and more, and it's offered as a separate service. Nice if you own a relatively small company. Last but not least, I'll throw in a quote from Microsoft regarding Storage QoS: "a new feature in Windows Server 2012 R2 that allows you to restrict disk throughput for overactive or disruptive virtual machines and can be configured dynamically while the virtual machine is running. For maximum bandwidth applications, it provides strict policies to throttle IO to a given virtual machine to a maximum IO threshold. For minimum bandwidth applications, it provides policies for threshold warnings that alert of an IO-starved VM when the bandwidth does not meet the minimum threshold."
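Conceptually, such a policy boils down to something like this little sketch: cap a noisy VM at a maximum IOPS value, and raise a warning when a VM can't reach its minimum. The class and field names are mine, not Microsoft's; this just mirrors the behavior the quote describes:

```python
# Minimal sketch of a per-VM storage QoS policy (names are invented).
from dataclasses import dataclass

@dataclass
class StorageQosPolicy:
    max_iops: int   # hard cap for an overactive/disruptive VM
    min_iops: int   # below this, flag the VM as IO-starved

    def apply(self, requested_iops, achievable_iops):
        """Return (granted IOPS, starved?) for one scheduling interval."""
        granted = min(requested_iops, self.max_iops, achievable_iops)
        starved = granted < self.min_iops
        return granted, starved

policy = StorageQosPolicy(max_iops=500, min_iops=100)
print(policy.apply(2000, 5000))   # noisy VM gets throttled -> (500, False)
print(policy.apply(300, 60))      # overloaded array -> (60, True): warning fires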

Of course, there's (lots) more to tell regarding storage and VDI, but you can imagine that with these kinds of technologies Microsoft is offering some really solid and, not unimportantly, affordable alternatives to "the way it has always been done," so to speak. I'd definitely recommend giving the above some thought when you're working on one of your future designs, whether it's VDI-related or not; even if it isn't related to virtualization at all, most of the above is still very usable.

Hyper-V

People often confuse cloud infrastructures and solutions with virtualization, thinking that virtualization is cloud computing, while in fact virtualization is just one of the technologies that make up cloud computing in general. I'll use a quote I found on Wikipedia which pretty much sums it up: "Cloud computing is the result of the evolution and adoption of existing technologies and paradigms. The goal of cloud computing is to allow users to take benefit from all of these technologies, without the need for deep knowledge about or expertise with each one of them. The cloud aims to cut costs, and helps the users focus on their core business instead of being impeded by IT obstacles."

The main enabling technology for cloud computing is virtualization. Virtualization generalizes the physical infrastructure, which is the most rigid component, and makes it available as a soft component that is easy to use and manage. By doing so, virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. On the other hand, autonomic computing automates the process through which the user can provision resources on demand. By minimizing user involvement, automation speeds up the process and reduces the possibility of human errors.

Server (and client) virtualization is hot and has been for years, and it continues to be one of the most written- and talked-about subjects to date. With R2, the Hyper-V platform has once again improved. According to Microsoft, Hyper-V now supports 320 logical processors, 64 TB virtual disks, 4 TB of physical memory, and 1,024 running virtual machines per host. Again, impressive numbers. Although I've personally never seen an environment even close to these kinds of numbers, it's good to know your options. Live migration has improved as well: it now supports a feature called live migration compression, which compresses the VM's memory data before it's sent over the wire and, as a result, can make migrations up to twice as fast as we were used to. Live migration with RDMA support is another new feature. It's designed to work over 10-Gbit (and faster) connections, supporting speeds up to 56 Gbit/s, by leveraging the power of remote direct memory access technologies.

Hyper-V Replica was introduced with Server 2012, and in R2 it has become a lot more flexible. The name basically says it all: it's a replica of your virtual machine. Using this technology, you can replicate any given VM according to a predefined replication schedule, now ranging from 30 seconds to 15 minutes. Think about how you might use this. You could replicate certain critical VMs to your DR site, for example, given that your network has enough bandwidth to handle the task, although this also depends on the replication schedule you apply and the amount of data that needs to be transferred. Or perhaps create a replica for testing purposes. As far as I'm concerned, this feature is not meant to be used on a grand scale; pick out a few systems that are critical to your business and leave it at that.
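A quick sanity check you can do when picking a replication schedule is to compare a VM's change rate against the bandwidth of the link to your DR site: can each delta be shipped before the next interval starts? Here's a rough sketch; all the numbers in it are made up for illustration:

```python
# Rough bandwidth check for VM replication to a DR site.
# Change rates and link speeds below are illustrative assumptions.

def replica_fits(change_rate_mb_per_min, interval_seconds, link_mbps):
    """True if the delta produced per interval can be shipped within it."""
    delta_mb = change_rate_mb_per_min * interval_seconds / 60
    link_mb_per_s = link_mbps / 8          # megabits -> megabytes per second
    transfer_seconds = delta_mb / link_mb_per_s
    return transfer_seconds <= interval_seconds

# A VM writing 60 MB/min over a 20 Mbit/s site-to-site link:
print(replica_fits(60, 300, 20))   # 5-minute interval -> True, it keeps up
print(replica_fits(60, 30, 5))     # 30-second interval on a thin link -> False
```

In other words, the shortest interval isn't automatically the best one; it only helps if the link can actually drain each delta in time.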

Networking

Software-defined networking (SDN): in simple terms, what is it and what does it offer us? SDN is primarily built from two main ingredients: Hyper-V network virtualization and the Hyper-V extensible switch. Combined, these two technologies make it possible to create and configure multiple virtual networks (which can include multiple subnets, complete routing topologies, and a tenant's own IP address ranges of choice), using overlapping IP ranges without the need for separate VLANs, all on the same underlying physical network. This makes building multi-tenant infrastructures a lot easier and adds a great deal of efficiency as well. For example, you can create three separate virtual networks, all using the 192.168.168.x range, and it will just work, no VLANs or (extra) physical strings attached, so to speak. Each virtual network infrastructure will behave as if it's the only one configured on the shared physical network. Now, I'm no networking specialist, but I do know that this is a great technology altogether.
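Conceptually, the trick that makes those overlapping ranges possible is that the fabric never looks at a tenant's IP address on its own; it always pairs it with a virtual subnet identifier. Here's a toy model of that idea; all the IDs, addresses, and names are invented for illustration and don't reflect the actual Hyper-V implementation:

```python
# Toy model: tenant traffic is keyed by (virtual subnet ID, customer IP),
# so identical customer addresses in different tenants never collide.

class VirtualNetworkFabric:
    def __init__(self):
        self.lookup = {}   # (vsid, customer_ip) -> physical (provider) address

    def attach(self, vsid, customer_ip, provider_ip):
        self.lookup[(vsid, customer_ip)] = provider_ip

    def route(self, vsid, customer_ip):
        # Delivery is scoped to the tenant's own virtual subnet.
        return self.lookup[(vsid, customer_ip)]

fabric = VirtualNetworkFabric()
# Three tenants, all using the same 192.168.168.x range:
fabric.attach(5001, "192.168.168.10", "10.0.0.11")   # tenant A
fabric.attach(5002, "192.168.168.10", "10.0.0.12")   # tenant B
fabric.attach(5003, "192.168.168.10", "10.0.0.11")   # tenant C, same host as A
print(fabric.route(5002, "192.168.168.10"))   # -> 10.0.0.12, tenant B only
```

Because the subnet ID travels with every packet, the same customer address can even live on the same physical host twice (tenants A and C above) without either tenant noticing the other.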

It also allows IT administrators to manage and shape network traffic from a centralized console without having to (re)configure any physical switches, routers, or other networking components. Network traffic/packets can easily be prioritized (QoS) or blocked if necessary, adding even more granularity. The concept is still relatively new and you might need some time to get your head around it, but once you do, it's awesome. Although SDN is mainly built for and applied to large(r) cloud-based multi-tenant infrastructures, hosted by service providers for example, you can probably come up with one or two use cases in which you might apply some of this technology within your own organization.

Having all these virtual infrastructures is great, but how do they interconnect? Windows Server 2012 offers the cross-premises connectivity feature, which acts as a gateway providing enterprise customers with site-to-site connectivity, connecting them with their private subnet(s) hosted within the cloud infrastructure. One of its drawbacks is that each tenant needs its own gateway, so having multiple tenants means configuring and managing multiple gateways on a per-tenant basis. 2012 R2 offers a multi-tenant gateway built in and ready for use. As the name implies, it provides site-to-site (VPN) connectivity between multiple customers and their cloud-based resources at the same time, managed from one central interface. It can also be used to connect physical and virtual networks and datacenters, and of course it can provide connectivity between physical networks and Windows Azure as well, which is basically a virtual network, too.

Conclusion

Again, we covered a whole bunch of new and improved technology, including some examples of how it might help you when working on one of your new or future infrastructure designs. I would specifically recommend spending some time on the virtual networking part. Although this might not be a technology you will implement anytime soon, I think it's imperative to understand the concept and the consequences it brings. As time progresses, we'll probably see more and more of these kinds of infrastructures being implemented, so it's important to at least know the basics.

Some of the new Hyper-V features, including those around VDI, are probably a bit easier to have a look at, especially now that we don't need enterprise-class storage solutions anymore. Keep a lookout for the new Remote Desktop app as well. Give it a try, compare it with the Citrix Receiver (they've been doing this for years), and see how it holds up. Just remember, before you distribute it to your users, make sure you have a proper MDM and/or MAM solution in place as well. Keep it clean and secure!

Bas van Kaam ©

Reference materials used: Microsoft.com, Blog.technet.com, Thincomputing.net and Wikipedia.org
