Last week, Wednesday the 19th of February 2014 to be exact, I was invited to the ‘grand’ opening of the new Dutch Nutanix headquarters in Hoofddorp, near Amsterdam. Before the partner reception took off and everybody arrived, we had a private get-together which lasted around an hour and a half or so. Of course I wasn’t alone: a group of 10 to 12 technicians, bloggers and pre-sales engineers (Nutanix employees excluded) gathered to hear what Dheeraj Pandey (founder and CEO), Howard Ting (Vice President of Product and Marketing) and Mark Fisher (Senior Director, Demand Generation Marketing) had to say. It turned out to be a very interesting and informative conversation.
Getting to know each other
Since I’m relatively new to the Nutanix portfolio, to be honest, I didn’t really know what to expect coming in, and I was probably a bit too focussed on the technical side of things. As it turned out it was more about introducing Nutanix as a company, the people involved, sharing some of their long and short term visions and getting to know each other a bit better (it has to start somewhere, right?). And although I was hoping to hear a bit more about their products, I’m glad I was there and able to meet some of Nutanix’s (passionate) crew members. They’re ready to go, no doubts there! After a short, but inspiring, introduction announcements were made, thoughts were shared and questions were asked and answered, and it wasn’t one-way traffic. Also, during the reception, which directly followed our meeting, I had the chance to meet Dheeraj and Howard personally, and all I can say for now is that good things are coming.
They (Nutanix as a company) were / are very interested and sincere in what ‘we’ as the community had to say. What do ‘we’ think of Nutanix as a company, the products they sell, the marketing involved etc… Besides the ‘what can we do to improve’, they were also extremely focussed on what they as Nutanix could do to help us, the community, in making life a little easier if and when evaluating and testing their products, educational services and so on. They truly understand the power a community can bring and as such are willing to invest in their ‘partners’, so to speak. We (can) help them to stay honest, as Howard Ting pointed out during his personal introduction, and that’s exactly what we toasted to: honesty, cheers!
I think it’s important to clarify what a converged infrastructure actually is, since that’s what Nutanix is all about in the end. What’s its definition? Well… in its most simple form it’s a way to combine, or integrate, multiple infrastructural components like (server) compute, storage, networking and virtualization into a single platform or appliance of some sort. Resources are aggregated and pooled, positively impacting performance, flexibility and overall efficiency. Management is centralized and simplified (converged infrastructures are, or should be, managed as a single entity no matter how big they get); initial set-up and configuration is a breeze, up and running within 30 to 45 minutes, and maintenance should be minimal. All this will eventually lead to lower operational costs, think ROI. More on their unique set-up in a minute in the ‘An overview’ section.
Still far from mainstream
Storage and networking (partly virtualized) will be ‘local’ to your VMs, increasing overall performance immensely, which is probably one of the biggest ‘eye catchers’. In terms of scalability IT will be able to scale / add what they need, when they need it, a.k.a. scaling on demand. No more over-provisioning of storage, which is a big plus as well. Now this may sound like simple logic to some, but if you look at most data centers today the above can be hard to find. Converged architectures are also often referred to as converged, or software defined, (virtualized) data centers / architectures.
Traditional storage networks (we’re primarily talking SANs here) are complex: they consist of multiple components like switches, disk arrays and fibre-optic HBAs, they’re not that flexible in terms of scalability, and they often require a dedicated team of specialists when it comes to configuration, expansion and/or troubleshooting. Support can be hard to get and support contracts are expensive. They’re not designed to handle today’s virtualized workloads and the (ever returning) IOPS that come with them. Nowadays there are some ways (products) to boost performance, good ones like PernixData for example, but unfortunately it’s still far from ideal; something we’ll just have to deal with.
The same can be said for ‘traditional’ infrastructures, networking and server computing: they all require some sort of expertise and are all configured and managed separately from each other. Not a bad thing per se, but when things get complex (read big) it doesn’t help. And who are we to contradict companies like Amazon, Google, Facebook and Microsoft, who have all embraced these unified converged architectures as well? In fact, their architects basically invented the concept. The best practices we once knew will soon become obsolete and known as ‘just’ practices, the way we once ‘did’ stuff. Another thing to think about is how we not only manage, but also need to support, upgrade and expand (scalability) our current infrastructures, which isn’t an easy task with the hardware-centric architectures we’ve got going today.
It won’t be long now
I know, I get carried away sometimes, but you have to admit that there’s a truth in there somewhere, right? I’m not saying that SAN-like solutions will disappear overnight, no way, it just wouldn’t be possible even if we’d want them to. And besides, despite some of their ‘handicaps’ they’re still very usable in most cases (I do like ’em, really) and way too expensive to just get rid of. But change is coming and it’s approaching fast! For me personally, and as a Citrix engineer in particular, I think this is a good thing: we need more speed (massive IOPS and less latency), flexibility, scalability and ease of management. Fortunately there are companies like Pernix, Atlantis and of course Nutanix (to name a few) to help us overcome most of these challenges. Of course there will always be hardware involved, we need cooling, backup and power supplies, storage (disks and flash), electricity, networking components etc. It’s just the way that things are put together (converged) and managed (software) that is changing, drastically.
Where Nutanix comes in
Now that we’ve established what software defined and converged architectures and/or data centers are all about, let’s have a look at how Nutanix fits in. I won’t discuss any technical details with regards to their products, since, to be honest, I’m still in the research phase myself, but I know enough to at least provide you with a global overview. I’ll start with some quotes from their website combined with some remarks of my own.
The Nutanix Virtual Computing Platform © converges compute and storage into a single system, eliminating the need for traditional NAS and/or SAN-like storage arrays. A single 2U appliance can contain up to four independent nodes, comparable to blade systems, each optimized with high-performance compute, memory and storage. Each node runs an industry-standard hypervisor and all major platforms are supported. The system embeds all control logic into intelligent virtual machines, called Nutanix Controller VMs, which run on each cluster node and handle all I/O operations for the local hypervisor (this is where the magic happens). A global storage pool aggregates storage capacity from all nodes, and all hosts in the cluster have access to that pool. Storage resources are exposed to the hypervisor through traditional interfaces, such as NFS, iSCSI etc. It also leverages existing Ethernet networking investments out of the box. As a 100% software-driven solution, intelligence is abstracted into a distributed software layer, rather than being ‘baked’ into specialized hardware, for programmatic control and simpler centralized management. And since a picture says more than a thousand words… (it might look a bit blurry).
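To make the ‘global storage pool’ idea a bit more concrete, here’s a minimal sketch (all names are mine, not the Nutanix API): every node contributes its local capacity to one cluster-wide pool that all hosts see.

```python
# A toy model of a cluster-wide storage pool that aggregates the
# capacity of every node. Illustrative only; not the real product.

class Node:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb

class StoragePool:
    """Aggregates storage capacity from all nodes in the cluster."""
    def __init__(self, nodes):
        self.nodes = nodes

    @property
    def total_capacity_gb(self):
        # Every host sees the combined capacity, not just its own disks.
        return sum(n.capacity_gb for n in self.nodes)

# One 2U appliance holding four independent nodes, as described above.
cluster = StoragePool([Node("node-a", 2000), Node("node-b", 2000),
                       Node("node-c", 2000), Node("node-d", 2000)])
print(cluster.total_capacity_gb)  # 8000
```

Adding a node to the list is all it takes to grow the pool, which is the scale-on-demand point made earlier.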
There are plenty, Nutanix discusses several on their website, including End-User Computing / VDI, Enterprise Branch Office, Big Data, Private Cloud / Server Virtualization and Disaster Recovery. The Nutanix appliances are all Citrix Ready certified and support both MCS and PVS. This, combined with their advanced architecture, simple and straightforward out-of-the-box deployment, scalability and high-performance (low-latency) IOPS, to name a few, makes them an excellent candidate for XenDesktop (persistent) VDI and XenApp deployments. Make sure to check out some of their reference architecture white papers, see the highlighted link above, and Case Studies. Of course the same goes for the other use cases as well, which will probably be implemented just as often, if not more. And although it’s all still theory for now, for me anyway, the technology looks very promising and I hope to get my hands on one of these babies soon. Although you’re able to request a demo on their website, unfortunately it’s only available in the U.S. Have a look here for an overview of all Nutanix Technology Partnerships; I believe VMware is in there somewhere as well :-)
Scalability is key
The modular building-block design lets organizations start with small deployments (no massive upfront investments) and grow incrementally into very large clusters. As mentioned, each 2U appliance can hold up to four nodes in total (see both pictures, to the right and below), but you can start out with one or two (HA) nodes just as easily if that’s what you need. Have a look at their product page to see your options; there are four different series to choose from, ranging from the NX-1000 to the NX-7000, including GPU and PCoIP support in case you’re interested. As stated earlier, Nutanix eliminates the need for a dedicated storage network or array and streamlines overall datacenter administration with a single, intuitive management console. They also put together their own Software Defined Storage For Dummies e-book, definitely worth having a look; it’s free to download.
There’s more. I won’t be able to list all features, but I’ll highlight some of the most interesting ones, again, without going into too much detail, at least for now. For those of you who can’t wait and want to get down and dirty right away, check out The Nutanix Bible written by Steven Poitras (Solutions Architect and Technology Evangelist at Nutanix); it tells the whole story including all the bits and bytes involved. If you look at some of the fact sheets on the Nutanix website you’ll come across a whole bunch of interesting features and techniques; as before, I’ll use a combination of quotes and my own text.
Before data is ‘acknowledged’, all writes are first replicated to at least one other cluster node (all cluster nodes take part in the replication process). Only after the data and its associated metadata are replicated will the host receive an acknowledgment of a successful write. This ensures that data exists in at least two independent locations within the cluster and is fault tolerant. They also include snapshot and clone technologies and a feature called Shadow Clones, have a look here.
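The acknowledgment rule above can be sketched in a few lines. This is my own illustration of the general ‘replicate before you ACK’ pattern, with made-up names; the real implementation is obviously far more involved.

```python
# Sketch: a write is only acknowledged once the data exists on at
# least REPLICATION_FACTOR independent nodes. Illustrative names.

REPLICATION_FACTOR = 2

class Node:
    def __init__(self, name):
        self.name, self.data = name, []

    def store(self, block):
        self.data.append(block)

def write(block, local_node, peer_nodes):
    """Persist locally, replicate to a peer, then acknowledge."""
    local_node.store(block)
    copies = [local_node]
    for peer in peer_nodes:
        if len(copies) >= REPLICATION_FACTOR:
            break                      # enough independent copies exist
        peer.store(block)
        copies.append(peer)
    if len(copies) < REPLICATION_FACTOR:
        raise IOError("not enough healthy nodes to replicate")
    return "ACK"  # only now does the host see a successful write
```

If a node dies right after the ACK, at least one other copy of the block survives, which is the fault-tolerance point being made.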
Storage capacity is increased up to 4 times by applying the Google Snappy compression algorithm. Nutanix compresses data at the sub-block level for increased efficiency and greater simplicity. Compression is managed through policies which can be applied during the write process, or after data has been written to prevent an impact on performance.
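The inline-versus-post-process policy choice is easy to picture in code. Nutanix uses Google’s Snappy algorithm; here I’m substituting Python’s standard zlib purely so the sketch is self-contained, and the function names are my own.

```python
import zlib

# Sketch of two compression policies: compress during the write
# (inline), or store raw and compress later in the background
# (post-process) to keep write latency down. zlib stands in for
# Snappy here; the idea, not the algorithm, is the point.

def write_block(block, inline=True):
    """Compress on write, or store raw for later compression."""
    return zlib.compress(block) if inline else block

def post_process(stored_blocks):
    """Background pass: compress previously written raw data."""
    return [zlib.compress(b) for b in stored_blocks]

raw = b"the same pattern " * 256
compressed = write_block(raw, inline=True)
assert zlib.decompress(compressed) == raw   # lossless round trip
```

Repetitive data like the block above compresses dramatically, which is where the ‘up to 4x’ capacity claims for typical datasets come from.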
Simply put, ‘hot’ data, which is accessed frequently, is put on SSD / flash-like storage and ‘cold’ data, you get the point right, is placed on slower HDDs. As ‘cold’ data becomes ‘hot’ again it will automatically be moved back to faster storage, and vice versa.
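A toy version of that promotion logic might look like this. The threshold and class names are invented for the sketch; real tiering engines use far richer access statistics.

```python
# Sketch of hot/cold tiering: blocks read often enough get promoted
# from the HDD tier to the flash tier. Purely illustrative.

HOT_THRESHOLD = 3  # made-up promotion threshold

class Tiers:
    def __init__(self):
        self.flash, self.hdd, self.hits = {}, {}, {}

    def read(self, key):
        self.hits[key] = self.hits.get(key, 0) + 1
        if key in self.flash:
            return self.flash[key]          # fast path: already hot
        value = self.hdd[key]
        if self.hits[key] >= HOT_THRESHOLD:  # promote hot data
            self.flash[key] = self.hdd.pop(key)
        return value
```

Demotion would be the mirror image: data whose hit counter decays below the threshold moves back to the HDD tier, freeing up flash.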
Data deduplication gets applied to both normal HDDs as well as flash / SSD storage. The Deduplication Engine continuously analyzes read I/O access patterns and granularly de-duplicates data to deliver the highest possible performance. For applications with large common working sets, such as virtual desktop infrastructure (VDI) deployments, Nutanix deduplication increases effective flash and memory resources by up to 10x and delivers nearly instantaneous application response times.
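The core trick behind any dedup engine is fingerprinting: identical blocks are stored once and referenced many times. Here’s a minimal sketch of that idea with names I made up; the real engine works at sub-block granularity and on live I/O patterns.

```python
import hashlib

# Sketch of content-based deduplication: each block is fingerprinted
# and identical blocks are stored only once. Illustrative only.

class DedupStore:
    def __init__(self):
        self.blocks = {}   # fingerprint -> data, stored exactly once
        self.refs = []     # logical writes, recorded as fingerprints

    def write(self, block):
        fp = hashlib.sha1(block).hexdigest()
        self.blocks.setdefault(fp, block)  # keep first copy only
        self.refs.append(fp)
        return fp
```

This is why VDI is such a good fit: hundreds of near-identical Windows images share the same blocks, so the effective capacity of flash and memory multiplies.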
As quoted from Nutanix.com: all processes running in the CVM (the Controller VM) are designed to fail fast when an error is encountered. Components are continuously monitored, killed and restarted in the event of an error in order to recover as quickly as possible, rather than linger in a non-responsive or corrupt state. Each host relies on its local Controller VM to service all storage requests. NDFS continuously monitors the health of all CVMs in the cluster. For example, if a Controller VM were to fail, Nutanix auto-pathing automatically re-routes requests from the host to a healthy Controller VM on another node.
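That auto-pathing behaviour boils down to a simple routing decision, sketched below with invented names: prefer the local CVM, and fall back to any healthy peer when it goes down.

```python
# Sketch of auto-pathing: each host prefers its local Controller VM
# and re-routes I/O to a healthy CVM elsewhere when the local one
# fails. Illustrative only; names are not from the Nutanix API.

class CVM:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

def route_request(local_cvm, cluster_cvms):
    """Return the CVM that should service this host's storage I/O."""
    if local_cvm.healthy:
        return local_cvm                    # fast path: stay local
    for cvm in cluster_cvms:                # fall back to a healthy peer
        if cvm.healthy and cvm is not local_cvm:
            return cvm
    raise RuntimeError("no healthy Controller VMs in the cluster")
```

The VM itself never notices; from the hypervisor’s point of view the storage path simply keeps working, just with a remote hop added.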
Note that the above list is far from complete and lacks in-depth technical details. I’ll address some of these features (in more detail) in future posts, including policy-based (software) management, which is a big step forward as well. Although Nutanix isn’t new, they started their business back in 2009, they’re still relatively small and unknown in Europe, but that’s all about to change as far as they’re concerned. Like I mentioned earlier, for me their technology is still new, but from what I’ve heard, seen and read I think they hold great potential in becoming one of the major players not too long from now; I guess only time will tell. I realise that they’re not the only ones playing the field and that’s (more than) fine, we need some sound competition, it will only make them try harder, resulting in, hopefully, better solutions.
Bas van Kaam ©
Reference materials used: Nutanix.com, Citrix.com, Wikipedia.org and Google.com