Citrix Provisioning Services 7… A sneak preview

With the approaching release of XenDesktop 7 also comes Provisioning Services 7 (PVS from now on). I’m not sure if both products will be released at the same time, but it wouldn’t be a surprise; let’s just leave it at that. Although the basic functionality and underlying architecture haven’t changed significantly over the past few years, PVS has become a very popular platform and continues to grow each day. With Machine Creation Services (MCS for short) on its heels, especially now that MCS has again been improved in XenDesktop 7 and the EOL of Windows XP is nearing, PVS will have to dig deep to keep up. I guess it’s up to Citrix which one will eventually come out on top.

UPDATED – 26-06-2013

Citrix has officially launched XD7, including PVS 7. I wrote this blog a few days before the official launch, so some info that wasn’t available at the time of writing might be missing. Although I covered most of it, give the Citrix E-Docs site a read-through; it will fill in the potential blanks.

PVS 7 is still a stand-alone product (sort of) and is preferably installed on a separate server, which nowadays can just as easily be virtual as physical; read this blog by Citrix. It’s part of XenApp and/or XenDesktop if you buy the proper licenses (Enterprise and Platinum). Stand-alone also means an extra (PVS) management console, another disadvantage, however small it may be, compared to MCS.

When using MCS you won’t need to build up a separate (PVS) infrastructure. Although PVS can be part of XD7, it will still need a separate server, you may need to configure your DHCP environment appropriately, PXE might be needed, and so on (although PXE and DHCP functionality can now be replaced with a BDM disk; read on). MCS directly leverages the underlying storage managed by your Hypervisor, and you can do without PXE and/or DHCP reconfiguration, of course. However, PVS does provide some enhanced image management features and storage IO optimizations which might be beneficial in larger environments. On the other hand, as of XD7 MCS can take advantage of a Hyper-V 3.0 feature called Clustered Shared Volume Read Caching. I got this from one of the Citrix blogs:

XenDesktop 7 can take advantage of this capability to reduce storage IO for MCS catalogs during boot and logon storms. The effect is similar to that of the caching that takes place on PVS hosts, except that the blocks are delivered once to each Hyper-V host and then shared among the VMs on that host. CSV caching makes use of host RAM for this cache, so there will be some trade-off between cache size and the amount of RAM available to VMs. But hey… we are talking about PVS 7 here; I’ll address MCS in more detail while updating my previous blog about the FMA. Let’s continue.

The slightly renewed basics

PVS 7 can be installed on the following Operating Systems:

- Windows Server 2012 Standard, Essentials and Datacenter edition
- Windows Server 2008, 32- and 64-bit, all editions
- Windows Server 2008 R2 and Windows Server 2008 R2 SP1, Standard, Enterprise and Datacenter edition

PVS 7 supports the following database software:

- Microsoft SQL Server 2008
- Microsoft SQL Server 2008 R2
- Microsoft SQL Server 2012

This can be either the Express, Workgroup, Standard or Enterprise edition, which goes for all three platforms.

PVS 7 supports the following Target Devices:

- Windows Server 2012 Standard, Essentials and Enterprise edition
- Windows Server 2008 R2 SP1 Standard, Enterprise and Datacenter edition
- Windows 8, 32- and 64-bit
- Windows 7 SP1, 32- and 64-bit, Enterprise, Professional and Ultimate edition
- Windows XP SP3 32-bit and Windows XP SP2 64-bit, both without an XD7 agent, since XP isn’t supported in XD7

So XP is a valid target device when it comes to PVS; it just can’t be used in conjunction with XD7.

New and improved features

Although not a direct PVS feature per se (it comes from the underlying Operating System), when used together with Hyper-V 3.0, PVS can leverage SMB 3.0 enabled shares as shared storage. Use them to store vDisk files or for write cache purposes.
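As a minimal sketch of what that could look like on the file-server side: assuming a Windows Server 2012 file server, a hypothetical folder, share name and service account (all made up for illustration, not taken from the post), creating an SMB 3.0 share for a vDisk store might go something like this:

```
# PowerShell on the Windows Server 2012 file server (names are examples):
New-Item -Path D:\vDiskStore -ItemType Directory
New-SmbShare -Name vDiskStore -Path D:\vDiskStore -FullAccess "DOMAIN\svc-pvs"

# The PVS store path would then point at the resulting UNC path, e.g.:
# \\fileserver01\vDiskStore
```

The key point is simply that the store (or write cache) path can be a UNC path on an SMB 3.0 share instead of a local disk; the account running the PVS Stream Service needs access to that share.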

When Hyper-V 3.0 is used as the underlying Hypervisor, MCS supports the VHDX format. Secondary disks attached to the virtual machine, destined for PVS write cache for example, will also automatically leverage the ‘new’ VHDX format; the same goes for PVS Personal vDisks. Disks that are accessed and managed directly from the Provisioning Server itself will continue to use the VHD format, since PVS can still be used on Windows Server 2008 R2 as well.

Again, when Hyper-V 3.0 is used as the underlying Hypervisor, you still need to configure Legacy Network Adapters on the virtual machines, just like before. But… once the initial boot process is completed and the streaming process starts, PVS will automatically switch the virtual adapter over to synthetic mode, which enhances streaming performance.

In some environments, for whatever reason, using or reconfiguring PXE / DHCP is a no-go. As a way around PXE and/or DHCP reconfiguration you can use the Boot Device Manager instead. Although the concept isn’t new, the way it gets configured and provisioned is: no more separate BDM utility. During the setup of your virtual machines using the XenDesktop Setup Wizard (virtual machine preferences) you can now configure a so-called BDM disk (create a Boot Device Manager partition). It contains all the information needed to complete the initial boot process. No more PXE, DHCP or TFTP setup or reconfiguration needed! It will appear as an xxxbdm.vhdx virtual hard disk attached to the virtual machine. PVS battling back!

PVS 7 supports multiple NICs on the (physical) Provisioning Server itself, to split out the streaming and other network traffic generated. Nothing new, but it’s now configurable through the Provisioning Services Configuration Wizard, which looks nice.

As a side note on XenDesktop 7: from within XD7 you can create a Delivery Group containing both PVS and MCS provisioned machines. Perhaps not something you will implement in ‘real life’, but it shows you just how flexible the product is. Could be nice for POCs though.

TFTP Load Balancing

(Improved) TFTP Load Balancing comes with NetScaler 10.1. The overall configuration process of TFTP services, and thus the TFTP virtual servers, has been simplified (or so they say) as opposed to the old days (not that I ever tried it, but still). It now offers intelligent monitoring capabilities which are pretty straightforward to set up, configure and bind to your TFTP virtual server. Choosing your preferred load balancing mechanism is as easy as selecting your method of choice from a dropdown menu; Round Robin, Least Connection and Least Bandwidth, to name a few, are all available methods. Next you point your TFTP virtual server to your actual TFTP servers, and finally, in DHCP, you configure option 66 with the NetScaler address and fill in the bootstrap file name with option 67, and you are good to go, High Availability included.
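To make the flow above a bit more concrete, here is a rough sketch of what the NetScaler CLI and DHCP side could look like. All IP addresses, scope, service and vserver names below are made up for illustration; the bootstrap file name is the usual PVS default, but verify it against your own environment:

```
# NetScaler 10.1 CLI: two backend PVS/TFTP servers behind one TFTP vserver
add service tftp_svc_1 192.168.1.11 TFTP 69
add service tftp_svc_2 192.168.1.12 TFTP 69
add lb vserver tftp_vs TFTP 192.168.1.100 69 -lbMethod ROUNDROBIN
bind lb vserver tftp_vs tftp_svc_1
bind lb vserver tftp_vs tftp_svc_2

# DHCP side (Windows DHCP server via netsh; scope is an example):
# option 66 points the target devices at the NetScaler VIP,
# option 67 names the bootstrap file to fetch over TFTP
netsh dhcp server scope 192.168.1.0 set optionvalue 066 STRING "192.168.1.100"
netsh dhcp server scope 192.168.1.0 set optionvalue 067 STRING "ARDBP32.BIN"
```

The target devices then PXE-boot, receive the NetScaler address via option 66, and the NetScaler load balances the TFTP download of the bootstrap file across the backend servers.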

Image creation

About six months ago some significant performance and usability improvements in the PVS image capture and catalog creation tools were announced. I haven’t heard or read anything about that since, to be honest, nothing specific anyway; perhaps it’s still to come.


Unfortunately I haven’t had the chance to get down and dirty with PVS, new or old(er) editions for that matter. Sure, I know my theory, did the exams and all that, but nothing beats the real thing if you ask me. Hopefully it will be around long enough for me to get a taste ;-) but I’m sure it will be. I’m wrapping up for now; I already have another subject planned and ready to go.

Bas van Kaam ©

Reference materials used: the E-Docs website.


6 thoughts on “Citrix Provisioning Services 7… A sneak preview”

  1. Phani


    If I have to choose between MCS and PVS while deploying XD7, what would you suggest? Also, can you give me some of the differences between MCS and PVS in XD7, please?

    Thanks | Phani

    1. Bas van Kaam Post author

      Hi Phani,

      Regarding MCS, have a look here:

      For PVS give the eDocs website a visit:

      As far as which one to choose… it all depends on the situation; I can’t give a direct answer.
      Also give my blogs on Provisioning Server and the FMA another read; they explain some of the differences in features between the two, which should also help you decide which one to choose.

      Hope it helps.



      1. albertwt

        Hi Bas,

        Can we conclude it like this:

        With PVS, it is good for larger VDI desktop deployments, where the Storage Array is still traditional (no automated tiering).

        With MCS, better to be used for smaller deployments (under 100 VMs), where the Storage Array is fast enough, with auto tiering.

      2. Bas van Kaam Post author

        Hi Albert,

        Of course there’s a bit more involved, but yes, you could say that PVS is probably the better choice for larger VDI and/or HSD orientated deployments. Do note that MCS is getting closer by the day and in some cases will do just fine; this will also depend on the rest of your infrastructure.

        Auto tiering isn’t a necessity per se, although faster storage, or some sort of underlying Cache / IOPS enhancer, does help. Don’t forget that with MCS only virtual machines are supported.



  2. Roy

    “PVS 7 is still sold as a stand-alone product and, if possible, installed on a separate server preferably a physical one.” <===The physical statement is a bit outdated, see also the following blog:

    18 months ago, the answer to this question was “maybe” and we really only recommended virtualizing PVS on XS in smaller deployments
    Today, after the recent release of XS 6.1.0, which offers true active-active bonding for up to 4 NICs, the answer to this question is “almost always!”

    On VMware platforms, PVS virtualization was already the preferred solution.

