Why I Use Thick Images for Lab Deployment

There are several best practices in Operating System Deployment (OSD). One of them is to use “thin” (or reference) images, which contain only a patched OS and perhaps the (patched) Office suite of your choice. After the thin image is applied, you install the software needed for the device, including such mainstays as Flash and the Java Runtime Environment (I joke, but then again, not). I use the thin-image approach for employee PCs, but not for computer labs.

Both MDT and SCCM (and, I am sure, a host of other tools) support the reference-image method. It’s the right approach in many cases because:

  • It’s flexible: you can target the software that needs to be installed on each device based on the computer’s location, the target user, etc. The same goes for driver management. I realize that using a thick image doesn’t necessarily mean you include drivers, but in some thick-image deployment scenarios (looking at you, old Ghost) it was pretty much common practice.
  • It’s secure: you deploy the latest version of each needed software package immediately.
  • It’s up-to-date: no matter how often you build a new thick image, it will be out of date soon after.

These benefits are tangible. When discussing OSD for employee devices, I agree that thin images are the way to go. Yet, for the computer labs I manage, I use thick images. There are several reasons why.

Deployment Time and Bandwidth

A thick image is significantly larger than a thin one, but it deploys much faster than installing about 40 software packages sequentially. For comparison: building my thick image takes about 5 hours (paid once), while deploying it takes about 1h20m per run (on a good network day).

Once the thick image is transferred and applied, the job is done. It actually saves bandwidth, too: the image can be transferred using multicast, while the individual software installations happen over SMB, where every machine pulls every byte via unicast. I work with 220+ computers in our labs, so the deployment-time and bandwidth argument holds. If you have 10 systems, maybe not so much.
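A back-of-the-envelope calculation shows why multicast matters at this scale. The sizes below are illustrative assumptions, not measurements from my environment:

```python
# Rough comparison of bytes crossing the wire when deploying 220 lab machines.
# All sizes are hypothetical round numbers for illustration only.

MACHINES = 220
THICK_IMAGE_GB = 40   # assumed size of the thick image
THIN_IMAGE_GB = 8     # assumed size of the thin reference image
PACKAGES_GB = 25      # assumed combined size of ~40 software packages

# Multicast: the image is streamed once and every machine listens to the
# same stream, so the network carries roughly one copy.
thick_multicast_gb = THICK_IMAGE_GB * 1

# Unicast over SMB: every machine pulls its own copy of the thin image
# and of every package.
thin_unicast_gb = MACHINES * (THIN_IMAGE_GB + PACKAGES_GB)

print(f"Thick image via multicast: ~{thick_multicast_gb} GB on the wire")
print(f"Thin image + packages via unicast: ~{thin_unicast_gb} GB on the wire")
```

Even if the thick image is several times larger per machine, one multicast copy is a tiny fraction of 220 unicast copies.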


Consistency

Having a known configuration across the entire lab helps when troubleshooting. Sure, if you’ve made one mistake in the thick image, it’s everywhere. But the workaround or solution will also work everywhere!

When using a thin image, all systems are consistent when deployed at the same time. However, if a machine needs to be redeployed (because of a hardware failure, or because gremlins messed with it), that one machine may end up with different versions of some software. Having the latest versions deployed is a benefit for employee machines; it’s a negative for a lab environment.
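To make the drift concrete, here is a small sketch. The application names and version numbers are invented, but this is exactly the kind of mismatch a redeploy-months-later produces:

```python
# Hypothetical installed-software inventories for two lab machines deployed
# from the same thin image at different times. Names/versions are made up.
deployed_in_august = {"Java Runtime": "8u281", "Stats Package": "14.1"}
redeployed_in_october = {"Java Runtime": "8u301", "Stats Package": "14.2"}

def version_drift(a, b):
    """Return apps installed on both machines whose versions differ."""
    return {app: (a[app], b[app])
            for app in a.keys() & b.keys()
            if a[app] != b[app]}

drift = version_drift(deployed_in_august, redeployed_in_october)
print(drift)  # every shared app differs: the lab is no longer uniform
```

For an instructor writing step-by-step handouts, even a minor-version difference on one machine out of 220 generates support calls.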

Manual Steps

It must be nice to live in a world where all software can be installed silently with full control over features, settings, and activation. That’s not the world I live in. It’s gotten much, much better over the past 10 years. Yet, there are those pesky applications that we absolutely need to install that require a manual step. Doing that manual step once when creating the thick image is fine; doing it 220+ times just isn’t feasible. It’s time-consuming and error-prone (I am not accusing anyone of making mistakes; I am just talking about me).

So, that’s it. I will say that good practice also means you automate the building of your reference or thin image. That goes for a thick image too. With the exception of the two or three manual steps I have left, everything else is automated. Automating the many new installers that are needed each year does consume time. I have to admit that after trying to install Visual Studio* just right for the 99th time, I am sometimes ready to give up and make it a manual step. Then I look at myself in the mirror and say, “Self, stop it! There are 39 other applications that install without intervention because you made the effort up front to get it right. Keep at it – VS will bend to your will!” And I get right back to it and get VS to do what I want it to.
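Most of that automation boils down to capturing the right silent-install command line once and replaying it unattended. A minimal sketch of the idea — the application names, paths, and switches below are hypothetical, not my actual package list:

```python
import subprocess

# Hypothetical silent-install catalog: each command line was worked out once
# up front so the build can replay it with no one at the keyboard.
CATALOG = {
    "ExampleEditor": [r"\\server\pkgs\editor\setup.exe", "/S"],
    "StatsApp": [r"\\server\pkgs\stats\install.exe", "/quiet", "/norestart"],
}

def install_all(catalog, run=subprocess.run):
    """Run each installer in turn; return the apps whose installer failed."""
    failed = []
    for app, cmd in catalog.items():
        result = run(cmd)
        if result.returncode != 0:
            failed.append(app)
    return failed
```

The `run` parameter is injectable so the loop can be dry-tested with a fake runner before pointing it at real installers during the image build.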

*Visual Studio was not harmed during the writing of this post; I am only using it as an example. It often takes more than a few attempts to get the correct parameters or auto-answer file for a complex installer. PRO TIP: Use a VM, set everything up to just the point where you would enter the command, and take a snapshot.
