Planning for Azure File Sync: Key Considerations

Here are some tips on how to properly size the Azure File Sync server to ensure effective performance.

Brien Posey

May 8, 2021

4 Min Read

Microsoft Azure File Sync is a service designed to make it more practical to store unstructured data in Azure Files. Azure File Sync caches hot data on premises, which gives users a better overall experience. According to Microsoft, the Azure File Sync server requires a minimum of 2 GB of memory, but 2 GB is rarely going to be adequate for a real-world deployment. Properly sizing the Azure File Sync server is essential to ensuring a good level of performance.

Before you can determine the resources that need to be allocated to an Azure File Sync server, you will need to be familiar with the concepts of server endpoints and sync groups.

Microsoft defines a server endpoint as the path on a Windows file server that is being synced to an Azure file share. In most cases, the server endpoint will be mapped to a volume root, but it also is possible for a server endpoint to map to a folder.

A sync group is the mechanism that tells Azure File Sync what to synchronize. A sync group might form a sync relationship between, for example, an Azure file share and a server endpoint.

When it comes to planning for Azure File Sync, it is important to understand that a single Windows file server can have multiple server endpoints. An administrator might simply choose to synchronize some folders but not others, or the server might include multiple volumes, each of which is being synchronized.

If a file server includes multiple server endpoints, it will be necessary to create multiple sync groups. Remember, the endpoints within a sync group are synchronized with one another. Therefore, if a server contains multiple endpoints, those endpoints should not be combined into a single sync group. Instead, there should be a separate sync group for each endpoint.
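If it helps to visualize the one-sync-group-per-endpoint rule, here is a minimal sketch in Python that models the relationship. The server name, volume paths and share names are hypothetical illustrations, and this is only a data-structure picture of the concept, not Azure File Sync's actual API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServerEndpoint:
    server: str   # registered Windows file server
    path: str     # volume root or folder being synced

@dataclass
class SyncGroup:
    name: str
    cloud_endpoint: str   # Azure file share backing this group
    server_endpoints: List[ServerEndpoint] = field(default_factory=list)

# A file server with two synced volumes gets two sync groups, each pairing
# one server endpoint with its own Azure file share. (Names are made up.)
sync_groups = [
    SyncGroup("sg-data", "files-data", [ServerEndpoint("FS01", "D:\\")]),
    SyncGroup("sg-archive", "files-archive", [ServerEndpoint("FS01", "E:\\")]),
]

for group in sync_groups:
    for endpoint in group.server_endpoints:
        print(f"{group.name}: {endpoint.server} {endpoint.path} <-> {group.cloud_endpoint}")
```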

This distinction is important because the calculations used to determine the resources needed for an Azure File Sync server are based on the number of files and directories being synchronized. To arrive at this number, count the files and directories associated with each server endpoint and then add those counts together. The resulting total is known as the namespace size.
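As a rough illustration, the snippet below counts the files and directories beneath each server endpoint path and totals them into a namespace size. The endpoint paths are hypothetical examples; on a real server you would substitute your own paths, and the count can take a while on large volumes.

```python
import os

# Hypothetical server endpoint paths; replace with your own.
endpoint_paths = [r"D:\shares\finance", r"E:\shares\engineering"]

def count_namespace_objects(path: str) -> int:
    """Count every file and directory beneath a server endpoint path."""
    total = 0
    for _root, dirs, files in os.walk(path):
        total += len(dirs) + len(files)
    return total

namespace_size = sum(count_namespace_objects(p) for p in endpoint_paths)
print(f"Total namespace size: {namespace_size:,} files and directories")
```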

Microsoft provides a chart detailing the number of CPU cores and the amount of memory that an Azure File Sync server will need based on the total namespace size. However, the hardware requirements are not quite as straightforward as one might suppose.

Part of the reason for this is that the initial synchronization process consumes more memory than ongoing day-to-day operations typically do. In the case of a namespace with 3 million objects, for example, Microsoft estimates that the initial synchronization will require 8 GB of RAM, but subsequent use will require only 2 GB of memory. This 2 GB of memory helps to accommodate churn stemming from the normal creation, deletion and modification of files.
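To put that example into back-of-the-envelope form, the sketch below scales the 3-million-object figures cited above (8 GB for the initial sync, 2 GB afterward) linearly to an arbitrary namespace size. The linear scaling is purely my own simplifying assumption for illustration; Microsoft's published table, not this formula, should drive your final sizing.

```python
# Reference point from the example above: a 3-million-object namespace needs
# roughly 8 GB of RAM for the initial sync and 2 GB for steady-state churn.
REF_OBJECTS = 3_000_000
REF_INITIAL_GB = 8
REF_STEADY_GB = 2

def estimate_memory_gb(namespace_size: int) -> tuple:
    """Linearly scale the 3M-object reference point (illustrative only)."""
    scale = namespace_size / REF_OBJECTS
    return REF_INITIAL_GB * scale, REF_STEADY_GB * scale

initial, steady = estimate_memory_gb(10_000_000)
print(f"~{initial:.0f} GB for initial sync, ~{steady:.0f} GB thereafter")
```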

Here is a table of Microsoft’s recommendations:

[Table image: Microsoft's recommended CPU cores and memory for Azure File Sync servers, by namespace size and typical capacity]

One thing to keep in mind as you look at the table above is that these values are not absolute. There are factors that could require an organization to allocate additional resources to its Azure File Sync server.

The first of these factors is the rate of churn. Microsoft’s numbers are based on a churn rate of 0.5% per day. This number refers to the namespace size (with regard to the number of files and folders), not the volume capacity. For example, a namespace with 5 million files and folders and a 0.5% churn rate might expect to see 25,000 changes per day to the namespace. Churn primarily impacts CPU usage, so if you are anticipating a higher churn rate you might consider adding some extra CPU cores.
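Translating that churn math into a quick calculation, the snippet below estimates the daily change volume from the namespace size and an assumed churn rate. The 5-million and 0.5% figures come straight from the example above.

```python
namespace_size = 5_000_000   # files and directories across all endpoints
daily_churn_rate = 0.005     # Microsoft's baseline assumption of 0.5% per day

expected_daily_changes = namespace_size * daily_churn_rate
print(f"Expected changes per day: {expected_daily_changes:,.0f}")  # 25,000
```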

The average file size can also impact the hardware requirements. If you look back at the table, you will notice that the last column references the typical capacity. If you compare the typical capacity to the namespace size, you will find that the math works out to an average file size of about 500 KB, give or take. If your average file size is larger than that, the capacity will increase even if the namespace size remains the same. Keep in mind, though, that the namespace itself is stored in memory, so as the namespace grows, so does the need for RAM.
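To see how average file size drives capacity independently of the namespace size, the short sketch below compares the roughly 500 KB average implied by the table with a larger hypothetical average. The 3-million-file namespace is just an illustrative figure.

```python
namespace_files = 3_000_000   # files in the namespace (illustrative)

# Compare the ~500 KB average implied by the table with a larger average.
for avg_file_size_kb in (500, 2_000):
    capacity_tb = namespace_files * avg_file_size_kb / 1_000_000_000
    print(f"Average {avg_file_size_kb} KB per file -> ~{capacity_tb:.1f} TB of data")
```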

So, as you can see, the chart shown above can give you a ballpark estimate of the Azure File Sync server’s hardware needs, but significantly more memory and CPU cores may be required depending on how many files you are syncing, how large those files are, and what kind of churn rate you are experiencing.

About the Author

Brien Posey

Brien Posey is a bestselling technology author, a speaker, and a 20X Microsoft MVP. In addition to his ongoing work in IT, Posey has spent the last several years training as a commercial astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space.

https://brienposey.com/
