Today my post is about design and infrastructure, more specifically about when to use (or not use) a secondary site, because people have a common misconception about it.
The reason for this misconception is simple: the SCCM 2007 secondary site worked differently from the 2012 version.
In the 2007 version, you would use a secondary site where your network was slow and the number of clients was high (depending on the network speed, that could be as few as 10 clients). Another reason was an unreliable network.
In the 2012 version, Microsoft changed the way the secondary site works. The first change is that a secondary site has a database, and this database is part of the overall SCCM infrastructure replication; however, a secondary site does not hold everything a primary site does, only the information required for the machines connected to that secondary site. Another huge change is that Microsoft added network schedules and rate limits to the Distribution Point, allowing you to use a DP instead of a secondary site in most cases.
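To see why rate limits make a plain DP viable over a WAN, a back-of-the-envelope calculation helps: how long a content package takes when the DP is capped at a percentage of the link. The function name and the sample numbers below are my own illustration, not anything from the SCCM console.

```python
# Illustrative sketch: time to distribute a content package over a WAN link
# when the DP's rate limit caps it at a percentage of the bandwidth.
# All names and numbers are hypothetical examples.

def transfer_time_hours(package_gb: float, link_mbps: float, rate_limit_pct: float) -> float:
    usable_mbps = link_mbps * rate_limit_pct / 100.0
    megabits = package_gb * 8 * 1024           # GB -> megabits
    seconds = megabits / usable_mbps
    return seconds / 3600.0                    # seconds -> hours

# A 2 GB application over a 10 Mbps link, with the DP capped to 50%
# during business hours:
print(round(transfer_time_hours(2.0, 10.0, 50.0), 1))  # -> 0.9
```

Under an hour for 2 GB at half of a 10 Mbps link; combined with a network schedule that pushes content only after hours, this is why a DP covers most remote offices that would have needed a secondary site in 2007.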
The Microsoft documentation says: “The number of secondary sites per primary site is based on continuously connected and reliable wide area network (WAN) connections. For locations that have fewer than 500 clients, consider a distribution point instead of a secondary site.”
Note that Microsoft is clear about the “continuously connected and reliable WAN”: if you have an unreliable network, a secondary site should be out of the picture. Unfortunately, I’ve seen many clients using a secondary site instead of a distribution point only because that is what they used to do in SCCM 2007.
Because the secondary site has a database and is part of the SCCM infrastructure replication, we need to understand how SCCM handles it. The documentation says: “Configuration Manager groups data that replicates by database replication into different replication groups. Each replication group has a separate, fixed replication schedule that determines how frequently changes to the data in the group are replicated to other sites. For example, a configuration change to a role-based administration configuration replicates quickly to other sites to ensure that these changes are enforced as soon as possible. Meanwhile a lower priority configuration change, such as a request to install a new secondary site, replicates with less urgency and takes several minutes for the new site request to reach the destination primary site.”
This has changed a bit with SP1, as the database replication links can now be controlled. Per the documentation: “Configuration Manager database replication is configured automatically and does not support configuration of replication groups or replication schedules. However, beginning with Configuration Manager SP1, you can configure database replication links to control when specific traffic traverses the network. You can also configure when Configuration Manager raises alerts about replication links that have a status of degraded or failed.”
As a rule of thumb, I use the following:
1- A slow link is any connection below 10 Mbps.
2- A high-speed link is any connection where the latency is lower than 50 ms. I really do not care if you have 100 Mbps, because network utilization could be at 99%…
3- A reliable link is one where I see 1% or fewer dropped packets over a period of time (test this for at least one week).
4- The number of users and machines at the remote location.
This is the minimum information that can help you decide whether you need a remote DP or a secondary site…
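The rule of thumb above can be sketched as a small decision function. The thresholds come from the list, and the 500-client cutoff comes from the Microsoft documentation quoted earlier; the function and field names are my own illustration.

```python
# Minimal sketch of the DP-vs-secondary-site rule of thumb.
# Thresholds are the ones from the post; names are hypothetical.

from dataclasses import dataclass

@dataclass
class RemoteLink:
    bandwidth_mbps: float   # measured link speed
    latency_ms: float       # average latency
    packet_loss_pct: float  # dropped packets, measured over at least one week
    client_count: int       # users and machines at the remote location

def recommend_site_system(link: RemoteLink) -> str:
    slow = link.bandwidth_mbps < 10.0        # rule 1
    fast = link.latency_ms < 50.0            # rule 2
    reliable = link.packet_loss_pct <= 1.0   # rule 3

    if not reliable:
        # Unreliable WAN: per Microsoft, a secondary site is out of the picture.
        return "distribution point (secondary site is out of the picture)"
    if link.client_count < 500 and (fast or not slow):
        # Fewer than 500 clients on a decent link: a DP with a network
        # schedule and rate limits is usually enough.
        return "distribution point"
    # Many clients over a slow but reliable WAN is the remaining
    # scenario where a secondary site can still make sense.
    return "secondary site"

print(recommend_site_system(RemoteLink(100.0, 20.0, 0.2, 150)))  # -> distribution point
print(recommend_site_system(RemoteLink(5.0, 120.0, 0.5, 800)))   # -> secondary site
```

This is only a starting point; real designs should also weigh the content volume being distributed and whatever the client-side measurements actually show over that week of testing.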