This session was all about preparing for the performance and capacity requirements of 2013, but it started off by letting us know some of the challenges Microsoft was trying to solve in this new release. The main challenges they knew the infrastructure would need to accommodate were: new capabilities and features, a richer user experience, requiring both the server and the client (browser) to do more work, and building a technology that can support the needs of Office 365.
This is where a lot of numbers start coming out, but overall, with the enhancements to the product and to supporting services like the server OS and SQL Server, they claim that SharePoint 2013 has a 50% faster server response time. One large pain that everyone needed to work through was User Profile sync and the amount of time it took to synchronize user profiles and group memberships. In SharePoint 2013 you will supposedly get a 4x faster profile sync using FIM sync, and a 10x improvement if you decide to use AD Direct Import.
Some of the high-level “limits” discussed from an improvement perspective: a farm can now support 750k sites, up from 250k in 2010; each content database can now support up to 10k sites, up from 5k in 2010; and a 2013 farm can now support 500 content databases, where in 2010 the limit was 300.
Scale & Reliability
New features in 2013 can help with scale and reliability, like the Distributed Cache, which caches data from the social features as well as authentication tokens to scale up the user experience. Search improvements now enable a fully fault-tolerant model, which can also be used in a multi-tenant fashion.
The new Request Management feature lets SharePoint route traffic based on admin-defined rules. It expands on the Health Score introduced in 2010 to enable health-based routing when a server is under heavy load, being patched, or experiencing other issues. The feature can also redirect traffic based on the type of request, so you can ensure certain processes run only on the servers intended for that purpose.
One important thing to note is that Request Management does NOT mean you no longer need a load balancer (software or hardware); it just means SharePoint now has the ability to route traffic once a request has been received.
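To make the idea concrete, here is a minimal conceptual sketch (in Python, not SharePoint's actual API or cmdlets) of health-based, rule-based routing: each server carries a health score (as in SharePoint, lower is healthier) and a set of request types it should serve, and a request is sent to the healthiest eligible server. The server names and roles are hypothetical.

```python
# Conceptual sketch of health-based request routing, in the spirit of
# Request Management. NOT SharePoint's real API -- names are invented.
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    health_score: int  # 0 (healthy) .. 10 (unhealthy/throttled)
    roles: set         # request types this server is intended to handle


def route(request_type: str, servers: list) -> Server:
    """Pick the healthiest server whose roles match the request type."""
    candidates = [s for s in servers
                  if request_type in s.roles and s.health_score < 10]
    if not candidates:
        raise RuntimeError(f"No available server for {request_type!r}")
    # Lower health score wins, mirroring health-weighted routing.
    return min(candidates, key=lambda s: s.health_score)


farm = [
    Server("WFE1", health_score=2, roles={"page", "search"}),
    Server("WFE2", health_score=7, roles={"page"}),
    Server("CRAWL1", health_score=0, roles={"crawl"}),
]

print(route("page", farm).name)   # WFE1: the healthiest page server
print(route("crawl", farm).name)  # CRAWL1: crawl traffic stays isolated
```

The second lookup shows the "dedicated servers for certain request types" idea: crawl traffic never lands on a front-end server, regardless of health scores.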
Now that all services can be separated and SharePoint can route traffic based on the services required, designing your farm should allow you to set SLAs based on service areas:
- Request Management & Distributed Cache was the first area they talked about, which should have < 5 ms latency.
- Front End Services like MMS, Secure Store, Access, etc. should be the next level, requiring < 500 ms latency.
- Search services and their resource-heavy processes should have < 500 ms latency.
- Batch Processing services like the User Profile Service and Workflow, which have long processing times and do not directly involve the user experience, can survive with > 1 min availability.
- The last area, which most people are already handling today, is the Database layer, which requires < 5 ms latency.
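The tiers above can be captured as a simple SLA table you check measurements against; this is a hypothetical sketch (tier names and the helper are invented for illustration), with the batch tier left out since its target is availability rather than latency.

```python
# Hypothetical sketch: the latency targets above as a lookup table,
# with a helper that flags measurements missing their target.
SLA_MS = {
    "request_mgmt_cache": 5,    # Request Management & Distributed Cache: < 5 ms
    "front_end_services": 500,  # MMS, Secure Store, Access, etc.: < 500 ms
    "search": 500,              # resource-heavy search processes: < 500 ms
    "database": 5,              # SQL tier: < 5 ms
}
# Batch Processing (User Profile sync, Workflow) is measured in
# availability (> 1 min is acceptable), so it has no latency entry.


def meets_sla(tier: str, measured_ms: float) -> bool:
    """True when the measured latency is within the tier's target."""
    return measured_ms < SLA_MS[tier]


print(meets_sla("database", 3.2))            # True: within the 5 ms target
print(meets_sla("front_end_services", 750))  # False: misses the 500 ms target
```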
When defining your topology there is no silver bullet, and the items to consider are: workload (publishing, social, collaboration, etc.), hardware, dataset, and SLAs.