When configuring and maintaining SharePoint Server 2013 relational databases on SQL Server 2008 R2 with Service Pack 1 (SP1), SQL Server 2012, or SQL Server 2014, you have to make choices that promote security and performance.
Microsoft introduced the streamlined topology with SharePoint 2013, a new way of building and configuring the farm. In terms of physical topology, SharePoint has used a three-tier approach since MOSS 2007: a web tier, an application tier, and a database tier. SharePoint 2010 changed little and kept essentially the same three-tier topology, except at the application tier, where dedicated hosts could be added for high-demand service applications such as PerformancePoint, Search, and others.
The 2013 version added more service applications, and many of them can be categorized into similar groups, either by their RAM and CPU requirements or by their throughput, latency, and workload/resource utilization, in order to optimize system resources and maximize the user experience. All of the WCF and Windows services can be divided into very low, low, and high latency tolerance, which may require splitting the application tier into multiple tiers, one for each class of latency-tolerant service application.
Microsoft provides an alternative farm design topology by redefining the traditional web and application tiers into multiple tiers. The traditional web tier is redefined as the Caching and Request Processing tier, which groups the web front-end hosts that process end-user requests together with new service applications, including Request Management and Distributed Cache, that require very low latency but extremely high throughput. Request Management is disabled by default, while Distributed Cache is enabled by default. Because Request Management is CPU intensive and Distributed Cache is memory intensive, both services can share the same server without any major performance hit.
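The co-location reasoning above can be sketched as a simple model. This is illustrative only, not a SharePoint API: the `Service` type and the `can_colocate` rule are hypothetical, encoding just the idea that two services whose bottlenecks differ can share a host.

```python
# Hypothetical model of the co-location rule: two services can share
# a host when their dominant resource (bottleneck) differs.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    dominant_resource: str  # "cpu" or "memory"

def can_colocate(a: Service, b: Service) -> bool:
    """Services with different bottlenecks can share a server
    without a major performance hit."""
    return a.dominant_resource != b.dominant_resource

request_management = Service("Request Management", "cpu")
distributed_cache = Service("Distributed Cache", "memory")

print(can_colocate(request_management, distributed_cache))  # True
```

Two CPU-intensive services, by contrast, would compete for the same bottleneck and fail this check.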
The traditional application tier is divided into two optimized tiers: front-end servers and batch-processing servers.
FRONT-END SERVERS:
This tier groups service applications that serve user requests directly, with low resource use and low latency, optimized for faster performance and response time. Services such as Managed Metadata, Central Administration, User Profile, App Management, the Search query role, and Business Data Connectivity are ideal candidates.
BATCH-PROCESSING SERVERS:
This tier groups service applications that typically run long background processes, with high resource use and high latency tolerance, optimized for heavier workloads by maximizing system resources. Ideal candidates include Work Management, User Profile Synchronization, Workflow, Machine Translation, the Search crawl and index roles, and others. In large-scale farms, the batch-processing tier can be subdivided further into specialized-load servers for applications such as PerformancePoint, Search, or Excel Services, which can cause high performance spikes during peak usage periods.
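The grouping above can be expressed as a simple lookup. This sketch merely restates the article's lists; the set names and the `tier_for` helper are hypothetical and not an official SharePoint configuration mechanism.

```python
# Illustrative mapping of SharePoint 2013 service applications to
# streamlined-topology tiers, based on the groupings described above.
FRONT_END = {
    "Managed Metadata", "Central Administration", "User Profile",
    "App Management", "Search Query", "Business Data Connectivity",
}
BATCH_PROCESSING = {
    "Work Management", "User Profile Sync", "Workflow",
    "Machine Translation", "Search Crawl", "Search Index",
}
# In large farms, spike-prone applications may get dedicated
# specialized-load servers within the batch-processing tier.
SPECIALIZED_LOAD = {"PerformancePoint", "Search", "Excel Services"}

def tier_for(service: str) -> str:
    """Return the streamlined-topology tier for a service application."""
    if service in FRONT_END:
        return "front-end"
    if service in BATCH_PROCESSING:
        return "batch-processing"
    if service in SPECIALIZED_LOAD:
        return "specialized load"
    return "unassigned"

print(tier_for("Managed Metadata"))  # front-end
print(tier_for("Search Crawl"))      # batch-processing
```

The point of the model: placement is driven by latency tolerance and resource profile, not by a fixed one-service-per-server rule.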
The database tier remains the same in both the streamlined and the traditional model. The database servers can be mirrored, clustered, or configured with AlwaysOn availability groups. Depending on the situation, farm size, and number of users, the traditional three-tier approach may still suffice, as long as ample hardware resources such as CPU and RAM are allocated.