Overview
In general, customers with well-specified servers can run SQL Server using the default installation parameters. However, with some planning you can apply the recommended best practices below to optimize the configuration of your SQL Server installation and achieve the best possible performance when the system is under load.
If you simply want to maximize system performance, or you are aware that your environment may introduce performance issues and wish to avoid them, you should review these options carefully. The sheer range of options available when configuring SQL Server, together with variations in the host environment and in precisely how Discovery Center will be used, means that no single set of "best" settings can be prescribed. However, all the steps outlined in this document are established best practices or have been proven to address real-world deployment issues, and they provide a sound initial baseline.
Note that many of these settings affect the whole server; if the Discovery Center database is co-located with other SQL Server databases or instances, the needs of all dependent applications should be balanced.
Apply latest SQL Server updates
You should keep your SQL Server installation up to date to benefit from the latest enhancements. Each release of SQL Server brings enhancements to, and fixes for, query optimization, tempdb usage, and other areas that improve the overall operation of the Discovery Center application.
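To confirm the currently installed version and patch level before checking Microsoft's update lists, a query along these lines can be run (the comment values are examples only):

```sql
-- Report the installed SQL Server version, patch level and edition.
SELECT
    SERVERPROPERTY('ProductVersion')     AS ProductVersion,  -- e.g. 15.0.4312.2
    SERVERPROPERTY('ProductLevel')       AS ProductLevel,    -- e.g. RTM or SP level
    SERVERPROPERTY('ProductUpdateLevel') AS UpdateLevel,     -- e.g. CU25
    SERVERPROPERTY('Edition')            AS Edition;
```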
Consider I/O performance
The performance of the underlying data storage layer, whether local disk or SAN based, is critical to the overall performance of SQL Server. If you have a choice of storage options, select the best-performing one; otherwise you risk running into an I/O bottleneck.
A well-specified storage layer should sustain a throughput of 50-100 MB/s with latency below 20 ms.
A typical way to work toward this goal is to spread the database files across multiple drives: for instance, dedicate separate drives to tempdb, the database data files, and the database transaction logs. If SAN-based disk arrays are used for storage, it may not be possible to genuinely separate the I/O for these components, but you still gain the ability to monitor available space independently for each class of file.
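Before committing to a storage layout, raw throughput and latency can be measured with Microsoft's DiskSpd utility. The run below is a sketch only: the target path, file size, duration, thread count and read/write mix are illustrative assumptions and should be tuned to approximate your workload.

```powershell
# Illustrative DiskSpd run against the intended data drive (path is hypothetical).
#   -c10G  create a 10 GB test file      -d60  run for 60 seconds
#   -b64K  64 KB blocks                  -o8   8 outstanding I/Os per thread
#   -t4    4 worker threads              -w25  25% writes / 75% reads
#   -r     random access                 -L    capture latency statistics
DiskSpd.exe -c10G -d60 -b64K -o8 -t4 -w25 -r -L E:\diskspd-test.dat
```

Compare the reported MB/s and latency percentiles against the 50-100 MB/s and < 20 ms targets above, and remember to delete the test file afterwards.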
Optimal tempdb Configuration
tempdb is a critical part of the SQL Server architecture and can have a significant impact on the performance of the system; however, the default "out of the box" configuration does not reflect recommended practice.
We recommend that tempdb be configured as follows:
- Use 8 separate data files, ideally spread across more than one disk; this helps to reduce I/O contention when accessing tempdb.
- Make each data file the same size, with auto-growth disabled.
The correct sizing of tempdb depends on data volumes and other factors such as available memory, but a reasonable starting point is 8 files of 5 GB each. If you encounter errors due to lack of space in tempdb, increase all files by the same amount so that they remain equally sized.
For a new installation of SQL Server 2016 or later, the tempdb data files can be specified in the installer. For existing installations, or earlier versions, the changes must be applied using a script or SQL Server Management Studio.
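As an illustration, a script along the following lines can resize the existing tempdb files and add files up to the recommended eight. The logical file names and the target path are assumptions; check the names in use on your instance first.

```sql
-- List the current tempdb files and their locations before making changes.
SELECT name, physical_name, size FROM tempdb.sys.database_files;

-- Resize the default primary data file ('tempdev' on most installations)
-- to 5 GB with auto-growth disabled.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 5GB, FILEGROWTH = 0);

-- Add further equally sized files (repeat with distinct names/paths until
-- eight data files exist). The drive letter here is hypothetical.
ALTER DATABASE tempdb
    ADD FILE (NAME = temp2,
              FILENAME = 'T:\TempDB\temp2.ndf',
              SIZE = 5GB, FILEGROWTH = 0);
```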
Memory Configuration
By default SQL Server imposes no limit on maximum memory usage, and SQL Server Analysis Services (SSAS) is permitted to use 80% of system memory. On a system that is not operating at the limit of its available RAM, this default configuration is not usually a problem. However, if SQL Server and SSAS are co-hosted on the same server, then when those limits are reached the contention for memory between SQL Server, SSAS and Windows may lead to degraded performance.
A guideline for more controlled memory configuration for a server hosting SQL and SSAS is:
- SQL Server: Memory / Minimum - 20%; Memory / Maximum - 80%
- SSAS: Memory \ LowMemoryLimit - 20%; Memory \ TotalMemoryLimit - 80%
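On the SQL Server side these limits are set in megabytes via sp_configure. The example below assumes a server with 64 GB of RAM; scale the values to your own hardware.

```sql
-- Example for a server with 64 GB RAM (values are in MB; adjust to suit).
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'min server memory (MB)', 13107;  -- ~20% of 64 GB
EXEC sys.sp_configure 'max server memory (MB)', 52429;  -- ~80% of 64 GB
RECONFIGURE;
```

The SSAS LowMemoryLimit and TotalMemoryLimit properties are set separately, in the Analysis Services instance properties in SQL Server Management Studio (or in msmdsrv.ini); values of 100 or below are interpreted as percentages of total memory.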
Adjustments for parallel execution plans
The default "out of the box" values of the parameters that govern the use of parallel execution plans are not appropriate for modern hardware, and this has been observed to cause performance issues in some Discovery Center operations.
We recommend the following settings be applied:
- Cost Threshold For Parallelism: 50
- Max Degree of Parallelism (MAXDOP): based on the number of CPU cores
- <= 4 cores, MDOP = 1
- 8 cores, MDOP = 2
- > 8 cores, MDOP = 4
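Both settings are applied via sp_configure; the sketch below assumes a server with 8 CPU cores, which per the table above gives MAXDOP = 2.

```sql
-- Example for an 8-core server (MAXDOP = 2 per the table above).
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'cost threshold for parallelism', 50;
EXEC sys.sp_configure 'max degree of parallelism', 2;
RECONFIGURE;
```

Both options take effect immediately after RECONFIGURE; no service restart is required.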
Set Server Power Plan
If the system hosting the SQL Server instance is using the default "Balanced" power plan, this may limit overall system performance. Setting it to "High Performance" may yield performance improvements at the cost of increased power consumption. The effectiveness of this change may also depend on the system BIOS settings.
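The active plan can be inspected and switched from an elevated command prompt; the GUID below is the built-in High Performance scheme on Windows.

```powershell
# Show the currently active power plan.
powercfg /getactivescheme

# Switch to the built-in High Performance plan.
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
```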