This new technology, feature, product, or tool can increase performance and save our company time and money!

It’s a common notion in the world of data management that new features can alleviate longstanding shortfalls while also increasing productivity and performance. Choosing which technologies or features to implement, and when to implement them, is becoming more challenging for organizations today.

Consulting has afforded me the opportunity to work with the latest and greatest in business intelligence. While new and exciting technology keeps the work fresh, it also brings a variety of challenges that are not always directly related to the development work or the management of the project itself.

Recently, I stepped into a data warehousing project midway through its development, and it became apparent that a newly implemented feature, memory-optimized tables, needed further evaluation.

Memory-optimized tables have been a feature of SQL Server since 2014 and have improved steadily with each successive release. Their primary function is to allow much faster read and write speeds: because the table data is stored entirely in memory, access avoids the latency of standard disk-based storage. More recently, SQL Server 2016 added support for columnstore indexes on memory-optimized tables, which drastically improves the performance of analytical queries against them. At a glance, this would seem to translate to increased performance for data warehouse applications, given that their query workloads are almost exclusively analytical in nature. However, several factors require additional attention, particularly the manner in which the data warehouse ETL transfers and transforms data from the source system.
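
For illustration, a memory-optimized table carrying a columnstore index might be declared as follows. This is only a sketch: the table and column names are hypothetical, and the database must already contain a memory-optimized filegroup.

```sql
-- Hypothetical fact table held entirely in memory, with a clustered
-- columnstore index to serve analytical scans (SQL Server 2016+).
CREATE TABLE dbo.FactSales
(
    SalesKey     BIGINT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    ProductKey   INT           NOT NULL,
    OrderDateKey INT           NOT NULL,
    SalesAmount  DECIMAL(18,2) NOT NULL,

    -- The columnstore index is what accelerates analytical queries here
    INDEX ccsi_FactSales CLUSTERED COLUMNSTORE
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```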

The ETL process that had been built was relatively standard: source data was replicated to a staging area and then transformed into a dimensional model. The process executed nightly and involved a large amount of data movement. When a data warehouse follows this pattern, the nightly ETL can be expected to move and transform large quantities of data in as short a time frame as possible.
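
In simplified form, each nightly run looked something like the following. The schemas and tables here are invented for illustration, not the project's actual objects.

```sql
-- Stage 1: replicate the source data into staging wholesale.
TRUNCATE TABLE stg.Orders;

INSERT INTO stg.Orders (OrderID, CustomerID, OrderDate, Amount)
SELECT OrderID, CustomerID, OrderDate, Amount
FROM   src.Orders;

-- Stage 2: transform staged rows into the dimensional model
-- in a single large set-based statement.
INSERT INTO dw.FactOrders (CustomerKey, OrderDateKey, Amount)
SELECT c.CustomerKey,
       CONVERT(INT, CONVERT(CHAR(8), o.OrderDate, 112)),  -- yyyymmdd date key
       o.Amount
FROM   stg.Orders     AS o
JOIN   dw.DimCustomer AS c
  ON   c.CustomerID = o.CustomerID;
```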

The workloads these nightly ETLs created were very different from standard OLTP workloads: rather than many small concurrent transactions, they issued a handful of very large set-based writes, a pattern that gains little from the lock- and latch-free design of the in-memory engine. Because of this, the team found that memory-optimized tables actually performed worse than standard disk-based tables for the ETL's write activity. After reaching this conclusion, the team had one of two choices: restructure the ETL to imitate standard OLTP workloads, or revert to standard disk-based tables.
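
To make the first option concrete: one way to approximate an OLTP-style write pattern is to split the single large insert into many small, individually committed batches. This is only a sketch, reusing the hypothetical staging table from the earlier example with an added ascending StagingID key.

```sql
-- Hedged sketch: feed the fact table in small batches rather than one
-- massive set-based insert. StagingID is assumed to be an ascending key.
DECLARE @BatchSize BIGINT = 10000,
        @LastKey   BIGINT = 0,
        @MaxKey    BIGINT;

SELECT @MaxKey = MAX(StagingID) FROM stg.Orders;

WHILE @LastKey < ISNULL(@MaxKey, 0)
BEGIN
    INSERT INTO dw.FactOrders (CustomerKey, OrderDateKey, Amount)
    SELECT c.CustomerKey,
           CONVERT(INT, CONVERT(CHAR(8), o.OrderDate, 112)),
           o.Amount
    FROM   stg.Orders     AS o
    JOIN   dw.DimCustomer AS c ON c.CustomerID = o.CustomerID
    WHERE  o.StagingID >  @LastKey
      AND  o.StagingID <= @LastKey + @BatchSize;

    SET @LastKey = @LastKey + @BatchSize;
END;
```

Each iteration commits independently, trading one large write for many small ones, at the cost of a more complicated load procedure.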

Unfortunately, the memory-optimized table implementation had already been completed, so either option carried real cost: rebuilding the tables on disk with appropriate indexes, or refactoring the ETL to accommodate OLTP-style workloads. The team ultimately chose to revert the tables to disk, as this required the least effort of the two options. In retrospect, the team agreed that in the future, despite enthusiasm and paper promises, it would research, test, and evaluate new features before performing the implementation work. This often-skipped step can save organizations a great deal of time and money when choosing the proper solution for any given problem.
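
For reference, the revert itself is not a simple ALTER: a memory-optimized table cannot be converted to a disk-based table in place, so each table had to be rebuilt on disk and swapped in. Roughly, and again with hypothetical names:

```sql
-- Build the disk-based replacement.
CREATE TABLE dbo.FactSales_Disk
(
    SalesKey     BIGINT        NOT NULL PRIMARY KEY NONCLUSTERED,
    ProductKey   INT           NOT NULL,
    OrderDateKey INT           NOT NULL,
    SalesAmount  DECIMAL(18,2) NOT NULL
);

-- Copy the data out of the memory-optimized table.
INSERT INTO dbo.FactSales_Disk (SalesKey, ProductKey, OrderDateKey, SalesAmount)
SELECT SalesKey, ProductKey, OrderDateKey, SalesAmount
FROM   dbo.FactSales;

-- Swap the new table into place and recreate the analytical index on disk.
DROP TABLE dbo.FactSales;
EXEC sp_rename 'dbo.FactSales_Disk', 'FactSales';
CREATE CLUSTERED COLUMNSTORE INDEX ccsi_FactSales ON dbo.FactSales;
```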

Microsoft adds many new features with each release of SQL Server, and it is important for teams to perform due diligence on these features prior to implementation to avoid derailing a future sprint with unnecessary technical debt. Each new feature, including memory-optimized tables, can be used to improve performance, increase reliability, or decrease costs in the right scenarios.