Microsoft Boosts Azure SQL Premium with In-Memory Technology

New in-memory data technology that makes short work of big data analytics is now available in Microsoft’s Azure SQL cloud database.

Azure SQL’s in-memory capabilities are now generally available, Microsoft announced this week, setting the stage for fast, advanced cloud-based analytics.

As the term suggests, in-memory processing relies on data stored in RAM to accelerate database performance. Rather than wait for data to wend its way from storage components and arrays to a server’s processor, in-memory databases can access information far more rapidly over the server’s own memory bus, with its comparatively faster data transfer speeds. Accordingly, in-memory database servers are often outfitted with massive amounts of RAM. For example, in a “scale-up” configuration, Hewlett Packard Enterprise’s (HPE) ConvergedSystem 900 systems for SAP HANA, the German business software maker’s in-memory database, can contain up to 16 TB of system memory.

Microsoft is now offering in-memory support to Azure SQL customers with Premium database tier subscriptions. Those customers can expect a big performance boost, according to Rohan Kumar, general manager of Microsoft’s Database Systems group.
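For readers who want to confirm that a given database sits on an eligible tier, a quick check can be run in T-SQL. The query below is a minimal sketch that reads the database’s edition and service objective through the documented DATABASEPROPERTYEX function.

    -- Minimal check: in-memory features require a Premium-tier Azure SQL database.
    SELECT
        DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS Edition,           -- e.g. Premium
        DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS ServiceObjective;  -- e.g. P2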


“In-memory technology helps optimize the performance of transactional (OLTP), analytics (OLAP), as well as mixed workloads (HTAP),” wrote Kumar in a blog post. “These technologies allow you to achieve phenomenal performance with Azure SQL Database—75,000 transactions per second for order processing (11X [performance] gain) and reduced query execution time from 15 seconds to 0.26 [seconds] (57X [performance] gain).”


Further, customers can use Azure SQL’s in-memory features to blaze through database workloads at no extra cost, he added. For example, customers on a P2 database plan can achieve 9X and 10X improvements in transactions and analytics queries, respectively, without paying extra.

A handful of features make up Azure SQL’s newfound performance-boosting capabilities, including in-memory OLTP (online transaction processing). This latency-reducing technology increases data throughput, enabling high-speed trading and rapid data ingestion from internet of things (IoT) devices. Another feature, clustered columnstore indexes, slashes data storage footprints by up to 10X, accelerating reporting and analytics workloads, Kumar claimed. Finally, used for Hybrid Transactional and Analytical Processing (HTAP), Azure SQL’s non-clustered columnstore indexes forgo the time-consuming extract, transform, load (ETL) process, enabling real-time analytics queries that run directly against the operational database.
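To give a sense of how these features are switched on in practice, the T-SQL below is a rough, hypothetical sketch; the tables, columns and index names are invented for illustration and are not taken from Kumar’s post. It shows a memory-optimized table for in-memory OLTP, a clustered columnstore index that compresses a reporting table, and a non-clustered columnstore index added to an operational table for HTAP-style, ETL-free analytics.

    -- In-memory OLTP: a hypothetical order table held entirely in memory.
    -- DURABILITY = SCHEMA_AND_DATA persists the rows as well as the schema.
    CREATE TABLE dbo.Orders
    (
        OrderId    INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
        CustomerId INT            NOT NULL,
        OrderDate  DATETIME2      NOT NULL,
        Amount     DECIMAL(10,2)  NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- Clustered columnstore: compresses a disk-based reporting table,
    -- shrinking its storage footprint and speeding up analytic scans.
    CREATE TABLE dbo.OrderHistory
    (
        OrderId    BIGINT         NOT NULL,
        CustomerId INT            NOT NULL,
        OrderDate  DATE           NOT NULL,
        Amount     DECIMAL(10,2)  NOT NULL
    );

    CREATE CLUSTERED COLUMNSTORE INDEX cci_OrderHistory
        ON dbo.OrderHistory;

    -- HTAP: a non-clustered columnstore index on a disk-based operational
    -- table lets analytic queries run against live data with no ETL step.
    CREATE TABLE dbo.Sales
    (
        SaleId     BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        CustomerId INT            NOT NULL,
        SaleDate   DATE           NOT NULL,
        Amount     DECIMAL(10,2)  NOT NULL
    );

    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Sales_Analytics
        ON dbo.Sales (CustomerId, SaleDate, Amount);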

Of course, the competition isn’t sitting still.


On Nov. 8, SAP announced the impending release of HANA 2, the follow-up to the company’s cloud-friendly, in-memory computing platform. HANA is a big moneymaker for SAP, growing into a near-$2 billion business since it was introduced in December 2010. The company plans to release the new version of the blisteringly fast database on Nov. 30, complete with new artificial intelligence algorithms and bring-your-own-language capabilities intended to help developers build and deploy a new generation of intelligent applications.

Mirroring the white-hot container development trend, customers can also expect new SAP HANA microservices in the cloud. They include a batch of text and natural language processing features (Text Analysis Entity Extraction, Text Analysis Fact Extraction and Text Analysis Linguistic Analysis) along with a new Earth Observation Analysis service. The latter, available in beta, uses satellite data from the European Space Agency (ESA) to enable researchers to analyze historical information about the world’s vegetation, soil and water conditions.
