AWS Adds Blockchain and Time-series Databases
At AWS re:Invent 2018 today, Amazon announced three major functional upgrades to existing databases and two brand-new databases, bringing the full complement of purpose-built AWS databases to 15.
AWS announced the Amazon Quantum Ledger Database (QLDB), available in beta. The database provides a transparent, immutable, and cryptographically verifiable ledger that can be used to build applications that act as a system of record, where multiple parties transact with a centralized, trusted entity. Amazon QLDB removes the need to build complex audit functionality into a relational database or rely on the ledger capabilities of a blockchain framework. Amazon QLDB uses an immutable transactional log, known as a journal, which tracks every application data change and maintains a complete and verifiable history of changes over time. All transactions must comply with atomicity, consistency, isolation, and durability (ACID) requirements to be logged in the journal, which cannot be deleted or modified. All changes are cryptographically chained and verifiable in a history that can be analyzed using familiar SQL queries. Amazon QLDB is serverless, so customers don’t have to provision capacity or configure read and write limits; they simply create a ledger and define tables, and Amazon QLDB automatically scales to support application demands, with customers paying only for the reads, writes, and storage they use. And, unlike the ledgers in common blockchain frameworks, Amazon QLDB doesn’t require distributed consensus, so it can execute two to three times as many transactions in the same amount of time.
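To make the workflow concrete, here is a minimal, illustrative sketch of that "create a ledger, define tables, query with SQL" flow. It assumes the boto3 QLDB control-plane client and AWS's pyqldb Python driver, and the ledger name (vehicle-registration) and table (Vehicles) are hypothetical; the exact driver API available during the beta may differ.

```python
import boto3
from pyqldb.driver.qldb_driver import QldbDriver

# Control plane: create a ledger. QLDB is serverless, so there is no
# capacity to provision; the ledger becomes ACTIVE after a short time.
qldb = boto3.client("qldb")
qldb.create_ledger(Name="vehicle-registration", PermissionsMode="ALLOW_ALL")

# Data plane: once the ledger is ACTIVE, run SQL-style statements.
driver = QldbDriver(ledger_name="vehicle-registration")

# Each execute_lambda call is a single ACID transaction; its changes are
# appended to the immutable, cryptographically chained journal.
driver.execute_lambda(lambda txn: txn.execute_statement("CREATE TABLE Vehicles"))
driver.execute_lambda(lambda txn: txn.execute_statement(
    "INSERT INTO Vehicles ?", {"VIN": "1N4AL11D75C109151", "Owner": "Alice"}))

# Query the current state with familiar SQL.
rows = driver.execute_lambda(lambda txn: txn.execute_statement(
    "SELECT * FROM Vehicles WHERE VIN = ?", "1N4AL11D75C109151"))
for row in rows:
    print(row)
```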
Amazon Timestream is a new time-series database, also released in beta. It is a purpose-built, fully managed service for collecting, storing, and processing time-series data. Amazon says it can process trillions of events per day at one-tenth the cost of a relational database, with up to one thousand times faster query performance, and that it delivers single-digit-millisecond responsiveness when analyzing time-series data from IoT and operational applications. It offers built-in analytics functions and is serverless, automatically scaling up or down to adjust capacity and performance.
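As an illustrative sketch of that write-then-analyze pattern (not necessarily the exact beta API), the following assumes the boto3 timestream-write and timestream-query clients as they later shipped, with a hypothetical iot database and sensor_readings table:

```python
import time
import boto3

# Hypothetical IoT example: write one temperature reading, then query
# recent readings. Database and table names are placeholders.
write_client = boto3.client("timestream-write")
query_client = boto3.client("timestream-query")

write_client.write_records(
    DatabaseName="iot",
    TableName="sensor_readings",
    Records=[{
        "Dimensions": [{"Name": "device_id", "Value": "sensor-42"}],
        "MeasureName": "temperature",
        "MeasureValue": "21.7",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),  # epoch milliseconds
        "TimeUnit": "MILLISECONDS",
    }],
)

# Time-series analytics functions such as bin() and ago() run directly in SQL.
result = query_client.query(QueryString="""
    SELECT device_id, bin(time, 1m) AS minute,
           avg(measure_value::double) AS avg_temp
    FROM "iot"."sensor_readings"
    WHERE measure_name = 'temperature' AND time > ago(15m)
    GROUP BY device_id, bin(time, 1m)
""")
for row in result["Rows"]:
    print(row)
```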
AWS also added:
- Global database support for Amazon Aurora MySQL (so enterprises can update a database in one region and have it automatically replicated across multiple regions).
- On-demand capacity (removing the need for capacity planning) and transactions (full ACID guarantees) for Amazon DynamoDB; see the sketch after this list.
- Glacier Deep Archive, a data-storage service coming in 2019, priced at about a quarter of Amazon’s current storage offerings.
- Amazon FSx for Windows File Server, a Windows-compatible file service that may help Amazon win cloud-computing customers who might otherwise shift to Microsoft’s Azure.
- A service to make it faster and cheaper to use machine learning algorithms in Amazon’s cloud.
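For the DynamoDB additions above, a minimal boto3 sketch: setting BillingMode to PAY_PER_REQUEST opts a table into on-demand capacity, and transact_write_items groups writes into a single all-or-nothing transaction. Table names, keys, and items here are hypothetical, and both tables are assumed to exist before the transaction runs.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand: create a table with PAY_PER_REQUEST billing, so no read/write
# capacity planning is needed.
dynamodb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Transactions: write to two tables atomically -- either both changes are
# applied or neither is (full ACID guarantees).
dynamodb.transact_write_items(
    TransactItems=[
        {"Put": {"TableName": "orders",
                 "Item": {"order_id": {"S": "o-123"},
                          "status": {"S": "PLACED"}}}},
        {"Update": {"TableName": "inventory",
                    "Key": {"sku": {"S": "widget-9"}},
                    "UpdateExpression": "SET stock = stock - :one",
                    "ExpressionAttributeValues": {":one": {"N": "1"}}}},
    ],
)
```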
New machine learning chip
Amazon.com also launched a microchip aimed at machine learning, entering a market that both Intel and Nvidia are counting on to boost their earnings in the coming years.
The company's latest in-house designed chip, called “Inferentia,” is built for what researchers call inference: the process of taking a trained artificial intelligence algorithm and putting it to use.
Amazon said that Inferentia provides hundreds of teraflops per chip and thousands of teraflops per Amazon EC2 instance, with support for multiple frameworks (including TensorFlow, Apache MXNet, and PyTorch) and multiple data types (including INT8 and mixed-precision FP16 and bfloat16).
Starting next year, Amazon will sell cloud services that run on top of the chips.
In 2016, Alphabet Inc-owned Google’s cloud unit unveiled the Tensor Processing Unit (TPU), an artificial intelligence chip designed to take on chips from Nvidia. Google Cloud executives have said customer demand for the TPU has been strong.