Azure Cosmos DB: real-time data movement using Change Feed and Azure Functions

Many people who read my Cosmos DB articles are looking for an effective way to export data to SQL, either on demand or in real time. After a search term analysis for my blog earlier this year, I made up my mind to post a solid article on exporting data from Cosmos DB to SQL Server.

Note that this serverless, event-based architecture can be used not only to persist Cosmos DB changes to SQL Server, but also to trigger other actions such as stream processing or loading to Blob storage / Data Lake.

Real-time ETL using Cosmos DB Change Feed and Azure Functions

In this article, we will focus on creating a data pipeline to ETL (Extract, Transform and Load) Cosmos DB container changes to a SQL Server database. My main requirements or design considerations are:

  • Fault-tolerant and near real-time processing
  • Incur minimum additional cost
  • Simple to implement and maintain

Cosmos DB Change Feed

Cosmos DB Change Feed listens to Cosmos DB containers for changes and outputs the list of items that were changed in the chronological order of their modification. Cosmos DB Change Feed enables building efficient and scalable solutions for the following use cases:

  • Triggering a notification or calling an API
  • Real-time stream processing
  • Downstream data movement or archiving

[Image: Azure Cosmos DB Change Feed overview]

Types of operations

  • The Change Feed tracks inserts and updates. Deletes are not tracked yet
  • You cannot restrict the Change Feed to only one kind of operation, for example only inserts
  • To track deletes in the Change Feed, the workaround is to soft-delete and assign a small TTL (Time To Live) value of “n” so the item is automatically deleted after “n” seconds (a sketch of this pattern follows this list)
  • The Change Feed can be read for historic items, as long as the items have not been deleted
  • Change Feed items are available in order of their modification time (the _ts system attribute), per logical partition key, and all items modified in the same transaction are tagged with the same _lsn (system attribute) value
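
To illustrate that soft-delete workaround, here is a minimal sketch using the .NET SDK v3 (Microsoft.Azure.Cosmos). The Reservation type, the isDeleted flag and the /id partition key are assumptions for this demo, and TTL must already be enabled on the container for the per-item ttl to take effect.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json;

public class Reservation
{
    [JsonProperty("id")] public string Id { get; set; }
    [JsonProperty("isDeleted")] public bool IsDeleted { get; set; }   // soft-delete marker (illustrative name)
    [JsonProperty("ttl")] public int? TimeToLive { get; set; }        // per-item TTL in seconds
}

public static class SoftDelete
{
    // The update flows through the Change Feed, so downstream consumers see the soft delete;
    // Cosmos DB then purges the item automatically about 10 seconds later.
    public static async Task SoftDeleteAsync(Container container, Reservation reservation)
    {
        reservation.IsDeleted = true;
        reservation.TimeToLive = 10;
        await container.ReplaceItemAsync(reservation, reservation.Id, new PartitionKey(reservation.Id));
    }
}
```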

Read more about the Azure Cosmos DB Change Feed in the Microsoft docs to gain a thorough understanding. The Change Feed can be processed using Azure Functions or the Change Feed Processor library. In this article, we will use Azure Functions.

Azure Functions

Azure Functions is an event-driven, serverless compute platform for easily running small pieces of code in Azure. Key points to note are:

  • Write specific code for a problem without worrying about an application or the infrastructure to run it
  • Code in C#, F#, JavaScript (Node.js), Java, or another supported language
  • Pay only for the time your code runs and trust Azure to scale
  • As of July 2019, the Azure Functions trigger for Cosmos DB is supported for use with the Core (SQL) API only

Read more in the Microsoft docs to understand the full capabilities of Azure Functions.

If you use Consumption plan pricing, it includes a monthly free grant of 1 million requests and 400,000 GB-s (gigabyte-seconds) of resource consumption per month per subscription in pay-as-you-go pricing, across all function apps in that subscription, as per the MS docs.

Compare hosting plans and check out pricing details for Azure Functions at the Functions pricing page to gain a thorough understanding of pricing options.

Real-time data movement using Change Feed and Azure Functions

The following architecture will allow us to listen to a Cosmos DB container for inserts and updates, and copy changes to a SQL Server Table. Note that Change Feed is enabled by default for all Cosmos DB containers.

I will create a Cosmos DB container and add an Azure Function to listen to the Cosmos DB container. I will then modify the Azure Function code to parse modified container items and save them to a SQL Server table.

1. First, I navigated to the Cosmos DB blade in the Azure portal and created a container called reservation in my Cosmos DB database. As it is purely for the purposes of this demo, I assigned the lowest throughput of 400 RU/s.
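
If you prefer code over the portal, a roughly equivalent setup with the .NET SDK v3 (Microsoft.Azure.Cosmos) could look like the sketch below. The account endpoint, key, database name and the /id partition key path are placeholders/assumptions for this demo.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class Setup
{
    public static async Task CreateReservationContainerAsync()
    {
        // Placeholders: use your own account endpoint and primary key
        var client = new CosmosClient("https://<your-account>.documents.azure.com:443/", "<primary-key>");

        Database database = await client.CreateDatabaseIfNotExistsAsync("mydatabase");
        Container container = await database.CreateContainerIfNotExistsAsync(
            new ContainerProperties(id: "reservation", partitionKeyPath: "/id"),
            throughput: 400);   // lowest dedicated throughput, matching the portal setting above

        Console.WriteLine($"Container {container.Id} is ready");
    }
}
```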

2. Now that the container is ready, proceed to create an Azure Function App. The Azure Function will be hosted in the Function App.

3. Add an Azure Function within the newly created Function App. The Azure Functions trigger for Cosmos DB uses the scaling and event-detection capabilities of the Change Feed Processor to let you create small, reactive functions that are triggered on each batch of changes to the Cosmos DB container.

4. Configure the trigger. The leases container may be created manually; alternatively, check the box that says “Create lease collection if it does not exist”. Note that you will incur storage and throughput cost for the leases container.
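
For reference, if you build the function as a precompiled C# project instead of a portal csx, the same trigger and lease settings can be expressed as attributes. This is only a sketch; “mydatabase”, “CosmosDBConnection” and “leases” are assumed names.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ReservationChangeFeed
{
    [FunctionName("ReservationChangeFeed")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "mydatabase",                     // assumed database name
            collectionName: "reservation",
            ConnectionStringSetting = "CosmosDBConnection", // app setting holding the Cosmos DB connection string
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)]       // same effect as ticking the box in the portal
        IReadOnlyList<Document> changes,
        ILogger log)
    {
        log.LogInformation($"{changes?.Count ?? 0} document(s) changed");
    }
}
```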

I got an error that read “The binding type(s) ‘cosmosDBTrigger’ are not registered”. You just need to install the relevant extension. I saw many posts about this, so it will most likely be fixed soon.

Sort out the error by installing the extension for Azure Cosmos DB trigger.

5. Once the function is up and running, add an item to the reservation container that we are monitoring, and we have a working solution!
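
Instead of Data Explorer, you can also insert a test item with the .NET SDK v3. A minimal sketch follows; the property names are purely illustrative and the /id partition key matches the earlier setup sketch.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class Demo
{
    public static async Task AddReservationAsync(Container container)
    {
        // The reservation schema below is made up for the demo; only "id" matters to Cosmos DB
        var reservation = new
        {
            id = Guid.NewGuid().ToString(),
            guestName = "John Doe",
            roomNumber = 101,
            checkIn = "2019-07-20"
        };

        // Assumes the container was created with /id as the partition key path
        await container.CreateItemAsync(reservation, new PartitionKey(reservation.id));
    }
}
```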

6. The trigger definition may be modified to achieve different things; in our case, we will parse the feed output and persist the changes to SQL Server. You can download the csx file I used.
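
The downloadable csx is not reproduced here, but a minimal run.csx sketch of the idea looks like the following. It assumes a SqlConnectionString app setting and a dbo.Reservation staging table (both hypothetical names), and simply upserts the raw JSON of each changed item; System.Data.SqlClient may need to be added via a function.proj package reference.

```csharp
#r "Microsoft.Azure.DocumentDB.Core"

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using Microsoft.Azure.Documents;
using Microsoft.Extensions.Logging;

public static void Run(IReadOnlyList<Document> input, ILogger log)
{
    if (input == null || input.Count == 0) return;

    var connectionString = Environment.GetEnvironmentVariable("SqlConnectionString");

    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        foreach (Document doc in input)
        {
            // Upsert each changed item into a staging table keyed on the Cosmos DB id
            using (var cmd = new SqlCommand(
                @"MERGE dbo.Reservation AS target
                  USING (SELECT @id AS id) AS source ON target.id = source.id
                  WHEN MATCHED THEN
                      UPDATE SET payload = @payload, modified = @modified
                  WHEN NOT MATCHED THEN
                      INSERT (id, payload, modified) VALUES (@id, @payload, @modified);", conn))
            {
                cmd.Parameters.AddWithValue("@id", doc.Id);
                cmd.Parameters.AddWithValue("@payload", doc.ToString());  // full JSON of the item
                cmd.Parameters.AddWithValue("@modified", doc.Timestamp);  // _ts surfaced as a DateTime
                cmd.ExecuteNonQuery();
            }
        }
    }

    log.LogInformation($"Persisted {input.Count} changed document(s) to SQL Server");
}
```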

Summary

We have successfully implemented a serverless, event-based, low-cost architecture that is built to scale. Bear in mind that you will still end up paying for the Azure Function and the underlying leases collection, but there is minimal additional RU cost incurred for reading your monitored container(s), as you are tapping into the Change Feed.

You can monitor the function and troubleshoot errors.

I hope you found the article useful. Add a comment if you have feedback for me. If you have any question, drop me a line on LinkedIn. I’ll be happy to help 🙂 Happy coding!

Resources:

https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed
https://docs.microsoft.com/en-us/azure/cosmos-db/changefeed-ecommerce-solution
https://azure.microsoft.com/en-au/services/functions/
https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2
https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed-functions
https://azure.microsoft.com/en-au/resources/videos/azure-cosmosdb-change-feed/
https://h-savran.blogspot.com/2019/03/introduction-to-change-feed-in-cosmos.html

Azure Cosmos DB – Partition and Throughput

In my previous article, Introduction to Azure Cosmos DB, I mentioned partitions and throughput only briefly. Adopting a good partition scheme is essential to setting up your Cosmos DB container for elastic scaling and blazing performance. This article takes a closer look at these two aspects to help you fully utilize the storage and performance offerings of Cosmos DB.

Partition

Azure Cosmos DB containers store documents, graphs or tables. Containers (a.k.a. collections in the context of documents) are logical entities that could be distributed across multiple physical partitions or servers.

Physical and Logical Partitions

A physical partition is an internal Cosmos DB concept, essentially a fixed amount of SSD storage combined with a variable amount of compute power (CPU, memory and IO). The number of physical partitions of a container depends on its storage and throughput. For containers with shared throughput, the number of partitions depends on the RU/s assigned to the set of containers.

Request Units per second (RU/s) is the unit of throughput. 1 RU/s serves one read per second, by self-link (an internal property) or id, of a 1 KB item.

When a collection is created, we can specify a fixed storage capacity of 10 GB or unlimited capacity. A fixed storage collection is limited to a maximum of 10,000 RU/s. If we choose unlimited capacity, the resulting collection potentially has no max RU/s limit. Collections are, in effect, unlimited in terms of storage and throughput, and physical partition management is handled by Cosmos DB behind the curtains. Note that for a multi-partition collection, we need to specify a partition key.

[Image: Cosmos DB container - physical and logical partitions]

Documents within a container that have the same partition key value form a logical partition. The max storage limit of a logical partition is 10 GB, which means that if the data associated with a certain partition key value grows beyond 10 GB, the logical partition is full and cannot grow any further. This is why adopting a good partition scheme is very important to benefit from the storage and performance guarantees of Azure Cosmos DB.

Partitioning example

Azure Cosmos DB internally has a limit for the max throughput that can be provided by a physical partition – PRUmax. This value keeps changing based on factors such as hardware used and platform upgrades. For now, keep in mind that this happens behind the scenes.

Let us assume PRUmax = 10,000 RU/s. We create an unlimited collection named product with an initial throughput of 20,000 RU/s and productid as the partition key. Cosmos DB has to create at least 2 physical partitions to support the 20,000 RU/s requested, but currently the default seems to be 5. So, Cosmos DB creates a new collection with 5 physical partitions. The requested throughput is divided equally among these physical partitions, which means the max throughput limit for each partition is 20,000/5 = 4,000 RU/s.

[Image: Partitioning example - product collection]

As we add new documents, Cosmos DB allocates the key space of partition key hashes evenly and consistently across the 5 physical partitions. If the partition key is well chosen, writes are distributed evenly across the partitions, each partition serving nearly 4,000 RU/s and a cumulative of nearly 20,000 RU/s. This is the ideal case. In the real world, it is possible that we choose a bad partition key.

What can go wrong?

  • Performance impact: If the majority of concurrent writes/reads pertain to a specific partition key value, we could have one physical partition maxing out the 4,000 RU/s allocated to it (a hot partition) while the other four partitions sit idle. When this happens, requests are bound to get rate-limited and we will receive HTTP 429 (Too Many Requests) responses.
  • Storage impact: Earlier in the article, I mentioned the concept of a logical partition. All data having the same partition key value forms a logical partition. Logical partitions cannot be split across physical partitions. For the same reason, if the chosen partition key has poor cardinality, we could potentially have a skewed storage distribution. Say one logical partition grows faster than the rest and hits the max limit of 10 GB while the others are nearly empty. The physical partition housing the maxed-out logical partition cannot split that data any further and could thus cause application downtime.

Physical partition split

Azure Cosmos DB manages physical partitions seamlessly behind the scenes, provided you choose your partition key wisely. Following are two scenarios in which Cosmos DB will split or add physical partitions.

  • Storage limit of 10 GB: When a physical partition is full, Cosmos DB splits it into 2 new partitions, assigning the data corresponding to roughly half of the keys to each new partition. As mentioned previously, the split cannot happen if all the data in the physical partition in question has the same partition key value.
  • Increasing throughput: When the assigned throughput is increased such that the existing number of physical partitions is insufficient to support it, Cosmos DB adds new physical partitions. In the above example, if the throughput is increased to 100,000 RU/s, Cosmos DB would add 5 new physical partitions, because it needs 100,000/PRUmax = 10 physical partitions to support that throughput setting.

Throughput

What makes Cosmos DB an attractive high-volume transaction database is the ease of scaling. When request rates are low, throughput can be lowered to keep costs down. Cosmos DB’s performance is predictable. For example, a read of a 1-KB document with session consistency always consumes 1 RU, regardless of the number of concurrent requests or the amount of data stored.

There are, however, two major design considerations to facilitate elastic scaling of Azure Cosmos DB.

Distribute requests and storage

An ideal candidate property for the partition key allows writes to be distributed across many distinct values. Requests to the same partition key should remain lower than the max throughput limit allocated to a partition. A good partition key evenly distributes writes across all physical partitions and does not cause hot partitions. In our example, productid is a good partition key, because it is unlikely that all concurrent requests will be focused on a specific product. If we were to choose the property productcategory as the partition key, that could potentially cause hot partitions.

Partition scope for queries and transactions

At one extreme, we can use the same partition key for all documents. At the other extreme, we can have a unique partition key for each document. Both approaches have their limitations. Using the same partition key for all documents limits scalability and causes a hot partition and inefficient utilization of throughput. Using unique partition keys supports high scalability, but results in a lot of cross-partition queries and prevents the use of cross-document transactions. Occasional fan-out of queries is not too bad, but frequent fan-out will incur high RU consumption and result in rate-limiting.

Estimating throughput

Throughput can be estimated based on the number of expected reads/writes per second. 1 Request Unit (RU) corresponds to the read, by self-link or id, of a 1-KB document containing 10 unique property values. Writes, replaces and deletes consume more RUs.
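
As a back-of-the-envelope example, suppose we expect 500 reads and 100 writes per second of roughly 1-KB documents. The per-operation costs below are assumptions (reads ~1 RU, writes ~5 RU) and should be validated with the RU calculator mentioned next.

```csharp
// Rough sizing sketch; the RU costs per operation are assumptions for ~1 KB items
int readsPerSecond  = 500;
int writesPerSecond = 100;
double ruPerRead    = 1.0;   // 1-KB point read
double ruPerWrite   = 5.0;   // assumed cost of a 1-KB write with default indexing

double estimatedRuPerSecond = readsPerSecond * ruPerRead + writesPerSecond * ruPerWrite;
System.Console.WriteLine($"Provision roughly {estimatedRuPerSecond} RU/s");   // 1,000 RU/s for this workload
```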

RU calculator

Microsoft provides a Request Unit calculator that helps you arrive at a base throughput to assign when creating a new collection. Be prepared to fine-tune the RU setting as you go along, but this is a good starting point.

This URL ignites nostalgia 🙂

[Image: Request Unit calculator. Pic courtesy: Microsoft]

Conclusion

Azure Cosmos DB is a lot more versatile compared to the initial DocumentDB days. With added support for the MongoDB, Graph (Gremlin), Cassandra and Table APIs, plus multi-master and global distribution support, Cosmos DB is definitely the most exciting product in database technology at the moment. With newer Azure data products such as Azure Stream Analytics, Azure Databricks and HDInsight supporting out-of-the-box integration with Cosmos DB, it is fast becoming a good candidate for Big Data solutions.

Please feel free to reach out if you have questions. I’m always happy to discuss technology 🙂

Introduction to Azure Cosmos DB

What is Azure Cosmos DB?

Cosmos DB started as an internal project – “Project Florence” – at Microsoft in 2010. It is a globally distributed, highly scalable and performant multi-model database service built on Azure. It was built from the ground up with horizontal scalability and global distribution in view, so you can expect seamless geo-replication to Azure regions by clicking on a map, and performance on the order of milliseconds.

Why should you use Cosmos DB?

Cosmos DB is the future of storage. Below are some reasons you may want to use it.

  • You use Microsoft stack already and find your favourite RDBMS – SQL Server – too slow
  • Your applications need to process large amounts of data at blazing speeds
  • Your data is not relational anymore; you want to use a JSON document store, a graph database, or one of the many popular APIs Cosmos DB supports
  • Big Data applications – This is an evolving use case. Many Azure products such as Azure Stream Analytics, HDInsight and Data Bricks integrate with Cosmos DB, making it an ideal choice for persisting big data.
  • You are having to “over-provision” specs to cover peak loads on the database server
  • Your business spans multiple continents and you want easy and reliable replication
  • You trust Microsoft to do a better job with availability using Azure

[Image: Cosmos DB geo-replication]

Hello Collections!

Collections are to Cosmos DB what tables are to an RDBMS. This analogy is for easy understanding; don’t get hung up on details. There are many differences between collections and tables. A more generic term is Container, especially as Cosmos DB has evolved to support more APIs since the Document DB days.

After you create a Database in your Cosmos DB account, you can create your first collection. Below are a few aspects of a collection you need to be familiar with.

1. Request Units / Throughput: Request Unit (RU) is the currency of Cosmos DB.

1 RU = capacity to read* a single 1 KB item consisting of 10 unique property values
*by self-link or id

A collection need not be a set of documents of the same schema. As Cosmos DB is schema-agnostic, you can very well store different document types in the same collection. That said, you may want to store document types that need the same amount of scalability and storage in the same collection. Cosmos DB is cost-effective for high-volume transaction systems because RU/s for a collection can be ramped up at peak hours and reduced to a minimum during hours of low trade.

When the provisioned number of RU/s is used up, client requests may experience throttling. Prolonged throttling indicates an insufficient throughput setting.
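
Scaling up before peak hours (and back down afterwards) is a one-line operation. Here is a minimal sketch with the .NET SDK v3 (Microsoft.Azure.Cosmos), using purely illustrative RU/s values.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ThroughputScaler
{
    public static async Task ScaleForPeakAsync(Container container)
    {
        // Read the currently provisioned throughput on the container
        int? currentThroughput = await container.ReadThroughputAsync();
        Console.WriteLine($"Current throughput: {currentThroughput} RU/s");

        await container.ReplaceThroughputAsync(2000);   // ramp up ahead of peak trade
        // ... after the peak window ...
        await container.ReplaceThroughputAsync(400);    // scale back down to keep costs low
    }
}
```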

[Image: Cosmos DB throughput settings]

2. Partition: Each document in a collection has a partition key and an id

Partition key helps to identify the logical partition, and the id field helps to identify a specific document.

When you create a collection with an initial throughput of more than 1,000 RU/s, the resulting collection has unlimited scalability. Depending on the initial throughput assigned, Cosmos DB computes the number of physical partitions required to support the provisioned throughput. Logical partitions have a storage limit of 10 GB, so a single logical partition (housing documents with a certain partition key value, such as Product Type = ‘Motorbike helmet’) cannot hold more than 10 GB. Theoretically, the number of logical partitions in a collection can be infinite. Therefore, a good partition key must be a field with suitable cardinality. With a bad partition key, you are bound to have an uneven distribution of data and are likely to hit the 10 GB storage limit per logical partition.

3. Time to Live: TTL can be set at the collection or document level. Document-level TTL takes precedence, obviously.

Simply put, if the TTL for a document is set to 3600 seconds, the document expires 3600 seconds after its last modification. The system field _ts (Epoch time) indicates the last modified time of a document and is what the expiry is measured from. Be careful when you set TTL at the collection level. Used well, this feature helps to implement a sound purge policy.
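
Here is a minimal sketch of both levels with the .NET SDK v3; the container name, partition key path and TTL values are illustrative. Setting DefaultTimeToLive to -1 switches TTL on for the container without expiring anything by default, and the per-document ttl property overrides it.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class TtlDemo
{
    public static async Task CreateWithTtlAsync(Database database)
    {
        // Enable TTL on the container; -1 means "on, but no default expiry"
        var properties = new ContainerProperties("notifications", "/id")
        {
            DefaultTimeToLive = -1
        };
        Container container = await database.CreateContainerIfNotExistsAsync(properties);

        // Per-document override: this item expires roughly 3600 seconds after its last modification (_ts)
        var notification = new
        {
            id = Guid.NewGuid().ToString(),
            message = "Expires in an hour",
            ttl = 3600
        };
        await container.CreateItemAsync(notification, new PartitionKey(notification.id));
    }
}
```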

Global Distribution:

Geo-replication becomes as simple as clicking on a map with Cosmos DB’s global distribution feature. With more than 50 Azure regions available, you can not only add multiple read regions for global customers, but also specify several failover regions to help with disaster recovery.

[Image: Cosmos DB automatic failover regions]

There is a lot more to Cosmos DB than can be covered in a single post. With a whole heap of Azure features that it integrates with easily, such as SQL Managed Instance, SQL Data Warehouse, HDInsight, Databricks and Data Lake, Cosmos DB is here to stay.

Check out my post about Cosmos DB migration tool if you are migrating data to or from Cosmos DB.