
DynamoDB performance and scaling with partitions, indexes, and read/write capacity units

This is part 1 of a 3-part series

  1. Performance and scaling with partitions, indexes, and read/write capacity units
  2. Data modeling in DynamoDB
  3. Additional design patterns for DynamoDB


First, let’s give a very brief overview of a traditional RDBMS versus a NoSQL data store, particularly DynamoDB.

  • SQL
    • Optimized for storage
    • Normalized and relational
    • Ad hoc queries
    • Scales vertically
    • Good for OLAP
  • NoSQL
    • Optimized for compute
    • Denormalized and hierarchical
    • Instantiated views
    • Scales horizontally
    • Built for OLTP at scale


Partitioning and Keys within DynamoDB

Until seven or so years ago, and excluding mainframe and other legacy systems, it was common for most developers and database professionals to reach for a relational database for almost anything that involved storing data, even non-textual data. At their most simplified level, these systems have a predefined schema and incorporate normalized tables, a primary key, foreign key relationships, and a whole lot of implied constraints. One more comment on RDBMSs: since 1997 (when I started doing software development as a career), every time a database was involved, whether Oracle, Sybase, MSSQL, PostgreSQL, or MySQL, there was always a highly paid and skilled “Database Administrator” team, experts on the platform, whose only job was to constantly watch the database and fine-tune, scale, optimize, and protect it as needed.


Unlike an RDBMS, DynamoDB aggregates related data into “documents,” which makes NoSQL much more efficient for its access patterns. It has tables, which are collections of items, which in turn are collections of attributes. Like a relational database, DynamoDB has a primary key, but it supports two different types of primary keys:

  • Partition Key – One attribute, known as the partition key. DynamoDB uses its value as input to an internal hash function, and the output of the hash function determines the partition where the item will be stored. The partition key value must be unique per table; it is also known as the “hash key”.
  • Partition Key and Sort Key – A combination of two attributes: the first is the partition key and the second is the sort key. All items with the same partition key are stored together in the same physical partition, sorted by the sort key value. Sort key values must be unique within each partition, so the two keys together form what an RDBMS would call a “composite primary key”. The sort key is also known as the “range key”.
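To make the hash-routing idea concrete, here is a minimal sketch in plain Python. The hash function and partition count are purely hypothetical — DynamoDB's internal hashing is hidden from you and it manages the partition count itself:

```python
import hashlib

NUM_PARTITIONS = 4  # hypothetical; DynamoDB decides the real partition count

def partition_for(partition_key: str) -> int:
    """Map a partition key to a partition, as an internal hash function might."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# Items with the same partition key always land on the same partition
assert partition_for("product-123") == partition_for("product-123")
```

The important property is only that the mapping is deterministic and spreads distinct key values across partitions.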

So the sort (range) key is optional, but when used, the combination of partition key and sort key must be unique, and together they identify the item. Consider a product id as the partition (hash) key and a product image id as the sort (range) key: combined, they form a globally unique composite key. Items within a partition are ordered by sort key value for fast sequential reads. There is no limit on the number of items per partition key unless a local secondary index exists. Partitions are automatically replicated three ways, to three different nodes in three different availability zones (AZs), by hidden AWS infrastructure; for more information, look into Amazon Availability Zones and Regions. When data is written, DynamoDB does not acknowledge completion until two of the three nodes have been written to. A strongly consistent read consults two nodes and returns the latest version of each item when they differ. There is also the option of eventually consistent reads (the default), which may be served from a single node; these cost half as much, essentially doubling your read throughput at the risk of returning stale data. For more detailed information on this, please see this link.
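The composite-key ordering described above can be sketched in plain Python. The product/image data is hypothetical, and DynamoDB does this sorting server-side, but the shape of a key-condition query is the same:

```python
# Hypothetical items: partition key = product_id, sort key = image_id
items = [
    {"product_id": "p1", "image_id": "img-03"},
    {"product_id": "p1", "image_id": "img-01"},
    {"product_id": "p2", "image_id": "img-01"},
    {"product_id": "p1", "image_id": "img-02"},
]

def query(items, product_id):
    """All items sharing a partition key, returned in sort-key order."""
    matches = [i for i in items if i["product_id"] == product_id]
    return sorted(matches, key=lambda i: i["image_id"])

result = query(items, "p1")
# sort keys come back in sequence: img-01, img-02, img-03
```

This sequential layout is why range queries within one partition key are cheap.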

Local Secondary Indexes (LSIs) are indexes stored on the same partition as the primary data, which is what allows them to be strongly consistent. For a table with an LSI, the total size of table and index items for any one partition key value cannot exceed 10 GB. Additional attributes can be projected into the LSI so that a query does not even need to hit the primary item data to get what it needs; this is very similar to non-clustered indexes in relational databases serving “covered queries”. You can optionally create one or more secondary indexes on a table, which lets you query the data using an alternate key in addition to queries against the primary key.
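As a sketch, here is the shape a boto3 `create_table` request with an LSI might take. The table, attributes, and projected field (`url`) are hypothetical; the request is shown as a plain dict rather than an actual API call:

```python
# Hypothetical table: product images keyed by product_id + image_id,
# with an LSI on upload_date that "covers" the image URL.
create_table_request = {
    "TableName": "ProductImages",
    "AttributeDefinitions": [
        {"AttributeName": "product_id", "AttributeType": "S"},
        {"AttributeName": "image_id", "AttributeType": "S"},
        {"AttributeName": "upload_date", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "product_id", "KeyType": "HASH"},
        {"AttributeName": "image_id", "KeyType": "RANGE"},
    ],
    "LocalSecondaryIndexes": [
        {
            "IndexName": "ByUploadDate",
            "KeySchema": [
                # An LSI must reuse the table's hash key...
                {"AttributeName": "product_id", "KeyType": "HASH"},
                # ...but supplies an alternate sort key
                {"AttributeName": "upload_date", "KeyType": "RANGE"},
            ],
            # Projecting url into the index makes queries against it "covered"
            "Projection": {"ProjectionType": "INCLUDE",
                           "NonKeyAttributes": ["url"]},
        }
    ],
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}
# You would pass this as boto3.client("dynamodb").create_table(**create_table_request)
```

Note that an LSI can only be defined at table-creation time, which is why it appears inside the `create_table` request.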

Global Secondary Indexes (GSIs) are more flexible and behave much like separate tables: they use their own partition key (and optional sort key) to support total flexibility in sorting and aggregating. A GSI spans all partition keys, unlike the LSI discussed above. GSIs are unlimited in size, updated asynchronously (eventually consistent, typically within milliseconds), and provisioned separately. Their read and write capacity units are completely separate from those of the parent table, and if a GSI is at capacity, the read or write will be throttled. It is therefore important to size capacity not only for the main table, but for every GSI you create.
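For comparison with the LSI, here is a sketch of a GSI definition as it would appear in a boto3 `create_table` (or `update_table`) request; the index and attribute names are hypothetical:

```python
# Hypothetical GSI: look up images by image_id regardless of product.
global_secondary_index = {
    "IndexName": "ByImageId",
    "KeySchema": [
        # Unlike an LSI, a GSI may use a different hash key than the table
        {"AttributeName": "image_id", "KeyType": "HASH"},
    ],
    "Projection": {"ProjectionType": "ALL"},
    # A GSI is provisioned separately from its parent table,
    # so it can be throttled independently of the table.
    "ProvisionedThroughput": {"ReadCapacityUnits": 10, "WriteCapacityUnits": 2},
}
```

The separate `ProvisionedThroughput` block is the key difference to notice: it is why GSI capacity has to be planned alongside, but independently of, the table's own capacity.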

DynamoDB currently supports 5 LSIs and 5 GSIs per table. Please see this link for more detailed information on LSIs and GSIs within DynamoDB.

Should I use an LSI or a GSI?

  • Will your data exceed 10 GB for a single partition key value? If so, use a GSI.
  • When eventual consistency is acceptable, use a GSI.
  • If you don’t know the index definition at table creation time, use a GSI (an LSI can only be created along with the table).


Taken from the DynamoDB developer guide:
“To get the most out of DynamoDB throughput, create tables where the hash key element has a large number of distinct values, and values are requested fairly uniformly and as random as possible.”


DynamoDB scales along two dimensions: throughput and size.

Throughput scaling is done by adjusting the write and read capacity units. One WCU covers a write of up to 1 KB per second, and one RCU covers a strongly consistent read of up to 4 KB per second; eventually consistent reads cost half as much.
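Those unit sizes translate into a simple rounding-up calculation. A small sketch (the function names are my own, but the arithmetic matches the unit definitions above):

```python
import math

def write_capacity_units(item_size_bytes: int) -> int:
    """One WCU per 1 KB (rounded up) written per second."""
    return math.ceil(item_size_bytes / 1024)

def read_capacity_units(item_size_bytes: int,
                        eventually_consistent: bool = False) -> float:
    """One RCU per 4 KB (rounded up) per second for a strongly
    consistent read; eventually consistent reads cost half."""
    units = math.ceil(item_size_bytes / 4096)
    return units / 2 if eventually_consistent else units

assert write_capacity_units(1500) == 2        # 1.5 KB write -> 2 WCU
assert read_capacity_units(6000) == 2         # 6 KB strong read -> 2 RCU
assert read_capacity_units(6000, True) == 1   # eventually consistent -> half
```

The rounding up is worth remembering: a 1.1 KB item consumes the same 2 WCUs as a 2 KB item.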

Size scaling refers to the number of items added to a table. As mentioned earlier, with an LSI the size limit is 10 GB per partition key value, and the maximum item size is 400 KB. Scaling by size is handled through partitions.

RCUs and WCUs are provisioned and consumed independently of each other. They are spread uniformly across partitions, so it’s important to distribute your workload accordingly, or you will run into throttling. What are some common causes of throttling? Non-uniform workloads are one: so-called “hot” keys concentrate traffic on a single partition. Very large bursts to the same partition can also cause throttling. It is important to keep the workload evenly distributed across partition keys to minimize throttling. Using CloudWatch monitoring and configured graphs, you can watch for this and adjust as needed.
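The uniform-spread rule is what makes hot keys painful, and a one-liner makes the budget per partition obvious (function name is mine; the division follows from the uniform spread described above):

```python
def per_partition_rcu(total_rcu: float, partitions: int) -> float:
    """Provisioned throughput is spread uniformly across partitions."""
    return total_rcu / partitions

# With 3000 RCUs over 3 partitions, a "hot" key hammering one partition
# can only consume ~1000 RCUs before its requests are throttled,
# no matter how much total capacity the table has.
assert per_partition_rcu(3000, 3) == 1000
```

In other words, paying for more total capacity does not help a workload that keeps hitting one partition key.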


How do you determine the number of partitions? (As of 1/18/2016)
Capacity: (Total RCU / 3000) + (Total WCU / 1000)
Size: Total Size / 10 GB
Total Partitions: ceiling(max(Capacity, Size))
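The formula above is straightforward to encode. A small sketch (using the early-2016 figures quoted above; DynamoDB's internals have since evolved, so treat this as illustrative):

```python
import math

def partition_count(total_rcu: float, total_wcu: float,
                    total_size_gb: float) -> int:
    """Estimate partitions from the capacity and size formulas above."""
    by_capacity = total_rcu / 3000 + total_wcu / 1000
    by_size = total_size_gb / 10
    return math.ceil(max(by_capacity, by_size))

assert partition_count(7500, 0, 8) == 3     # capacity-bound: 7500/3000 = 2.5 -> 3
assert partition_count(1000, 500, 55) == 6  # size-bound: 55/10 = 5.5 -> 6
```

This is also why over-provisioning can backfire: doubling RCUs can double the partition count, which halves the throughput available to each partition.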

Stay tuned for the 2nd part of our 3 part series.

Written by Jeff Mangan
