Unlocking the Power of DynamoDB: A Developer’s Journey

November 30, 2024


Early Skepticism

My initial impression of “serverless” was not positive. Previously, I had worked with the Firebase ecosystem in a team responsible for migrating a decent-sized Firestore database into a Postgres database. The company had decided to move away from Firestore since it was becoming increasingly challenging for the development team to launch new features—largely because the database was designed as if it were a SQL database. The process was so arduous that I became convinced NoSQL was only useful for apps with a very simple schema or for very specific use cases, like extremely high throughput and low latency. Most of the time, my apps fell somewhere in the middle of the spectrum, so NoSQL didn’t make much sense.

At the same time, I tested some other GCP and Firebase serverless services, and I quickly became convinced that this approach wasn’t the right way to go. Gluing together a bunch of remote services to build an app felt like an anti-pattern to me. I was definitely more on the conservative side: “Let’s just use a traditional framework like Django with a SQL database and deploy everything to a virtual machine.” My experience had been with building relatively small apps, and I didn’t mind the effort of configuring a virtual machine from scratch and installing packages until everything worked. In fact, I kind of enjoyed it, and I never faced any major issues since these apps weren’t critical. The affordability of small virtual machines was another big plus for me.

Reconsidering Serverless

Years later, I met my current co-founders, and the idea for a product was born. It was time to start building an MVP, and of course, the question of choosing a tech stack came up. At the time, we had access to expert advice from AWS representatives, and since AWS was widely trusted in the European market, it made sense to explore its ecosystem and evaluate whether we could leverage its services to build the MVP. We were encouraged to explore specific serverless services and use infrastructure-as-code tools to improve our development experience and app deployment. Soon, my perception of serverless began to shift. The ability to easily launch API endpoints in our preferred programming language—with high security, observability, seamless scalability, and more—without having to configure a server from scratch was instantly attractive. I realized serverless wasn’t an anti-pattern but a different paradigm with massive potential.

Most engineers would agree that choosing the right database is one of the most important architectural decisions. This time, our product was significantly more critical, as it dealt with health data. Compliance with data privacy regulations was one of our top priorities. Spinning up an EC2 instance and manually configuring a SQL database was no longer the best idea. Properly setting up the EC2 instance and database management system with high security standards, patching dependencies, ensuring encryption, managing backups, and monitoring database health is no trivial task—especially for an early-stage startup. AWS offers RDS, a managed service for relational databases, which significantly reduces the burden of some of these tasks since the underlying infrastructure is managed for you. However, you’re still responsible for properly configuring the database and ensuring its health as it scales. This is also true for Aurora Serverless, where you need to configure how it scales up and down. Additionally, the cost of managed features can quickly add up.

The DynamoDB Breakthrough

Naturally, we considered DynamoDB, AWS’s fully managed NoSQL database. I was hesitant at first, given my previous experiences with NoSQL databases. DynamoDB seemed like AWS’s answer to MongoDB, but without collections and references. Again, I thought it might be useful for specific scenarios, but for most cases, it seemed better to avoid it. DynamoDB checked all the boxes in terms of compatibility with the AWS ecosystem, security, ease of configuration, and maintenance—but I wasn’t sure it was the right fit for modeling our problem.

Eventually, my curiosity led me to watch some AWS re:Invent talks during my free time. One talk, “Amazon DynamoDB Deep Dive: Advanced Design Patterns for DynamoDB,” caught my attention, and it was mind-blowing to see how complex entities and relationships could be modeled in a single DynamoDB table with very efficient access patterns. Everything started to make sense. Single Table Design requires a completely different mindset and a good understanding of your desired access and write patterns. However, the end result can be a solid, efficient, and highly scalable database. We’ve been working with DynamoDB for approximately 1.5 years now, and the results have been very positive. Over time, we’ve also learned to leverage additional features like Time to Live (TTL) to automatically delete irrelevant items and DynamoDB Streams to trigger other AWS services when an item is updated.
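To make the Single Table Design idea a bit more concrete, here is a minimal sketch in Python. It is illustrative only: the users-and-orders model, the USER#/ORDER# key prefixes, and the expiresAt TTL attribute are assumptions for the example, not our actual schema. The core trick is that every item shares generic PK/SK attributes, with the entity type encoded as a key prefix so that related items land in the same partition and can be fetched with a single Query.

```python
import time

def user_key(user_id: str) -> dict:
    # A user profile item: the partition key groups everything
    # belonging to this user; a fixed sort key marks the profile.
    return {"PK": f"USER#{user_id}", "SK": "PROFILE"}

def order_key(user_id: str, order_id: str) -> dict:
    # An order item stored in the same partition as its user,
    # so user + orders come back from one Query on the PK.
    return {"PK": f"USER#{user_id}", "SK": f"ORDER#{order_id}"}

def order_item(user_id: str, order_id: str, total_cents: int,
               ttl_days: int) -> dict:
    # Build a full order item, including a TTL attribute.
    # DynamoDB's TTL feature expects an epoch-seconds number and
    # deletes the item automatically after that time passes.
    item = order_key(user_id, order_id)
    item["total_cents"] = total_cents
    item["expiresAt"] = int(time.time()) + ttl_days * 24 * 3600
    return item

# A Query with PK = "USER#42" returns the profile plus all orders;
# adding begins_with(SK, "ORDER#") narrows it to orders only.
print(user_key("42"))
print(order_item("42", "1001", total_cents=2500, ttl_days=30))
```

With keys composed this way, the table serves several access patterns (“get a user,” “get a user with all their orders,” “get one order”) without joins, which is what makes the design efficient once the patterns are known up front.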

Final Thoughts

In general, I strongly believe that DynamoDB enables you to build a robust database infrastructure with minimal friction and a pricing model that can align well with your business needs. In our case, requirements like high availability, strong encryption at rest, on-demand scaling, and data residency aligned well with DynamoDB’s capabilities. However, if you’re not deeply integrated into the AWS serverless ecosystem or don’t have specific requirements, there are other options that might better suit your use case. For example, D1 from Cloudflare is a fascinating alternative—a fully managed, horizontally scalable database built on SQLite. Today, we have a wide variety of databases to choose from, but it’s crucial to step back and analyze your business requirements in detail before committing to a solution.