This document provides an overview of database systems, database system design, and database administration. Its goal is to provide context about database administration and architecture for developers of web applications who have never developed or administered large database systems, and for systems administrators without database administration experience.
Database systems are high-performance engines meant to provide high-throughput reading, writing, and processing of data. The data can be almost anything, but databases are all optimized to handle specific data structures (arrays, lists, numbers, strings, binary blobs, etc.) that obey specific kinds of standard usage patterns. While the term “database” has taken on a quasi-colloquial sense reflecting any large digital collection of data, technically speaking the term “application persistence system” is a more accurate descriptor of these systems than “database.”
Files and file systems can, and of course do, provide application persistence. Although file systems tend to store more unstructured data, by using certain kinds of regular formats, including delimited formats (e.g. CSV), specific file formats (e.g. JSON, YAML, or XML), or even binary formats, it’s possible to reap some of the structured benefits of a database on the file system.
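As a simple illustration of this kind of file-based persistence, consider a hypothetical users.csv with an id column; a minimal sketch in Python:

    import csv

    # A "poor man's database": structured records in a delimited file.
    # Note that every lookup is a full scan of the file from the top.
    def find_user(path, user_id):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["id"] == user_id:
                    return row
        return None

    # Assumes a users.csv file with an "id" column exists.
    print(find_user("users.csv", "42"))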
Although storing data on file systems is intuitively easier to understand, and all databases must ultimately store data on the file system, there are problems with storing data in files and on file systems:
Access times and overhead. There are a number of operational limitations with files. In short, to read a few bytes of information from a file, your program probably has to spend a lot of time finding that file from among the thousands or millions of files on your system. Then the program (likely) has to parse through the entire file, and it has to repeat this process any time it needs to find data.
While there are ways to index and cache data for easier access, this adds complexity and overhead, and a fuzzy line emerges between a “home-grown database system” and a file-system based persistence layer.
Data aggregation. Most applications use data in different forms and combinations than the forms in which they store data at the persistence level. Applications must take some amount of disparate data and aggregate it dynamically (more or less) to provide interesting features. The aggregation requirements of even simple programs often outstrip the ability of files and file systems to supply data fast enough.
Concurrency. File systems are conventionally bad at concurrent operations, which is largely a result of their design and history. If two processes need to read or write to a file, they have to wait their turn. If more than one process can modify a file, then programs need to constantly re-read the file to ensure that their data is consistent and up to date. If you need more than one system to read data from a single file system, you’re basically out of luck.
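A minimal sketch of the “wait your turn” discipline that files force on concurrent writers, assuming POSIX advisory locks (the fcntl module is Unix-only):

    import fcntl

    # Each writer takes an exclusive lock and waits its turn. The lock is
    # advisory: nothing stops a process that ignores the convention.
    def append_record(path, line):
        with open(path, "a") as f:
            fcntl.flock(f, fcntl.LOCK_EX)  # blocks until other holders release
            try:
                f.write(line + "\n")
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)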
There’s also a “catch-22” here: if you keep lots of data in a single file, you run into problems where reading data takes too long. If you put little bits of data into lots of different files, you can typically skirt the concurrency issues and, to a lesser extent, the data aggregation issues, but you spend more time finding files, and if the “little bits” of data are consistently below (or around) the block size (e.g. 4kb) of your file system, the file system can’t store the data efficiently.
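A back-of-the-envelope calculation makes the block-size problem concrete (the record size and record count here are purely illustrative):

    BLOCK_SIZE = 4096        # bytes; a common file system block size
    RECORD_SIZE = 200        # bytes; a hypothetical small record
    NUM_RECORDS = 1_000_000  # one record stored per file

    used = NUM_RECORDS * RECORD_SIZE
    allocated = NUM_RECORDS * BLOCK_SIZE  # each file occupies at least one block

    print(f"logical data:   {used / 2**20:,.0f} MiB")
    print(f"disk allocated: {allocated / 2**20:,.0f} MiB")
    print(f"wasted space:   {100 * (1 - used / allocated):.0f}%")

In this scenario, roughly 95% of the allocated disk space is wasted.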
File systems also have a number of permission and access control issues that are difficult to solve in most applications. For most potential problems with using file systems for application persistence there are workarounds, but the complexity and challenges lead developers, time and again, to the reliable solution for these kinds of problems: databases.
Database systems provide a centralized interface for data persistence, using a client-server architecture for managing data. Because the database server controls access to the data it provides, it can ensure that data modifications and additions happen quickly and reliably, and it can allow multiple processes (i.e. application instances) to read and write to a single corpus of data without placing the consistency or integrity of that data at risk.
Furthermore, the database system can optimize your data storage and access. By enforcing structural requirements (this is particularly true in the relational model), the database can store the data efficiently and provide some level of in-memory caching or indexing. These features mean that the database can spend less time finding or returning data. Finally, because databases have ready access to “hot” copies of the data, efficient indexes, and some basic information about the data’s types and structure, they are the ideal location for doing the first pass of data aggregation, analysis, and processing.
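The effect of indexing is easy to see with SQLite from Python (the table and column names here are illustrative):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.executemany("INSERT INTO users (email) VALUES (?)",
                     [("user%d@example.com" % i,) for i in range(100_000)])

    query = "SELECT * FROM users WHERE email = ?"

    # Without an index on email, SQLite reports a full table scan.
    print(conn.execute("EXPLAIN QUERY PLAN " + query,
                       ("user99@example.com",)).fetchall())

    conn.execute("CREATE INDEX idx_users_email ON users (email)")

    # With the index, the same query becomes a B-tree search.
    print(conn.execute("EXPLAIN QUERY PLAN " + query,
                       ("user99@example.com",)).fetchall())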
Because databases provide a crucial component of almost every piece of sufficiently complex software, and the bounds of the data persistence problem space are well known, database technology is quite advanced and very powerful. There are dozens of serious database solutions, each tuned to serve different usage profiles and to provide different kinds of functionality. This chapter is not meant as a comparison guide between different database products or even data persistence paradigms, but every database system has strengths and weaknesses, which this section attempts to highlight.
Database systems provide many performance advantages over using file systems (for example) for data storage. However, there are some hard technical limitations, imposed by hardware and operating systems, with which all solutions struggle. Basically, databases achieve performance gains by optimizing and limiting the way that they store, organize, and index data.
Databases also attempt to keep as much data, and as many indexes, in working memory as possible for fast access. As applications begin to regularly access and write volumes of data that exceed the capacity of your database server’s RAM, performance always drops off.
Beyond size constraints, there are also potential limitations around the number of connections. The available bandwidth and network configuration between the application instances and the database can be a limitation, and operating systems have hard limits on the number of simultaneous connections that they can track. These limits are high, but they exist.
If your database cannot meet demand, the problem is almost always related to limited RAM relative to the size of the “working set,” rather than connections or networking. In the absence of an application bug that leads to too many open connections, or networking glitches, it’s difficult to reach these limits.
Every application needs a unique scaling strategy because the ways that applications create and consume data are unique. While a large part of the “scaling” problem deals directly with the database architecture, database planning is only a small part of database scaling. While you can (and should!) begin writing your application with good database practices in mind, there’s a limited amount of preemptive optimization that’s possible before testing the application in real use. Most scaling work happens in response to actual usage data.
As you begin developing a long-term strategy for scaling your database, you need to have a clear understanding of the following factors:
- a grasp of the actual limitations of your database server software.
- an understanding of the usage patterns of your database. This should include:
- the distribution of activity between reads and writes.
- peak and average utilization.
- expected growth.
- an idea of the ways that your application stores and accesses data. Know your schema and the ways that you build queries, as well as (generally) which aspects of your application access the database, and at what rate.
This section provides an overview, some shorthand, and a few specific ideas about scaling strategies. These architectures survey the possible tools that you can use to help increase the capacity of your system and application.
There are two ways to scale a database system “horizontally,” or to distribute the load across a cluster of database servers: sharding, or partitioning the data so that each machine is responsible for only a small part of the data; and replication, where multiple instances of the database exist in parallel. The former increases the functional “working set” and available processing capacity at the expense of increased complexity, and the latter increases redundancy and (potentially) read capacity. There are advantages and disadvantages to both strategies.
Partitioning is a great strategy for avoiding the contention problems that databases often encounter, and can provide a great deal of raw capacity. However, partitioned databases are much more complicated to manage, both from an administration perspective and from the perspective of the application: data must be effectively distributed among the partitions, and per-partition backups and administration are much more complex. Also, the application, or the database itself, must have some provision for directing operations to the right partition, which incurs complexity. In some cases database tooling can address these issues well, but these technologies are young and continue to develop.
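A minimal sketch of the routing provision mentioned above, assuming hypothetical shard hosts and simple hash-based assignment (real deployments also need re-balancing and failure handling):

    import hashlib

    SHARDS = ["db-shard-0.example.com",
              "db-shard-1.example.com",
              "db-shard-2.example.com"]

    # Every operation must first decide which partition owns the key.
    def shard_for(key):
        digest = hashlib.md5(key.encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    print(shard_for("user:1234"))

Note that with naive modulo assignment, adding a shard remaps almost every key, which is why many systems use consistent hashing or range-based partitioning instead.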
Replication is typically much easier to set up and adds great availability and fail-over possibilities. Most database systems have some support for simple “master-slave” replication at the very least, and many have more sophisticated automatic replication support. Compared to partitioned systems, replicated systems are easy to administer. A significant challenge for replicated systems is that it’s very difficult to distribute write operations to more than one database server without risking the consistency of the database. Database systems that need to support read-heavy loads can flourish in replicated environments; however, if the system is write heavy, replication provides limited benefits.
Many production deployments use some combination of both replication and partitioning.
There is only one practical method for scaling a database “vertically”: buy better hardware. (Some may also consider optimization of code and indexes to be part of vertical scaling.) The options for this kind of scaling, of course, depend on the environment and use of your software, but there are a few consistent high-level concerns.
The first step in scaling vertically, and indeed in any kind of scaling, is to segregate processes and functions onto separate servers. Run application servers, web servers, and load balancers on different instances. This will reduce contention across your entire system, and once your deployment has functional/process segregation, it becomes easier to address specific bottlenecks horizontally.
If your application is running in a shared/virtualized environment (i.e. “the cloud”), begin by increasing the amount of memory allocated to the server. Then do whatever you can to decrease the amount of disk contention. Eventually, cost or disk contention will force you to run your own actual hardware.
There are limitations to how large you can scale a system vertically, and the operational requirements and complexity of horizontal scaling are often far greater than those of vertical scaling. Although every deployment has unique needs and costs, vertical scaling is roughly economically equivalent to horizontal scaling.
The proliferation of large, powerful database servers has helped create the notion that the only way to increase the scale of an application is to increase the power and resources of the database server. Indeed, even the new database systems designed for large-scale use have contributed to the idea that the only real way to increase application performance is to increase the size and capacity of the database engine itself. Broadly speaking, database systems have hard limits and performance requires resources; however, focusing on the database may prevent administrators and database architects from developing innovative and creative solutions to scaling problems.
Consider the following recommendation: always scale your database up or out as a last-resort solution. See if improving the capacity of some other element, or providing another caching layer, helps improve performance or capacity before you move on to database architecture. Also consider the scale of your own environment: will deploying a new architecture or additional database infrastructure enhance performance commensurate with the amount of time required to develop the solution and the money required to support that infrastructure long term? Consider the following:
Even if your application is huge and requires a monolithic data store, you may be able to archive some amount of older data to increase the available resources in your higher-performance, higher-availability system. Parts of your application might be really well suited to running on top of a relational/SQL database, and parts might make more sense running off of a non-relational/NoSQL database.
Finally, don’t be afraid to be creative: there’s nothing wrong with running some “databases” using SQLite in RAM (i.e. /dev/shm) if the situation calls for it. Consider the needs of your applications, and see where this leads you.
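For instance, a sketch of the SQLite-in-RAM idea (Linux-specific; /dev/shm is a tmpfs, so the database file lives in memory but, unlike a pure “:memory:” database, can be shared between processes):

    import sqlite3

    conn = sqlite3.connect("/dev/shm/scratch.db")  # path is illustrative
    conn.execute("CREATE TABLE IF NOT EXISTS cache (k TEXT PRIMARY KEY, v TEXT)")
    conn.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)",
                 ("greeting", "hello"))
    conn.commit()
    print(conn.execute("SELECT v FROM cache WHERE k = ?",
                       ("greeting",)).fetchone())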
See also
“High(er) Availability Is a Hoax” and “Web Service Architecture.”
For the most part, from an administrative perspective, databases are all roughly equivalent: they have similar resource requirements and performance limitations. However, in recent years, a number of companies and projects have attempted to provide new approaches to data-persistence related problems.
The most common term for these tools is “NoSQL,” but neither this nor the more correct descriptor “non-relational database” really captures some of the most important shifts that are in progress for database systems.
There are two major aspects or lessons of this ongoing paradigm shift:
Databases do not need to be general-purpose tools that provide flexibility for many different workloads, particularly when that flexibility comes at the cost of simplicity.
Specialized systems that address the needs of the workload at hand are often preferable. Different applications and different aspects of applications require different things from a database system, and having database tools that are great for different kinds of structure, for analytic operations, data processing, and distribution models is important for being able to scale appropriately.
Horizontal scalability is slightly more important than optimized vertical scalability and general-purpose design. The ACID-compliant relational database paradigm is powerful, and having these kinds of tools is important, but ACID compliance effectively prohibits horizontal scaling, and raw throughput often matters more than instant consistency.
In practice, these conceptual shifts give rise to, and support, a number of practical developments for database users, software developers, and systems administrators.
Inevitably, most developers and architects see scaling problems on the horizon and get really scared. This is probably the worst response. If nothing else, you can solve nearly all scaling problems, at least temporarily, within a day by increasing the amount of resources (i.e. RAM) allotted. This buys you more time to figure out how to scale more efficiently.
The first response to the threat of a scaling problem (too much contention or slowing performance) is often to try to address all scaling problems at once, and to build a system that’s impervious to every availability threat. You start thinking about sharding (horizontal partitioning), replication, and fail-over. The result is inevitably near-total overwhelm.
If your current database system cannot handle demand, or might not be able to handle demand, do everything you can to learn this before demand saturates the server. Then do everything you can to figure out when the service will reach saturation. Then prepare for this eventuality: once your systems have reached capacity, or are near that point, it’s often too late to do anything. Typically, preparations involve:
Developing business plans. No matter how you have to scale, it will require additional resources: more development to optimize the code, more money for servers, and/or better servers. If you have resources (i.e. money) at hand, you’ll be able to buy time and afford the solution.
The upside: if you have an application that’s getting enough use that it demands “scaling,” you likely provide value to users, so this part might be easier than you think.
Consider easier ways to decrease the load on your database that don’t require fancy database architectures. Sometimes the bottleneck is the application server. Sometimes it’s the code. Sometimes adding more application servers and load balancing connections among them works better. Sometimes the best solution is to put a caching layer in front of the web server.
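A related, even cheaper variant of this idea is application-side caching of expensive query results; a minimal sketch (the 30-second TTL is an arbitrary assumption, and how much staleness you can tolerate is application-specific):

    import time

    _cache = {}  # maps a query string to (timestamp, result)
    TTL = 30.0   # seconds of staleness we are willing to tolerate

    def cached_query(sql, run_query):
        now = time.monotonic()
        hit = _cache.get(sql)
        if hit and now - hit[0] < TTL:
            return hit[1]             # serve from memory, skip the database
        result = run_query(sql)       # run_query is your real database call
        _cache[sql] = (now, result)
        return result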
In most cases, if you’re running any other applications on your database system, separating the database from those other services increases consistent performance and should always precede any other serious scaling activity. Preventing your application and database server from contending for the same pool of resources is often immensely helpful.
Address systems administration challenges in the same way that you’d address optimization problems in code: identify the bottlenecks and optimize tactically.
Practically, most applications have a pretty standard development trajectory and similar, if not identical, infrastructure requirements. Consider the following progression and “scaling” process:
The application, the database, your email server, and probably the machine you connect to IRC from are all running on a single instance. It works well until you have more than a dozen concurrent users and 100 MB of data, at which point the server starts crawling along. It’s time to do something.
Partition services onto different systems. Put the database on the biggest server you can afford, and keep everything but the database off this system. RAM is what matters here, and you don’t want any other processes running and competing for resources.
Application servers are typically single threaded and often become a bottleneck well before the database. So you want to run as many of those as you have processor cores for (taking RAM into consideration). Put everything behind an HTTP server/load balancer (i.e. nginx) that proxies dynamic requests back to the application servers and serves static content (JavaScript, CSS, etc.) directly.
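Assuming the application servers are Python processes run under gunicorn behind nginx, a minimal configuration sketch (a gunicorn.conf.py) that applies the cores-based sizing above:

    import multiprocessing

    bind = "127.0.0.1:8000"  # nginx proxies dynamic requests here
    workers = multiprocessing.cpu_count() * 2 + 1  # a common starting heuristic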
Begin replicating the database. Replication has some overhead, but it provides a reasonably hot standby, so if anything happens to the “primary” database you have a running copy to use while you fix things. That’s very useful.
The operational challenge of replicated databases is ensuring that write operations propagate to the replicated databases promptly. If you tell the database to write data, does the write have to propagate to the secondary before it returns? Is it safe to write to the non-primary? If so, how does the system deal with possible collisions that come from concurrent writes?
Despite the potential issues around concurrency, possible collisions, and writing data to replicated databases, there are no fundamental issues with simply reading data from the “non-primary” database, except for possibly receiving slightly stale results while waiting for replication to complete. As a result, applications eventually add support for splitting reads and writes between the two databases: read from the non-master/non-primary database, and write to the master.
Adding a replicated database instance and splitting read and write operations can allow your application to serve a great many additional requests. If your application is “read heavy,” using a replicated system will often solve most database-related scaling issues. Database systems often make it easy to add additional replicated instances, or to have instances replicate from other replicated instances (as in a chain).
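A minimal sketch of read/write splitting at the application level (the connection objects and the naive SELECT check are assumptions; a real router must also handle transactions and read-your-own-writes semantics):

    class SplitRouter:
        """Send writes to the primary and reads to a replica."""

        def __init__(self, primary, replica):
            self.primary = primary  # all writes go here
            self.replica = replica  # reads tolerate slight replication lag

        def execute(self, sql, params=()):
            # Naive classification: anything that isn't a SELECT is a write.
            is_read = sql.lstrip().upper().startswith("SELECT")
            conn = self.replica if is_read else self.primary
            return conn.execute(sql, params)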
If your application is “write heavy,” you will need to explore other options.
If you reach the limits of what simple replication can provide, and your database layer is still saturated (or nearly saturated), the only remaining option is to partition your database, or create “shards.”
Some systems make this easier than others, but all partitioning systems require a noticeable increase in complexity. In these systems the database (or the application) splits the data corpus between several master processes, each of which stores and writes a separate “slice” of the data.
Partitioning requires substantial thought and consideration to determine the best way to split the data: the partition key should distribute the data, and the load, evenly across the partitions, and common operations should, as much as possible, touch only a single partition.
Depending on your database system and the structure of your application, some modifications to the application itself may help facilitate this partitioning.
If you have a sharded (and replicated) database layer and it still cannot keep up with demand, there are still options for adding capacity.
As demand continues to increase, which it might, analyze your application and its use patterns, and use these insights to enhance or bolster your architecture accordingly.
Again, throw money at the problem. If you continue to run into capacity problems at this stage, presumably you have users and you’re providing a valuable service. Someone ought to be able to pay you for that, and with more resources many options become available.
While this background and context are potentially enlightening, all of the information presented above may not be immediately useful to you, and developing a plan for “how to scale applications in the future” may be an exercise in futility and premature optimization.
Developers of open source web applications [1] find themselves in a unique situation: most of the time their applications run as single instances in very small deployments, but sometimes they run as massive instances. It’s helpful to have an application that can run with the same code on top of clusters and small instances as needed, but only as you need it.
You should always consider the possibility that you will need to scale your database server and application eventually. However, beyond knowing how you would scale the database system and the application as needed, actually working on scaling infrastructure and support before you have an actual need can be counterproductive. While there are patterns (and anti-patterns) for scaling the capacity of the database layer, it’s difficult to devise a useful strategy without concrete data about use patterns.
[1] I’ve glossed over the argument that many free network services developers and proponents make, which is that FNS web apps scale horizontally by virtue of federation rather than by virtue of complex database systems and architectures. I’m not sure this is practically useful yet, but it’s a useful idea.