Adding a database to a Rust-powered framework
In one of my previous stories (see here), we looked at an example of implementing a small web app using the Rocket framework.
That web app hosted the assets of a client app and provided a small API. Now we are going to extend it by adding a database (PostgreSQL) together with the ORM named Diesel. Moreover, we will look into how to bundle all this together into a shareable web app by means of docker-compose.
Let us summarize here what parts of the application we are planning to add. Remember, so far our application provides an endpoint that allows computing the convex hull of a given set of points.
Let us add the following things:
- allow saving a result from the above-mentioned endpoint
- allow to GET all results in order to list them in the UI
- allow to DELETE a result
- allow to UPDATE the display name of a result
Of course, a useful thing when developing this is to have a PostgreSQL DB running in the background. You don't need to install anything locally on your system; instead, just use a configured docker image:

```shell
docker pull postgres:14.2
docker run --name postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -e POSTGRES_USER=convexhull \
  -p 5432:5432 -d postgres:14.2
```
You can use any version other than 14.2, or even leave the tag away to default to the latest one.
The above creates a PostgreSQL database named `convexhull` with user `convexhull` and password `mysecretpassword`. It runs at port `5432`, which is mapped to the same local port.
The Diesel ORM comes with a CLI that I propose to install locally. In order to do this, you might first have to install the PostgreSQL client library on your system (e.g., `libpq-dev` on Debian-based distributions). Afterward, you can install the CLI by using:

```shell
cargo install diesel_cli --no-default-features --features postgres
```
Having all this, from within the project folder you can run `diesel setup`. This adds a file `diesel.toml` and a `migrations` folder to our project. A migration consists of two files, `up.sql` and `down.sql`. These files are used to migrate the database from one version to the next and back again. So the general contract is that everything `up.sql` produces should be reverted in `down.sql`.
Everybody who has ever managed a DB in a larger project knows that this stuff is all about creating good SQL schemas and using as few indexes as possible but as many as necessary. Our schema will be kept small in order to teach the principles behind Diesel. First, we will tell Diesel to generate migration files for our schema:
```shell
diesel migration generate convex_hulls
```
This generates the aforementioned `up.sql`/`down.sql` files in the folder `migrations/XXX_convex_hulls`. We will add the following data definition to `up.sql`:
```sql
CREATE TABLE convex_hulls (
  "id" INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
  "created" TIMESTAMP NOT NULL
);

CREATE TABLE points (
  "id" INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
  "input" JSON NOT NULL,
  "output" JSON NOT NULL,
  "convex_hull_id" INTEGER NOT NULL REFERENCES convex_hulls ON DELETE CASCADE
);
```

And the following to `down.sql`:

```sql
DROP TABLE IF EXISTS points;
DROP TABLE IF EXISTS convex_hulls;
```
Via the foreign key `convex_hull_id`, each `ConvexHull` may have `Point` rows associated with it, and deleting a `ConvexHull` cascades to its points.
Now, we can instruct Diesel to run the migration by typing:

```shell
diesel migration run
```

and to redo it (in case) by:

```shell
diesel migration redo
```
During development, you will find yourself using the latter command every time you change the data model. At the same time, a file named `schema.rs` is created resp. kept up to date. It is worth having a look at this file to verify that the migration scripts produce the expected mapping. The resources defined there provide references to table names, columns, etc., from within your code. So such names never get hard-coded and are secured by the compiler against typos!
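For the schema above, the generated `schema.rs` will look roughly like the following (a sketch of typical Diesel 1.x output; the exact generated file may differ in detail):

```rust
// schema.rs — generated by Diesel, do not edit by hand.
table! {
    convex_hulls (id) {
        id -> Int4,
        created -> Timestamp,
    }
}

table! {
    points (id) {
        id -> Int4,
        input -> Json,
        output -> Json,
        convex_hull_id -> Int4,
    }
}

// Declares the foreign-key relation and permits joined queries
// over both tables.
joinable!(points -> convex_hulls (convex_hull_id));
allow_tables_to_appear_in_same_query!(convex_hulls, points);
```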
In order to use Diesel we first have to add the following dependencies to our `Cargo.toml`:

```toml
[dependencies]
serde = { version = "1.0.136", features = ["derive"] }
rocket = { version = "0.5.0-rc.1", features = ["json"] }
diesel = { version = "1.4.4", features = ["postgres", "serde_json"] }
serde_json = { version = "1.0.48", features = ["preserve_order"] }
dotenv = "0.15.0"
diesel_migrations = "1.4.0"
```
Moreover, we need to create a
.env file with the following content:
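Assuming the docker settings from above, it contains a single connection entry (adjust host and credentials to your own setup):

```shell
DATABASE_URL=postgres://convexhull:mysecretpassword@localhost:5432/convexhull
```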
Diesel loads these values when the application starts and uses the `DATABASE_URL` entry to connect to our database.
The endpoints we are going to add to the Rocket server will look like this:
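A sketch of what such CRUD routes can look like in Rocket 0.5 (route paths, handler names, and the `ConvexHull`/`NewConvexHull` types are illustrative assumptions, not necessarily the repository's exact code):

```rust
use rocket::http::Status;
use rocket::serde::json::Json;

// Assumes `#[macro_use] extern crate rocket;` at the crate root
// and the entity types from models.rs; each handler delegates to
// a service method.

#[post("/api/convex-hulls", data = "<new_hull>")]
async fn create(new_hull: Json<NewConvexHull>) -> Json<ConvexHull> {
    todo!("save a result")
}

#[get("/api/convex-hulls")]
async fn find_all() -> Json<Vec<ConvexHull>> {
    todo!("load all results for the UI list")
}

#[put("/api/convex-hulls/<id>", data = "<name>")]
async fn update_name(id: i32, name: Json<String>) -> Json<ConvexHull> {
    todo!("update the display name of a result")
}

#[delete("/api/convex-hulls/<id>")]
async fn delete(id: i32) -> Status {
    todo!("delete the result together with its points")
}
```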
Database entities are defined in
models.rs with the following content:
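A minimal sketch of what these entities can look like with Diesel 1.x derives (the field types follow the schema above; using `chrono` for the timestamp column is my assumption and requires Diesel's `chrono` feature):

```rust
use chrono::NaiveDateTime;
use serde::{Deserialize, Serialize};
use serde_json::Value;

use crate::schema::{convex_hulls, points};

#[derive(Queryable, Identifiable, Serialize)]
#[table_name = "convex_hulls"]
pub struct ConvexHull {
    pub id: i32,
    pub created: NaiveDateTime,
}

// Each Point row belongs to exactly one ConvexHull.
#[derive(Queryable, Identifiable, Associations, Serialize)]
#[belongs_to(ConvexHull)]
#[table_name = "points"]
pub struct Point {
    pub id: i32,
    pub input: Value,
    pub output: Value,
    pub convex_hull_id: i32,
}

// Insertable counterpart without the database-generated id.
#[derive(Insertable, Deserialize)]
#[table_name = "points"]
pub struct NewPoint {
    pub input: Value,
    pub output: Value,
    pub convex_hull_id: i32,
}
```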
All the methods the CRUD endpoints delegate to will be defined in a dedicated service module. These methods need to establish a connection to the database, for which a small helper has been written.
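The helper follows the canonical Diesel getting-started pattern: load the `.env` file via `dotenv` and establish a `PgConnection` from `DATABASE_URL`. The `find_all` method below it is an illustrative example of how such a service method can look (both are sketches, not the repository's exact code):

```rust
use diesel::pg::PgConnection;
use diesel::prelude::*;
use dotenv::dotenv;
use std::env;

pub fn establish_connection() -> PgConnection {
    // Makes the entries of .env visible as environment variables.
    dotenv().ok();
    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    PgConnection::establish(&database_url)
        .unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
}

// Illustrative service method: loads all stored results, newest first.
pub fn find_all(conn: &PgConnection) -> QueryResult<Vec<crate::models::ConvexHull>> {
    use crate::schema::convex_hulls::dsl::*;
    convex_hulls
        .order(created.desc())
        .load::<crate::models::ConvexHull>(conn)
}
```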
Since this article is not about the front-end, I won't give many details here. Just remember: the Rocket server hosts the front-end assets produced by a Vue app built with Vite. The registration of the route for static assets was described in the previous article. Essentially, the client is now adapted to utilize all the provided CRUD endpoints.
So far we have a server that provides several endpoints and hosts the front-end assets. Moreover, we have a database that persists some of our entities. Docker has a fantastic tool named `docker-compose` to bundle all this together and make it shareable, even though multiple servers, that is, multiple docker images, are involved.
Docker-compose is an addition to the docker engine and you have to install it separately (see here). We add a file named `docker-compose.yml` to the project that describes all components (servers) of the application. Among other things, it contains the line:

```yaml
command: ["./wait-for-it.sh", "db:5432", "--", "./target/release/convex-hull"]
```
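A minimal `docker-compose.yml` along these lines might look as follows (a sketch under the assumptions of this article; ports and variable names may differ from the repository):

```yaml
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      # Inside the compose network the database is reachable via
      # its service name "db" instead of localhost.
      - DATABASE_URL=postgres://convexhull:mysecretpassword@db:5432/convexhull
    command: ["./wait-for-it.sh", "db:5432", "--", "./target/release/convex-hull"]
  db:
    image: postgres:14.2
    environment:
      - POSTGRES_USER=convexhull
      - POSTGRES_PASSWORD=mysecretpassword
```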
So we have two services: one called `web`, whose build is described in the local `Dockerfile`, and another called `db`. The latter refers to an `image` instead of a `build`. Each service will run in its own process, and we can attach environment variables to it. Moreover, and this is crucial for us, the service `web` (the Rocket server) depends on the database being ready to accept requests. For this reason, we do two things:
- We use `depends_on`, which tells docker-compose that the service `web` depends on the service `db` at the startup level. That is, the former does not get started before the latter has been started.
- We give the service `web` a `command` that overrides the one in the `Dockerfile`; commands are executed after the build finishes. The `wait-for-it.sh` script is a utility that waits for the host `db` to accept requests at port `5432`. Only then does it continue to execute the second part, that is, starting the Rocket instance.
We can make docker-compose build and start the instances by typing `docker-compose up`. This runs the containers in the current terminal, and you can stop them as usual.
One final note I have to make is about database migrations. The database started this way won't contain all the necessary table definitions. For this reason, it is necessary to tell Diesel to run all pending migrations whenever the server starts.
In the original code, you will find the actual call to start Rocket wrapped as follows:

```rust
Ok(_) => rocket::build()...
```
`embedded_migrations` is a module that becomes available after invoking the macro `embed_migrations!();` from the crate `diesel_migrations`. It ensures that the database is kept up to date with respect to all migrations defined in the `migrations` folder.
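Putting this together, the startup wrapper can be sketched like this (a sketch assuming Diesel 1.x, Rocket 0.5-rc, and the `establish_connection` helper shown earlier):

```rust
#[macro_use]
extern crate rocket;
#[macro_use]
extern crate diesel_migrations;

// Bundles the SQL from the migrations folder into the binary at
// compile time and generates the module `embedded_migrations`.
embed_migrations!();

#[launch]
fn rocket() -> _ {
    let conn = establish_connection();
    match embedded_migrations::run(&conn) {
        // Migrations succeeded: build the Rocket instance as before.
        Ok(_) => rocket::build(), // .mount(...) omitted here
        Err(e) => panic!("could not run migrations: {}", e),
    }
}
```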
To get the entire code you can do as follows (requires: git, docker, docker-compose):

```shell
git clone https://github.com/applied-math-coding/convex-hull.git
cd convex-hull
git checkout v2.0    # brings you to the correct version
docker-compose up    # builds and runs the app
# then you can go to http://localhost:8000
```
We have to admit that this was quite a lot. But this is not due to Diesel or Rocket; rather, it comes from the circumstance that we have built a full-stack web application.
One final word of caution: although all the above reads easily, it is not as simple as it looks. In particular, when dealing with associations, there are many things that have to fit together. Again, this is not specific to Diesel but a circumstance you will encounter with probably all ORMs.
Although using Diesel on top of Rocket produces a very performant and secure app, the type system can be cumbersome to adhere to in larger applications. For this reason, we will look at one more approach in my next post.
Thanks for reading!