Building a full-stack app with Next.js - Adding a database
Hello there! This is the second part of a series about building a full-stack app with Next.js. In the first article, we built and deployed to the cloud (using Docker) an app that fetches posts from an API and displays them as a list, with the ability to navigate to a detail page for any post you click on the index page. In this article, we are going to add a database and fetch our posts from it instead of from a remote API.
Setting up our local database
We're using Docker for our deployment, but nothing is stopping us from using Docker for development as well! In this case, we'll be using a dockerized database so we don't have to deal with installing a new service on our machine, setting up users, and what have you.
We are going to use docker-compose to spin up a PostgreSQL database. Once you have it installed, create a `.env` file and a `docker-compose.yml` file next to your `node_modules` with the following contents:
```yaml
# docker-compose.yml
version: '3.8'
services:
  pg:
    image: "postgres:alpine"
    container_name: "next-db"
    env_file:
      - ".env"
    ports:
      - "5555:5432"
    volumes:
      - "pg:/var/lib/postgresql/data"
    restart: "always"
volumes:
  pg:
```
```
# .env
POSTGRES_PASSWORD=password
POSTGRES_USER=user
POSTGRES_DB=db
POSTGRES_HOST=localhost
POSTGRES_PORT=5555
```
A quick rundown of what we just did: with these files in place, we will create a Docker container based on the `postgres:alpine` image. That container will run a PostgreSQL database named `db`, with a user named `user` whose password is `password` (you can of course change these values to your liking). The `POSTGRES_HOST` and `POSTGRES_PORT` environment variables will only be used by our app.
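As a quick preview of how those last two variables will be consumed (the actual connection code comes later in this article), Next.js loads the `.env` file into `process.env`, so our app will read them roughly like this:

```ts
// Sketch: values from .env, exposed by Next.js on process.env.
const host = process.env.POSTGRES_HOST;                     // "localhost"
const port = parseInt(process.env.POSTGRES_PORT as string); // 5555
```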
Create your container by running:

```bash
docker-compose up -d
```
This will download the required Docker images (in case you don't already have them) and create every container specified in the `docker-compose.yml` file.
We mapped the container's port 5432 to our local port 5555, so we can access our dockerized Postgres instance by running:

```bash
psql -h localhost -p 5555 -U user -d db
```
Or, if you don't have `psql` (the PostgreSQL client) installed locally, you can achieve the same by creating a Docker container that runs `psql` and is deleted after closing the `psql` session:

```bash
docker run -ti --rm --network=host postgres:alpine psql -h localhost -p 5555 -U user -d db
```
After typing the password (if you didn't change the `.env` file, it should be `password`), we should see a `psql` prompt, in which we can write SQL statements.
Connecting our app to our local database
Awesome! Up to this point we have our PostgreSQL database running. The next thing we have to do is connect our app to it.
To do so, we are going to use TypeORM, an Object-Relational Mapping library for TypeScript. Install the required dependencies with the following command:

```bash
npm install --save typeorm pg reflect-metadata
```
Next, we'll have to update our `tsconfig.json` file to enable some TypeScript settings TypeORM requires:

```json
"compilerOptions": {
  ...
  "emitDecoratorMetadata": true,
  "experimentalDecorators": true
}
```
Add the following line at the top of your `_app.tsx` file:

```tsx
import "reflect-metadata";
```
We are almost done with this part. The next thing we'll do is define our `Post` model. Although we already had one, we now need to create a class and decorate it with the `@Entity` decorator so TypeORM can create the underlying table in our database and knows how to retrieve, update, and save such objects. Create a `models` directory next to your `pages` directory and add the following file:
```ts
// post.model.ts
import { Entity, Column, PrimaryColumn } from "typeorm";

@Entity()
export class Post {
  @PrimaryColumn()
  id: number;

  @Column({ length: 256 })
  title: string;

  @Column("text")
  body: string;
}
```
Great. We now need to establish a connection with our database somewhere in our application. For that, we are going to create a utility function that returns a connection. Create a directory called `utils` (next to `pages/`) and create an `index.ts` file inside of it:
```ts
// utils/index.ts
import { Post } from "../models/post.model";
import { getConnection, createConnection } from "typeorm";

export async function getOrCreateConnection() {
  try {
    const conn = getConnection();
    return conn;
  } catch (e) {
    return createConnection({
      type: "postgres",
      host: process.env.POSTGRES_HOST as string,
      port: parseInt(process.env.POSTGRES_PORT as string),
      username: process.env.POSTGRES_USER as string,
      password: process.env.POSTGRES_PASSWORD as string,
      database: process.env.POSTGRES_DB as string,
      entities: [Post],
      synchronize: true,
      logging: false
    });
  }
}
```
This function tries to get the default connection, and creates a new one if it doesn't exist.
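A rough illustration of what that means in practice (assuming the connection options above are valid):

```ts
// First call: getConnection() throws because no default connection
// exists yet, so a new one is created. Second call: getConnection()
// succeeds and the existing default connection is returned.
const first = await getOrCreateConnection();
const second = await getOrCreateConnection();
console.log(first === second); // true, both refer to the default connection
```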
Next.js automatically picks up any `.env` files in our project and uses them to populate the `process.env` object. We can now use those values to establish a connection with our database. As you can also see, we are listing our entities for TypeORM in the `entities` entry (we only have one entity, but if we add more we will have to update that list so TypeORM can create the corresponding tables in the database).
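To make that concrete, here is a hypothetical second entity; `Comment` does not exist in our app, it is shown only to illustrate that every new model has to be both defined and registered:

```ts
// Hypothetical Comment entity, NOT part of this series' app.
import { Entity, Column, PrimaryColumn } from "typeorm";

@Entity()
export class Comment {
  @PrimaryColumn()
  id: number;

  @Column("text")
  text: string;
}

// In utils/index.ts, createConnection would then need:
//   entities: [Post, Comment],
// otherwise TypeORM won't create (or synchronize) the comment table.
```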
In your home page (`pages/index.tsx`), get a connection in the `getServerSideProps` function:
```tsx
// pages/index.tsx
import { getOrCreateConnection } from "../utils";

export async function getServerSideProps() {
  const conn = await getOrCreateConnection();
  // ...
}
```
If you now try to navigate to http://localhost:3000/, a 404 page will be shown and an error message will be displayed in your server console saying `Support for the experimental syntax 'decorators-legacy' isn't currently enabled`. To solve that, stop your development server and install the following dev dependencies:
```bash
npm install --save-dev @babel/plugin-proposal-class-properties @babel/plugin-proposal-decorators babel-plugin-transform-typescript-metadata
```
(yes it's ugly, I know)
Then, create a `.babelrc` file next to your `node_modules` and add the following content:
```json
{
  "presets": ["next/babel"],
  "plugins": [
    "babel-plugin-transform-typescript-metadata",
    ["@babel/plugin-proposal-decorators", { "legacy": true }]
  ]
}
```
Next.js will automatically apply a custom Babel configuration if a config file is present. Start the development server (`npm run dev`) once again and it should work just fine.
Fetching data from our local database
After sorting out that issue we had with TypeORM, it's time for us to fetch some posts. To do so, we need to get the corresponding repository from the connection object:
```tsx
// pages/index.tsx
import { Post } from "../models/post.model";

export async function getServerSideProps() {
  const conn = await getOrCreateConnection();
  const postRepo = conn.getRepository(Post);
  // ...
}
```
Strangely enough, that fails. To sort it out, we need to tweak it a little bit:

```ts
const postRepo = conn.getRepository<Post>("Post");
```

It certainly does not look pretty, but it solves our problem. (I suspect this happens because Next.js's hot reloading re-evaluates the model module, so the `Post` class object we pass in no longer matches the one registered with the existing connection, while looking the entity up by name sidesteps that identity check. If you know the definitive answer, please reach out to me; you will find my contact info at the bottom of the article.)
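If you'd rather not repeat that string-based lookup in every page, you could wrap it in a small helper. This is my own addition, not part of the original app, and it assumes the entity's registered name matches the class name:

```ts
// Hypothetical helper, appended to utils/index.ts (Post and
// getOrCreateConnection are already in scope in that file).
import { Repository } from "typeorm";

export async function getPostRepository(): Promise<Repository<Post>> {
  const conn = await getOrCreateConnection();
  // Look the entity up by name to sidestep the failing class lookup.
  return conn.getRepository<Post>("Post");
}
```

Pages could then call `const postRepo = await getPostRepository();` instead of repeating the lookup.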
Once we have our posts repository, we can fetch our posts from the database and make queries to it!

```ts
const localPosts = await postRepo.find();
console.log(`${localPosts.length} posts fetched from the database`);
```
If you refresh your page, you will see that your server console printed that it fetched 0 posts from the database, which makes sense, as we don't have any posts in our database right now. We don't have a form to create new posts either, but we can access the database directly, so we can `INSERT` some records into it.
Connect to your database as you did before, and run some `INSERT` statements to create some posts. If you like, you can run `\d post` in your `psql` console to see how TypeORM created your `post` table. It should look something like this:
```
db=# \d post
                     Table "public.post"
 Column |          Type          | Collation | Nullable | Default
--------+------------------------+-----------+----------+---------
 id     | integer                |           | not null |
 title  | character varying(256) |           | not null |
 body   | text                   |           | not null |
Indexes:
    "PK_be5fda3aac270b134ff9c21cdee" PRIMARY KEY, btree (id)

db=# INSERT INTO post VALUES (1, 'My first post', 'Hello world!'), (2, 'Next.js is awesome', 'It really is great!'), (3, 'TypeScript also rocks', 'TS rlz');
INSERT 0 3
```
If you now refresh your page, it should display a message in the server console saying that it fetched three posts from your database.
Okay great, we now have a local database with some data, our app is connecting to the database and fetching some posts. What we now need to do is pass that data to our page component as props.
```tsx
export async function getServerSideProps() {
  const conn = await getOrCreateConnection();
  const postRepo = conn.getRepository<Post>("Post");
  const posts = await postRepo.find();
  console.log(`${posts.length} posts fetched from the database`);

  return {
    props: { msg: "Hello world!", posts }
  };
}
```
Refresh your page, aaaand it fails. It now says that it cannot serialize our `Post` objects: Next.js requires everything returned in `props` to be JSON-serializable, and our posts are class instances. To sort it out, we have to serialize our data objects ourselves, and then parse them back again in our page component:
```tsx
import { InferGetServerSidePropsType } from "next";
import { Fragment } from "react";
import { PostList } from "../components/PostList";
import { getOrCreateConnection } from "../utils";
import { Post } from "../models/post.model";

export async function getServerSideProps() {
  const conn = await getOrCreateConnection();
  const postRepo = conn.getRepository<Post>("Post");
  const posts = (await postRepo.find()).map(p => JSON.stringify(p));
  console.log(`${posts.length} posts fetched from the database`);

  return {
    props: { msg: "Hello world!", posts }
  };
}

export default function Home({
  posts,
  msg
}: InferGetServerSidePropsType<typeof getServerSideProps>) {
  const postObjs = posts.map(p => JSON.parse(p) as Post);

  return (
    <Fragment>
      <h1 className="m-4 text-center text-4xl text-red-500">{msg}</h1>
      <PostList posts={postObjs} />
    </Fragment>
  );
}
```
We are now fetching our posts from the database, converting them to strings, passing them to the component as props, and finally parsing them back into `Post` objects. Maybe there is a better solution to this issue than serializing and parsing our data objects manually, but I haven't found it yet (if you have, please let me know).
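For what it's worth, one alternative I've seen used elsewhere (a sketch, not necessarily better) is to round-trip the whole result through JSON once on the server, so the props become plain objects and the page component needs no parsing at all:

```ts
// Sketch: a single JSON round-trip strips the class prototypes,
// leaving plain Post-shaped objects that Next.js can serialize as-is.
const posts: Post[] = JSON.parse(JSON.stringify(await postRepo.find()));

return {
  props: { msg: "Hello world!", posts }
};
```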
You can see that our home page now displays our posts, great! But our `PostList` and `PostListItem` components are outdated: their props expect the `Post` that's defined in our `types.d.ts` file, so we should update that. First, delete `types.d.ts`. Then, import the `Post` model in both `components/PostList.tsx` and `components/PostListItem.tsx`, as sketched below (I know, we are now importing our `Post` model everywhere we reference it, instead of having it globally available - tough luck).
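For reference, the change in `PostList` could look something like this (a sketch: I'm assuming the props shape from part one, so keep your actual markup as it is):

```tsx
// components/PostList.tsx: only the import and prop type change; the
// rendering from part one stays untouched. The props shape is assumed.
import { Post } from "../models/post.model";

interface PostListProps {
  posts: Post[];
}

export function PostList({ posts }: PostListProps) {
  // ...your existing list rendering goes here
  return null; // placeholder for the markup from part one
}
```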
If you click on a post on the home page now, the detail page still shows a different post (the one fetched from the API), so we have to update that page as well:
```tsx
// pages/[id].tsx
import { GetServerSidePropsContext, InferGetServerSidePropsType } from "next";
import { Fragment } from "react";
import { getOrCreateConnection } from "../../utils";
import { Post } from "../../models/post.model";

export async function getServerSideProps(context: GetServerSidePropsContext) {
  const { id } = context.params;
  const conn = await getOrCreateConnection();
  const postRepo = conn.getRepository<Post>("Post");
  const post = JSON.stringify(
    await postRepo.findOneOrFail(parseInt(id as string))
  );

  return {
    props: { post }
  };
}

export default function PostDetailPage({
  post
}: InferGetServerSidePropsType<typeof getServerSideProps>) {
  const postObj = JSON.parse(post) as Post;

  return (
    <Fragment>
      <h1 className="m-4 text-center text-3xl text-red-400">{postObj.title}</h1>
      <p className="m-8">{postObj.body}</p>
    </Fragment>
  );
}
```
Nothing new here. We are using the `findOneOrFail` method of the `postRepo` object to fetch a single `Post`, serializing it, and then parsing it back into a `Post` object in our page component.
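One optional improvement, entirely my own addition: `findOneOrFail` throws when no post matches the given id, which currently surfaces as a server error. If you are on Next.js 10 or later, you could catch it and return Next.js's built-in 404 page instead:

```tsx
// Sketch: graceful 404 for missing posts (notFound needs Next.js 10+).
export async function getServerSideProps(context: GetServerSidePropsContext) {
  const { id } = context.params;
  const conn = await getOrCreateConnection();
  const postRepo = conn.getRepository<Post>("Post");

  try {
    const post = JSON.stringify(
      await postRepo.findOneOrFail(parseInt(id as string))
    );
    return { props: { post } };
  } catch {
    return { notFound: true };
  }
}
```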
Deployment
We have successfully integrated a database into our app. Instead of fetching posts from a REST API, we are now fetching them from a PostgreSQL database running locally. Next, we have to deploy our app's new version to the cloud - but what can we do about our database?
GCP has its own managed SQL offering (Cloud SQL), but after being charged 15 USD for an instance I didn't use at all, I wouldn't recommend it for such a toy project. Unfortunately, it doesn't have a free tier, so I won't even consider it for this case.
If we look at other cloud providers, we find that AWS offers RDS, which has a pretty generous free quota of 750 hours a month, so we might as well choose RDS over Cloud SQL for this project.
Here is the thing: our app is running in Cloud Run, which gives us an HTTPS URL to visit our app once deployed. As the container running the app can be torn down at any time, we don't get a fixed IP to reference, only the URL. On the other hand, RDS allows us to configure security rules to allow inbound and outbound traffic from or to a specific IP or a specific VPC, but not from a URL, so we can't tell RDS to only allow incoming traffic from our app. A workaround we could implement here is allowing all traffic to our database, so our app running in Cloud Run can communicate with our database running in RDS. Although this solves our issue, we would be exposing our database to the world.
What about migrating our app to AWS, then? It is a very feasible option, yes. We could deploy our app to an ECS cluster using AWS Fargate, which is a somewhat similar product to Cloud Run (serverless containers). I tried this alternative, and I found it quite a bit more complicated than using Cloud Run (creating a cluster, using a load balancer to get a URL, creating a repository to deploy your image, and so on). Also, there were two things I didn't like about deploying this app to AWS:
- Lack of an SSL certificate out of the box. In case you hadn't noticed, our app running on Cloud Run is accessible via an HTTPS URL; GCP takes care of provisioning our app with a certificate for secure communications.
- Pricing. Running containers on Fargate is expensive. I deployed a simple nginx container over the weekend, and by Monday I had already been charged 1 USD, for the simplest of containers, receiving no traffic at all.
So, my recommendation for this app is Cloud Run (GCP) for our app, and RDS (AWS) for the database. This multi-cloud approach is based solely on pricing: we don't want to spend any money on a toy app, do we? Nonetheless, it is a trade-off, where we are choosing price over security (our database will be exposed to the world).
So, create a PostgreSQL database in RDS (save your master username and password!), and once it is available you need to edit its security rules to allow any incoming traffic. To do so, do the following:
- Go to the RDS dashboard and select your database
- In the Security group rules card, select any security group
- Select Actions > Edit inbound rules
- Add a new rule with type All traffic and source Anywhere, and save your changes
Having done that, we now have to somehow tell our app running in Cloud Run how to reach our PostgreSQL database. In development we are using a `.env` file, and the credentials are picked up from environment variables. We will do the same here, but without a `.env` file: Cloud Run allows us to define environment variables for our service, so that's what we are going to use.
- Go to the GCP console, and then to the Cloud Run dashboard
- Select your service
- Click EDIT & DEPLOY NEW REVISION
- Select the Variables tab
- Add the following environment variables:
  - `POSTGRES_HOST`: your RDS database endpoint
  - `POSTGRES_PORT`: 5432 by default
  - `POSTGRES_USER`: your RDS database username
  - `POSTGRES_PASSWORD`: the password for that username
  - `POSTGRES_DB`: `postgres` by default
- Click DEPLOY
Before deploying, we need to tell Docker to ignore our `.env` file when building our image. To do so, just add the following line to `.dockerignore`:

```
.env
```
Our service now has the corresponding environment variables, but we haven't deployed our app's latest version yet. To do so, repeat the deployment steps from the first part of this series. If you like, you can add some really handy npm scripts to automate this process a little bit:
"scripts": {
"gcp:build": "gcloud builds submit --tag gcr.io/<YOUR_PROJECT_ID>/my_nextjs_app
",
"gcp:deploy": "gcloud run deploy <YOUR_SERVICE_NAME> --image gcr.io/<YOUR_PROJECT_ID> --platform managed --region us-central1 --allow-unauthenticated"
}
Now, in order to build your app and deploy it to Cloud Run, you only need to run:

```bash
npm run gcp:build && npm run gcp:deploy
```
Build your image, deploy your app, and check the live version. You shouldn't see any data, as our RDS database doesn't have any data whatsoever. Create some by connecting to your RDS database using `psql` (just like we did before, but you will now have to change the host you pass to `psql` via the `-h` flag):

```bash
docker run -ti --rm --network=host postgres:alpine psql -h <YOUR_RDS_ENDPOINT> -p 5432 -U <YOUR_RDS_USER> -d <YOUR_DB_NAME>
```
In this second part of the series we didn't add any features to our app, but we migrated it to display data from a database. It did require quite some work, but we made it through! In the next part, we are going to add a form so we can create and edit posts, and we are going to see how we can send data back and forth to the server. We will also add a button to delete posts.
Hope you liked it!
🐦 @LucianoSerruya
📧 lucianoserruya (at) gmail (dot) com