Hello, guys! I’m back with the third post of the mini-series: Vanilla GraphQL With NodeJS And PostgreSQL. In today’s post, we will replace the in-memory database with a real one that is used a lot in production: PostgreSQL. We will learn how to utilize Docker’s power to set up the database in an extremely easy and enjoyable way. Then, we will rework the DB class to interact with the database by building SQL queries. That sounds like a lot of work, but believe me, it’s gonna be a lot of fun!
For those who missed my previous posts, or want to jump to the next ones, please use the links below:
- Setting Up Application
- Refactoring
- Adding Database (this post)
- Session & Authorization (TBA)
- Rewrite Schema Definition (TBA)
Alright, let’s get started!
Preparation
Just like my previous posts, today we will continue the work that we’ve done before. So you can either use your source code from the previous post, or, if you want to start fresh, clone my GitHub repository and get ready as follows:
git clone https://github.com/ChunML/vanilla-graphql
cd vanilla-graphql
git checkout 2-refactoring-source-code
yarn
mkdir dist
After that, let’s run `yarn start` and verify that all four resolvers work properly: `register`, `login`, `getUser`, and `getUsers`. If nothing strange happens, let’s jump right into the next section.
Setting Up PostgreSQL
As I said earlier, we will use Docker to set up the database because it will make your life much easier. Besides, setting up PostgreSQL is not something that we want to spend our precious time on, right?
The first thing is to make sure you have `docker` and `docker-compose` installed. Here are some useful links that you can use:
Below are the versions of my `docker` and `docker-compose` at the time of writing this post:
$ docker -v
Docker version 20.10.0, build 7287ab3
$ docker-compose -v
docker-compose version 1.27.4, build 40524192
So we have `docker` and `docker-compose` installed. Next, let’s create a file called `Dockerfile` inside the `db` folder. That file will give instructions on how to set up an image for the PostgreSQL database.
Creating Dockerfile
touch ./src/db/Dockerfile
The content of the file is super simple. We only need to pull the `postgres` image and then specify the script to execute when starting the container:
FROM postgres:12.5-alpine
COPY create.sql /docker-entrypoint-initdb.d
What script do we want it to execute at startup? Well, we need a database for the application, so we need to create it first. The easiest way to do this is via a `.sql` file, so let’s go ahead and create it:
touch ./src/db/create.sql
Then, inside the newly created file, add the following line to create a database named `graphql_app`; that’s the database that we’re gonna use:
CREATE DATABASE graphql_app;
That’s all we need to do to prepare the `postgres` image. We could go ahead and build and run the image directly. However, I rarely run a docker image that way. Instead, I love running docker images using `docker-compose`. It provides a much better way to do the configuration, especially when you want to run two or more containers.
Creating docker-compose.yml
That being said, we need to create a file named `docker-compose.yml`. We will put all the configuration on how the containers should run in here.
touch docker-compose.yml
Let’s fill it with the content below. That’s the settings for our PostgreSQL container:
version: "3.7"
services:
  db:
    build:
      context: ./src/db
      dockerfile: Dockerfile
    ports:
      - 5050:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
volumes:
  postgres_data:
Pretty straightforward, right? One tip when using Docker: I’d rather not use the default port for a specific application. For instance, the default port for PostgreSQL is `5432`, but here I expose port `5050` on the host machine. The reason behind that is, on some systems (like Ubuntu or macOS), there’s a high chance that you already have PostgreSQL pre-installed. Therefore, your application might connect to your system’s PostgreSQL rather than the one in Docker’s container. One way to be sure is not to use the default port. Doing that also forces you to look up how to configure things rather than relying on default settings.
Alright, cool! We can finally start the whole thing:
docker-compose up
If you see something like the output below instead of something unusual, then we have successfully fired up PostgreSQL! Half of today’s job is done!
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2020-12-22 07:19:20.933 UTC [1] LOG: starting PostgreSQL 12.5 on x86_64-pc-linux-musl, compiled by gcc (Alpine 9.3.0) 9.3.0, 64-bit
db_1 | 2020-12-22 07:19:20.933 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-12-22 07:19:20.933 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-12-22 07:19:20.936 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-12-22 07:19:20.950 UTC [46] LOG: database system was shut down at 2020-12-22 07:19:20 UTC
db_1 | 2020-12-22 07:19:20.954 UTC [1] LOG: database system is ready to accept connections
Adding adminer
We’ve come to the final step of setting up the database. It’s neither crucial nor mandatory, but I think it’s great to have a tool to visualize what’s inside the database. That’s what `adminer` is for. Setting up `adminer` in `docker-compose.yml` is pretty simple. First, make sure we stop the current container:
docker-compose down
Then, update `docker-compose.yml` like below:
version: "3.7"
services:
  db:
    build:
      context: ./src/db
      dockerfile: Dockerfile
    ports:
      - 5050:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  adminer: # new
    image: adminer # new
    depends_on: # new
      - db # new
    ports: # new
      - 8080:8080 # new
volumes:
  postgres_data:
Now, if we run `docker-compose up` again, we will have two containers running, both coming from the same `docker-compose.yml` above.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0662a05b9f0 adminer "entrypoint.sh docke…" 25 minutes ago Up 2 seconds 0.0.0.0:8080->8080/tcp vanilla_graphql_adminer_1
d28e53451898 reproduce_db "docker-entrypoint.s…" 25 minutes ago Up 2 seconds 0.0.0.0:5050->5432/tcp vanilla_graphql_db_1
The beauty of using `adminer` is that if you open the browser and access http://localhost:8080, you will see something like an admin panel (PHP dudes might be thinking of `phpMyAdmin` right now 😉):

*`adminer` login screen*

We can go ahead and log in with the following information:
- System: PostgreSQL
- Server: db (this is what we named the database container inside `docker-compose.yml`)
- Username: postgres
- Password: postgres

We can leave the Database field blank, or if you want to be specific, fill in `graphql_app`, our database name. After logging in, we can see the `graphql_app` database was already created. Great!

*`adminer` after logging in*

Congratulations! We have finished setting up PostgreSQL using Docker. It wasn’t hard after all, was it? I told you 😉.
Rewriting DB Class
Now, it’s time to introduce PostgreSQL into our application. That involves updating the `DB` class quite a bit. But first, we need to install `pg` and its type definitions from another package called `@types/pg`:
yarn add pg
yarn add -D @types/pg
Connecting to PostgreSQL
After installing the two packages, we can start swapping the in-memory database with PostgreSQL. First things first, we need to connect to the PostgreSQL instance that we set up earlier. There are two options: create a `Client` or create a connection `Pool`. We will use `Pool` for this project. You can read up on the reason here.
Open `./src/db/index.ts` and change the line that declares the `database` field as follows:
import { Pool } from "pg";

export default class DB {
  database: Pool;
  ...
}
Next, we will use it to create a new connection `Pool` inside the `constructor` method:
export default class DB {
  database: Pool;

  constructor() {
    this.database = new Pool({
      host: "localhost",
      port: 5050,
      user: "postgres",
      password: "postgres",
      database: "graphql_app",
    });
  }
  ...
}
That’ll work, but I don’t want to hard-code the configuration like above. Normally, we would want to be able to change a few settings, so it’s a good idea to let the caller pass the configuration into the `constructor` method. Since we’re using TypeScript, we need to define a type for the database configuration inside `./src/types.ts` as well. The code will look like this:
// ./src/types.ts
export interface DBConfig {
  host: string;
  port: number;
  database: string;
  user: string;
  password: string;
}

// ./src/db/index.ts
import { DBConfig, User } from "../types";

export default class DB {
  database: Pool;

  constructor({ host, port, user, password, database }: DBConfig) {
    this.database = new Pool({
      host,
      port,
      user,
      password,
      database,
    });
  }
  ...
}
Now, in `./src/index.ts`, we can change the initialization of the `DB` class as follows:
const main = (): void => {
  const app = express();
  const db = new DB({
    host: "localhost",
    port: 5050,
    user: "postgres",
    password: "postgres",
    database: "graphql_app",
  });
  ...
};
That’s all we need to do to create a connection to PostgreSQL!
Creating the `users` Table
Next, we need to define a table to store the data in the database. Since we are storing information for the User entity (i.e. model), by convention we need to create a table called `users`. The `users` table will contain the following columns:

- `id`
- `username`
- `password`
- `created_at`
- `last_login`
It looks pretty much like the `User` class, doesn’t it? There’s one little difference, but we’ll get to it later.
(By the way, I won’t go into details about database concepts like `Schema`, `Entity`, etc. in this post. Feel free to look those up if you need to.)
Now, it’s time to write some SQL 😉. Here’s what the query to create the `users` table looks like:
CREATE TABLE IF NOT EXISTS users (
  id serial PRIMARY KEY,
  username VARCHAR(50) UNIQUE NOT NULL,
  password VARCHAR(255) NOT NULL,
  created_at TIMESTAMP NOT NULL,
  last_login TIMESTAMP NOT NULL
)
So the question is: how can we execute that query? Remember the `Pool` object that we have (the `DB` class’s `database` field)? We can execute SQL queries by calling its `query` method. One quick note, though: that method returns a `Promise`, so we need to change `main` into an `async` function:
const main = async (): Promise<void> => {
  const app = express();
  const db = ...
  await db.database.query(`CREATE TABLE IF NOT EXISTS users (
    id serial PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(255) NOT NULL,
    created_at TIMESTAMP NOT NULL,
    last_login TIMESTAMP NOT NULL
  )`);
  ...
};
Alright, great! Now if we take a look at the `adminer` screen, inside the `graphql_app` database, we should see a table called `users` there:

*`graphql_app` database with the `users` table*

Now if we investigate the `users` table by clicking on it, we can see its schema definition (or structure) and, later on, the data that it’s storing.

*`users` table*

Great! So we have successfully run our first SQL query to create the `users` table.
Now you might start to wonder: why can’t we just name the columns the same as the `User` class’s fields (i.e. in camelCase)? In other words, why can’t we write this query instead:
CREATE TABLE IF NOT EXISTS users (
  id serial PRIMARY KEY,
  username VARCHAR(50) UNIQUE NOT NULL,
  password VARCHAR(255) NOT NULL,
  createdAt TIMESTAMP NOT NULL,
  lastLogin TIMESTAMP NOT NULL
)
Okay, if you use that query, then the table would look like this:

*`users` table with camelCase column names*

How did that happen? Well, in short, databases like PostgreSQL fold unquoted identifiers (such as column names) to lowercase automatically, which is why we should stick to the `snake_case` naming convention. But if you really insist on using camelCase for naming, technically you can, by putting double quotes around the identifiers, like this:
CREATE TABLE IF NOT EXISTS users (
  id serial PRIMARY KEY,
  username VARCHAR(50) UNIQUE NOT NULL,
  password VARCHAR(255) NOT NULL,
  "createdAt" TIMESTAMP NOT NULL,
  "lastLogin" TIMESTAMP NOT NULL
)
Again, I don’t recommend doing it that way. We’d better stick to the convention to avoid unwanted behavior, especially if you’re not a database expert (I’m not, either).
Alright, what’s next? We’ve got the connection to the database and we’ve created the `users` table; now we can rewrite all the other methods to get user(s), register a new user, and log a user in.
Rewriting findAll
Let’s start with the easiest: findAll
method. The query to get all the rows from one table is also dead simple, which is the famous SELECT * FROM <table_name>
, where <table_name>
is users
.
Just like when we created the `users` table, we can run any query via the `query` method of the `Pool` object. Let’s execute the query and examine the result:
async findAll(): Promise<User[]> {
  const result = await this.database.query("SELECT * FROM users");
  console.log(result);
}
Before we can actually see something, we need to have users in the table first, don’t we? Since we haven’t re-implemented the `insert` method yet, let’s head to `adminer` to create one user with the following fields:

- username: `testUser`
- password: `$argon2i$v=19$m=4096,t=3,p=1$3FYZSRH2IUYqoQ7BQkCX9Q$FRCjtx/+WOCDnw56/LVZeSkNyyKPKC7hkrVVSK2VcFY` (I hashed it for you; the original password is `testUser`)
- created_at: select `now`
- last_login: select `now`

Hit save and we have one user in the table. Cool!
Now, if we go to the GraphiQL playground and run `getUsers`, we will see a bunch of stuff printed out in the console:
[nodemon] Result {
[nodemon] command: 'SELECT',
[nodemon] rowCount: 1,
[nodemon] oid: null,
[nodemon] rows: [
[nodemon] {
[nodemon] id: 1,
[nodemon] username: 'testUser',
[nodemon] password: '$argon2i$v=19$m=4096,t=3,p=1$3FYZSRH2IUYqoQ7BQkCX9Q$FRCjtx/+WOCDnw56/LVZeSkNyyKPKC7hkrVVSK2VcFY',
[nodemon] created_at: 2020-12-24T16:39:56.343Z,
[nodemon] last_login: 2020-12-24T16:39:56.343Z
[nodemon] }
[nodemon] ],
[nodemon] ...
What we really care about are the `rowCount` and `rows` fields. In order for the `findAll` method to produce the same result as before, which is an array of all users, we can do this:
async findAll(): Promise<User[]> {
  const result = await this.database.query("SELECT * FROM users");
  const { rows } = result;
  return rows.map(
    (row) =>
      new User({
        id: row.id,
        username: row.username,
        password: row.password,
        createdAt: row.created_at,
        lastLogin: row.last_login,
      })
  );
}
Now it’s a working piece of code. But there are three things I want to change:

- We already explicitly called `query` on `database` twice, and we’ll likely call it three more times => we should create a separate method for executing queries.
- If we hover over `result`, it will show `const result: QueryResult<any>` => we want to strictly type it.
- `DB`’s methods should only execute queries and return their results => the resolvers should do the rest (validating and mapping results to response objects, etc.).
Let’s tackle them one by one. First, let’s create a new method named `runQuery` inside the `DB` class. This method will be responsible for executing queries against the database. The return type is inferred from the `result` above.
async runQuery(
  query: string,
  values: (string | Date)[]
): Promise<QueryResult<any>> {
}
We pass in `values` as a parameter. Later on, when we rewrite `insert` or `update`, we will need it. Since our fields are either `string` or `Date`, `values` has the type `(string | Date)[]`.
Let’s execute and return the query’s result. We can also add a few more lines to calculate how much time the query needed to execute.
async runQuery(
  query: string,
  values: (string | Date)[]
): Promise<QueryResult<any>> {
  const start = Date.now(); // new
  const result = await this.database.query(query, values); // new
  const duration = Date.now() - start; // new
  console.log(`Query ${query} executed in ${duration / 1000}s.`); // new
  return result; // new
}
Next, let’s adapt to the new change. Open `./src/index.ts` and update the way we create the `users` table. This query doesn’t require any values, so we can just pass an empty array:
// await db.database.query(`CREATE TABLE IF NOT EXISTS users (
//   id serial PRIMARY KEY,
//   username VARCHAR(50) UNIQUE NOT NULL,
//   password VARCHAR(255) NOT NULL,
//   created_at TIMESTAMP NOT NULL,
//   last_login TIMESTAMP NOT NULL
// )`);
await db.runQuery(
  `CREATE TABLE IF NOT EXISTS users (
    id serial PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(255) NOT NULL,
    created_at TIMESTAMP NOT NULL,
    last_login TIMESTAMP NOT NULL
  )`,
  []
);
Then, we can rewrite the `findAll` method similarly. To make it more generic, `findAll` can accept a parameter called `table` to specify which table to read from:
async findAll(table: string): Promise<User[]> {
  const query = `SELECT * FROM ${table}`;
  const result = await this.runQuery(query, []);
  const { rows } = result;
  return rows.map(
    (row) =>
      new User({
        id: row.id,
        username: row.username,
        password: row.password,
        createdAt: row.created_at,
        lastLogin: row.last_login,
      })
  );
}
We’ve got one down, two to go. Next, let’s make the query result strictly typed. In order to do that, let’s create a new folder called `entities` and a new file called `UserEntity.ts` in it:
mkdir ./src/entities
touch ./src/entities/UserEntity.ts
Let’s fill the new file as follows:
/* eslint-disable camelcase */
export default class UserEntity {
  id: string;
  username: string;
  password: string;
  created_at: string;
  last_login: string;
}
As you can see, it’s basically just a class whose field names are exactly the same as the `users` table’s column names. I intentionally made the table’s column names and the User’s field names different (`created_at` vs `createdAt`, and `last_login` vs `lastLogin`) so that we can see that without using an ORM, we would have to explicitly manage two separate classes (or interfaces). If we count the `User` object type in GraphQL’s schema, that’s three of them to watch for!
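Since this snake_case-to-camelCase mapping will reappear in every resolver, one way to keep it in a single place is a small mapper helper. This is a hypothetical addition, not part of the post’s code; the interfaces and the `rowToUser` name below are minimal stand-ins I made up for illustration:

```typescript
// Hypothetical helper — NOT part of the post's actual code.
// Minimal stand-ins for the real UserEntity (database row) and
// User (application model) classes:
interface UserRow {
  id: string;
  username: string;
  password: string;
  created_at: string;
  last_login: string;
}

interface UserModel {
  id: string;
  username: string;
  password: string;
  createdAt: string;
  lastLogin: string;
}

// Maps one snake_case database row to the camelCase application model.
function rowToUser(row: UserRow): UserModel {
  return {
    id: row.id,
    username: row.username,
    password: row.password,
    createdAt: row.created_at,
    lastLogin: row.last_login,
  };
}
```

With a helper like this, each resolver could call `rowToUser(rows[0])` instead of repeating the five-field mapping by hand.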
So for now, let’s update the `runQuery` method inside the `DB` class:
async runQuery(
  query: string,
  values: (string | Date)[]
): Promise<QueryResult<UserEntity>> {
  const start = Date.now();
  const result = await this.database.query<UserEntity>(query, values);
  const duration = Date.now() - start;
  console.log(`Query ${query} executed in ${duration / 1000}s.`);
  return result;
}
Now if we hover over `result`, we can see that it has the `QueryResult<UserEntity>` type. Cool!
The last thing to do is to update the `getUsers` resolver to process the query result and create the appropriate response, rather than relying entirely on `findAll`.
First, let’s clean up the `findAll` method:
findAll(table: string): Promise<QueryResult<UserEntity>> {
  const query = `SELECT * FROM ${table}`;
  return this.runQuery(query, []);
}
Then, update the `getUsers` resolver:
getUsers: async (_args: null, context: MyContext): Promise<User[]> => {
  const result = await context.db.findAll("users");
  const { rows } = result;
  return rows.map(
    (row) =>
      new User({
        id: row.id,
        username: row.username,
        password: row.password,
        createdAt: row.created_at,
        lastLogin: row.last_login,
      })
  );
},
And we’re done rewriting the `findAll` method.
I know, I know, that was a lot of writing and moving stuff around. That’s why I chose to update `findAll` first: since its logic is so simple, we can have fun making the other updates as well.
Alright, before moving on, let’s give it a quick test:

*`getUsers` after updating `findAll`*

Okay, let’s move on to the next one: `findOne`.
Rewriting findOne
Updating `findAll` laid a good base for us to do the rest. In `findOne`, we just need to change the query slightly to choose the record that matches the criteria, which is the `id` parameter. In SQL, we can achieve that with the `WHERE` keyword. And with this query, we need to pass `values` (an array containing `id`) to the `runQuery` method.
findOne(table: string, id: string): Promise<QueryResult<UserEntity>> {
  const query = `SELECT * FROM ${table} WHERE id=$1`;
  const values = [id];
  return this.runQuery(query, values);
}
That would work, but it may not be the best way to do it. Why? Because we can only pass `id` as the query criterion. What if we instead want to pass `username`, or a combination of the two? We can’t.
So instead of passing `id` specifically, let’s pass in an object that contains all the fields that we want to use as matching criteria:
findOne(
  table: string,
  conditions: { [id: string]: string | Date } // update: id => conditions object
): Promise<QueryResult<UserEntity>> {
Then, what we’ll do is construct the query dynamically based on the `conditions` object. To make it easier, let’s look at what the query looks like if we want both `id` and `username` as matching conditions: `SELECT * FROM users WHERE id=$1 AND username=$2`. So, do you have any idea how to dynamically create that query?
We can do something like this:
const query = `SELECT * FROM ${table} WHERE ${Object.keys(conditions)
  .map((condition, id) => `${condition}=$${id + 1}`)
  .join(" AND ")}`;
For the `values`, it’s much simpler:
const values = Object.values(conditions);
So our new `findOne` will look like below:
findOne(
  table: string,
  conditions: { [id: string]: string | Date }
): Promise<QueryResult<UserEntity>> {
  const query = `SELECT * FROM ${table} WHERE ${Object.keys(conditions)
    .map((condition, id) => `${condition}=$${id + 1}`)
    .join(" AND ")}`;
  const values = Object.values(conditions);
  return this.runQuery(query, values);
}
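To convince ourselves that the `WHERE`-clause construction does what we expect, we can pull the string-building part out and check it on its own. This is a standalone sketch (the `buildFindOneQuery` name is mine, not part of the DB class), using the exact same `Object.keys(...).map(...).join(" AND ")` logic:

```typescript
// Standalone sketch of the WHERE-clause construction used in findOne.
// buildFindOneQuery is a hypothetical name for illustration only.
function buildFindOneQuery(
  table: string,
  conditions: { [key: string]: string | Date }
): string {
  return `SELECT * FROM ${table} WHERE ${Object.keys(conditions)
    .map((condition, id) => `${condition}=$${id + 1}`)
    .join(" AND ")}`;
}

// Two conditions produce placeholders $1 and $2, joined by AND:
const q = buildFindOneQuery("users", { id: "1", username: "testUser" });
// q === "SELECT * FROM users WHERE id=$1 AND username=$2"
```

One thing worth remembering: only the values go through parameterized placeholders; the table name and the condition keys are interpolated directly into the string, so they should always come from our own code, never from user input.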
Okay, cool! Let’s update the `getUser` resolver:
getUser: async (
  { id }: { id: string },
  context: MyContext
): Promise<User | null> => {
  const result = await context.db.findOne("users", { id });
  if (result.rowCount === 0) {
    return null;
  }
  const { rows } = result;
  return new User({
    id: rows[0].id,
    username: rows[0].username,
    password: rows[0].password,
    createdAt: rows[0].created_at,
    lastLogin: rows[0].last_login,
  });
},
We can now test our `getUser` resolver. Hopefully it’ll work.

*`getUser` after updating `findOne`*

It does! Yay! Let’s move on to the next one.
Rewriting insert
Here comes the big boy. But don’t worry, it just requires more steps than the others, so let’s tackle them one by one.
First, let’s change the signature of the method. Here’s what we have now:
async insert(input: {
  username: string;
  password: string;
}): Promise<User | null>
The question to ask is: what should this method do? Right now there’s a lot going on in there: checking if the user already exists, hashing the new password, and then actually inserting the new user into the database. We should only keep the last task and let the `register` resolver take care of the others.
So, the `insert` method should receive a table name (like `findOne` and `findAll`) plus an object to insert into that table. We will talk about how we create that object shortly. The return type is the same as that of the two methods above:
insert(
  table: string,
  newObj: { [id: string]: string | Date }
): Promise<QueryResult<UserEntity>>
Next, we need to create a SQL query to insert one record into the table. In this case, the query would look like this:
INSERT INTO users(username, password, created_at, last_login) VALUES ($1, $2, $3, $4) RETURNING *
Note that we need to add `RETURNING *`, otherwise we won’t get the inserted record back in the query result.
So, here’s how we can dynamically construct that query:
const query = `INSERT INTO ${table}(${Object.keys(newObj).join(
  ", "
)}) VALUES (${Object.keys(newObj)
  .map((_, id) => `$${id + 1}`)
  .join(", ")}) RETURNING *`;
And again, the `values` part is super straightforward:
const values = Object.values(newObj);
Then we can execute the query to get the result back:
return this.runQuery(query, values);
So the new `insert` looks like this:
insert(
  table: string,
  newObj: { [id: string]: string | Date }
): Promise<QueryResult<UserEntity>> {
  const query = `INSERT INTO ${table}(${Object.keys(newObj).join(
    ", "
  )}) VALUES (${Object.keys(newObj)
    .map((_, id) => `$${id + 1}`)
    .join(", ")}) RETURNING *`;
  const values = Object.values(newObj);
  return this.runQuery(query, values);
}
Also, note that `insert` no longer needs the `async` keyword, because we’re already returning a `Promise` and we don’t have to `await` anything inside! Alright, let’s update the `register` resolver, shall we?
We don’t have to change the signature so just keep it as it is:
register: async (
  input: UserInput,
  context: MyContext
): Promise<User | null>
Next, we will check if the inputted `username` already exists. We can do that by calling the `findOne` method above and checking its result. If there is any record that matches, we’ll return `null`:
const { username, password } = input;
const usersWithThatUsername = await context.db.findOne("users", {
  username,
});
if (usersWithThatUsername.rowCount > 0) {
  return null;
}
Then, we will pass a new user object (with the `password` hashed) to the `insert` method we just implemented above:
import argon2 from "argon2";
...
const hashedPassword = await argon2.hash(password);
const result = await context.db.insert("users", {
  username,
  password: hashedPassword,
  created_at: new Date(),
  last_login: new Date(),
});
Finally, we will create a new user from the query result above for the response:
const row = result.rows[0];
const user = new User({
  id: row.id,
  username: row.username,
  password: row.password,
  createdAt: row.created_at,
  lastLogin: row.last_login,
});
return user;
So the whole `register` resolver will look like this:
register: async (
  input: UserInput,
  context: MyContext
): Promise<User | null> => {
  const { username, password } = input;
  const usersWithThatUsername = await context.db.findOne("users", {
    username,
  });
  if (usersWithThatUsername.rowCount > 0) {
    return null;
  }
  const hashedPassword = await argon2.hash(password);
  const result = await context.db.insert("users", {
    username,
    password: hashedPassword,
    created_at: new Date(),
    last_login: new Date(),
  });
  const row = result.rows[0];
  const user = new User({
    id: row.id,
    username: row.username,
    password: row.password,
    createdAt: row.created_at,
    lastLogin: row.last_login,
  });
  return user;
},
That was a big change, wasn’t it? Let’s go ahead and test the new `register` resolver:

*`register` after updating `insert`*

It’s working! Great! We’ve got one more to go: let’s update the `update` method!
Rewriting update
Alright, guys, we’ve reached the last mission. Just as we did with the `insert` method, we need to move out some irrelevant logic; `update` should only take care of updating existing records in the table that match some criteria.
That being said, the first thing we need to do is to change its signature. We need to pass two objects: one contains information on fields to be updated and the other one is for the matching criteria:
update(
  table: string,
  newObj: { [id: string]: string | Date },
  conditions: { [id: string]: string | Date }
): Promise<QueryResult<UserEntity>>
Then again, we want to dynamically create the `update` query. For instance, the query we need to build in this case looks something like this:
UPDATE users SET last_login=$1 WHERE username=$2
As you can see, the tricky part is the `WHERE` clause, whose placeholders must continue counting from the last index used in the `SET` clause, not from `1`. We can create such queries dynamically from `newObj` and `conditions` as follows:
const query = `UPDATE ${table} SET ${Object.entries(newObj)
  .map((entry, id) => `${entry[0]}=$${id + 1}`)
  .join(", ")} WHERE ${Object.keys(conditions)
  .map(
    (condition, id) =>
      `${condition}=$${Object.values(newObj).length + id + 1}`
  )
  .join(" AND ")}`;
The `values` array also requires some modification, since we need to concatenate the values of the two objects. With ES6 spread syntax, we can do that very easily:
const values = [...Object.values(newObj), ...Object.values(conditions)];
Then we can execute the query and return the result:
return this.runQuery(query, values);
So the new `update` method should look like this:
update(
  table: string,
  newObj: { [id: string]: string | Date },
  conditions: { [id: string]: string | Date }
): Promise<QueryResult<UserEntity>> {
  const query = `UPDATE ${table} SET ${Object.entries(newObj)
    .map((entry, id) => `${entry[0]}=$${id + 1}`)
    .join(", ")} WHERE ${Object.keys(conditions)
    .map(
      (condition, id) =>
        `${condition}=$${Object.values(newObj).length + id + 1}`
    )
    .join(" AND ")}`;
  const values = [...Object.values(newObj), ...Object.values(conditions)];
  return this.runQuery(query, values);
}
Great! Next, let’s update the `login` resolver. Again, we don’t have to modify its signature:
login: async (input: UserInput, context: MyContext): Promise<User | null>
First, we will check if the user exists in the database. If they don’t, we’ll return `null`:
const { username, password } = input;
const usersWithThatUsername = await context.db.findOne("users", {
  username,
});
if (usersWithThatUsername.rowCount === 0) {
  return null;
}
Then, we will create a user object from the query result:
const row = usersWithThatUsername.rows[0];
const user = new User({
  id: row.id,
  username: row.username,
  password: row.password,
  createdAt: row.created_at,
  lastLogin: row.last_login,
});
After that, we can verify if the inputted password is correct:
const isPasswordValid = await argon2.verify(user.password, password);
if (!isPasswordValid) {
  return null;
}
Finally, we will update the found record, setting `last_login` to the current timestamp, by calling our new `update` method above:
user.lastLogin = new Date();
await context.db.update(
  "users",
  { last_login: user.lastLogin },
  { username }
);
return user;
Eventually, the new `login` resolver will look like this:
login: async (input: UserInput, context: MyContext): Promise<User | null> => {
  const { username, password } = input;
  const usersWithThatUsername = await context.db.findOne("users", {
    username,
  });
  if (usersWithThatUsername.rowCount === 0) {
    return null;
  }
  const row = usersWithThatUsername.rows[0];
  const user = new User({
    id: row.id,
    username: row.username,
    password: row.password,
    createdAt: row.created_at,
    lastLogin: row.last_login,
  });
  const isPasswordValid = await argon2.verify(user.password, password);
  if (!isPasswordValid) {
    return null;
  }
  user.lastLogin = new Date();
  await context.db.update(
    "users",
    { last_login: user.lastLogin },
    { username }
  );
  return user;
},
Alright, cool! Let’s give it a quick test:

*`login` after updating `update`*

And that’s it! Mission accomplished!
In case you encounter any problems and want to compare with my code, please clone my GitHub repo and check out the branch for this post:
git clone https://github.com/ChunML/vanilla-graphql
cd vanilla-graphql
git checkout 3-adding-database
Conclusions
Congratulations, guys! In today’s long post, we have gone through probably the toughest task of this whole mini-series: adding PostgreSQL to replace the in-memory database. I hope that after this post, you know how to set up any database using `docker` and `docker-compose`, as well as how to construct queries dynamically and execute them via database driver packages like `pg`. More than that, thank you guys so much for sticking around this long. Take good care of yourselves and I will see you in the next post!