In this tutorial, you'll learn how to build a CRUD app with FastAPI, GraphQL, and Orator ORM.
Objectives
By the end of this tutorial, you will be able to:
- Explain why you may want to use GraphQL over REST
- Use Orator ORM to interact with a Postgres database
- Describe what Schemas, Mutations, and Queries are in GraphQL
- Integrate GraphQL into a FastAPI app with Graphene
- Test a GraphQL API with Graphene and pytest
Why GraphQL?
(And why GraphQL over traditional REST?)
REST is the de-facto standard for building web APIs. With REST, you expose multiple endpoints and use the HTTP methods GET, POST, PUT, and DELETE for the CRUD operations. Data is gathered by accessing a number of endpoints.
For example, if you wanted to get a particular user's profile info along with their posts and relevant comments, you would need to call four different endpoints:
- `/users/<id>` returns the initial user data
- `/users/<id>/posts` returns all posts for a given user
- `/posts/<post_id>/comments` returns a list of comments per post
- `/users/<id>/comments` returns a list of comments per user
This can result in overfetching since each endpoint will probably return more data than you actually need.
Moreover, since one client's needs can differ greatly from another's, overfetching and underfetching are common with REST.
GraphQL, meanwhile, is a query language for retrieving data from an API. Instead of having multiple endpoints, GraphQL is structured around a single endpoint whose return value is dependent on what the client wants instead of what the endpoint returns.
In GraphQL, you would structure a query like so to obtain a user's profile, posts, and comments:
```graphql
query {
  User(userId: 2) {
    name
    posts {
      title
      comments {
        body
      }
    }
    comments {
      body
    }
  }
}
```
Voila! You get all the data in just one request with no overfetching since we specified exactly what we want.
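Because the response mirrors the shape of the query, you only ever get back the fields you asked for. The result would look something like this (the values are just illustrative):

```json
{
  "data": {
    "User": {
      "name": "John Doe",
      "posts": [
        {
          "title": "My first Post",
          "comments": [{ "body": "Another Comment" }]
        }
      ],
      "comments": [{ "body": "Another Comment" }]
    }
  }
}
```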
FastAPI supports GraphQL via Starlette and Graphene. By default, Starlette executes GraphQL queries in a separate thread pool when your resolvers are regular (non-async) functions, so they won't block the event loop.
Project Setup
Create a directory to hold your project called "fastapi-graphql":
$ mkdir fastapi-graphql
$ cd fastapi-graphql
Create a virtual environment and activate it:
```sh
$ python3.9 -m venv env
$ source env/bin/activate
(env)$
```
Feel free to make use of other virtual environment tools like Poetry or Pipenv.
Create the following files in the "fastapi-graphql" directory:
```
main.py
db.py
requirements.txt
```
Add the following requirements to the requirements.txt file:
```
fastapi==0.61.1
uvicorn==0.12.2
```
Uvicorn is an ASGI (Asynchronous Server Gateway Interface) compatible server that will be used for standing up FastAPI.
Install the dependencies:
(env)$ pip install -r requirements.txt
In the main.py file, add the following lines to kick-start the server:
```python
from fastapi import FastAPI

app = FastAPI()


@app.get('/')
def ping():
    return {'ping': 'pong'}
```
To start the server, open your terminal, navigate to the project directory, and enter the following command:
(env)$ uvicorn main:app --reload
Navigate to http://localhost:8000 in your browser of choice. You should see the response:
{ "ping": "pong" }
You've successfully started up a simple FastAPI server. To see the beautiful docs FastAPI has for us, navigate to http://localhost:8000/docs. For the ReDoc version, navigate to http://localhost:8000/redoc.
Orator ORM
Why Orator?
According to the docs, Orator provides a simple ActiveRecord implementation for working with your databases. Each database table has a corresponding model that's used to interact with that table.
It resembles other popular ActiveRecord implementations like Django's ORM, Laravel's Eloquent, AdonisJs' Lucid, and Active Record in Ruby on Rails. With support for MySQL, Postgres, and SQLite, it emphasizes convention over configuration, which makes it easy to create models since you don't have to explicitly define every single aspect. Relationships are easy to handle as well.
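To get a feel for the convention-over-configuration approach, here's a minimal standalone sketch (not part of this tutorial's code; the SQLite config and the query are just for illustration):

```python
# A minimal, standalone sketch of Orator's convention-over-configuration
# style. The "users" table name is inferred from the User class name,
# so no explicit table or column mapping is required.
from orator import DatabaseManager, Model

config = {"sqlite": {"driver": "sqlite", "database": "example.db"}}  # example config
Model.set_connection_resolver(DatabaseManager(config))


class User(Model):
    pass


users = User.all()                            # SELECT * FROM "users"
johns = User.where("name", "John Doe").get()  # fluent query builder
```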
Postgres
Next, download, install, and run Postgres on your system.
Add the appropriate dependencies to your requirements.txt file:
```
orator==0.9.9
psycopg2-binary==2.8.6
```
Install:
(env)$ pip install -r requirements.txt
Add the database config to the db.py file:
```python
from orator import DatabaseManager, Schema, Model

DATABASES = {
    "postgres": {
        "driver": "postgres",
        "host": "localhost",
        "database": "db_name",
        "user": "db_user",
        "password": "db_password",
        "prefix": "",
        "port": 5432
    }
}

db = DatabaseManager(DATABASES)
schema = Schema(db)
Model.set_connection_resolver(db)
```
Make sure to update the `db_name`, `db_user`, and `db_password`.
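If you'd rather not hard-code credentials, one option is to read them from environment variables instead. A quick sketch, assuming variable names like DB_HOST and DB_NAME (these aren't used elsewhere in this tutorial):

```python
# Optional: pull the connection details from environment variables
# instead of hard-coding them (the variable names here are examples).
import os

DATABASES = {
    "postgres": {
        "driver": "postgres",
        "host": os.getenv("DB_HOST", "localhost"),
        "database": os.getenv("DB_NAME", "db_name"),
        "user": os.getenv("DB_USER", "db_user"),
        "password": os.getenv("DB_PASSWORD", "db_password"),
        "prefix": "",
        "port": int(os.getenv("DB_PORT", "5432")),
    }
}
```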
Models
Next, let's create models for users, posts, and comments.
Starting with users, create the model and the corresponding migration:
(env)$ orator make:model User -m
The `-m` flag creates a migration file. It does not apply it to the database, though.
You should see something similar to:
```
Model User successfully created.
Created migration: 2020_10_26_191507_create_users_table.py
```
If all goes well, Orator will automatically create "models" and "migrations" directories in your project's root directory:
```
├── db.py
├── main.py
├── migrations
│   ├── 2020_10_26_191507_create_users_table.py
│   └── __init__.py
├── models
│   ├── __init__.py
│   └── user.py
└── requirements.txt
```
For the user, let's add the following details:
- Name
- Address
- Phone Number
- Sex
Take note of the migration file. You should see the following:
```python
from orator.migrations import Migration


class CreateUsersTable(Migration):

    def up(self):
        """
        Run the migrations.
        """
        with self.schema.create('users') as table:
            table.increments('id')
            table.timestamps()

    def down(self):
        """
        Revert the migrations.
        """
        self.schema.drop('users')
```
We just need to concern ourselves with the `up` method. So, add the following lines immediately after the `table.increments('id')` line:
```python
table.string('name')
table.text('address')
table.string('phone_number', 11)
table.enum('sex', ['male', 'female'])
```
Now your migration file should look like this:
```python
from orator.migrations import Migration


class CreateUsersTable(Migration):

    def up(self):
        """
        Run the migrations.
        """
        with self.schema.create('users') as table:
            table.increments('id')
            table.string('name')
            table.text('address')
            table.string('phone_number', 11)
            table.enum('sex', ['male', 'female'])
            table.timestamps()

    def down(self):
        """
        Revert the migrations.
        """
        self.schema.drop('users')
```
Create the model and migration files for `Post` and `Comments`:
```sh
(env)$ orator make:model Post -m
(env)$ orator make:model Comments -m
```
Next, we need to update the files.
posts:
```python
from orator.migrations import Migration


class CreatePostsTable(Migration):

    def up(self):
        """
        Run the migrations.
        """
        with self.schema.create('posts') as table:
            table.increments('id')
            table.integer('user_id').unsigned()
            table.foreign('user_id').references('id').on('users')
            table.string('title')
            table.text('body')
            table.timestamps()

    def down(self):
        """
        Revert the migrations.
        """
        self.schema.drop('posts')
```
Take note of:
```python
table.integer('user_id').unsigned()
table.foreign('user_id').references('id').on('users')
```
This will create a foreign key: the `user_id` column references the `id` column on the `users` table.
comments:
```python
from orator.migrations import Migration


class CreateCommentsTable(Migration):

    def up(self):
        """
        Run the migrations.
        """
        with self.schema.create('comments') as table:
            table.increments('id')
            table.integer('user_id').unsigned().nullable()
            table.foreign('user_id').references('id').on('users')
            table.integer('post_id').unsigned().nullable()
            table.foreign('post_id').references('id').on('posts')
            table.text('body')
            table.timestamps()

    def down(self):
        """
        Revert the migrations.
        """
        self.schema.drop('comments')
```
To apply the migration, run:
$ orator migrate -c db.py
This should create `users`, `posts`, and `comments` tables in the database:
```
Are you sure you want to proceed with the migration? (yes/no) [no] y
Migration table created successfully
[OK] Migrated 2020_10_26_191507_create_users_table
[OK] Migrated 2020_10_26_192018_create_posts_table
[OK] Migrated 2020_10_26_192024_create_comments_table
```
If you see a configuration error message, make sure the database details in your db.py file are correct.
Next, to ensure our models use the config specified in db.py, update the user.py file in the "models" directory:
```python
from orator.orm import has_many

from db import Model


class User(Model):

    @has_many
    def posts(self):
        from .post import Post
        return Post

    @has_many
    def comments(self):
        from .comment import Comments
        return Comments
```
The `has_many` decorator specifies a one-to-many relationship. Here, we're saying that a user has a one-to-many relationship with both posts and comments.
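As a quick illustration of what the relationship gives you, here's a small sketch (it assumes a user with ID 1 and some related posts already exist in the database):

```python
# Rough sketch of traversing the relationship. Accessing the decorated
# method as a dynamic property returns the related records, while calling
# it (user.posts()) returns the underlying relationship query instead.
from models.user import User

user = User.find(1)

for post in user.posts:
    print(post.title)
```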
Update the `Post` and `Comments` models as well.
post.py:
```python
from orator.orm import has_many

from db import Model


class Post(Model):

    @has_many
    def comments(self):
        from .comment import Comments
        return Comments
```
comment.py:
```python
from db import Model


class Comments(Model):
    pass
```
Graphene
To make use of GraphQL, we'll need to first install Graphene, a Python library that allows us to build GraphQL APIs.
Add it to your requirements.txt file:
graphene==2.1.8
Install:
(env)$ pip install -r requirements.txt
Schema
A Schema is the building block of every GraphQL application. It serves as the core of the application, gluing together all the other parts, like Mutations and Queries.
Create a new file called schema.py in the project root:
```python
import graphene


class Query(graphene.ObjectType):
    say_hello = graphene.String(name=graphene.String(default_value='Test Driven'))

    @staticmethod
    def resolve_say_hello(parent, info, name):
        return f'Hello {name}'
```
Update main.py:
```python
import graphene
from fastapi import FastAPI
from starlette.graphql import GraphQLApp

from schema import Query

app = FastAPI()

app.add_route('/graphql', GraphQLApp(schema=graphene.Schema(query=Query)))


@app.get('/')
def ping():
    return {'ping': 'pong'}
```
Note that we passed in Starlette's GraphQLApp, which ties the Schema to the route.
Start up your server:
(env)$ uvicorn main:app --reload
Navigate to http://localhost:8000/graphql. You should see the interactive GraphiQL interface.
Type in a quick query to make sure all is well:
```graphql
query {
  sayHello(name: "Michael")
}
```
You should see:
{ "data": { "sayHello": "Hello Michael" } }
With that, let's add the basic CRUD operations for creating, reading, updating, and deleting users from the database.
Validation with pydantic
We can still use pydantic Models for validation with graphene-pydantic.
Add the dependency:
graphene-pydantic==0.1.0
Install:
(env)$ pip install -r requirements.txt
Create the base pydantic Model along with the input and output objects in a new file called serializers.py:
```python
from typing import List, Optional

from graphene_pydantic import PydanticInputObjectType, PydanticObjectType
from pydantic import BaseModel


class CommentsModel(BaseModel):
    id: int
    user_id: int
    post_id: int
    body: str


class PostModel(BaseModel):
    id: int
    user_id: int
    title: str
    body: str
    comments: Optional[List[CommentsModel]]


class UserModel(BaseModel):
    id: int
    name: str
    address: str
    phone_number: str
    sex: str
    posts: Optional[List[PostModel]]
    comments: Optional[List[CommentsModel]]


class CommentGrapheneModel(PydanticObjectType):
    class Meta:
        model = CommentsModel


class PostGrapheneModel(PydanticObjectType):
    class Meta:
        model = PostModel


class UserGrapheneModel(PydanticObjectType):
    class Meta:
        model = UserModel


class CommentGrapheneInputModel(PydanticInputObjectType):
    class Meta:
        model = CommentsModel
        exclude_fields = ('id',)


class PostGrapheneInputModel(PydanticInputObjectType):
    class Meta:
        model = PostModel
        exclude_fields = ('id', 'comments')


class UserGrapheneInputModel(PydanticInputObjectType):
    class Meta:
        model = UserModel
        exclude_fields = ('id', 'posts', 'comments')
```
The `PydanticInputObjectType` and `PydanticObjectType` classes tie the inputs and outputs, respectively, to the pydantic models, ensuring that they follow the `UserModel`, `PostModel`, and `CommentsModel` definitions.
Take note of the `exclude_fields` in each `Meta` class. In each, we removed the autogenerated `id` from the validation.
Mutations
Mutations are used in GraphQL to modify data. So, they are used for creating, updating, and deleting data.
Let's use a mutation to create `User`, `Post`, and `Comment` objects and save them in the database.
Add the following code to the schema.py file:
```python
import graphene

from serializers import (
    UserGrapheneInputModel,
    UserGrapheneModel,
    PostGrapheneInputModel,
    PostGrapheneModel,
    CommentGrapheneInputModel,
    CommentGrapheneModel,
)
from models.comment import Comments
from models.post import Post
from models.user import User


class Query(graphene.ObjectType):
    say_hello = graphene.String(name=graphene.String(default_value='Test Driven'))

    @staticmethod
    def resolve_say_hello(parent, info, name):
        return f'Hello {name}'


class CreateUser(graphene.Mutation):
    class Arguments:
        user_details = UserGrapheneInputModel()

    Output = UserGrapheneModel

    @staticmethod
    def mutate(parent, info, user_details):
        user = User()
        user.name = user_details.name
        user.address = user_details.address
        user.phone_number = user_details.phone_number
        user.sex = user_details.sex
        user.save()
        return user


class CreatePost(graphene.Mutation):
    class Arguments:
        post_details = PostGrapheneInputModel()

    Output = PostGrapheneModel

    @staticmethod
    def mutate(parent, info, post_details):
        user = User.find_or_fail(post_details.user_id)
        post = Post()
        post.title = post_details.title
        post.body = post_details.body
        user.posts().save(post)
        return post


class CreateComment(graphene.Mutation):
    class Arguments:
        comment_details = CommentGrapheneInputModel()

    Output = CommentGrapheneModel

    @staticmethod
    def mutate(parent, info, comment_details):
        user = User.find_or_fail(comment_details.user_id)
        post = Post.find_or_fail(comment_details.post_id)
        comment = Comments()
        comment.body = comment_details.body
        user.comments().save(comment)
        post.comments().save(comment)
        return comment


class Mutation(graphene.ObjectType):
    create_user = CreateUser.Field()
    create_post = CreatePost.Field()
    create_comment = CreateComment.Field()
```
For each of the classes -- `CreateUser`, `CreatePost`, and `CreateComment` -- we defined a `mutate` method, which is applied when the mutation is called. This is required.
Update your main.py file as well:
```python
import graphene
from fastapi import FastAPI
from starlette.graphql import GraphQLApp

from schema import Query, Mutation

app = FastAPI()

app.add_route('/graphql', GraphQLApp(schema=graphene.Schema(query=Query, mutation=Mutation)))


@app.get('/')
def ping():
    return {'ping': 'pong'}
```
Fire up Uvicorn again. Reload your browser and, within GraphiQL at http://localhost:8000/graphql, execute the `createUser` mutation:
```graphql
mutation createUser {
  createUser(userDetails: {
    name: "John Doe",
    address: "Some address",
    phoneNumber: "12345678",
    sex: "male"
  }) {
    id
    name
    posts {
      body
      comments {
        body
      }
    }
  }
}
```
A user object should be returned to you:
{ "data": { "createUser": { "id": 1, "name": "John Doe", "posts": [] } } }
Next, execute the `createPost` mutation to create a new post:
```graphql
mutation createPost {
  createPost(postDetails: {
    userId: 2,
    title: "My first Post",
    body: "This is a Post about myself"
  }) {
    id
  }
}
```
You should see:
{ "data": { "createPost": { "id": 1 } } }
Finally, execute the `createComment` mutation to create a new comment:
```graphql
mutation createComment {
  createComment(commentDetails: {
    userId: 2,
    postId: 1,
    body: "Another Comment"
  }) {
    id
    body
  }
}
```
Queries
To retrieve data, as a list or a single object, GraphQL provides us with a Query.
Update the `Query` class in schema.py:
```python
class Query(graphene.ObjectType):
    say_hello = graphene.String(name=graphene.String(default_value='Test Driven'))
    list_users = graphene.List(UserGrapheneModel)

    @staticmethod
    def resolve_say_hello(parent, info, name):
        return f'Hello {name}'

    @staticmethod
    def resolve_list_users(parent, info):
        return User.all()
```
Back in GraphiQL, execute the following query to return a list of users:
```graphql
query getAllUsers {
  listUsers {
    id
    name
    posts {
      title
    }
  }
}
```
Results:
{ "data": { "listUsers": [ { "id": 2, "name": "John Doe", "posts": [ { "title": "My first Post" } ] } ] } }
To retrieve a single user, update the `Query` class again:
```python
class Query(graphene.ObjectType):
    say_hello = graphene.String(name=graphene.String(default_value='Test Driven'))
    list_users = graphene.List(UserGrapheneModel)
    get_single_user = graphene.Field(UserGrapheneModel, user_id=graphene.NonNull(graphene.Int))

    @staticmethod
    def resolve_say_hello(parent, info, name):
        return f'Hello {name}'

    @staticmethod
    def resolve_list_users(parent, info):
        return User.all()

    @staticmethod
    def resolve_get_single_user(parent, info, user_id):
        return User.find_or_fail(user_id)
```
Orator has a built-in `find_or_fail` function, which will raise an exception if an invalid ID is passed to it.
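To see how that differs from the plain `find` method, here's a small sketch (assuming the model imports from earlier; the ID is just an example of one that doesn't exist):

```python
# find() quietly returns None for a missing ID, while find_or_fail()
# raises an exception, which is what GraphQL surfaces as an error below.
from models.user import User

user = User.find(5999)          # -> None
user = User.find_or_fail(5999)  # -> raises an exception
```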
Let's run the `getSingleUser` query with a correct ID:
```graphql
query getUser {
  getSingleUser(userId: 2) {
    name
    posts {
      title
      comments {
        body
      }
    }
    comments {
      body
    }
  }
}
```
The query is expected to return a list of posts which in turn should contain a list of comments for every post object:
{ "data": { "getSingleUser": { "name": "John Doe", "posts": [ { "title": "My first Post", "comments": [ { "body": "Another Comment" }, { "body": "Another Comment" } ] } ], "comments": [ { "body": "Another Comment" }, { "body": "Another Comment" } ] } } }
If you don't need the posts or comments, you can simply remove those blocks from the query:
```graphql
query getUser {
  getSingleUser(userId: 2) {
    name
  }
}
```
Results:
{ "data": { "getSingleUser": { "name": "John Doe" } } }
Try it with an incorrect user ID:
```graphql
query getUser {
  getSingleUser(userId: 5999) {
    name
  }
}
```
It should return an error:
{ "data": { "getSingleUser": null }, "errors": [ { "message": "No query results found for model [User]", "locations": [ { "line": 2, "column": 3 } ], "path": [ "getSingleUser" ] } ] }
Notice how the exception is surfaced as a message in the errors list.
Tests
Graphene provides a test client for testing a Graphene app with dummy GraphQL queries.
We'll be using pytest, so add the dependency to your requirements file:
pytest==6.1.1
Install:
(env)$ pip install -r requirements.txt
Next, create a "test" folder, and in that folder add a conftest.py file:
```python
import pytest
from orator import DatabaseManager, Model, Schema
from orator.migrations import DatabaseMigrationRepository, Migrator


@pytest.fixture(autouse=True)
def setup_database():
    DATABASES = {
        "sqlite": {
            "driver": "sqlite",
            "database": "test.db"
        }
    }

    db = DatabaseManager(DATABASES)
    Schema(db)
    Model.set_connection_resolver(db)

    repository = DatabaseMigrationRepository(db, "migrations")
    migrator = Migrator(repository, db)

    if not repository.repository_exists():
        repository.create_repository()

    migrator.reset("migrations")
    migrator.run("migrations")
```
Here, we defined our test database settings and applied the migrations. We used an autouse fixture so that it's invoked automatically before each test, ensuring that a clean version of the database is used for every test.
Add another fixture for creating a Graphene test client:
@pytest.fixture(scope="module") def client(): client = Client(schema=graphene.Schema(query=Query, mutation=Mutation)) return client
Add the appropriate imports:
```python
import graphene
from graphene.test import Client

from schema import Query, Mutation
```
Next, add fixtures for creating a user, post, and comments:
@pytest.fixture(scope="function") def user(): user = User() user.name = "John Doe" user.address = "United States of Nigeria" user.phone_number = 123456789 user.sex = "male" user.save() return user @pytest.fixture(scope="function") def post(user): post = Post() post.title = "Test Title" post.body = "this is the post body and can be as long as possible" user.posts().save(post) return post @pytest.fixture(scope="function") def comment(user, post): comment = Comments() comment.body = "This is a comment body" user.comments().save(comment) post.comments().save(comment) return comment
Don't forget the model imports:
```python
from models.comment import Comments
from models.post import Post
from models.user import User
```
Your conftest.py file should now look like this:
```python
import graphene
import pytest
from graphene.test import Client
from orator import DatabaseManager, Model, Schema
from orator.migrations import DatabaseMigrationRepository, Migrator

from models.comment import Comments
from models.post import Post
from models.user import User
from schema import Query, Mutation


@pytest.fixture(autouse=True)
def setup_database():
    DATABASES = {
        "sqlite": {
            "driver": "sqlite",
            "database": "test.db"
        }
    }

    db = DatabaseManager(DATABASES)
    Schema(db)
    Model.set_connection_resolver(db)

    repository = DatabaseMigrationRepository(db, "migrations")
    migrator = Migrator(repository, db)

    if not repository.repository_exists():
        repository.create_repository()

    migrator.reset("migrations")
    migrator.run("migrations")


@pytest.fixture(scope="module")
def client():
    client = Client(schema=graphene.Schema(query=Query, mutation=Mutation))
    return client


@pytest.fixture(scope="function")
def user():
    user = User()
    user.name = "John Doe"
    user.address = "United States of Nigeria"
    user.phone_number = 123456789
    user.sex = "male"
    user.save()
    return user


@pytest.fixture(scope="function")
def post(user):
    post = Post()
    post.title = "Test Title"
    post.body = "this is the post body and can be as long as possible"
    user.posts().save(post)
    return post


@pytest.fixture(scope="function")
def comment(user, post):
    comment = Comments()
    comment.body = "This is a comment body"
    user.comments().save(comment)
    post.comments().save(comment)
    return comment
```
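Optionally, you could also clean up the SQLite file once the whole test run finishes. Here's a small sketch (the file name matches the one used in setup_database above):

```python
import os

import pytest


@pytest.fixture(scope="session", autouse=True)
def remove_test_db():
    # Run the tests first, then delete the SQLite file created by
    # setup_database so every run starts from scratch.
    yield
    if os.path.exists("test.db"):
        os.remove("test.db")
```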
Now, we can start adding some tests.
Create a test file called test_query.py, and add the following tests for the `User` model:
```python
def test_create_user(client):
    query = """
        mutation {
            createUser(userDetails: {
                name: "Test User",
                sex: "male",
                address: "My Address",
                phoneNumber: "123456789",
            }) {
                id
                name
                address
            }
        }
    """
    result = client.execute(query)
    assert result['data']['createUser']['id'] == 1
    assert result['data']['createUser']['name'] == "Test User"


def test_get_user_list(client, user):
    query = """
        query {
            listUsers {
                name
                address
            }
        }
    """
    result = client.execute(query)
    assert type(result['data']['listUsers']) == list


def test_get_single_user(client, user):
    query = """
        query {
            getSingleUser(userId: %s){
                address
            }
        }
    """ % user.id
    result = client.execute(query)
    assert result['data']['getSingleUser'] is not None
    assert result['data']['getSingleUser']['address'] == user.address
```
Run the tests with pytest using the following command:
(env)$ python -m pytest -s test
This should execute all the tests. They should all pass:
```
================================= test session starts =================================
platform darwin -- Python 3.9.0, pytest-6.1.1, py-1.9.0, pluggy-0.13.1
rootdir: /Users/michael/repos/testdriven/fastapi-graphql
collected 3 items

test/test_query.py ...

================================== 3 passed in 0.05s ==================================
```
Following the same concept, write tests for `Post` and `Comment` on your own.
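As a starting point, a test for the `createPost` mutation might look something like this (a sketch that reuses the `client` and `user` fixtures; adjust the fields and assertions as you see fit):

```python
# A possible starting point for the Post tests, following the same
# pattern as the user tests above (assertions are just examples).
def test_create_post(client, user):
    query = """
        mutation {
            createPost(postDetails: {
                userId: %s,
                title: "Test Title",
                body: "Test body"
            }) {
                id
                title
            }
        }
    """ % user.id
    result = client.execute(query)
    assert result['data']['createPost']['title'] == "Test Title"
```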
Conclusion
In this tutorial, we covered how to develop and test a GraphQL API with FastAPI, Orator ORM, and pytest.
We looked at how to wire up Orator ORM and Postgres along with FastAPI and GraphQL via Graphene. We also covered how to create GraphQL Schemas, Queries, and Mutations. Finally, we tested our GraphQL API with pytest.
Grab the code from the fastapi-graphql repo.