Introduction
Are you looking to accelerate your backend development through automated code generation? In this blog post, we will explore how ChatGPT can help expedite your development workflow. We will focus on a specific ecommerce scenario and delve into the details of creating backend services with FastAPI and PostgreSQL. Additionally, we'll cover deploying the database and the APIs to local Docker containers.
You can find the GitHub repository for this blog post at the following location.
https://github.com/yernenip/GPT-FastAPI-PostgreSQL
Technology Selection
Choosing the right technology stack for this exercise proved to be a challenging task. To ensure rapid development and integration, I considered several key criteria:
- Familiarity: It was important to select a technology stack that I was comfortable working with.
- Understandability: I wanted a stack whose generated code would be easy to comprehend, so I could apply fundamentals such as normalization and general design principles.
- Cloud Native: Since deploying and testing on a Docker container was essential, I opted for a cloud-native stack.
- Scalability: Anticipating future scalability requirements, I prioritized a technology stack that could easily accommodate growth.
With these principles in mind, I chose PostgreSQL as the backend database (drawing on my familiarity with MS SQL Server), Psycopg3 for data access, and FastAPI for the backend REST API. FastAPI, known for its ease of use, cloud-native nature, scalability, and compatibility with Python code, also provides out-of-the-box Swagger UI and ReDoc, which I found extremely useful. Here is a screenshot of what my APIs look like in both user interfaces. I love it!
Scenario: Building Backend Services for an Ecommerce Application
In our scenario, we aim to develop backend services for an ecommerce application that sells physical goods. These services will let consuming applications look up products by various criteria, such as product name, SKU, and category. While we'll focus on a small number of APIs for the purposes of this blog post, the broader objective remains the same: leveraging ChatGPT in software engineering. To help visualize the system design, refer to the system diagram below:
A Note about Prompts
Before proceeding, it's important to note that I ran the prompts shared below multiple times, and the output varied slightly on each run. I also made additional tweaks to the prompts to coax out more detail. It's worth mentioning that I used the ChatGPT interface directly to generate the code rather than calling the API. Although the code could have been written manually, I opted to generate it through the interface this time.
Engineering the Backend Database in PostgreSQL
To create the backend database, I used the following prompt:
ChatGPT then generated SQL scripts. You can find the scripts on the GitHub repository.
For illustrative purposes, I transformed these scripts into an Entity Relationship Diagram (ERD), as shown below:
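To make the ERD concrete, here is a minimal sketch of the kind of schema these prompts produced. This is illustrative only: the table and column names are my assumptions, and (as discussed under Key Learnings) the generated tables varied slightly between runs. The actual scripts are on GitHub.

```sql
-- Illustrative sketch of the generated schema (names assumed; real scripts in the repo)
CREATE TABLE categories (
    category_id SERIAL PRIMARY KEY,
    name        VARCHAR(100) NOT NULL UNIQUE
);

CREATE TABLE products (
    product_id  SERIAL PRIMARY KEY,
    sku         VARCHAR(50)  NOT NULL UNIQUE,
    name        VARCHAR(200) NOT NULL,
    description TEXT,
    price       NUMERIC(10, 2) NOT NULL
);

-- Junction table for the many-to-many link between products and categories
CREATE TABLE products_categories (
    product_id  INT NOT NULL REFERENCES products (product_id),
    category_id INT NOT NULL REFERENCES categories (category_id),
    PRIMARY KEY (product_id, category_id)
);
```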
Once the tables were created, I requested ChatGPT to generate the insert scripts. You can find the generated SQL scripts for these prompts on the GitHub repository.
Note: This prompt needs to be run as a continuation of the previous prompt.
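For a sense of what the seed data looked like, here is a hedged sample in the same spirit: the values below are made up for illustration, and the generated insert scripts live in the repository.

```sql
-- Illustrative seed data only; the actual generated inserts are in the repo
INSERT INTO categories (name) VALUES ('Electronics'), ('Books');

INSERT INTO products (sku, name, description, price)
VALUES ('ELEC-0001', 'Wireless Mouse', 'A 2.4 GHz wireless mouse', 24.99),
       ('BOOK-0001', 'PostgreSQL Primer', 'An introductory guide to PostgreSQL', 39.99);

-- Link each product to its category through the junction table
INSERT INTO products_categories (product_id, category_id)
SELECT p.product_id, c.category_id
FROM products p, categories c
WHERE (p.sku, c.name) IN (('ELEC-0001', 'Electronics'), ('BOOK-0001', 'Books'));
```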
Engineering the API and Data Access Layers with FastAPI, Psycopg3, and Pydantic
With the database up and running (verified through pgAdmin), I turned my attention to building APIs that access it. My initial prompts hit a minor setback: out of ignorance, I wrote a prompt that explicitly requested Psycopg3 for the data access layer. ChatGPT generated Psycopg2 code instead and simply changed the import statement to "import psycopg3." I took it at its word and went down the rabbit hole. After some troubleshooting, I realized my mistake and read the documentation to understand the issue. I also discovered that the first stable release of Psycopg3 came out in October 2021, while ChatGPT was trained on data only up to September 2021. To resolve the problem, I manually wrote the code for the Psycopg3 connection pool using a helpful solution I found on Stack Overflow.
Once I resolved the issue, I adjusted my prompt and eventually arrived at the version below, which successfully generated the desired code (available on GitHub):
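To make the final shape concrete, here is a minimal sketch of the pattern I ended up with: a ConnectionPool from psycopg_pool (the Stack Overflow fix mentioned above) feeding a FastAPI endpoint, with a Pydantic model for the response. Note that Psycopg3 is installed as the "psycopg" package and imported as psycopg, not psycopg3, which is exactly the detail ChatGPT got wrong. The table, column, and connection details below are assumptions for illustration; the real code is on GitHub.

```python
from fastapi import FastAPI, HTTPException
from psycopg_pool import ConnectionPool  # pip install "psycopg[binary,pool]"
from pydantic import BaseModel

# Assumed connection string for a local PostgreSQL instance
DATABASE_URL = "postgresql://postgres:postgres@localhost:5432/ecommerce"
pool = ConnectionPool(conninfo=DATABASE_URL)

app = FastAPI()

class Product(BaseModel):
    product_id: int
    sku: str
    name: str
    price: float

@app.get("/products/{sku}", response_model=Product)
def get_product_by_sku(sku: str) -> Product:
    # Borrow a connection from the pool for the duration of the request
    with pool.connection() as conn:
        row = conn.execute(
            "SELECT product_id, sku, name, price FROM products WHERE sku = %s",
            (sku,),
        ).fetchone()
    if row is None:
        raise HTTPException(status_code=404, detail="Product not found")
    return Product(product_id=row[0], sku=row[1], name=row[2], price=float(row[3]))
```

FastAPI then surfaces this endpoint automatically in both Swagger UI (/docs) and ReDoc (/redoc), the out-of-the-box documentation mentioned earlier.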
Deploying to Docker
After thoroughly testing the code, I proceeded to write the deployment code. Since I had been testing and verifying the layers separately, I used ChatGPT to create separate Dockerfiles for the database layer and the API layer, plus a Docker Compose file that deploys them together. Here are the respective prompts for each component:
Create Dockerfile for PostgreSQL deployment
After ChatGPT generated the code, I updated it to point to the right script files to execute.
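For reference, here is a minimal sketch of the shape this Dockerfile took after my edits (the script file names are assumptions; the real ones are in the repo). It leans on the official postgres image's convention of executing any .sql files placed in /docker-entrypoint-initdb.d on first startup:

```dockerfile
# Dockerfile for the PostgreSQL layer (illustrative; the actual file is in the repo)
FROM postgres:15

ENV POSTGRES_DB=ecommerce
ENV POSTGRES_USER=postgres
ENV POSTGRES_PASSWORD=postgres

# Scripts in this directory run automatically on first container startup,
# in lexical order: schema first, then seed data (file names are assumed)
COPY create_tables.sql /docker-entrypoint-initdb.d/01-create_tables.sql
COPY insert_data.sql   /docker-entrypoint-initdb.d/02-insert_data.sql
```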
Create Dockerfile for FastAPI deployment
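A hedged sketch of the API image, assuming the application entry point is main.py exposing an app object, with dependencies listed in requirements.txt:

```dockerfile
# Dockerfile for the FastAPI layer (illustrative; the actual file is in the repo)
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Serve the FastAPI app; the module name "main" and variable "app" are assumptions
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```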
Finally, create the Docker Compose file to automate deployment of both containers
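And a sketch of the Compose file wiring the two containers together. Service names, build paths, and credentials here are assumptions for illustration; the corrected file ChatGPT produced is in the repository.

```yaml
# docker-compose.yml (illustrative; directory layout and names are assumed)
version: "3.8"

services:
  db:
    build: ./database          # Dockerfile for the PostgreSQL layer
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

  api:
    build: ./api               # Dockerfile for the FastAPI layer
    ports:
      - "8000:8000"
    environment:
      # The API reaches the database via the Compose service name "db"
      DATABASE_URL: postgresql://postgres:postgres@db:5432/ecommerce
    depends_on:
      - db                     # starts after db, though it does not wait for readiness
```

With this in place, `docker compose up --build` brings up both containers locally.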
Key Learnings
Throughout this exercise, I gained valuable insights. Here are the key takeaways from this experience:
- Steep Learning Curve: Despite being relatively new to FastAPI and PostgreSQL, I was able to learn and work with them in less than a week. ChatGPT’s code explanations provided clarity and helped me understand how everything came together.
- Easy Code Generation: Once familiar with the process, code generation became smooth and effortless. Copying and pasting generated code, making necessary adjustments, and incorporating it into the application simplified the development process.
- Limitations with the Free Version: Due to word limits (3000 words) and occasional halts in code generation, I often had to request ChatGPT to “continue generating code.” Although this workaround proved effective, it did present some minor inconveniences.
- Beware of AI Hallucinations: It is essential to cross-check and verify code generated by ChatGPT, as it can occasionally produce inaccuracies or discrepancies. In cases like my experience with Psycopg3, where the generated code did not align with the intended outcome, it is crucial to refer to external sources for confirmation.
- Inconsistent Results: The SQL tables generated by ChatGPT varied slightly each time I ran the prompt. For example, in one instance, the Products table included inventory fields (UnitsInStock, UnitsOnOrder, ReorderLevel), which I would have refactored into a separate referential table. Additionally, the creation of relational tables (e.g., products_categories) was inconsistent. I wonder what temperature setting ChatGPT uses; the web interface does not let you override it, but the API does (see the sketch after this list).
- Instructions May Be Ignored: Despite providing specific instructions, there were instances where ChatGPT overlooked them. For example, in the deployment prompt, even though I explicitly mentioned that the Dockerfile for both the API and database had already been created, ChatGPT incorrectly referenced the Dockerfile for the database only. However, reminding ChatGPT of the oversight in subsequent prompts led to the correction of the generated Docker Compose file.
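Following up on the temperature question raised above: the ChatGPT web interface does not expose that setting, but the OpenAI API does. Here is a minimal sketch using the openai Python package as it existed when this post was written (I used the interface rather than the API for this exercise, so this is illustrative only; the model, prompt, and key are placeholders):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# temperature=0 makes the output more deterministic; higher values add variety
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "user", "content": "Generate SQL tables for an ecommerce product catalog."}
    ],
)

print(response.choices[0].message.content)
```

Setting a low temperature this way should make the generated schema far more repeatable across runs.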
Conclusion
In conclusion, this exercise was both educational and enjoyable. I gained hands-on experience with FastAPI and PostgreSQL within a short period, thanks to ChatGPT’s assistance. Leveraging code generation streamlined the development process, allowing for efficient backend development. While some limitations and challenges were encountered along the way, the overall outcome proved highly beneficial. By utilizing ChatGPT effectively, you can enhance your development workflow and achieve greater efficiency in your projects.
Feel free to reach out if you have any questions or need further assistance. Happy coding!