Practical Five
2025-10-16T23:59
Goals
- Convert Film Explorer to use Supabase to store and persist data
- Learn about `await`, `async`, and `useEffect`
Today, you will be converting the standalone version of Film Explorer to use Supabase to load and persist its data.
Prerequisites
Create the git repository for your practical by accepting the assignment from GitHub Classroom. This will create a new repository for you with the Next/React infrastructure in place.
- Clone the repository to your computer with `git clone` (get the name of the repository from GitHub).
- Open up the `package.json` file and add your name as the author of the package.
- Install the module dependencies by typing `pnpm install` in the terminal in the root directory of your package (the directory that contains the `package.json` file).
- Get some practice with our new workflow and start by making a feature branch in your repository.
- Make sure that you have completed the Getting Started steps and have Docker Desktop installed.
Background
This version of Film Explorer, much like Simplepedia, loads its data via an import statement:

```js
import filmData from './films.json';
```

While this works, it is atypical. It can cause long initial load times and, worse, it means the user only works with a copy of the data that lives in their browser, so changes don't persist (try rating a few films and then reloading the page). Your task today is to adapt the standalone Film Explorer to use Supabase to persist its data.
Setting up Supabase
Before you dive into the code, let’s set up Supabase first.
Installation
- Open up Docker Desktop (we are going to run a local instance in a Docker container).
- Install the `supabase` command line tool (consult Getting Started).
- Add the client library with `pnpm add @supabase/supabase-js`.
- Run `supabase init` to initialize the system. This will create a `supabase` directory in your project root. You will be asked if you want to set up settings for Deno for different environments; you can reply "n".
- Start the Supabase server with `supabase start`.
This will take a little bit of time as the tool pulls down all of the resources for the supabase container (which includes a number of sub-containers).
When it is complete, you will see something that looks like this (I’ve replaced the various keys):
```
API URL: http://127.0.0.1:54321
GraphQL URL: http://127.0.0.1:54321/graphql/v1
S3 Storage URL: http://127.0.0.1:54321/storage/v1/s3
MCP URL: http://127.0.0.1:54321/mcp
Database URL: postgresql://postgres:postgres@127.0.0.1:54322/postgres
Studio URL: http://127.0.0.1:54323
Mailpit URL: http://127.0.0.1:54324
Publishable key: sb_publishable_AAAAAAAAAAAAAAAAAAAAAA
Secret key: sb_secret_BBBBBBBBBBBBBBBBBBBBBBBBBBB
S3 Access Key: CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
S3 Secret Key: DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
S3 Region: local
```
These are the values we care about:
- API URL: this is the address your project will connect to
- Studio URL: this URL provides a dashboard interface for interacting with your local supabase instance
- Publishable key: this key can be used by your client to gain access to supabase
- Secret key: this key can be used by your server to gain access to supabase
You can run `supabase status` to display these values again later. They are also available in the dashboard.
As described in lecture, we really want to keep the secret key secret. If someone has access to that key, they can do anything to your database. DO NOT commit this to your git repository, even if your repository is private. Once in there, it can be very difficult to purge since your repo hangs on to its history (that is kind of the point, after all). So, even if you delete the reference, you may someday take the repository public for some reason, and someone will poke around in the history and find your key.
Between this practical, Homework 4, and the project, you will have a collection of different Supabase instances to manage. Probably the best way to manage this is to stop the current instance whenever you switch away from what you are working on:

```
supabase stop
```

If you find you really want to run two instances simultaneously, you can edit the port numbers in `supabase/config.toml` to unused ports before you run `supabase start`.
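For reference, the port settings live in sections like these (these are the default values; the exact layout of your generated `config.toml` may differ slightly between Supabase CLI versions):

```
[api]
port = 54321

[db]
port = 54322

[studio]
port = 54323
```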
The dashboard
Navigate your browser to the address provided as "Studio URL". If you mouse over the icons on the left, you will see the collection of functions you can perform with the dashboard. You will find this very useful for viewing and interacting with your data as your project gets more complex.
.env file
So, we have some secret keys that are essential for connecting to Supabase (and thus for our code to work) but that we don't want in any repository. We also don't want our code dependent on our local development instance of Supabase. This means that we need some way to specify values that our code depends on but that are not part of the code itself. This should sound like adding arguments to a function, but we don't have a function that is being called.
The solution is environment variables. These are values that we set before we start an application that are available to the running code. These are used by all sorts of systems. On Mac and Linux systems, you can see the current environment variables by typing env on the command line. On Windows, you can try typing set into the shell.
There are a number of ways we can set environment variables, but in the name of automating all the things, we want something convenient as well. The standard has become the use of .env files (pronounced "dot env"). We write key=value pairs into these files, one per line. Our tooling will then make sure that these environment variables are available to us. We will frequently add suffixes like .env.local or .env.development. We will also add them to our .gitignore files to make sure they are never committed.
Next.js supports .env files directly, and all variables will be available in the code via process.env.variable, where variable is the name of the environment variable. There is a caveat, however. The variables are only available in the Node environment (i.e., on the server side), which we haven't even started using. They are not available on the client side, because then our environment would be visible to anyone who happened to load our web app. If we want a variable to be passed along to the client side, we have to name it with the prefix NEXT_PUBLIC_. We can put whatever we like after that, but the NEXT_PUBLIC_ must be there 1.
Create a file called .env.local (it is already covered by .gitignore, so it should not get added to your future commits). Add three variables: NEXT_PUBLIC_SUPABASE_URL, NEXT_PUBLIC_SUPABASE_ANON_KEY, and SUPABASE_SECRET_KEY, and set them to the API URL, the Publishable Key and the Secret Key respectively. Your file should look like this (obviously with the values from your running Supabase instance):
```
NEXT_PUBLIC_SUPABASE_URL=http://127.0.0.1:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=sb_publishable_AAAAAAAAAAAAAAAAAAAAAA
SUPABASE_SECRET_KEY=sb_secret_BBBBBBBBBBBBBBBBBBBBBBBBBBB
```

Setting up the database
Migrations
Now that the database is running, we need to provide a schema that will support our data needs. As we discussed in class, we want to specify the database schema using migrations so we can rapidly provision a database on a clean system and can keep track of changes in our version control. There is a way to perform this process by designing the database graphically with the dashboard and then pulling it down into a migration, but we are going to do this one completely by hand since it is fairly basic.
Create a new migration with

```
supabase migration new film-table
```

This will create a new file similar to `supabase/migrations/20251014033858_film-table.sql`. The first part of the file name is a date stamp, which makes it unique as well as allowing us to apply the migrations in order. The file itself is a .sql file, so we are going to fill it with SQL commands.
We are not going to store all of the fields from our original data; we just need the ones that we are already using in our application. We can look at our type definition for a film record to see what those are:
```ts
export interface Film {
  id: number;
  title: string;
  poster_path: string;
  overview: string;
  rating?: number;
  release_date: string;
  vote_average: number;
}
```

Note that we made `rating` optional here because it is not a field in the original data. In the database we will make sure it always has a real value.
The SQL that we need will not be terribly different (other than being SQL):
```sql
create table if not exists films (
  id bigint primary key generated always as identity,
  title text,
  poster_path text,
  overview text,
  rating integer,
  release_date text,
  vote_average float
);
```

This is the simplest translation into the data types available in PostgreSQL – there are many more available.
To apply the migration, run

```
supabase db reset
```

If you read the messages, you will see that this recreates the database from the migration. We don't currently have anything stored (or configured) in there, but if we did, this would wipe it away and replace it with the pristine schema we just put in our migration.
If you go look in the dashboard you will find the database now has the films table.
Seed the database
"Seeding" the database means loading it with some initial data. In most instances, this isn't a step you would perform on your production database: the content should come through use of the site. During development, however, it can be convenient to dump a big set of data into the database at the start to see how the site looks and works as it scales up.
The simplest way to seed the database is to create a new file in the supabase folder called seed.sql. We would then write SQL statements that insert data into the database (if you look at the output of running `supabase db reset`, you will see it warning that it can't find this file). Unfortunately, our data is in a JSON file, and there is a lot of it, so it would not be trivial to get our data into this format. Instead, we will write a script that uses the Supabase API to load the data.
First, we will disable automatic seeding. Open supabase/config.toml. Search for “db.seed”. In that section you should see something that says enabled = true. Change that to enabled = false.
Now create a new file supabase/seed.js. This will be a JavaScript script that we will be responsible for running ourselves. I will walk you through its contents.
First, we get the libraries for interacting with the file system (`fs`) and our tool for making a client to talk to Supabase (`createClient`):

```js
const fs = require('node:fs');
const { createClient } = require('@supabase/supabase-js');
```

What is this "require"?
JavaScript the language was not designed to support modules. Everything just lived in the global namespace. As tooling started to be developed, two standards emerged. The first, CommonJS, was developed to work like other languages, loading the modules in order as they are written in the file. This is the system adopted by Node.js, and it is mostly appropriate for server side applications. This is the system with require. The second, which uses import, was designed to be more flexible and allow dynamic imports that only grab what is needed from a module. This was deemed better for bundling code for the web, so it won out among the tools that were bundling client side code.
When ES6 came out, this latter form was standardized as the ES6 Module System. Unfortunately, Node has been around for a while and there is a lot of legacy code out there, so the team can't just dump everything and use ES6 modules instead. So, for the time being, we just have to get used to using two different styles of import depending on the context our code is running in (a definite crack in the theory of "use the same language on both the server and the client so we can move rapidly between them and even share code"…)
Next, we will create the Supabase client object
```js
// create the supabase client
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.SUPABASE_SECRET_KEY,
);
```

Note that this is using the variables from `.env.local`. In particular, we are using the secret key, which bypasses all of the security.
Now we will write the function that actually transforms our JSON input data and loads it into the database. We are writing this as a function so we can run the data loading as an asynchronous operation without needing to break out the Promise syntax.
```js
async function loadData(supabase, filename) {
  try {
    // Read and parse the JSON file
    const jsonData = JSON.parse(fs.readFileSync(filename, "utf8"));
    console.log(`Successfully read JSON file: ${filename}`);
    console.log(
      `Number of items: ${Array.isArray(jsonData) ? jsonData.length : "Not an array"}`,
    );
    const films = jsonData.map((d) => ({
      title: d.title,
      poster_path: d.poster_path,
      overview: d.overview,
      release_date: d.release_date,
      vote_average: d.vote_average,
      rating: 0,
    }));
    const { error } = await supabase.from("films").insert(films);
    if (error) {
      console.error("Error inserting data:", error);
    } else {
      console.log("Successfully inserted data");
    }
  } catch (error) {
    console.error(`Error reading or parsing JSON file: ${error.message}`);
    process.exit(1);
  }
}
```

The remaining code reads the path to the data from the command line arguments and then calls our function once it is sure that path leads to a real file.
```js
// Get the JSON file path from the first command line argument
const jsonFilePath = process.argv[2];

if (!jsonFilePath) {
  console.error("Error: Please provide a JSON file path as the first argument");
  console.error("Usage: node seed.js <path-to-json-file>");
  process.exit(1);
}

// Check if the file exists
if (!fs.existsSync(jsonFilePath)) {
  console.error(`Error: File not found: ${jsonFilePath}`);
  process.exit(1);
}

loadData(supabase, jsonFilePath);
```

Run the seed script from the root of the project:
```
node --env-file=.env.local supabase/seed.js data/films.json
```

Note how we are loading the variables from the `.env.local` file.
If that all worked, you should be able to see the data in the dashboard. Find the Table Editor tab on the left, and you should be able to see all of the films loaded into the database.
Automate all the things
We now have two separate processes that we need to use to reset the database. They both need to be performed, and in the right order (and we need to remember how to call them).
Add a new script to package.json called db:reset. We will have this perform both steps of our process, so set it equal to "pnpm supabase db reset; node --env-file=.env.local supabase/seed.js data/films.json". Adding the semicolon in there allows us to run it as two separate commands.
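In `package.json`, the new entry sits alongside the existing scripts. The sketch below shows just the `scripts` section (your file will have other scripts in it as well):

```json
{
  "scripts": {
    "db:reset": "pnpm supabase db reset; node --env-file=.env.local supabase/seed.js data/films.json"
  }
}
```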
Run the new script with pnpm run db:reset to make sure it works. You should see it clear the database, apply the migration and then run our seed script.
Database interactions
Now that all of the preliminaries are out of the way, we can start writing some application specific code for interacting with the database.
Setting up the Supabase client
Create the file src/lib/supabase_client.ts (you will need to create the lib folder).
Add the following to the new file:
```ts
import { createClient } from "@supabase/supabase-js";

// Client-side Supabase client (for browser/React components)
export function createSupabaseClient() {
  const SUPABASE_URL = process.env.NEXT_PUBLIC_SUPABASE_URL;
  const SUPABASE_ANON_KEY = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;

  if (!SUPABASE_URL || !SUPABASE_ANON_KEY) {
    throw new Error(
      "Missing NEXT_PUBLIC_SUPABASE_URL or NEXT_PUBLIC_SUPABASE_ANON_KEY environment variables",
    );
  }

  return createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
}

export const supabase = createSupabaseClient();
```

The function in this file is just calling the `createClient` function we saw earlier. This time, however, we are using the Publishable Key. The extra code is doing some basic error checking (which makes TypeScript happy).
The different piece here, however, is that we are exporting a value, not just the function. This is making use of a pattern we haven't encountered before called the singleton pattern. The idea is that the first time a component requests the client, the function will run. After that, any further requests for the client will return the same object without running the function again to create a second client (which works, but causes warnings).
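The pattern itself is independent of Supabase. Here is a minimal runnable sketch, with a hypothetical `createExpensiveClient` standing in for `createClient`:

```javascript
// Hypothetical stand-in for an expensive client constructor.
let timesCreated = 0;
function createExpensiveClient() {
  timesCreated += 1;
  return { id: timesCreated };
}

// A module evaluates this line exactly once, when it is first imported...
const client = createExpensiveClient();

// ...so every consumer that asks for `client` gets the same object back.
const a = client;
const b = client;
console.log(a === b, timesCreated); // → true 1
```

No matter how many files import `client`, the constructor runs only once.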
Creating a thin wrapper
It is common when interacting with a database to write a thin wrapper over the code that actually talks to the database. This gives us flexibility, in that we could swap in a different database without disturbing the main application, and it should make testing easier, since we can mock out the behavior of the database when testing the main application.
Create a new file src/lib/db_functions.ts. Add an import for the Supabase client, and an import for our Film type.
```ts
import { Film } from "@/types/filmExplorerTypes";
import { supabase } from "@/lib/supabase_client";
```

This will be the home of our database functions.
Fetching the films
In the `FilmExplorer` component, you will find this line:

```ts
const [films, setFilms] = useState<Film[]>(filmData);
```

As we have previously discussed, this creates a new piece of state called `films` and initializes it with the contents of our data on the first render. Change this so that it initializes to an empty array, and remove the line that imports `filmData`. If you are running the dev server (go ahead and run it if you are not), you will no longer see any films. All you will see is the "Loading" message.
So the first function we will add to src/lib/db_functions.ts will be a function that fetches all of the films from the database – we will call it fetchAllFilms. Add the following to the file.
```ts
export async function fetchAllFilms() {
}
```

The `export` is there so we can use this function in other files, and the `async` is there because fetching from the database will be an asynchronous operation.
To get all films, we are going to use

```ts
const { error, data } = await supabase.from("films").select("*");
```

In other words, we are just going to grab all of the films.
We are doing this here because the goal of this exercise is just to get a little experience with Supabase and we don’t really have that much data. This is generally not how you should do this in your projects. While there may be moments when it is appropriate to just dump an entire table into memory on the client, there usually is a better way.
We are writing a very thin wrapper, so we want to return both the data and the error. The challenge we have is that TypeScript will get grumpy about the type of the data (by default it is of type `any[]`, and we will treat it as a `Film[]`). There are a couple of solutions to this (Supabase can generate types for us from the schema, which we can tell the client about), but we are going to take a simpler approach. Before we return anything, we will assign the data to a new variable and cast the type:

```ts
const collection = data as Film[];
```

Now you can return an object with fields for `collection` and `error`.
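If you want to sanity-check the shape of the wrapper before wiring it up, here is a runnable sketch. Your real function lives in `src/lib/db_functions.ts` and imports the real client; to keep this example self-contained, a tiny made-up stand-in client replaces that import:

```javascript
// Stand-in for `import { supabase } from "@/lib/supabase_client"` --
// it only mimics the shape of supabase.from("films").select("*").
const supabase = {
  from: () => ({
    select: async () => ({
      data: [{ id: 1, title: "Alien", rating: 0 }],
      error: null,
    }),
  }),
};

// The thin wrapper: grab every film, return both the rows and any error.
async function fetchAllFilms() {
  const { error, data } = await supabase.from("films").select("*");
  const collection = data; // in the TypeScript version: `data as Film[]`
  return { collection, error };
}

fetchAllFilms().then(({ collection, error }) => {
  console.log(collection.length, error); // → 1 null
});
```

The caller always gets back `{ collection, error }` and can decide what to do with each.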
Now, go back to FilmExplorer.tsx. The question is where we are going to call the function we just wrote. We can't put it in as the default value for the films state because it is an asynchronous call. We also can't just put it in the body of the component: it would run every time the component rendered, and every time it ran it would cause the component to render again.
The answer is to use a hook that we haven’t used yet, the useEffect hook. The useEffect hook allows us to execute code in the context of the component, but outside of the render function itself. This is good if we want to update state or if we need to make sure we have a complete render in place before completing an action.
The useEffect function takes two arguments: a function to run and a watch list. The watch list is an array of values; if any of those values ever changes, the useEffect will run again. So, for example, you could add a prop to the list, and if that value ever changed, you could use the new value to, for example, update a piece of state. If you pass an empty array, the effect will run only once. If you leave the array off completely, it will run on every render, but I discourage this. We will use an empty array since we just want to load the data once at the start.
```ts
useEffect(() => {
}, []);
```

Unfortunately, we can't just call `fetchAllFilms` in the `useEffect` directly either, because `useEffect` will not allow its callback to be an asynchronous function. So, what we will do is create a small helper function inside of the `useEffect` and make that an `async` function. Call your function `fetchData()`. It should call `fetchAllFilms`, and if there is good data, it should use `setFilms` to load it into `films`. If there is an error, it should report it to the console. Make sure this is inside of the `useEffect`.
Immediately after your function definition, still inside the useEffect, call your function.
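Putting those instructions together, the effect might look something like this (a sketch, assuming your `fetchAllFilms` wrapper returns `{ collection, error }` as described above):

```ts
useEffect(() => {
  // useEffect callbacks can't be async, so we define an async helper...
  async function fetchData() {
    const { collection, error } = await fetchAllFilms();
    if (error) {
      console.error("Error fetching films:", error);
    } else {
      setFilms(collection);
    }
  }
  // ...and call it immediately
  fetchData();
}, []);
```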
If all has gone well, the version of FilmExplorer running in your browser should now once again be displaying the films.
Update the rating
The second change that we want to make will be to update the ratings on the films.
You will hopefully have already observed the setRating function. This function is called when the user clicks on a star. The steps of this function are:
- find the film being rated based on its id
- create a new film object with the data from the old film and the new rating
- create a new collection of films that substitutes our new film for the original one
- call `setFilms` to set this new array as our new state
This map pattern is a good solution to a common problem when using React. If our state data is an array, we can't just mutate the array – React won't detect it as a change. We need a brand new array. We have talked about using map to create a shallow copy of a list. This is the same idea; we just substitute in our new object for the old one when we get to the film we changed.
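As a standalone illustration of the pattern (with made-up film objects):

```javascript
const films = [
  { id: 1, title: "Alien", rating: 0 },
  { id: 2, title: "Arrival", rating: 0 },
];

const id = 2;
const rating = 5;

// Build a brand new array: each film passes through unchanged,
// except the one being rated, which is replaced by a new object.
const alteredFilms = films.map((film) =>
  film.id === id ? { ...film, rating } : film,
);

console.log(alteredFilms[1].rating); // → 5
console.log(films[1].rating); // → 0 (original array untouched)
console.log(alteredFilms !== films); // → true (React sees a new array)
```

Because `alteredFilms` is a different array object, React knows the state changed and re-renders.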
We just want to add one piece to this function – we want to update the rating for this film in the database.
This gives us the second database function: updateRating(id, rating). Add updateRating to db_functions.ts.
To update data, we need a new Supabase method: `update()`. Much of the rest of the statement will look the same:

```ts
const { error } = await supabase.from("films").update({ rating: rating }).eq("id", id);
```

The argument to the `update()` method is an object holding the values we want to change. We then use the `eq()` method to say which rows to update.
If you leave off the `eq()`, Supabase will happily apply your change to every row… Look very closely before you run functions that can change the data you are persisting in the database.
Finish the function by returning the error.
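Put together, the finished function might look like this (a sketch; your version may differ in details):

```ts
export async function updateRating(id: number, rating: number) {
  const { error } = await supabase
    .from("films")
    .update({ rating: rating })
    .eq("id", id);
  return error;
}
```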
Return to FilmExplorer.tsx. Add the updateRating call before you call setFilms(alteredFilms);. Check the returned error: if there is an error, log it; if not, make the setFilms(alteredFilms); call. You will also need to make this helper function async so that it can handle the await.
If this worked, you should be able to rate movies by clicking on the stars. While you could do that before, you should now find that the rating persists when the page reloads. If you shut down the dev server and start it back up, the rating should still be there. The only thing that will change the rating now is the user clicking a new star or the database being reset.
Finishing up
Make sure the tests are passing (with pnpm test) and there are no linting errors (with pnpm check). Once you have fixed any test or linting errors, add and commit any changes you may have made and then push your changes to GitHub. You should then submit your repository to Gradescope as described here.
Requirements
- Should fetch data from local Supabase instance
- Should update ratings on local Supabase instance
- Pass all tests
- Pass all Biome checks
Recall that the Practical exercises are evaluated as “Satisfactory/Not yet satisfactory”. Your submission will need to implement all of the required functionality (i.e., pass all the tests) to be Satisfactory (2 points).
Footnotes
1. For the full details, consult the documentation.