Optimizing REST API calls
Robert Cooper, Senior Engineer at Basedash
· June 11, 2021
Recently, we refactored our codebase at Basedash to fetch our server data with React Query and optimize our REST API calls in the process. The transition to React Query allowed for better code readability, and the optimization of our API calls resulted in half the number of data-fetching API calls and a 3x reduction in the amount of data loaded on initial page load.
This post describes what prompted the move to React Query and the optimizations that were made to our REST API calls and routes in the process.
Our data-fetching logic was hard to follow, and that led to a lot of bugs. We were using Redux and Redux thunks to coordinate fetching server data and storing it in our Redux store. The following pattern was commonly used to fetch data:
- Call useEffect to trigger data fetching.
- Read the loading state with useSelector and render a spinner accordingly.
- Read the fetched data with useSelector and update the UI accordingly.

💡 If you follow the above pattern, check out createAsyncThunk from Redux Toolkit. It dispatches pending, fulfilled, and rejected actions for you. You only need to write the data-fetching and reject logic.
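For reference, a createAsyncThunk version of that pattern might look like the following sketch (the endpoint, slice name, and state shape are illustrative, not our actual code):

```typescript
import { createAsyncThunk, createSlice } from "@reduxjs/toolkit";

// Illustrative thunk: createAsyncThunk dispatches pending/fulfilled/rejected
// actions automatically; we only write the fetch and reject logic.
export const fetchTable = createAsyncThunk(
  "table/fetch",
  async (tableId: string, { rejectWithValue }) => {
    const response = await fetch(`/api/tables/${tableId}`);
    if (!response.ok) return rejectWithValue(await response.json());
    return response.json();
  }
);

const tableSlice = createSlice({
  name: "table",
  initialState: { data: null as unknown, status: "idle" },
  reducers: {},
  extraReducers: (builder) => {
    builder
      .addCase(fetchTable.pending, (state) => {
        state.status = "loading";
      })
      .addCase(fetchTable.fulfilled, (state, action) => {
        state.status = "succeeded";
        state.data = action.payload;
      })
      .addCase(fetchTable.rejected, (state) => {
        state.status = "failed";
      });
  },
});
```

Components then only need a useSelector on `status` and `data`, rather than hand-written loading and error actions.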
Things got more complicated if some API calls needed to happen before others. We were also making a lot of API calls on initial page load to get all the data a page needed, but in many cases it did not make sense to keep those calls split up.
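For comparison, React Query expresses that ordering with the `enabled` option, so a query only runs once the data it depends on is available. A sketch with illustrative endpoints and names:

```typescript
import { useQuery } from "react-query";

// Illustrative component: the records query depends on the table query.
const TableView = ({ tableId }: { tableId: string }) => {
  // Fetch the table metadata first.
  const { data: table } = useQuery(["table", tableId], () =>
    fetch(`/api/tables/${tableId}`).then((res) => res.json())
  );

  // Only fetch records once the metadata has arrived.
  const { data: records } = useQuery(
    ["records", tableId],
    () =>
      fetch(`/api/tables/${tableId}/records`, { method: "POST" }).then((res) =>
        res.json()
      ),
    { enabled: !!table }
  );

  // ...render with `table` and `records`...
  return null;
};
```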
We decided to take a shot at using React Query for our data-fetching needs since it seemed to have a nice API for data fetching, and we had been hearing good things about how React Query makes it easy to keep server data in sync on the client side.
While migrating to React Query, we also decided to optimize the number of API calls we made and prevent sending unnecessary data from the server when it was not needed for the UI.
Refactoring to React Query started by analyzing the API calls for existing pages and defining what the optimal data-fetching flow should be.
For one Basedash page, we had 23 data-fetching API calls. Some of those calls requested data that the page UI did not require (for example, billing information and user activities used elsewhere). Some of this data was saved in our normalized Redux store, which we could leverage to save API calls later when that data was needed.
One table view was built from four separate API calls:
- columns: GET request to fetch all columns for the table
- foreign-keys: GET request to fetch all foreign keys for the table
- enum-values: GET request to fetch enum values for enum-type columns
- records: POST request to fetch table records

We found that the first three calls could be combined. We reworked the API so the table data was fetched through:
- table: GET request to fetch columns, foreign keys, and enum values
- records: POST request to fetch table records

Following this same process, we combined routes that could logically live together. We also mapped which API calls were needed on which pages so we could avoid fetching data that was not necessary for the current page.
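To make the combination concrete, here is a hypothetical shape for the merged `table` response (the field names are illustrative, not our actual API):

```typescript
// One response now carries what previously took three separate GET requests.
type FetchTableResponse = {
  columns: { name: string; type: string }[];
  foreignKeys: {
    column: string;
    referencesTable: string;
    referencesColumn: string;
  }[];
  enumValues: Record<string, string[]>; // keyed by enum column name
};

const example: FetchTableResponse = {
  columns: [
    { name: "id", type: "integer" },
    { name: "status", type: "enum" },
  ],
  foreignKeys: [
    { column: "user_id", referencesTable: "users", referencesColumn: "id" },
  ],
  enumValues: { status: ["active", "archived"] },
};
```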
ℹ It is not always a bad idea to fetch data not used on the current page if you plan to cache it and avoid API calls later. This is especially true for data users are highly likely to request during their session. You can use React.lazy or react-loadable to preload pages and components.
Also look out for the prefers-reduced-data media query, which can help you decide whether to preload data while still respecting a user’s preference for reduced data usage.
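A guarded check for that media query might look like the sketch below (`queryClient`, `params`, and `queryFn` are illustrative; the media query is still experimental, and in unsupported browsers it simply never matches, so this falls back to "no preference"):

```typescript
// `prefers-reduced-data` is experimental; unsupported browsers report no match.
const prefersReducedData =
  typeof window !== "undefined" &&
  window.matchMedia("(prefers-reduced-data: reduce)").matches;

if (!prefersReducedData) {
  // Illustrative: prefetch data the user is likely to need next.
  queryClient.prefetchQuery(["table", params], queryFn);
}
```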
Most of our React Query code uses the useQuery and useMutation hooks. Since those hooks are reused across many components, we created custom hooks that wrap useQuery and useMutation and strongly type the params, options, errors, and data.
Here is an example custom hook:
export const useApiTable = (
  params: FetchTableParams,
  options?: UseQueryOptions<
    FetchTableResponse,
    ApiError,
    FetchTableResponse,
    [string, FetchTableParams]
  >
) =>
  useQuery<
    FetchTableResponse,
    ApiError,
    FetchTableResponse,
    [string, FetchTableParams]
  >(
    ["table", params],
    async ({ queryKey }) => {
      const [_key, params] = queryKey;
      const response = await fetchTable(params);
      if (!response.ok) {
        if (response.status === 400) {
          throw new ApiError(await response.json());
        }
        throw new ApiError("Network response was not ok");
      }
      return response.json();
    },
    options
  );
In some cases, we also use queryClient.fetchQuery to fetch queries. For those cases, we sometimes extract the query function so it can be reused:
export const apiTableQueryFunction: QueryFunction<
  FetchTableResponse,
  [string, FetchTableParams]
> = async ({ queryKey }) => {
  const [_key, params] = queryKey;
  const response = await fetchTable(params);
  if (!response.ok) {
    if (response.status === 400) {
      throw new ApiError(await response.json());
    }
    throw new ApiError("Network response was not ok");
  }
  return response.json();
};

export const useApiTable = (
  params: FetchTableParams,
  options?: UseQueryOptions<
    FetchTableResponse,
    ApiError,
    FetchTableResponse,
    [string, FetchTableParams]
  >
) =>
  useQuery<
    FetchTableResponse,
    ApiError,
    FetchTableResponse,
    [string, FetchTableParams]
  >(["table", params], apiTableQueryFunction, options);

Note that the extracted query function is a plain function, not a hook, so it should not carry the use prefix.
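With the query function extracted, it can be passed straight to queryClient.fetchQuery for imperative fetches outside of a component. A sketch (`queryClient` is the app's query client instance, and `queryFn` stands in for the extracted query function):

```typescript
const loadTable = async (params: FetchTableParams) => {
  // Resolves from the cache when the data is fresh; otherwise runs the
  // extracted query function and caches the result.
  return queryClient.fetchQuery(["table", params], queryFn);
};
```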
The data-fetching flow with React Query looks like this:
- Call useQuery with a query key and query function.
- Render a loading state while the request is in progress.
- Render data after success, and an error if the request fails.
- Use onSuccess and onError callbacks for side effects after queries and mutations.

React Query also makes it easy to do useful things like retrying failed calls, refetching when users refocus the window, query cancellation, and more.
When updating API data with mutations, we often perform optimistic updates via queryClient.setQueryData in onMutate, so the UI updates instantly before the request completes. If the API call fails, we revert in onError.
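A typical sketch of that optimistic-update flow (the query keys, record shape, and the apiRenameRecord helper are illustrative, not our actual code):

```typescript
import { useMutation, useQueryClient } from "react-query";

// Illustrative optimistic update for renaming a record.
const useRenameRecord = (tableId: string) => {
  const queryClient = useQueryClient();

  return useMutation(
    (update: { id: number; name: string }) => apiRenameRecord(tableId, update),
    {
      onMutate: async (update) => {
        // Cancel in-flight fetches so they don't overwrite the optimistic value.
        await queryClient.cancelQueries(["records", tableId]);

        // Snapshot the previous value so we can roll back on error.
        const previous = queryClient.getQueryData(["records", tableId]);

        // Optimistically patch the cache; the UI updates instantly.
        queryClient.setQueryData(["records", tableId], (old: any) =>
          old?.map((r: any) => (r.id === update.id ? { ...r, ...update } : r))
        );

        return { previous };
      },
      onError: (_err, _update, context: any) => {
        // Roll back to the snapshot if the API call fails.
        queryClient.setQueryData(["records", tableId], context.previous);
      },
      onSettled: () => {
        // Re-sync with the server either way.
        queryClient.invalidateQueries(["records", tableId]);
      },
    }
  );
};
```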
In other cases, we do not patch query data manually and instead invalidate queries for refetching. A practical rule of thumb for us is: directly update the cached query data for anything visible on the current page, and invalidate the queries for data used elsewhere. That gives us instant UI updates on the current page while avoiding lots of manual cache patching for off-screen data.
We hit an issue with an API call we had refactored to return a large amount of data needed on initial page load (for example, all sidebar items). A bug in a subset of that server logic caused the call to take 40+ seconds because of a timeout/retry mechanism.
That meant users saw a loading screen for 40+ seconds because one combined API response could not fully resolve.
The more data you move into one API call, the more points of failure you introduce for that call, which is risky when a large part of your UI depends on it.
Error handling is also less clear: it becomes harder to tell which part of a large response caused failure, and the client may need to parse complex error structures to render useful UI errors.
When calls are split more thoughtfully, you are in a better position to show partial UI and specific error messages for only the sections that fail.
Another benefit (especially with React Query) is more precise invalidation. Smaller, focused routes make it easier to invalidate the right query and reduce overfetching.
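For example, with the routes split as above, invalidation can be scoped narrowly (the query keys are illustrative):

```typescript
// Invalidate a single table's metadata query...
queryClient.invalidateQueries(["table", { tableId }]);

// ...or every records query at once, by key prefix.
queryClient.invalidateQueries("records");
```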
ℹ GraphQL APIs are also useful in this context because they allow more precise field selection in a single request.
With Redux, you can keep a normalized cache where entities are stored without duplication and referenced from one source of truth.
For example, in a Twitter-like app that shows tweet lists and tweet detail pages, a normalized cache might look like:
const store = {
tweets: {
ids: [1, 2, 3],
entities: {
1: { message: "Hello world", replyCount: 8, likes: 30 },
2: { message: "Goodbye world", replyCount: 12, likes: 28 },
3: { message: "YOLO", replyCount: 32, likes: 1003 },
},
},
};
If a user opens tweet 1, likes it, and increments from 30 to 31, the same entity reference updates across views:
const store = {
tweets: {
ids: [1, 2, 3],
entities: {
1: { message: "Hello world", replyCount: 8, likes: 31 },
2: { message: "Goodbye world", replyCount: 12, likes: 28 },
3: { message: "YOLO", replyCount: 32, likes: 1003 },
},
},
};
Because the data has one canonical entity reference, the like count updates instantly in all relevant UI without refetching.
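The mechanics can be sketched in a few lines of plain TypeScript (the selector and types are illustrative):

```typescript
// Minimal sketch of a normalized update: one entity record, referenced by id,
// is patched in place, and every view that looks it up sees the new value.
type Tweet = { message: string; replyCount: number; likes: number };

const store = {
  tweets: {
    ids: [1, 2, 3],
    entities: {
      1: { message: "Hello world", replyCount: 8, likes: 30 },
      2: { message: "Goodbye world", replyCount: 12, likes: 28 },
      3: { message: "YOLO", replyCount: 32, likes: 1003 },
    } as Record<number, Tweet>,
  },
};

// Both the list view and the detail view resolve tweets through the same lookup.
const selectTweet = (id: number): Tweet => store.tweets.entities[id];

// Liking tweet 1 patches the single canonical entity...
store.tweets.entities[1] = { ...selectTweet(1), likes: selectTweet(1).likes + 1 };

// ...so every selector now reads 31 likes, with no refetch needed.
```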
With React Query, you do not get normalized caching by default. So you either manually update every cached query that contains the affected data (via queryClient.setQueryData), or invalidate the related queries and let React Query refetch them.
Invalidating many queries can cause overfetching, but it also guarantees client data stays in sync with server data and avoids reimplementing complex server-side rules in the client cache layer.