
Key arguments in Apollo Client

Using the keyArgs API


We recommend reading the core pagination API documentation before learning about considerations specific to keyArgs configuration.

The cache can store multiple entries for a single schema field. By default, each entry corresponds to a different set of values for the field's arguments.

For example, consider this Query.user field:

type Query {
  # Returns whichever User object corresponds to `id`
  user(id: ID!): User
}

If we query for Users with ids 1 and 2, the cache stores entries for both like so:

Cache
{
  'ROOT_QUERY': {
    'user({"id":"1"})': {
      '__ref': 'User:1'
    },
    'user({"id":"2"})': {
      '__ref': 'User:2'
    }
  }
}

As shown above, each entry's storage key includes the corresponding argument values. This means that if any of a field's argument values differ between queries, the storage keys also differ, and those queries result in distinct cache entries.

If a field has no arguments, its storage key is just its name.

This default behavior is for safety: the cache doesn't know whether it can merge the values returned for different argument combinations without invalidating data. In the example above, the cache definitely shouldn't merge the results of querying for Users with ids 1 and 2.

Pagination issues

Certain arguments shouldn't cause the cache to store a separate entry. This is almost always the case for arguments related to paginated lists.

Consider this Query.feed field:

type Query {
  feed(offset: Int, limit: Int, category: Category): [FeedItem!]
}

The offset and limit arguments enable a client to specify which "page" of the feed it wants to fetch. In an app with an infinitely scrolling feed, the client might initially fetch the first ten items, then fetch the next ten:

# First query
query GetFeedItems {
  feed(offset: 0, limit: 10, category: SPORTS)
}

# Second query
query GetFeedItems {
  feed(offset: 10, limit: 10, category: SPORTS)
}

But because their argument values differ, these two lists of ten items are cached separately by default. This means that when the second query completes, the returned items aren't appended to the original list in the feed!

Cache
{
  'ROOT_QUERY': {
    // First query
    'feed({"offset":0,"limit":10,"category":"SPORTS"})': [
      {
        '__ref': 'FeedItem:1'
      },
      // ...additional items...
    ],
    // Second query
    'feed({"offset":10,"limit":10,"category":"SPORTS"})': [
      {
        '__ref': 'FeedItem:11'
      },
      // ...additional items...
    ]
  }
}

In this case, we don't want offset or limit to be included in a cache entry's storage key. Instead, we want the cache to merge the results of the two queries above into a single cache entry that includes the items from both lists.

To help handle this case, we can configure keyArgs for the field.

Setting keyArgs

A key argument is an argument for a field that's included in cache storage keys for that field. By default, all arguments are key arguments, as shown in our feed example:

Cache
{
  'ROOT_QUERY': {
    // First query
    'feed({"offset":0,"limit":10,"category":"SPORTS"})': [
      {
        '__ref': 'FeedItem:1'
      },
      // ...additional items...
    ],
    // Second query
    'feed({"offset":10,"limit":10,"category":"SPORTS"})': [
      {
        '__ref': 'FeedItem:11'
      },
      // ...additional items...
    ]
  }
}

You can override this default behavior by defining a cache field policy for a particular field:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: ["category"],
        },
      },
    },
  },
});

This policy for Query.feed includes a keyArgs array, which contains the names of all arguments that the cache should include in its storage keys.

In this case, we don't want the cache to treat offset or limit as key arguments, because those arguments don't change which list we're fetching from. However, we do want to treat category as a key argument, because we want to store our SPORTS feed separately from other feeds (such as FASHION or MUSIC).

After setting keyArgs as shown, we end up with a single cache entry for our SPORTS feed (note the absence of offset and limit in the storage key):

{
  'ROOT_QUERY': {
    'feed({"category":"SPORTS"})': [
      {
        '__ref': 'FeedItem:1'
      },
      // ...additional items from first query...
      {
        '__ref': 'FeedItem:11'
      },
      // ...additional items from second query...
    ]
  }
}

Important: After you define keyArgs for a paginated list like Query.feed, you also need to define a merge function for the field. Otherwise, the list returned by the second query will overwrite the first list instead of merging with it.
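One way to supply that merge function is the offsetLimitPagination helper from @apollo/client/utilities, which generates a field policy whose merge function writes incoming items into the existing list at the requested offset. A minimal sketch, assuming the same category key argument as above:

import { InMemoryCache } from "@apollo/client";
import { offsetLimitPagination } from "@apollo/client/utilities";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // offsetLimitPagination accepts a keyArgs array, so this keeps
        // category as a key argument while merging pages by offset.
        feed: offsetLimitPagination(["category"]),
      },
    },
  },
});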

Supported values for keyArgs

You can provide the following values for a field's keyArgs:

  • false (indicates that the field has no key arguments)
  • An array of argument, directive, and variable names
  • A function (advanced)

keyArgs array

A keyArgs array can include the types of values shown below. The storage key for a cached field uses the values of all arguments, directives, and variables included in the array (a combined example follows the list).

  • Argument names:

    // Here, category and id are two arguments of the field
    ["category", "id"]
  • Nested argument names for input types with subfields:

    // Here, details is an input type argument
    // with subfields name and date
    ["details", ["name", "date"] ]
  • Directive names (indicated with @), optionally with one or more of their arguments:

    // Here, @units is a directive that can be applied
    // to the field, and it has a type argument
    ["@units", ["type"] ]
  • Variable names (indicated with $):

    // Here, $userId is a variable that's provided to some
    // operations that include the field
    ["$userId"]

keyArgs function (advanced)

You can define a completely different format for a field's storage key by providing a custom function to keyArgs. This function takes the field's arguments and other context as parameters, and it can return any string to use as the storage key (or a dynamically-generated keyArgs array).

This is for advanced use cases. For details, see the FieldPolicy API reference.
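As an illustration, here's a minimal sketch of a keyArgs function for the Query.feed field. It keys the field by its category argument alone and ignores offset and limit; the feed:<category> key format is an arbitrary choice for this example:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          // args holds the field's arguments (or null if it has none);
          // context provides typename, fieldName, field, and variables.
          keyArgs(args, context) {
            // Build the storage key from the category argument only.
            return `feed:${args?.category ?? "ALL"}`;
          },
        },
      },
    },
  },
});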

Which arguments belong in keyArgs?

When deciding which of a field's arguments to include in keyArgs, it's helpful to start by considering the two extremes: all arguments and no arguments. These initial options help to demonstrate the effects of adding or removing a single argument.

Using all arguments

If all arguments are key arguments (this is the default behavior), every distinct combination of argument values for a field results in a distinct cache entry. In other words, changing any argument value results in a different storage key, so the returned value is stored separately. We see this in our pagination example:

Cache
{
  'ROOT_QUERY': {
    // First query
    'feed({"offset":0,"limit":10,"category":"SPORTS"})': [
      {
        '__ref': 'FeedItem:1'
      },
      // ...additional items...
    ],
    // Second query
    'feed({"offset":10,"limit":10,"category":"SPORTS"})': [
      {
        '__ref': 'FeedItem:11'
      },
      // ...additional items...
    ]
  }
}

With this approach, the cache can't return a cached value for a field unless all of the field's arguments match a previously cached result. This significantly reduces the cache's hit rate, but it also prevents the cache from returning an incorrect value when differences in arguments are relevant (as with our User example):

Cache
{
  'ROOT_QUERY': {
    'user({"id":"1"})': {
      '__ref': 'User:1'
    },
    'user({"id":"2"})': {
      '__ref': 'User:2'
    }
  }
}

Using no arguments

If no arguments are key arguments (you configure this by setting keyArgs: false), the field's storage key is just the field's name, without any argument values appended to it. This means that whenever a query returns a value for that field, that value replaces whatever value was already in the cache.

This behavior is often undesirable (especially for a paginated list), so you can define read and merge functions that use argument values to determine how a newly returned value is combined with an existing cached value.

Example

Recall this Query.feed field from the pagination example above:

type Query {
  feed(offset: Int, limit: Int, category: Category): [FeedItem!]
}

We originally set keyArgs: ["category"] for this field to keep feed items from different categories separate. We can achieve the same behavior by setting keyArgs: false and defining the following read and merge functions:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: false,
          read(existing = {}, { args: { offset, limit, category } }) {
            return existing[category]?.slice(offset, offset + limit);
          },
          merge(existing = {}, incoming, { args: { category, offset = 0 } }) {
            const merged = existing[category] ? existing[category].slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            // Return a new object instead of mutating existing, because
            // cached data passed to merge should be treated as immutable.
            return { ...existing, [category]: merged };
          },
        },
      },
    },
  },
});

With the code above, the existing value passed to our read and merge functions is a map of category names to FeedItem lists. This map enables a single cached value to store multiple distinct lists. This manual separation is logically equivalent to using keyArgs: ["category"], so the extra effort is often unnecessary.
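For instance, after the two SPORTS queries above (plus a hypothetical FASHION query), that single cached value might look roughly like this:

{
  'SPORTS': [
    { '__ref': 'FeedItem:1' },
    // ...items from both SPORTS queries...
    { '__ref': 'FeedItem:11' },
    // ...
  ],
  'FASHION': [
    // ...items from the FASHION query...
  ]
}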

If we know that feeds with different category values have different data, and we know that our read function never needs simultaneous access to multiple category feeds, we can safely shift the responsibility for the category argument to keyArgs. This enables us to simplify our read and merge functions to handle only one feed at a time.
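Here's a sketch of that simplification, assuming the same offset and limit arguments as above. With category in keyArgs, existing is now a single list for one category rather than a map of categories:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          // Each category's feed gets its own storage key, so read and merge
          // only ever see one category's list at a time.
          keyArgs: ["category"],
          read(existing, { args: { offset = 0, limit = existing?.length } = {} }) {
            return existing && existing.slice(offset, offset + limit);
          },
          merge(existing = [], incoming, { args: { offset = 0 } = {} }) {
            const merged = existing.slice(0);
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});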

Summary

If the logic for storing and retrieving a field's data is identical for different values of a given argument (like category above), and the distinct argument values are logically independent from one another, then you should probably add that argument to keyArgs to avoid handling it in your read and merge functions.

By contrast, arguments that limit, filter, sort, or otherwise reprocess existing data usually do not belong in keyArgs. This is because putting them in keyArgs makes storage keys more diverse, reducing the cache hit rate and limiting your ability to use different arguments to retrieve different views of the same data.

As a general rule, read and merge functions can do almost anything with your cached data, but keyArgs often provides similar functionality with less code complexity. Whenever possible, you should prefer the limited, declarative API of keyArgs over the unlimited power of functions like merge and read.

The @connection directive

The @connection directive is a Relay-inspired convention that Apollo Client supports. However, we recommend using keyArgs instead, because you can achieve the same effect with a single keyArgs configuration, whereas you need to include the @connection directive in every query you send to your server.

In other words, whereas Relay encourages the following @connection(...) directive for Query.feed queries:

const FEED_QUERY = gql`
  query Feed($category: FeedCategory!, $offset: Int, $limit: Int) {
    feed(category: $category, offset: $offset, limit: $limit) @connection(
      key: "feed",
      filter: ["category"]
    ) {
      edges {
        node { ... }
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
`;

in Apollo Client, you can use the following query (the same query without the @connection(...) directive):

const FEED_QUERY = gql`
  query Feed($category: FeedCategory!, $offset: Int, $limit: Int) {
    feed(category: $category, offset: $offset, limit: $limit) {
      edges {
        node { ... }
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
`;

and instead configure keyArgs in your Query.feed policy:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: ["category"],
        },
      },
    },
  },
});

If the Query.feed field does not have an argument like category that you can use in keyArgs: [...], then it might make sense to use the @connection directive after all:

const FEED_QUERY = gql`
  query Feed($offset: Int, $limit: Int, $feedKey: String) {
    feed(offset: $offset, limit: $limit) @connection(key: $feedKey) {
      edges {
        node { ... }
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
`;

If you execute this query with different values for the $feedKey variable, those feed results are stored separately in the cache, whereas normally they would all be stored in the same list.
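For example (a sketch that assumes a configured ApolloClient instance named client and the FEED_QUERY above), each distinct $feedKey value produces its own cache entry:

// Stored under the "personal feed" connection key
await client.query({
  query: FEED_QUERY,
  variables: { offset: 0, limit: 10, feedKey: "personal feed" },
});

// Stored separately under the "trending feed" connection key
await client.query({
  query: FEED_QUERY,
  variables: { offset: 0, limit: 10, feedKey: "trending feed" },
});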

When choosing a keyArgs configuration for this Query.feed field, you should include the @connection directive as if it were an argument (the @ prefix tells InMemoryCache you mean a directive):

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: ["@connection", ["key"]],
        },
      },
    },
  },
});

With this configuration, your cache uses a feed:{"@connection":{"key":...}} key instead of just feed to store separate { edges, pageInfo } objects within the ROOT_QUERY object:

expect(cache.extract()).toEqual({
  ROOT_QUERY: {
    __typename: "Query",
    'feed:{"@connection":{"key":"some feed key"}}': { edges, pageInfo },
    'feed:{"@connection":{"key":"another feed key"}}': { edges, pageInfo },
    'feed:{"@connection":{"key":"yet another key"}}': { edges, pageInfo },
    // ...
  },
});

The ["key"] in keyArgs: ["@connection", ["key"]] means only the key to the @connection is considered, and any other (like filter) are ignored. Passing just key to @connection is usually adequate, but if you want to pass a filter: ["someArg", "anotherArg"] as well, you should instead include those argument names directly in keyArgs:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: ["someArg", "anotherArg", "@connection", ["key"]],
        },
      },
    },
  },
});

If any of these arguments or directives are not provided for the current query, they're omitted from the storage key automatically, without error. This means it's generally safe to include more arguments or directives in keyArgs than you expect to receive in all cases.

As mentioned above, if a keyArgs array is insufficient to specify your desired keys, you can alternatively pass a function for keyArgs, which takes the args object and a { typename, field, fieldName, variables } context parameter. This function can return either a string or a dynamically-generated keyArgs array.

Although keyArgs (and @connection) are useful for more than just paginated fields, it's worth noting that relayStylePagination configures keyArgs: false by default. You can reconfigure this keyArgs behavior by passing an alternate value to relayStylePagination:

import { InMemoryCache } from "@apollo/client";
import { relayStylePagination } from "@apollo/client/utilities";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: relayStylePagination(["type", "@connection", ["key"]]),
      },
    },
  },
});

In the unlikely event that a keyArgs array is insufficient to capture the identity of a field, remember that you can pass a function for keyArgs, which allows you to serialize the args object however you want.
