One of the many limitations of ContentStack’s GraphQL API is that it has an 8kb limit on the size of the request body. If you have a complex data structure or allow highly dynamic pages to be constructed with Modular Blocks, you may find that you hit this limit. With other GraphQL-based CMSs, you can use re-usable blocks (Global Fields in ContentStack) to at least create fragments that reduce duplication in your queries. Unfortunately, ContentStack has made 2 bizarre architectural decisions that make this harder than it should be:

  1. Global Fields can’t be used within other Global Fields. This means if you have a common pattern of Title, Description, Links across multiple Global Fields, you can’t define that pattern once, write 1 fragment for it, and then use that fragment in all your other Global Field fragments. Take a look at the ideal.graphql and reality.graphql examples below and ask yourself this question for each file: “What happens if I introduce a new Link type?“.

The answer? For ideal.graphql you update 1 Global Field and 1 Fragment. For reality.graphql you need to update x Global Fields and x Fragments. The ideal.graphql approach isn’t a radical idea either: Blocks/Global Fields are meant to be re-usable and act as a single source of truth; ContentStack just doesn’t allow you to do this for some reason. (There’s a stripped-down sketch of this difference just after the second point below.)

  2. When you create a block in a Modular Block field, ContentStack generates a unique wrapper GraphQL type for that block, with a single field that references your block. Let’s say you have a Page content type with a Modular Block field called Flexible Content. If you had 3 blocks that reference Global Fields, ContentStack would generate 3 new GraphQL types, one for each block.

Why… why would you do this? It is not worth creating individual fragments for these new GraphQL types because they are only used once and are specific to the content type. It also generates some of the longest type names known to humanity for an API that has a BYTE SIZE LIMIT. Again, this isn’t a radical idea; the ideal.graphql examples are how I’ve structured similar fields & fragments in other GraphQL-based CMSs.
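The full ideal.graphql and reality.graphql files tell the whole story, but here is a stripped-down sketch of point 1, with made-up type and field names, to show the shape of the problem:

// Made-up type and field names, purely to illustrate the difference.

// Ideal: a shared Link Global Field produces one GraphQL type,
// so one fragment can be reused inside every other Global Field fragment.
const idealFragments = /* GraphQL */ `
  fragment LinkFields on Link {
    label
    url
    # a new Link type only needs to be added here
  }

  fragment HeroFields on Hero {
    title
    description
    links { ...LinkFields }
  }

  fragment CalloutFields on Callout {
    title
    description
    links { ...LinkFields }
  }
`;

// Reality: because Global Fields can't contain Global Fields, each one
// declares its own links group with its own GraphQL type, so the same
// selection is duplicated and every copy has to change together.
const realityFragments = /* GraphQL */ `
  fragment HeroFields on Hero {
    title
    description
    links { label url }
  }

  fragment CalloutFields on Callout {
    title
    description
    links { label url }
  }
`;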


So… knowing this, how can you start optimising your queries to stay within the 8kb limit? I would recommend 2 things:

  1. Minimize your GraphQL query before sending it to ContentStack.
  2. If possible, generate custom Fragment names that are small and random.

Minimize your GraphQL query before sending it to ContentStack.

Note: This has only been tested with a custom fetch-based handler that I’ve written. You may need to adjust this for your own use case, for example when using Apollo Client or similar.

I stumbled on a package called gqlmin, which has been SUPER useful in reducing the size of my queries. I will note early: this package will minify your query; it will not validate your GraphQL. If your query is malformed, this won’t fix it or report the issues.

In my client, when I construct the body for my fetch request, I run the query through this package.

import gqlmin from "gqlmin";

// isEmpty is whatever empty-check helper you already have to hand (e.g. lodash's isEmpty).
const body = JSON.stringify({
  query: gqlmin(query),
  ...(isEmpty(variables) ? {} : { variables }),
  ...(isEmpty(operationName) ? {} : { operationName }),
});
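As a rough illustration of what the minification does (the query and field names here are made up, and the exact output may differ slightly between versions of the package):

import gqlmin from "gqlmin";

// A query as you would normally author it, full of indentation and newlines.
const pretty = `
  query Example {
    entry {
      title
      description
    }
  }
`;

// The insignificant whitespace is stripped out, leaving something close to:
// query Example{entry{title description}}
console.log(gqlmin(pretty));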

On a recent project I had quite a large query; without this optimisation I was over the limit.

Query size: (9174/8192) bytes (111.99%)

With this optimisation, I was able to get a decent amount under this limit.

Query size: (7229/8192) bytes (88.24%)

For extra context, my Modular Block has 10 blocks, some with quite a lot of fields and nested blocks, while others are quite simple. So it is a good mixture of complexity.

If possible, generate custom Fragment names that are small and random.

I am using a package I wrote called Pigeon to generate my Fragments & Queries for me. For this project specifically, I am hijacking the Fragment Name generation to be a small, 4-character string. By default, pigeon will take the __typename and append Fragment to it. This has been fine in the past; however, with ContentStack’s size limit, I needed to reduce the number of bytes my fragment names were taking up.

I am using the package nanoid to generate these random strings.

import { customAlphabet } from "nanoid";

const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_";
const nanoid = customAlphabet(alphabet, 4);

function frgNameGenerator() {
  // Maps an identifier (the __typename) to the short name already generated for it.
  const lookup: { [key: string]: string } = {};

  return function (identifier: string) {
    const name = lookup[identifier];
    if (name) return name;

    // Keep generating until we land on a name that hasn't been used yet.
    let newName: string | undefined = undefined;
    do {
      newName = nanoid();
      if (Object.values(lookup).includes(newName)) {
        newName = undefined;
      }
    } while (!newName);

    lookup[identifier] = newName;
    return newName;
  };
}

export const frgName = frgNameGenerator();

Whenever I needed to set the fragmentName in my pigeon configuration, I would use the frgName function. I am creating a lookup table to ensure that the same __typename always gets the same fragment name. In the RARE case that I have a collision, I keep generating new fragment names until I find one that isn’t already in the lookup table.

With 53 possible characters, at a length of 4 characters per name, there are 7,890,481 possible fragment names.
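As a small usage sketch (assuming the generator above is exported from a frgName module, and using made-up __typename values):

import { frgName } from "./frgName";

// The same identifier always maps to the same short name…
const a = frgName("PageFlexibleContentHeroBlock");
const b = frgName("PageFlexibleContentHeroBlock");
console.log(a === b); // true

// …while a different identifier gets its own 4-character name.
const c = frgName("PageFlexibleContentCalloutBlock");
console.log(a === c); // false

// Versus the default `${__typename}Fragment` naming, every spread of a
// fragment now costs 4 bytes instead of 30+, which adds up quickly in a
// query that spreads fragments dozens of times.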

Before this optimisation, my query size was:

Query size: (7229/8192) bytes (88.24%)

With this optimisation:

Query size: (5098/8192) bytes (62.23%)

Implementing these two changes, I was able to reduce my entire query size by almost half (from 9174 bytes down to 5098), which is a significant improvement and should give me a lot of headroom for future development.

Bonus: How do I know my query size?

Because I have my custom client, I was able to add some code before sending the request to get a rough idea of the size of my query.

if (DEBUG_QUERY_SIZE === "true") {
  // Pull the operation name out of the query string for nicer log output.
  const regex = /\bquery\s+(\w+)/i;
  const match = query.match(regex);
  const queryName = match ? match[1] : operationName || "Unknown";

  // Measure the full request body against ContentStack's 8192 byte limit.
  const bytes = Buffer.byteLength(body, "utf8");
  const percentage = (bytes / 8192) * 100;
  const msg = `[${queryName}] Query size: (${bytes}/8192) bytes (${percentage.toFixed(
    2
  )}%)`;

  if (percentage > 80) {
    console.log(`\x1b[31m${msg}\x1b[0m`);
  } else if (percentage > 50) {
    console.log(`\x1b[33m${msg}\x1b[0m`);
  } else {
    console.log(`\x1b[32m${msg}\x1b[0m`);
  }
}

I have a DEBUG_QUERY_SIZE environment variable that I can set to "true" to enable this logging. I grab the query name if it is available for better logging, calculate the bytes of the request body, and then log the size as a percentage of the limit. If the percentage is over 80% I log it in red, if it is over 50% I log it in yellow, otherwise it is green.

I am calculating the size of the entire body, not just the query. ContentStack limits the size of the body you send to their API; the query is the only part we can really control, so that is what I focused on optimising.
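If you are curious how much of the body is the query versus everything else, the same debug block can break it down (a minimal sketch, reusing the query and body variables from above):

const totalBytes = Buffer.byteLength(body, "utf8");
const queryBytes = Buffer.byteLength(gqlmin(query), "utf8");

// Variables, the operationName and the JSON wrapper all count towards the
// 8192 byte limit too; the query is simply the part that's easiest to shrink.
console.log(
  `query: ${queryBytes} bytes, everything else: ${totalBytes - queryBytes} bytes`
);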

Hopefully, this guide has been helpful in understanding how to optimise your queries to stay within ContentStack’s 8kb limit. ContentStack is the first platform I have used where I have had to consider a limit like this, so it has been an interesting (albeit unnecessary, imo) challenge to overcome.