- Global Fields can’t be used within other Global Fields. This means that if you have a common pattern of `Title`, `Description`, `Links` across multiple Global Fields, you can’t define that pattern once, create 1 fragment, and then use that fragment in all your other Global Fields. Take a look at the `ideal.graphql` and `reality.graphql` examples below and ask yourself this question for each file: “What happens if I introduce a new Link type?”. For `ideal.graphql` you update 1 Global Field and 1 Fragment. For `reality.graphql` you need to update x number of Global Fields and x number of Fragments. The `ideal.graphql` example isn’t a radical idea either; Blocks/Global Fields are meant to be re-usable and create a single source of truth, ContentStack just doesn’t allow you to do this for some reason.
- When you create a block in a Modular Block field, ContentStack creates a new block with its own unique GraphQL type, containing 1 field that references your block. Let’s say you have a `Page` content type with a Modular Block field called `Flexible Content`. If you had 3 blocks that reference Global Fields, ContentStack would generate 3 new GraphQL types, one for each block.
The `ideal.graphql` examples are how I’ve structured similar fields & fragments in other GraphQL-based CMSs.
So… knowing this, how can you start optimising your queries to stay within the 8kb limit? I would recommend 2 things:
- Minimize your GraphQL query before sending it to ContentStack.
- If possible, generate custom Fragment names that are small and random.
Minimize your GraphQL query before sending it to ContentStack.
Note: This has only been tested with a custom `fetch`-based handler that I’ve written. You may need to adjust this for your own use case, for example when using Apollo Client or similar. When building the `body` for my `fetch` request, I run the query through this package first.
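As a rough illustration, here is a minimal sketch of a `fetch`-based handler that minifies the query before it becomes the request body. The `minifyQuery` helper is a naive stand-in for the minification package mentioned above, and the endpoint and header details are placeholders rather than ContentStack’s actual values:

```ts
// Minimal sketch: minify the query before it is sent as the request body.
// `minifyQuery` is a naive stand-in for the real minification package — it
// doesn't handle string literals containing spaces or "#" characters.
const minifyQuery = (query: string): string =>
  query
    .replace(/#.*$/gm, "") // strip comments
    .replace(/\s+/g, " ") // collapse whitespace
    .replace(/\s*([{}():,])\s*/g, "$1") // drop whitespace around punctuation
    .trim();

type GraphQLRequest = {
  query: string;
  variables?: Record<string, unknown>;
};

// Hypothetical fetch handler — endpoint and auth headers are placeholders.
async function contentstackFetch<T>({ query, variables }: GraphQLRequest): Promise<T> {
  const body = JSON.stringify({ query: minifyQuery(query), variables });

  const response = await fetch(process.env.CONTENTSTACK_GRAPHQL_ENDPOINT!, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // access token / API key headers omitted for brevity
    },
    body,
  });

  const { data } = await response.json();
  return data as T;
}
```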
If possible, generate custom Fragment names that are small and random.
I am using a package I wrote called Pigeon to generate my Fragments & Queries for me. For this project specifically, I am hijacking the Fragment Name generation to produce a small, 4-character-long string. By default, Pigeon will take the `__typename` and append `Fragment` to it. This has been fine in the past; however, with ContentStack’s size limit, I needed to reduce the number of bytes my fragment names were taking up.
I am using the package nanoid to generate these random strings.
For the `fragmentName` option in my Pigeon configuration, I use the `frgName` function sketched below. I am creating a `lookup` table to ensure that if I have the same `__typename` I will always use the same fragment name. In the rare case that I have a collision, I will continually generate a new fragment name until I find one that isn’t in the `lookup` table.
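Here is a minimal sketch of what that `frgName` function could look like, using `customAlphabet` from nanoid. The 53-character alphabet (letters plus underscore, so the result is always a valid GraphQL name) and the exact function shape are my assumptions for illustration, not Pigeon’s actual API:

```ts
import { customAlphabet } from "nanoid";

// 26 uppercase + 26 lowercase + "_" = 53 characters, all valid as the first
// character of a GraphQL name (digits are deliberately excluded).
const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_";
const nano = customAlphabet(alphabet, 4);

// Map each __typename to a stable fragment name, and track names already used.
const lookup = new Map<string, string>();
const taken = new Set<string>();

export function frgName(typename: string): string {
  const existing = lookup.get(typename);
  if (existing) return existing;

  // On the rare collision, keep generating until a free name turns up.
  let name = nano();
  while (taken.has(name)) {
    name = nano();
  }

  lookup.set(typename, name);
  taken.add(name);
  return name;
}
```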
With 53 possible characters, at a length of 4 characters per name, there are 53^4 = 7,890,481 possible fragment names.
Before this optimisation, my query size was:
Bonus: How do I know my query size?
Because I have my own custom client, I was able to add some code before sending the request to get a rough idea of the size of my query. I have a `DEBUG_QUERY_SIZE` environment variable that I can set to `"true"` to enable this logging. I grab the query name, if it is available, for better logging, calculate the byte size of the query, and then log it as a percentage of the limit. If the percentage is over 80% I log it in red, if it is over 50% I log it in yellow, otherwise it is green.
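A rough sketch of what that logging could look like, assuming a Node environment; the operation-name regex, the 8kb budget constant, and the ANSI colour codes are my own choices for illustration:

```ts
// Sketch of query-size logging, gated behind a DEBUG_QUERY_SIZE env var.
const QUERY_SIZE_LIMIT = 8 * 1024; // ContentStack's 8kb limit, in bytes

function logQuerySize(body: string, query: string): void {
  if (process.env.DEBUG_QUERY_SIZE !== "true") return;

  // Pull out the operation name (e.g. "query PageQuery { ... }") for nicer logs.
  const name = query.match(/(?:query|mutation)\s+(\w+)/)?.[1] ?? "anonymous";

  // Measure the whole request body, since that is what ContentStack limits.
  const bytes = new TextEncoder().encode(body).length;
  const percent = (bytes / QUERY_SIZE_LIMIT) * 100;

  // Red over 80%, yellow over 50%, green otherwise (ANSI escape codes).
  const colour = percent > 80 ? "\x1b[31m" : percent > 50 ? "\x1b[33m" : "\x1b[32m";
  console.log(`${colour}[${name}] ${bytes} bytes (${percent.toFixed(1)}% of limit)\x1b[0m`);
}
```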
I am calculating the size of the entire `body`, not just the query. ContentStack limits the size of the `body` you send to their API, but the query is really the only part we can control, so that is what I focused on optimising.

Hopefully this guide has been helpful in understanding how to optimise your queries to stay within ContentStack’s 8kb limit. ContentStack is the first platform I have used where I have had to consider a limit like this, so it has been an interesting (albeit unnecessary, imo) challenge to overcome.