Picture Monday morning at a content agency. A strategist has written 20 topic briefs in a Google Sheet — each row is a topic, a target platform, a desired tone, and a call to action. By Tuesday, the social media manager is expected to have 100 posts ready for client approval across LinkedIn, X, Instagram, Facebook, and Threads.

Without automation, that is two to three hours of copy rewriting, manually adjusting character counts, reformatting hashtag strategies, and praying the brand voice stays consistent across five different platform styles. With the Make.com scenario described in this guide, those 100 posts generate in under four minutes. The strategist fills the Sheet. The automation does everything else.

I have run this exact pipeline for three client accounts simultaneously, processing up to 400 posts per scenario execution without a single manual intervention. What follows is the exact module configuration, prompt engineering approach, and failure patterns you need to build and maintain it reliably.

Before You Build: Prerequisites Checklist


Do not skip this section. Every item in this table has caused a failed build at least once when ignored.

| Requirement | Notes | Status |
| --- | --- | --- |
| Make.com account | Core plan minimum — needed for multi-step scenarios with API calls | Required |
| OpenAI API key | Generate at platform.openai.com/api-keys. Store it in Make.com > Connections > HTTP > API Key Auth | Required |
| Google Sheet | One sheet as your content input source: columns Topic, Platform, Tone, Brand Voice, CTA | Required |
| Airtable base (optional) | Alternative input source with richer field types (single select for Platform avoids typos) | Optional |
| Buffer or Hootsuite account | Target publishing destination. Buffer's API is simpler to configure in Make.com | Optional |
| OpenAI model decision | gpt-4o for quality; gpt-3.5-turbo for cost at scale. Decide before building — affects prompt structure | Required |
Model Selection Note
gpt-4o produces noticeably better platform-specific voice differentiation than the smaller models (gpt-4o-mini, gpt-3.5-turbo), particularly for LinkedIn thought leadership posts. At the scale this guide targets (under 5,000 posts/month), the cost difference is under $2. Use gpt-4o unless a specific budget constraint makes gpt-4o-mini or gpt-3.5-turbo the right call.

Platform Constraints You Must Encode Into Your Prompts

Before writing a single prompt or touching Make.com, internalise the table below. OpenAI does not know your publishing targets — it will produce well-written copy that violates every character limit and hashtag convention if you do not explicitly constrain it. These constraints go directly into your system prompt, not as soft suggestions but as hard rules.

| Platform | Max Chars | Hashtag Strategy | Best Tone | Make.com Module |
| --- | --- | --- | --- | --- |
| LinkedIn | 3,000 (post) | 3–5 niche tags | Professional, insight-driven | HTTP > Make an API Call |
| X (Twitter) | 280 per tweet | 1–2 tags max | Sharp, opinionated, concise | HTTP > Make an API Call |
| Instagram | 2,200 (caption) | 20–30 tags in comment | Aspirational, visual context | HTTP > Make an API Call |
| Facebook | 63,206 (rarely use) | Not effective | Conversational, community | Facebook Pages > Create Post |
| Threads | 500 chars | Emerging, minimal | Casual, honest | HTTP > Make an API Call |

The Make.com module you use per platform matters. LinkedIn, X, Instagram, and Threads do not have native Make.com app modules with the full API scope needed for publishing — you will use the HTTP > Make an API Call module with OAuth 2.0 authentication configured manually. Facebook Pages has a dedicated Make.com module that handles auth more cleanly. Plan your auth setup per platform before building the scenario.

Building the Google Sheets Input Source

The Google Sheet is the interface between your content team and the automation. It is also where most preventable errors originate, so the column structure matters more than it might seem.

Required Column Structure

  • Column A — Topic: the subject of the post (e.g., 'Why async work kills team culture')
  • Column B — Platform: must be one of LinkedIn, Twitter, Instagram, Facebook, Threads — exact spelling, no variations
  • Column C — Tone: Professional / Casual / Humorous / Educational / Provocative
  • Column D — Brand Voice: two to three sentences describing how the brand speaks (e.g., 'Direct, data-driven, avoids jargon. Uses short sentences. Never uses exclamation marks.')
  • Column E — CTA: the specific call to action to embed (e.g., 'Link in bio', 'Comment your take', 'DM us for the report')
  • Column F — Status: leave blank for new rows. The Make.com scenario writes 'Done' here after processing to prevent reprocessing.
Why Column B Must Use Exact Strings
Make.com's Router module in this scenario routes execution based on the Platform field value. If a team member types 'twitter' instead of 'Twitter', or 'IG' instead of 'Instagram', the Router's text-match condition fails and the row is silently skipped. Either enforce a dropdown validation in Google Sheets (Data > Data Validation > List from range) or add a Text > Transform module in Make.com that normalises the Platform value to Title Case before the Router.
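If you want to reason about that normalisation step outside Make.com, the same logic can be sketched in Python. The alias map below is an assumption about the shorthand your team actually types; extend it to match reality:

```python
# Canonical platform names the Router branches match on (exact strings).
CANONICAL = ["LinkedIn", "Twitter", "Instagram", "Facebook", "Threads"]

# Lowercase lookup plus common shorthand (hypothetical aliases; extend as needed).
LOOKUP = {p.lower(): p for p in CANONICAL}
LOOKUP.update({"x": "Twitter", "ig": "Instagram", "fb": "Facebook"})

def normalise_platform(raw: str) -> str:
    """Map a free-text Platform cell to the exact string the Router expects.

    Raises ValueError for unresolvable values, so a bad row fails loudly
    instead of being silently skipped by the Router.
    """
    try:
        return LOOKUP[raw.strip().lower()]
    except KeyError:
        raise ValueError(f"Unrecognised platform value: {raw!r}")
```

The same idea applies inside Make.com: normalise first, route second, and treat anything unmatched as an error rather than a skip.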

The Make.com Scenario: Module-by-Module Configuration


This scenario uses eight modules. The sequence is fixed — changing the order of the OpenAI call relative to the Router produces either untargeted copy or empty variables in your prompt, both of which are silent failures.

Module 1: Google Sheets — Watch Rows

Add a Google Sheets > Watch Rows trigger. Connect your Google account and select your content sheet. Set the trigger to watch for new rows only — not updated rows. Set Maximum number of rows returned per cycle to 10 if you are on Make.com Core, or 50 on higher plans. This prevents the scenario from timing out when processing large batches.

In the Limit field, type 1. Yes — limit to one row per trigger execution. The reason: if you process multiple rows in a single execution and the OpenAI call on row 4 fails, you lose progress on rows 1 through 3 because Make.com rolls back the entire execution. Process one row at a time. Use a schedule trigger set to every 2 minutes to maintain throughput.

Module 2: Router — Branch Per Platform

Add a Router module. Create five branches: LinkedIn, Twitter, Instagram, Facebook, Threads. On each branch, set the filter condition to: {{1.Platform}} (Text) equals LinkedIn (or the respective platform name). This is a case-sensitive text comparison — confirm it matches exactly what your Sheet contains.

Each branch then gets its own OpenAI HTTP call module. This is the architectural decision that matters most: do not try to generate all five platform versions in a single OpenAI call. A prompt that says 'Write this for LinkedIn, then for Twitter, then for Instagram...' produces structurally averaged output that serves none of the platforms well. One call per platform. Five modules. Worth the extra build time.

Modules 3–7: HTTP Module — OpenAI API Call (One Per Branch)

Inside each Router branch, add an HTTP > Make an API Call module. Configure it as follows:

  • URL: https://api.openai.com/v1/chat/completions
  • Method: POST
  • Headers: Authorization: Bearer {{your_openai_api_key}}
  • Content-Type: application/json
  • Body type: Raw (application/json)
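Before wiring this into Make.com, it can help to verify the request works from a local script. The sketch below mirrors the module settings above using only the Python standard library; the function names are mine, and it assumes your key is in the OPENAI_API_KEY environment variable:

```python
import json
import os
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(topic: str, tone: str, cta: str, brand_voice: str) -> dict:
    """Assemble the same raw JSON body the HTTP module sends on the LinkedIn branch."""
    system_prompt = f"You are a senior LinkedIn copywriter. Brand voice: {brand_voice}"
    return {
        "model": "gpt-4o",
        "max_tokens": 600,
        "temperature": 0.72,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Topic: {topic}\nTone: {tone}\nCTA: {cta}"},
        ],
    }

def call_openai(payload: dict) -> str:
    """POST the payload and return the generated post text."""
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # Matches the 25-second timeout recommended later in the errors table.
    with urllib.request.urlopen(req, timeout=25) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

If the local call succeeds but the Make.com module fails, the difference is almost always in the headers or body encoding, not the prompt.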

The System Prompt — This Is Where Output Quality Is Decided

The system prompt is the single most impactful lever in this entire pipeline. A weak system prompt produces generic copy that any free AI tool could generate. A precise system prompt produces output indistinguishable from a skilled platform-native copywriter. Here is the structure I use, with the LinkedIn branch as the example:

{
  "model": "gpt-4o",
  "max_tokens": 600,
  "temperature": 0.72,
  "messages": [
    {
      "role": "system",
      "content": "You are a senior LinkedIn copywriter. Your output must:\n1. Be between 900–1,200 characters including spaces.\n2. Open with a single-sentence hook that creates curiosity or mild controversy.\n3. Use short paragraphs — maximum 2 sentences each.\n4. Include exactly 4 relevant hashtags at the end, each on its own line.\n5. End with the exact CTA string provided. Do not modify or paraphrase it.\n6. Never use the phrases 'game-changing', 'deep dive', or 'leverage'.\nBrand voice: {{1.Brand_Voice}}"
    },
    {
      "role": "user",
      "content": "Topic: {{1.Topic}}\nTone: {{1.Tone}}\nCTA: {{1.CTA}}"
    }
  ]
}

Three things to note about this prompt structure. First, the character count constraint is stated as a range, not an exact number — OpenAI performs better with ranges than exact targets. Second, banned phrases are listed explicitly; without them, 'game-changing' appears in approximately 30% of LinkedIn outputs regardless of tone instructions. Third, the CTA is passed verbatim from the Sheet and the prompt instructs the model never to rephrase it — this is critical for clients who have legally reviewed their CTAs.

For the Twitter branch, change max_tokens to 120, update the character constraint to 'under 270 characters including hashtags', limit hashtags to 1, and replace the hook instruction with 'Open with a declarative statement or an unexpected statistic.'

For Instagram, change max_tokens to 400, set character constraint to '1,500–2,000 characters for the caption', and add: 'Place 25 relevant hashtags as a second comment block, separated from the caption by five line breaks. Format each hashtag on its own line.'
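The per-branch differences above reduce to a small parameter table. A sketch of that table as data, using only the values this guide specifies (Facebook and Threads are omitted because their parameters are not given here; add them with your own limits):

```python
# Per-branch OpenAI parameters, as described in the text above.
BRANCH_PARAMS = {
    "LinkedIn": {
        "max_tokens": 600,
        "char_constraint": "between 900 and 1,200 characters including spaces",
        "hashtags": 4,
    },
    "Twitter": {
        "max_tokens": 120,
        "char_constraint": "under 270 characters including hashtags",
        "hashtags": 1,
    },
    "Instagram": {
        "max_tokens": 400,
        "char_constraint": "1,500 to 2,000 characters for the caption",
        "hashtags": 25,
    },
}
```

Keeping these values in one place (a config, or the Prompt Library tab described later) means a limit change touches one row instead of five Make.com modules.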

Module 8: Google Sheets — Update a Row

After the OpenAI call on each branch, add a Google Sheets > Update a Row module. Map the Row Number from the trigger output ({{1.__ROW_NUMBER__}}) as the identifier. Set the Status column (Column F) to the value Done. This marks the row as processed and prevents it from being picked up again on the next trigger cycle.

Also add a second update that writes the generated content to a new column — Column G, labelled Generated Post. Map the OpenAI response value: {{3.data.choices[].message.content}}. The brackets notation addresses the choices array; only the first element matters here, since the request body above does not set n and the API therefore returns a single choice.
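In plain Python terms, the mapping above is equivalent to the following extraction from the parsed response body (the sample response is abridged to the fields that matter):

```python
def extract_post(response: dict) -> str:
    """Pull the generated post text out of a parsed chat completions response."""
    choices = response.get("choices", [])
    if not choices:
        # An empty choices array usually points at a malformed request,
        # e.g. a missing Content-Type header.
        raise ValueError("OpenAI response contained no choices")
    return choices[0]["message"]["content"]

# Abridged shape of a real response body:
sample_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hot take: async work is not the problem."}}
    ]
}
```

Failing loudly on an empty choices array is deliberate: it surfaces the malformed-request case described in the errors section instead of writing a blank cell to the Sheet.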

Understanding the Real Cost Before You Scale


One question I get consistently: 'How much does this actually cost per month?' The answer depends entirely on your volume and model choice. The table below uses current OpenAI pricing (gpt-4o at $5 per 1M input tokens / $15 per 1M output tokens; gpt-4o-mini at $0.15 input / $0.60 output) and assumes the prompt structure above.

| Scenario | Posts/Month | Avg Tokens/Post | Model | OpenAI Cost | Make.com Ops |
| --- | --- | --- | --- | --- | --- |
| Small agency | 120 | ~800 | gpt-4o | ~$0.48 | 960 ops |
| Mid-size brand | 500 | ~900 | gpt-4o | ~$2.25 | 4,000 ops |
| High-volume SaaS | 2,000 | ~700 | gpt-3.5-turbo | ~$0.28 | 16,000 ops |
| Enterprise (all 5 platforms) | 5,000 | ~1,000 | gpt-4o-mini | ~$1.50 | 40,000 ops |

Make.com operations are the more consequential cost at scale. Each module execution in a scenario counts as one operation, so a single row processed through this 8-module scenario consumes 8 operations. At 2,000 posts/month, that is 16,000 operations. The Core plan (10,000 ops/month) covers you only up to 1,250 posts per month; beyond that, you need the Pro plan (100,000 ops/month).
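The operations maths above is worth scripting before you commit to a plan. A quick sketch, using the plan ceilings quoted in this section:

```python
MODULES_PER_ROW = 8  # each module execution counts as one Make.com operation

# Monthly operation ceilings, per the figures quoted above.
PLAN_LIMITS = {"Core": 10_000, "Pro": 100_000}

def ops_needed(posts_per_month: int, modules_per_row: int = MODULES_PER_ROW) -> int:
    """Total Make.com operations consumed by the generation scenario."""
    return posts_per_month * modules_per_row

def cheapest_plan(posts_per_month: int) -> str:
    """Smallest listed plan whose ceiling covers the monthly volume."""
    needed = ops_needed(posts_per_month)
    for plan, limit in PLAN_LIMITS.items():
        if needed <= limit:
            return plan
    raise ValueError(f"{needed} ops/month exceeds every listed plan")
```

Remember this counts only the generation scenario; the review-gate publishing scenario described later consumes its own operations on top.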

Cost Control Tip
Set a hard monthly spend limit in your OpenAI account under Settings > Billing > Usage Limits. Set the hard limit $5 above your expected monthly spend. This prevents a runaway scenario loop from generating unexpected charges. I learned this the hard way when a misconfigured Router caused an infinite retry loop that burned through $40 of API credits in 90 minutes.

Common Errors & How to Fix Them

This section uses a decision-tree format instead of a simple error list. For each symptom, a yes/no diagnostic question routes you directly to the fix. Every row below is drawn from an actual failure in a deployed scenario.

| Symptom | First Question to Ask | If YES | If NO |
| --- | --- | --- | --- |
| OpenAI returns empty content | Did Make.com log a 429 rate limit error? | Add a 1,000 ms Sleep module before each OpenAI call. Batch runs hit the TPM ceiling on free-tier API keys. | Check that your HTTP module Content-Type header is set to application/json. A missing header returns a 200 with an empty choices array. |
| Generated copy ignores tone instruction | Is your system prompt longer than 1,500 tokens? | Trim the system prompt. OpenAI deprioritises later instructions in long system prompts. Put the tone directive in the first 200 tokens. | Check that the Tone variable from your Google Sheet is actually mapping into the prompt string, not passing as undefined. |
| All 5 platform outputs are identical | Are you calling OpenAI once and splitting, or calling it 5 times? | You must make one API call per platform. A single call with 'Write for LinkedIn, Twitter, Instagram...' produces averaged, bland copy. | Check the Iterator module. If it is iterating platforms but all 5 share the same Bundle ID, the iterator is not advancing correctly. |
| Scenario runs but Sheet rows not consumed | Is the Google Sheets trigger set to 'New Row' or 'Updated Row'? | Switch to New Row. Updated Row re-triggers any time a cell in an existing row is edited, causing re-processing of old content. | Check the Sheet permissions. If the Make.com Google connection is read-only, the scenario cannot mark processed rows and will rerun the same rows indefinitely. |
| Output posts exceed character limit | Did you include the character limit in the OpenAI prompt? | Add an explicit instruction: 'Response must not exceed 280 characters. Do not add ellipsis or truncation markers.' | Add a Make.com Text > Get Length of String module after the OpenAI response. Route anything over the limit to a Router branch that re-calls OpenAI with a 'Shorten to X chars' instruction. |
| Scenario hits 40-second timeout | How many sequential OpenAI calls are in a single execution path? | Split the scenario. The first scenario handles rows 1–5 using an Aggregator, then triggers a webhook to launch Scenario 2 for rows 6–10. | Check whether any HTTP module is waiting on a slow response. Set the HTTP module Timeout to 25 seconds and enable the Parse Response toggle — this fails fast rather than hanging. |
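The character-limit row above amounts to a post-generation length check rather than blind trust in the prompt. Outside Make.com, the same guard looks like this, with the hard ceilings taken from the platform constraints table earlier in the guide:

```python
# Hard character ceilings from the platform constraints table.
CHAR_LIMITS = {
    "LinkedIn": 3000,
    "Twitter": 280,
    "Instagram": 2200,
    "Facebook": 63206,
    "Threads": 500,
}

def needs_shortening(platform: str, post: str) -> bool:
    """True when a generated post must be routed back for a 'Shorten to X chars' re-call."""
    return len(post) > CHAR_LIMITS[platform]
```

In Make.com, this is the Text > Get Length of String module feeding a Router filter; the Python version is useful for batch-auditing posts already written to the Sheet.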

Building a Human Review Gate Before Publishing

Fully automated publishing without a review step is a liability, not a feature. Even well-tuned prompts occasionally produce output that is off-brand, factually questionable, or tonally jarring. The right architecture is: auto-generate, human-approve, auto-publish. The review gate between generation and publishing costs 15 minutes per 100 posts and prevents client escalations.

Implementing the Review Gate in Google Sheets

After the scenario writes Generated Post to Column G, add a second column: Column H, labelled Approved. Leave it blank by default. Create a second, separate Make.com scenario that uses a Google Sheets > Watch Rows trigger filtered to rows where Column H equals Yes. This second scenario handles the actual publishing API call to Buffer, Hootsuite, or the native platform API.

Your content team opens the Sheet, reads Column G, types Yes in Column H for approved posts, and deletes or rewrites anything that misses the mark. The publishing automation fires automatically when they approve. The content team never touches Make.com — they work entirely in the Sheet they already know.

Using a Google Form as an Approval Interface

For larger teams or agency-client workflows, replace the Column H text entry with a linked Google Form that shows the generated post and asks a binary approve/revision question. The form response writes back to the Sheet via a Form > Sheets connection, triggering the publishing scenario. This creates a named, timestamped audit trail of who approved each post — useful for regulated industries or client accountability.

What to Build After the Base System Is Stable


Add DALL-E 3 Image Generation for Visual-First Platforms

Instagram and Facebook posts perform significantly better with accompanying visuals. After the OpenAI text generation call on the Instagram branch, add a second HTTP module calling the OpenAI Images endpoint at https://api.openai.com/v1/images/generations. Pass the topic and a condensed image prompt instruction in the request body. The response returns a URL to a generated image. Download it using an HTTP > Get a File module and store it in Google Drive, then reference the Drive file ID in your Instagram publishing API call.
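The second HTTP call can be prototyped the same way as the text call. A minimal sketch of the request body for the images endpoint; the prompt wording is mine, and you should check the current Images API reference for the parameters your account supports:

```python
def build_image_payload(topic: str) -> dict:
    """Body for POST https://api.openai.com/v1/images/generations (sketch).

    The prompt template here is an illustrative assumption; tune it to
    your brand's visual guidelines before enabling it in production.
    """
    return {
        "model": "dall-e-3",
        "prompt": f"A clean, brand-safe social media illustration about: {topic}",
        "n": 1,
        "size": "1024x1024",
    }
```

The response contains a URL rather than image bytes, which is why the Make.com flow needs the HTTP > Get a File step before anything can be stored in Drive.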

DALL-E 3 costs $0.040 per 1024x1024 image. At 500 Instagram posts per month, image generation adds $20 to your monthly cost. Evaluate whether that is justified by engagement lift before enabling it across all posts.

Route High-Performing Topics to a Repurposing Loop

Connect your social analytics tool (Buffer Analytics, Sprout Social, or native platform APIs) to a separate Make.com scenario that checks post engagement weekly. Any post exceeding a defined engagement threshold — say, 200 LinkedIn reactions or 50 X reposts — gets its topic written back to a new row in the Google Sheet with Tone set to 'Repurpose: expand into thread' or 'Repurpose: reframe for opposite audience'. The generation pipeline picks it up automatically on the next cycle. This creates a compounding content engine where your best-performing ideas generate their own follow-up variations without any manual identification work.

Version Your Prompts Like Code

Store each platform's system prompt in a dedicated Google Sheet tab labelled Prompt Library. The Make.com HTTP module reads the active prompt from this tab at runtime using a Google Sheets > Search Rows lookup before the OpenAI call. When you want to update the LinkedIn prompt, you edit one cell in the Prompt Library tab — not the Make.com module itself. This makes prompt iteration fast, keeps a version history via Google Sheets change history, and allows non-technical team members to refine prompts without touching the automation infrastructure.


What Breaks If You Rush This

The two mistakes I see most often when people build this pipeline for the first time are processing multiple rows per execution before testing stability, and using a single OpenAI call to generate all platform variants simultaneously. Both seem like efficiency gains. Both produce either data loss or mediocre output that undermines the entire point of the system.

Build it row-by-row first. Test every Router branch individually by temporarily disabling the other four and submitting a single test row per platform. Verify the generated output quality against the platform constraints table before enabling all branches simultaneously. Add the Google Sheets status update module last, after you are satisfied the generation quality meets your standards — this way, failed test runs do not mark rows as Done and hide them from future processing.

Once you have seen the system run cleanly for 50 posts across all five platforms, you have enough confidence to process in batches, layer in DALL-E, and connect the publishing endpoint. The foundation has to be solid before you build upward.