GA4 is a powerful tool. For most campaigns it covers the majority of what you need to know — traffic sources, user behaviour, conversion events, funnel progression. If your tracking is clean and your measurement strategy is solid, GA4 will take you a long way.
But GA4 has a boundary. It sees what happens on your website or app, within the session, for users who have given consent. Everything outside that boundary — what's happening in your ad platforms, what your CRM knows about a lead after they convert, what your A/B testing tool is measuring, what happens after a user logs in — is invisible to it.
For many campaigns, that boundary doesn't matter. For some, it's the difference between understanding your results and misreading them entirely.
This post is about recognising when you've hit that boundary, and what to do about it — practically, based on where your team is right now.
Where GA4 Runs Out
Understanding the limits of GA4 isn't a criticism of the platform — it's just being honest about what it was designed to do. GA4 is a behavioural analytics tool built around sessions and events on your digital properties. It's not a data warehouse, a CRM, or an ad attribution platform.
The gaps show up in predictable places:
Ad platform data
GA4 attributes ad traffic through click-through tracking, auto-tagging, and UTM parameters; it never sees the ad platform's own reporting. The native reporting inside Meta, Google Ads, LinkedIn, or TikTok often tells a different story: different attribution models, different conversion windows, different definitions of what counts as a view or a click. When GA4 and your ad platform disagree on conversions, you need both data sources to understand why.
Running a campaign across multiple paid channels and trying to reconcile performance using GA4 alone is like reading one side of a conversation. You need the platform data alongside the GA4 data to get the full picture.
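If you're reconciling the two sides yourself, a few lines of pandas can at least put the conversion counts next to each other. This is a minimal sketch: the column names, campaigns, and figures are illustrative assumptions, not a fixed GA4 or Meta export schema.

```python
import pandas as pd

# Hypothetical daily exports. In practice these rows would come from a
# GA4 report export and the ad platform's own reporting export.
ga4 = pd.DataFrame({
    "campaign": ["spring_sale", "spring_sale", "brand"],
    "date": ["2024-05-01", "2024-05-02", "2024-05-01"],
    "ga4_conversions": [40, 35, 12],
})
meta = pd.DataFrame({
    "campaign": ["spring_sale", "spring_sale", "brand"],
    "date": ["2024-05-01", "2024-05-02", "2024-05-01"],
    "platform_conversions": [55, 48, 15],
})

# Join on campaign + date so both conversion counts sit side by side.
merged = ga4.merge(meta, on=["campaign", "date"], how="outer")

# How far apart are the two sources, as a share of the platform number?
merged["delta"] = merged["platform_conversions"] - merged["ga4_conversions"]
merged["delta_pct"] = merged["delta"] / merged["platform_conversions"]
print(merged)
```

The point of the `delta_pct` column isn't to decide which number is "right"; it's to make the size and direction of the disagreement visible per campaign, which is the first step in explaining it.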
A/B testing and experimentation tools
If you're running experiments during a campaign — testing landing pages, creative variants, offer structures — your testing tool (Optimizely, VWO, Google Optimize's successors, or whatever you're using) holds experiment assignment data that GA4 doesn't have natively. You can pass experiment variant data into GA4 as a custom dimension, but if that integration isn't set up correctly, your campaign data and your test data live in separate silos and can't be joined.
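If the custom-dimension integration isn't in place, the join can still be done after the fact by matching user identifiers across the two exports. Here's a hedged pandas sketch, assuming both tools expose the same user ID; the column names and IDs are illustrative, not a real Optimizely or VWO schema.

```python
import pandas as pd

# Hypothetical experiment assignments from the testing tool.
assignments = pd.DataFrame({
    "user_pseudo_id": ["u1", "u2", "u3", "u4"],
    "variant": ["control", "treatment", "control", "treatment"],
})

# Hypothetical conversion events from a GA4 export.
conversions = pd.DataFrame({
    "user_pseudo_id": ["u1", "u2", "u4"],
    "converted": [1, 1, 1],
})

# Left-join so every assigned user keeps a row; non-converters get 0.
joined = assignments.merge(conversions, on="user_pseudo_id", how="left")
joined["converted"] = joined["converted"].fillna(0).astype(int)

# Conversion rate per variant: the number neither silo holds on its own.
rates = joined.groupby("variant")["converted"].mean()
print(rates)
```

The left join matters: dropping users who never converted would silently inflate both variants' conversion rates.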
CRM and post-conversion data
GA4 measures the conversion event. It doesn't know what happened to that lead or customer afterwards — whether the lead was qualified, whether the sale closed, what the customer lifetime value turned out to be. For B2B campaigns especially, the conversion event in GA4 is often just the beginning of the real funnel. The data that tells you whether the campaign actually drove business value lives in the CRM.
Authenticated product experiences
As discussed in part two, when users move from a public-facing site into an authenticated product — a banking app, a SaaS dashboard, a membership portal — GA4 typically loses visibility. The marketing team can measure up to the login wall. Everything beyond it requires a different approach.
Offline conversions
Campaigns that drive phone calls, in-store visits, or sales rep conversations produce conversions that have no automatic digital equivalent. These need to be imported back into GA4 or your data infrastructure manually, or they disappear from your measurement picture entirely.
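One common pattern for closing this loop is to capture the GA4 client ID when the lead is first created, store it in the CRM alongside the deal, and later shape closed deals into an importable event file. The sketch below assumes that pattern; the field names and output format are assumptions for illustration, not a fixed GA4 import schema.

```python
import pandas as pd

# Hypothetical CRM export of closed deals. The join back to web data is
# only possible because the GA4 client ID was stored on the original lead.
deals = pd.DataFrame({
    "ga4_client_id": ["123.456", "789.012"],
    "close_date": ["2024-06-10", "2024-06-12"],
    "deal_value": [12000, 8500],
})

# Shape the rows into an event-per-line file ready for import into GA4
# (via Data Import or the Measurement Protocol) or into your warehouse.
events = deals.rename(columns={"ga4_client_id": "client_id"})
events["event_name"] = "offline_sale"
events.to_csv("offline_conversions.csv", index=False)
```

Without that stored client ID, the offline conversions exist, but there's nothing to join them on, and they stay invisible to your campaign measurement.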
Analytical Maturity: Where Is Your Organisation?
How you close these gaps depends on where your organisation is analytically. Not every team needs BigQuery. Not every team can use it, even if they want to. Being honest about your current maturity level leads to better decisions than trying to implement infrastructure you're not ready for.
There are broadly three levels:
Level 1: GA4's UI and spreadsheets. You're working within GA4's UI, pulling reports manually, and combining data in Excel or Google Sheets. Most organisations are here. It's not a failure state; it's a starting point. Excel is a legitimate and underrated tool for campaign analysis when used deliberately.

Level 2: manual multi-source joins. You're pulling data from GA4, Meta, Google Ads, and other sources, and joining them manually in a spreadsheet. This works for regular reporting but breaks down at scale: too many sources, too much manual work, too many opportunities for human error.

Level 3: a warehouse-backed pipeline. Your GA4 data flows into BigQuery automatically. Ad platform data is pulled in via connectors or API. CRM data is joined at the user or session level. You can write SQL to answer questions that no dashboard can answer out of the box. Only the most analytically mature organisations operate here consistently.
The measurement strategy workshop surfaces which level you're at — because when you start mapping data points and asking "where does that live?", the answers tell you very quickly what infrastructure you have and what you're missing.
The Case for Excel
Excel gets dismissed as unsophisticated. That's unfair. For teams that aren't ready for BigQuery — or for campaigns where the data volume doesn't justify the infrastructure — Excel is the right tool.
What Excel does well for campaign measurement:
- Combining GA4 exports with ad platform exports for a cross-channel view
- Building a unified campaign performance report that pulls from multiple sources
- Doing attribution analysis manually for campaigns where the volume is manageable
- Tracking actuals against targets when the KPIs are defined upfront
The limitation of Excel isn't the tool — it's the manual work required to keep it current, and the fact that it breaks when data volumes get large or the number of sources grows. But for a team running one or two campaigns at a time with data from three or four sources, a well-structured Excel workbook is faster to build and easier to share than a BigQuery pipeline.
If your measurement strategy workshop reveals that you need to combine data sources and BigQuery isn't feasible in the campaign timeframe — and it often isn't, because setting up a proper BigQuery export and data pipeline takes time and engineering resource — Excel wins. A working analysis in Excel beats a half-built BigQuery pipeline every time.
When BigQuery Is the Answer
BigQuery becomes the right answer when one or more of the following is true:
- You're running campaigns at a scale where manual data combination is no longer feasible
- You need to join GA4 behavioural data with CRM or sales data at the user level
- You need to answer questions that require event-level data — not aggregated reports
- You want to build a historical data archive that outlasts GA4's retention limits
- You're doing A/B testing and need to join experiment assignment data with conversion outcomes
The GA4 BigQuery export gives you raw, event-level data for every session and interaction in your property. Combined with data from your ad platforms, CRM, and internal systems, it becomes the foundation for analysis that's simply not possible inside GA4's UI.
Common data sources joined in BigQuery for campaign measurement:
| Data Source | What It Adds | Join Key |
|---|---|---|
| GA4 BigQuery export | Behavioural data, session data, on-site conversion events | user_pseudo_id, session_id |
| Google Ads | Impression, click, cost data at campaign/ad group level | campaign_id, date |
| Meta Ads | Cross-platform attribution, reach and frequency data | campaign_id, date |
| CRM (Salesforce, HubSpot) | Lead quality, pipeline value, close rate by source | email, user_id |
| Internal sales data | Revenue, LTV, product mix by acquisition source | transaction_id, user_id |
| A/B testing platform | Experiment variant assignment per user | user_pseudo_id, client_id |
The power of this setup is that you can ask questions like: of the users who clicked a Meta ad, visited the site, and submitted a lead form, how many became paying customers — and what was their average contract value? That question cannot be answered inside GA4. It requires joining at least three data sources at the user level.
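The shape of that three-way join can be sketched with pandas stand-ins for the BigQuery tables. In production this would be SQL over the exports in the table above; every ID, column name, and value here is an illustrative assumption.

```python
import pandas as pd

# GA4 export stand-in: users who submitted the lead form, by source.
ga4 = pd.DataFrame({
    "user_pseudo_id": ["u1", "u2", "u3"],
    "source": ["meta", "meta", "google"],
    "lead_submitted": [1, 1, 1],
})

# CRM stand-in: which of those leads became customers.
crm = pd.DataFrame({
    "user_pseudo_id": ["u1", "u2"],
    "became_customer": [1, 0],
})

# Internal sales data stand-in: contract values for closed customers.
sales = pd.DataFrame({
    "user_pseudo_id": ["u1"],
    "contract_value": [24000],
})

# User-level join across all three sources.
funnel = (
    ga4.merge(crm, on="user_pseudo_id", how="left")
       .merge(sales, on="user_pseudo_id", how="left")
)

# Of Meta-driven leads, how many closed, and at what average value?
meta_leads = funnel[funnel["source"] == "meta"]
customers = meta_leads[meta_leads["became_customer"] == 1]
print(len(meta_leads), len(customers), customers["contract_value"].mean())
```

The left joins are deliberate: leads that never reached the CRM or never closed stay in the frame, so conversion rates are computed against the full lead population rather than only the matched rows.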
But building this infrastructure properly takes time, engineering resource, and ongoing maintenance. It's not a campaign sprint activity — it's an investment in analytical capability that pays off over many campaigns. If you're considering setting up a GA4 BigQuery export, start here.
Being Realistic About the Timeline
One of the most important things a measurement strategy workshop does is surface the gap between what a team wants to measure and what they can realistically measure before a campaign launches.
If the workshop reveals that answering your primary measurement question requires a BigQuery pipeline that doesn't exist yet, you have a choice: delay the campaign until the infrastructure is ready, redefine the KPI around what you can measure, or accept that you'll be doing manual data combination in Excel for this campaign while you build toward something better.
None of those answers is wrong. The worst outcome is pretending the gap doesn't exist, launching the campaign, and realising at the end that you can't answer the question you needed to answer.
Analytical maturity is built over time. The measurement strategy workshop is a useful forcing function for that journey — it makes the gaps visible, creates a record of what's needed, and gives the technical team a clear brief for what to build next.
Where to Start
If you've read all three parts of this series, the path forward is clear:
- Get your GA4 instance clean — run the pre-campaign checklist or the automated audit
- Run a measurement strategy workshop before your next significant campaign — define objectives, map tactics, identify gaps
- Be honest about where you are analytically — Excel is fine if that's where you are, BigQuery is the destination if you're ready to invest
If you need help at any stage — whether that's auditing your GA4 property, facilitating a measurement strategy workshop, or building out a BigQuery data pipeline — get in touch.