Product analytics tools fail the same way. A team adopts Amplitude, instruments a few events, builds some charts, and then finds that the questions they actually want to answer — why are users churning, which features drive retention, what does the activation path look like for power users — can't be answered cleanly because the underlying event data doesn't support them.

The charts are there. The data isn't right. You can build a funnel, but the conversion rates look wrong because the events at each step are defined inconsistently. You can look at retention, but the retained cohort and the churned cohort don't look meaningfully different because the events that distinguish them weren't tracked.

The investment that makes Amplitude work isn't in learning the UI. It's in getting the event taxonomy right before you start asking questions.

Building your event taxonomy

An event taxonomy is the complete, structured list of events your product tracks — what each event is called, when it fires, and what properties it carries. It's the foundation everything else in Amplitude is built on. A messy taxonomy creates a messy product analytics practice. A clean one makes analysis significantly more tractable.

Naming conventions

The naming convention with the best balance of readability and queryability is noun + past-tense verb in snake_case. The noun names the thing acted on; the verb describes what the user did to it.

| Action | Good event name | Avoid |
| --- | --- | --- |
| User clicks a button | button_clicked | click, Button Click, btn_clk |
| User views a page | page_viewed | pageview, view_page, Page View |
| User completes signup | signup_completed | signup, user_signup, Signup Complete |
| User starts a trial | trial_started | start_trial, trial, TrialStart |
| User invites a teammate | teammate_invited | invite, send_invite, Invite Sent |
| User upgrades plan | plan_upgraded | upgrade, plan_change, Upgrade |

The goal is that any team member can read an event name in a chart and immediately understand what it represents. Avoid abbreviations, camelCase, inconsistent capitalization, or event names that describe the technical implementation rather than the user action.

Consistency matters more than perfection. A slightly imperfect naming convention applied consistently across 200 events is significantly more usable than a theoretically better convention applied to 80% of events with exceptions and variations for the rest. Establish the convention before you start instrumenting, and enforce it in code review.
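One way to make the convention enforceable rather than aspirational is to encode the taxonomy in the codebase itself. A minimal sketch in TypeScript, assuming the @amplitude/analytics-browser SDK; the event names and property sets are illustrative:

```typescript
import * as amplitude from '@amplitude/analytics-browser';

// The taxonomy as a type: every event name and its required
// properties live in one place, reviewed like any other code change.
type EventMap = {
  signup_completed: { referral_source: string };
  trial_started: { plan: string };
  report_exported: { report_type: string; format: string; destination: string };
};

// Misspelled names or missing properties fail at compile time
// instead of showing up as orphan events in Amplitude.
function track<E extends keyof EventMap>(event: E, properties: EventMap[E]): void {
  amplitude.track(event, properties);
}

track('report_exported', { report_type: 'funnel', format: 'csv', destination: 'email' });
// track('reportExported', ...) would be a compile error: not in the taxonomy.
```

Code review then only has to check that new events are added to the map with convention-compliant names, not that every call site spelled them correctly.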

Properties every event should carry

Event properties are what transform a signal ("this happened") into analyzable data ("this happened, in this context, to this type of user, on this platform"). There's a core set of properties that should be on every event in your taxonomy:

Universal event properties
| Property | Example values | Why it matters |
| --- | --- | --- |
| platform | web, ios, android | essential for segmenting behavior by surface |
| page_type | dashboard, settings, onboarding, etc. | where in the product this happened |
| user_plan | free, trial, pro, enterprise | behavior varies significantly by plan |
| user_role | admin, member, viewer | actions mean different things for different roles |
| session_id | unique per session | connects events within a single session for flow analysis |

Beyond the universal set, each event should carry properties specific to its context. A report_exported event should carry the report type, format, and destination. A feature_used event should carry the feature name and the context it was accessed from. The question to ask for each event: what would make this event more useful to analyze?
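The universal set is easiest to guarantee with a thin wrapper that merges it into every payload, so individual call sites can't forget it. A sketch, again assuming the @amplitude/analytics-browser SDK; the helper functions are hypothetical stand-ins for whatever your app provides:

```typescript
import * as amplitude from '@amplitude/analytics-browser';

// Hypothetical app-side helpers; replace with your real implementations.
const getCurrentUser = () => ({ plan: 'pro', role: 'admin' });
const getSessionId = () => 'session-placeholder';
const getPageType = () => 'dashboard';

// Every event goes through this wrapper, so the universal
// properties are attached once, in one place.
function trackWithContext(event: string, properties: Record<string, unknown> = {}): void {
  const user = getCurrentUser();
  amplitude.track(event, {
    platform: 'web',
    page_type: getPageType(),
    user_plan: user.plan,
    user_role: user.role,
    session_id: getSessionId(),
    ...properties, // event-specific properties layered on top
  });
}

trackWithContext('report_exported', { report_type: 'retention', format: 'csv', destination: 'download' });
```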

User properties vs event properties

User properties in Amplitude are attributes that describe the user and persist across events — plan type, signup date, company size, acquisition channel. Event properties describe the specific instance of the event. Getting this distinction right matters because it determines what you can segment on in retention and cohort analysis.

A common mistake is storing user-level attributes as event properties. If you want to compare retention curves by signup cohort or plan type, those need to be user properties that Amplitude can use to group users — not event properties that only exist on the events where you happened to include them.
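The distinction maps onto two different SDK calls. A sketch using the @amplitude/analytics-browser API, with illustrative values:

```typescript
import * as amplitude from '@amplitude/analytics-browser';

// User properties: set via Identify, persist across all
// subsequent events, and are available for cohort grouping.
amplitude.setUserId('user-48210');
const identify = new amplitude.Identify();
identify.set('user_plan', 'pro');
identify.set('signup_date', '2024-03-12');
identify.set('acquisition_channel', 'organic_search');
amplitude.identify(identify);

// Event properties: describe this specific instance only.
amplitude.track('plan_upgraded', {
  previous_plan: 'trial',
  new_plan: 'pro',
  billing_period: 'annual',
});
```

With user_plan stored as a user property, Amplitude can group retention curves by plan even on events that never carry it.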

Setting up funnels that answer product questions

Amplitude's funnel analysis is powerful, but only if the funnel is designed around a specific question. A funnel built around "let's see what users do" produces noise. A funnel built around "what percentage of users who start onboarding complete it, and where exactly do they drop off" produces insight.

Define the question before you build the funnel

Before opening the funnel builder, write down:

The question the funnel should answer, specific enough that the result can change a decision
The exact event that defines each step, in order
Which users should be included: the cohort and time period the question applies to

The last question is easy to skip and frequently matters the most. An onboarding funnel that includes all users, including those who already completed onboarding months ago, will look very different from one scoped to users in their first week. Amplitude's funnel builder lets you apply user segments and time constraints — use them.

The activation funnel

The most important funnel for most product teams is the activation funnel — the path from signup to the moment a user first experiences the product's core value. The exact shape varies by product, but the structure is consistent: signup_completed → [key setup steps] → [first value moment].

The value in the activation funnel isn't just the overall conversion rate. It's the step-level drop-off. When 60% of users drop between "connected data source" and "created first report," that's a product problem worth fixing. The funnel tells you where it is. The user behavior analysis tells you why.

Measuring funnel completion over time

Amplitude's conversion window setting is easy to overlook and significantly affects your numbers. A funnel with a 1-day conversion window and a funnel with a 7-day window will show different completion rates — neither is wrong, but they answer different questions. Be deliberate about the window you choose and document why you chose it, so comparisons over time are meaningful.
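To make the difference concrete, here's a self-contained sketch of funnel conversion computed from a raw event export with an explicit window parameter. The event shape and event names are illustrative, not Amplitude's export format:

```typescript
// Simplified event shape for the sketch.
type RawEvent = { userId: string; name: string; time: number }; // ms since epoch

// A user enters the funnel at their first occurrence of the first step,
// and converts if the remaining steps occur in order within the window.
function funnelConversion(events: RawEvent[], steps: string[], windowMs: number): number {
  const byUser = new Map<string, RawEvent[]>();
  for (const e of events) {
    const list = byUser.get(e.userId) ?? [];
    list.push(e);
    byUser.set(e.userId, list);
  }

  let entered = 0;
  let converted = 0;
  for (const userEvents of byUser.values()) {
    const sorted = [...userEvents].sort((a, b) => a.time - b.time);
    const start = sorted.find((e) => e.name === steps[0]);
    if (!start) continue;
    entered++;
    let next = 1;
    for (const e of sorted) {
      if (e.time < start.time || e.time > start.time + windowMs) continue;
      if (e.name === steps[next]) next++;
      if (next === steps.length) {
        converted++;
        break;
      }
    }
  }
  return entered === 0 ? 0 : converted / entered;
}

// Same events, different windows, different conversion rates:
// funnelConversion(events, ['signup_completed', 'report_created'], 24 * 60 * 60 * 1000);
// funnelConversion(events, ['signup_completed', 'report_created'], 7 * 24 * 60 * 60 * 1000);
```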

Retention analysis — what to measure and how to use it

Retention is where Amplitude earns its reputation. The retention chart — showing what percentage of users who did X in week 1 came back to do Y in subsequent weeks — is the closest thing product analytics has to a north star metric. It tells you whether users are finding enough value to return, and whether that changes over time.

N-day vs bracket retention

Amplitude offers multiple retention definitions. The two most useful for most product teams:

N-day retention: a user counts as retained on day N only if they perform the return event on exactly that day. It's the strictest definition, and fits products with an expected daily usage rhythm.
Bracket retention: you define custom windows (days 1-3, days 4-7, days 8-30, and so on), and a user counts as retained in a bracket if they return at any point within it.

Most teams default to N-day retention because it's the standard, but bracket retention is often more meaningful for B2B tools where weekly or monthly use is the expected pattern rather than daily.

The retained vs churned user comparison

The most valuable retention analysis is comparing what retained users did in their first session or first week versus what churned users did. This is where you find the behaviors that predict retention — the features used, the setup steps completed, the actions taken that distinguish users who stay from users who leave.

Build this as a behavioral cohort comparison in Amplitude: cohort A is users retained to week 4, cohort B is users who churned before week 4. Compare their event frequency, funnel completion, and feature usage in week 1. The differences tell you what your activation flow should be optimizing for.
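Amplitude's cohort UI handles this directly, but if you're working from a raw event export the comparison is simple to sketch. Assumptions here: a simplified event shape, week 1 means days 0-6 from a user's first activity, and week 4 means days 21-27:

```typescript
// Simplified event shape for the sketch; a real export has more fields.
type RawEvent = { userId: string; name: string; time: number }; // ms since epoch

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function compareWeekOneBehavior(events: RawEvent[]) {
  // First-seen timestamp per user anchors the week offsets.
  const firstSeen = new Map<string, number>();
  for (const e of events) {
    firstSeen.set(e.userId, Math.min(firstSeen.get(e.userId) ?? Infinity, e.time));
  }

  // Retained = any activity during week 4 (days 21-27 from first seen).
  const retainedUsers = new Set<string>();
  for (const e of events) {
    const offset = e.time - firstSeen.get(e.userId)!;
    if (offset >= 3 * WEEK_MS && offset < 4 * WEEK_MS) retainedUsers.add(e.userId);
  }

  // Total week-1 event counts by event name for each cohort.
  // In a real analysis, normalize by cohort size before comparing.
  const retained = new Map<string, number>();
  const churned = new Map<string, number>();
  for (const e of events) {
    if (e.time - firstSeen.get(e.userId)! >= WEEK_MS) continue; // week 1 only
    const bucket = retainedUsers.has(e.userId) ? retained : churned;
    bucket.set(e.name, (bucket.get(e.name) ?? 0) + 1);
  }
  return { retained, churned };
}
```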

The "aha moment" isn't always what you think. Teams often assume they know what their product's value moment is — the thing that makes users stick. Retention analysis frequently reveals something different. The feature your team is most proud of may have no correlation with retention. The small workflow step nobody talks about may be the strongest predictor. Let the data tell you before you build your activation strategy around assumptions.

Implementation checklist before you go live

Event taxonomy documented — every event name, trigger condition, and required properties written down and reviewed before instrumentation starts
Naming convention enforced — snake_case noun + past-tense verb, consistent capitalization, no abbreviations that aren't in the shared glossary
Universal properties on every event — platform, page_type, user_plan, user_role included in every event payload
User identification implemented — identify() called when users sign in, user properties set correctly, anonymous → identified user stitching working
Test environment separated — development and staging events routed to a separate Amplitude project, not production (see the sketch after this checklist)
Key user journeys QA'd — onboarding flow, core feature use, conversion events verified in Amplitude's event stream before launch
Activation and retention funnels built — core funnels set up with the right cohort scoping and conversion windows before teams start using the data
Chart governance defined — shared dashboards for key metrics, naming conventions for saved charts, process for deprecating outdated analyses
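For the test-environment item, a minimal sketch of environment routing, assuming one Amplitude project per environment; the key values and environment-variable handling are illustrative:

```typescript
import * as amplitude from '@amplitude/analytics-browser';

// Each environment gets its own Amplitude project, so development
// noise never lands in the production dataset.
const AMPLITUDE_KEYS: Record<string, string> = {
  production: 'PROD_PROJECT_API_KEY',
  staging: 'STAGING_PROJECT_API_KEY',
  development: 'DEV_PROJECT_API_KEY',
};

const env = process.env.NODE_ENV ?? 'development';
amplitude.init(AMPLITUDE_KEYS[env] ?? AMPLITUDE_KEYS.development);
```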

The temptation is to skip the checklist and start analyzing. Every team that does this ends up back at the beginning within three months — either rebuilding the taxonomy or living with data they can't fully trust. The upfront investment in getting the foundation right pays back on every analysis you run for the lifetime of the implementation.

If you're running GA4 alongside Amplitude, make sure both implementations are clean. Broken GA4 data alongside solid Amplitude data creates contradictory signals that are difficult to reconcile. Our GA4 audit finds the implementation issues before they become decisions made on bad data. Run the GA4 audit — $79.