Product analytics tools fail the same way. A team adopts Amplitude, instruments a few events, builds some charts, and then finds that the questions they actually want to answer — why are users churning, which features drive retention, what does the activation path look like for power users — can't be answered cleanly because the underlying event data doesn't support them.
The charts are there. The data isn't right. You can build a funnel, but the conversion rates look wrong because the events at each step are defined inconsistently. You can look at retention, but the retained cohort and the churned cohort don't look meaningfully different because the events that distinguish them weren't tracked.
The investment that makes Amplitude work isn't in learning the UI. It's in getting the event taxonomy right before you start asking questions.
Building your event taxonomy
An event taxonomy is the complete, structured list of events your product tracks — what each event is called, when it fires, and what properties it carries. It's the foundation everything else in Amplitude is built on. A messy taxonomy creates a messy product analytics practice. A clean one makes analysis significantly more tractable.
Naming conventions
The naming convention with the best balance of readability and queryability is noun + past-tense verb in snake_case, as in signup_completed. The noun names what was acted on; the past-tense verb names what the user did to it.
| Action | Good event name | Avoid |
|---|---|---|
| User clicks a button | button_clicked | click, Button Click, btn_clk |
| User views a page | page_viewed | pageview, view_page, Page View |
| User completes signup | signup_completed | signup, user_signup, Signup Complete |
| User starts a trial | trial_started | start_trial, trial, TrialStart |
| User invites a teammate | teammate_invited | invite, send_invite, Invite Sent |
| User upgrades plan | plan_upgraded | upgrade, plan_change, Upgrade |
The goal is that any team member can read an event name in a chart and immediately understand what it represents. Avoid abbreviations, camelCase, inconsistent capitalization, or event names that describe the technical implementation rather than the user action.
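A convention only holds if it's checkable. Here's a small lint sketch for the noun + past-tense-verb snake_case pattern described above; `isValidEventName` is a hypothetical helper, not part of any Amplitude SDK, and the past-tense check is a heuristic, not a grammar parser.

```typescript
// Heuristic lint for the noun_verbed snake_case convention.
function isValidEventName(name: string): boolean {
  // Must be lowercase snake_case with at least two segments: object + action.
  if (!/^[a-z]+(_[a-z]+)+$/.test(name)) return false;
  // The final segment should read as a past-tense verb; "ends in 'ed'"
  // is a rough heuristic that catches most conventional names.
  const segments = name.split("_");
  return segments[segments.length - 1].endsWith("ed");
}

// Names from the table above:
console.log(isValidEventName("signup_completed")); // true
console.log(isValidEventName("Button Click"));     // false: not snake_case
console.log(isValidEventName("btn_clk"));          // false: no past-tense verb
console.log(isValidEventName("pageview"));         // false: single segment
```

A check like this can run in CI against your tracking plan so naming drift is caught at review time rather than discovered in a chart six months later.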
Properties every event should carry
Event properties are what transform a signal ("this happened") into analyzable data ("this happened, in this context, to this type of user, on this platform"). There's a core set of properties worth carrying on every event in your taxonomy — for example:
- the platform (web, iOS, Android)
- the app or client version
- the screen or surface the event fired from
- the user's current plan or role
Beyond the universal set, each event should carry properties specific to its context. A report_exported event should carry the report type, format, and destination. A feature_used event should carry the feature name and the context it was accessed from. The question to ask for each event: what would make this event more useful to analyze?
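One way to make "base properties plus event-specific context" concrete is to encode it in types. This is a sketch only — the base property names here (platform, app_version, screen) are illustrative choices, not a fixed standard, and the report_exported properties follow the example in the paragraph above.

```typescript
// Base properties stamped on every event in the taxonomy (names illustrative).
interface BaseEventProperties {
  platform: "web" | "ios" | "android";
  app_version: string;
  screen: string; // where in the product the event fired
}

// Each event extends the base with context specific to that action.
interface ReportExportedProperties extends BaseEventProperties {
  report_type: string;
  format: "csv" | "pdf" | "png";
  destination: "download" | "email" | "slack";
}

const event: { name: string; properties: ReportExportedProperties } = {
  name: "report_exported",
  properties: {
    platform: "web",
    app_version: "3.2.1",
    screen: "report_detail",
    report_type: "retention",
    format: "csv",
    destination: "download",
  },
};

console.log(event.properties.format); // "csv"
```

Typing the payloads means a missing base property fails at compile time instead of showing up as a gap in a segment six weeks later.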
User properties vs event properties
User properties in Amplitude are attributes that describe the user and persist across events — plan type, signup date, company size, acquisition channel. Event properties describe the specific instance of the event. Getting this distinction right matters because it determines what you can segment on in retention and cohort analysis.
A common mistake is storing user-level attributes as event properties. If you want to compare retention curves by signup cohort or plan type, those need to be user properties that Amplitude can use to group users — not event properties that only exist on the events where you happened to include them.
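The distinction is easier to see in a toy model. This is not the Amplitude SDK — it's a minimal simulation of the behavior: user properties persist and get stamped onto every subsequent event, while event properties exist only on the event they were sent with.

```typescript
// Toy model (not the Amplitude SDK) of user vs event properties.
type Props = Record<string, string>;

interface StoredEvent {
  user_id: string;
  name: string;
  event_props: Props;
  user_props: Props;
}

class ToyAnalytics {
  private userProps = new Map<string, Props>();
  readonly events: StoredEvent[] = [];

  // Set persistent attributes describing the user.
  identify(userId: string, props: Props): void {
    this.userProps.set(userId, {
      ...(this.userProps.get(userId) ?? {}),
      ...props,
    });
  }

  // Record an event; the user's current properties are stamped onto it,
  // which is what makes "group retention by plan_type" possible later.
  track(userId: string, name: string, eventProps: Props = {}): void {
    this.events.push({
      user_id: userId,
      name,
      event_props: eventProps,
      user_props: { ...(this.userProps.get(userId) ?? {}) },
    });
  }
}

const a = new ToyAnalytics();
a.identify("u1", { plan_type: "pro" });
a.track("u1", "report_exported", { format: "csv" });
a.track("u1", "page_viewed");

// plan_type rides on both events; format exists only where it was sent.
console.log(a.events[1].user_props.plan_type); // "pro"
console.log(a.events[1].event_props.format);   // undefined
```

If plan_type had been sent as an event property on report_exported only, page_viewed events would carry no plan information and a retention chart grouped by plan would silently exclude them.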
Setting up funnels that answer product questions
Amplitude's funnel analysis is powerful, but only if the funnel is designed around a specific question. A funnel built around "let's see what users do" produces noise. A funnel built around "what percentage of users who start onboarding complete it, and where exactly do they drop off" produces insight.
Define the question before you build the funnel
Before opening the funnel builder, write down:
- What action is the user trying to complete?
- What's the natural starting point of that journey?
- What's the completion event that signals success?
- What steps between start and completion are meaningful decision points?
- What cohort of users does this funnel apply to?
The last question is easy to skip and frequently matters the most. An onboarding funnel that includes all users, including those who already completed onboarding months ago, will look very different from one scoped to users in their first week. Amplitude's funnel builder lets you apply user segments and time constraints — use them.
The activation funnel
The most important funnel for most product teams is the activation funnel — the path from signup to the moment a user first experiences the product's core value. The exact shape varies by product, but the structure is consistent: signup_completed → [key setup steps] → [first value moment].
The value in the activation funnel isn't just the overall conversion rate. It's the step-level drop-off. When 60% of users drop between "connected data source" and "created first report," that's a product problem worth fixing. The funnel tells you where it is. The user behavior analysis tells you why.
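The step-level arithmetic is straightforward once events are ordered per user. This sketch assumes a simplified data shape (a map of user to chronological event names) and requires steps to occur in order; real funnel tools handle timestamps, windows, and repeats, which this deliberately omits.

```typescript
// Sketch: step-level funnel counts from per-user ordered event names.
function funnelCounts(
  steps: string[],
  eventsByUser: Record<string, string[]>,
): number[] {
  const counts = steps.map(() => 0);
  for (const events of Object.values(eventsByUser)) {
    let cursor = 0;
    for (let s = 0; s < steps.length; s++) {
      // Look for this step at or after where the previous step matched,
      // so steps must happen in order.
      const idx = events.indexOf(steps[s], cursor);
      if (idx === -1) break;
      counts[s]++;
      cursor = idx + 1;
    }
  }
  return counts;
}

const steps = ["signup_completed", "data_source_connected", "report_created"];
const users = {
  u1: ["signup_completed", "data_source_connected", "report_created"],
  u2: ["signup_completed", "data_source_connected"],
  u3: ["signup_completed"],
  u4: ["report_created"], // out of order: never entered the funnel
};

console.log(funnelCounts(steps, users)); // [3, 2, 1]
```

Step-to-step drop-off falls out directly: 2 of 3 users who connected a data source created a report, so the last step loses a third of them — the kind of number that points at a specific product problem.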
Measuring funnel completion over time
Amplitude's conversion window setting is easy to overlook and significantly affects your numbers. A funnel with a 1-day conversion window and a funnel with a 7-day window will show different completion rates — neither is wrong, but they answer different questions. Be deliberate about the window you choose and document why you chose it, so comparisons over time are meaningful.
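The effect of the window is easy to demonstrate on a single user. This sketch uses days as the time unit for readability (real events carry timestamps) and a made-up two-step funnel; it is an illustration of the concept, not Amplitude's internal logic.

```typescript
// Sketch: the same two-step funnel under different conversion windows.
interface Ev {
  name: string;
  day: number; // days since some reference point; illustrative unit
}

function completedWithinWindow(
  events: Ev[],
  start: string,
  finish: string,
  windowDays: number,
): boolean {
  const s = events.find((e) => e.name === start);
  if (!s) return false;
  // Completed if the finish event happens after the start, within the window.
  return events.some(
    (e) => e.name === finish && e.day >= s.day && e.day - s.day <= windowDays,
  );
}

// A user who starts a trial on day 0 and upgrades on day 4:
const user: Ev[] = [
  { name: "trial_started", day: 0 },
  { name: "plan_upgraded", day: 4 },
];

console.log(completedWithinWindow(user, "trial_started", "plan_upgraded", 1)); // false
console.log(completedWithinWindow(user, "trial_started", "plan_upgraded", 7)); // true
```

The same user converts under a 7-day window and doesn't under a 1-day window — neither answer is wrong, which is exactly why the chosen window needs to be documented alongside the funnel.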
Retention analysis — what to measure and how to use it
Retention is where Amplitude earns its reputation. The retention chart — showing what percentage of users who did X in week 1 came back to do Y in subsequent weeks — is the closest thing product analytics has to a north star metric. It tells you whether users are finding enough value to return, and whether that changes over time.
N-day vs bracket retention
Amplitude offers multiple retention definitions. The two most useful for most product teams:
- N-day retention — did the user return on exactly day N? Useful for products with a natural daily or weekly cadence (tools used regularly as part of a workflow).
- Bracket retention — did the user return at any point within a time bracket (e.g., week 2, week 3)? More appropriate for products used less frequently, where exact-day retention would understate actual engagement.
Most teams default to N-day retention because it's the standard, but bracket retention is often more meaningful for B2B tools where weekly or monthly use is the expected pattern rather than daily.
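The difference between the two definitions is easiest to see on a weekly-use pattern. This sketch assumes a simplified input (each user's list of active days, counted from their first event) and computes both metrics for one cohort.

```typescript
// Sketch: N-day vs bracket retention for one cohort. activeDays maps each
// user to the days (since their first event) on which they returned.
function nDayRetention(activeDays: number[][], n: number): number {
  // Retained only if the user was active on exactly day n.
  const retained = activeDays.filter((days) => days.includes(n)).length;
  return retained / activeDays.length;
}

function bracketRetention(
  activeDays: number[][],
  fromDay: number,
  toDay: number,
): number {
  // Retained if the user was active at any point inside the bracket.
  const retained = activeDays.filter((days) =>
    days.some((d) => d >= fromDay && d <= toDay)
  ).length;
  return retained / activeDays.length;
}

// Four users with a roughly-weekly B2B usage pattern:
const cohort = [
  [0, 7, 14], // returns exactly weekly
  [0, 9],     // returns in week 2, but not on day 7
  [0, 8, 16], // likewise off the exact day
  [0],        // churned after day 0
];

console.log(nDayRetention(cohort, 7));        // 0.25
console.log(bracketRetention(cohort, 7, 13)); // 0.75
```

Three of the four users came back in week 2, but only one of them on day 7 exactly — the N-day number reads as 25% retention while the bracket reads 75%, which is the understatement the paragraph above warns about.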
The retained vs churned user comparison
The most valuable retention analysis is comparing what retained users did in their first session or first week versus what churned users did. This is where you find the behaviors that predict retention — the features used, the setup steps completed, the actions taken that distinguish users who stay from users who leave.
Build this as a behavioral cohort comparison in Amplitude: cohort A is users retained to week 4, cohort B is users who churned before week 4. Compare their event frequency, funnel completion, and feature usage in week 1. The differences tell you what your activation flow should be optimizing for.
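The underlying comparison is simple enough to sketch. In Amplitude you'd build this as the behavioral cohort comparison described above rather than computing it by hand; the data shape here (each user's week-1 event counts) and the numbers are illustrative.

```typescript
// Sketch: compare week-1 event frequency between retained and churned users.
type WeekOneCounts = Record<string, number>; // event name -> count in week 1

function avgFrequency(cohort: WeekOneCounts[], event: string): number {
  const total = cohort.reduce((sum, u) => sum + (u[event] ?? 0), 0);
  return total / cohort.length;
}

// Illustrative cohorts: retained to week 4 vs churned before week 4.
const retained: WeekOneCounts[] = [
  { report_created: 3, teammate_invited: 1 },
  { report_created: 2, teammate_invited: 2 },
];
const churned: WeekOneCounts[] = [
  { report_created: 0 },
  { report_created: 1 },
];

// A large gap on an event is a candidate activation behavior.
console.log(avgFrequency(retained, "teammate_invited")); // 1.5
console.log(avgFrequency(churned, "teammate_invited"));  // 0
```

In this made-up data, retained users invited teammates in week 1 and churned users never did — the kind of gap that would promote teammate_invited into the activation funnel.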
Implementation checklist before you go live
- identify() called when users sign in
- user properties set correctly
- anonymous → identified user stitching working

The temptation is to skip the checklist and start analyzing. Every team that does this ends up back at the beginning within three months — either rebuilding the taxonomy or living with data they can't fully trust. The upfront investment in getting the foundation right pays back on every analysis you run for the lifetime of the implementation.
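The stitching item on the checklist is the one most often left unverified. This is a toy model of what stitching accomplishes, not the Amplitude SDK's actual mechanism: pre-login events carry only a device id, and identify() links that device id to a user id so earlier anonymous events can be attributed to the same person.

```typescript
// Toy model of anonymous -> identified stitching (not the Amplitude SDK).
interface RawEvent {
  device_id: string;
  user_id?: string; // absent on events fired before sign-in
  name: string;
}

// Attribute anonymous events to users via the device -> user mapping
// that identify() established.
function stitch(
  events: RawEvent[],
  deviceToUser: Map<string, string>,
): RawEvent[] {
  return events.map((e) => ({
    ...e,
    user_id: e.user_id ?? deviceToUser.get(e.device_id),
  }));
}

const events: RawEvent[] = [
  { device_id: "d1", name: "page_viewed" },                     // anonymous
  { device_id: "d1", user_id: "u1", name: "signup_completed" }, // identified
];
// identify() recorded that device d1 belongs to user u1:
const links = new Map([["d1", "u1"]]);

console.log(stitch(events, links)[0].user_id); // "u1"
```

If stitching is broken, the anonymous page_viewed stays attributed to nobody, your activation funnel starts at signup_completed with pre-signup behavior invisible, and week-1 retention cohorts undercount early activity.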
