You asked for churn by cohort
Your AI gave you COUNT(*) with an INNER JOIN and half your users vanished.
Collective intelligence for data analysts
Every fixed query. Every "wrong chart type." Every "that's not how you calculate churn." Captured. Classified. Applied to every generation.
No proprietary data is ever shared — only generic analysis patterns and improvements. Your actual data, company names, and credentials never leave your machine.
The Problem
Your AI gave you COUNT(*) with an INNER JOIN and half your users vanished.
It gave you a pie chart.
It started with "This document provides a comprehensive overview of…"
The AI gets you 80% of the way. The last 20% is where you live. Bamboo fixes it before you have to.
How It Works
"Analyse churn by signup cohort"
LEFT JOIN. Proper denominator. Line chart. Cohorted. Labelled.
Your fix feeds the community pool. Next analyst's output is better because you used bamboo today.
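That before/after in miniature, using stdlib sqlite3 with toy tables (the `users`/`events` names and cohort values are illustrative, not from bamboo):

```python
import sqlite3

# Toy data: 4 users across two signup cohorts, but only users 1 and 3
# have any activity this period.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER, cohort TEXT);
CREATE TABLE events (user_id INTEGER, month TEXT);
INSERT INTO users  VALUES (1,'2024-01'),(2,'2024-01'),(3,'2024-02'),(4,'2024-02');
INSERT INTO events VALUES (1,'2024-03'),(3,'2024-03');
""")

# INNER JOIN: users with no events vanish -- only 2 of 4 users survive.
inner = conn.execute(
    "SELECT COUNT(*) FROM users u JOIN events e ON e.user_id = u.id"
).fetchone()[0]

# LEFT JOIN keeps every signup; inactive users surface as NULLs, so the
# denominator is everyone who signed up, not everyone who stayed.
churn = conn.execute("""
    SELECT u.cohort,
           COUNT(*) AS signed_up,
           SUM(CASE WHEN e.user_id IS NULL THEN 1 ELSE 0 END) AS churned
    FROM users u
    LEFT JOIN events e ON e.user_id = u.id
    GROUP BY u.cohort
    ORDER BY u.cohort
""").fetchall()

print(inner)  # 2 -- half your users vanished
print(churn)  # [('2024-01', 2, 1), ('2024-02', 2, 1)]
```

Same question, two joins: one silently shrinks the denominator, the other reports churn against every signup.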
What's Inside
Each one is a real mistake caught by a real analyst.
Wrong joins that silently drop your data. Date traps. Window frame defaults that nobody remembers correctly.
INNER JOIN users ON ... → LEFT JOIN users ON ..., then filter NULLs explicitly
OVER (PARTITION BY ...) — default RANGE frame → OVER (PARTITION BY ... ROWS BETWEEN ...)
WHERE created_at = '2024-03-01' → WHERE created_at >= '2024-03-01' AND created_at < '2024-03-02'
Wrong denominators that quietly lie to stakeholders. Growth calcs that flip sign on negative base periods.
AVG(regional_average) → SUM(total) / SUM(count) — weighted average
end_of_period_count as the growth denominator → start_of_period_count
COUNT(user_id) after a JOIN → COUNT(DISTINCT user_id) — JOINs create fanout
The right chart for the data. Axes that don't mislead. Colours that work for everyone.
Revenue by region on a y-axis cut off well above $0 → axis starting at $0 — truncated axes exaggerate differences
NULL handling that doesn't surprise you at 2am. Timezone logic that actually works across DST boundaries.
WHERE status = NULL — returns 0 rows → WHERE status IS NULL
pd.to_datetime(col) with mixed formats → pd.to_datetime(col, format='%Y-%m-%d') — explicit format
Lead with the finding. Compare to something. Know your audience. Stop hedging every number.
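The denominator trap above, worked through with made-up regional figures:

```python
# Two regions: one big, one small. Per-region averages hide the weighting.
regions = [
    {"name": "NA",   "total": 900.0, "count": 90},  # per-user avg 10.0
    {"name": "APAC", "total": 40.0,  "count": 2},   # per-user avg 20.0
]

# Wrong: AVG(regional_average) gives a 2-user region the same weight
# as a 90-user region.
avg_of_avgs = sum(r["total"] / r["count"] for r in regions) / len(regions)

# Right: SUM(total) / SUM(count) weights every user equally.
weighted = sum(r["total"] for r in regions) / sum(r["count"] for r in regions)

print(avg_of_avgs)  # 15.0
print(weighted)     # ~10.22
```

Two of the 92 users move the "average" from 10.2 to 15.0 — that's the number that quietly lies to stakeholders.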
Why This Is Different
It's not "awesome-sql-tips.md".
It's a living system.
When you fix a query, bamboo captures the diff. The server classifies it: "query-logic — always use LEFT JOIN when right table might have gaps."
When the same fix happens 5 times from 5 different analysts, the pattern gets a confidence score of 5. By the time it reaches you, the AI doesn't make that mistake anymore.
The skill gets measurably better every week. Not because someone updates a doc. Because analysts use it.
And your data stays yours. Only generic patterns like "use LEFT JOIN" or "start y-axis at zero" are shared — never your actual queries, tables, column names, or results. Every submission is filtered for PII before it touches the server.
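A filtering pass of that kind might look like the regex scrub below — the two patterns shown are a hypothetical sketch, not bamboo's actual filter, which would need to cover far more cases:

```python
import re

# Illustrative scrub: strip emails and quoted string literals from a
# pattern before it leaves the machine. (Hypothetical, not bamboo's code.)
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),
    (re.compile(r"'[^']*'"), "'<redacted>'"),
]

def scrub(text: str) -> str:
    for pattern, replacement in PII_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(scrub("WHERE email = 'ana@acme.com'"))  # WHERE email = '<redacted>'
print(scrub("use LEFT JOIN"))                 # untouched: generic patterns pass
```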
Install
Works with Claude Code and Cursor. SQL, Python, and R. Patterns update on every pull.
Paste this into Claude Code:
Fetch https://bamboo.up.railway.app and install the bamboo skill
Claude Code will read this page, download the skill file, and save it to .claude/skills/bamboo.skill. No API key needed. Set up the auto-capture hook and your corrections flow back to the community without you thinking about it.
Paste this into Cursor:
Fetch https://bamboo.up.railway.app and install the bamboo skill into this project
Cursor will read this page, find the download link, and install the skill. No API key required.
Paste this into any AI agent:
Fetch https://bamboo.up.railway.app and install the bamboo skill into this project
Any agent that can fetch URLs and write files will read this page, find the download link, and install the skill. Works with Windsurf, Codex, Aider, and others. No API key required.
You're going to fix that JOIN anyway. You're going to change that pie chart to a bar chart. You're going to move the insight to the first line.
The only question is whether the next analyst has to fix the same things.
"400 analysts already fixed that."