One tricky thing I noticed while looking into tracking brand mentions in AI search for a new Beamtrace article: we tend to treat “being mentioned” as binary – but it’s not
“I recommend X” vs “X is one of many tools” → completely different impact
Cited vs uncited mentions signal very different levels of trust
Even the position in a response (first vs last) changes visibility a lot
When you check AI answers, what do you personally consider a “good” mention? (Being recommended, being cited, or just being included at all?) I've put a rough sketch below of how I'd bucket these.
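For context, this is roughly the heuristic I use when spot-checking a single answer. It's a throwaway sketch, not a real detection method: the recommendation cues, the brand name, and the domain in the example are made-up placeholders.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Phrases treated as "recommendation" cues; purely illustrative, not exhaustive.
RECOMMEND_CUES = ("i recommend", "best choice", "top pick", "we suggest")

@dataclass
class MentionQuality:
    mentioned: bool
    recommended: bool          # brand named in the same sentence as a cue
    cited: bool                # answer points to the brand's own domain
    position: Optional[float]  # 0.0 = first sentence, 1.0 = last sentence

def assess_mention(answer: str, brand: str, brand_domain: str) -> MentionQuality:
    text = answer.lower()
    brand_l = brand.lower()
    sentences = re.split(r"(?<=[.!?])\s+", text)

    mentioned = brand_l in text
    recommended = any(
        brand_l in s and any(cue in s for cue in RECOMMEND_CUES)
        for s in sentences
    )
    cited = brand_domain.lower() in text

    # Where does the brand first show up, relative to the answer length?
    position = None
    if mentioned:
        first_hit = next(i for i, s in enumerate(sentences) if brand_l in s)
        position = first_hit / max(len(sentences) - 1, 1)

    return MentionQuality(mentioned, recommended, cited, position)

if __name__ == "__main__":
    answer = ("For tracking AI visibility I recommend YourBrand "
              "(see yourbrand.example). ToolA and ToolB are also options.")
    print(assess_mention(answer, "YourBrand", "yourbrand.example"))
```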
I’ve been thinking about this more like a funnel: mentions = awareness, recommendations = preference, citations = trust. For me, a “good” mention is when at least two of those are present. And if we only appear randomly – let’s be real, it’s just noise. If it’s consistent across prompts, then it probably means something. (Toy scoring sketch below.)
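Something like this is what I mean. It's only a toy sketch: the “at least two signals” bar and the consistency ratio are my own rules of thumb, not a standard metric.

```python
from typing import Iterable

def mention_score(mentioned: bool, recommended: bool, cited: bool) -> int:
    # Count how many funnel signals are present in one answer.
    return sum([mentioned, recommended, cited])

def is_good_mention(mentioned: bool, recommended: bool, cited: bool) -> bool:
    # "Good" per the rule of thumb above: at least two signals at once.
    return mention_score(mentioned, recommended, cited) >= 2

def consistency(per_prompt_mentions: Iterable[bool]) -> float:
    # Share of test prompts where the brand shows up at all; a one-off hit
    # is probably noise, a high share probably means something.
    results = list(per_prompt_mentions)
    return sum(results) / len(results) if results else 0.0

if __name__ == "__main__":
    # One (mentioned, recommended, cited) tuple per test prompt.
    runs = [(True, True, False), (True, False, False),
            (False, False, False), (True, True, True)]
    good = [is_good_mention(*r) for r in runs]
    print("good mentions:", sum(good), "of", len(runs))
    print("mention consistency:", consistency(m for m, _, _ in runs))
```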
Interesting question - couldn’t help but jump in, Kristina
I think being happy about “just being mentioned” is a pretty common trap (been there myself). It’s better than nothing, but if I think about it as a user, I’d trust an answer much more if a tool is actually recommended vs just listed.
Not sure about citations though. Do you really pay attention to them when reading AI answers?
Haha yes! Simply being mentioned is kind of the “bare minimum” of AI visibility nowadays.
On citations, I’d say they’re more of an authority & trust signal than something people actively evaluate. But they do change the framing: “according to X” vs just a plain statement hits differently, even if you don’t verify it.