Ask ChatGPT’s default and premium models the same question, and they’ll cite almost entirely different sources, according to a Writesonic analysis.

GPT-5.4 Thinking, ChatGPT’s premium model, sent 56% of its citations to brand websites. GPT-5.3 Instant, the default for all logged-in ChatGPT users, sent 8%.

Across all prompts, the two models shared only 7% of their cited sources. The reason comes down to how each model searches the web before answering.

Same Question, Different Search Strategy

Asked about CRM software, GPT-5.3 sent one broad query and cited techradar.com and designrevision.com. GPT-5.4 sent separate queries restricted to hubspot.com, salesforce.com, and attio.com for pricing, then checked g2.com and capterra.com for reviews.

GPT-5.4 averaged 8.5 sub-queries, many of them restricted to specific domains, and used site: operators in 156 of its 423 total queries. No other ChatGPT model tested used site: operators at all.
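The site: pattern is easy to detect in query logs. A minimal sketch, using a hypothetical sample of sub-queries shaped like those described above (the query strings are illustrative, not from Writesonic's data):

```python
import re

# Hypothetical sub-queries illustrating the domain-restricted pattern
# attributed to GPT-5.4 in the analysis.
queries = [
    "best CRM software 2025",
    "site:hubspot.com pricing plans",
    "site:salesforce.com sales cloud pricing",
    "site:attio.com pricing",
    "CRM reviews small business",
]

SITE_OP = re.compile(r"\bsite:([\w.-]+)")

restricted = [q for q in queries if SITE_OP.search(q)]
targeted_domains = {SITE_OP.search(q).group(1) for q in restricted}

print(f"{len(restricted)} of {len(queries)} queries use site: operators")
print("targeted domains:", sorted(targeted_domains))
```

Applied to a real log, the same regex would reproduce the 156-of-423 figure cited above.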

OpenAI’s documentation says ChatGPT search rewrites prompts, but doesn’t specify how models decide which domains to target or when to apply site: operators.

Where The Citations Land

GPT-5.3 leaned heavily on third-party content. Blog posts and articles made up 32% of its citations, with Forbes (15 citations), TechRadar (10), and Tom’s Guide (10) as the top domains.

GPT-5.4 went the other direction. Brand homepages accounted for 22% of citations, pricing pages 19%, and product pages 10%.

GPT-5.3 cited 4 pricing pages across all 49 conversations that triggered web search. GPT-5.4 cited 138. For brands that gate pricing behind a “contact sales” page, this could mean GPT-5.4 has less to work with when answering comparison queries.

On head-to-head comparison prompts like “HubSpot vs Salesforce vs Pipedrive,” GPT-5.3 never cited a brand website. GPT-5.4 cited brands 83% to 100% of the time on those same prompts.

How This Connects To Search Rankings

Writesonic used SerpAPI to check whether cited domains also appeared in Google and Bing results for the same query.

For GPT-5.3, 47% of cited domains also appeared in Google results. The overlap suggests that Google rankings are at least partially predictive of which sources the default model cites.

For GPT-5.4, 75% of cited domains didn’t appear in Google or Bing results for the same user prompt. That suggests GPT-5.4 may rely less on traditional search rankings and more on targeted domain queries, though that hasn’t been independently verified.
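The overlap metric itself is straightforward: normalize each URL to its domain, then count how many cited domains also show up in the SERP for the same prompt. A minimal sketch with made-up URLs (Writesonic's exact normalization rules aren't published, so this is an assumption about the method):

```python
from urllib.parse import urlparse

def domain(url: str) -> str:
    """Reduce a URL to its host, dropping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def overlap_rate(cited_urls, serp_urls) -> float:
    """Share of cited domains that also appear in the SERP for the same prompt."""
    cited = {domain(u) for u in cited_urls}
    serp = {domain(u) for u in serp_urls}
    return len(cited & serp) / len(cited) if cited else 0.0

# Hypothetical example: one of two cited domains ranks in the SERP.
cited = ["https://www.hubspot.com/pricing", "https://g2.com/products/attio"]
serp = ["https://techradar.com/best-crm", "https://www.hubspot.com/products"]
print(overlap_rate(cited, serp))  # 0.5
```

By this measure, GPT-5.3 scored roughly 0.47 against Google, while GPT-5.4 scored only 0.25 against Google and Bing combined.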

Why This Matters

Brand visibility in ChatGPT may depend on which model a user is running.

For the default model, third-party coverage on review sites and media outlets appears to drive citations. For the premium model, first-party content, particularly pricing and product pages, appears to matter more.

Looking Ahead

As ChatGPT continues rolling out new models, the patterns identified here may change.

Most cited URLs in the test sample included utm_source=chatgpt.com, giving brands a way to measure referral traffic directly in analytics.
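In practice that means ChatGPT referrals can be isolated by filtering landing-page URLs for the UTM tag. A small sketch against hypothetical analytics-export URLs (the example domain and paths are invented):

```python
from urllib.parse import urlparse, parse_qs

def is_chatgpt_referral(url: str) -> bool:
    """True if the landing-page URL carries ChatGPT's utm_source tag."""
    params = parse_qs(urlparse(url).query)
    return "chatgpt.com" in params.get("utm_source", [])

# Hypothetical landing URLs as they might appear in an analytics export.
hits = [
    "https://example.com/pricing?utm_source=chatgpt.com",
    "https://example.com/blog/crm-guide",
]
print(sum(is_chatgpt_referral(u) for u in hits))  # 1
```

Most analytics platforms can apply the same filter natively as a source/medium segment on utm_source=chatgpt.com, without custom code.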
