A simple guide for e-commerce brands in the AI search era

How parts and equipment sellers can adapt their content strategy as AI search engines like ChatGPT and Perplexity change how customers find products.


Key Insights

  • Keywords are dead; question-answer pairs are everything. AI doesn't care about keywords. Instead, it looks for concise, authoritative answers it can weave into a response for the user. These question-answer pairs should be placed strategically across pages to maximize the probability of AI picking them up.
  • Reddit and other forums (e.g., JustAnswer Appliances for appliance parts) are favored by AI models, especially ChatGPT. Discover which threads are getting cited, and engage in these spaces as a company rep to show up in more responses.
  • This new content strategy should be carefully optimized. Routinely run realistic user queries against popular models, see which of your content is being cited, and iterate from there.

Keyword stuffing is over

Traditional SEO rewarded long‑form pages packed with keyword variations. AI search engines such as ChatGPT, Perplexity, and Google’s AI Overviews now lift short, factual snippets, stats, and first‑hand quotes directly into their answers. If your brand isn’t the source of those snippets, you risk disappearing from the millions of users asking product questions to AI daily.


Why AI search optimization matters more for parts & equipment sellers

  1. Customers (both pro and DIY) are turning to AI engines for technical troubleshooting. Instead of searching Google and clicking through results, buyers now ask ChatGPT questions like “What pump fits a Whirlpool WRS325SDHZ?” or “Is part W10348269 compatible with model ABC?” If your specifications and compatibility data aren’t structured for AI extraction, you’re invisible to this growing segment.

  2. Quality troubleshooting content becomes more important. With conversational AI, customers ask sequential diagnostic questions—starting with “Why is my prep table compressor hot?”, then “How to test a compressor?”, before finally asking “What compressor fits model ABC?”. AI engines surface whoever provides the clearest answer for each question. Competitors with better troubleshooting guides can intercept customers early in their journey and win the eventual parts sale. Traditional SEO-optimized troubleshooting content that stuffs keywords into pages cannot compete with content structured to answer each of these questions directly.


1. Measure your traffic

  • Most AI platforms send a traditional Referer header when a user clicks on a link to your content, so you can monitor traffic from ChatGPT, Perplexity, and others in Google Analytics or your preferred analytics tool just like any other referring site.
  • For ChatGPT, every referred URL includes ?utm_source=chatgpt.com.
  • Google AI Overview is harder to track directly. When users click source links from the AI summaries, it appears as regular Google search traffic with https://www.google.com as the referrer.
  • Each referred session typically represents many more unseen impressions: times the AI mentioned or cited you without the user clicking through.
  • Track branded mentions inside AI engines weekly; the delta tells you if you’re the answer—or the after‑thought.

All AI search products still use traditional search indexes under the hood. When you ask ChatGPT about a product, it queries Bing’s index. When Google’s AI Overview generates an answer, it pulls from the same crawled content that powers regular Google Search results.

This means your existing SEO foundation still matters tremendously. AI models need to first discover your content through traditional crawling and indexing before they can cite it in their responses. A well-optimized site structure, clean URLs, proper internal linking, and fast loading times remain critical—they determine whether your content enters the AI’s source pool at all.

The key difference is what happens after discovery. While traditional search ranked pages by authority and keyword relevance, AI engines now parse that same indexed content for factual snippets, structured data, and authoritative quotes to weave into conversational answers.

| AI Product | Monthly Active Users | Search Engine Used |
| --- | --- | --- |
| ChatGPT | 600M+ | Bing |
| Google Gemini | 400M+ | Google Search (internal) |
| Google AI Overview | 1.5B+ | Google Search (internal) |
| Google AI Mode | (not disclosed) | Google Search (internal) |
| Microsoft Copilot | 30M+ | Bing |
| Perplexity | 15M+ | Google Search, Bing (also maintains its own indexing/crawling) |
| Claude | 20M+ | Brave Search |
| Grok | (not disclosed) | X platform, xAI Live Search |
| DeepSeek | 100M+ | (not disclosed) |
| Meta AI | 1B+ | Bing, Google Search |
| Mistral | 1M+ | Brave Search |

Note: User numbers are approximate and based on public estimates.

AI Product Domains (for analytics filtering):

chatgpt.com, gemini.google.com, copilot.microsoft.com, edgeservices.bing.com, perplexity.ai, claude.ai, grok.com, chat.deepseek.com, meta.ai, chat.mistral.ai
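
If you want to count AI referrals outside of a standard analytics tool, a minimal sketch like the following works against a raw server access log. The log path and combined-log format are assumptions; adapt both to your setup.

```python
import re

# Regex covering the AI product referrer domains listed above.
AI_REFERRERS = re.compile(
    r"(chatgpt\.com|gemini\.google\.com|copilot\.microsoft\.com|"
    r"edgeservices\.bing\.com|perplexity\.ai|claude\.ai|grok\.com|"
    r"chat\.deepseek\.com|meta\.ai|chat\.mistral\.ai)"
)

# Count requests per AI referrer domain.
hits = {}
with open("access.log") as log:  # assumption: combined-format access log
    for line in log:
        match = AI_REFERRERS.search(line)
        if match:
            hits[match.group(1)] = hits.get(match.group(1), 0) + 1

for domain, count in sorted(hits.items(), key=lambda kv: -kv[1]):
    print(f"{domain}: {count} referred requests")
```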

Crawlers

Each platform uses specialized bots with different purposes—some for training models, others for real-time user queries. Check your robots.txt file to ensure you’re not accidentally blocking these crawlers, and verify that any Crawl-delay directives aren’t set too high (keep under 10 seconds to avoid hindering discovery).

ChatGPT

OpenAI uses three distinct web crawlers for different purposes:

OAI-SearchBot - Used to link and surface websites in ChatGPT’s search results. This bot is specifically for search functionality and is not used to crawl content for training AI models. To ensure your site appears in search results, allow this bot in your robots.txt file.

ChatGPT-User - Handles user-initiated actions in ChatGPT and Custom GPTs. When users ask questions, this agent may visit web pages to retrieve current information. This is not used for automatic crawling or AI training.

GPTBot - Crawls content that may be used for training OpenAI’s generative AI foundation models. Disallowing GPTBot signals that your site’s content should not be used in training generative AI models.
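
Putting those policies together: a minimal robots.txt sketch for a site that wants ChatGPT search visibility but opts out of model training might look like the following. Whether to block GPTBot is a business decision, not a recommendation.

```
# Allow ChatGPT search and user-initiated fetches.
User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

# Opt out of training-data collection.
User-agent: GPTBot
Disallow: /
```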

OpenAI publishes IP address ranges for each bot. For detailed crawler management, see OpenAI’s official bot documentation.

Perplexity

Perplexity uses two distinct crawlers to gather and index information:

PerplexityBot - Designed to surface and link websites in search results. To ensure your site appears in Perplexity search results, allow this bot in your robots.txt file.

Perplexity-User - Supports real-time user queries. When users ask questions, this crawler may visit your pages to provide accurate answers and citations. Generally ignores robots.txt rules since it’s user-requested.

For detailed crawler management, see Perplexity’s official crawler documentation.

Claude

Anthropic uses three distinct crawlers for different purposes:

ClaudeBot - Collects web content for AI model training and development. Blocking this bot signals that future site materials should be excluded from training datasets.

Claude-User - Supports real-time user queries. When users ask Claude questions, this agent may access websites to retrieve current information. Disabling it prevents content retrieval for user-directed searches.

Claude-SearchBot - Analyzes web content to improve search result quality and accuracy. Blocking this reduces your site’s visibility in Claude’s search responses.

All three bots respect standard robots.txt directives and support the non-standard Crawl-delay extension. For detailed crawler management, see Anthropic’s official crawler documentation.

2. Map buyer questions & simulate prompts

  1. Brainstorm 25–50 core scenarios: diagnostics, comparisons (“stainless vs aluminium food pan”), compatibility (“Will part #123 fit Whirlpool WRS325SDHZ?”).
  2. Feed those prompts into each AI product via a simple script (a sketch follows this list) and export the answers + citations.
  3. Score yourself on visibility (mentioned, cited, primary recommendation, or absent).
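
Here is a minimal sketch of step 2 against a single provider, using the official openai Python package. The model name, prompt set, and brand domain are assumptions; plain API calls approximate but don't fully replicate each product's retrieval layer, and exporting citations depends on what each vendor's API exposes.

```python
import csv
from openai import OpenAI  # assumes the official openai package is installed

# Hypothetical prompt set; replace with your 25-50 buyer scenarios.
PROMPTS = [
    "What pump fits a Whirlpool WRS325SDHZ?",
    "Is part W10348269 compatible with model ABC?",
    "Stainless vs aluminium food pan: which holds temperature better?",
]
BRAND = "yourstore.com"  # assumption: the domain/brand you want cited

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("visibility_audit.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["prompt", "mentions_brand", "answer"])
    for prompt in PROMPTS:
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: use whichever model you audit
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        writer.writerow([prompt, BRAND in answer, answer])
```

Run the same loop against each provider you care about, then score the exported answers per step 3.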

3. Improve your metadata

Schema.org annotations (defined in JSON-LD script tags) are an important piece of metadata that LLMs directly parse for context around your content. Here’s what matters most (an example snippet follows the table):

| Type | Why it Matters to LLMs |
| --- | --- |
| Product schema with GTIN, MPN, price, compatibility, availability | Lets the model answer “Is this in stock?” without hallucination. |
| LocalBusiness + areaServed | Boosts impressions for location‑qualified searches (e.g., “same‑day parts near Portland”). |
| contentLocation on blogs | Signals regional relevance even for national sellers. |
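
To make the first row concrete, here is a minimal Product JSON-LD sketch. Every value is hypothetical, including the part number, GTIN, and compatibility claim.

```html
<!-- Minimal Product JSON-LD sketch; all values are hypothetical. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Refrigerator Water Inlet Valve W10348269",
  "mpn": "W10348269",
  "gtin13": "0012345678905",
  "isAccessoryOrSparePartFor": {
    "@type": "Product",
    "name": "Whirlpool WRS325SDHZ"
  },
  "offers": {
    "@type": "Offer",
    "price": "42.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```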

In addition to schema annotations, some have proposed a new standard, similar to robots.txt, called llms.txt for LLM-specific optimization. This proposed standard is a simple text file hosted at your root domain that gives AI crawlers a curated map of your most important pages—think product categories, FAQs, and return policies. The goal is to remove ambiguity by providing language models with structured, priority content rather than forcing them to ingest large pages and guess what matters most on your site.
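
Per the proposed format, llms.txt is just a markdown file: an H1 title, a short blockquote summary, and sections of annotated links. A hypothetical example (all URLs are placeholders):

```
# Example Parts Store
> Replacement parts and equipment for commercial kitchens. Links below point to our highest-value reference pages.

## Products
- [Refrigeration parts](https://example.com/refrigeration-parts): compressors, valves, thermostats

## Support
- [Compatibility FAQ](https://example.com/faq): which parts fit which models
- [Return policy](https://example.com/returns)
```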

While llms.txt remains a proposed standard without universal adoption, major AI companies like Anthropic and Google have published their own llms.txt files. Despite this, no LLM provider has formally committed to parsing llms.txt, so our recommendation is that it shouldn’t be a major focus or investment right now.

4. Build authority with FAQs and smart citations

  • Add FAQs under each product page—this surfaces direct Q&A snippets for engines to quote verbatim (see the markup sketch below).
  • Inline, in‑text citations outperform plain backlinks. Write “According to UL 197 standards (UL.com)…”—the model sees the trusted domain and the snippet.
  • Sprinkle first‑hand data: successful repair rates, compatibility coverage across equipment models, customer satisfaction data, or performance benchmarks from your lab testing.
  • Identify third‑party pages the AI already cites (Reddit threads, trade forums) and engage there visibly as a company rep.

Pro tip: keep FAQ answers ≤ 280 characters; that brevity matches snippet length bias.
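
Marking those FAQs up as FAQPage JSON-LD makes the Q&A trivially extractable. A minimal sketch, with hypothetical Q&A content and an answer under 280 characters:

```html
<!-- Minimal FAQPage JSON-LD sketch; the Q&A content is hypothetical. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Is part W10348269 compatible with the Whirlpool WRS325SDHZ?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes. W10348269 is the OEM water inlet valve for the WRS325SDHZ; no adapter is required."
    }
  }]
}
</script>
```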


FAQs

How do LLMs recognize a citation?

A clear, unambiguous mention of the source in the same paragraph—with or without a live link—carries the most weight. Backlinks still matter for discovery, but inline references drive answer selection.

What is location targeting, and should I care if I sell nationwide?

Yes, even nationwide sellers benefit from location targeting. Declaring areaServed and contentLocation tells AI models which cities to surface you for queries like “auto parts in Denver” or “same-day parts near Portland.” You can target multiple major metropolitan areas to capture local intent searches—many B2B buyers still prefer suppliers with local presence, and location signals boost visibility even for broader searches.
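
A minimal LocalBusiness sketch showing areaServed; the business details and cities are hypothetical:

```html
<!-- Minimal LocalBusiness JSON-LD sketch; details are hypothetical. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Parts Store",
  "url": "https://example.com",
  "areaServed": [
    { "@type": "City", "name": "Denver" },
    { "@type": "City", "name": "Portland" }
  ]
}
</script>
```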


Key takeaway

Your site is no longer a destination; it’s a library that feeds AI with authoritative snippets. Make every product detail page, FAQ, and support article a top‑shelf reference and the engines will sell for you.

Need help? Send me an email, and I’ll send over our simulation engine for measuring AI search blind spots.


Written by Daniel Zamoshchin
