Vue.js SPAs have a quirk that changes the blocking picture entirely: bots that don't run JavaScript see an empty <div id="app"></div> shell — your content is invisible to them. This means useHead() noai tags don't get delivered. The reliable approach is the server layer: nginx, Express, or your CDN's response header config.
Vue SPA vs Nuxt.js: This guide is for standalone Vue SPAs (Vite + Vue 3, no SSR). If you're using Nuxt.js, your pages are server-rendered and bots see full HTML — see the Nuxt.js guide instead. For Vue SPAs: your content is invisible to non-JS bots by default, but your noai meta tags are too. Use server-level headers.
| Method | When to use |
|---|---|
| robots.txt in Vite public/ | Always — first thing to add |
| noai meta tag via @vueuse/head useHead() | As a belt-and-suspenders addition (not sufficient alone) |
| X-Robots-Tag response header | Best approach for SPAs — set in nginx/Express |
| nginx hard block (403) | Self-hosted or VPS deployments |
| Express server middleware block | When serving Vue dist/ via Express |
| Netlify / Vercel headers config | Netlify or Vercel static deployments |
Vite copies everything in public/ directly to dist/ on build. Put your robots.txt here — it becomes /robots.txt in production without any route configuration.
```
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: OAI-SearchBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: meta-externalagent
Disallow: /

User-agent: Amazonbot
Disallow: /

User-agent: Applebot-Extended
Disallow: /

User-agent: xAI-Bot
Disallow: /

User-agent: DeepSeekBot
Disallow: /

User-agent: MistralBot
Disallow: /

User-agent: Diffbot
Disallow: /

User-agent: cohere-ai
Disallow: /

User-agent: AI2Bot
Disallow: /

User-agent: Ai2Bot-Dolma
Disallow: /

User-agent: YouBot
Disallow: /

User-agent: DuckAssistBot
Disallow: /

User-agent: omgili
Disallow: /

User-agent: omgilibot
Disallow: /

User-agent: webzio-extended
Disallow: /

User-agent: gemini-deep-research
Disallow: /

User-agent: *
Allow: /
```
Vue CLI (webpack) projects: Same pattern — put robots.txt in the public/ folder at the project root. Vue CLI also copies public/ to dist/ on build.
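To illustrate how compliant crawlers interpret these groups, here is a minimal sketch of the matching logic. It is not a real robots.txt parser (real ones handle wildcards, longest-match precedence, and Allow/Disallow interleaving); the `rules` object is a hand-condensed stand-in for the file above.

```javascript
// Illustrative only: each key is a User-agent token, each value is that
// group's list of Disallow path prefixes. The '*' group is the fallback.
const rules = {
  GPTBot: ['/'],    // Disallow: /  — everything blocked
  ClaudeBot: ['/'],
  CCBot: ['/'],
  '*': [],          // Allow: /  — no disallow rules
};

// A bot picks the group matching its token, falling back to '*',
// then checks whether the requested path starts with a disallowed prefix.
function isDisallowed(userAgentToken, path) {
  const group = rules[userAgentToken] ?? rules['*'];
  return group.some((prefix) => path.startsWith(prefix));
}

console.log(isDisallowed('GPTBot', '/products')); // true — blocked
console.log(isDisallowed('SomeOtherBot', '/'));   // false — allowed
```

The key point: robots.txt only works for bots that choose to honor it; the later sections cover enforcement for bots that do not.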
Add noai meta tags using @vueuse/head or the built-in useHead() composable (Vue 3.4+). This works for bots that execute JavaScript, but most AI training crawlers do not. Use this in addition to server-level headers — not instead of them.
Globally, in App.vue:

```vue
<script setup>
import { useHead } from '@vueuse/head'

useHead({
  meta: [
    {
      name: 'robots',
      content: 'noai, noimageai',
    },
  ],
})
</script>

<template>
  <router-view />
</template>
```

Or per route, in an individual view component:

```vue
<script setup>
import { useHead } from '@vueuse/head'

useHead({
  title: 'My Product — Shop',
  meta: [
    { name: 'description', content: 'Product description here.' },
    { name: 'robots', content: 'noai, noimageai' },
  ],
})
</script>

<template>
  <main>
    <!-- product content -->
  </main>
</template>
```

For maximum compatibility — including bots that don't run JS — inject the meta tag directly into the static index.html template. The tag is then delivered in the raw HTML shell before any JavaScript runs.
```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <!-- Block AI training bots — delivered before JS executes -->
    <meta name="robots" content="noai, noimageai" />
    <title>My App</title>
  </head>
  <body>
    <div id="app"></div>
    <script type="module" src="/src/main.ts"></script>
  </body>
</html>
```

index.html meta tag caveat: This tag applies globally to every route, since the SPA serves the same index.html for all URLs. If you want to allow AI crawlers on some routes, you cannot use this approach — use per-route useHead() calls instead. For most apps, global noai from index.html is the right call.
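To see why this delivery method works, treat the shell the way a non-JS crawler does: as a plain string. A quick sketch (the shell below is an abbreviated stand-in for a real Vite index.html):

```javascript
// A non-JS crawler only ever sees the raw index.html text.
const shell = `<!doctype html>
<html lang="en">
<head>
  <meta name="robots" content="noai, noimageai" />
  <title>My App</title>
</head>
<body>
  <div id="app"></div>
</body>
</html>`;

// Extracting the robots directive the way a simple crawler might:
const match = shell.match(/<meta\s+name="robots"\s+content="([^"]*)"/i);
console.log(match[1]); // "noai, noimageai"

// The rendered app content is absent from the raw shell:
console.log(shell.includes('product')); // false
```

The meta tag is in the string a non-JS bot downloads; your useHead()-injected tags are not, because they only exist after the bundle runs.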
The X-Robots-Tag HTTP response header delivers noai signals at the server level — before any JavaScript runs, and for every response including the HTML shell, API responses, and static assets. This is the most reliable method for Vue SPAs.
```nginx
server {
    listen 80;
    server_name myapp.com www.myapp.com;
    root /var/www/my-vue-app/dist;
    index index.html;

    # noai header for all responses
    add_header X-Robots-Tag "noai, noimageai" always;

    # SPA fallback — all routes serve index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache static assets. Note: add_header in a location block replaces
    # inherited headers, so X-Robots-Tag must be repeated here.
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        add_header X-Robots-Tag "noai, noimageai" always;
    }
}
```

If you serve the built dist/ from Express instead of nginx, set the header in middleware:

```js
import express from 'express';
import path from 'path';
import { fileURLToPath } from 'url';

const __dirname = path.dirname(fileURLToPath(import.meta.url));
const app = express();
const PORT = process.env.PORT || 3000;
const DIST = path.join(__dirname, 'dist');

// Set X-Robots-Tag on all responses
app.use((req, res, next) => {
  res.setHeader('X-Robots-Tag', 'noai, noimageai');
  next();
});

// Serve Vue dist/
app.use(express.static(DIST));

// SPA fallback
app.get('*', (req, res) => {
  res.sendFile(path.join(DIST, 'index.html'));
});

app.listen(PORT, () => console.log(`Server on ${PORT}`));
```

A hard block stops AI bots entirely before your Vue app is served, and it also catches bots that ignore robots.txt (Bytespider, Diffbot). Add a map block to your nginx config and return 403 for matched user agents.
```nginx
# In /etc/nginx/nginx.conf — inside the http { } block:
map $http_user_agent $blocked_ai_bot {
    default 0;
    "~*GPTBot" 1;
    "~*ChatGPT-User" 1;
    "~*OAI-SearchBot" 1;
    "~*ClaudeBot" 1;
    "~*anthropic-ai" 1;
    "~*Google-Extended" 1;
    "~*Bytespider" 1;
    "~*CCBot" 1;
    "~*PerplexityBot" 1;
    "~*meta-externalagent" 1;
    "~*Amazonbot" 1;
    "~*Applebot-Extended" 1;
    "~*xAI-Bot" 1;
    "~*DeepSeekBot" 1;
    "~*MistralBot" 1;
    "~*Diffbot" 1;
    "~*cohere-ai" 1;
    "~*AI2Bot" 1;
    "~*Ai2Bot-Dolma" 1;
    "~*YouBot" 1;
    "~*DuckAssistBot" 1;
    "~*omgili" 1;
    "~*omgilibot" 1;
    "~*webzio-extended" 1;
    "~*gemini-deep-research" 1;
}

# In your site's server { } block:
server {
    listen 80;
    server_name myapp.com;
    root /var/www/my-vue-app/dist;
    index index.html;

    add_header X-Robots-Tag "noai, noimageai" always;

    # Block matched AI bots
    if ($blocked_ai_bot) {
        return 403;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```

If you're serving your Vue SPA via an Express/Node.js server (common for SSR-adjacent setups or custom server logic), add bot-blocking middleware before the static file handler.
```js
import express from 'express';
import path from 'path';
import { fileURLToPath } from 'url';

const __dirname = path.dirname(fileURLToPath(import.meta.url));
const app = express();
const DIST = path.join(__dirname, 'dist');

const AI_BOT_PATTERN = /GPTBot|ChatGPT-User|OAI-SearchBot|ClaudeBot|anthropic-ai|Google-Extended|Bytespider|CCBot|PerplexityBot|meta-externalagent|Amazonbot|Applebot-Extended|xAI-Bot|DeepSeekBot|MistralBot|Diffbot|cohere-ai|AI2Bot|Ai2Bot-Dolma|YouBot|DuckAssistBot|omgili|omgilibot|webzio-extended|gemini-deep-research/i;

// Block AI bots — runs before static files and SPA fallback
app.use((req, res, next) => {
  const ua = req.headers['user-agent'] ?? '';
  // Always allow robots.txt through
  if (req.path === '/robots.txt') return next();
  if (AI_BOT_PATTERN.test(ua)) {
    return res.status(403).send('Forbidden');
  }
  res.setHeader('X-Robots-Tag', 'noai, noimageai');
  next();
});

app.use(express.static(DIST));

app.get('*', (req, res) => {
  res.sendFile(path.join(DIST, 'index.html'));
});

app.listen(process.env.PORT ?? 3000);
```

For static Vue deployments on Netlify or Vercel, configure response headers in their platform config files. This adds X-Robots-Tag to every response at the edge — no server process needed.
On Netlify, use netlify.toml:

```toml
[build]
  command = "npm run build"
  publish = "dist"

[[headers]]
  for = "/*"
  [headers.values]
    X-Robots-Tag = "noai, noimageai"

# SPA redirects — all routes to index.html
[[redirects]]
  from = "/*"
  to = "/index.html"
  status = 200
```

The same header can also be set with a `_headers` file in public/ (copied into dist/ on build):

```
/*
  X-Robots-Tag: noai, noimageai
```

On Vercel, use vercel.json:

```json
{
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        {
          "key": "X-Robots-Tag",
          "value": "noai, noimageai"
        }
      ]
    }
  ],
  "rewrites": [
    {
      "source": "/((?!api/.*).*)",
      "destination": "/index.html"
    }
  ]
}
```

Hard blocking on Netlify/Vercel: Neither platform supports user-agent conditions in their headers config. For hard 403 blocking on these platforms, use Cloudflare WAF in front of your Netlify/Vercel deployment — proxy your custom domain through Cloudflare and add a WAF custom rule (free plan: 5 rules).
| Scenario | Bot sees |
|---|---|
| Vue SPA — no JS execution | <div id="app"></div> shell only |
| Vue SPA — useHead() only | Empty shell (JS required) |
| Vue SPA — X-Robots-Tag header | Response header on every request |
| Nuxt.js SSR | Full rendered HTML |
| Vue + Vite SSG (vite-ssg) | Pre-rendered HTML per route |
| Bot | Operator |
|---|---|
| GPTBot | OpenAI |
| ChatGPT-User | OpenAI |
| OAI-SearchBot | OpenAI |
| ClaudeBot | Anthropic |
| anthropic-ai | Anthropic |
| Google-Extended | Google |
| Bytespider | ByteDance |
| CCBot | Common Crawl |
| PerplexityBot | Perplexity |
| meta-externalagent | Meta |
| Amazonbot | Amazon |
| Applebot-Extended | Apple |
| xAI-Bot | xAI |
| DeepSeekBot | DeepSeek |
| MistralBot | Mistral |
| Diffbot | Diffbot |
| cohere-ai | Cohere |
| AI2Bot | Allen Institute |
| Ai2Bot-Dolma | Allen Institute |
| YouBot | You.com |
| DuckAssistBot | DuckDuckGo |
| omgili | Webz.io |
| omgilibot | Webz.io |
| webzio-extended | Webz.io |
| gemini-deep-research | Google |
No. Vue SPAs serve an empty <div id="app"></div> shell as the initial HTML. Bots that don't execute JS see only your index.html skeleton — not your rendered content, product listings, or blog posts. This makes Vue SPAs paradoxically private for non-JS bots, but your useHead() noai tags are also invisible.
In public/robots.txt at the project root. Vite copies everything in public/ directly to dist/ without processing — no route needed. Do not put it in src/.
Only for bots that execute JavaScript. Most AI training crawlers (CCBot, Bytespider, GPTBot) do not run JS. Use the index.html meta tag or X-Robots-Tag response headers for reliable delivery.
Netlify: add netlify.toml with [[headers]] targeting /*. Vercel: add vercel.json with a headers array. For hard 403 blocking on both platforms, proxy through Cloudflare WAF.
No. Navigation guards run after the JS bundle loads client-side. AI training crawlers typically don't execute JavaScript and never reach your router. Use nginx, Express, or Cloudflare WAF for effective blocking.
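A server-side user-agent check, by contrast, runs before any JavaScript is shipped. The matching used in the Express examples above can be sanity-checked in plain Node (the pattern here is abbreviated to a few bot tokens from this guide; the sample user-agent strings are illustrative, not verbatim bot signatures):

```javascript
// Abbreviated version of the Express middleware's pattern:
// case-insensitive substring match against the User-Agent header.
const AI_BOT_PATTERN = /GPTBot|ClaudeBot|CCBot|Bytespider|PerplexityBot/i;

const realBrowser =
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36';
const gptBot =
  'Mozilla/5.0 AppleWebKit/537.36; compatible; GPTBot/1.2; +https://openai.com/gptbot';

console.log(AI_BOT_PATTERN.test(realBrowser)); // false — normal traffic passes
console.log(AI_BOT_PATTERN.test(gptBot));      // true — gets a 403
```

Because the match happens on the HTTP request itself, it works identically whether the bot executes JavaScript or not.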
Nuxt.js renders HTML server-side so bots see full content including meta tags. Vue SPA serves an empty HTML shell — non-JS bots see nothing. For AI bot blocking, use server-level X-Robots-Tag headers in both cases. For Nuxt.js, see the Nuxt.js guide.
Is your site protected from AI bots?
Run a free scan to check your robots.txt, meta tags, and overall AI readiness score.