SEO · Crawlers · Agents

Protocols & Machine-Friendly Files

Feeds, sitemaps, robots.txt, llms.txt, and formal instructions for search engines, agents, and custom integrations, all in one place.

Sitemaps & Discovery

Official XML endpoints that help Google, Bing, and other crawlers understand the site structure.

XML
Sitemap Index

Main index referencing every autogenerated sitemap.

application/xml

Full Sitemap

Contains every public URL (within the 50,000-URL limit per sitemap file), regenerated on each deploy.

application/xml
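
As a sketch, a sitemap index is a small XML file whose sitemap entries point at the individual sitemap files; the paths and date below are illustrative, not the site's actual ones:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://ireneburresi.dev/sitemap-0.xml</loc>
    <lastmod>2025-01-01T00:00:00+00:00</lastmod>
  </sitemap>
</sitemapindex>
```

Crawlers fetch the index first, then each referenced sitemap, so the index only needs to change when a sitemap file is added or removed.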

Multi-format feeds

RSS, Atom, and JSON Feed with full text, hreflang alternates, and WebSub hubs.

Feeds
RSS 2.0

The default format, supported by virtually every feed reader.

application/rss+xml

Atom 1.0

IETF-standard syndication (RFC 4287).

application/atom+xml

JSON Feed 1.1

Perfect for APIs and automations.

application/feed+json
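
For illustration, a minimal JSON Feed 1.1 document looks like this; the titles and URLs are placeholders, not the feed's real content:

```json
{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "Example feed",
  "home_page_url": "https://ireneburresi.dev/",
  "feed_url": "https://ireneburresi.dev/feed.json",
  "items": [
    {
      "id": "https://ireneburresi.dev/posts/example",
      "url": "https://ireneburresi.dev/posts/example",
      "title": "Example post",
      "content_text": "Full text goes here."
    }
  ]
}
```

Because it is plain JSON, it can be consumed directly by scripts and automations without an XML parser.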

Feed documentation

Detailed explanation plus pillar-specific feeds.

guide

Crawler & LLM instructions

Plain-text manifests defining usage policies and offering machine-ingestable versions of the content.

txt
robots.txt

Crawl permissions + sitemap reference.

text/plain
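
A minimal robots.txt combining crawl permissions with a sitemap reference might look like this; the directives and sitemap path below are a generic sketch, not the file actually served:

```text
User-agent: *
Allow: /

Sitemap: https://ireneburresi.dev/sitemap-index.xml
```

The Sitemap line is how crawlers discover the XML endpoints above without any prior knowledge of the site.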

llms.txt

Semantic index for agents/LLMs with the key entry points.

text/plain
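
The llms.txt convention is a Markdown file: an H1 title, a short blockquote summary, then sections of links to the key entry points. A generic sketch, with illustrative headings and paths:

```markdown
# ireneburresi.dev

> Personal site: articles, projects, and machine-readable feeds.

## Key pages

- [Articles](https://ireneburresi.dev/articles): full list of posts
- [Feeds](https://ireneburresi.dev/feeds): RSS, Atom, and JSON Feed endpoints
```

Agents can read this file to find the high-value pages without crawling the whole site.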

llms-full.txt

Extended full-text dump for controlled ingestion.

text/plain

RSL license

Rights Statement Language machine-readable policy.

application/xml

Search & integration

Endpoints enabling custom search integrations inside browsers or native apps.

Search
OpenSearch descriptor

Lets you install ireneburresi.dev as a browser search engine.

application/xml
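
An OpenSearch description document declares the endpoint a browser should register as a search engine. A sketch with an assumed query URL template:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>ireneburresi.dev</ShortName>
  <Description>Search ireneburresi.dev</Description>
  <Url type="text/html" template="https://ireneburresi.dev/search?q={searchTerms}"/>
</OpenSearchDescription>
```

The browser substitutes the user's query for {searchTerms} when the engine is used.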

Implementation notes

Consistent HTTP headers

All static assets and feeds are served with Content-Type, Cache-Control, and ETag headers, so clients can issue conditional requests and receive 304 Not Modified when nothing has changed.
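
The server-side logic behind a 304 is small: compare the client's If-None-Match header against the resource's current ETag. A minimal sketch in Python, not the site's actual implementation:

```python
def respond(request_headers: dict, current_etag: str) -> tuple:
    """Answer a conditional GET: return (status, body_included).

    If the client's If-None-Match matches the current ETag, the
    cached copy is still fresh, so reply 304 with no body.
    """
    client_etag = request_headers.get("If-None-Match")
    if client_etag is not None and client_etag == current_etag:
        return (304, False)  # nothing to resend
    return (200, True)       # send the full representation
```

A client that stored the ETag from a previous response sends it back and saves the transfer whenever the resource is unchanged.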

WebSub / real-time

Feeds expose a rel="hub" link pointing to a WebSub (formerly PubSubHubbub) hub: subscribers are notified automatically after every deploy.
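
A subscriber's first step is discovering the hub from the feed's rel="hub" link. A sketch using Python's standard library; the feed snippet and hub URL are illustrative:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def find_hub(feed_xml: str):
    """Return the href of the first rel="hub" link in an Atom feed, or None."""
    root = ET.fromstring(feed_xml)
    for link in root.iter(f"{ATOM_NS}link"):
        if link.get("rel") == "hub":
            return link.get("href")
    return None

feed = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example</title>
  <link rel="hub" href="https://example-hub.test/"/>
  <link rel="self" href="https://ireneburresi.dev/atom.xml"/>
</feed>"""
```

Once the hub is known, the subscriber POSTs a subscription request to it and receives a push whenever the publisher pings the hub after a deploy.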

Content negotiation

Requesting /rss.xml with different Accept headers automatically returns RSS, Atom, or JSON Feed without changing the URL.
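
Server-side, this amounts to matching the Accept header against the formats the endpoint can produce. A simplified sketch: real negotiation also weighs q-values, while this just picks the first supported media type listed:

```python
FORMATS = {
    "application/rss+xml": "rss",
    "application/atom+xml": "atom",
    "application/feed+json": "json",
}

def pick_format(accept_header: str, default: str = "rss") -> str:
    """Choose a feed format from an Accept header, ignoring q-values."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip().lower()
        if media_type in FORMATS:
            return FORMATS[media_type]
    return default
```

So a reader sending Accept: application/feed+json gets JSON from the same /rss.xml URL, and anything unrecognized falls back to RSS.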

Need custom automations or want to embed the feeds in your workflow? Get in touch.