How I set up SEO and LLM discoverability for my Phoenix app

I recently shipped the discoverability layer for Wish List Palace, a free wish list app for coordinating family gift giving. This post covers everything I put in place: standard search engine SEO, structured data, and the newer llms.txt convention for getting picked up by AI tools and training crawlers. I hope people who have a need for an app like this stumble upon the site, but I definitely didn't want to buy ads or chase SEO tricks. The wish list app space is already crowded, so I doubt those strategies would help much anyway; the goal was simply to make it easy for crawlers and LLMs to understand what the app is and what it does.

The stack is Phoenix LiveView, but most of this applies to any web app; LiveView isn't really necessary for what follows, since it's mostly about serving a few static assets.

Standard SEO foundations

Meta and Open Graph tags

Every page gets a full set of tags in <head>:

  • <meta name="description"> — the snippet shown in Google results
  • Open Graph tags (og:title, og:description, og:type, og:url, og:image) — controls the preview card when a URL is shared on WhatsApp, iMessage, Slack, etc.
  • Twitter Card tags — same idea for Twitter/X
  • <link rel="canonical"> — prevents duplicate content issues when a CDN like Cloudflare serves cached variants at different URLs

In Phoenix, the root layout sets sensible defaults. Individual LiveViews can override them in mount/3 via assigns:

{:ok, assign(socket,
  page_title: "How It Works — Wish List Palace",
  page_description: "Step-by-step guide to creating lists, sharing, and claiming gifts.",
  canonical_url: "https://wishlistpalace.com/how-it-works"
)}
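The defaults side, in root.html.heex, might look something like this. A sketch: the fallback copy is an assumption, and the assign names just mirror the mount/3 example above:

```heex
<.live_title><%= assigns[:page_title] || "Wish List Palace" %></.live_title>
<meta
  name="description"
  content={assigns[:page_description] || "Coordinate family gift giving without spoiling surprises."}
/>
<meta property="og:title" content={assigns[:page_title] || "Wish List Palace"} />
<meta
  property="og:description"
  content={assigns[:page_description] || "Coordinate family gift giving without spoiling surprises."}
/>
<%= if assigns[:canonical_url] do %>
  <link rel="canonical" href={@canonical_url} />
<% end %>
```

Using `assigns[:key] || default` rather than `@key` means pages that never set these assigns still render sensible tags instead of raising.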

One gotcha: it's tempting to point og:image at an SVG logo, but WhatsApp, Twitter/X, and iMessage silently ignore SVGs. You need a PNG or JPG (1200×630 px is the recommended size) to get actual preview images in link cards.

robots.txt

The classic way to tell crawlers (at least the ones that choose to obey your suggestions) what's on and off limits is yourdomain.com/robots.txt. The first thing to do is disallow authenticated and admin routes so crawlers don't waste crawl budget on pages they can't access anyway. Wish List Palace requires an account to see your own lists, so there's no reason for robots to visit /lists: they'll never be authenticated users.

Then explicitly allow the public pages and the LLM files: whatever call-to-action pages you have, you want the robots to check out. I also added a /how-it-works page explaining the basics; in theory an LLM could use it to tell a user how to use the site.

User-agent: *
Allow: /
Allow: /how-it-works
Allow: /llms.txt
Allow: /llms-full.txt
Disallow: /home
Disallow: /lists
Disallow: /claimed
Disallow: /admin
Disallow: /auth

Sitemap: https://wishlistpalace.com/sitemap.xml

These days, if you want traffic, you don't just want search engine crawlers to pick you up; it would also be great if LLMs suggested your site to people. That's easier said than done, of course, but it can't hurt to leave a positive impression of your site in the training sets being collected. The explicit Allow lines matter for training crawlers like Common Crawl's CCBot, which tend to be conservative. More on this below.

sitemap.xml

A static sitemap covering every public indexable page. In Phoenix this lives in priv/static/sitemap.xml and gets served by Plug.Static. I give the how-it-works page a higher priority than register/sign-in since it has real content worth ranking:

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://wishlistpalace.com/</loc>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://wishlistpalace.com/how-it-works</loc>
    <priority>0.8</priority>
  </url>
  <url>
    <loc>https://wishlistpalace.com/register</loc>
    <priority>0.5</priority>
  </url>
  <url>
    <loc>https://wishlistpalace.com/sign-in</loc>
    <priority>0.3</priority>
  </url>
</urlset>

Submit this to Google Search Console and Bing Webmaster Tools. Bing is worth the extra submission because alternative search engines (DuckDuckGo, for example) often source results from Bing in some form or fashion.

JSON-LD structured data

Schema.org markup embedded in every page as a <script type="application/ld+json"> tag tells Google explicitly what kind of thing this is. For a web app, SoftwareApplication is the right type:

{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Wish List Palace",
  "url": "https://wishlistpalace.com",
  "applicationCategory": "LifestyleApplication",
  "operatingSystem": "Web",
  "description": "Coordinate family gift giving without spoiling surprises. Share wish lists, claim items secretly, and avoid duplicate gifts.",
  "offers": {
    "@type": "Offer",
    "price": "0",
    "priceCurrency": "USD"
  }
}
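In the Phoenix layout, one way to embed this is to keep the map in an assign and serialize it with Jason (Phoenix's default JSON library). A sketch, where @json_ld is a hypothetical assign holding the map above:

```heex
<script type="application/ld+json">
  <%= raw Jason.encode!(@json_ld) %>
</script>
```

The `raw/1` call is needed because the output is already JSON and shouldn't be HTML-escaped; note that HEEx requires `<%= %>` (not `{}`) for interpolation inside script tags.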

This can unlock app-style rich results in search. It’s also increasingly read by AI-powered search tools as a structured signal about what a site does. If you later add a review system, an aggregateRating property qualifies you for star ratings in search results.
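For reference, that property nests alongside offers in the same JSON-LD object; the values here are purely illustrative:

```json
"aggregateRating": {
  "@type": "AggregateRating",
  "ratingValue": "4.8",
  "ratingCount": "27"
}
```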

LLM discoverability with llms.txt

llms.txt (llmstxt.org) is a proposed convention for helping LLM-powered tools (AI chatbots, AI search engines, coding assistants) understand what a site is and how to use it. The idea is simple: serve a Markdown file at /llms.txt that summarises the site and links to deeper content.

I serve two files:

/llms.txt — the index, following the spec format:

# Wish List Palace

> Wish List Palace is a free wish list app for families and friends...

## Docs

- [Full guide](https://wishlistpalace.com/llms-full.txt): Complete guide covering all features

## Pages

- [How it works](https://wishlistpalace.com/how-it-works): Step-by-step walkthrough
- [Home](https://wishlistpalace.com/): Landing page
- [Register](https://wishlistpalace.com/register): Create a free account

/llms-full.txt — a comprehensive markdown guide covering every feature: creating lists, adding items (with links, priorities, and tags), sharing, and how gift claiming works privately.
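The full guide is just more of the same Markdown. A skeleton, with section names mirroring the features listed above:

```markdown
# Wish List Palace: Full Guide

## Creating lists
How to create a wish list and what the options mean.

## Adding items
Links, priorities, and tags on each item.

## Sharing
Sharing a list with family and friends.

## Claiming gifts
How claiming works, and how it stays private from the list owner.
```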

In Phoenix, these are static files in priv/static/. The only configuration needed is adding them to the static_paths/0 allowlist so Plug.Static serves them:

def static_paths,
  do: ~w(assets fonts images favicon.ico robots.txt sitemap.xml llms.txt llms-full.txt)

Search ranking strategy

Here's roughly how I'm thinking about improving the ranking of wishlistpalace. Trying to rank for head terms like "wishlist app", which are dominated by Amazon, MyRegistry, and Giftster, probably isn't worth it on my budget: those sites have years of domain authority and thousands of backlinks, and competing with them directly is likely a lost cause even in the long run.

Instead I want to show up for the long tail of queries, which have far less competition and are better matched to intent:

  • “wish list app where others can claim gifts”
  • “how to share a christmas list without spoilers”
  • “wish list app no duplicates”
  • “alternative to amazon wish list”

Someone searching those phrases has exactly the problem this app solves. Ranking page 1 for five specific long-tail queries beats ranking page 4 for a head term.

Over time I can hope to build some domain authority through backlinks from trusted external sites. One-time directory listings (AlternativeTo, SaaSHub, Product Hunt) and community posts (a Hacker News Show HN, relevant subreddits) are what I'm aiming for, though I'm waiting to finish a few more features before engaging with those. Even a few backlinks can compound over time.