ibsolutions.dev: an intelligent services provider website

A RAG-style semantic search experiment on ibsolutions.dev: OpenAI embeddings and keyword scoring combined into a hybrid relevance score, served through serverless retrieval, to match visitor queries to proven client work.

What we solved

Buyers land on a services site, browse through multiple pages, and leave without finding proof that matches their specific problem. We tested whether a plain-language search could close that gap and get visitors to relevant work faster.

What we built

A lightweight RAG-style search on the Solutions page that matches visitor queries to existing client work and solution entries. Results are designed for non-technical visitors: a short impact sentence, a link to the matching work, and a clear CTA to request a scoped pilot.

How it works

  1. At build time, each solution entry and its related content is converted into an OpenAI embedding and stored.
  2. A visitor types a short problem description. The query is embedded at request time.
  3. The system computes semantic similarity, applies a keyword boost when query words appear in title or description fields, and combines these into a hybrid score.
  4. The top 3 results above a relevance threshold are returned. If none pass, a suggested fallback is shown.
  5. Each result card contains a short impact line, a link to the matching Work or Solution page, and a CTA to request a pilot.

The search lives on /solutions and is backed by a serverless API endpoint (/api/search-solutions) for query embedding and scoring. Embeddings are generated at build time to avoid per-visit costs. A conservative relevance threshold is used in early pilots and iterated on as query data accumulates.
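
The build-time step could look something like the sketch below. The embedding call is injected as a dependency so the OpenAI request stays at the edge of the script; the entry fields, the model name in the comment, and `buildEmbeddingIndex` itself are hypothetical, not the site's actual code.

```typescript
// Hypothetical build-time indexing sketch: embed each solution entry once
// and persist the vectors, so no embedding cost is paid per visit.
// Names and fields are assumptions for illustration.

interface Entry {
  slug: string;
  title: string;
  description: string;
}

interface IndexedEntry extends Entry {
  embedding: number[];
}

type EmbedFn = (text: string) => Promise<number[]>;

// Embed title and description together so both fields inform similarity.
async function buildEmbeddingIndex(
  entries: Entry[],
  embed: EmbedFn
): Promise<IndexedEntry[]> {
  const indexed: IndexedEntry[] = [];
  for (const entry of entries) {
    const embedding = await embed(`${entry.title}\n${entry.description}`);
    indexed.push({ ...entry, embedding });
  }
  return indexed;
}

// In production, embed would wrap the OpenAI embeddings endpoint
// (e.g. openai.embeddings.create({ model, input })) and the result
// would be written to a static JSON file consumed by the serverless
// endpoint, which only embeds the visitor's query at request time.
```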

Operational notes

  • Log search queries and outcome confidence for ongoing quality and conversion analysis.
  • Keep the UI language buyer-friendly; remove technical jargon from result cards.
  • Iterate on keyword boosts for title and description fields as real query patterns emerge.
  • Generate and store embeddings at build time to avoid per-visit embedding costs.
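
The first note, logging queries and outcome confidence, could be captured with a record shaped like this sketch. The field names, the `strong`/`weak`/`fallback` buckets, and the cutoff value are all assumptions; the point is keeping enough scoring context per query to audit relevance and conversion later.

```typescript
// Hypothetical search-log record: what the visitor typed plus enough
// scoring context to audit result quality later. All names and the
// cutoff value are illustrative assumptions.

interface SearchLogEntry {
  query: string;
  timestamp: string;       // ISO 8601
  topScore: number | null; // null when nothing passed the threshold
  resultCount: number;
  outcome: "strong" | "weak" | "fallback";
}

function classifyOutcome(
  topScore: number | null,
  strongCutoff = 0.7
): SearchLogEntry["outcome"] {
  if (topScore === null) return "fallback"; // nothing passed the threshold
  return topScore >= strongCutoff ? "strong" : "weak";
}

function logSearch(query: string, scores: number[]): SearchLogEntry {
  const topScore = scores.length ? Math.max(...scores) : null;
  return {
    query,
    timestamp: new Date().toISOString(),
    topScore,
    resultCount: scores.length,
    outcome: classifyOutcome(topScore),
  };
}
```

Aggregating these records over time surfaces both the vocabulary prospects actually use and the queries that keep hitting the fallback, which is where new content or keyword boosts are needed.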

What we learned

The biggest takeaway is that search queries reveal how prospects actually describe their problems, which is often different from how we write about our services. Logging those queries creates a feedback loop that sharpens positioning and content over time. We also confirmed that hybrid scoring (semantic similarity plus keyword boost) outperforms pure semantic search for a small, curated catalog like ours.

Read more

A deeper look at the challenge this solves, the impact we measured, and how marketing, sales, and operations teams can apply the same pattern.


Want to build something like this?

Lab projects often become production solutions. Share your idea and we'll help you turn it into a real product.