
How I Built My Own GitHub Copilot — And How You Can Too

Daniel García
5 min read · Sep 24, 2024


Photo by Fotis Fotopoulos on Unsplash


Ever wished you could just talk to your codebase and get answers instantly? I did too. The idea of a conversational AI assistant that understands my code sounded like a dream. So I decided to make it a reality using Ollama, a tool for running Large Language Models (LLMs) locally, paired with a Retrieval-Augmented Generation (RAG) system. The best part? It was easier than I thought, and I’m here to show you how you can do it too.

Rethinking How We Interact with Code

Imagine being able to ask your codebase questions like:

  • “Which function handles user authentication?”
  • “How does data flow through the payment processing module?”
  • “Show me examples of API calls to the database.”

Instead of digging through files and documentation, you get instant answers. This is not science fiction; it’s possible today with the right tools.
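The core trick behind answering questions like these is retrieval: split the codebase into chunks, find the chunks most relevant to the question, and hand only those to the model. Here is a minimal sketch of that step. Real systems score chunks with embeddings; simple keyword overlap is used here as a stand-in so the idea stays visible, and all function names are illustrative rather than taken from any particular tool.

```python
import re

def chunk_code(source: str, lines_per_chunk: int = 20) -> list[str]:
    """Split a source file into fixed-size chunks of lines."""
    lines = source.splitlines()
    return [
        "\n".join(lines[i:i + lines_per_chunk])
        for i in range(0, len(lines), lines_per_chunk)
    ]

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, used as a crude relevance signal."""
    return set(re.findall(r"\w+", text.lower()))

def score(chunk: str, question: str) -> int:
    """Count words shared between a chunk and the question."""
    return len(tokens(chunk) & tokens(question))

def retrieve(chunks: list[str], question: str, k: int = 2) -> list[str]:
    """Return the k chunks that best match the question."""
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

# Toy "codebase" with two functions.
code = (
    "def authenticate(user, password):\n"
    "    ...\n"
    "\n"
    "def process_payment(order):\n"
    "    ...\n"
)
top = retrieve(chunk_code(code, lines_per_chunk=2),
               "Which function handles user authentication?", k=1)
print(top[0])  # the chunk containing `authenticate`
```

Swapping `score` for cosine similarity over embeddings is the usual production upgrade; the chunk-then-rank shape stays the same.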

Introducing Ollama and the Codebase Chatbot

I stumbled upon the ollama_copilot_enterprise repository, which promised a way to chat with your codebase using Ollama. Intrigued, I decided to give it a try.
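Under the hood, "chatting with your codebase" comes down to stuffing the retrieved chunks and your question into a prompt and sending it to a locally running Ollama server. The sketch below shows that round trip using Ollama's HTTP API (`/api/generate` on port 11434, its default). The helper names and the `llama3` model choice are my own illustrations, not code from the repository, and the final call assumes you have `ollama serve` running with the model pulled.

```python
import json
import urllib.request

def build_prompt(question: str, chunks: list[str]) -> str:
    """Combine retrieved code chunks and the question into one prompt."""
    context = "\n\n".join(chunks)
    return (
        "You are an assistant that answers questions about a codebase.\n\n"
        f"Relevant code:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    """POST the prompt to a local Ollama server and return its answer."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

prompt = build_prompt("Which function handles user authentication?",
                      ["def authenticate(user, password):\n    ..."])
print(prompt)
# ask_ollama(prompt) would return the model's answer; it needs a running server.
```

Because everything stays on localhost, none of your code ever leaves your machine, which is the main draw of the Ollama-based approach.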
