LLMs and RAGs

This section covers Large Language Models, Retrieval-Augmented Generation, and their applications in modern AI systems.

Recent Posts

Hosting LLMs Locally via Ollama

A practical guide to running large language models locally with Ollama, covering setup, model management, API integration, and performance tuning for offline AI applications.

Published: Coming Soon
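As a taste of the API-integration topic above, here is a minimal sketch of calling a locally running Ollama server over its HTTP API. It assumes Ollama is serving on its default port (11434) and that a model has already been pulled; the model name "llama3" is only an example.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False requests a single JSON response instead of a token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def query_ollama(model: str, prompt: str, url: str = OLLAMA_URL) -> str:
    """Send a prompt to a local Ollama server and return the reply text.

    Requires `ollama serve` to be running and the model pulled,
    e.g. `ollama pull llama3`.
    """
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses put the generated text in the "response" field.
        return json.loads(resp.read())["response"]
```

Because everything runs on localhost, no data leaves the machine, which is the main draw for offline use.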

A Case Study of RAG Built for Clinical Reports using LlamaIndex

An in-depth case study demonstrating how to build a Retrieval-Augmented Generation system for clinical reports using LlamaIndex, covering data preprocessing, vector storage, and evaluation metrics.

Published: Coming Soon
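To make the retrieve-then-augment loop behind the case study concrete, here is a library-free sketch of the core RAG steps: chunk documents, score chunks against a query, and fold the top matches into a prompt. The bag-of-words "embedding" is a deliberate toy stand-in; a real system (such as one built on LlamaIndex) would use a learned embedding model and a vector store.

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a report into fixed-size word chunks (a stand-in for real preprocessing)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use sentence-embedding models."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query: the 'R' in RAG."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the prompt with retrieved context before sending it to an LLM."""
    joined = "\n---\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"
```

Grounding the model's answer in retrieved report text, rather than its parametric memory, is what makes this pattern attractive for clinical data.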