Forays Into AI

The only way to discover the limits of the possible is to go beyond them into the impossible. - Arthur C. Clarke

Can You Be a Successful Programmer in 2027 Without AI Skills?

As AI transforms the tech landscape, will programmers need to adapt and learn AI to stay relevant and successful in 2027 and beyond? I would say the answer is a clear yes, but there is more to it.

Building a Simple Multi-Agent Physics Teacher Application with AutoGen

Learn how to build a simple multi-agent application using the AutoGen framework to create a physics teacher application where a student agent interacts with a teacher agent to learn about Newton's laws of motion.
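As a taste of the pattern the post walks through, here is a minimal sketch of a student/teacher exchange, assuming the pyautogen package, an OpenAI API key in the environment, and illustrative agent names, prompts, and model:

```python
# Minimal two-agent sketch with AutoGen (pyautogen).
# Assumes OPENAI_API_KEY is set; agent names, prompts, and model are illustrative.
import os
from autogen import ConversableAgent

llm_config = {
    "config_list": [{"model": "gpt-4o-mini", "api_key": os.environ["OPENAI_API_KEY"]}]
}

teacher = ConversableAgent(
    name="teacher",
    system_message="You are a physics teacher. Explain Newton's laws of motion clearly.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

student = ConversableAgent(
    name="student",
    system_message="You are a curious student asking about Newton's laws of motion.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

# The student opens the conversation; max_turns keeps the exchange short.
result = student.initiate_chat(
    teacher,
    message="Can you explain Newton's first law with an everyday example?",
    max_turns=2,
)
print(result.summary)
```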

Creating a Real-time Chat Application with Streamlit and Neo4j

Learn how to build a chat application with Streamlit and Neo4j in our latest tutorial. We'll guide you through setting up Docker, using an open-source LLM, and managing chat histories with Neo4j. Perfect for beginners, this post provides all the tools needed to create an AI-enhanced chat application. Dive into the code and start building today!
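To give a feel for how the pieces fit together, here is a compact sketch of a Streamlit chat UI that persists messages in Neo4j. It assumes Neo4j is reachable at bolt://localhost:7687 (for example via Docker) with the credentials shown, and generate_reply() stands in for whatever LLM call you use; all of those details are illustrative:

```python
# chat_app.py - Streamlit chat UI with chat history stored in Neo4j.
import streamlit as st
from neo4j import GraphDatabase

# Assumed connection details; adjust to your Docker setup.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def save_message(role: str, content: str) -> None:
    # Store each chat turn as a Message node.
    with driver.session() as session:
        session.run(
            "CREATE (:Message {role: $role, content: $content, ts: timestamp()})",
            role=role, content=content,
        )

def load_history() -> list[dict]:
    with driver.session() as session:
        result = session.run(
            "MATCH (m:Message) RETURN m.role AS role, m.content AS content ORDER BY m.ts"
        )
        return [dict(record) for record in result]

def generate_reply(prompt: str) -> str:
    return f"(placeholder reply to: {prompt})"  # swap in your LLM call here

st.title("Neo4j-backed chat")

for msg in load_history():
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask something"):
    save_message("user", prompt)
    st.chat_message("user").write(prompt)
    reply = generate_reply(prompt)
    save_message("assistant", reply)
    st.chat_message("assistant").write(reply)
```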

Building a simple chat application using Streamlit and Langchain

Learn how to create a user-friendly chat application with Streamlit and Langchain, integrating semantic search for enhanced interactions.
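Here is a rough sketch of the idea, assuming the langchain-openai and langchain-community packages, a FAISS vector store for the semantic search, and an OPENAI_API_KEY in the environment; the documents and model name are purely illustrative:

```python
# Streamlit + LangChain chat grounded with semantic search over a few sample documents.
import streamlit as st
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

docs = [
    "Streamlit turns Python scripts into shareable web apps.",
    "LangChain helps compose LLM calls, retrievers, and tools.",
]

@st.cache_resource
def build_store():
    # Embed the documents once and keep the FAISS index cached across reruns.
    return FAISS.from_texts(docs, OpenAIEmbeddings())

store = build_store()
llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model name

st.title("Chat with semantic search")

if question := st.chat_input("Ask a question"):
    st.chat_message("user").write(question)
    # Retrieve the most relevant snippets and ground the answer in them.
    context = "\n".join(d.page_content for d in store.similarity_search(question, k=2))
    answer = llm.invoke(f"Answer using this context:\n{context}\n\nQuestion: {question}")
    st.chat_message("assistant").write(answer.content)
```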

Building a simple document search using FAISS and OpenAI

This blog post explores constructing a semantic search system using FAISS and Sentence Transformers, focusing on processing, indexing, and querying documents based on semantic content. It offers a step-by-step guide from data preparation to implementing a retrieval-augmented pipeline for nuanced search capabilities.
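A minimal sketch of the core indexing and querying steps, assuming the sentence-transformers and faiss-cpu packages; the model name and sample documents are illustrative:

```python
# Semantic document search with Sentence Transformers embeddings and a FAISS index.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "FAISS is a library for efficient similarity search over dense vectors.",
    "Sentence Transformers produce embeddings that capture semantic meaning.",
    "Streamlit makes it easy to build data apps in Python.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")

# Encode and L2-normalize so inner product behaves like cosine similarity.
embeddings = model.encode(documents, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = model.encode(["How do I search documents by meaning?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)

for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[i]}")
```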

Harnessing Semantic Search and LLMs for Insightful Data Analysis

Explore how the combination of semantic search and large language models (LLMs) provides powerful tools for organizations to sift through unstructured data, enhancing understanding and insights.

Running an LLM Locally Using Ollama

Ollama offers a straightforward way to run open-source Large Language Models locally via Docker, bypassing the need for cloud services and enabling direct access from personal hardware.
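Once the server is up, querying it from Python is a single HTTP call. This sketch assumes Ollama is already running locally (for example via `docker run -d -p 11434:11434 ollama/ollama`) and that the chosen model has been pulled; the model name is illustrative:

```python
# Query a locally running Ollama server over its REST API.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",  # any model you have pulled with `ollama pull`
        "prompt": "Explain Newton's first law in one sentence.",
        "stream": False,    # return a single JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```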

Running LLMs Locally with Mozilla-Ocho’s llamafile

Mozilla-Ocho's llamafile project packages a Large Language Model (LLM) and its inference runtime into a single executable file, making it easy to run models locally across operating systems and broadening access to advanced AI technologies.
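As a quick illustration, here is a sketch of calling a llamafile from Python. It assumes you have downloaded a llamafile, made it executable, and started it in server mode, which by default exposes an OpenAI-compatible API on localhost port 8080; the port and model field are assumptions to adjust for your setup:

```python
# Send a chat request to a llamafile running in server mode.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local",  # llamafile serves one local model; the name is largely informational
        "messages": [
            {"role": "user", "content": "Summarize Newton's laws of motion in two sentences."}
        ],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```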