Forays Into AI

The only way to discover the limits of the possible is to go beyond them into the impossible. - Arthur C. Clarke

Creating a Real-time Chat Application with Streamlit and Neo4j

Learn how to build a chat application with Streamlit and Neo4j in our latest tutorial. We'll guide you through setting up Docker, using an open source LLM, and managing chat histories with Neo4j. Perfect for beginners and experienced developers alike, this post provides everything you need to create an AI-enhanced chat application. Dive into the code and start building today!
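As a taste of the chat-history piece, here is a minimal sketch of persisting and reloading messages in Neo4j with the official Python driver. The connection details and the Session/Message graph model are illustrative assumptions, not the tutorial's exact schema.

```python
# Sketch: storing chat messages in Neo4j (connection details are placeholders).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def save_message(session_id: str, role: str, content: str) -> None:
    # Merge the session node, then append the message with a timestamp.
    query = """
    MERGE (s:Session {id: $session_id})
    CREATE (m:Message {role: $role, content: $content, created: datetime()})
    CREATE (s)-[:HAS_MESSAGE]->(m)
    """
    with driver.session() as session:
        session.run(query, session_id=session_id, role=role, content=content)

def load_history(session_id: str) -> list[dict]:
    # Return a session's messages, oldest first.
    query = """
    MATCH (s:Session {id: $session_id})-[:HAS_MESSAGE]->(m:Message)
    RETURN m.role AS role, m.content AS content
    ORDER BY m.created
    """
    with driver.session() as session:
        return [record.data() for record in session.run(query, session_id=session_id)]

save_message("demo", "user", "Hello!")
print(load_history("demo"))
```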

Building a simple chat application using Streamlit and Langchain

Learn how to create a user-friendly chat application with Streamlit and Langchain, integrating semantic search for enhanced interactions.
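The core UI pattern looks roughly like the sketch below: Streamlit's chat widgets plus a LangChain chat model, with the conversation kept in session state. The model name and the OPENAI_API_KEY environment variable are assumptions; swap in whichever provider the post uses.

```python
# Minimal Streamlit + LangChain chat sketch (model choice is an assumption).
import streamlit as st
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # assumes OPENAI_API_KEY is set

# Keep the conversation in session state so it survives Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Ask me anything"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    reply = llm.invoke(prompt).content
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```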

Building a simple document search using FAISS and OpenAI

This blog post explores constructing a semantic search system using FAISS and Sentence Transformers, focusing on processing, indexing, and querying documents based on semantic content. It offers a step-by-step guide, from preparing the data to building a retrieval-augmented pipeline for nuanced search.
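The index-and-query flow boils down to a few steps, sketched below: embed documents with Sentence Transformers, add the vectors to a FAISS index, then retrieve nearest neighbours for a query. The model name and example documents are illustrative.

```python
# Sketch of semantic indexing and querying with FAISS + Sentence Transformers.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Streamlit makes it easy to build data apps in Python.",
    "FAISS is a library for efficient similarity search over dense vectors.",
    "Neo4j is a graph database that stores data as nodes and relationships.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(documents, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(embeddings.shape[1])  # exact L2 search
index.add(embeddings)

query = model.encode(
    ["How do I search documents by meaning?"], convert_to_numpy=True
).astype("float32")
distances, ids = index.search(query, 2)  # top-2 nearest documents

for rank, doc_id in enumerate(ids[0]):
    print(rank + 1, documents[doc_id], distances[0][rank])
```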

Harnessing Semantic Search and LLMs for Insightful Data Analysis

Explore how the combination of semantic search and large language models (LLMs) provides powerful tools for organizations to sift through unstructured data, enhancing understanding and insights.

Hyperparameter Tuning

Hyperparameter tuning is akin to adjusting a machine's settings for optimal performance: it is the process of searching for configuration values, such as learning rate or tree depth, that are not learned from the data themselves but strongly influence a model's accuracy and efficiency.
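A small illustration of the idea, using scikit-learn's GridSearchCV: each candidate combination of settings is cross-validated and the best-scoring one is kept. The parameter grid and dataset are illustrative assumptions.

```python
# Grid search over a small hyperparameter space (grid and dataset are illustrative).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

param_grid = {
    "n_estimators": [50, 100, 200],  # number of trees
    "max_depth": [None, 3, 5],       # tree depth limit
}

search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best cross-validated accuracy:", round(search.best_score_, 3))
```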

UK Publishes Framework for Generative AI

The UK's new framework outlines ten core principles for the safe and responsible use of generative AI in government and public sector organisations, aiming to enhance productivity while ensuring ethical practices.

Running an LLM Locally Using Ollama

Ollama offers a straightforward way to run open-weight Large Language Models such as Llama 2 and Mistral locally via Docker, bypassing the need for cloud services and enabling direct access from personal hardware.
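Once the server is up, you can talk to it over its local REST API. The sketch below assumes Ollama is listening on its default port (11434) and that the "llama2" model has already been pulled.

```python
# Sketch: querying a locally running Ollama server (port and model are assumptions).
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": "Explain vector embeddings in one sentence.",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```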

Running LLMs Locally with Mozilla-Ocho’s llamafile

Mozilla-Ocho's llamafile project packages a Large Language Model (LLM) and its runtime into a single executable file that runs locally across a variety of operating systems, making advanced AI technologies far easier to use on your own hardware.
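Because a running llamafile exposes an OpenAI-compatible local server, querying it from Python is short. The port (8080 by default) and the placeholder model name below are assumptions; this is a sketch, not the project's canonical example.

```python
# Sketch: calling a locally running llamafile via its OpenAI-compatible endpoint.
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder; the local server serves whatever model it was built with
        "messages": [
            {"role": "user", "content": "Summarise what a llamafile is in one sentence."}
        ],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```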