Forays Into AI

The only way to discover the limits of the possible is to go beyond them into the impossible. - Arthur C. Clarke

Running an LLM Locally Using Ollama

In the evolving world of AI, Large Language Models (LLMs) like GPT and BERT have become pivotal. Accessing these models typically requires cloud services, but you can also run some of them locally, right on your own hardware, assuming you have adequate system resources.

Running Ollama using Docker

Ollama is a command-line tool that simplifies running large language models as a local chatbot. It offers a range of open-source models like Mistral, Llama 2, Code Llama, and more, catering to different requirements (see Ollama on GitHub for minimum system requirements).

If, like me, you already have Docker installed and want to avoid installing more software, you can just run Ollama as a container (see Ollama on Docker Hub):

# start the Ollama server in the background, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# then run the LLM interactively
docker exec -it ollama ollama run llama2

Here is that chatbot telling me a joke and helping me solve an equation:

[Image: chatbot telling a joke]

There are a number of other LLMs you can run, as listed in the Ollama Library, e.g.:

docker exec -it ollama ollama run mistral

You can also invoke this via an API call:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt":"what is python?"
}'
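By default, the generate endpoint streams its answer back as one JSON object per line, each carrying a fragment of text in a `response` field and a final `done` flag. As a minimal sketch of how you might consume that stream from Python, here is a helper that stitches the fragments together; the two-line sample stream below is fabricated purely for illustration:

```python
import json

def collect_response(stream_lines):
    """Concatenate the 'response' fragments from Ollama's
    JSON-lines streaming output into a single string."""
    parts = []
    for line in stream_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk signals completion
            break
    return "".join(parts)

# Fabricated example of what a streamed reply might look like:
sample = [
    '{"model":"mistral","response":"Python is a ","done":false}',
    '{"model":"mistral","response":"programming language.","done":true}',
]
print(collect_response(sample))  # Python is a programming language.
```

In a real script you would iterate over the lines of the HTTP response body instead of a hard-coded list.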

This approach with Ollama and Docker opens up new possibilities for utilizing LLMs directly from personal devices, ensuring both accessibility and privacy for users.

Tagged: LLMs, Ollama, Docker, Local AI Execution, AI Accessibility

Can You Be a Successful Programmer in 2027 Without AI Skills?

As AI transforms the tech landscape, will programmers need to adapt and learn AI to stay relevant and successful in 2027 and beyond? I would say the answer is a clear yes, but there is more to it.

Building a Simple Multi-Agent Physics Teacher Application with AutoGen

Learn how to build a simple multi-agent application using the AutoGen framework to create a physics teacher application where a student agent interacts with a teacher agent to learn about Newton's laws of motion.

Creating a Real-time Chat Application with Streamlit and Neo4j

Learn how to build a chat application with Streamlit and Neo4j in our latest tutorial. We'll guide you through setting up Docker, using an open-source LLM, and managing chat histories with Neo4j. Perfect for beginners, this post provides all the tools needed to create an AI-enhanced chat application. Dive into the code and start building today!

Building a Simple Chat Application Using Streamlit and Langchain

Learn how to create a user-friendly chat application with Streamlit and Langchain, integrating semantic search for enhanced interactions.