Ollama, the popular platform for running language models locally, has announced experimental Vulkan support. This significantly expands the range of compatible GPUs to include many AMD and Intel models. By leveraging the Vulkan API, Ollama aims to democratize AI development and deployment, letting users run models on a much wider variety of hardware configurations. More details are available on Reddit.
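For readers who want to try it, experimental backends in Ollama are typically toggled with an environment variable before starting the server. A minimal sketch follows; the `OLLAMA_VULKAN` variable name and the model name are assumptions based on community reports, so check the release notes for your version:

```shell
# Assumption: the experimental Vulkan backend is opted into via an
# environment variable (name may differ by Ollama version).
export OLLAMA_VULKAN=1

# With the variable set, restart the server and run a model as usual:
#   ollama serve
#   ollama run llama3.2
echo "OLLAMA_VULKAN=$OLLAMA_VULKAN"
```

If the backend is active, the server logs at startup should indicate which GPU backend was selected; otherwise Ollama falls back to its default behavior.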