Sunday drop-in workshop: Install your own AI on Linux
I’m running a drop-in 4-hour workshop on Jitsi this Sunday to help people learn about and install their own AI LLMs on Linux. The focus will be text chat and text-to-image, though I’m happy to share how to run speech-to-text (STT) and text-to-speech (TTS) in Python if anyone’s interested.
ALL experience levels are welcome, from brand-new Linux users to CLI gurus.
If you just want to check out the software instead of installing it, I’ll be doing screen-share demos and sharing links to live demos. There’ll also be plenty of room for questions.
The workshop format will be ad hoc depending on who arrives and when. I’m expecting to help people one-on-one as they come in, with free chat in between, but if a lot of people show up I’ll run a guided installation that everyone does together.
Schedule: Sunday 2023-07-09, 18:00–22:00 UTC
What hardware do I need?
- For text chat (basic): 4 GB of CPU RAM or more, no GPU needed
- For text chat (advanced): Nvidia GPU with at least 6 GB of VRAM, plus CPU RAM
- For text-to-image: Nvidia GPU with at least 6 GB of VRAM, plus CPU RAM
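Not sure what you have? Two quick terminal checks (standard tools on most distros; lspci comes from the pciutils package):

# Show total system RAM
free -h

# Identify your graphics card
lspci | grep -iE 'vga|3d'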
Can I use an AMD GPU instead of Nvidia?
I don’t have experience running AI on AMD, but some projects, including text-chat and text-to-image ones, have options that make AMD GPUs work, usually at the cost of needing double the CPU RAM. Happy to help you troubleshoot.
How else can I prepare?
If you’re running an Nvidia GPU, make sure the drivers from Nvidia are installed along with the CUDA toolkit:
sudo apt install nvidia-cuda-toolkit
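To confirm the toolkit installed correctly, you can check for the CUDA compiler it ships with; it should print a CUDA release version:

nvcc --version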
Install the Nvidia driver from your package manager:
Ubuntu Guide: 2 Ways to Install Nvidia Driver on Ubuntu 22.04 (GUI & Command Line)
Debian Guide: How to Clean Install NVIDIA Drivers on Debian 11
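On Ubuntu, the command-line route usually comes down to letting the ubuntu-drivers tool install the recommended driver for your card (a minimal sketch; the guide above also covers the GUI path):

sudo ubuntu-drivers autoinstall

On Debian 11 the driver package is nvidia-driver and needs the non-free repos enabled first; the Debian guide walks through it.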
If everything is installed correctly, you should see your card after running this command in a terminal:
nvidia-smi
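If you’d rather see just the essentials instead of the full table, nvidia-smi can also be queried for specific fields:

nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv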
How do I get help outside the workshop?
Ask questions here, post in the Help Desk, or ping @Ulfnic in the TuxDigital Telegram / Matrix (note: the primary TuxDigital chat is Discord) or the Linux Saloon Telegram.
Images in this post were rendered locally using Stable Diffusion on a GeForce RTX 3060.