Generative AI for DevOps: Build Your Own LLM Server (English) (Virtual)
Teaching method :
Virtual
General :
As a DevOps engineer, do you rely on ChatGPT but face
restrictions at work due to company policy or sensitive data? Are
you curious about hosting your own Large Language Model (LLM)?
In the one-day workshop Generative AI for DevOps Engineers,
developed by the open-source experts at AT Computing and
exclusively available via Vijfhart, you will learn how to set up
and run your own local version of "ChatGPT". Everything is built on
open-source technology without sending any data to the cloud.
By attending this course, you will directly meet the EU AI Act
requirements on AI Literacy: risks, privacy, data sharing, model
bias, and hallucinations are all thoroughly addressed. This ensures
you not only gain hands-on skills, but also remain fully compliant
with upcoming regulations.
The day begins with an introduction to Generative AI, exploring
available LLMs and how to use them effectively. You will then dive
into the hardware side, including GPU acceleration on virtual
machines and in containers.
Through hands-on exercises, you will set up your own LLM server and
even create a custom "model". You will also learn how to connect a
web-based client to your LLM, effectively building your own ChatGPT
clone.
On top of that, you will explore how to integrate the LLM API with
Python, apply Retrieval Augmented Generation (RAG) to work with
your own documents, analyze images, and even perform log analysis
using your LLM.
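To give a flavour of the Python integration described above, the sketch below builds a chat request for a local LLM server. The endpoint URL, port, and model name are illustrative assumptions (written in the style of an OpenAI-compatible API, which many local servers expose), not the course's actual lab setup.

```python
import json
import urllib.request

# Assumed local endpoint and model name; adjust to your own server.
API_URL = "http://localhost:11434/v1/chat/completions"


def build_chat_request(prompt, model="llama3"):
    """Build an OpenAI-style chat completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )


def ask(prompt, model="llama3"):
    """Send the prompt to the local LLM server and return the reply text."""
    req = build_chat_request(prompt, model)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the server runs locally, the prompt and its reply never leave your own machine.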
Objective :
After completing this course, you will understand the internal workings of generative AI in general and LLMs in particular. You will also know how to set up a local LLM server without sending your data to a cloud environment. This course covers all AI Literacy topics required by the EU AI Act.
Target audience :
This workshop is aimed at DevOps engineers. The following prior knowledge is required:
- Linux Infrastructure
- Linux/UNIX Fundamentals
Knowledge of a programming language, preferably Python, is
beneficial.
Prerequisites :
To follow the course, experience with the (Linux) command-line
interface is required, for example at the level of the Linux/UNIX
Fundamentals or Linux Infrastructure courses. Knowledge of a
programming language, preferably Python, is beneficial.
Topics :
- What is Generative AI?
- Which LLMs are available and how do you use them?
- Tokens, vectors, and parameters
- Hardware: CPU versus GPU, and how to apply GPU acceleration on
virtual machines and in containers (Docker)
- Setting up an LLM server
- Creating your own LLM model
- Applying a web-based client to your own LLM
- Connecting to the LLM API with Python
- Using your own documents with your own LLM
- Analyzing images with your own LLM
- Log analysis using your LLM
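As a taste of the last topic, log analysis with a local LLM typically starts by condensing the logs into a prompt. The helper below is a minimal sketch (the function name, keywords, and prompt wording are illustrative, not part of the course material); filtering keeps the excerpt within the model's context window.

```python
def build_log_analysis_prompt(log_lines, max_lines=50):
    """Assemble a prompt asking an LLM to explain suspicious log entries."""
    # Keep only lines that look like problems, so the excerpt stays
    # small enough for the model's context window.
    keywords = ("ERROR", "WARN", "CRITICAL")
    suspicious = [line for line in log_lines
                  if any(k in line for k in keywords)]
    excerpt = "\n".join(suspicious[:max_lines])
    return (
        "You are a DevOps assistant. Explain the likely root cause of "
        "the following log entries and suggest next steps:\n\n" + excerpt
    )
```

The resulting string can then be sent to your own LLM server, keeping potentially sensitive log data on-premises.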

