Artificial intelligence has become an extremely important area for IT professionals and engineers thanks to the scientific breakthroughs and practical applications of generative AI systems, especially their Large Language Model (LLM) variants such as OpenAI’s GPT and Google’s Gemini, as well as many other closed- and open-source models. Due to their importance and impact on every aspect of our lives, understanding the concepts, functionality and practical usage of generative AI systems is quickly becoming essential for all IT and other technical professionals, as well as for managers with a technical background.
This training covers Large Language Model (LLM) concepts and techniques at a high level, as well as the techniques and tools of LLM application development.
Main topics:
- Introduction to LLM based applications
- The Foundation Technologies of LLMs (Neural Networks, Tokenizers, Transformers)
- The 3-phase training process of LLMs (pre-training, fine-tuning, RLHF)
- Using LLMs via APIs
- Prompt engineering
- Retrieval Augmented Generation (RAG)
- Creating LLM chains with LangChain
- LLM Agents
- Fast Web Interface Prototyping for LLMs (Gradio)
- Debugging and Evaluating LLM-based apps (LangSmith)
- Fine-tuning Open-source LLMs
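To give a flavor of one of the topics above, Retrieval Augmented Generation: an application retrieves the document most relevant to a user's question and inserts it into the prompt sent to the LLM. The toy sketch below illustrates only the retrieve-then-prompt pattern; it uses a simple bag-of-words similarity instead of the learned dense embeddings and vector stores a real RAG system (e.g. one built with LangChain) would use, and all document texts are invented examples.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a word-count vector. Real RAG systems use
    # learned dense embeddings from an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini document store.
documents = [
    "LangChain composes LLM calls into chains",
    "Gradio builds fast web interfaces for ML demos",
    "RLHF aligns models with human preferences",
]

def retrieve(question, docs):
    # Return the document most similar to the question.
    return max(docs, key=lambda d: cosine(embed(question), embed(d)))

def build_prompt(question, docs):
    # Augment the prompt with the retrieved context before
    # sending it to an LLM (the LLM call itself is omitted here).
    context = retrieve(question, docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I build a web interface quickly?", documents))
```

The same two steps — retrieve relevant context, then generate with it in the prompt — underlie production RAG pipelines; only the retrieval machinery gets more sophisticated.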
Besides gaining a basic understanding of the theory of Large Language Models (LLMs) and of the other technologies used in LLM-based applications, students will be able to examine their features and experiment with them during the instructor’s demonstrations and the lab exercises.
This training is part of the AI portfolio of Component Soft which explores essential AI topics,
such as:
- AI-110 Intro to Large Language Models (LLMs) and LLM-based apps
- AI-434 GenAI Application Development with LLMs (OpenAI GPT, Google Gemini, Meta Llama, Mistral)
Structure: 50% lecture, 25% demonstration by the instructor, 25% hands-on lab exercises
Target audience: Technical managers as well as IT and telco professionals who want to familiarize themselves with Large Language Models (LLMs) and LLM-based applications.