Component Soft training: Intro to Large Language Models (LLMs) and LLM-based apps

Training goals

code: AI-110 | version: v2

Artificial intelligence has become a critically important area for IT professionals and engineers with the scientific breakthroughs and practical applications of generative AI systems, especially Large Language Models (LLMs) such as OpenAI’s GPT, Google’s Gemini and many other closed- and open-source models. Due to their importance and impact on every aspect of our lives, understanding the concepts, functionality and practical usage of generative AI systems is quickly becoming essential for all IT and other technical professionals, as well as for managers with a technical background.

This training focuses on Large Language Model (LLM) concepts and techniques at a high level, as well as on the techniques and tools of LLM application development.

Main topics:

  • Introduction to LLM-based applications
  • The Foundation Technologies of LLMs (Neural Networks, Tokenizer, Transformer)
  • The 3-phase training process of LLMs (pre-training, fine-tuning, RLHF)
  • Using LLMs via APIs
  • Prompt engineering
  • Retrieval-Augmented Generation (RAG)
  • Creating LLM chains with LangChain
  • LLM Agents
  • Fast Web Interface Prototyping for LLMs (Gradio)
  • Debugging and Evaluating LLM-based apps (LangSmith)
  • Fine-tuning open-source LLMs

Besides gaining a basic understanding of the theory of Large Language Models (LLMs) and of the other technologies used in LLM-based applications, students will be able to examine these technologies and experiment with them during the instructor's demonstrations and the lab exercises.

This training is part of the AI portfolio of Component Soft, which explores essential AI topics such as:

  • AI-110 Intro to Large Language Models (LLMs) and LLM-based apps
  • AI-434 GenAI Application Development with LLMs (OpenAI GPT, Google Gemini, Meta Llama, Mistral)

Structure: 50% lecture, 25% demonstration by the instructor, 25% hands-on lab exercises

Target audience: Technical managers as well as IT and telco professionals who want to familiarize themselves with Large Language Models (LLMs) and LLM-based applications.

Conspect

  • Module 1. Introduction to LLM-based applications
    • Main usage areas of LLM-based applications
    • Main types of LLM-based applications
    • Building blocks of LLM-based applications
    • Lab: Testing a simple LLM-based application (see the API-call sketch below)
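
As a flavour of the building blocks above, here is a minimal sketch of calling an LLM through a vendor API. It is not part of the official course material: it assumes the OpenAI Python SDK (v1+) with an API key set in the environment, and the model name is only an example.

```python
# A minimal sketch, assuming the OpenAI Python SDK (v1+) and an
# OPENAI_API_KEY environment variable; the model name is an example only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, not prescribed by the course
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain in one sentence what an LLM is."},
    ],
)
print(response.choices[0].message.content)
```
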
  • Module 2. The Foundation Technologies of LLMs (Neural Networks, Tokenizer, Transformer)
    • From human neural cells to artificial neural networks
    • Deep Neural Networks
    • Transfer Learning
    • Intuition of the transformer model
    • Main elements of transformers: tokenizer, embeddings, encoder, decoder
    • Variations on the transformer architecture
    • Popular transformer models
    • Lab: Testing popular LLM foundation models (see the tokenizer sketch below)
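
For a taste of the tokenizer topic, the sketch below shows how text becomes the token IDs a transformer actually consumes. It assumes the Hugging Face transformers package; "gpt2" is just a small, publicly available example model, not necessarily the one used in class.

```python
# A minimal sketch, assuming the Hugging Face `transformers` package.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small example model

text = "Large Language Models"
ids = tokenizer.encode(text)                   # text -> token IDs (integers)
tokens = tokenizer.convert_ids_to_tokens(ids)  # IDs -> subword token strings

print(ids)     # a short list of integers
print(tokens)  # subword tokens, roughly one per word for this input
```
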
  • Module 3. The 3-phase training process of LLMs (pre-training, fine-tuning, RLHF)
    • Pre-training of LLMs
    • How does pre-training basically work?
    • Training data set, computational and financial challenges
    • LLM fine-tuning techniques
    • How does fine-tuning basically work?
    • Parameter-efficient fine-tuning (PEFT) with LoRA and quantized parameters
    • Reinforcement Learning from Human Feedback (RLHF)
    • Why do we need RLHF in the first place?
    • Methods and main steps of RLHF
    • Lab: Examining an LLM family before and after fine-tuning and RLHF (see the LoRA sketch below)
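
To hint at what parameter-efficient fine-tuning looks like in practice, here is a minimal LoRA sketch using the Hugging Face peft library. The base model and target module names are illustrative assumptions, not the course's actual lab setup.

```python
# A minimal sketch, assuming the `transformers` and `peft` packages.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small example model

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a tiny fraction of weights is trainable
```
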
  • Module 4. Prompt engineering
    • What is prompt engineering?
    • Prompt engineering terminology and concepts
    • The “Just Ask” Principle, Zero-shot prompts
    • Prompts with Few-shot learning
    • Prompt Chaining
    • Chain of Thought Prompting
    • Prompts with Personas
    • Lab: Demonstrating basic prompt techniques (see the prompt sketch below)
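
For a flavour of the techniques listed above, the sketch below contrasts a zero-shot prompt with a few-shot version of the same task. These are plain Python strings and can be sent through any LLM API; the review texts are invented examples.

```python
# Zero-shot: just ask, with no examples ("Just Ask" principle).
zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'The battery died after two days.'"
)

# Few-shot: the same task, preceded by worked examples that steer
# the model toward the expected label format.
few_shot = """Classify the sentiment of each review as positive or negative.

Review: 'Great screen, fast delivery.'
Sentiment: positive

Review: 'Stopped working after a week.'
Sentiment: negative

Review: 'The battery died after two days.'
Sentiment:"""

print(few_shot)
```
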
  • Module 5. Retrieval-Augmented Generation (RAG)
    • What is Retrieval-Augmented Generation (RAG)?
    • How does RAG work?
    • Syntactic vs. Semantic Similarity
    • Text embedding
    • Vector Databases
    • Lab: Demonstration of Retrieval-Augmented Generation (RAG) in an LLM app (see the retrieval sketch below)
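
To sketch the retrieval half of RAG, the example below embeds a query and a few documents and ranks the documents by cosine similarity. It assumes the OpenAI SDK (v1+) and numpy; the embedding model name is only an example, and in a real app a vector database would replace the brute-force search.

```python
# A minimal sketch, assuming the OpenAI SDK (v1+), OPENAI_API_KEY, and numpy.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "LangChain composes LLM calls into chains.",
    "RAG retrieves relevant text and feeds it to the LLM as context.",
    "Gradio builds quick web UIs for ML demos.",
]
query = "How does retrieval-augmented generation work?"

resp = client.embeddings.create(
    model="text-embedding-3-small",  # example embedding model
    input=docs + [query],
)
vectors = np.array([d.embedding for d in resp.data])
doc_vecs, q_vec = vectors[:-1], vectors[-1]

# Cosine similarity between the query and every document
sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
print(docs[int(np.argmax(sims))])  # the most semantically similar document
```
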
  • Module 6. Creating LLM chains with LangChain
    • What are LLM chains?
    • LangChain architecture
    • Main Building Blocks: Models, Prompts and Output Parsers
    • Building LLM chains from building blocks
    • LangChain Memory
    • LangChain Agents
    • Lab: Demonstration of the usage of LangChain in an LLM app (see the chain sketch below)
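
Below is a minimal sketch of the three building blocks listed above, composed into a chain with LangChain's pipe (LCEL) syntax. It assumes the langchain-core and langchain-openai packages and an OPENAI_API_KEY; the model name is an example.

```python
# A minimal sketch, assuming `langchain-core`, `langchain-openai`
# and an OPENAI_API_KEY environment variable.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
model = ChatOpenAI(model="gpt-4o-mini")  # example model name
parser = StrOutputParser()

chain = prompt | model | parser  # prompt -> model -> parser
print(chain.invoke({"topic": "vector databases"}))
```
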
  • Module 7. LLM Agents
    • Motivations for LLM Agents
    • Main Features of LLM Agents
    • Main Building Blocks: Functions, Tools, Agents, ReAct execution logic
    • Implementing ReAct with complex prompts and with function-calling LLMs
    • Different ways of creating agents in LangChain
    • Limitations of the ReAct model, possible new directions
    • Lab: Demonstration of agents with GPT, Gemini as well as popular open-source LLMs (see the ReAct sketch below)
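
As a flavour of the ReAct execution logic listed above, here is a minimal, self-contained sketch of the loop. The "LLM" is a hard-coded stub so the control flow runs offline; a real agent would call an actual model and parse its Thought/Action output.

```python
# A minimal sketch of the ReAct loop: the model alternates
# Thought -> Action -> Observation until it emits a final answer.

def fake_llm(transcript: str) -> str:
    """Stand-in for an LLM: first asks for a tool, then answers."""
    if "Observation:" not in transcript:
        return "Thought: I need the current year.\nAction: get_year"
    return "Final Answer: It is currently 2024."

def get_year() -> str:
    return "2024"

tools = {"get_year": get_year}  # the agent's available tools

transcript = "Question: What year is it?"
for _ in range(5):                      # cap the number of reasoning steps
    step = fake_llm(transcript)
    transcript += "\n" + step
    if step.startswith("Final Answer:"):
        print(step)
        break
    action = step.split("Action:")[-1].strip()
    observation = tools[action]()       # run the requested tool
    transcript += f"\nObservation: {observation}"
```
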

Additional information

Prerequisites

Basic understanding of AI concepts, basic Python programming skills, and user experience with ChatGPT or similar chatbots.

Difficulty level

Duration

1 day

Certificate

The participants will obtain certificates signed by Component Soft (course completion).

Trainer

Certified Component Soft Trainer. 



Training price: from 650 EUR

  • In order to propose a date for this training, please contact the Sales Department