Cloudera Training

Training goals

Code: CL-DTSH

This four-day hands-on training course delivers the key concepts and expertise participants need to ingest and process data on a Hadoop cluster using the most up-to-date tools and techniques. Employing Hadoop ecosystem projects such as Spark, Hive, Flume, Sqoop, and Impala, this training course is the best preparation for the real-world challenges faced by Hadoop developers. Participants learn to identify which tool is the right one to use in a given situation, and gain hands-on experience developing with those tools.

Through instructor-led discussion and interactive, hands-on exercises, participants will learn Apache Spark and how it integrates with the rest of the Hadoop ecosystem, including:

  • How data is distributed, stored, and processed in a Hadoop cluster
  • How to use Sqoop and Flume to ingest data
  • How to process distributed data with Apache Spark
  • How to model structured data as tables in Impala and Hive
  • How to choose the best data storage format for different data usage patterns
  • Best practices for data storage

Course outline

  1. Introduction to Hadoop and the Hadoop Ecosystem
    • Problems with Traditional Large-scale Systems
    • Hadoop!
    • The Hadoop EcoSystem
  2. Hadoop Architecture and HDFS
    • Distributed Processing on a Cluster
    • Storage: HDFS Architecture
    • Storage: Using HDFS
    • Resource Management: YARN Architecture
    • Resource Management: Working with YARN
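Day-to-day interaction with HDFS in the exercises goes through the `hdfs dfs` command-line tool. As an illustrative sketch only (the paths and file names below are hypothetical), a few typical invocations, held as argument lists:

```python
# Typical hdfs dfs invocations for creating a directory, uploading a
# local file, listing, and reading back; paths are hypothetical.
hdfs_cmds = [
    ["hdfs", "dfs", "-mkdir", "/loudacre"],
    ["hdfs", "dfs", "-put", "accounts.csv", "/loudacre/"],
    ["hdfs", "dfs", "-ls", "/loudacre"],
    ["hdfs", "dfs", "-cat", "/loudacre/accounts.csv"],
]
for cmd in hdfs_cmds:
    print(" ".join(cmd))
```
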
  3. Importing Relational Data with Apache Sqoop
    • Sqoop Overview
    • Basic Imports and Exports
    • Limiting Results
    • Improving Sqoop’s Performance
    • Sqoop 2
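To give a flavor of the module: a basic Sqoop import pulls a relational table into HDFS over JDBC, with the number of parallel map tasks as the main performance knob. A sketch of a typical invocation, assembled as an argument list; the JDBC URL, credentials, table, and target directory are hypothetical placeholders:

```python
# A typical "sqoop import" command; connection details are hypothetical.
sqoop_import = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://dbhost/loudacre",  # hypothetical database
    "--username", "training",
    "--table", "accounts",
    "--target-dir", "/loudacre/accounts",
    "--num-mappers", "4",                         # parallel import tasks
]
print(" ".join(sqoop_import))
```
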
  4. Introduction to Impala and Hive
    • Introduction to Impala and Hive
    • Why Use Impala and Hive?
    • Comparing Hive to Traditional Databases
    • Hive Use Cases
  5. Modeling and Managing Data with Impala and Hive
    • Data Storage Overview
    • Creating Databases and Tables
    • Loading Data into Tables
    • HCatalog
    • Impala Metadata Caching
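A key idea in this module is that Hive and Impala tables are metadata layered over files in HDFS. As an illustrative sketch (table and column names are hypothetical), a typical CREATE TABLE statement, held here as a string:

```python
# A simple Hive/Impala table definition over delimited text files;
# the table name and columns are hypothetical.
create_table = """
CREATE TABLE accounts (
    id    INT,
    name  STRING,
    city  STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
STORED AS TEXTFILE
"""
print(create_table)
```
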
  6. Data Formats
    • Selecting a File Format
    • Hadoop Tool Support for File Formats
    • Avro Schemas
    • Using Avro with Hive and Sqoop
    • Avro Schema Evolution
    • Compression
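To illustrate the schema-evolution topic: an Avro schema is a JSON document, and adding a new field with a default value lets readers using the new schema still consume data written with the old one. A minimal sketch, with hypothetical record and field names:

```python
import json

# A minimal Avro record schema (version 1); names are hypothetical.
schema_v1 = {
    "type": "record",
    "name": "Account",
    "fields": [
        {"name": "id",   "type": "int"},
        {"name": "name", "type": "string"},
    ],
}

# Schema evolution: a new nullable field with a default means readers
# using schema_v2 can still read records written with schema_v1.
schema_v2 = {
    "type": "record",
    "name": "Account",
    "fields": schema_v1["fields"] + [
        {"name": "email", "type": ["null", "string"], "default": None},
    ],
}
print(json.dumps(schema_v2, indent=2))
```
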
  7. Data Partitioning
    • Partitioning Overview
    • Partitioning in Impala and Hive
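The core idea of this module is that Hive and Impala map each partition column value to an HDFS subdirectory, so queries that filter on the partition column only scan the matching directories. A sketch of the resulting layout, with a hypothetical table and partition columns:

```python
# Directory layout for a table partitioned by year and month;
# the warehouse path and column names are hypothetical.
base = "/user/hive/warehouse/calls"
partitions = [(2015, m) for m in (1, 2, 3)]
paths = [f"{base}/year={y}/month={m}" for y, m in partitions]
for p in paths:
    print(p)
```
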
  8. Capturing Data with Apache Flume
    • What is Apache Flume?
    • Basic Flume Architecture
    • Flume Sources
    • Flume Sinks
    • Flume Channels
    • Flume Configuration
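Tying the source/channel/sink concepts together: a Flume agent is wired up in a Java-properties-style configuration file. A minimal sketch, held as a string; the agent and component names, directories, and HDFS path are hypothetical:

```python
# A minimal Flume agent: a spooling-directory source feeding an HDFS
# sink through a memory channel; all names and paths are hypothetical.
flume_conf = """
agent1.sources  = src1
agent1.channels = ch1
agent1.sinks    = sink1

agent1.sources.src1.type = spooldir
agent1.sources.src1.spoolDir = /var/flume/incoming
agent1.sources.src1.channels = ch1

agent1.channels.ch1.type = memory

agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = /loudacre/logs
agent1.sinks.sink1.channel = ch1
"""
print(flume_conf)
```
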
  9. Spark Basics
    • What is Apache Spark?
    • Using the Spark Shell
    • RDDs (Resilient Distributed Datasets)
    • Functional Programming in Spark
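The functional-programming topic comes down to passing functions to transformations. A plain-Python sketch of the style the Spark shell uses; in PySpark the equivalent calls would be on an RDD (`rdd.filter(...)`, `rdd.map(...)`) rather than on a local list:

```python
# Spark's core transformations take functions as arguments. The same
# functional style, shown on a local list; data is hypothetical.
lines = ["INFO starting up", "WARN low memory", "INFO ready"]
warnings = list(filter(lambda line: line.startswith("WARN"), lines))
lengths  = list(map(len, lines))
print(warnings, lengths)
```
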
  10. Working with RDDs in Spark
    • A Closer Look at RDDs
    • Key-Value Pair RDDs
    • MapReduce
    • Other Pair RDD Operations
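To preview the MapReduce topic: pair RDD operations such as `reduceByKey` follow the classic map-to-(key, value)-then-reduce-per-key pattern. A plain-Python sketch of word count, the canonical example; the input lines are hypothetical:

```python
from collections import defaultdict

# Word count as a map phase producing (word, 1) pairs, followed by a
# reduceByKey-style sum per key; input data is hypothetical.
lines = ["the cat sat", "the cat ran"]
pairs = [(word, 1) for line in lines for word in line.split()]  # map phase

counts = defaultdict(int)
for word, n in pairs:          # reduceByKey: sum the values for each key
    counts[word] += n
print(dict(counts))
```
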
  11. Writing and Deploying Spark Applications
    • Spark Applications vs. Spark Shell
    • Creating the SparkContext
    • Building a Spark Application (Scala and Java)
    • Running a Spark Application
    • The Spark Application Web UI
    • Configuring Spark Properties
    • Logging
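Unlike the interactive shell, a standalone Spark application creates its own SparkContext and is launched with `spark-submit`. A sketch of a launch command, assembled as an argument list; the application name, script, and input path are hypothetical:

```python
# Launching a Spark application on YARN with spark-submit;
# the script name and input path are hypothetical.
submit_cmd = [
    "spark-submit",
    "--master", "yarn",        # run on the YARN cluster
    "--name", "CountLogs",     # hypothetical application name
    "count_logs.py",           # the application script
    "/loudacre/weblogs/*",     # its input argument
]
print(" ".join(submit_cmd))
```
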
  12. Parallel Programming with Spark
    • Review: Spark on a Cluster
    • RDD Partitions
    • Partitioning of File-based RDDs
    • HDFS and Data Locality
    • Executing Parallel Operations
    • Stages and Tasks
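The central idea here is that Spark splits an RDD into partitions and runs one task per partition, then combines the partial results. A plain-Python sketch of that decomposition under hypothetical data; Spark would schedule each chunk's work on a different executor:

```python
# Split a dataset into partitions, run the same operation on each
# partition independently, then combine the partial results.
data = list(range(10))
num_partitions = 3
partitions = [data[i::num_partitions] for i in range(num_partitions)]
results = [sum(chunk) for chunk in partitions]  # one "task" per partition
total = sum(results)                            # combine partial results
print(partitions, total)
```
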
  13. Spark Caching and Persistence
    • RDD Lineage
    • Caching Overview
    • Distributed Persistence
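To illustrate why caching matters: because an RDD only records its lineage, an uncached RDD is recomputed every time an action runs, while a persisted RDD is computed once. A plain-Python sketch of that difference, counting how often a (hypothetical) expensive function runs:

```python
# Count invocations of an "expensive" transformation to contrast
# recomputation per action (no cache) with computing once (cache).
calls = {"n": 0}

def expensive(x):
    calls["n"] += 1
    return x * x

data = range(4)

# Without caching: the transformation reruns for every action.
for _ in range(2):
    _ = [expensive(x) for x in data]
uncached_calls = calls["n"]          # computed twice over 4 elements

# With caching (a persist() analogue): materialize once, reuse twice.
calls["n"] = 0
cached = [expensive(x) for x in data]
for _ in range(2):
    _ = list(cached)
cached_calls = calls["n"]            # computed only once
print(uncached_calls, cached_calls)
```
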
  14. Common Patterns in Spark Data Processing
    • Common Spark Use Cases
    • Iterative Algorithms in Spark
    • Graph Processing and Analysis
    • Machine Learning
    • Example: k-means
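As a taste of the k-means example: each iteration assigns every point to its nearest center, then moves each center to the mean of its assigned points. A minimal one-dimensional plain-Python sketch with hypothetical data; in Spark, the assignment step would be a distributed map over the points:

```python
# 1-D k-means: assign points to the nearest center, then update each
# center to the mean of its cluster; data and k are hypothetical.
points = [1.0, 2.0, 9.0, 10.0, 11.0]
centers = [0.0, 8.0]

for _ in range(5):  # a few iterations; real code loops until convergence
    clusters = {i: [] for i in range(len(centers))}
    for p in points:  # assignment step
        nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
        clusters[nearest].append(p)
    centers = [sum(c) / len(c) if c else centers[i]  # update step
               for i, c in clusters.items()]
print(centers)
```
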
  15. Preview: Spark SQL
    • Spark SQL and the SQL Context
    • Creating DataFrames
    • Transforming and Querying DataFrames
    • Saving DataFrames
    • Comparing Spark SQL with Impala
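Conceptually, a Spark SQL DataFrame is a distributed table of named columns that supports relational operations. A plain-Python sketch of filter and select over rows held as dicts; in PySpark the equivalent would be `df.filter(...).select(...)` on a DataFrame (the rows below are hypothetical):

```python
# A DataFrame-style filter/select, sketched over rows as dicts.
rows = [
    {"name": "alice", "age": 34},
    {"name": "bob",   "age": 19},
]
adults = [r for r in rows if r["age"] >= 21]      # filter("age >= 21")
names  = [{"name": r["name"]} for r in adults]    # select("name")
print(names)
```
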
  16. Conclusion

Additional information


This course is designed for developers and engineers who have programming experience. Apache Spark examples and hands-on exercises are presented in Scala and Python, so the ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful. Prior knowledge of Hadoop is not required.

Difficulty level
Duration: 4 days

Participants will receive certificates of training completion signed by Cloudera. This course is also an excellent starting point for those working towards the CCP: Data Engineer certification. Although further study is required before taking the exam, this course covers many of the subjects tested in the CCP: Data Engineer exam.


Trainer: Certified Cloudera Instructor.

Additional information

After successfully completing this course, we recommend that participants attend Cloudera’s Developer Training for Spark and Hadoop II: Advanced Techniques course, which builds on the foundations taught here.


Price: 2180 EUR



Traditional training

Sessions organised by Compendium CE are usually held at our locations in Kraków and Warsaw, but also at venues designated by the client. The group participating in the training meets at a specific place and time with a trainer and actively takes part in laboratory sessions.

Dlearning training

You may participate from any place in the world. All you need is a computer (or a tablet or smartphone) connected to the Internet. Compendium CE provides each Distance Learning participant with software enabling connection to the Data Center. For more information, please visit our site.



Electronic materials

Electronic training materials are made available via the platform specified for your course (Skillpipe, eVantage, etc.) or as PDF documents.

Ctab materials

The price includes a ctab tablet together with electronic training materials, or traditional training materials and supplies provided electronically according to the manufacturer's specifications (in PDF or EPUB form). The materials provided are adapted for display on ctab tablets. For more information, check the ctab website.



Language of the training: English