Cloudera Training

Code: C-DTSaH

Training goals

Scala and Python developers will learn key concepts and gain the expertise needed to ingest and process data, and develop high-performance applications using Apache Spark 2.

This four-day hands-on training course delivers the key concepts and expertise developers need to use Apache Spark to develop high-performance parallel applications. Participants will learn how to use Spark SQL to query structured data and Spark Streaming to perform real-time processing on streaming data from a variety of sources. Developers will also practice writing applications that use core Spark to perform ETL processing and iterative algorithms. The course covers how to work with “big data” stored in a distributed file system and how to execute Spark applications on a Hadoop cluster. After taking this course, participants will be prepared to face real-world challenges and build applications that deliver faster and better decisions and interactive analysis, applied to a wide variety of use cases, architectures, and industries.
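
To give a flavor of the skills the course develops, here is a minimal Spark SQL sketch in Scala. It assumes Spark 2.x; the file path, view name, and column names are hypothetical illustrations, not part of the course materials.

    import org.apache.spark.sql.SparkSession

    object SparkSqlSketch {
      def main(args: Array[String]): Unit = {
        // Start (or reuse) a SparkSession, the Spark 2 entry point.
        val spark = SparkSession.builder.appName("SparkSqlSketch").getOrCreate()

        // Load structured data into a DataFrame; the schema is inferred from JSON.
        val people = spark.read.json("hdfs:///data/people.json")  // hypothetical path

        // Register a temporary view and query it with SQL.
        people.createOrReplaceTempView("people")
        spark.sql("SELECT name, age FROM people WHERE age >= 18 ORDER BY age").show()

        spark.stop()
      }
    }

The same query could equally be written with DataFrame column expressions, which the course covers alongside SQL.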

Course Objectives:

  • How the Apache Hadoop ecosystem fits in with the data processing lifecycle
  • How data is distributed, stored, and processed in a Hadoop cluster
  • How to write, configure, and deploy Apache Spark applications on a Hadoop cluster
  • How to use the Spark shell and Spark applications to explore, process, and analyze distributed data
  • How to query data using Spark SQL, DataFrames, and Datasets
  • How to use Spark Streaming to process a live data stream

Course outline

  1. Introduction
  2. Introduction to Apache Hadoop and the Hadoop Ecosystem 
    • Apache Hadoop Overview
    • Data Processing 
    • Introduction to the Hands-On Exercises
  3. Apache Hadoop File Storage
    • Apache Hadoop Cluster Components
    • HDFS Architecture
    • Using HDFS 
  4. Distributed Processing on an Apache Hadoop Cluster
    • YARN Architecture
    • Working With YARN
  5. Apache Spark Basics
    • What is Apache Spark?
    • Starting the Spark Shell
    • Using the Spark Shell 
    • Getting Started with Datasets and DataFrames
    • DataFrame Operations
  6. Working with DataFrames and Schemas
    • Creating DataFrames from Data Sources
    • Saving DataFrames to Data Sources
    • DataFrame Schemas 
    • Eager and Lazy Execution
  7. Analyzing Data with DataFrame Queries
    • Querying DataFrames Using Column Expressions
    • Grouping and Aggregation Queries
    • Joining DataFrames 
  8. RDD Overview
    • RDD Overview 
    • RDD Data Sources
    • Creating and Saving RDDs 
    • RDD Operations
  9. Transforming Data with RDDs
    • Writing and Passing Transformation Functions 
    • Transformation Execution
    • Converting Between RDDs and DataFrames 
  10. Aggregating Data with Pair RDDs (see the first sketch after this outline)
    • Key-Value Pair RDDs 
    • Map-Reduce
    • Other Pair RDD Operations 
  11. Querying Tables and Views with SQL
    • Querying Tables in Spark Using SQL 
    • Querying Files and Views
    • The Catalog API 
  12. Working with Datasets in Scala
    • Datasets and DataFrames 
    • Creating Datasets
    • Loading and Saving Datasets 
    • Dataset Operations
  13. Writing, Configuring, and Running Spark Applications
    • Writing a Spark Application 
    • Building and Running an Application
    • Application Deployment Mode 
    • The Spark Application Web UI
    • Configuring Application Properties 
  14. Spark Distributed Processing
    • Review: Apache Spark on a Cluster 
    • RDD Partitions
    • Example: Partitioning in Queries 
    • Stages and Tasks
    • Job Execution Planning 
    • Example: Catalyst Execution Plan
    • Example: RDD Execution Plan 
  15. Distributed Data Persistence
    • DataFrame and Dataset Persistence 
    • Persistence Storage Levels
    • Viewing Persisted RDDs 
  16. Common Patterns in Spark Data Processing
    • Common Apache Spark Use Cases 
    • Iterative Algorithms in Apache Spark
    • Machine Learning 
    • Example: k-means
  17. Introduction to Structured Streaming
    • Apache Spark Streaming Overview 
    • Creating Streaming DataFrames
    • Transforming DataFrames 
    • Executing Streaming Queries
  18. Structured Streaming with Apache Kafka (see the second sketch after this outline)
    • Overview
    • Receiving Kafka Messages
    • Sending Kafka Messages
  19. Aggregating and Joining Streaming DataFrames
    • Streaming Aggregation
    • Joining Streaming DataFrames
  20. Conclusion
  21. Message Processing with Apache Kafka
    • What Is Apache Kafka?
    • Apache Kafka Overview
    • Scaling Apache Kafka
    • Apache Kafka Cluster Architecture
    • Apache Kafka Command Line Tools
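
Two short Scala sketches follow to illustrate the outline above. The first corresponds to the pair-RDD material of chapters 8-10: the classic word-count pattern. It assumes Spark 2.x; the input path is a hypothetical placeholder, and the code is an illustrative sketch rather than an excerpt from the course labs.

    import org.apache.spark.sql.SparkSession

    object WordCountSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("WordCountSketch").getOrCreate()
        val sc = spark.sparkContext  // core-Spark entry point for RDDs

        val counts = sc.textFile("hdfs:///data/shakespeare.txt")  // hypothetical path
          .flatMap(line => line.split("\\W+"))  // split each line into words
          .filter(_.nonEmpty)
          .map(word => (word.toLowerCase, 1))   // key-value pair RDD
          .reduceByKey(_ + _)                   // map-reduce style aggregation by key

        counts.take(10).foreach(println)
        spark.stop()
      }
    }

Chapter 9 covers converting such RDDs to DataFrames when column-oriented queries become more convenient.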
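
The second sketch corresponds to chapters 17-19: a Structured Streaming query that receives Kafka messages and maintains a running aggregation. The broker address and topic name are hypothetical, and it assumes the spark-sql-kafka connector is on the application classpath.

    import org.apache.spark.sql.SparkSession

    object StreamingSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("StreamingSketch").getOrCreate()
        import spark.implicits._

        // Receive Kafka messages as a streaming DataFrame.
        val messages = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  // hypothetical broker
          .option("subscribe", "events")                        // hypothetical topic
          .load()

        // Kafka keys and values arrive as bytes; cast them to strings,
        // then maintain a running count of messages per key.
        val counts = messages
          .select($"key".cast("string").alias("key"),
                  $"value".cast("string").alias("value"))
          .groupBy($"key")
          .count()

        // Streaming aggregations use the "update" (or "complete") output mode;
        // "append" would require a watermark.
        val query = counts.writeStream
          .outputMode("update")
          .format("console")
          .start()

        query.awaitTermination()
      }
    }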

Additional information

Prerequisites

This course is designed for developers and engineers who have programming experience, but prior knowledge of Spark and Hadoop is not required. Apache Spark examples and hands-on exercises are presented in Scala and Python. The ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful.

Difficulty level
Duration: 4 days
Certificate

Participants will receive a certificate of course completion signed by Cloudera.

Upon completion of the course, attendees are encouraged to continue their studies and register for the CCA Spark and Hadoop Developer exam. Certification is a great differentiator: it helps establish you as a leader in the field, providing employers and customers with tangible evidence of your skills and expertise. See https://www.cloudera.com/about/training/certification/cca-spark.html for details.

Trainer

Certified Cloudera Instructor.


Price: 2700 EUR

Language of the training: English

Traditional training

Sessions organised by Compendium CE are usually held at our locations in Kraków and Warsaw, but they can also take place at venues designated by the client. The group participating in the training meets at a specific place and time with the trainer and actively participates in laboratory sessions.

Dlearning training

You may participate from any place in the world. All you need is a computer (or a tablet or smartphone) connected to the Internet. Compendium CE provides each Distance Learning participant with software enabling connection to the Data Center. For more information, please visit the dlearning.eu site.


Paper materials

The price includes standard materials issued in the form of printed books or in another paper form, depending on the arrangements with the courseware vendor.

Electronic materials

Electronic training materials are made available through a dedicated application (such as Skillpipe or eVantage) or as PDF documents, depending on the course.

Ctab materials

The price includes a ctab tablet and electronic training materials, or traditional training materials and supplies delivered electronically according to the vendor's specifications (in PDF or EPUB form). The materials provided are adapted for display on ctab tablets. For more information, check out the ctab website.
