Google Cloud Training

Training goals

Code: G-SDPD

This training is intended for big data practitioners who want to deepen their understanding of Dataflow in order to advance their data processing applications. Beginning with foundations, it explains how Apache Beam and Dataflow work together to meet your data processing needs without the risk of vendor lock-in. The section on developing pipelines covers how to convert your business logic into data processing applications that can run on Dataflow. The training culminates with a focus on operations, which reviews the most important lessons for operating a data application on Dataflow, including monitoring, troubleshooting, testing, and reliability.

Objectives: 

  • Demonstrate how Apache Beam and Dataflow work together to fulfill your organization’s data processing needs. 
  • Summarize the benefits of the Beam Portability Framework and enable it for your Dataflow pipelines.
  • Enable Shuffle and Streaming Engine, for batch and streaming pipelines respectively, for maximum performance.
  • Enable Flexible Resource Scheduling for more cost-efficient performance (both features are illustrated in the sketch after this list).
  • Select the right combination of IAM permissions for your Dataflow job. 
  • Implement best practices for a secure data processing environment. 
  • Select and tune the I/O of your choice for your Dataflow pipeline. 
  • Use schemas to simplify your Beam code and improve the performance of your pipeline. 
  • Develop a Beam pipeline using SQL and DataFrames. 
  • Perform monitoring, troubleshooting, testing, and CI/CD on Dataflow pipelines.
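
As a quick illustration of the service features named in the objectives above, the snippet below is a minimal sketch, assuming the Apache Beam Python SDK and the Dataflow runner; it is not part of the official course materials. The project, region, and bucket values are placeholders, and the --enable_streaming_engine and --flexrs_goal flag spellings should be verified against your SDK version.

    # Hypothetical example: pipeline options that enable Streaming Engine
    # (for streaming jobs) or Flexible Resource Scheduling (for batch jobs).
    # Project, region, and bucket names below are placeholders.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    streaming_options = PipelineOptions([
        "--runner=DataflowRunner",
        "--project=my-project",                 # placeholder
        "--region=us-central1",                 # placeholder
        "--temp_location=gs://my-bucket/tmp",   # placeholder
        "--streaming",
        "--enable_streaming_engine",            # offload streaming shuffle/state to the service
    ])

    batch_options = PipelineOptions([
        "--runner=DataflowRunner",
        "--project=my-project",
        "--region=us-central1",
        "--temp_location=gs://my-bucket/tmp",
        "--flexrs_goal=COST_OPTIMIZED",         # FlexRS: delayed, cheaper batch execution
    ])

    # A trivial batch pipeline submitted with the batch options; a streaming
    # pipeline would use streaming_options instead.
    with beam.Pipeline(options=batch_options) as p:
        (p
         | beam.Create(["hello", "dataflow"])
         | beam.Map(str.upper)
         | beam.Map(print))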

Audience: 

  • Data engineers
  • Data analysts and data scientists aspiring to develop data engineering skills

Course outline

  • Introduction
    • Introduce the course objectives.
    • Demonstrate how Apache Beam and Dataflow work together to fulfill your organization’s data processing needs.
  • Beam Portability
    • Summarize the benefits of the Beam Portability Framework.
    • Customize the data processing environment of your pipeline using custom containers.
    • Review use cases for cross-language transformations.
    • Enable the Portability framework for your Dataflow pipelines.
  • Separating Compute and Storage with Dataflow
    • Enable Shuffle and Streaming Engine, for batch and streaming pipelines respectively, for maximum performance.
    • Enable Flexible Resource Scheduling for more cost-efficient performance.
  • IAM, Quotas, and Permissions
    • Select the right combination of IAM permissions for your Dataflow job.
    • Determine your capacity needs by inspecting the relevant quotas for your Dataflow jobs.
  • Security
    • Select your zonal data processing strategy using Dataflow, depending on your data locality needs.
    • Implement best practices for a secure data processing environment.
  • Beam Concepts Review
    • Review the main Apache Beam concepts (Pipeline, PCollections, PTransforms, Runner, reading/writing, Utility PTransforms, side inputs, bundles, and DoFn); a minimal pipeline illustrating several of these concepts is sketched after this outline.
  • Windows, Watermarks, Triggers
    • Implement logic to handle your late data.
    • Review different types of triggers.
    • Review core streaming concepts (unbounded PCollections, windows).
  • Sources and Sinks
    • Write the I/O of your choice for your Dataflow pipeline.
    • Tune your source/sink transformation for maximum performance.
    • Create custom sources and sinks using SDF.
  • Schemas
    • Introduce schemas, which give developers a way to express structured data in their Beam pipelines.
    • Use schemas to simplify your Beam code and improve the performance of your pipeline (a short schema sketch follows this outline).
  • State and Timers
    • Identify use cases for state and timer API implementations.
    • Select the right type of state and timers for your pipeline.
  • Best Practices
    • Implement best practices for Dataflow pipelines.
  • Dataflow SQL and DataFrames
    • Develop a Beam pipeline using SQL and DataFrames.
  • Beam Notebooks
    • Prototype your pipeline in Python using Beam notebooks.
    • Use Beam magics to control the behavior of source recording in your notebook.
    • Launch a job to Dataflow from a notebook.
  • Monitoring
    • Navigate the Dataflow Job Details UI.
    • Interpret Job Metrics charts to diagnose pipeline regressions.
    • Set alerts on Dataflow jobs using Cloud Monitoring.
  • Logging and Error Reporting
    • Use the Dataflow logs and diagnostics widgets to troubleshoot pipeline issues.
  • Troubleshooting and Debug
    • Use a structured approach to debug your Dataflow pipelines.
    • Examine common causes for pipeline failures.
  • Performance
    • Understand performance considerations for pipelines.
    • Consider how the shape of your data can affect pipeline performance.
  • Testing and CI/CD
    • Review testing approaches for your Dataflow pipeline.
    • Review frameworks and features available to streamline your CI/CD workflow for Dataflow pipelines.
  • Reliability
    • Implement reliability best practices for your Dataflow pipelines.
  • Flex Templates
    • Use Flex Templates to standardize and reuse Dataflow pipeline code.
  • Summary
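
As a companion to the Beam Concepts Review and Windows, Watermarks, Triggers modules above, here is a minimal, hypothetical sketch of a Beam pipeline in Python touching several of the reviewed concepts: a Pipeline, PCollections, a DoFn applied via ParDo, fixed windows, and a trigger with allowed lateness for late data. It is an illustration written for this outline rather than course material; the input events, field layout, and window sizes are invented, and with this small bounded input no data actually arrives late.

    # Minimal sketch: core Beam concepts plus event-time windowing with a
    # late-data policy. Runs locally on the default DirectRunner.
    import apache_beam as beam
    from apache_beam.transforms.window import FixedWindows, TimestampedValue
    from apache_beam.transforms.trigger import (
        AccumulationMode, AfterProcessingTime, AfterWatermark)

    class ParseEvent(beam.DoFn):
        """A DoFn: parse 'user,score,timestamp' lines into timestamped elements."""
        def process(self, line):
            user, score, ts = line.split(",")
            yield TimestampedValue((user, int(score)), float(ts))

    events = [           # invented sample data
        "alice,3,1.0",
        "bob,5,2.0",
        "alice,2,65.0",  # lands in the next 60-second window
    ]

    with beam.Pipeline() as p:                               # a Pipeline
        (p
         | "Read" >> beam.Create(events)                     # produces a PCollection
         | "Parse" >> beam.ParDo(ParseEvent())               # PTransform wrapping a DoFn
         | "Window" >> beam.WindowInto(
               FixedWindows(60),                             # 60-second fixed windows
               trigger=AfterWatermark(late=AfterProcessingTime(60)),
               allowed_lateness=120,                         # keep windows open 2 min for late data
               accumulation_mode=AccumulationMode.ACCUMULATING)
         | "SumPerUser" >> beam.CombinePerKey(sum)           # per-key, per-window aggregation
         | "Print" >> beam.Map(print))                       # stand-in sink for the sketch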
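
Similarly, the Schemas module above can be previewed with a short, hypothetical sketch: elements created as beam.Row carry named fields, so later transforms can refer to those fields by name instead of by tuple position. The user/amount field names and values are placeholders, not course data.

    # Minimal sketch: a schema-aware PCollection built from beam.Row elements.
    import apache_beam as beam

    with beam.Pipeline() as p:
        (p
         | "Create" >> beam.Create([
               beam.Row(user="alice", amount=10.0),   # placeholder data
               beam.Row(user="bob", amount=5.0),
               beam.Row(user="alice", amount=2.5),
           ])
         # Schema-aware transforms can address fields by name.
         | "TotalPerUser" >> beam.GroupBy("user").aggregate_field(
               "amount", sum, "total_amount")
         | "Print" >> beam.Map(print))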

Additional information

Prerequisites

To get the most out of this course, participants should have completed the following courses: 

  • “Building Batch Data Pipelines” 
  • “Building Resilient Streaming Analytics Systems”

Duration: 3 days
Certificate

Participants will receive certificates signed by Google Cloud Platform.

Trainer

Authorized Google Cloud Platform Trainer


Price: 1500 EUR

Language of the training: English

Form of training

Traditional training

Sessions organised at Compendium CE are usually held at our locations in Kraków and Warsaw, but can also take place at venues designated by the client. The group participating in the training meets at a specific place and time with a trainer and actively takes part in lab sessions.

Dlearning training

You may participate from any place in the world. All you need is a computer (or a tablet or smartphone) connected to the Internet. Compendium CE provides each Distance Learning participant with the software needed to connect to the Data Center. For more information, please visit the dlearning.eu site.

Training materials

Paper materials

The price includes standard materials issued as printed books or in another form, depending on the arrangements with the manufacturer.

Electronic materials

Electronic training materials are made available through a dedicated application (Skillpipe, eVantage, etc.) or as PDF documents.

Ctab materials

The price includes a ctab tablet and electronic training materials, or traditional training materials and supplies provided electronically according to the manufacturer's specifications (in PDF or EPUB form). The materials provided are adapted for display on ctab tablets. For more information, check out the ctab website.
