
This class has passed.
Serverless Data Processing with Dataflow
January 1 @ 8:00 AM - 5:00 PM AEST
Serverless Data Processing with Dataflow
What You Will Learn
- Demonstrate how Apache Beam and Dataflow work together to fulfill your organization’s data processing needs.
- Summarize the benefits of the Beam Portability Framework and enable it for your Dataflow pipelines.
- Enable Shuffle and Streaming Engine, for batch and streaming pipelines respectively, for maximum performance.
- Enable Flexible Resource Scheduling for more cost-efficient performance.
- Select the right combination of IAM permissions for your Dataflow job.
- Implement best practices for a secure data processing environment.
- Select and tune the I/O of your choice for your Dataflow pipeline.
- Use schemas to simplify your Beam code and improve the performance of your pipeline.
- Develop a Beam pipeline using SQL and DataFrames.
- Perform monitoring, troubleshooting, testing and CI/CD on Dataflow pipelines.
What's Included?
Instructor-Led Live Training
An instructor will answer your questions.
Official Google Cloud Content
Course content reflects Google Cloud's latest official curriculum.
Hands-on Labs
Real-world hands-on labs provided by Qwiklabs and supported by the instructor.
Certificate of Completion
Receive an official certificate on completion of 80% of the labs.
Who's this course for?
- Data Engineers
- Data Analysts and Data Scientists aspiring to develop Data Engineering skills
Level
- Advanced
Language
- Delivered in English
Duration
- 3 x 8-hour sessions
Prerequisites
- Completed “Building Batch Data Pipelines”
- Completed “Building Resilient Streaming Analytics Systems”
Products
- Dataflow
- Cloud Operations
Course Content
Topics
- Course Introduction
- Beam and Dataflow Refresher
Objectives
- Introduce the course objectives.
- Demonstrate how Apache Beam and Dataflow work together to fulfill your organization’s data processing needs.
Topics
- Beam Portability
- Runner v2
- Container Environments
- Cross-Language Transforms
Objectives
- Summarize the benefits of the Beam Portability Framework.
- Customize the data processing environment of your pipeline using custom containers.
- Review use cases for cross-language transformations.
- Enable the Portability framework for your Dataflow pipelines (see the options sketch after this module).
Activities
- Quiz
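To make the last objective concrete, here is a minimal sketch of the pipeline options involved, assuming a hypothetical project and container image; the flag names (use_runner_v2, sdk_container_image) reflect recent Beam Python SDKs, so verify them against your version.

```python
# Sketch: opt in to Runner v2 (the portable runner) with a custom SDK
# container image. Project, region, bucket, and image names are placeholders.
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                                        # hypothetical
    region="us-central1",
    temp_location="gs://my-bucket/tmp",                          # hypothetical
    experiments=["use_runner_v2"],                               # enable the portability framework
    sdk_container_image="gcr.io/my-project/my-beam-sdk:latest",  # custom worker container
)
```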
Topics
- Dataflow
- Dataflow Shuffle Service
- Dataflow Streaming Engine
- Flexible Resource Scheduling
Objectives
- Enable Shuffle and Streaming Engine, for batch and streaming pipelines respectively, for maximum performance.
- Enable Flexible Resource Scheduling for more cost-efficient performance (an options sketch follows this module).
Activities
- Quiz
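A rough illustration of enabling these services from the Beam Python SDK; project and region are placeholders, and on current SDKs Dataflow Shuffle is already the default for batch jobs in supported regions.

```python
# Sketch: service options for a streaming pipeline (Streaming Engine) and a
# batch pipeline (FlexRS). All resource names are hypothetical.
from apache_beam.options.pipeline_options import PipelineOptions

streaming_options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    streaming=True,
    enable_streaming_engine=True,   # offload shuffle/state to the service
)

batch_options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    flexrs_goal="COST_OPTIMIZED",   # FlexRS: delayed scheduling for lower cost
)
```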
Topics
- IAM
- Quota
Objectives
- Select the right combination of IAM permissions for your Dataflow job.
- Determine your capacity needs by inspecting the relevant quotas for your Dataflow jobs.
Activities
- Quiz
Topics
- Data Locality
- Shared VPC
- Private IPs
- CMEK
Objectives
- Select your zonal data processing strategy using Dataflow, depending on your data locality needs.
- Implement best practices for a secure data processing environment (a pipeline-options sketch follows this module).
Activities
- Hands-on lab and quiz
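A minimal sketch of the security-related pipeline options this module covers, assuming hypothetical project, subnetwork, and key names:

```python
# Sketch: run workers on private IPs inside a Shared VPC subnet, in a chosen
# region, with a customer-managed encryption key (CMEK).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                       # hypothetical service project
    region="australia-southeast1",              # keeps processing close to the data
    temp_location="gs://my-bucket/tmp",
    subnetwork=(                                # Shared VPC subnet in the host project
        "https://www.googleapis.com/compute/v1/projects/host-project/"
        "regions/australia-southeast1/subnetworks/my-subnet"),
    use_public_ips=False,                       # workers get private IPs only
    dataflow_kms_key=(                          # CMEK instead of Google-managed keys
        "projects/my-project/locations/australia-southeast1/"
        "keyRings/my-ring/cryptoKeys/my-key"),
)

with beam.Pipeline(options=options) as p:
    _ = p | beam.Create(["hello"]) | beam.Map(print)
```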
Topics
- Beam Basics
- Utility Transforms
- DoFn Lifecycle
Objectives
- Review the main Apache Beam concepts (Pipeline, PCollections, PTransforms, Runner, reading/writing, utility PTransforms, side inputs), bundles, and the DoFn lifecycle (a minimal pipeline sketch follows this module).
Activities
- Hands-on lab and quiz
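The sketch below ties the reviewed concepts together in one small, locally runnable pipeline (DirectRunner); all names and values are illustrative.

```python
# Sketch: a Pipeline with two PCollections, a DoFn using lifecycle methods,
# and a side input passed to ParDo.
import apache_beam as beam

class FormatFn(beam.DoFn):
    def setup(self):
        # Runs once per DoFn instance, e.g. to open client connections.
        self.prefix = "item"

    def process(self, element, multiplier):
        # 'multiplier' arrives as a side input.
        yield f"{self.prefix}: {element * multiplier}"

    def teardown(self):
        # Runs before the instance is discarded.
        pass

with beam.Pipeline() as p:
    multiplier = p | "Mult" >> beam.Create([10])
    (p
     | "Nums" >> beam.Create([1, 2, 3])
     | "Format" >> beam.ParDo(FormatFn(), beam.pvalue.AsSingleton(multiplier))
     | beam.Map(print))
```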
Topics
- Windows
- Watermarks
- Triggers
Objectives
- Review core streaming concepts (unbounded PCollections, windows).
- Review different types of triggers.
- Implement logic to handle your late data (a windowing and trigger sketch follows this module).
Activities
- Hands-on lab and quiz
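A compact example of these streaming concepts, with illustrative window sizes, firing intervals, and lateness values:

```python
# Sketch: 1-minute fixed windows with early/late firings and allowed lateness.
import apache_beam as beam
from apache_beam import window
from apache_beam.transforms.trigger import (
    AccumulationMode, AfterProcessingTime, AfterWatermark)

with beam.Pipeline() as p:
    (p
     | beam.Create([1, 2, 3, 4])
     # Attach event timestamps (seconds) so windowing has something to act on.
     | beam.Map(lambda x: window.TimestampedValue(x, x * 30))
     | beam.WindowInto(
         window.FixedWindows(60),                  # 1-minute windows
         trigger=AfterWatermark(                   # fire when the watermark passes...
             early=AfterProcessingTime(30),        # ...plus early firings every 30s
             late=AfterProcessingTime(60)),        # ...plus late firings for late data
         allowed_lateness=600,                     # keep windows open 10 min for late data
         accumulation_mode=AccumulationMode.ACCUMULATING)
     | beam.CombineGlobally(beam.combiners.CountCombineFn()).without_defaults()
     | beam.Map(print))
```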
Topics
- Sources and Sinks
- Text IO and File IO
- BigQuery IO
- PubSub IO
- Kafka IO
- Bigtable IO
- Avro IO
- Splittable DoFn
Objectives
- Write the I/O of your choice for your Dataflow pipeline (a read/write sketch follows this module).
- Tune your source/sink transformation for maximum performance.
- Create custom sources and sinks using SDF.
Activities
- Quiz
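A minimal read-transform-write sketch using two of the connectors listed above; the bucket, project, dataset, and table names are placeholders:

```python
# Sketch: read text files from Cloud Storage, wrap each line, write to BigQuery.
import apache_beam as beam
from apache_beam.io import ReadFromText, WriteToBigQuery

with beam.Pipeline() as p:
    (p
     | "Read" >> ReadFromText("gs://my-bucket/input/*.csv")   # hypothetical path
     | "Parse" >> beam.Map(lambda line: {"raw": line})
     | "Write" >> WriteToBigQuery(
         "my-project:my_dataset.my_table",                    # hypothetical table
         schema="raw:STRING",
         write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
         create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))
```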
Topics
- Beam Schemas
- Code Examples
Objectives
- Introduce schemas, which give developers a way to express structured data in their Beam pipelines.
- Use schemas to simplify your Beam code and improve the performance of your pipeline (see the sketch after this module).
Activities
- Hands-on lab and quiz
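One way the Python SDK attaches a schema is through a typing.NamedTuple, sketched below; the Purchase type and its fields are invented for illustration:

```python
# Sketch: a schema'd PCollection lets transforms refer to fields by name.
import typing
import apache_beam as beam

class Purchase(typing.NamedTuple):
    user_id: str
    amount: float

beam.coders.registry.register_coder(Purchase, beam.coders.RowCoder)

with beam.Pipeline() as p:
    (p
     | beam.Create([Purchase("alice", 9.99), Purchase("bob", 3.50),
                    Purchase("alice", 1.25)]).with_output_types(Purchase)
     | beam.GroupBy("user_id")                        # fields referenced by name
         .aggregate_field("amount", sum, "total")
     | beam.Map(print))
```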
Topics
- State API
- Timer API
- Summary
Objectives
- Identify use cases for state and timer API implementations.
- Select the right type of state and timers for your pipeline (a stateful DoFn sketch follows this module).
Activities
- Quiz
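An illustrative stateful DoFn pairing bag state with a watermark timer; the buffer-then-flush pattern and all names here are assumptions for the sketch, not the course's lab code:

```python
# Sketch: buffer values per key in state, flush when the window's watermark
# timer fires. Stateful processing requires keyed input.
import apache_beam as beam
from apache_beam.coders import VarIntCoder
from apache_beam.transforms.timeutil import TimeDomain
from apache_beam.transforms.userstate import BagStateSpec, TimerSpec, on_timer

class BufferThenFlush(beam.DoFn):
    BUFFER = BagStateSpec("buffer", VarIntCoder())
    FLUSH = TimerSpec("flush", TimeDomain.WATERMARK)

    def process(self, element,
                buffer=beam.DoFn.StateParam(BUFFER),
                flush=beam.DoFn.TimerParam(FLUSH),
                w=beam.DoFn.WindowParam):
        _, value = element
        buffer.add(value)
        flush.set(w.end)   # fire once the watermark reaches the window's end

    @on_timer(FLUSH)
    def flush_buffer(self, buffer=beam.DoFn.StateParam(BUFFER)):
        yield sum(buffer.read())
        buffer.clear()

with beam.Pipeline() as p:
    (p
     | beam.Create([("a", 1), ("a", 2), ("b", 5)])
     | beam.ParDo(BufferThenFlush())
     | beam.Map(print))
```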
Topics
- Schemas
- Handling Unprocessable Data
- Error Handling
- AutoValue Code Generator
- JSON Data Handling
- Utilize DoFn Lifecycle
- Pipeline Optimizations
Objectives
- Implement best practices for Dataflow pipelines (a dead-letter example follows this module).
Activities
- Hands-on lab and quiz
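One of these best practices, the dead-letter pattern for unprocessable data, might look roughly like this in the Python SDK (tag and element values are illustrative):

```python
# Sketch: route records that fail parsing to a side output instead of
# crashing the pipeline; the side output can be written out for inspection.
import json
import apache_beam as beam
from apache_beam.pvalue import TaggedOutput

class ParseJson(beam.DoFn):
    def process(self, line):
        try:
            yield json.loads(line)
        except (ValueError, TypeError):
            yield TaggedOutput("dead_letter", line)   # unprocessable input

with beam.Pipeline() as p:
    results = (p
               | beam.Create(['{"id": 1}', "not json"])
               | beam.ParDo(ParseJson()).with_outputs("dead_letter", main="parsed"))
    results.parsed | "Good" >> beam.Map(print)
    results.dead_letter | "Bad" >> beam.Map(lambda x: print("DEAD:", x))
```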
Topics
- Dataflow and Beam SQL
- Windowing in SQL
- Beam DataFrames
Objectives
- Develop a Beam pipeline using SQL and DataFrames (see the sketch after this module).
Activities
- Hands-on lab and quiz
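A hedged sketch of Beam SQL via SqlTransform; it is a cross-language transform, so running it locally needs a Java runtime for the expansion service, and the Order type is invented. The DataFrame API (apache_beam.dataframe.convert.to_dataframe) offers a similar declarative route in pure Python.

```python
# Sketch: aggregate a schema'd PCollection with SQL; PCOLLECTION refers to
# the transform's input.
import typing
import apache_beam as beam
from apache_beam.transforms.sql import SqlTransform

class Order(typing.NamedTuple):
    user_id: str
    amount: float

beam.coders.registry.register_coder(Order, beam.coders.RowCoder)

with beam.Pipeline() as p:
    (p
     | beam.Create([Order("alice", 9.99), Order("alice", 1.50),
                    Order("bob", 3.50)]).with_output_types(Order)
     | SqlTransform(
         "SELECT user_id, SUM(amount) AS total FROM PCOLLECTION GROUP BY user_id")
     | beam.Map(print))
```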
Topics
- Beam Notebooks
Objectives
- Prototype your pipeline in Python using Beam notebooks (sketched after this module).
- Launch a job to Dataflow from a notebook.
Activities
- Quiz
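The notebook workflow, roughly: build the pipeline on the InteractiveRunner, then inspect intermediate PCollections as you go (names below are illustrative).

```python
# Sketch: interactive prototyping as done inside a Beam notebook.
import apache_beam as beam
import apache_beam.runners.interactive.interactive_beam as ib
from apache_beam.runners.interactive.interactive_runner import InteractiveRunner

p = beam.Pipeline(InteractiveRunner())
words = p | beam.Create(["dataflow", "beam", "beam"])
counts = words | beam.combiners.Count.PerElement()

ib.show(counts)           # materialize and display the PCollection in the notebook
df = ib.collect(counts)   # or pull it into a pandas DataFrame
```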
Topics
- Job List
- Job Info
- Job Graph
- Job Metrics
- Metrics Explorer
Objectives
- Navigate the Dataflow Job Details UI.
- Interpret Job Metrics charts to diagnose pipeline regressions.
- Set alerts on Dataflow jobs using Cloud Monitoring.
Activities
- Quiz
Topics
- Logging
- Error Reporting
Objectives
- Use the Dataflow logs and diagnostics widgets to troubleshoot pipeline issues.
Activities
- Quiz
Topics
- Troubleshooting Workflow
- Types of Troubles
Objectives
- Use a structured approach to debug your Dataflow pipelines.
- Examine common causes for pipeline failures.
Activities
- Hands-on lab and quiz
Topics
- Pipeline Design
- Data Shape
- Sources, Sinks, and External Systems
- Shuffle and Streaming Engine
Objectives
- Understand performance considerations for pipelines.
- Consider how the shape of your data can affect pipeline performance.
Activities
- Quiz
Topics
- Testing and CI/CD Overview
- Unit Testing
- Integration Testing
- Artifact Building
- Deployment
Objectives
- Review testing approaches for your Dataflow pipeline (a unit-test sketch follows this module).
- Review frameworks and features available to streamline your CI/CD workflow for Dataflow pipelines.
Activities
- Hands-on labs and quiz
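A minimal unit-test sketch using Beam's testing utilities; the transform under test is a stand-in:

```python
# Sketch: TestPipeline runs the pipeline locally; assert_that/equal_to
# verify the output PCollection.
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to

def test_uppercase():
    with TestPipeline() as p:
        output = (p
                  | beam.Create(["a", "b"])
                  | beam.Map(str.upper))
        assert_that(output, equal_to(["A", "B"]))
```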
Topics
- Introduction to Reliability
- Monitoring
- Geolocation
- Disaster Recovery
- High Availability
Objectives
- Implement reliability best practices for your Dataflow pipelines.
Activities
- Quiz
Topics
- Classic Templates
- Flex Templates
- Using Flex Templates
- Google-provided Templates
Objectives
- Use Flex Templates to standardize and reuse Dataflow pipeline code.
Activities
- Hands-on labs and quiz
Topics
- Summary
Objectives
- Provide a quick recap of the training topics.
Sign up to be notified of upcoming classes.
Have Questions?
No worries. Send us a quick message and we’ll be happy to answer any questions you have.

Ref: T-SDPDF-A-01
Details
- Date: January 1
- Time: 8:00 AM - 5:00 PM AEST
- Class Tags:
- Course: Serverless Data Processing with Dataflow
- https://axalon.io/training/google-cloud/data-engineering-and-analytics/serverless-dataprocessing-with-dataflow/
Instructor
- Axalon Academy
- Email: training@axalon.io
Other
- Competencies: Advanced
- Learning Path: Database Engineer
- Event Type: Live Virtual Training Day