Cloudera Developer Training

Price: $2,012 USD
Course Code: SPARK-HADOOP-OD-CVS
Available Formats: Self-Paced
Reviews: 6119 (4.5/5)

*** This offering is restricted to employees of Aetna/CVS Health only. ***

This OnDemand offering provides you with a 180-day subscription that begins on the date of purchase.

Skills Gained

Through instructor-led discussion and interactive, hands-on exercises, participants will learn Apache Spark and how it integrates with the entire Hadoop ecosystem, including:

  • How data is distributed, stored, and processed in a Hadoop cluster
  • How to use Sqoop and Flume to ingest data
  • How to process distributed data with Apache Spark
  • How to model structured data as tables in Impala and Hive
  • How to choose the best data storage format for different data usage patterns
  • Best practices for data storage

Prerequisites

This course is designed for developers and engineers who have programming experience. Apache Spark examples and hands-on exercises are presented in Scala and Python, so the ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful. Prior knowledge of Hadoop is not required.

Course Details

Subscription Details

This OnDemand offering provides you with a 180-day subscription that begins on the date of purchase. While the subscription is active, you will have unlimited access to the course training materials, which include recorded course lectures and demonstrations, assessment components, and hands-on exercise instructions. You will also receive 15 runtime hours of access to the online hands-on exercise environment, accessible through a web browser. You can start the exercise environment when you are ready to use it, stop or pause it when you are done for the time being, and return anytime to continue where you left off. The exercise environment remains accessible until you have used the runtime hours or the subscription period ends, whichever occurs first.

Course Outline

Introduction to Hadoop and the Hadoop Ecosystem

  • Problems with Traditional Large-Scale Systems
  • Hadoop!
  • Data Storage and Ingest
  • Data Processing
  • Data Analysis and Exploration
  • Hadoop Architecture and HDFS
  • Importing Relational Data with Apache Sqoop
  • Introduction to Impala and Hive
  • Modeling and Managing Data with Impala and Hive
  • Data Formats
  • Data File Partitioning
  • Other Ecosystem Tools
  • Introduction to the Hands-On Exercises

Hadoop Architecture and HDFS

  • Distributed Processing on a Cluster
  • Storage: HDFS Architecture
  • Storage: Using HDFS
  • Resource Management: YARN Architecture
  • Resource Management: Working with YARN
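
To give a flavor of this module, here is a minimal PySpark sketch that reads from and writes back to HDFS (a sketch only; the host, port, and paths are invented, not taken from the course materials):

    from pyspark import SparkContext

    sc = SparkContext(appName="HDFSExample")
    # Read a text file stored in HDFS as an RDD of lines
    logs = sc.textFile("hdfs://namenode:8020/user/training/weblogs")
    errors = logs.filter(lambda line: "ERROR" in line)
    print(errors.count())  # the action triggers distributed execution on the cluster
    # Write the filtered result back to HDFS
    errors.saveAsTextFile("hdfs://namenode:8020/user/training/weblog_errors")
    sc.stop()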

Importing Relational Data with Apache Sqoop

  • Sqoop Overview
  • Basic Imports and Exports
  • Limiting Results
  • Improving Sqoop’s Performance
  • Sqoop 2

Introduction to Impala and Hive

  • Introduction to Impala and Hive
  • Why Use Impala and Hive?
  • Querying Data with Impala and Hive
  • Comparing Hive and Impala to Traditional Databases

Modeling and Managing Data with Impala and Hive

  • Data Storage Overview
  • Creating Databases and Tables
  • Loading Data into Tables
  • HCatalog
  • Impala Metadata Caching
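
To illustrate the kind of table management this module covers, a minimal sketch using PySpark's HiveContext (the course works in the Impala and Hive shells directly; the table name, columns, and path here are invented):

    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    sc = SparkContext(appName="HiveTables")
    hc = HiveContext(sc)  # shares the Hive metastore used by Impala and Hive
    hc.sql("CREATE TABLE IF NOT EXISTS accounts (id INT, name STRING) "
           "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','")
    # Move an existing HDFS file into the table's storage directory
    hc.sql("LOAD DATA INPATH '/user/training/accounts.csv' INTO TABLE accounts")
    hc.sql("SELECT COUNT(*) FROM accounts").show()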

Data Formats

  • Selecting a File Format
  • Hadoop Tool Support for File Formats
  • Avro Schemas
  • Using Avro with Impala, Hive, and Sqoop
  • Avro Schema Evolution
  • Compression
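
As a taste of this module's material, a short sketch showing that an Avro schema is plain JSON and that compression is chosen independently of file format (the schema fields and output path are invented):

    from pyspark import SparkContext

    sc = SparkContext(appName="Formats")

    # An Avro schema is ordinary JSON; adding a field with a default (like "zip"
    # below) is the kind of change Avro schema evolution permits.
    account_schema = """
    {"type": "record", "name": "Account",
     "fields": [{"name": "id",   "type": "int"},
                {"name": "name", "type": "string"},
                {"name": "zip",  "type": ["null", "string"], "default": null}]}
    """

    # Compression is orthogonal to format; e.g. gzip-compressed text output:
    sc.parallelize(["a,1", "b,2"]).saveAsTextFile(
        "/user/training/out",
        compressionCodecClass="org.apache.hadoop.io.compress.GzipCodec")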

Data File Partitioning

  • Partitioning Overview
  • Partitioning in Impala and Hive
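
A minimal sketch of what partitioning buys you, again via PySpark's HiveContext (table and column names are invented):

    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    sc = SparkContext(appName="Partitioning")
    hc = HiveContext(sc)
    # Each distinct log_date value becomes its own HDFS subdirectory, so a query
    # that filters on log_date reads only the matching directories.
    hc.sql("CREATE TABLE IF NOT EXISTS weblogs (ip STRING, url STRING) "
           "PARTITIONED BY (log_date STRING)")
    hc.sql("SELECT COUNT(*) FROM weblogs WHERE log_date = '2019-01-01'").show()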

Capturing Data with Apache Flume

  • What is Apache Flume?
  • Basic Flume Architecture
  • Flume Sources
  • Flume Sinks
  • Flume Channels
  • Flume Configuration

Spark Basics

  • What is Apache Spark?
  • Using the Spark Shell
  • RDDs (Resilient Distributed Datasets)
  • Functional Programming in Spark
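
The functional style this module introduces looks like the following in the pyspark shell, where the SparkContext sc is created for you (the file name is invented):

    lines = sc.textFile("mydata.txt")                 # RDD of lines
    upper = lines.map(lambda line: line.upper())      # transformation: lazy
    starts_i = upper.filter(lambda line: line.startswith("I"))
    print(starts_i.count())                           # action: triggers execution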

Working with RDDs in Spark

  • Creating RDDs
  • Other General RDD Operations
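
A shell-style sketch of RDD creation and a few general operations (the file name is invented; sc is the shell's SparkContext):

    nums = sc.parallelize([1, 2, 3, 4])              # RDD from a local collection
    words = sc.textFile("words.txt") \
              .flatMap(lambda line: line.split())    # RDD from a file, one word per element
    print(nums.map(lambda x: x * x).collect())       # [1, 4, 9, 16]
    print(words.distinct().count())                  # count of unique words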

Writing and Deploying Spark Applications

  • Spark Applications vs. Spark Shell
  • Creating the SparkContext
  • Building a Spark Application (Scala and Java)
  • Running a Spark Application
  • The Spark Application Web UI
  • Configuring Spark Properties
  • Logging
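
A complete application of the shape this module describes, in the Spark 1.x style the outline implies (the file and application names are invented); it would be launched with spark-submit rather than from the shell:

    # wordcount.py -- submit with: spark-submit --master yarn wordcount.py <input_dir>
    import sys
    from pyspark import SparkConf, SparkContext

    if __name__ == "__main__":
        conf = SparkConf().setAppName("WordCount")  # properties can also come from --conf
        sc = SparkContext(conf=conf)                # applications create their own context
        counts = (sc.textFile(sys.argv[1])
                    .flatMap(lambda line: line.split())
                    .map(lambda word: (word, 1))
                    .reduceByKey(lambda a, b: a + b))
        for word, count in counts.take(5):
            print(word, count)
        sc.stop()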

Parallel Processing in Spark

  • Review: Spark on a Cluster
  • RDD Partitions
  • Partitioning of File-Based RDDs
  • HDFS and Data Locality
  • Executing Parallel Operations
  • Stages and Tasks
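
How partitions surface in the API, shell-style (the path is invented; sc is provided by the shell):

    rdd = sc.textFile("weblogs/*", minPartitions=4)  # hint: at least 4 partitions
    print(rdd.getNumPartitions())                    # one task per partition per stage
    bigger = rdd.repartition(8)                      # shuffle into more partitions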

Spark RDD Persistence

  • RDD Lineage
  • RDD Persistence Overview
  • Distributed Persistence
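
Lineage and persistence in a shell-style sketch (the file name is invented):

    from pyspark import StorageLevel

    mapped = sc.textFile("mydata.txt").map(lambda line: line.upper())
    print(mapped.toDebugString())                  # shows the RDD's lineage
    mapped.persist(StorageLevel.MEMORY_AND_DISK)   # keep results across actions
    print(mapped.count())                          # first action computes and persists
    print(mapped.count())                          # second action reads the persisted data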

Common Patterns in Spark Data Processing

  • Common Spark Use Cases
  • Iterative Algorithms in Spark
  • Graph Processing and Analysis
  • Machine Learning
  • Example: k-means
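
The k-means example in miniature, using Spark MLlib (shell-style; the data points are made up):

    from pyspark.mllib.clustering import KMeans

    points = sc.parallelize([[0.0, 0.0], [1.0, 1.0], [9.0, 8.0], [8.0, 9.0]])
    model = KMeans.train(points, k=2, maxIterations=10)
    print(model.clusterCenters)          # the two learned cluster centers
    print(model.predict([0.5, 0.5]))     # cluster index for a new point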

DataFrames and Spark SQL

  • Spark SQL and the SQL Context
  • Creating DataFrames
  • Transforming and Querying DataFrames
  • Saving DataFrames
  • DataFrames and RDDs
  • Comparing Spark SQL, Impala, and Hive-on-Spark
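
Finally, a shell-style sketch of the DataFrame API in the Spark 1.x form the outline names (SQLContext; the file and column names are invented):

    from pyspark.sql import SQLContext

    sqlContext = SQLContext(sc)
    people = sqlContext.read.json("people.json")    # schema inferred from the JSON
    people.printSchema()
    adults = people.filter(people.age >= 18).select("name", "age")
    adults.show()
    people.registerTempTable("people")              # query the same data with SQL
    sqlContext.sql("SELECT name FROM people WHERE age >= 18").show()
    adults.write.parquet("adults_parquet")          # save as Parquet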