This four-day hands-on training course delivers the key concepts and expertise participants need to ingest and process data on a Hadoop cluster using the most up-to-date tools and techniques. Employing Hadoop ecosystem projects such as Spark (including Spark Streaming and Spark SQL), Flume, Kafka, and Sqoop, this training course is the best preparation for the real-world challenges faced by Hadoop developers.
Through expert-led discussion and interactive, hands-on exercises, participants will learn how to:
- Distribute, store, and process data in a Hadoop cluster
- Write, configure, and deploy Apache Spark applications on a Hadoop cluster
- Use the Spark shell for interactive data analysis
- Process and query structured data using Spark SQL
- Use Spark Streaming to process a live data stream
- Use Flume and Kafka to ingest data for Spark Streaming
This course is designed for developers and engineers who have programming experience; prior knowledge of Hadoop is not required.
- Apache Spark examples and hands-on exercises are presented in Scala and Python; the ability to program in one of these languages is required
- Basic familiarity with the Linux command line is assumed
- Basic knowledge of SQL is helpful