Apache Zeppelin - Big Data Visualization Tool
Apache Zeppelin - Big Data Visualization Tool for Big Data Engineers: An Open-Source (Free) Tool
Development, Data Science, Big Data
Lectures - 24
Duration - 1.5 hours
Lifetime Access
30-days Money-Back Guarantee
Course Description
Apache Zeppelin - Big Data Visualization Tool for Big Data Engineers: an open-source (free) tool for data visualization.
Learn the latest Big Data technology - Apache Zeppelin - and use it as one of the most capable data visualization tools for big data!
One of the most valuable technology skills is the ability to analyze huge data sets, and this course is specifically designed to introduce you to one of the best tools for this task: Apache Zeppelin. Data-driven companies of all sizes use notebook tools like Apache Zeppelin to explore and solve their big data problems.
Master Big Data Visualization with Apache Zeppelin.
Various types of interpreters integrate Zeppelin with the wider big data ecosystem.
Apache Zeppelin provides a web-based notebook with 20-plus interpreters to interact with, and it facilitates collaboration through its web UI. Zeppelin supports data ingestion, data discovery, data analysis, and data visualization.
Integrating interpreters is simple and seamless.
Resulting data can be exported to or stored in various sinks, explored with the built-in visualizations, and analyzed with pivot charts.
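As a rough sketch of how interpreter integration looks in practice, a Zeppelin note mixes languages by starting each paragraph with an interpreter directive. The file path and table name below are illustrative, not from the course:

```
%spark
// Scala paragraph: read a CSV into a DataFrame and expose it to SQL
val df = spark.read.option("header", "true").csv("/data/sales.csv")
df.createOrReplaceTempView("sales")

%sql
-- SQL paragraph: the result set renders with Zeppelin's table/chart/pivot controls
SELECT region, SUM(amount) AS total FROM sales GROUP BY region

%python
# Python paragraph: runs in the Python interpreter of the same note
print("Hello from the Python interpreter")
```

Each paragraph runs in its own interpreter, but they can share data through Spark temp views as above.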
This course introduces every aspect of visualization, from story to numbers, to architecture, to code. Tell your story with charts on the web; good visualization always reflects the reality of the data.
Goals
- Data Ingestion in Zeppelin Environment
- Configuring Interpreter in Zeppelin
- How to use Zeppelin to process data in Spark Scala, Spark Python, SQL, and MySQL
- Data Discovery
- Data Analytics in Zeppelin
- Data Visualization
- Pivot Chart
- Dynamic Forms
- Various types of interpreters for integrating with the big data ecosystem
- Visualization of results from big data
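One of the goals above is the pivot chart. Zeppelin builds pivot aggregations for you in the UI, but to illustrate what happens under the hood, here is a minimal Python sketch (the rows are made-up toy data, not from the course):

```python
from collections import defaultdict

# Toy rows of (region, product, amount) -- illustrative data only
rows = [
    ("East", "A", 10), ("East", "B", 5),
    ("West", "A", 7), ("West", "B", 3), ("East", "A", 2),
]

# Pivot: regions as rows, products as columns, SUM(amount) as the cell value
pivot = defaultdict(lambda: defaultdict(int))
for region, product, amount in rows:
    pivot[region][product] += amount

print(dict(pivot["East"]))  # {'A': 12, 'B': 5}
```

In Zeppelin, the same grouping and aggregation is configured by dragging fields into the Keys, Groups, and Values boxes of a chart.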
Prerequisites
- Basics of Data Analytics
- Big data basics are an added advantage
- Basics of SQL queries
- Basics of different data visualization techniques
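For the SQL prerequisite, the level assumed is roughly a GROUP BY aggregation like the one below. This sketch uses Python's built-in sqlite3 as a lightweight stand-in for the MySQL connectivity covered in the course; the table and data are invented for illustration:

```python
import sqlite3

# In-memory database standing in for a remote MySQL instance
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 10), ("West", 7), ("East", 5)])

# The kind of query a %sql paragraph would visualize in Zeppelin
for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"):
    print(region, total)  # East 15 / West 7
```

In the course itself, the same style of query runs against a remote MySQL database through Zeppelin's JDBC interpreter.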

Curriculum
Check out the detailed breakdown of what’s inside the course
Introduction
20 Lectures
- Introduction 03:26
- What is Apache Zeppelin 04:08
- Installation Steps on Linux machines
- (Latest) Installing Apache Zeppelin (0.10.1)
- (Hands on) Installation Steps on Ubuntu 20.04 05:31
- Regarding IBM Skills Network 01:06
- (Optional) Free Account creation in IBM Skills Network Labs 01:51
- (Optional) Launch Apache Zeppelin in IBM Skills Network Labs 02:32
- (Optional) Loading Data into IBM Skills Developer Lab 01:52
- Spark with Zeppelin (Hands on Demo) 07:44
- SQL Support in Zeppelin Part 1 (MySQL Remote Database Connectivity Hands on Demo) 06:11
- SQL Support in Zeppelin Part 2 (MySQL Remote Database Connectivity Hands on Demo) 05:14
- (Hands On) Configure Hive Interpreter in Apache Zeppelin 03:45
- Configure Hive Interpreter in Apache Zeppelin
- Hadoop Configuration Setting
- Starting Hadoop, Hive, Zeppelin 10:31
- Hive with Zeppelin 08:03
- Python with Zeppelin 02:20
- Mini Project on Twitter Data Analysis 17:29
- Thank you 00:20
Zeppelin Basics
3 Lectures

Instructor Details

Bigdata Engineer
I am a Solution Architect with 12+ years of experience in the Banking, Telecommunication, and Financial Services industries, across a diverse range of roles in Credit Card, Payments, Data Warehouse, and Data Center programmes.
As a Big Data and Cloud Architect, I work as part of a Big Data team to provide software solutions.
Responsibilities include:
- Support all Hadoop related issues
- Benchmark existing systems, analyse their challenges/bottlenecks, and propose the right solutions to eliminate them based on various Big Data technologies
- Analyse and define the pros and cons of various technologies and platforms
- Define use cases, solutions and recommendations
- Define Big Data strategy
- Perform detailed analysis of business problems and technical environments
- Define pragmatic Big Data solution based on customer requirements analysis
- Define pragmatic Big Data Cluster recommendations
- Educate customers on various Big Data technologies to help them understand pros and cons of Big Data
- Data Governance
- Build Tools to improve developer productivity and implement standard practices
I am sure the knowledge in these courses will give you an extra edge in your career.
All the best!!
Course Certificate
Use your certificate to make a career change or to advance in your current career.
