The complete Introduction to Big Data course is currently offered by UC San Diego through the Coursera platform.

Learning Outcomes for the Introduction to Big Data Course!

At the end of this course, you will be able to:

* Describe the Big Data landscape, including examples of real-world big data problems and the three key sources of Big Data: people, organizations, and sensors.

* Explain the V’s of Big Data (volume, velocity, variety, veracity, valence, and value) and why each impacts data collection, monitoring, storage, analysis and reporting.

* Get value out of Big Data by using a 5-step process to structure your analysis. 

* Identify what is and what is not a big data problem, and recast big data problems as data science questions.

* Provide an explanation of the architectural components and programming models used for scalable big data analysis.

* Summarize the features and value of core Hadoop stack components including the YARN resource and job management system, the HDFS file system and the MapReduce programming model.
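To make the MapReduce programming model mentioned above concrete, here is a minimal sketch in plain Python (no Hadoop required). It walks through the three conceptual phases — map, shuffle, and reduce — using word count, the canonical MapReduce example; the function names and the two-document input are illustrative assumptions, not part of the course materials.

```python
# Illustrative sketch of the MapReduce model: word count in pure Python.
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input document.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values for each key.
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["big data is big", "data is everywhere"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}
```

In a real Hadoop cluster, the map and reduce functions run in parallel on many nodes over HDFS data, and the shuffle is performed by the framework across the network; this sketch only shows the data flow on a single machine.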

Instructors for the Introduction to Big Data Course!

- Ilkay Altintas

- Amarnath Gupta

Skills You Will Gain

  • Big Data
  • Apache Hadoop
  • MapReduce
  • Cloudera


Introduction to Big Data Coursera Week 1 Quiz Answers!

Foundations for Big Data

Q1) Which of the following is the best description of why it is important to learn about the foundations for big data?
  • Foundations stand the test of time.
  • Foundations is all that is required to show a mastery of big data concepts.
  • Foundations allow for the understanding of practical concepts in Hadoop.
  • Foundations help you revisit calculus concepts required in the understanding of big data.

Q2) What is the benefit of a commodity cluster?
  • Enables fault tolerance
  • Prevents network connection failure
  • Prevents individual component failures
  • Much faster than a traditional super computer

Q3) What is a way to enable fault tolerance?
  • Distributed Computing
  • System Wide Restart
  • Better LAN Connection
  • Data Parallel Job Restart

Q4) What are the specific benefit(s) to a distributed file system?
  • Large Storage
  • High Concurrency
  • Data Scalability
  • High Fault Tolerance

Q5) Which of the following are general requirements for a programming language in order to support big data models?
  • Handle Fault Tolerance
  • Utilize Map Reduction Methods
  • Support Big Data Operations
  • Enable Adding of More Racks
  • Optimization of Specific Data Types
