Combo with "Hadoop" and 27 others


How will you write a custom partitioner for a Hadoop job?

- Create a new class that extends the Partitioner class
- Override the getPartition method
- In the wrapper that runs the MapReduce job, either add the custom partitioner to the job programmatically using the setPartitionerClass method, or add the custom partitioner to the job as a config file (if your wrapper reads from a config file or Oozie)
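As an illustration, the routing decision that getPartition makes can be sketched in plain Python (the real implementation would be a Java class extending Partitioner; the key format and the function name here are hypothetical):

```python
# Sketch of a custom partitioner: send one hot key group to a dedicated
# reducer and hash-partition everything else. The "country,user" key
# format is an invented example, not part of any Hadoop API.

def get_partition(key: str, value: str, num_partitions: int) -> int:
    """Mimics Partitioner.getPartition(key, value, numPartitions)."""
    country = key.split(",")[0]
    if country == "US":
        return 0  # dedicate partition 0 to the largest group
    # fall back to hash partitioning across the remaining partitions
    return 1 + (hash(country) % (num_partitions - 1))
```

The important property, as with any partitioner, is that the same key always maps to the same partition number.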

Consider this case scenario in an M/R system:
- HDFS block size is 64 MB
- Input format is FileInputFormat
- We have 3 files of size 64 KB, 65 MB and 127 MB
How many input splits will be made by the Hadoop framework?

Hadoop will make 5 splits, as follows:
- 1 split for the 64 KB file
- 2 splits for the 65 MB file
- 2 splits for the 127 MB file
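The split arithmetic above can be checked in a few lines, assuming the default behavior of roughly one split per HDFS block per file:

```python
import math

BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB, as in the scenario

def num_splits(file_size_bytes: int) -> int:
    """One split per 64 MB block; a non-empty file gets at least one split."""
    return max(1, math.ceil(file_size_bytes / BLOCK_SIZE))

sizes = [64 * 1024, 65 * 1024 * 1024, 127 * 1024 * 1024]  # 64 KB, 65 MB, 127 MB
splits = [num_splits(s) for s in sizes]
print(splits, sum(splits))  # [1, 2, 2] 5
```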

What will a Hadoop job do if you try to run it with an output directory that is already present? Will it
- overwrite it,
- warn you and continue, or
- throw an exception and exit?

The Hadoop job will throw an exception and exit.

Using command line in Linux, how will you

- To see all jobs running in the Hadoop cluster: hadoop job -list
- To kill a job: hadoop job -kill jobID

Name the most common InputFormats defined in Hadoop. Which one is the default?

- TextInputFormat
- KeyValueInputFormat
- SequenceFileInputFormat
TextInputFormat is the Hadoop default.

The command for removing a file from hadoop recursively is: hadoop dfs ___________ <directory>

-rmr

Have you ever used Counters in Hadoop? Give an example scenario.

...

How did you debug your Hadoop code?

1. Use counters
2. Use the Hadoop web UI
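The counter-based debugging pattern can be sketched framework-free in Python; in real Hadoop code you would call context.getCounter(group, name).increment(1), and the record format below is a hypothetical example:

```python
from collections import Counter

counters = Counter()

def map_record(line: str):
    """Hypothetical mapper that counts malformed records instead of crashing."""
    parts = line.split("\t")
    if len(parts) != 2:
        counters["MALFORMED_RECORDS"] += 1  # in Hadoop: a custom counter
        return []
    return [(parts[0], parts[1])]

for line in ["a\t1", "bad line", "b\t2"]:
    map_record(line)
print(counters["MALFORMED_RECORDS"])  # 1
```

After the job finishes, counter totals are aggregated across all tasks and shown in the web UI, which is why the two debugging techniques above work well together.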

Which of the following is NOT true: A) Hadoop is decentralized B) Hadoop is distributed. C) Hadoop is open source. D) Hadoop is highly scalable.

A

A Hadoop file is automatically stored in ___ places.

three

What is a TaskTracker in Hadoop? How many instances of TaskTracker run on a Hadoop cluster?

A TaskTracker is a slave-node daemon in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker. Only one TaskTracker process runs on any Hadoop slave node, and it runs in its own JVM process. Every TaskTracker is configured with a set of slots; these indicate the number of tasks it can accept. The TaskTracker starts a separate JVM process to do the actual work (called a Task Instance); this ensures that a process failure does not take down the TaskTracker. The TaskTracker monitors these task instances, capturing the output and exit codes. When the task instances finish, successfully or not, the TaskTracker notifies the JobTracker. The TaskTrackers also send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date on where in the cluster work can be delegated.

Big Data

An assortment of such a huge and complex data that it becomes very tedious to capture, store, process, retrieve, and analyze it with the help of on-hand database management tools or traditional processing techniques.

Fill in the blank. The solution to cataloging the increasing number of web pages in the late 1990s and early 2000s was _______.

Automation

JobTracker in Hadoop performs the following actions:

- Client applications submit jobs to the JobTracker.
- The JobTracker talks to the NameNode to determine the location of the data.
- The JobTracker locates TaskTracker nodes with available slots at or near the data.
- The JobTracker submits the work to the chosen TaskTracker nodes.
- The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.
- A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
- When the work is completed, the JobTracker updates its status.
- Client applications can poll the JobTracker for information.

________ is shipped to the nodes of the cluster instead of _________.

Code, Data

What are combiners? When should I use a combiner in my MapReduce Job?

Combiners are used to increase the efficiency of a MapReduce program. They aggregate intermediate map output locally on individual mapper nodes. Combiners can help you reduce the amount of data that needs to be transferred across to the reducers. You can use your reducer code as a combiner if the operation performed is commutative and associative. The execution of the combiner is not guaranteed; Hadoop may or may not execute a combiner, and if required it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiner's execution.
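A toy sketch of the idea in Python (rather than the Java API): a word-count combiner sums counts locally, so fewer pairs cross the network, and because addition is commutative and associative the final result is unchanged:

```python
from collections import defaultdict

def combine(mapper_output):
    """Locally sum counts per word, as a combiner would on one mapper's output."""
    local = defaultdict(int)
    for word, count in mapper_output:
        local[word] += count
    return list(local.items())

# One mapper emitted 5 pairs; the combiner ships only 2 across the network.
raw = [("the", 1), ("cat", 1), ("the", 1), ("the", 1), ("cat", 1)]
combined = combine(raw)
print(sorted(combined))  # [('cat', 2), ('the', 3)]
```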

Why is Hadoop good for "big data"?

Companies need to analyze that data to make large-scale business decisions.

Core components of Hadoop are ___ and ___.

HDFS, MapReduce

Which of the following is NOT a Hadoop drawback? A) inefficient join operation B) security issues C) does not optimize queries for the user D) high cost E) MapReduce is difficult to implement

D

The __________ holds the data in the HDFS and the application connects with the __________ to send and retrieve data from the cluster.

Datanode, Namenode

What is Distributed Cache in Hadoop?

Distributed Cache is a facility provided by the MapReduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.

List three drawbacks of using Hadoop.

Does not work well with small amounts of data, MapReduce is difficult to implement or understand, and it does not guarantee atomic transactions.

What is the logo for Hadoop?

Elephant

T/F: Hadoop is good at storing semistructured data.

False

T/F: Hadoop is not recommended for a company with a small amount of data, but it is highly recommended if this data requires instant analysis.

False

T/F: The Cassandra File System has many advantages over HDFS, but simpler deployment is not one of them.

False

T/F: The main benefit of HadoopDB is that it is more scalable than Hadoop while maintaining the same performance level on structured data analysis workloads.

False

T/F: Your user tries to log in to your website. Hadoop is a good technology to store and retrieve their login data.

False

HDFS is a file system designed for storing very large files with ________ data access patterns, running on clusters of commodity hardware.

streaming

HDFS is highly fault-tolerant, has high throughput, is suitable for applications with large data sets, provides streaming access to ____ ____ ____, and can be built out of commodity hardware.

file system data

What is HDFS? How is it different from traditional file systems?

HDFS, the Hadoop Distributed File System, is responsible for storing huge data on the cluster. It is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems; however, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets. HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets: they write their data only once but read it one or more times, and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files.

When comparing Hadoop and RDBMS, which is the best solution for speed?

Hadoop

Which is cheaper for larger scale power and storage? Hadoop or RDBMS?

Hadoop

Explain why the performance of join operation in Hadoop is inefficient.

Hadoop does not have indices, so the entire dataset is copied in the join operation.

Explain the benefit of Hadoop versus other nondistributed parallel frameworks in terms of their hardware requirements.

Hadoop does not require high-performance computers to be powerful; it can run on consumer-grade hardware.

Why is Hadoop's file redundancy less problematic than it could be?

Hadoop is cheap and cost-effective.

What characteristic of the Streaming API makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, Awk, etc.?

Hadoop Streaming allows you to use arbitrary programs for the Mapper and Reducer phases of a MapReduce job by having both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout.
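A minimal streaming-style word count can be sketched in Python; in a real job the mapper and reducer would read sys.stdin and print to stdout, and Hadoop would sort the mapper output between the two phases (here sorted() stands in for that shuffle):

```python
def mapper(lines):
    """Streaming word-count mapper: emit one 'word<TAB>1' line per token."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(sorted_lines):
    """Streaming reducer: sum counts over each run of identical keys."""
    current, total = None, 0
    for line in sorted_lines:
        word, count = line.split("\t")
        if word != current and current is not None:
            yield f"{current}\t{total}"
            total = 0
        current = word
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"

shuffled = sorted(mapper(["the cat", "the dog"]))  # Hadoop sorts between phases
print(list(reducer(shuffled)))  # ['cat\t1', 'dog\t1', 'the\t2']
```

Because both sides speak only lines on stdin/stdout, the same job could be written in Perl, Ruby, or Awk, which is exactly the flexibility the question asks about.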

Hadoop is ________ as more nodes can be added to it.

scalable

How many daemon processes run on a Hadoop system?

Hadoop comprises five separate daemons, each of which runs in its own JVM.
The following 3 daemons run on master nodes:
- NameNode - stores and maintains the metadata for HDFS.
- Secondary NameNode - performs housekeeping functions for the NameNode.
- JobTracker - manages MapReduce jobs and distributes individual tasks to machines running the TaskTracker.
The following 2 daemons run on each slave node:
- DataNode - stores actual HDFS data blocks.
- TaskTracker - responsible for instantiating and monitoring individual Map and Reduce tasks.

Hadoop is not a ______; it is an architecture with a filesystem called HDFS.

database

Hadoop uses __ __ to process large data sets.

Map Reduce

Hadoop uses the concept of MapReduce, which enables it to divide the query into small parts and process them in ___.

parallel

Name three features of Hive.

HiveQL, Indexing, Different Storage types

What is HDFS Block size? How is it different from traditional file system block size?

In HDFS, data is split into blocks and distributed across multiple nodes in the cluster. Each block is typically 64 MB or 128 MB in size, and each block is replicated multiple times (the default is to replicate each block three times), with replicas stored on different nodes. HDFS utilizes the local file system to store each HDFS block as a separate file. HDFS block size cannot be compared with the traditional file system block size.

In Hadoop, when we store a file, it automatically gets replicated at _______ other locations also.

two

When are the reducers started in a MapReduce job?

In a MapReduce job, reducers do not start executing the reduce method until all the map tasks have completed. Reducers start copying intermediate key-value pairs from the mappers as soon as they are available, but the programmer-defined reduce method is called only after all the mappers have finished.

How is the splitting of file invoked in Hadoop framework?

It is invoked by the Hadoop framework by running the getSplits() method of the InputFormat class (such as FileInputFormat) defined by the user.

Suppose Hadoop spawned 100 tasks for a job and one of the task failed. What will Hadoop do?

It will restart the task on some other TaskTracker; only if the task fails more than four times (the default setting, which can be changed) will it kill the job.

What is a JobTracker in Hadoop? How many instances of JobTracker run on a Hadoop Cluster?

JobTracker is the daemon service for submitting and tracking MapReduce jobs in Hadoop. Only one JobTracker process runs on any Hadoop cluster, in its own JVM process; in a typical production cluster it runs on a separate machine. Each slave node is configured with the JobTracker node location. The JobTracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted.

What is JobTracker?

JobTracker is the service within Hadoop that runs MapReduce jobs on the cluster.

How does speculative execution work in Hadoop?

The JobTracker makes different TaskTrackers process the same input. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy; if other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon those tasks and discard their outputs. The reducers then receive their inputs from whichever mapper completed successfully first.

How does the NameNode handle DataNode failures?

The NameNode periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly; a Blockreport contains a list of all blocks on a DataNode. When the NameNode notices that it has not received a heartbeat message from a DataNode after a certain amount of time, that DataNode is marked as dead. Since its blocks will then be under-replicated, the system begins replicating the blocks that were stored on the dead DataNode. The NameNode orchestrates the replication of data blocks from one DataNode to another; the replication data transfer happens directly between DataNodes and the data never passes through the NameNode.
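The dead-node detection logic above can be sketched in a few lines of Python (the node names, timestamps, and timeout are hypothetical; real HDFS uses configurable heartbeat intervals and timeouts):

```python
def dead_datanodes(last_heartbeat, now, timeout):
    """Nodes whose last heartbeat is older than the timeout are marked dead."""
    return {node for node, t in last_heartbeat.items() if now - t > timeout}

def blocks_to_replicate(block_locations, dead):
    """Blocks that lost a replica to a dead node are now under-replicated."""
    return {blk for blk, nodes in block_locations.items() if nodes & dead}

heartbeats = {"dn1": 100, "dn2": 40}  # last-heartbeat times in seconds (made up)
dead = dead_datanodes(heartbeats, now=700, timeout=600)
blocks = {"blk_1": {"dn1", "dn2"}, "blk_2": {"dn1"}}
print(dead, blocks_to_replicate(blocks, dead))  # {'dn2'} {'blk_1'}
```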

Is Hadoop a database? Explain.

No, Hadoop is a file system.

Does the MapReduce programming model provide a way for reducers to communicate with each other? In a MapReduce job, can a reducer communicate with another reducer?

No, the MapReduce programming model does not allow reducers to communicate with each other; reducers run in isolation.

What project was Hadoop originally a part of and what idea was that project based on?

Nutch. It was based on returning web search results faster by distributing data and calculations across different computers.

What is the relationship between Jobs and Tasks in Hadoop?

One job is broken down into one or many tasks in Hadoop.

One of the initial users of Hadoop was ________.

Facebook

Hadoop can run jobs in ________ to tackle large volumes of data.

Parallel

After the Map phase finishes, the Hadoop framework does "Partitioning, Shuffle and sort". Explain what happens in this phase?

Partitioning: The process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine, for all of its output (key, value) pairs, which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same.

Shuffle: After the first map tasks have completed, the nodes may still be performing several more map tasks each, but they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.

Sort: Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before they are presented to the Reducer.
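The three steps can be modeled in miniature in Python (hash-based partitioning stands in for the default partitioner; the function names are illustrative, not Hadoop API):

```python
from collections import defaultdict

def partition(key, num_reducers):
    # Partitioning: the same key always maps to the same reducer.
    return hash(key) % num_reducers

def shuffle_and_sort(map_outputs, num_reducers):
    """Group (key, value) pairs by partition, then sort keys within each."""
    buckets = defaultdict(list)
    for key, value in map_outputs:
        buckets[partition(key, num_reducers)].append((key, value))  # shuffle
    return {p: sorted(pairs) for p, pairs in buckets.items()}       # sort

result = shuffle_and_sort([("b", 2), ("a", 1), ("b", 3)], num_reducers=2)
# Both ("b", ...) pairs land in the same partition, in sorted key order.
```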

When comparing Hadoop and RDBMS, which is the best solution for big data?

RDBMS

If reducers do not start before all mappers finish, then why does the progress on a MapReduce job show something like Map (50%) Reduce (10%)? Why is reducer progress displayed when the mappers are not finished yet?

Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The progress calculation also takes into account the data transfer done by the reduce process, so reduce progress starts showing up as soon as any intermediate key-value pair from a mapper is available to be transferred to a reducer. Though the reducer progress is updated, the programmer-defined reduce method is called only after all the mappers have finished.

Describe how Sqoop transfers data from a relational database to Hadoop.

Sqoop runs a query on the relational database and exports the results into files in a variety of formats, which are then saved on HDFS.

What is the configuration of a typical slave node on a Hadoop cluster? How many JVMs run on a slave node?

- A single instance of a TaskTracker runs on each slave node, as a separate JVM process.
- A single instance of a DataNode daemon runs on each slave node, as a separate JVM process.
- One or multiple Task Instances run on each slave node, each as a separate JVM process. The number of task instances can be controlled by configuration; typically a high-end machine is configured to run more task instances.

Hadoop achieves parallelism by dividing the tasks across many nodes, so it is possible for a few slow nodes to rate-limit the rest of the program and slow it down. What mechanism does Hadoop provide to combat this?

Speculative Execution.

What is the meaning of speculative execution in Hadoop? Why is it important?

Speculative execution is a way of coping with variation in individual machine performance. In large clusters where hundreds or thousands of machines are involved, there may be machines which are not performing as fast as others, and a single slow machine can delay the whole job. To avoid this, speculative execution in Hadoop can run multiple copies of the same map or reduce task on different slave nodes. The results from the first node to finish are used.

What is Hadoop Streaming?

Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations.

Hadoop works best with _________ and ___________ data, while Relational Databases are best with the first one.

Structured, Unstructured

What is a Task instance in Hadoop? Where does it run?

Task instances are the actual MapReduce tasks which run on each slave node. The TaskTracker starts a separate JVM process to do the actual work (called a Task Instance); this ensures that a process failure does not take down the TaskTracker. Each task instance runs in its own JVM process, and there can be multiple task instances running on a slave node, based on the number of slots configured on the TaskTracker. By default, a new task instance JVM process is spawned for each task.

What's a tasktracker?

A TaskTracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker.

What is the difference between TextInputFormat and KeyValueInputFormat class?

TextInputFormat: reads lines of text files and provides the byte offset of the line as the key to the Mapper and the actual line as the value. KeyValueInputFormat: reads text files and parses lines into (key, value) pairs. Everything up to the first tab character is sent as the key to the Mapper and the remainder of the line is sent as the value.
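The two parsing rules can be contrasted in a short Python sketch (the function names are illustrative; real InputFormats also handle split boundaries, which this ignores):

```python
def text_input_format(data: str):
    """TextInputFormat rule: key = byte offset of the line, value = the line."""
    offset, pairs = 0, []
    for line in data.splitlines(keepends=True):
        pairs.append((offset, line.rstrip("\n")))
        offset += len(line)
    return pairs

def key_value_input_format(data: str):
    """KeyValueInputFormat rule: key = text before the first tab, value = rest."""
    pairs = []
    for line in data.splitlines():
        key, _, value = line.partition("\t")
        pairs.append((key, value))
    return pairs

data = "one\tred\ntwo\tblue\n"
print(text_input_format(data))       # [(0, 'one\tred'), (8, 'two\tblue')]
print(key_value_input_format(data))  # [('one', 'red'), ('two', 'blue')]
```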

What is a Combiner?

The Combiner is a 'mini-reduce' process which operates only on data generated by a mapper. The Combiner will receive as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.

What is the difference between HDFS and NAS ?

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems; however, the differences from other distributed file systems are significant. The following are differences between HDFS and NAS:
- In HDFS, data blocks are distributed across the local drives of all machines in a cluster, whereas in NAS data is stored on dedicated hardware.
- HDFS is designed to work with the MapReduce system, since computation is moved to the data. NAS is not suitable for MapReduce since data is stored separately from the computation.
- HDFS runs on a cluster of machines and provides redundancy using a replication protocol, whereas NAS is provided by a single machine and therefore does not provide data redundancy.

What is the purpose of RecordReader in Hadoop?

The InputSplit has defined a slice of work, but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the Input Format.

What is the Hadoop MapReduce API contract for a key and value Class?

The Key must implement the org.apache.hadoop.io.WritableComparable interface. The value must implement the org.apache.hadoop.io.Writable interface.

What is a NameNode? How many instances of NameNode run on a Hadoop Cluster?

The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files in the file system and tracks where across the cluster the file data is kept; it does not store the data of these files itself. Only one NameNode process runs on any Hadoop cluster, in its own JVM process; in a typical production cluster it runs on a separate machine. The NameNode is a single point of failure for the HDFS cluster: when the NameNode goes down, the file system goes offline. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives.

How does the JobTracker schedule a task?

The TaskTrackers send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date on where in the cluster work can be delegated. When the JobTracker tries to find somewhere to schedule a task within the MapReduce operations, it first looks for an empty slot on the same server that hosts the DataNode containing the data, and if not, it looks for an empty slot on a machine in the same rack.

The Three Characteristics of Big Data

Volume, Velocity, Variety

Define "fault tolerance".

The ability of a system to continue operating after the failure of some of its components.

The data is stored in HDFS, which does not have any predefined ______.

containers

If no custom partitioner is defined in Hadoop then how is data partitioned before it is sent to the reducer?

The default partitioner computes a hash value for the key and assigns the partition based on this result.
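A sketch of that default (HashPartitioner-style) behavior in Python; the bit mask keeps the hash non-negative, mirroring the Java idiom of masking hashCode with Integer.MAX_VALUE:

```python
def hash_partition(key: str, num_reducers: int) -> int:
    """Sketch of the default partitioner: non-negative key hash, mod reducer count."""
    return (hash(key) & 0x7FFFFFFF) % num_reducers

# Every occurrence of the same key maps to the same reducer,
# so all values for that key meet in one reduce call.
assert hash_partition("apple", 4) == hash_partition("apple", 4)
print(hash_partition("apple", 4) in range(4))  # True
```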

What are some typical functions of the JobTracker?

The following are some typical tasks of the JobTracker:
- Accepts jobs from clients.
- Talks to the NameNode to determine the location of the data.
- Locates TaskTracker nodes with available slots at or near the data.
- Submits the work to the chosen TaskTracker nodes and monitors the progress of each task by receiving heartbeat signals from the TaskTrackers.

Where is the Mapper output (intermediate key-value data) stored?

The mapper output (intermediate data) is stored on the local file system (NOT HDFS) of each individual mapper node. This is typically a temporary directory location which can be set up in config by the Hadoop administrator. The intermediate data is cleaned up after the Hadoop job completes.

Benefits of distributed cache

Distributed cache is much faster: it copies the file to all TaskTrackers at the start of the job, so if a TaskTracker runs 10 or 100 mappers or reducers, they all use the same local copy of the cached file. On the other hand, if your job code reads the file from HDFS, then every mapper will access it from HDFS; if a TaskTracker runs 100 map tasks, the file will be read from HDFS 100 times. HDFS is not very efficient when used like this.

Fill in the blank. Hadoop lacks the notions of ________ and _______. Therefore, the analyzed result generated by Hadoop may or may not be 100% accurate.

Transaction Consistency, Recovery Checkpoint

T/F: Hadoop is open source.

True

What is InputSplit in Hadoop?

When a Hadoop job runs, it splits the input files into chunks and assigns each split to a mapper to process. Such a chunk is called an InputSplit.

Describe what happens when a slave node in a Hadoop cluster is destroyed and how the master node compensates.

When the slave's heartbeat stops arriving, the master reassigns its tasks to other slave nodes.

Can I set the number of reducers to zero?

Yes, setting the number of reducers to zero is a valid configuration in Hadoop. When you set the reducers to zero, no reducers will be executed, and the output of each mapper will be stored in a separate file on HDFS. [This is different from the case when the number of reducers is greater than zero, where the mappers' output (intermediate data) is written to the local file system (NOT HDFS) of each mapper's slave node.]

Is it possible to have Hadoop job output in multiple directories? If yes, how?

Yes, by using the MultipleOutputs class.

Is it possible to provide multiple inputs to Hadoop? If yes, how can you give multiple directories as input to a Hadoop job?

Yes, the InputFormat class provides methods to add multiple directories as input to a Hadoop job.

How can you set an arbitrary number of Reducers to be created for a job in Hadoop?

You can either do it programmatically by using the setNumReduceTasks method on the JobConf class, or set it up as a configuration setting.

How can you set an arbitrary number of mappers to be created for a job in Hadoop?

You can't set it directly; the number of mappers is determined by the number of input splits.

____ ____ is the data that is easily identifiable as it is organized in structure. The most common form of ____ ____ is a database where specific information is stored in tables, that is, rows and columns. [same words]

Structured data

____ ____ is a set of programs used to access and manipulate large data sets over a Hadoop cluster.

Map Reduce

____ ____ refers to any data that cannot be identified easily. It could be in the form of images, videos, documents, email, logs, and random text. It is not in the form of rows and columns.

Unstructured data

_________ is useful when you want to seek one record from Big Data, whereas Hadoop will be useful when you want Big Data in one shot and perform analysis on that later.

RDBMS

___________ is used to store large datasets in Hadoop.

HDFS

What mechanism does the Hadoop framework provide to synchronise changes made in the Distributed Cache during runtime of the application?

none

What is Writable & WritableComparable interface?

org.apache.hadoop.io.Writable is a Java interface. Any key or value type in the Hadoop Map-Reduce framework implements this interface. Implementations typically implement a static read(DataInput) method which constructs a new instance, calls readFields(DataInput) and returns the instance. org.apache.hadoop.io.WritableComparable is a Java interface. Any type which is to be used as a key in the Hadoop Map-Reduce framework should implement this interface. WritableComparable objects can be compared to each other using Comparators.

What are IdentityMapper and IdentityReducer in MapReduce?

org.apache.hadoop.mapred.lib.IdentityMapper implements the identity function, mapping inputs directly to outputs; if the MapReduce programmer does not set the Mapper class using JobConf.setMapperClass, then IdentityMapper.class is used as the default. org.apache.hadoop.mapred.lib.IdentityReducer performs no reduction, writing all input values directly to the output; if the MapReduce programmer does not set the Reducer class using JobConf.setReducerClass, then IdentityReducer.class is used as the default.

