
Friday, February 21, 2014

What happens during a major compaction in HBase?

1. Deletes data that is masked by tombstones
2. Deletes data whose TTL has expired
3. Compacts several small HFiles into a single larger one
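
For reference, a major compaction can also be triggered manually, either from the HBase shell (major_compact 'mytable') or through the Java client. Below is a minimal sketch using the HBaseAdmin API of that era, with mytable as a placeholder table name.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class TriggerMajorCompaction {
        public static void main(String[] args) throws Exception {
            // Reads hbase-site.xml from the classpath for cluster settings.
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);

            // Asynchronously requests a major compaction: all HFiles of each
            // region are rewritten into one, dropping cells hidden by
            // tombstones or past their TTL.
            admin.majorCompact("mytable");
            admin.close();
        }
    }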

Tuesday, April 17, 2012

What is the difference between HDFS and NAS?

    The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems; however, the differences from other distributed file systems are significant. The following are the differences between HDFS and NAS:
    • In HDFS, data blocks are distributed across the local drives of all the machines in a cluster, whereas in NAS data is stored on dedicated hardware (the sketch after this list shows how to inspect a file's block locations).
    • HDFS is designed to work with the MapReduce system, since computation is moved to the data. NAS is not suitable for MapReduce, since data is stored separately from the computation.
    • HDFS runs on a cluster of machines and provides redundancy through its replication protocol, whereas NAS is served by a single machine and therefore does not provide data redundancy.
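
That block distribution can be observed directly through the HDFS Java client. The following is a minimal sketch, assuming a running HDFS cluster and using /data/input.txt as a placeholder path; it prints, for each block of the file, the datanodes holding a replica.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlockLocations {
        public static void main(String[] args) throws Exception {
            // Picks up fs.defaultFS from the cluster configuration.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Placeholder path; use any file stored in your HDFS.
            FileStatus status = fs.getFileStatus(new Path("/data/input.txt"));
            BlockLocation[] blocks =
                    fs.getFileBlockLocations(status, 0, status.getLen());

            // Each block reports the hosts holding a replica, showing how a
            // single file is spread across the local drives of many machines.
            for (BlockLocation block : blocks) {
                System.out.println(block.getOffset() + " -> "
                        + String.join(", ", block.getHosts()));
            }
            fs.close();
        }
    }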

What is a Job Tracker in Hadoop? How many instances of Job Tracker run on a Hadoop Cluster?

    The JobTracker is the daemon service for submitting and tracking MapReduce jobs in Hadoop. Only one JobTracker process runs on any Hadoop cluster, in its own JVM; in a typical production cluster it runs on a separate machine. Each slave node is configured with the JobTracker node's location. The JobTracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted. The JobTracker performs the following actions (from the Hadoop wiki):
    • Client applications submit jobs to the JobTracker.
    • The JobTracker talks to the NameNode to determine the location of the data.
    • The JobTracker locates TaskTracker nodes with available slots at or near the data.
    • The JobTracker submits the work to the chosen TaskTracker nodes.
    • The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.
    • A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
    • When the work is completed, the JobTracker updates its status.
    • Client applications can poll the JobTracker for information (a minimal submit-and-poll sketch follows).
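
The first and last points can be sketched with the classic (Hadoop 1.x) mapred client API. The paths below are placeholders, and since no mapper or reducer is set the job falls back to the identity defaults; this is a sketch of the submit-and-poll pattern, not a complete job.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class SubmitAndPoll {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(SubmitAndPoll.class);
            conf.setJobName("example-job");
            // Placeholder paths; mapper and reducer default to the identity classes.
            FileInputFormat.setInputPaths(conf, new Path("/data/input"));
            FileOutputFormat.setOutputPath(conf, new Path("/data/output"));

            // The client hands the job to the JobTracker...
            JobClient client = new JobClient(conf);
            RunningJob job = client.submitJob(conf);

            // ...and can then poll the JobTracker for progress.
            while (!job.isComplete()) {
                System.out.printf("map %.0f%% reduce %.0f%%%n",
                        job.mapProgress() * 100, job.reduceProgress() * 100);
                Thread.sleep(5000);
            }
            System.out.println(job.isSuccessful() ? "Job succeeded" : "Job failed");
        }
    }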

What are compute and storage nodes?

Compute node: the machine where your actual business logic is executed.

Storage node: the machine where the file system resides to store the data being processed.

In most cases the compute node and the storage node are the same machine.

What is MapReduce?

MapReduce is a programming model for processing huge amounts of data in parallel. As the name suggests, the work is divided into a Map phase and a Reduce phase.

  • The MapReduce job usually splits the input data set into independent chunks (big data sets into multiple small data sets).
  • Map task: processes these chunks in a completely parallel manner (one node can process one or more chunks).
  • The framework sorts the outputs of the maps.
  • Reduce task: takes the sorted map output as its input and produces the final result.

Your business logic is written in the map task and the reduce task.

Typically both the input and the output of the job are stored in a file system (not a database). The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks.
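
The canonical example is word count. Below is a sketch using the classic org.apache.hadoop.mapred API: the map task emits a (word, 1) pair for every word in its chunk, the framework sorts and groups the pairs by word, and the reduce task sums the counts to produce the final result.

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    public class WordCount {

        // Map task: emits (word, 1) for every word in its input chunk.
        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable one = new IntWritable(1);
            private final Text word = new Text();

            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> output,
                            Reporter reporter) throws IOException {
                StringTokenizer tokenizer = new StringTokenizer(value.toString());
                while (tokenizer.hasMoreTokens()) {
                    word.set(tokenizer.nextToken());
                    output.collect(word, one);
                }
            }
        }

        // Reduce task: receives the sorted (word, [1, 1, ...]) groups and
        // sums the counts to produce the final result.
        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> output,
                               Reporter reporter) throws IOException {
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                output.collect(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");
            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);
            conf.setMapperClass(Map.class);
            conf.setReducerClass(Reduce.class);
            // Both input and output live in the file system, not a database.
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);
        }
    }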

What is the Hadoop framework?

Hadoop is an open-source framework written in Java by the Apache Software Foundation. It is used to write software applications that need to process vast amounts of data (it can handle multiple terabytes). It works in parallel on large clusters, which can have thousands of computers (nodes), and it processes data in a very reliable and fault-tolerant manner.
