Day 2: Hive + Hadoop Ecosystem — Deep Interview Guide


All 3 levels per topic — from fundamentals to advanced scenarios.
Key for today: Hive is your SQL layer on top of HDFS. Understand it deeply.
Focus: Hive internals, optimizations (critical!), HBase, Sqoop, Oozie, ZooKeeper.

🧠 MASTER MEMORY MAP — Day 2

🧠 HIVE ARCHITECTURE = "DMCTE" (plan sketch after this list)
D = Driver (receives SQL queries, manages the query lifecycle)
M = Metastore (schema + HDFS location mapping — the catalog)
C = Compiler (parses SQL → query plan)
T = Tez/MR Engine (executes the plan)
E = Execution (launches tasks on YARN)
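You can watch the Compiler and engine at work with EXPLAIN. A minimal sketch (the sales table is hypothetical):

  -- Ask the Compiler for the plan it would hand to the engine
  SET hive.execution.engine=tez;    -- run on Tez rather than MapReduce
  EXPLAIN
  SELECT region, COUNT(*)
  FROM sales                        -- hypothetical table
  GROUP BY region;                  -- output shows the operator tree / Tez DAG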
HIVE OPTIMIZATION = "VECTOR-TOP" (settings sketch after this list):
V = Vectorization (process 1024 rows at once, not 1)
E = Engine: use Tez, not MapReduce (DAG vs job chain, ~10x faster)
C = Column pruning (SELECT only the columns you need)
T = Table stats: ANALYZE TABLE for the CBO
O = ORC format (columnar + predicate pushdown)
R = Repartition/redistribute (avoid data skew)
T = Tez DAG (avoid MapReduce chains)
O = Order of joins (small table FIRST for a map join)
P = Partition pruning (WHERE on partition columns only)
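Most of these are one-line session settings. A minimal sketch, assuming a hypothetical sales table (the SET properties are standard Hive configuration keys):

  SET hive.execution.engine=tez;                  -- E/T: Tez DAG instead of an MR chain
  SET hive.vectorized.execution.enabled=true;     -- V: process rows in 1024-row batches
  SET hive.cbo.enable=true;                       -- T: turn on the cost-based optimizer
  SET hive.auto.convert.join=true;                -- O: auto map-join when one side is small
  ANALYZE TABLE sales COMPUTE STATISTICS;               -- table-level stats for the CBO
  ANALYZE TABLE sales COMPUTE STATISTICS FOR COLUMNS;   -- column-level stats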
HIVE FILE FORMATS = "OPTA" (DDL sketch after this list)
O = ORC (best for Hive: columnar, ACID, compression)
P = Parquet (best for cross-tool use: Spark, Impala, Hive)
T = Text (simple, no compression, never use in production)
A = Avro (row-based, best for Kafka/Sqoop schema evolution)
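The format is a one-line choice at table creation. A minimal DDL sketch (table and column names are hypothetical):

  -- ORC for Hive-native analytics (columnar, compressed, ACID-capable)
  CREATE TABLE events_orc (id BIGINT, payload STRING)
  STORED AS ORC
  TBLPROPERTIES ('orc.compress' = 'ZLIB');

  -- Parquet when Spark or Impala also read the same data
  CREATE TABLE events_parquet (id BIGINT, payload STRING)
  STORED AS PARQUET;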
PARTITIONING RULES = "QDLF" (DDL sketch after this list)
Q = Query filter columns → partition by those columns
D = Date/region are usually the best partition keys
L = Low cardinality (tens to hundreds of values, not millions!)
F = Few files per partition (avoid small files inside partitions)
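Putting Q and D together, a minimal sketch (names hypothetical): partition on the columns your WHERE clauses filter, so Hive prunes whole directories instead of scanning them.

  CREATE TABLE sales (order_id BIGINT, amount DOUBLE)
  PARTITIONED BY (dt STRING, region STRING)   -- Q/D: date + region as partition keys
  STORED AS ORC;

  -- Partition pruning: only the dt=2024-01-15/region=US directory is read
  SELECT SUM(amount)
  FROM sales
  WHERE dt = '2024-01-15' AND region = 'US';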

SECTION 1: HIVE ARCHITECTURE

What is Hive?

Hive is SQL-on-Hadoop. You write HiveQL (a SQL dialect), and Hive translates it into MapReduce or Tez jobs that run on YARN. The data lives in HDFS; the schema lives in the Metastore. Analogy: Hive is the "database frontend" for HDFS. HDFS stores the raw data like a file system, while the Metastore supplies the table view on top.
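To make the layering concrete, a minimal sketch (the path and columns are hypothetical). An external table just maps a schema onto files already sitting in HDFS; dropping the table later removes only the Metastore entry, not the files.

  -- Schema-on-read: the CSV files under /data/logs stay untouched;
  -- only the Metastore learns about the new table.
  CREATE EXTERNAL TABLE logs (
    ts       STRING,
    severity STRING,
    msg      STRING
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/data/logs';

  -- This HiveQL compiles into a Tez (or MapReduce) job on YARN
  SELECT severity, COUNT(*) FROM logs GROUP BY severity;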