Master the most practical data structure for real-world problems. Learn hash functions, collision resolution strategies, custom implementation, C# Dictionary and HashSet APIs, sorted variants, common patterns, and solve 14 comprehensive interview problems with complete solutions.
A hash table is an unordered data structure that stores key-value pairs. It uses a hash function to map keys to array indices, enabling average O(1) lookup, insertion, and deletion. When multiple keys hash to the same index, a collision occurs and must be resolved.
A hash table consists of three components: (1) a hash function that converts keys into array indices, (2) an underlying array (buckets), and (3) a collision resolution strategy. The hash function is deterministic—the same key always produces the same index.
```csharp
// Hash table concept
var phoneBook = new Dictionary<string, string>();

// O(1) average insertion
phoneBook["Alice"] = "555-1234";
phoneBook["Bob"] = "555-5678";
phoneBook["Charlie"] = "555-9999";

// O(1) average lookup
string alicePhone = phoneBook["Alice"]; // "555-1234"

// O(1) average deletion
phoneBook.Remove("Bob");

// O(1) average containment check
bool hasCharlie = phoneBook.ContainsKey("Charlie"); // true
```
| Operation | Hash Table | Array | Linked List |
|---|---|---|---|
| Lookup by key | O(1) avg | O(n) | O(n) |
| Insert | O(1) avg | O(n) | O(1) |
| Delete | O(1) avg | O(n) | O(n) |
| Space | O(n) | O(n) | O(n) |
| Ordered? | No | Yes | Yes |
A good hash function distributes keys uniformly across buckets, minimizing collisions. In C#, every object exposes GetHashCode(); whenever you override Equals on a custom type, you must also override GetHashCode() so that equal objects produce equal hash codes—otherwise the type misbehaves as a dictionary key or set element.
```csharp
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }

    // Override GetHashCode for use in hash tables
    public override int GetHashCode()
    {
        // Combine hash codes of the fields that define equality
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + Name.GetHashCode();
            hash = hash * 31 + Age.GetHashCode();
            return hash;
        }
    }

    // Must also override Equals for consistency
    public override bool Equals(object obj)
    {
        if (obj is not Person p) return false;
        return Name == p.Name && Age == p.Age;
    }
}
```
```csharp
// Division (mod) method for integers — assumes a non-negative key
int DivisionHash(int key, int tableSize)
{
    return key % tableSize;
}

// Multiplication method
int MultiplicationHash(int key, int tableSize)
{
    double A = 0.6180339887; // Golden ratio - 1
    return (int)(tableSize * ((key * A) % 1));
}

// String hash (simplified polynomial hash)
int StringHash(string key, int tableSize)
{
    int hash = 0;
    foreach (char c in key)
    {
        hash = (hash * 31 + c) % tableSize;
    }
    return hash;
}
```
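To make collisions concrete, here is a small sketch reusing the same polynomial string hash as above with a deliberately tiny 8-bucket table; distinct keys quickly land in the same bucket, which is exactly what the next section's resolution strategies handle:

```csharp
using System;

public class CollisionDemo
{
    // Same polynomial string hash as above
    public static int StringHash(string key, int tableSize)
    {
        int hash = 0;
        foreach (char c in key)
            hash = (hash * 31 + c) % tableSize;
        return hash;
    }

    public static void Main()
    {
        int tableSize = 8;
        string[] words = { "cat", "dog", "act", "god", "tac" };

        // Any two words printing the same bucket index collide and
        // must be resolved by chaining or probing.
        foreach (string w in words)
            Console.WriteLine($"{w} -> bucket {StringHash(w, tableSize)}");
        // With 8 buckets, "dog" and "god" share a bucket,
        // as do "cat" and "tac".
    }
}
```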
Chaining stores colliding keys in a linked list at each bucket. Simple, performs well even with high load factors, and allows flexible resizing. Worst-case O(n) if all keys hash to same bucket, but average remains O(1).
When inserting a key-value pair, compute hash(key) to get the bucket index. If that bucket's linked list is empty, add the pair. If it is occupied, traverse the list: if the key already exists, update its value; otherwise prepend the new pair at the head of the list (an O(1) operation).
```csharp
// Chaining with linked lists
public class HashTableChaining<K, V>
{
    private class Entry
    {
        public K Key { get; set; }
        public V Value { get; set; }
        public Entry Next { get; set; }

        public Entry(K key, V value)
        {
            Key = key;
            Value = value;
            Next = null;
        }
    }

    private Entry[] buckets;
    private int count;
    private int capacity;

    public HashTableChaining(int initialCapacity = 16)
    {
        capacity = initialCapacity;
        buckets = new Entry[capacity];
        count = 0;
    }

    // Compute bucket index (mask off the sign bit rather than
    // calling Math.Abs, which overflows for int.MinValue)
    private int GetBucketIndex(K key)
    {
        int hash = key.GetHashCode() & 0x7FFFFFFF;
        return hash % capacity;
    }

    // O(1) average insertion
    public void Add(K key, V value)
    {
        if (key == null) throw new ArgumentNullException(nameof(key));

        // Check load factor and resize if needed
        double loadFactor = (double)count / capacity;
        if (loadFactor > 0.75) Resize();

        int idx = GetBucketIndex(key);
        Entry entry = buckets[idx];

        // Search the existing chain
        while (entry != null)
        {
            if (entry.Key.Equals(key))
            {
                entry.Value = value; // Update existing
                return;
            }
            entry = entry.Next;
        }

        // Insert at head of chain
        Entry newEntry = new Entry(key, value);
        newEntry.Next = buckets[idx];
        buckets[idx] = newEntry;
        count++;
    }

    // O(1) average retrieval
    public bool TryGet(K key, out V value)
    {
        int idx = GetBucketIndex(key);
        Entry entry = buckets[idx];
        while (entry != null)
        {
            if (entry.Key.Equals(key))
            {
                value = entry.Value;
                return true;
            }
            entry = entry.Next;
        }
        value = default;
        return false;
    }

    // O(1) average removal
    public bool Remove(K key)
    {
        int idx = GetBucketIndex(key);
        Entry entry = buckets[idx];
        Entry prev = null;
        while (entry != null)
        {
            if (entry.Key.Equals(key))
            {
                if (prev == null) buckets[idx] = entry.Next; // Remove head
                else prev.Next = entry.Next;                 // Remove from middle
                count--;
                return true;
            }
            prev = entry;
            entry = entry.Next;
        }
        return false;
    }

    // Double table size when load factor > 0.75
    private void Resize()
    {
        Entry[] oldBuckets = buckets;
        capacity *= 2;
        buckets = new Entry[capacity];
        count = 0;

        // Rehash all entries into the new table (walk each chain with
        // a local variable — the foreach iteration variable is read-only)
        foreach (Entry head in oldBuckets)
        {
            Entry entry = head;
            while (entry != null)
            {
                Add(entry.Key, entry.Value);
                entry = entry.Next;
            }
        }
    }

    public int Count => count;
}
```
Open Addressing stores colliding entries in other empty slots within the same array. Uses probing sequences: linear probing (check next slot), quadratic probing (check 1, 4, 9, ... slots away), and double hashing (use secondary hash). Requires low load factors and rehashing when full.
When hash(key) maps to an occupied slot, check hash(key) + 1, hash(key) + 2, and so on until an empty slot is found. Simple, but it suffers from primary clustering—runs of occupied slots grow and merge, lengthening probe sequences.
```csharp
// Open addressing with linear probing
public class HashTableLinearProbing<K, V>
{
    private class Slot
    {
        public K Key { get; set; }
        public V Value { get; set; }
        public bool IsOccupied { get; set; }
        public bool IsDeleted { get; set; }
    }

    private Slot[] slots;
    private int count;

    public HashTableLinearProbing(int capacity = 16)
    {
        slots = new Slot[capacity];
        for (int i = 0; i < capacity; i++) slots[i] = new Slot();
        count = 0;
    }

    private int GetBucketIndex(K key)
    {
        int hash = key.GetHashCode() & 0x7FFFFFFF; // Avoid Math.Abs overflow
        return hash % slots.Length;
    }

    // Linear probing: check hash, hash+1, hash+2, ...
    // Lookups must probe PAST tombstones (deleted slots) and stop only
    // at a never-used slot; otherwise keys inserted beyond a
    // later-deleted entry become unreachable.
    private int FindSlot(K key, bool forInsert)
    {
        int idx = GetBucketIndex(key);
        int firstTombstone = -1;

        for (int i = 0; i < slots.Length; i++)
        {
            int probeIdx = (idx + i) % slots.Length;
            Slot slot = slots[probeIdx];

            if (slot.IsOccupied)
            {
                if (slot.Key.Equals(key)) return probeIdx; // Found existing key
            }
            else if (slot.IsDeleted)
            {
                // Tombstone: reusable for insertion, but keep probing
                if (firstTombstone == -1) firstTombstone = probeIdx;
            }
            else
            {
                // Truly empty slot: the probe sequence ends here
                if (forInsert)
                    return firstTombstone != -1 ? firstTombstone : probeIdx;
                return -1; // Key not found
            }
        }
        return forInsert ? firstTombstone : -1; // Table full
    }

    public void Add(K key, V value)
    {
        if (count > slots.Length * 0.5) Resize(); // Keep load factor <= 0.5

        int idx = FindSlot(key, true);
        if (idx == -1) throw new InvalidOperationException("Table full");

        bool isNew = !slots[idx].IsOccupied;
        slots[idx].Key = key;
        slots[idx].Value = value;
        slots[idx].IsOccupied = true;
        slots[idx].IsDeleted = false;
        if (isNew) count++;
    }

    public bool TryGet(K key, out V value)
    {
        int idx = FindSlot(key, false);
        if (idx == -1)
        {
            value = default;
            return false;
        }
        value = slots[idx].Value;
        return true;
    }

    public bool Remove(K key)
    {
        int idx = FindSlot(key, false);
        if (idx == -1) return false;

        // Leave a tombstone so later probe sequences stay intact
        slots[idx].IsDeleted = true;
        slots[idx].IsOccupied = false;
        count--;
        return true;
    }

    private void Resize()
    {
        Slot[] oldSlots = slots;
        slots = new Slot[oldSlots.Length * 2];
        for (int i = 0; i < slots.Length; i++) slots[i] = new Slot();
        count = 0;

        // Rehash all live entries
        foreach (Slot slot in oldSlots)
        {
            if (slot.IsOccupied) Add(slot.Key, slot.Value);
        }
    }
}
```
Quadratic probing checks indices: hash(k), hash(k)+1, hash(k)+4, hash(k)+9, etc. (offsets = i²). Reduces clustering vs linear probing. Double hashing uses secondary hash: hash(k) + i·hash2(k). Both require load factor ≤ 0.5 for guaranteed empty slot.
```csharp
// Quadratic probing: offset = i²
// (simplified: tombstone handling omitted for clarity)
private int FindSlotQuadratic(K key, bool forInsert)
{
    int idx = GetBucketIndex(key);
    for (int i = 0; i < slots.Length; i++)
    {
        int probeIdx = (idx + i * i) % slots.Length; // Quadratic offset
        Slot slot = slots[probeIdx];
        if (!slot.IsOccupied) return forInsert ? probeIdx : -1;
        if (slot.Key.Equals(key)) return probeIdx;
    }
    return -1;
}

// Double hashing: offset = i * hash2(key)
private int SecondaryHash(K key)
{
    // Always in 1..7, so the step size is never zero
    return 7 - (Math.Abs(key.GetHashCode()) % 7);
}

private int FindSlotDoubleHash(K key, bool forInsert)
{
    int idx = GetBucketIndex(key);
    int step = SecondaryHash(key);
    for (int i = 0; i < slots.Length; i++)
    {
        int probeIdx = (idx + i * step) % slots.Length;
        Slot slot = slots[probeIdx];
        if (!slot.IsOccupied) return forInsert ? probeIdx : -1;
        if (slot.Key.Equals(key)) return probeIdx;
    }
    return -1;
}
```
Let's build a production-quality hash table from scratch. This implementation uses chaining, includes resizing logic, and demonstrates all core operations—Add, Get, Remove, Contains—with O(1) average complexity.
```csharp
public class SimpleHashTable<K, V>
{
    private class Node
    {
        public K Key;
        public V Value;
        public Node Next;
        public Node(K k, V v) { Key = k; Value = v; Next = null; }
    }

    private Node[] buckets;
    private int capacity;
    private int size;

    public SimpleHashTable(int initialCapacity = 16)
    {
        if (initialCapacity < 1) throw new ArgumentException("Capacity must be positive");
        capacity = initialCapacity;
        buckets = new Node[capacity];
        size = 0;
    }

    // Hash function: map key to bucket index
    private int Hash(K key)
    {
        if (key == null) throw new ArgumentNullException(nameof(key));
        int hashCode = key.GetHashCode() & 0x7FFFFFFF; // Strip sign bit safely
        return hashCode % capacity;
    }

    // O(1) average: add or update a key-value pair
    public void Add(K key, V value)
    {
        // Check load factor (size / capacity)
        double loadFactor = (double)size / capacity;
        if (loadFactor > 0.75)
        {
            Resize();
        }

        int idx = Hash(key);
        Node node = buckets[idx];

        // Search for an existing key in the chain
        while (node != null)
        {
            if (node.Key.Equals(key))
            {
                node.Value = value; // Update value
                return;
            }
            node = node.Next;
        }

        // Key not found: insert at head
        Node newNode = new Node(key, value);
        newNode.Next = buckets[idx];
        buckets[idx] = newNode;
        size++;
    }

    // O(1) average: get value by key
    public bool TryGetValue(K key, out V value)
    {
        int idx = Hash(key);
        Node node = buckets[idx];
        while (node != null)
        {
            if (node.Key.Equals(key))
            {
                value = node.Value;
                return true;
            }
            node = node.Next;
        }
        value = default;
        return false;
    }

    // O(1) average: check if a key exists
    public bool ContainsKey(K key)
    {
        return TryGetValue(key, out _);
    }

    // O(1) average: remove a key-value pair
    public bool Remove(K key)
    {
        int idx = Hash(key);
        Node node = buckets[idx];
        Node prev = null;
        while (node != null)
        {
            if (node.Key.Equals(key))
            {
                if (prev == null) buckets[idx] = node.Next;
                else prev.Next = node.Next;
                size--;
                return true;
            }
            prev = node;
            node = node.Next;
        }
        return false;
    }

    // Double capacity and rehash all entries
    private void Resize()
    {
        Node[] oldBuckets = buckets;
        capacity *= 2;
        buckets = new Node[capacity];
        size = 0;

        // Rehash old entries into the new buckets (walk each chain with
        // a local variable — the foreach iteration variable is read-only)
        foreach (Node head in oldBuckets)
        {
            Node node = head;
            while (node != null)
            {
                Add(node.Key, node.Value);
                node = node.Next;
            }
        }
    }

    public int Count => size;
    public int Capacity => capacity;
}
```
```csharp
// Create a hash table mapping student IDs -> names
var students = new SimpleHashTable<int, string>();

// O(1) average additions
students.Add(101, "Alice");
students.Add(102, "Bob");
students.Add(103, "Charlie");

// O(1) average lookup
if (students.TryGetValue(102, out string name))
    Console.WriteLine(name); // Bob

// O(1) average containment check
bool exists = students.ContainsKey(105); // false

// O(1) average removal
students.Remove(101);

Console.WriteLine($"Table size: {students.Count}"); // 2
```
The C# Dictionary<K,V> and HashSet<T> are production-quality generic hash table implementations. Dictionary maps keys to values; HashSet stores unique elements. Both provide O(1) average-case operations.
```csharp
// Create dictionary
var dict = new Dictionary<string, int>();

// Add items (throws ArgumentException if the key already exists)
dict.Add("apple", 1);
dict.Add("banana", 2);

// Indexer for add/update (never throws on duplicates)
dict["cherry"] = 3;
dict["apple"] = 10; // Update

// Safe lookup — doesn't throw
if (dict.TryGetValue("apple", out int value))
    Console.WriteLine(value); // 10

// Check existence
bool hasKey = dict.ContainsKey("banana"); // true
bool hasValue = dict.ContainsValue(3);    // true

// Remove item
bool removed = dict.Remove("banana"); // true

// Remove overload that also returns the value (.NET Core 2.0+) — doesn't throw
bool wasRemoved = dict.Remove("banana", out int val); // false (already removed)

// Collections: Keys, Values, Count
var keys = dict.Keys;     // ICollection of keys
var values = dict.Values; // ICollection of values
int count = dict.Count;   // 2

// Iteration
foreach (var kvp in dict)
{
    Console.WriteLine($"{kvp.Key}: {kvp.Value}");
}

dict.Clear(); // Remove all items
```
| Operation | Average | Worst Case | Notes |
|---|---|---|---|
| Add (indexer) | O(1) | O(n) | Updates if key exists |
| Add (method) | O(1) | O(n) | Throws if key exists |
| Remove | O(1) | O(n) | Returns bool |
| TryGetValue | O(1) | O(n) | Safe lookup |
| ContainsKey | O(1) | O(n) | Existence check |
| ContainsValue | O(n) | O(n) | Full scan required |
| Clear | O(n) | O(n) | Removes all items |
```csharp
// Create hash set
var numbers = new HashSet<int> { 1, 2, 3, 4 };

// Add/Remove items (duplicates ignored)
bool added = numbers.Add(5);      // true
bool notAdded = numbers.Add(2);   // false (already exists)
bool removed = numbers.Remove(1); // true

// Containment check — O(1)
bool contains = numbers.Contains(3); // true

// Count, Clear
int count = numbers.Count; // 4
numbers.Clear();           // O(n)

// Set operations — each example below restarts from set1 = {1, 2, 3}
var set1 = new HashSet<int> { 1, 2, 3 };
var set2 = new HashSet<int> { 2, 3, 4 };

// Union: add all elements from set2
set1.UnionWith(set2); // set1 = {1, 2, 3, 4}

// Intersection: keep only common elements
set1 = new HashSet<int> { 1, 2, 3 };
set1.IntersectWith(set2); // set1 = {2, 3}

// Difference: remove elements found in set2
set1 = new HashSet<int> { 1, 2, 3 };
set1.ExceptWith(set2); // set1 = {1}

// Symmetric difference: keep elements in exactly one set
set1 = new HashSet<int> { 1, 2, 3 };
set1.SymmetricExceptWith(set2); // set1 = {1, 4}

// Queries (don't modify the sets)
bool isSubset = set1.IsSubsetOf(set2);
bool isSuperset = set1.IsSupersetOf(set2);
bool overlaps = set1.Overlaps(set2);
bool disjoint = !set1.Overlaps(set2);
```
SortedDictionary and SortedSet are ordered variants backed by Red-Black Trees. They maintain keys in sorted order at O(log n) cost per operation. Use when you need ordering or range queries.
| Feature | Dictionary | SortedDictionary |
|---|---|---|
| Lookup by key | O(1) avg | O(log n) |
| Insert | O(1) avg | O(log n) |
| Delete | O(1) avg | O(log n) |
| Enumeration order | Insertion order | Sorted by key |
| Range queries | No | Not directly (SortedSet offers GetViewBetween) |
```csharp
// Create a sorted dictionary (keys kept in sorted order)
var scores = new SortedDictionary<string, int>();
scores.Add("Charlie", 85);
scores.Add("Alice", 95);
scores.Add("Bob", 88);

// Enumerate in sorted key order
foreach (var kvp in scores)
{
    // Alice: 95, Bob: 88, Charlie: 85
    Console.WriteLine($"{kvp.Key}: {kvp.Value}");
}

// O(log n) operations
int aliceScore = scores["Alice"];
bool removed = scores.Remove("Bob");

// SortedDictionary has no GetViewBetween; for a key range,
// filter while enumerating (sorted order lets you break early)
foreach (var kvp in scores)
{
    if (string.Compare(kvp.Key, "Charlie") > 0) break;
    if (string.Compare(kvp.Key, "Alice") >= 0)
        Console.WriteLine(kvp.Key); // Keys between "Alice" and "Charlie"
}
```
```csharp
// Create a sorted set (unique, ordered elements)
var primes = new SortedSet<int> { 2, 3, 5, 7, 11 };

// Enumerate in sorted order
foreach (int p in primes)
    Console.WriteLine(p); // 2, 3, 5, 7, 11

// O(log n) operations
primes.Add(13);
bool contains = primes.Contains(7); // true

// Smallest and largest elements
int min = primes.Min; // 2
int max = primes.Max; // 13

// Retrieve the stored instance equal to a value (.NET Core 2.0+)
bool found = primes.TryGetValue(5, out int val); // true

// Range view: elements between 3 and 11 inclusive
var view = primes.GetViewBetween(3, 11); // {3, 5, 7, 11}

// Reverse enumeration
foreach (int p in primes.Reverse())
    Console.WriteLine(p); // 13, 11, 7, 5, 3, 2
```
Real-world problems repeat patterns. Master frequency counting, complement finding, grouping by attribute, and two-pointer techniques combined with hash tables.
Count occurrences of elements. Build a map, increment count for each element.
```csharp
// Count word frequencies
public Dictionary<string, int> CountFrequencies(string[] words)
{
    var freq = new Dictionary<string, int>();
    foreach (string word in words)
    {
        if (freq.ContainsKey(word))
            freq[word]++;
        else
            freq[word] = 1;
    }
    return freq;
}

// With GetValueOrDefault (.NET Core 2.0+)
public Dictionary<string, int> CountFrequenciesModern(string[] words)
{
    var freq = new Dictionary<string, int>();
    foreach (string word in words)
    {
        freq[word] = freq.GetValueOrDefault(word) + 1;
    }
    return freq;
}
```
Find if target = num1 + num2 exists. For each number, check if (target - number) exists in map.
```csharp
// Find two numbers that sum to target
public int[] TwoSum(int[] nums, int target)
{
    var seen = new Dictionary<int, int>(); // value -> index
    for (int i = 0; i < nums.Length; i++)
    {
        int complement = target - nums[i];
        if (seen.ContainsKey(complement))
        {
            return new[] { seen[complement], i };
        }
        // Store num and its index (first occurrence wins, handling duplicates)
        if (!seen.ContainsKey(nums[i]))
            seen[nums[i]] = i;
    }
    return new int[0]; // Not found
}
// Time: O(n), Space: O(n)
```
Group items by a key attribute. Map key -> list of items.
```csharp
// Group words by length
public Dictionary<int, List<string>> GroupByLength(string[] words)
{
    var groups = new Dictionary<int, List<string>>();
    foreach (string word in words)
    {
        int len = word.Length;
        if (!groups.ContainsKey(len))
            groups[len] = new List<string>();
        groups[len].Add(word);
    }
    return groups;
}
// Time: O(n), Space: O(n)
```
Find subarrays with specific sum. Use hash map of prefix sums: map[prefix] = count of indices with that prefix.
```csharp
// Count subarrays summing to target
public int SubarraySumEqualsK(int[] nums, int k)
{
    var prefixCount = new Dictionary<int, int>();
    prefixCount[0] = 1; // Base case: empty prefix before the array

    int sum = 0;
    int count = 0;
    foreach (int num in nums)
    {
        sum += num;
        int target = sum - k;

        // If (sum - k) was seen before, each occurrence ends a valid subarray here
        if (prefixCount.ContainsKey(target))
            count += prefixCount[target];

        // Record the current prefix sum
        if (!prefixCount.ContainsKey(sum))
            prefixCount[sum] = 0;
        prefixCount[sum]++;
    }
    return count;
}
// Time: O(n), Space: O(n)
```
Track element frequencies in current window. Move window, update frequencies.
```csharp
// Longest substring without repeating characters
public int LengthOfLongestSubstring(string s)
{
    var charIndex = new Dictionary<char, int>();
    int maxLen = 0;
    int left = 0;

    for (int right = 0; right < s.Length; right++)
    {
        char c = s[right];

        // On a duplicate, move the left pointer past its previous occurrence
        if (charIndex.ContainsKey(c))
        {
            left = Math.Max(left, charIndex[c] + 1);
        }

        charIndex[c] = right;
        maxLen = Math.Max(maxLen, right - left + 1);
    }
    return maxLen;
}
// Time: O(n), Space: O(min(n, |charset|))
```
Master hash tables through comprehensive problems. Each includes a problem statement, approach explanation, complete C# code, time/space complexity analysis, and edge cases.
Use a one-pass hash map. While iterating, check for each number whether (target - number) already exists in the map. If yes, return both indices. If no, store the current number and its index for future lookups. O(n) time, O(n) space.
```csharp
public int[] TwoSum(int[] nums, int target)
{
    var map = new Dictionary<int, int>();
    for (int i = 0; i < nums.Length; i++)
    {
        int complement = target - nums[i];
        if (map.ContainsKey(complement))
        {
            return new[] { map[complement], i };
        }
        if (!map.ContainsKey(nums[i]))
        {
            map[nums[i]] = i;
        }
    }
    return new int[0];
}
// Time: O(n), Space: O(n)
```
Count character frequencies in both strings and compare them. If every count matches, the strings are anagrams. Alternative: sort both strings and compare, but that costs O(n log n) versus O(n) for the hash map.
```csharp
public bool IsAnagram(string s, string t)
{
    if (s.Length != t.Length) return false;

    var freq = new Dictionary<char, int>();
    foreach (char c in s)
    {
        if (!freq.ContainsKey(c)) freq[c] = 0;
        freq[c]++;
    }

    foreach (char c in t)
    {
        if (!freq.ContainsKey(c)) return false;
        freq[c]--;
        if (freq[c] < 0) return false;
    }
    return true;
}
// Time: O(n), Space: O(1) [at most 26 lowercase letters]
```
Use sorted string as key. Anagrams have same sorted representation. Map sorted_string -> list of anagrams. O(n k log k) where n is count, k is avg length.
```csharp
public IList<IList<string>> GroupAnagrams(string[] strs)
{
    var groups = new Dictionary<string, List<string>>();
    foreach (string s in strs)
    {
        // Sort characters to get a canonical form
        char[] chars = s.ToCharArray();
        Array.Sort(chars);
        string key = new string(chars);

        if (!groups.ContainsKey(key))
            groups[key] = new List<string>();
        groups[key].Add(s);
    }
    return new List<IList<string>>(groups.Values);
}
// Time: O(n k log k), Space: O(n k)
```
Count frequencies with hash map. Use min-heap of size k to track top k elements. When heap exceeds k, remove min. Final heap contains k most frequent. O(n log k).
```csharp
public int[] TopKFrequent(int[] nums, int k)
{
    // Count frequencies
    var freq = new Dictionary<int, int>();
    foreach (int n in nums)
    {
        if (!freq.ContainsKey(n)) freq[n] = 0;
        freq[n]++;
    }

    // Min-heap keyed by frequency (dequeues lowest count first)
    var heap = new PriorityQueue<int, int>();
    foreach (int num in freq.Keys)
    {
        heap.Enqueue(num, freq[num]);
        if (heap.Count > k) heap.Dequeue(); // Evict the least frequent
    }

    var result = new List<int>();
    while (heap.Count > 0) result.Add(heap.Dequeue());
    return result.ToArray();
}
// Time: O(n log k), Space: O(n)
```
Convert the array to a hash set in O(n). For each number, only start counting if it is the start of a sequence (num - 1 is not in the set), then count consecutive values from there. O(n) overall, because each element is visited at most twice.
```csharp
public int LongestConsecutive(int[] nums)
{
    var numSet = new HashSet<int>(nums);
    int longest = 0;

    foreach (int num in numSet)
    {
        // Only count from the start of a sequence
        if (!numSet.Contains(num - 1))
        {
            int length = 1;
            int current = num;
            while (numSet.Contains(current + 1))
            {
                length++;
                current++;
            }
            longest = Math.Max(longest, length);
        }
    }
    return longest;
}
// Time: O(n), Space: O(n)
```
Use prefix sum technique. Track cumulative sum and its frequency. For each position, check if (current_sum - k) exists. Count += frequency. This identifies valid subarrays. O(n) time.
```csharp
public int SubarraySum(int[] nums, int k)
{
    var prefixSum = new Dictionary<int, int>();
    prefixSum[0] = 1; // Base case: empty prefix

    int sum = 0;
    int count = 0;
    foreach (int num in nums)
    {
        sum += num;
        int target = sum - k;
        if (prefixSum.ContainsKey(target))
            count += prefixSum[target];

        if (!prefixSum.ContainsKey(sum))
            prefixSum[sum] = 0;
        prefixSum[sum]++;
    }
    return count;
}
// Time: O(n), Space: O(n)
```
Use a hash set. Iterate the array; if an element is already in the set, return true, otherwise add it. If the loop completes without a hit, return false. O(n) time, O(n) space.
```csharp
public bool ContainsDuplicate(int[] nums)
{
    var seen = new HashSet<int>();
    foreach (int num in nums)
    {
        if (seen.Contains(num)) return true;
        seen.Add(num);
    }
    return false;
}
// Time: O(n), Space: O(n)
```
Maintain a sliding window of size k, using a hash set to track the elements currently inside it. As the window slides right, add the new element; if it is already present, return true. Remove the leftmost element whenever the window exceeds k.
```csharp
public bool ContainsNearbyDuplicate(int[] nums, int k)
{
    var window = new HashSet<int>();
    for (int i = 0; i < nums.Length; i++)
    {
        if (window.Contains(nums[i])) return true;
        window.Add(nums[i]);

        // Keep the window at size k
        if (window.Count > k)
            window.Remove(nums[i - k]);
    }
    return false;
}
// Time: O(n), Space: O(min(n, k))
```
Use bucketing strategy. Divide number line into buckets of size (t+1). Numbers in same bucket or adjacent buckets may satisfy condition. Check only current and adjacent buckets. O(n) amortized.
```csharp
public bool ContainsNearbyAlmostDuplicate(int[] nums, int k, long t)
{
    if (t < 0 || k < 1) return false;

    var buckets = new Dictionary<long, long>();
    long bucketSize = t + 1;

    // Map a value to its bucket id (floor division that also
    // handles negative values correctly)
    long GetBucket(long value) =>
        value < 0 ? (value + 1) / bucketSize - 1 : value / bucketSize;

    for (int i = 0; i < nums.Length; i++)
    {
        long num = nums[i];
        long bucket = GetBucket(num);

        // Same bucket: values differ by at most t
        if (buckets.ContainsKey(bucket)) return true;

        // Adjacent buckets may also be within t
        if (buckets.ContainsKey(bucket - 1) && Math.Abs(num - buckets[bucket - 1]) <= t)
            return true;
        if (buckets.ContainsKey(bucket + 1) && Math.Abs(num - buckets[bucket + 1]) <= t)
            return true;

        buckets[bucket] = num;

        // Keep only the last k values in the window
        if (i >= k) buckets.Remove(GetBucket(nums[i - k]));
    }
    return false;
}
// Time: O(n), Space: O(min(n, k))
```
Two-pass approach: First pass counts all characters. Second pass finds first with count 1. O(n) time, O(1) space (max 26 letters).
```csharp
public int FirstUniqChar(string s)
{
    var freq = new Dictionary<char, int>();

    // First pass: count frequencies
    foreach (char c in s)
    {
        if (!freq.ContainsKey(c)) freq[c] = 0;
        freq[c]++;
    }

    // Second pass: find the first character with count 1
    for (int i = 0; i < s.Length; i++)
    {
        if (freq[s[i]] == 1) return i;
    }
    return -1;
}
// Time: O(n), Space: O(1) [at most 26 letters]
```
Maintain two maps: s->t and t->s. For each pair, check consistency. If s[i] maps to different t chars or vice versa, return false.
```csharp
public bool IsIsomorphic(string s, string t)
{
    if (s.Length != t.Length) return false;

    var sToT = new Dictionary<char, char>();
    var tToS = new Dictionary<char, char>();

    for (int i = 0; i < s.Length; i++)
    {
        char sc = s[i];
        char tc = t[i];

        // Check s -> t mapping
        if (sToT.ContainsKey(sc))
        {
            if (sToT[sc] != tc) return false;
        }
        else
        {
            sToT[sc] = tc;
        }

        // Check t -> s mapping
        if (tToS.ContainsKey(tc))
        {
            if (tToS[tc] != sc) return false;
        }
        else
        {
            tToS[tc] = sc;
        }
    }
    return true;
}
// Time: O(n), Space: O(1) [bounded by charset size]
```
Similar to isomorphic strings. Two bidirectional maps: pattern char -> word, word -> pattern char. Verify one-to-one mapping consistency.
```csharp
public bool WordPattern(string pattern, string s)
{
    string[] words = s.Split(' ');
    if (pattern.Length != words.Length) return false;

    var charToWord = new Dictionary<char, string>();
    var wordToChar = new Dictionary<string, char>();

    for (int i = 0; i < pattern.Length; i++)
    {
        char c = pattern[i];
        string w = words[i];

        if (charToWord.ContainsKey(c))
        {
            if (charToWord[c] != w) return false;
        }
        else
        {
            charToWord[c] = w;
        }

        if (wordToChar.ContainsKey(w))
        {
            if (wordToChar[w] != c) return false;
        }
        else
        {
            wordToChar[w] = c;
        }
    }
    return true;
}
// Time: O(n + m), Space: O(n)
```
Keep a hash set per row, per column, and per 3×3 box (27 sets total). Check each non-empty cell against its row, column, and box sets; any repeat makes the board invalid. O(1) time and space, since the grid is fixed at 9×9.
```csharp
public bool IsValidSudoku(char[][] board)
{
    var rows = new HashSet<char>[9];
    var cols = new HashSet<char>[9];
    var boxes = new HashSet<char>[9];
    for (int i = 0; i < 9; i++)
    {
        rows[i] = new HashSet<char>();
        cols[i] = new HashSet<char>();
        boxes[i] = new HashSet<char>();
    }

    for (int i = 0; i < 9; i++)
    {
        for (int j = 0; j < 9; j++)
        {
            char c = board[i][j];
            if (c == '.') continue;

            int boxIdx = (i / 3) * 3 + (j / 3);
            if (rows[i].Contains(c) || cols[j].Contains(c) || boxes[boxIdx].Contains(c))
                return false;

            rows[i].Add(c);
            cols[j].Add(c);
            boxes[boxIdx].Add(c);
        }
    }
    return true;
}
// Time: O(1) [fixed 9x9 grid], Space: O(1)
```
Use dictionary + doubly-linked list. Dictionary maps key to node. Linked list maintains order: most recent at tail, least recent at head. On access, move node to tail. On capacity exceed, remove head.
```csharp
public class LRUCache
{
    private class Node
    {
        public int Key, Val;
        public Node Prev, Next;
    }

    private int capacity;
    private readonly Dictionary<int, Node> cache;
    private Node head, tail; // Dummy sentinel nodes

    public LRUCache(int capacity)
    {
        this.capacity = capacity;
        cache = new Dictionary<int, Node>();
        head = new Node();
        tail = new Node();
        head.Next = tail;
        tail.Prev = head;
    }

    // Most recently used lives just before the tail sentinel
    private void AddToTail(Node node)
    {
        node.Next = tail;
        node.Prev = tail.Prev;
        tail.Prev.Next = node;
        tail.Prev = node;
    }

    private void Remove(Node node)
    {
        node.Prev.Next = node.Next;
        node.Next.Prev = node.Prev;
    }

    public int Get(int key)
    {
        if (!cache.ContainsKey(key)) return -1;
        Node node = cache[key];
        Remove(node);
        AddToTail(node); // Mark as most recently used
        return node.Val;
    }

    public void Put(int key, int value)
    {
        if (cache.ContainsKey(key))
        {
            Node node = cache[key];
            node.Val = value;
            Remove(node);
            AddToTail(node);
        }
        else
        {
            if (cache.Count == capacity)
            {
                Node lru = head.Next; // Least recently used
                Remove(lru);
                cache.Remove(lru.Key);
            }
            Node newNode = new Node { Key = key, Val = value };
            cache[key] = newNode;
            AddToTail(newNode);
        }
    }
}
// Time: O(1) for Get/Put, Space: O(capacity)
```
Know when to use hash tables versus arrays, linked lists, trees, or heaps. Each excels at different problems.
| Structure | Lookup | Insert | Delete | Order | Best For |
|---|---|---|---|---|---|
| Hash Table | O(1) avg | O(1) avg | O(1) avg | No | Key-value, counting, grouping |
| Sorted Dict | O(log n) | O(log n) | O(log n) | Yes | Ordered keys, range queries |
| Array | O(1) | O(n) | O(n) | Yes | Index access, sequences |
| Linked List | O(n) | O(1) | O(1) | Yes | Frequent insertions, LRU |
| Binary Tree | O(log n) avg | O(log n) avg | O(log n) avg | Yes | Ordered, balanced access |
| Heap | O(n) | O(log n) | O(log n) | Partial | Priority queue, top-k |
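As a quick illustration of the ordering trade-off in the table above, this minimal sketch inserts the same keys into a Dictionary and a SortedDictionary and compares enumeration behavior:

```csharp
using System;
using System.Collections.Generic;

public class ChoosingDemo
{
    public static void Main()
    {
        // Hash table: O(1) average operations, but enumeration order
        // is an implementation detail you must not rely on
        var hash = new Dictionary<string, int>
        {
            ["banana"] = 2, ["apple"] = 1, ["cherry"] = 3
        };

        // Tree-based: O(log n) operations, keys always enumerate sorted
        var sorted = new SortedDictionary<string, int>(hash);

        Console.WriteLine(string.Join(", ", sorted.Keys)); // apple, banana, cherry

        // Rule of thumb: reach for Dictionary by default, and switch to
        // SortedDictionary only when sorted traversal or ordered
        // neighbors are actually required.
    }
}
```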