2792. Count Nodes That Are Great Enough 🔒
Problem Description
You are given the root of a binary tree and an integer k. Your task is to count how many nodes in the tree are "great enough."
A node is considered great enough if it satisfies both of these conditions:
- Its subtree (including the node itself) contains at least k nodes
- The node's value is greater than the values of at least k nodes in its subtree
The subtree of a node includes the node itself and all its descendants.
For example, if k = 3, a node is great enough if:
- Its subtree has 3 or more nodes (counting itself and all descendants)
- Its value is larger than at least 3 of the values in that subtree
The solution uses a depth-first search (DFS) with a heap to efficiently track the k smallest values in each subtree. The heap holds at most k elements and stores them negated, so Python's min-heap behaves like a max-heap over the original values: its top is the largest of the k smallest values kept, i.e., the k-th smallest value. When the heap holds k elements (meaning the node has at least k descendants) and the node's value is greater than this k-th smallest value (checked via -l[0] < root.val), the node is counted as great enough.
The function returns the total count of nodes that meet both criteria.
Flowchart Walkthrough
First, let's pin down the algorithm using the Flowchart. Here's a step-by-step walkthrough:
Is it a graph?
- Yes: While this is a binary tree problem, a tree is a special type of graph (specifically, a connected acyclic graph). Binary trees are graphs with nodes and edges where each node has at most two children.
Is it a tree?
- Yes: The problem explicitly states we're working with a binary tree. We need to traverse the tree and analyze properties of each node and its subtree.
DFS
- Yes: We arrive at DFS (Depth-First Search) as our algorithm choice.
Conclusion: The flowchart suggests using a DFS approach for this problem.
This makes perfect sense because:
- We need to process each node's entire subtree before determining if that node is "great enough"
- We need to collect information from children before processing the parent (post-order traversal)
- DFS allows us to recursively compute subtree properties - gathering the count of nodes and their values from each subtree
- The bottom-up nature of DFS lets us accumulate the k smallest values from the leaves up to the root, which is exactly what we need to check whether a node's value is greater than at least k nodes in its subtree
The solution implements DFS by recursively traversing the left and right subtrees, maintaining a heap of the k smallest values seen so far in each subtree, and counting nodes that satisfy both conditions (at least k nodes in the subtree and a value greater than at least k of them). A generic post-order skeleton is sketched below.
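For reference, the generic post-order shape looks roughly like the following minimal sketch (the name post_order is an illustrative placeholder, not part of the actual solution):

from typing import Optional

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def post_order(node: Optional[TreeNode]):
    """Post-order shape: summarize both children first, then the node itself."""
    if node is None:
        return None                      # base case: summary of an empty subtree
    left_info = post_order(node.left)    # 1. process the left subtree
    right_info = post_order(node.right)  # 2. process the right subtree
    # 3. combine the child summaries and evaluate the current node here
    return (left_info, node.val, right_info)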
Intuition
When we need to determine if a node is "great enough," we must know two things about its subtree: how many nodes it contains and what values those nodes have. This naturally suggests a bottom-up approach where we first process the leaves, then work our way up to the root.
The key insight is that we don't need to track all node values in a subtree - we only care about the k smallest values. Why? Because if a node's value is greater than the k-th smallest value in its subtree, it's automatically greater than at least k nodes. This is where the heap data structure becomes useful.
Consider how we can efficiently maintain the k smallest values as we traverse the tree. A max-heap of size k is perfect for this - the largest element in this heap is the k-th smallest value we've seen. When we add a new value:
- If the heap has fewer than k elements, we simply add it
- If the heap already has k elements, we add the new value and remove the largest, ensuring we always keep the k smallest
The clever trick in the solution is using negative values in the heap. Python's heapq is a min-heap, but by negating values when we insert them, it behaves like a max-heap over the original values: popping the heap's minimum discards the largest original value. So -l[0] is the largest value still kept, i.e., the k-th smallest value in the subtree.
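To make this concrete, here is a minimal standalone sketch of the capped, negated heap (the helper name demo_k_smallest and the sample values are purely illustrative):

from heapq import heappush, heappop

def demo_k_smallest(values, k):
    """Keep the k smallest of `values` using a min-heap of negated numbers."""
    heap = []  # stores -v for each kept value v
    for v in values:
        heappush(heap, -v)
        if len(heap) > k:
            heappop(heap)  # pops the most negative entry, i.e. drops the largest kept value
    # -heap[0] is the largest value still kept = the k-th smallest overall
    return sorted(-x for x in heap), -heap[0]

print(demo_k_smallest([5, 1, 9, 3, 7], 3))  # ([1, 3, 5], 5) -> 5 is the 3rd smallest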
As we traverse using DFS, at each node we:
- Merge the heaps from the left and right subtrees (keeping only the k smallest values)
- Check whether we have at least k descendants and whether the current node's value beats the k-th smallest of them
- Add the current node's value to the heap for its parent to use
This bottom-up accumulation ensures that by the time we evaluate any node, we have complete information about its subtree, allowing us to make the "great enough" determination accurately.
Learn more about Tree, Depth-First Search, Divide and Conquer and Binary Tree patterns.
Solution Approach
The solution implements a post-order DFS traversal with a clever heap-based tracking system. Let's break down the implementation:
Core Data Structure - Size-Limited Heap:
The push helper function maintains a heap of at most k elements:
def push(pq, x):
    heappush(pq, x)
    if len(pq) > k:
        heappop(pq)
Because values are stored as negatives, popping the heap's minimum discards the largest original value, so once the heap is full it holds exactly the k smallest values seen so far.
DFS Traversal Pattern:
The dfs function processes nodes in post-order:
def dfs(root):
    if root is None:
        return []
    l, r = dfs(root.left), dfs(root.right)
First recursively process the left and right children, obtaining their heaps of (up to) k smallest values.
Merging Subtree Information:
for x in r:
    push(l, x)
We merge the right subtree's heap into the left subtree's heap. By reusing l as our accumulator and pushing elements through the size-limited push function, we efficiently retain only the k smallest values from both subtrees.
Checking the "Great Enough" Condition:
if len(l) == k and -l[0] < root.val:
    nonlocal ans
    ans += 1
- len(l) == k: the heap is capped at k elements, so reaching size k (before the current node is pushed) means the node has at least k descendants, and therefore its subtree has at least k nodes
- -l[0] < root.val: since we store negative values, -l[0] gives us the k-th smallest value among those descendants; if the current node's value is greater than this, it is greater than at least k nodes in its subtree
Adding the Current Node:
push(l, -root.val)
return l
We add the current node's negated value to the heap (still keeping only the k smallest values) and return this heap for the parent node to use.
The algorithm's time complexity is O(n * k * log k), where n is the number of nodes, as each node performs up to k heap operations on a heap of size at most k. The space complexity is O(h * k), where h is the height of the tree, due to the recursion stack and the heaps maintained along it.
Example Walkthrough
Let's walk through a concrete example with k = 2 and the following binary tree:
        5
       / \
      3   7
     /   / \
    2   6   9
We'll trace through the DFS traversal to see how the algorithm determines which nodes are "great enough."
Step 1: Process leaf node 2
- The DFS reaches leaf node 2
- Its left and right children are null, so l = [] and r = []
- After merging: l = [] (empty)
- Check: len(l) = 0 < k, so node 2 is NOT great enough
- Add node 2: push(l, -2) → l = [-2]
- Return [-2] to the parent
Step 2: Process node 3
- Left child returns [-2], right child returns []
- After merging: l = [-2]
- Check: len(l) = 1 < k, so node 3 is NOT great enough
- Add node 3: push(l, -3) → l = [-3, -2] (the heap keeps its minimum at the top)
- Return [-3, -2] to the parent
Step 3: Process leaf node 6
- Similar to node 2, returns [-6] to the parent
Step 4: Process leaf node 9
- Similar to node 2, returns [-9] to the parent
Step 5: Process node 7
- Left child returns [-6], right child returns [-9]
- Merge right into left: push([-6], -9) → l = [-9, -6]
- Check: len(l) = 2 = k and -l[0] = -(-9) = 9. Is 7 > 9? No
- Node 7 is NOT great enough (its value is not greater than at least 2 values in its subtree)
- Add node 7: push(l, -7) → l = [-9, -6, -7]
- Since len(l) > k, pop the heap's minimum -9, discarding the largest value 9: l = [-7, -6] (keeping the 2 smallest values, 6 and 7)
- Return [-7, -6] to the parent
Step 6: Process root node 5
- Left child returns [-3, -2], right child returns [-7, -6]
- Merge right into left:
  - push([-3, -2], -7) → [-7, -2, -3], pop the minimum -7 (discarding value 7) → [-3, -2]
  - push([-3, -2], -6) → [-6, -2, -3], pop the minimum -6 (discarding value 6) → [-3, -2]
- After merging: l = [-3, -2] (the 2 smallest values among all descendants: 2 and 3)
- Check: len(l) = 2 = k and -l[0] = -(-3) = 3. Is 5 > 3? Yes!
- Node 5 IS great enough! ans = 1
- Add node 5: push(l, -5) → [-5, -2, -3], pop the minimum -5 → [-3, -2]
Final Result: Only node 5 is "great enough" because:
- Its subtree has at least 2 nodes (it has 6 in total)
- Its value (5) is greater than at least 2 nodes' values in its subtree (greater than 2 and 3)
The algorithm returns ans = 1.
Solution Implementation
# Definition for a binary tree node.
# class TreeNode:
#     def __init__(self, val=0, left=None, right=None):
#         self.val = val
#         self.left = left
#         self.right = right

from heapq import heappush, heappop
from typing import Optional, List


class Solution:
    def countGreatEnoughNodes(self, root: Optional[TreeNode], k: int) -> int:
        def maintain_k_smallest(min_heap: List[int], value: int) -> None:
            """
            Maintains a heap of at most k negated values, representing the
            k smallest values seen so far. When the heap grows beyond size k,
            the popped minimum (the most negative entry) corresponds to
            discarding the largest of the kept values.
            """
            heappush(min_heap, value)
            if len(min_heap) > k:
                heappop(min_heap)

        def traverse_and_count(node: Optional[TreeNode]) -> List[int]:
            """
            Post-order traversal. Returns a heap of negated values holding
            up to k of the smallest values in the subtree rooted at node.
            """
            # Base case: empty node returns an empty heap
            if node is None:
                return []

            # Recursively process left and right subtrees
            left_heap = traverse_and_count(node.left)
            right_heap = traverse_and_count(node.right)

            # Merge the right subtree's values into the left heap,
            # keeping only the k smallest values overall
            for value in right_heap:
                maintain_k_smallest(left_heap, value)

            # A full heap (size k) means the node has at least k descendants.
            # Values are stored negated, so -left_heap[0] is the k-th smallest
            # value among them; compare the node's value against it.
            if len(left_heap) == k and -left_heap[0] < node.val:
                nonlocal count
                count += 1

            # Add the current node's value (negated so the min-heap acts as a
            # max-heap over the original values) for the parent to use
            maintain_k_smallest(left_heap, -node.val)

            return left_heap

        # Counter for nodes meeting both criteria
        count = 0

        # Start the traversal from the root
        traverse_and_count(root)

        return count
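As a quick, purely illustrative sanity check of the Python solution on the walkthrough example, we would expect an answer of 1 (the TreeNode class below is a minimal stand-in for the commented-out definition and must be defined before the Solution class when running locally):

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

# Walkthrough tree with k = 2:
#         5
#        / \
#       3   7
#      /   / \
#     2   6   9
root = TreeNode(5,
                TreeNode(3, TreeNode(2)),
                TreeNode(7, TreeNode(6), TreeNode(9)))
print(Solution().countGreatEnoughNodes(root, 2))  # expected: 1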
/**
 * Definition for a binary tree node.
 * public class TreeNode {
 *     int val;
 *     TreeNode left;
 *     TreeNode right;
 *     TreeNode() {}
 *     TreeNode(int val) { this.val = val; }
 *     TreeNode(int val, TreeNode left, TreeNode right) {
 *         this.val = val;
 *         this.left = left;
 *         this.right = right;
 *     }
 * }
 */
class Solution {
    private int greatNodeCount; // Counter for nodes that are "great enough"
    private int k;              // The k value from the problem statement

    /**
     * Counts the nodes whose subtree contains at least k nodes and whose value
     * is greater than the values of at least k nodes in that subtree.
     *
     * @param root The root of the binary tree
     * @param k    The k value from the problem statement
     * @return The count of nodes that satisfy the condition
     */
    public int countGreatEnoughNodes(TreeNode root, int k) {
        this.k = k;
        this.greatNodeCount = 0;

        // Perform depth-first search to process the tree
        dfs(root);

        return greatNodeCount;
    }

    /**
     * Performs a post-order traversal to collect subtree values and
     * check whether the current node is "great enough".
     * Uses a max-heap to maintain the k smallest values from the subtree.
     *
     * @param node The current node being processed
     * @return A max-heap containing up to k of the smallest values in the subtree
     */
    private PriorityQueue<Integer> dfs(TreeNode node) {
        // Base case: return an empty max-heap for null nodes
        if (node == null) {
            return new PriorityQueue<>(Comparator.reverseOrder());
        }

        // Recursively process left and right subtrees
        PriorityQueue<Integer> leftHeap = dfs(node.left);
        PriorityQueue<Integer> rightHeap = dfs(node.right);

        // Merge right subtree values into the left heap.
        // Keep only the k smallest values by removing the largest when size exceeds k.
        for (int value : rightHeap) {
            leftHeap.offer(value);
            if (leftHeap.size() > k) {
                leftHeap.poll(); // Remove the largest value (max-heap property)
            }
        }

        // Check if the current node is "great enough":
        // it must have at least k descendants (heap is full) and its value must be
        // greater than the k-th smallest of them (the heap's top).
        if (leftHeap.size() == k && leftHeap.peek() < node.val) {
            greatNodeCount++;
        }

        // Add the current node's value to the heap for the parent's processing
        leftHeap.offer(node.val);
        if (leftHeap.size() > k) {
            leftHeap.poll(); // Maintain at most k smallest values
        }

        return leftHeap;
    }
}
/**
 * Definition for a binary tree node.
 * struct TreeNode {
 *     int val;
 *     TreeNode *left;
 *     TreeNode *right;
 *     TreeNode() : val(0), left(nullptr), right(nullptr) {}
 *     TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
 *     TreeNode(int x, TreeNode *left, TreeNode *right) : val(x), left(left), right(right) {}
 * };
 */
class Solution {
public:
    int countGreatEnoughNodes(TreeNode* root, int k) {
        int result = 0;

        // DFS that returns a max-heap holding up to k of the smallest values in the subtree
        function<priority_queue<int>(TreeNode*)> dfs = [&](TreeNode* node) -> priority_queue<int> {
            // Base case: return an empty max-heap for null nodes
            if (!node) {
                return priority_queue<int>();
            }

            // Recursively process left and right subtrees
            auto leftHeap = dfs(node->left);
            auto rightHeap = dfs(node->right);

            // Merge the right subtree's heap into the left subtree's heap,
            // keeping only the k smallest elements (the max-heap's top is the largest kept value)
            while (!rightHeap.empty()) {
                leftHeap.push(rightHeap.top());
                rightHeap.pop();

                // Keep only the k smallest elements
                if (leftHeap.size() > k) {
                    leftHeap.pop(); // Remove the largest element
                }
            }

            // Check if the current node is "great enough":
            // it must have at least k descendants (heap is full) and its value must
            // exceed the k-th smallest of them (the heap's top).
            if (leftHeap.size() == k && leftHeap.top() < node->val) {
                ++result;
            }

            // Add the current node's value to the heap
            leftHeap.push(node->val);

            // Maintain a heap of at most k elements
            if (leftHeap.size() > k) {
                leftHeap.pop(); // Remove the largest element
            }

            return leftHeap;
        };

        // Start DFS traversal from the root
        dfs(root);

        return result;
    }
};
/**
 * Definition for a binary tree node.
 */
class TreeNode {
    val: number;
    left: TreeNode | null;
    right: TreeNode | null;

    constructor(val?: number, left?: TreeNode | null, right?: TreeNode | null) {
        this.val = (val === undefined ? 0 : val);
        this.left = (left === undefined ? null : left);
        this.right = (right === undefined ? null : right);
    }
}

/**
 * Custom MinHeap implementation. The solution stores negated values in it,
 * so it effectively behaves as a max-heap over the original values and,
 * capped at k elements, keeps the k smallest original values.
 */
class MinHeap {
    private heap: number[];

    constructor() {
        this.heap = [];
    }

    push(val: number): void {
        this.heap.push(val);
        this.bubbleUp(this.heap.length - 1);
    }

    pop(): number | undefined {
        if (this.heap.length === 0) return undefined;
        if (this.heap.length === 1) return this.heap.pop();

        const min = this.heap[0];
        this.heap[0] = this.heap.pop()!;
        this.bubbleDown(0);
        return min;
    }

    top(): number | undefined {
        return this.heap[0];
    }

    size(): number {
        return this.heap.length;
    }

    isEmpty(): boolean {
        return this.heap.length === 0;
    }

    private bubbleUp(index: number): void {
        while (index > 0) {
            const parentIndex = Math.floor((index - 1) / 2);
            if (this.heap[parentIndex] <= this.heap[index]) break;

            [this.heap[parentIndex], this.heap[index]] = [this.heap[index], this.heap[parentIndex]];
            index = parentIndex;
        }
    }

    private bubbleDown(index: number): void {
        while (true) {
            let minIndex = index;
            const leftChild = 2 * index + 1;
            const rightChild = 2 * index + 2;

            if (leftChild < this.heap.length && this.heap[leftChild] < this.heap[minIndex]) {
                minIndex = leftChild;
            }
            if (rightChild < this.heap.length && this.heap[rightChild] < this.heap[minIndex]) {
                minIndex = rightChild;
            }

            if (minIndex === index) break;

            [this.heap[minIndex], this.heap[index]] = [this.heap[index], this.heap[minIndex]];
            index = minIndex;
        }
    }
}

/**
 * Counts nodes that are "great enough" - nodes whose subtree contains at least
 * k nodes and whose value is greater than the values of at least k nodes in
 * that subtree.
 *
 * @param root - The root of the binary tree
 * @param k - The k value from the problem statement
 * @returns The count of nodes that are "great enough"
 */
function countGreatEnoughNodes(root: TreeNode | null, k: number): number {
    let result = 0;

    /**
     * DFS traversal that returns a heap of negated values containing up to
     * k of the smallest values in the subtree rooted at the given node.
     *
     * @param node - Current node being processed
     * @returns MinHeap of negated values representing the k smallest values in the subtree
     */
    function dfs(node: TreeNode | null): MinHeap {
        // Base case: return an empty heap for null nodes
        if (!node) {
            return new MinHeap();
        }

        // Recursively process left and right subtrees
        const leftHeap = dfs(node.left);
        const rightHeap = dfs(node.right);

        // Merge the right subtree's heap into the left subtree's heap.
        // Values are negated, so popping the heap minimum discards the largest
        // original value, keeping only the k smallest values.
        while (!rightHeap.isEmpty()) {
            leftHeap.push(rightHeap.top()!);
            rightHeap.pop();

            if (leftHeap.size() > k) {
                leftHeap.pop();
            }
        }

        // Check if the current node is "great enough":
        // it must have at least k descendants (heap is full) and its value must
        // exceed the k-th smallest of them, which is -leftHeap.top().
        if (leftHeap.size() === k && -leftHeap.top()! < node.val) {
            result++;
        }

        // Add the current node's value (negated) to the heap
        leftHeap.push(-node.val);

        // Maintain a heap of at most k elements
        if (leftHeap.size() > k) {
            leftHeap.pop(); // Discard the largest original value
        }

        return leftHeap;
    }

    // Start DFS traversal from the root
    dfs(root);

    return result;
}
Time and Space Complexity
Time Complexity: O(n * k * log k), where n is the number of nodes in the tree.
The algorithm performs a depth-first search traversal, visiting each node exactly once. At each node, we perform the following operations:
- Merge the right subtree's heap into the left subtree's heap: this pushes at most k elements from the right heap into the left heap, where each push operation takes O(log k) time.
- Check if the current node satisfies the condition: O(1) time to access the top of the heap.
- Push the current node's value into the heap: O(log k) time.
Since we maintain heaps of size at most k, each heap operation (push/pop) takes O(log k) time. In the worst case, each node performs O(k) heap operations, resulting in O(k * log k) work per node. With n nodes in total, the overall time complexity is O(n * k * log k).
Space Complexity: O(n + h * k), where h is the height of the tree.
The space complexity consists of:
- Recursive call stack: O(h), where h is the height of the tree (this can be O(n) in the worst case of a skewed tree).
- Heap storage: each active level of recursion maintains a heap of size at most k. With O(h) heaps alive simultaneously on the call stack, each holding at most k elements, this is O(h * k) space.
- Total nodes stored: across all heaps during the entire execution, we store at most n node values (each node's value is stored once).
Therefore, the total space complexity is O(n + h * k), which simplifies to O(n) when k is constant and the tree is balanced (h = O(log n)), or O(n * k) in the worst case of a skewed tree where h = O(n).
Learn more about how to find time and space complexity quickly.
Common Pitfalls
Pitfall 1: Checking the Condition After Adding the Current Node
Problem: A common mistake is checking whether a node is "great enough" after pushing the current node's value into the heap, rather than before.
Incorrect approach:
# Risky: adding the node's value first, then checking
maintain_k_smallest(left_heap, -node.val)
if len(left_heap) == k and -left_heap[0] < node.val:
    count += 1
Why it's a problem: After pushing the current node's value, the node itself becomes part of the set being compared against, so the check no longer directly expresses "greater than the k-th smallest value among the descendants." Even where this happens to produce the same count (a node can never be greater than its own value), the invariant becomes much harder to reason about and easy to break when the code is modified.
Solution: Check the "great enough" condition before adding the current node's value to the heap, so the comparison is made purely against descendant values:
# Correct: check first, then add
if len(left_heap) == k and -left_heap[0] < node.val:
    count += 1
maintain_k_smallest(left_heap, -node.val)
Pitfall 2: Forgetting to Negate Values When Simulating a Max-Heap
Problem: Pushing positive values into the heap while the rest of the logic assumes negated values.
Incorrect approach:
# Wrong: using positive values in the min-heap
maintain_k_smallest(left_heap, node.val)  # the capped heap now keeps the k largest values, not the k smallest
if len(left_heap) == k and left_heap[0] < node.val:  # left_heap[0] is the k-th largest value - wrong comparison
    count += 1
Why it fails: Python's heapq implements a min-heap. Without negating values, the size-capped heap discards its smallest element on overflow and therefore keeps the k largest values instead of the k smallest, and left_heap[0] becomes the k-th largest value rather than the k-th smallest - the comparison no longer matches the problem's condition.
Solution: Consistently negate values when pushing, so the min-heap behaves like a max-heap over the original values and retains the k smallest of them:
# Store as negative
maintain_k_smallest(left_heap, -node.val)
# -left_heap[0] is then the k-th smallest value
if len(left_heap) == k and -left_heap[0] < node.val:
    count += 1
Pitfall 3: Misunderstanding the Subtree Size Requirement
Problem: Reading len(left_heap) == k as "the subtree has exactly k nodes," or trying to enforce an exact subtree size.
Why it matters: The helper caps the heap at k elements, so its length can never exceed k (a len(left_heap) >= k check would behave identically here). Because the check runs before the current node's value is pushed, a heap of size k means the node has at least k descendants - its subtree may be far larger than k. If you instead tracked the full subtree size separately and required it to equal k exactly, you would wrongly reject nodes whose subtrees contain more than k nodes.
Solution: Rely on the capped heap - reaching size k before the current node is added is precisely the "at least k descendants" (and therefore "subtree with at least k nodes") condition the problem requires:
# Correct: a full heap means "at least k descendants"
if len(left_heap) == k and -left_heap[0] < node.val:
    count += 1
Recommended Readings
- Everything About Trees
- Depth First Search (DFS)
- Divide and Conquer