1485. Clone Binary Tree With Random Pointer 🔒
Problem Description
You are given a binary tree where each node has an extra pointer called random
that can point to any node in the tree or be null. Your task is to create a complete deep copy of this tree.
A deep copy means creating an entirely new tree with new nodes that have the same structure and values as the original tree. The new tree should preserve:
- The same binary tree structure (left and right child relationships)
- The same node values
- The same random pointer connections (but pointing to nodes in the new tree, not the original)
The tree nodes have four attributes:
- val: the integer value stored in the node
- left: pointer to the left child
- right: pointer to the right child
- random: pointer to any node in the tree (or null)
The solution uses a depth-first search (DFS) approach with memoization. It maintains a hash map mp
to store the mapping between original nodes and their copies. This prevents creating duplicate copies of the same node and ensures that random pointers correctly reference nodes in the new tree.
The algorithm works as follows:
- If the current node is null, return null
- If the node has already been copied (exists in the hash map), return the existing copy
- Create a new copy node with the same value
- Store the mapping in the hash map
- Recursively copy the left child, right child, and random pointer target
- Return the copied node
This approach handles the challenge of the random pointers potentially creating cycles or pointing to nodes that haven't been created yet by using the hash map to track all created copies.
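The steps above can be sketched as a compact Python function. The NodeCopy class below is a minimal stand-in defined only for this sketch (the actual problem environment provides the node classes):

```python
class NodeCopy:
    # Minimal stand-in node class for this sketch; the real problem provides it.
    def __init__(self, val=0, left=None, right=None, random=None):
        self.val = val
        self.left = left
        self.right = right
        self.random = random

def copy_tree(root, mp):
    """Deep-copy a tree with random pointers; mp maps originals to copies."""
    if root is None:                          # step 1: null stays null
        return None
    if root in mp:                            # step 2: already copied, reuse it
        return mp[root]
    copy = NodeCopy(root.val)                 # step 3: copy the value
    mp[root] = copy                           # step 4: cache BEFORE recursing
    copy.left = copy_tree(root.left, mp)      # step 5: copy all three pointers
    copy.right = copy_tree(root.right, mp)
    copy.random = copy_tree(root.random, mp)
    return copy                               # step 6: return the copy
```

Because the cache entry is written in step 4, a random pointer that leads back to an ancestor (or to the node itself) simply finds the already-cached copy instead of recursing forever.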
Flowchart Walkthrough
First, let's pin down the algorithm using the Flowchart. Here's a step-by-step walkthrough:
Is it a graph?
- Yes: While this is specifically a binary tree problem, a tree is a special type of graph (an acyclic connected graph). Additionally, the random pointers can create arbitrary connections between nodes, making this graph-like in nature.
Is it a tree?
- Yes: The base structure is explicitly a binary tree with left and right child pointers. Even though we have additional random pointers, the fundamental structure remains a tree.
DFS
- Yes: We arrive at DFS as our solution approach.
Conclusion: The flowchart suggests using DFS (Depth-First Search) for this problem.
Why DFS is the Right Choice
The DFS pattern is particularly well-suited for this problem because:
- Tree Traversal: DFS naturally traverses tree structures by exploring each branch completely before backtracking.
- Handling Random Pointers: The random pointers can point to any node in the tree, including nodes that haven't been visited yet or nodes that have already been processed. DFS with memoization handles this by:
  - Creating nodes on first encounter
  - Returning existing copies from the hash map on subsequent encounters
  - Preventing infinite recursion if cycles exist through random pointers
- Deep Copy Requirements: DFS allows us to recursively create the entire structure, ensuring all connections (left, right, and random) are properly established in the new tree.
- Memoization Integration: The recursive nature of DFS works naturally with a hash map that tracks already-copied nodes, which is essential because the random pointers can create complex reference patterns.
Intuition
When we need to create a deep copy of a data structure, the natural approach is to traverse it and create new nodes as we go. For a regular binary tree, this would be straightforward - we'd recursively copy each node and its children. However, the random pointers add a significant complication.
The key challenge is that a random pointer might point to:
- A node we haven't visited yet (deeper in the tree)
- A node we've already copied (an ancestor or a node in another branch)
- Null
If we try to naively follow and copy every pointer, we might end up creating duplicate copies of the same node, or worse, get stuck in infinite recursion if there are cycles through the random pointers.
The breakthrough insight is to treat this as a graph traversal problem where we need to visit each node exactly once. We need a way to remember which nodes we've already copied. This naturally leads us to use a hash map that maps original nodes to their copies.
Think of it like this: imagine you're duplicating a complex network of roads (the tree) where some roads have special tunnels (random pointers) that can connect to any other road. As you build your duplicate network, you need a registry to track which roads you've already built. When you encounter a tunnel, you check your registry:
- If the destination road already exists in your new network, connect to it
- If it doesn't exist yet, build it now and add it to your registry
- This ensures every road is built exactly once, regardless of how many tunnels lead to it
The DFS approach emerges naturally because we need to fully explore each path - including the random pointer - before moving on. By checking our hash map first before creating any node, we ensure that each node in the original tree gets exactly one corresponding node in the copied tree, preserving all the complex interconnections created by the random pointers.
Solution Approach
The implementation uses a recursive DFS approach with memoization to handle the deep copy. Let's walk through the key components:
Data Structure:
- Hash map mp: Maps original nodes to their corresponding copied nodes. This serves as our memoization table and prevents duplicate copies.
The DFS Function:
The core logic is encapsulated in the dfs helper function, which handles four main cases:

1. Base Case - Null Node:
   if root is None: return None
   If we encounter a null pointer (whether from left, right, or random), we simply return None.

2. Memoization Check:
   if root in mp: return mp[root]
   Before creating a new copy, we check whether this node has already been copied. If it exists in our hash map, we return the existing copy. This is crucial for handling random pointers that might point to already-processed nodes.

3. Node Creation and Caching:
   copy = NodeCopy(root.val)
   mp[root] = copy
   We create a new node with the same value as the original and immediately store it in our hash map. This is done BEFORE recursively processing children, to handle potential cycles through random pointers.

4. Recursive Copying:
   copy.left = dfs(root.left)
   copy.right = dfs(root.right)
   copy.random = dfs(root.random)
   We recursively process all three pointers. The beauty of this approach is that we treat all pointers uniformly - the dfs function doesn't need to know whether it's processing a tree edge or a random pointer.
Why This Order Matters:
The algorithm strategically creates and caches the node before processing its pointers. This prevents infinite recursion in cases where:
- A node's random pointer points to itself
- Multiple nodes have random pointers creating a cycle
- A child's random pointer points back to an ancestor
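The first of these cases can be checked directly. The snippet below (with a minimal NodeCopy class defined just for the demonstration) copies a node whose random pointer refers to itself; the cache-before-recurse order means the recursive call finds the cached copy instead of recursing forever:

```python
class NodeCopy:
    # Minimal node class for this demonstration only.
    def __init__(self, val=0, left=None, right=None, random=None):
        self.val, self.left, self.right, self.random = val, left, right, random

def dfs(node, mp):
    if node is None:
        return None
    if node in mp:
        return mp[node]
    copy = NodeCopy(node.val)
    mp[node] = copy                          # cached BEFORE the recursive calls
    copy.left = dfs(node.left, mp)
    copy.right = dfs(node.right, mp)
    copy.random = dfs(node.random, mp)       # safe even if node.random is node
    return copy

# A node whose random pointer is itself: the cache entry breaks the cycle.
n = NodeCopy(7)
n.random = n
c = dfs(n, {})
```

The copy's random pointer correctly refers to the copy itself (c.random is c), not to the original node.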
Time and Space Complexity:
- Time: O(n), where n is the number of nodes, as each node is visited exactly once.
- Space: O(n) for the hash map and, in the worst case (a skewed tree), the recursion stack.
The elegance of this solution lies in its simplicity - by treating the structure as a graph and using memoization, we handle all the complex cases of random pointers without any special logic.
Example Walkthrough
Let's walk through a small example to illustrate how the solution handles the deep copy with random pointers.
Original Tree:
      1 (random → 3)
     / \
    2   3 (random → 2)
    (random → null)
Step-by-step execution:

1. Start with root (node 1):
   - Check hash map: empty, node 1 not found
   - Create new node with value 1: copy1 = NodeCopy(1)
   - Store in hash map: mp[node1] = copy1
   - Process left child (node 2)

2. Process node 2:
   - Check hash map: node 2 not found
   - Create new node: copy2 = NodeCopy(2)
   - Store in hash map: mp[node2] = copy2
   - Process left child: null, return None
   - Process right child: null, return None
   - Process random: null, return None
   - Return copy2 to parent

3. Back at node 1, process right child (node 3):
   - Check hash map: node 3 not found
   - Create new node: copy3 = NodeCopy(3)
   - Store in hash map: mp[node3] = copy3
   - Process left child: null, return None
   - Process right child: null, return None
   - Process random (points to node 2): node 2 found in hash map, return the existing copy2
   - Set copy3.random = copy2
   - Return copy3 to parent

4. Back at node 1, process random (points to node 3):
   - Check hash map: node 3 found, return the existing copy3
   - Set copy1.random = copy3

5. Complete the connections at node 1:
   - copy1.left = copy2
   - copy1.right = copy3
   - Return copy1 as the root of the copied tree
Final Copied Tree:
      copy1 (random → copy3)
     /     \
  copy2    copy3 (random → copy2)
  (random → null)
Key Observations:
- When node 3's random pointer needed node 2, it was already in the hash map, so we reused the existing copy
- When node 1's random pointer needed node 3, it had just been created and cached, preventing duplicate creation
- The hash map ensures each original node maps to exactly one copied node, preserving the structure perfectly
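The walkthrough above can be reproduced and checked in a few lines. The NodeCopy class and dfs function below mirror the solution (defined inline here so the snippet is self-contained):

```python
class NodeCopy:
    # Minimal node class so the snippet runs standalone.
    def __init__(self, val=0, left=None, right=None, random=None):
        self.val, self.left, self.right, self.random = val, left, right, random

def dfs(node, mp):
    if node is None:
        return None
    if node in mp:
        return mp[node]
    copy = NodeCopy(node.val)
    mp[node] = copy
    copy.left = dfs(node.left, mp)
    copy.right = dfs(node.right, mp)
    copy.random = dfs(node.random, mp)
    return copy

# Build the tree from the walkthrough:
# node 1 -> {left: 2, right: 3, random: 3}; node 3's random -> 2.
n1, n2, n3 = NodeCopy(1), NodeCopy(2), NodeCopy(3)
n1.left, n1.right, n1.random = n2, n3, n3
n3.random = n2

copy1 = dfs(n1, {})
```

After the copy, copy1.random is the same object as copy1.right (the copied node 3), and copy3's random is the same object as copy1.left (the copied node 2) - exactly the reuse the hash map guarantees.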
Solution Implementation
# Definition for Node.
# class Node:
#     def __init__(self, val=0, left=None, right=None, random=None):
#         self.val = val
#         self.left = left
#         self.right = right
#         self.random = random

from typing import Optional

class Solution:
    def copyRandomBinaryTree(self, root: Optional['Node']) -> Optional['NodeCopy']:
        """
        Creates a deep copy of a binary tree where each node has an additional random pointer.

        Args:
            root: The root of the original binary tree with random pointers

        Returns:
            The root of the copied binary tree maintaining all connections
        """

        def dfs(node: Optional['Node']) -> Optional['NodeCopy']:
            """
            Recursively copies nodes using depth-first search.
            Uses memoization to handle the random pointer connections.

            Args:
                node: Current node being processed

            Returns:
                The copied node with all its connections
            """
            # Base case: if node is None, return None
            if node is None:
                return None

            # If we've already created a copy of this node, return it
            # This handles cycles and multiple references to the same node
            if node in node_map:
                return node_map[node]

            # Create a new copy of the current node
            copied_node = NodeCopy(node.val)

            # Store the mapping from original to copied node
            # This must be done before recursive calls to handle cycles
            node_map[node] = copied_node

            # Recursively copy the left subtree
            copied_node.left = dfs(node.left)

            # Recursively copy the right subtree
            copied_node.right = dfs(node.right)

            # Recursively copy the random pointer connection
            copied_node.random = dfs(node.random)

            return copied_node

        # Hash map to store the mapping from original nodes to copied nodes
        node_map = {}

        # Start the DFS from the root
        return dfs(root)
/**
 * Definition for Node.
 * public class Node {
 *     int val;
 *     Node left;
 *     Node right;
 *     Node random;
 *     Node() {}
 *     Node(int val) { this.val = val; }
 *     Node(int val, Node left, Node right, Node random) {
 *         this.val = val;
 *         this.left = left;
 *         this.right = right;
 *         this.random = random;
 *     }
 * }
 */

import java.util.HashMap;
import java.util.Map;

class Solution {
    // HashMap to store mapping from original nodes to their copied nodes
    private Map<Node, Node> nodeMap;

    /**
     * Creates a deep copy of a binary tree with random pointers.
     * Each node has left, right, and random pointers.
     *
     * @param root The root of the original binary tree
     * @return The root of the copied binary tree
     */
    public Node copyRandomBinaryTree(Node root) {
        // Initialize the HashMap for tracking visited nodes
        nodeMap = new HashMap<>();
        // Start the depth-first search traversal
        return deepCopy(root);
    }

    /**
     * Recursively creates a deep copy of the tree using DFS.
     * Uses memoization to handle cycles and multiple references to the same node.
     *
     * @param originalNode The current node being copied
     * @return The copied node corresponding to the original node
     */
    private Node deepCopy(Node originalNode) {
        // Base case: if the node is null, return null
        if (originalNode == null) {
            return null;
        }

        // If this node has already been copied, return the existing copy
        // This handles cycles and multiple references to the same node
        if (nodeMap.containsKey(originalNode)) {
            return nodeMap.get(originalNode);
        }

        // Create a new node with the same value as the original
        Node copiedNode = new Node(originalNode.val);

        // Store the mapping before recursive calls to handle cycles
        nodeMap.put(originalNode, copiedNode);

        // Recursively copy the left subtree
        copiedNode.left = deepCopy(originalNode.left);

        // Recursively copy the right subtree
        copiedNode.right = deepCopy(originalNode.right);

        // Recursively copy the random pointer
        copiedNode.random = deepCopy(originalNode.random);

        // Return the copied node
        return copiedNode;
    }
}
/**
 * Definition for a Node.
 * struct Node {
 *     int val;
 *     Node *left;
 *     Node *right;
 *     Node *random;
 *     Node() : val(0), left(nullptr), right(nullptr), random(nullptr) {}
 *     Node(int x) : val(x), left(nullptr), right(nullptr), random(nullptr) {}
 *     Node(int x, Node *left, Node *right, Node *random) : val(x), left(left), right(right), random(random) {}
 * };
 */

#include <unordered_map>
using namespace std;

class Solution {
public:
    /**
     * Creates a deep copy of a binary tree with random pointers
     * @param root - The root of the original binary tree
     * @return The root of the copied binary tree
     */
    Node* copyRandomBinaryTree(Node* root) {
        // Hash map to store mapping from original nodes to copied nodes
        // This prevents infinite loops and ensures each node is copied only once
        unordered_map<Node*, Node*> originalToCopy;
        return deepCopy(root, originalToCopy);
    }

private:
    /**
     * Recursively copies the tree using DFS traversal
     * @param node - Current node being processed
     * @param originalToCopy - Map maintaining original to copy node mappings
     * @return Pointer to the copied node
     */
    Node* deepCopy(Node* node, unordered_map<Node*, Node*>& originalToCopy) {
        // Base case: if node is null, return null
        if (!node) {
            return nullptr;
        }

        // If this node has already been copied, return the existing copy
        // This handles cycles created by random pointers
        if (originalToCopy.find(node) != originalToCopy.end()) {
            return originalToCopy[node];
        }

        // Create a new node with the same value as the original
        Node* copiedNode = new Node(node->val);

        // Store the mapping before recursive calls to handle cycles
        originalToCopy[node] = copiedNode;

        // Recursively copy left subtree
        copiedNode->left = deepCopy(node->left, originalToCopy);

        // Recursively copy right subtree
        copiedNode->right = deepCopy(node->right, originalToCopy);

        // Copy the random pointer (may point to any node in the tree)
        copiedNode->random = deepCopy(node->random, originalToCopy);

        return copiedNode;
    }
};
/**
 * Definition for a Node.
 */
class Node {
    val: number;
    left: Node | null;
    right: Node | null;
    random: Node | null;

    constructor(val?: number, left?: Node | null, right?: Node | null, random?: Node | null) {
        this.val = (val === undefined ? 0 : val);
        this.left = (left === undefined ? null : left);
        this.right = (right === undefined ? null : right);
        this.random = (random === undefined ? null : random);
    }
}

/**
 * Creates a deep copy of a binary tree with random pointers
 * @param root - The root of the original binary tree
 * @returns The root of the copied binary tree
 */
function copyRandomBinaryTree(root: Node | null): Node | null {
    // Hash map to store mapping from original nodes to copied nodes
    // This prevents infinite loops and ensures each node is copied only once
    const originalToCopyMap = new Map<Node, Node>();
    return deepCopy(root, originalToCopyMap);
}

/**
 * Recursively copies the tree using DFS traversal
 * @param node - Current node being processed
 * @param originalToCopyMap - Map maintaining original to copy node mappings
 * @returns The copied node
 */
function deepCopy(node: Node | null, originalToCopyMap: Map<Node, Node>): Node | null {
    // Base case: if node is null, return null
    if (!node) {
        return null;
    }

    // If this node has already been copied, return the existing copy
    // This handles cycles created by random pointers
    if (originalToCopyMap.has(node)) {
        return originalToCopyMap.get(node)!;
    }

    // Create a new node with the same value as the original
    const copiedNode = new Node(node.val);

    // Store the mapping before recursive calls to handle cycles
    originalToCopyMap.set(node, copiedNode);

    // Recursively copy left subtree
    copiedNode.left = deepCopy(node.left, originalToCopyMap);

    // Recursively copy right subtree
    copiedNode.right = deepCopy(node.right, originalToCopyMap);

    // Copy the random pointer (may point to any node in the tree)
    copiedNode.random = deepCopy(node.random, originalToCopyMap);

    return copiedNode;
}
Time and Space Complexity

Time Complexity: O(n), where n is the number of nodes in the binary tree.

The algorithm performs a depth-first search (DFS) traversal of the tree. Each node is visited exactly once due to the memoization using the mp dictionary. When we encounter a node for the first time, we create its copy and recursively process its left, right, and random pointers. If we encounter a node that has already been processed (exists in mp), we simply return the cached copy in O(1) time. Since each node is processed once and each operation per node takes constant time, the overall time complexity is O(n).

Space Complexity: O(n), where n is the number of nodes in the binary tree.

The space complexity consists of two components:
- HashMap storage (mp): The dictionary stores a mapping from each original node to its corresponding copy, requiring O(n) space for n nodes.
- Recursion call stack: In the worst case (a skewed tree), the recursion depth can be O(n), requiring O(n) space for the call stack. In the best case (a balanced tree), the recursion depth would be O(log n).

Therefore, the overall space complexity is O(n), dominated by the HashMap storage and, in the worst case, the recursion stack.
Common Pitfalls
Pitfall 1: Creating the Node Copy After Recursive Calls
The Problem: A common mistake is to create the copied node and store it in the hash map AFTER making the recursive calls:
def dfs(node):
    if node is None:
        return None
    if node in node_map:
        return node_map[node]

    # WRONG: Recursive calls before creating the copy
    left_copy = dfs(node.left)
    right_copy = dfs(node.right)
    random_copy = dfs(node.random)

    # Creating copy after recursion
    copied_node = NodeCopy(node.val)
    copied_node.left = left_copy
    copied_node.right = right_copy
    copied_node.random = random_copy
    node_map[node] = copied_node
    return copied_node
Why This Fails: This approach breaks when there are cycles through random pointers. For example:
- If a node's random pointer points to itself
- If there's a cycle: Node A → (random) → Node B → (random) → Node A
The recursion will never terminate because the node isn't in the hash map when we check for it recursively, leading to infinite recursion and stack overflow.
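The failure can be observed directly. The snippet below (with an illustrative NodeCopy class and a dfs_buggy function simplified to follow only the random pointer) builds a two-node cycle through random pointers and catches the resulting RecursionError:

```python
class NodeCopy:
    # Illustrative node class so the snippet runs standalone.
    def __init__(self, val=0, left=None, right=None, random=None):
        self.val, self.left, self.right, self.random = val, left, right, random

def dfs_buggy(node, mp):
    """Simplified buggy copier: recurses on random BEFORE caching the copy."""
    if node is None:
        return None
    if node in mp:
        return mp[node]
    # WRONG order: the node is never in mp when the recursive call checks it
    random_copy = dfs_buggy(node.random, mp)
    copy = NodeCopy(node.val, random=random_copy)
    mp[node] = copy
    return copy

# Two nodes whose random pointers form a cycle: A -> B -> A.
a, b = NodeCopy(1), NodeCopy(2)
a.random, b.random = b, a

try:
    dfs_buggy(a, {})
    failed = False
except RecursionError:
    failed = True
```

With the correct order (cache first, recurse after), the same cycle terminates immediately because the second visit to A finds its copy already in the map.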
The Solution: Always create the node and cache it in the hash map BEFORE making recursive calls:
def dfs(node):
    if node is None:
        return None
    if node in node_map:
        return node_map[node]

    # CORRECT: Create and cache first
    copied_node = NodeCopy(node.val)
    node_map[node] = copied_node  # Cache immediately

    # Then make recursive calls
    copied_node.left = dfs(node.left)
    copied_node.right = dfs(node.right)
    copied_node.random = dfs(node.random)
    return copied_node
Pitfall 2: Using a Local Hash Map Instead of Instance/Closure Variable
The Problem: Creating a new hash map for each recursive call:
def copyRandomBinaryTree(self, root):
    def dfs(node):
        node_map = {}  # WRONG: Local to each call
        if node is None:
            return None
        if node in node_map:
            return node_map[node]
        # ... rest of the logic
Why This Fails: Each recursive call gets its own hash map, so the memoization doesn't work. The same node will be copied multiple times if referenced by multiple random pointers, breaking the pointer relationships in the copied tree.
The Solution: Use a hash map that's shared across all recursive calls, either as:
- A closure variable (as in the provided solution)
- An instance variable of the class
- A parameter passed through the recursion
def copyRandomBinaryTree(self, root):
    node_map = {}  # Shared across all dfs calls

    def dfs(node):
        # Uses the closure variable node_map
        if node in node_map:
            return node_map[node]
        # ... rest of the logic
Pitfall 3: Forgetting to Handle the Random Pointer
The Problem: Only copying the tree structure (left and right) but forgetting about the random pointer:
def dfs(node):
    if node is None:
        return None
    if node in node_map:
        return node_map[node]
    copied_node = NodeCopy(node.val)
    node_map[node] = copied_node
    copied_node.left = dfs(node.left)
    copied_node.right = dfs(node.right)
    # WRONG: Forgot to copy the random pointer!
    return copied_node
Why This Fails: The copied tree will have all random pointers set to None, or pointing to nodes in the original tree (if you directly assign copied_node.random = node.random), which violates the deep copy requirement.
The Solution: Treat the random pointer the same way as left and right pointers - recursively copy the node it points to:
copied_node.random = dfs(node.random) # Don't forget this line!
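A quick way to guard against all three pitfalls at once is a verification pass that confirms the copy shares no nodes with the original. This is a minimal sketch; the collect helper and inline NodeCopy class are illustrative, not part of the original solution:

```python
class NodeCopy:
    # Illustrative node class so the snippet runs standalone.
    def __init__(self, val=0, left=None, right=None, random=None):
        self.val, self.left, self.right, self.random = val, left, right, random

def dfs(node, mp):
    if node is None:
        return None
    if node in mp:
        return mp[node]
    copy = NodeCopy(node.val)
    mp[node] = copy
    copy.left = dfs(node.left, mp)
    copy.right = dfs(node.right, mp)
    copy.random = dfs(node.random, mp)
    return copy

def collect(node, acc):
    """Gather every node reachable via left/right pointers into acc."""
    if node is not None and node not in acc:
        acc.add(node)
        collect(node.left, acc)
        collect(node.right, acc)

# Build a small tree with a random pointer, copy it, then verify
# that no pointer in the copy leads back into the original tree.
root = NodeCopy(1)
root.left = NodeCopy(2)
root.random = root.left

copied = dfs(root, {})

originals, copies = set(), set()
collect(root, originals)
collect(copied, copies)
assert originals.isdisjoint(copies)   # no node shared with the original
assert copied.random is copied.left   # random rewired to the copied node
```

If the random pointer had been forgotten (pitfall 3) or assigned directly from the original, either copied.random would be None or the disjointness check would fail.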