2846. Minimum Edge Weight Equilibrium Queries in a Tree

Hard | Tree | Graph | Array | Strongly Connected Component
Leetcode Link

Problem Description

This problem presents an undirected tree with n nodes, labeled from 0 to n - 1. We are given n and an array edges where each edges[i] contains three integers [u_i, v_i, w_i] signifying that there is an edge between nodes u_i and v_i with weight w_i.

We are also provided with an array queries where each queries[i] has two integers [a_i, b_i]. For every query [a_i, b_i] we need to determine the minimum number of operations required to make the weights of all edges on the path from a_i to b_i equal. An operation is defined as changing the weight of any edge in the tree to any value.

The question clarifies that each query is independent, which means that after every query the tree reverts to its original state before the next query is performed. Additionally, the path from node a_i to node b_i consists of a sequence of distinct nodes that are connected by edges in the tree.

The answer should be an array answer of length m, where m is the number of queries and answer[i] is the answer to the i-th query.

Intuition

To approach this problem, we need to understand two key concepts: tree traversal and Lowest Common Ancestor (LCA).

Firstly, because the tree's edges are undirected, we can choose any node as the root and traverse the tree to establish parent-child relationships. Here, node 0 is chosen as the root, and we use Breadth-First Search (BFS) to traverse the tree. During traversal, we keep count of the occurrence of each weight on the path from the root to each node.

Secondly, for each query, we need to find the LCA of the two given nodes to determine the path from one node to the other. We can utilize a technique known as Binary Lifting to achieve this. Binary Lifting precomputes an ancestor table that allows us to quickly find the LCA.
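
To make the ancestor table concrete, here is a minimal sketch of how it can be filled once every node's immediate parent is known, assuming the root is recorded as its own parent and nodes are visited root-first (the names build_lifting_table, parent, and bfs_order are illustrative, not the identifiers used in the solutions below):

def build_lifting_table(parent, bfs_order, log=17):
    # up[v][j] will hold the 2^j-th ancestor of node v; the root acts as its own ancestor.
    n = len(parent)
    up = [[0] * log for _ in range(n)]
    for v in bfs_order:  # ancestors are processed before their descendants
        up[v][0] = parent[v]  # the 2^0-th ancestor is the direct parent
        for j in range(1, log):
            # the 2^j-th ancestor is the 2^(j-1)-th ancestor of the 2^(j-1)-th ancestor
            up[v][j] = up[up[v][j - 1]][j - 1]
    return up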

The key to solving the queries efficiently is to understand that the number of operations required to make all weights on the path equal is the length of the path minus the maximum count of any single weight occurring on that path. This works because we can keep the weight that appears most frequently and change every other edge on the path to match it.
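
For instance, a path whose edge weights are [2, 3, 2] (the path used in the example walkthrough further below) needs 3 - 2 = 1 operation. Here is a tiny illustrative helper, hypothetical and separate from the actual solution, that applies this idea directly to an explicit list of path weights:

from collections import Counter

def min_ops_for_path(path_weights):
    # Change every edge to the weight that already appears most often on the path.
    if not path_weights:
        return 0
    most_frequent = Counter(path_weights).most_common(1)[0][1]
    return len(path_weights) - most_frequent

print(min_ops_for_path([2, 3, 2]))  # 1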

So our steps are:

  • Traverse the tree from the root and record each path's weights count and the depth of each node.
  • Compute the Binary Lifting table to find the LCA of any two nodes efficiently.
  • For each query, find the LCA of the given nodes, compute the path length as the sum of the depth differences from each node to the LCA, and subtract the maximum count of any single weight along the path to obtain the minimum number of operations.

The solution provided in Python achieves this by using several nested loops: the outer loop goes through each query, and the inner loops are used to climb up the ancestor table to find LCAs and calculate the maximum weight count on the path.

Learn more about Tree and Graph patterns.

Solution Approach

The implementation of this solution in Python involves several algorithms and data structures for efficient computation:

  1. Breadth-First Search (BFS): The solution begins with a BFS traversal of the tree, starting from the root node 0. During the traversal, a cnt array is maintained for each node i, storing how often each edge weight appears on the path from the root to that node. The depth array keeps track of the depth of each node, and the p array records the parent of each node.

  2. Binary Lifting Technique: After the BFS traversal, binary lifting is applied to build the f array, an ancestor table in which f[i][j] is the 2^j-th ancestor of node i. This table lets us jump up the path from a node towards the root in powers of two, and it is used to find the LCA of two nodes.

  3. LCA Finding and Query Processing: For each query, the solution proceeds as follows (see the sketch after this list):

    • Equalize the depths of the two nodes u and v: if one node is deeper, climb it up using the ancestor table until both nodes are at the same depth.
    • If u and v still differ, climb both nodes up in tandem until their ancestors agree at every power of two; at that point u and v are children of the LCA, and one more step up reaches the LCA itself.
    • Once the LCA is known, find the count of the most frequently occurring edge weight on the path between u and v. For each weight, its count on the path is cnt[u] + cnt[v] - 2 * cnt[LCA], since the root-to-LCA portion is included in both prefix counts and must be subtracted twice.
    • The answer for the query is the path length, depth[u] + depth[v] - 2 * depth[LCA], minus the count of the most common weight.
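
Putting the query steps together, a condensed sketch in Python (assuming the BFS has already filled depth, the parent array p, the ancestor table f with f[i][j] the 2^j-th ancestor, and the per-node weight counts cnt; the function name answer_query is illustrative and this mirrors the full implementations below rather than adding anything new):

def answer_query(u, v, depth, p, f, cnt, log_n):
    x, y = (u, v) if depth[u] >= depth[v] else (v, u)
    # 1. Lift the deeper node until both nodes are at the same depth.
    for j in reversed(range(log_n)):
        if depth[x] - depth[y] >= (1 << j):
            x = f[x][j]
    # 2. Climb both nodes in tandem while their ancestors differ;
    #    afterwards they are children of the LCA, so one more step reaches it.
    if x != y:
        for j in reversed(range(log_n)):
            if f[x][j] != f[y][j]:
                x, y = f[x][j], f[y][j]
        x = p[x]
    lca = x
    # 3. Most frequent weight on the path, via root-to-node prefix counts.
    best = max(cnt[u][w] + cnt[v][w] - 2 * cnt[lca][w] for w in range(26))
    # 4. Path length minus that count.
    return depth[u] + depth[v] - 2 * depth[lca] - best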

To summarize, the data structures used are arrays for p, cnt, depth, and f, with the f array having two dimensions. The algorithm uses BFS for tree traversal, binary lifting for LCA preprocessing, and an efficient LCA finding method to process each query. The solution encapsulates tree properties and advanced data structures to address the problem of minimizing operations on a tree path.

Example Walkthrough

Let's illustrate the solution approach using a small example tree:

Tree:
        0
       / \
   1(2)   2(3)
           /
       3(2)

Here, the numbers in parentheses are the weights of the edges leading to that node from its parent.

Suppose n = 4. The nodes are connected by the edges given in the edges array as follows:

edges = [[0, 1, 2], [0, 2, 3], [2, 3, 2]]

Consider a single query [1, 3]: we need to determine the minimum number of operations required to make the weights of all edges on the path from node 1 to node 3 equal.

Following the solution steps:

  1. Perform BFS starting from root node (0):

    • Node 0: cnt[0] = {}, depth[0] = 0
    • Node 1: cnt[1] = {2: 1}, depth[1] = 1
    • Node 2: cnt[2] = {3: 1}, depth[2] = 1
    • Node 3: cnt[3] = {2: 1, 3: 1}, depth[3] = 2
    • The p array holds the parent of each node: p = [0, 0, 0, 2]
  2. Construct the Binary Lifting table:

    • Since this is a small tree, we only need the immediate parent for each node, which is the p array itself.
  3. Process the query [1, 3]:

    • Equalize the depths: depth[1] = 1 and depth[3] = 2. Since node 3 is deeper, replace it with its parent p[3] = 2 so that both nodes are at depth 1.
    • Find the LCA: we now compare nodes 1 and 2, which are at the same depth. Their ancestors agree at every level (both have parent 0), so the tandem climb does not move them; since the two nodes are still different, one more step up to their common parent gives the LCA, node 0.
    • Determine the most frequent edge weight on the path from 1 to 3: the path 1 -> 0 -> 2 -> 3 uses weights 2, 3, 2, so weight 2 is the most frequent. Using the prefix counts, its count is cnt[1][2] + cnt[3][2] - 2 * cnt[0][2] = 1 + 1 - 0 = 2.
    • Calculate the number of operations needed: The path length is depth[1] + depth[3] - 2 * depth[LCA] = 1 + 2 - 2 * 0 = 3. The max count of any weight is 2, so the minimum operations required is 3 - 2 = 1.

Therefore, for the given query [1, 3], the answer is 1: we only need to change one edge to make all the weights on the path equal (change edge 0-2 from weight 3 to weight 2; making every weight 3 instead would require two changes). This method allows for efficient processing of queries on tree paths.

Solution Implementation

Python:

from collections import deque
from typing import List


class Solution:
    def minOperationsQueries(
        self, n: int, edges: List[List[int]], queries: List[List[int]]
    ) -> List[int]:
        # Maximum number of bits needed to represent n
        max_bits = n.bit_length()

        # Graph representation where 'graph[node]' contains pairs (neighbor, weight)
        graph = [[] for _ in range(n)]

        # 'jump_over[i][j]' represents the 2^j-th ancestor of node i
        jump_over = [[0] * max_bits for _ in range(n)]

        # Array to store the parent of each node
        parent = [0] * n

        # Array to store, for each node, the count of each edge weight
        # on the path from the root to that node (weights are 1..26, stored 0-based)
        weight_count = [None] * n

        # Array to store the depth of each node
        depth = [0] * n

        # Construct the graph based on the input edges (shift weights to 0-based)
        for u, v, w in edges:
            graph[u].append((v, w - 1))
            graph[v].append((u, w - 1))

        # Initialize the weight counts for the root node
        weight_count[0] = [0] * 26

        # BFS to populate 'parent', 'weight_count', 'jump_over', and 'depth'
        queue = deque([0])
        while queue:
            current = queue.popleft()

            # Fill the binary lifting table for the current node
            jump_over[current][0] = parent[current]
            for j in range(1, max_bits):
                jump_over[current][j] = jump_over[jump_over[current][j - 1]][j - 1]

            for neighbor, weight in graph[current]:
                if neighbor != parent[current]:
                    parent[neighbor] = current
                    weight_count[neighbor] = weight_count[current][:]
                    weight_count[neighbor][weight] += 1
                    depth[neighbor] = depth[current] + 1
                    queue.append(neighbor)

        # List to store the answer for each query
        answers = []

        # Process each query
        for u, v in queries:
            # Ensure 'node_u' is the deeper node
            node_u, node_v = (u, v) if depth[u] >= depth[v] else (v, u)

            # Lift node_u to the same depth as node_v
            for j in reversed(range(max_bits)):
                if depth[node_u] - depth[node_v] >= (1 << j):
                    node_u = jump_over[node_u][j]

            # Climb both nodes in tandem while their ancestors differ
            for j in reversed(range(max_bits)):
                if jump_over[node_u][j] != jump_over[node_v][j]:
                    node_u, node_v = jump_over[node_u][j], jump_over[node_v][j]

            # If they still differ, they are children of the LCA; step up once
            if node_u != node_v:
                node_u = parent[node_u]

            # Count of the most frequent weight on the path from u to v
            max_same_weight = max(
                weight_count[u][j] + weight_count[v][j] - 2 * weight_count[node_u][j]
                for j in range(26)
            )

            # Path length minus the most frequent weight count
            answers.append(depth[u] + depth[v] - 2 * depth[node_u] - max_same_weight)

        # Return all answers
        return answers
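
As a quick sanity check, the class above can be run on the tree from the example walkthrough (an illustrative usage snippet, not part of the original solution):

edges = [[0, 1, 2], [0, 2, 3], [2, 3, 2]]
queries = [[1, 3]]
print(Solution().minOperationsQueries(4, edges, queries))  # prints [1]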

Java:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

class Solution {
    public int[] minOperationsQueries(int n, int[][] edges, int[][] queries) {
        // The number of bits needed to represent the number n
        int height = 32 - Integer.numberOfLeadingZeros(n);
        List<int[]>[] graph = new List[n];

        // Initialize adjacency list for each vertex in the graph
        Arrays.setAll(graph, i -> new ArrayList<>());

        // Binary lifting table: ancestors[i][j] is the 2^j-th ancestor of node i
        int[][] ancestors = new int[n][height];
        // parent stores the direct parent of each node
        int[] parent = new int[n];
        // count[i] stores, for each weight, how often it appears on the path from the root to node i
        int[][] count = new int[n][0];
        // depth stores the depth of each node
        int[] depth = new int[n];

        // Build the graph (weights are shifted to a 0-based index)
        for (var edge : edges) {
            int from = edge[0], to = edge[1], weight = edge[2] - 1;
            graph[from].add(new int[] {to, weight});
            graph[to].add(new int[] {from, weight});
        }

        // Initialize the weight counts for the root node (weights are in [1, 26])
        count[0] = new int[26];
        Deque<Integer> queue = new ArrayDeque<>();
        queue.offer(0);

        // BFS to set up ancestors, count, and depth
        while (!queue.isEmpty()) {
            int node = queue.poll();
            ancestors[node][0] = parent[node];
            // Fill the binary lifting table for this node
            for (int i = 1; i < height; ++i) {
                ancestors[node][i] = ancestors[ancestors[node][i - 1]][i - 1];
            }
            for (var next : graph[node]) {
                int neighbor = next[0], weightIndex = next[1];
                if (neighbor != parent[node]) {
                    parent[neighbor] = node;
                    count[neighbor] = count[node].clone();
                    count[neighbor][weightIndex]++;
                    depth[neighbor] = depth[node] + 1;
                    queue.offer(neighbor);
                }
            }
        }

        int numQueries = queries.length;
        int[] answer = new int[numQueries];

        // Handle each query
        for (int i = 0; i < numQueries; ++i) {
            int node1 = queries[i][0], node2 = queries[i][1];
            int lcaNode = findLCA(node1, node2, depth, ancestors, height);

            int maxWeightFrequency = 0;
            // Calculate the maximum weight frequency on the path
            for (int j = 0; j < 26; ++j) {
                maxWeightFrequency = Math.max(maxWeightFrequency,
                                              count[node1][j] + count[node2][j] - 2 * count[lcaNode][j]);
            }

            // The minimum operations required: path length minus the most frequent weight count
            answer[i] = depth[node1] + depth[node2] - 2 * depth[lcaNode] - maxWeightFrequency;
        }

        return answer;
    }

    private int findLCA(int x, int y, int[] depth, int[][] ancestors, int height) {
        // Make sure that x is the deeper node
        if (depth[x] < depth[y]) {
            int temp = x;
            x = y;
            y = temp;
        }

        // Equalize the depth of the two nodes
        for (int i = height - 1; i >= 0; --i) {
            if (depth[x] - depth[y] >= (1 << i)) {
                x = ancestors[x][i];
            }
        }

        // If the nodes coincide, that node is the LCA
        if (x == y) {
            return x;
        }
        // Otherwise climb both nodes in tandem while their ancestors differ
        for (int i = height - 1; i >= 0; --i) {
            if (ancestors[x][i] != ancestors[y][i]) {
                x = ancestors[x][i];
                y = ancestors[y][i];
            }
        }

        // x and y are now children of the LCA
        return ancestors[x][0];
    }
}

C++:

#include <vector>
#include <queue>
#include <cstring>
using namespace std;

class Solution {
public:
    // Method to return the minimal operations for the queries
    vector<int> minOperationsQueries(int n, vector<vector<int>>& edges, vector<vector<int>>& queries) {
        // The maximum power of 2 needed for the binary lifting
        int maxPow = 32 - __builtin_clz(n);

        // Adjacency list for the graph, stores pairs of (node, weight index)
        vector<pair<int, int>> graph[n];

        // The binary lifting table, liftingTable[node][j] is the 2^j-th ancestor of node
        int liftingTable[n][maxPow];
        // The parent of each node in the tree
        int parents[n];
        // The count of each edge weight on the path from the root to a given node
        int weightCount[n][26];
        // The depth of each node in the tree
        int depth[n];

        // Initialize all tables to zero
        memset(liftingTable, 0, sizeof(liftingTable));
        memset(weightCount, 0, sizeof(weightCount));
        memset(depth, 0, sizeof(depth));
        memset(parents, 0, sizeof(parents));

        // Construct the graph
        for (auto& edge : edges) {
            int u = edge[0], v = edge[1], weightIdx = edge[2] - 1; // Note: decrease edge[2] to get a 0-based index
            graph[u].emplace_back(v, weightIdx);
            graph[v].emplace_back(u, weightIdx);
        }

        // BFS to populate the binary lifting table, weight counts, and depth
        queue<int> q;
        q.push(0); // The tree is rooted at node 0
        while (!q.empty()) {
            int current = q.front();
            q.pop();

            // Establish the binary lifting table for the current node
            liftingTable[current][0] = parents[current];
            for (int j = 1; j < maxPow; ++j) {
                liftingTable[current][j] = liftingTable[liftingTable[current][j - 1]][j - 1];
            }

            // Process child nodes
            for (auto& [child, weightIdx] : graph[current]) {
                if (child != parents[current]) {
                    parents[child] = current;
                    memcpy(weightCount[child], weightCount[current], sizeof(weightCount[current]));
                    weightCount[child][weightIdx]++;
                    depth[child] = depth[current] + 1;
                    q.push(child);
                }
            }
        }

        vector<int> ans; // Vector to hold the answers for each query
        // Evaluate each query
        for (auto& query : queries) {
            int u = query[0], v = query[1];
            int ancestorU = u, ancestorV = v;

            // Equalize depths using binary lifting if necessary
            if (depth[ancestorU] < depth[ancestorV]) {
                swap(ancestorU, ancestorV);
            }
            for (int j = maxPow - 1; j >= 0; --j) {
                if (depth[ancestorU] - depth[ancestorV] >= (1 << j)) {
                    ancestorU = liftingTable[ancestorU][j];
                }
            }

            // Find the LCA (Lowest Common Ancestor) of u and v
            if (ancestorU != ancestorV) {
                for (int j = maxPow - 1; j >= 0; --j) {
                    if (liftingTable[ancestorU][j] != liftingTable[ancestorV][j]) {
                        ancestorU = liftingTable[ancestorU][j];
                        ancestorV = liftingTable[ancestorV][j];
                    }
                }
                ancestorU = parents[ancestorU];
            }

            // Calculate the count of the most frequent edge weight on the path from u to v
            int maxSameWeight = 0;
            for (int j = 0; j < 26; ++j) {
                maxSameWeight = max(maxSameWeight, weightCount[u][j] + weightCount[v][j] - 2 * weightCount[ancestorU][j]);
            }
            // The answer is the path length minus the count of the most frequent weight
            ans.push_back(depth[u] + depth[v] - 2 * depth[ancestorU] - maxSameWeight);
        }

        return ans;
    }
};

TypeScript:

type Graph = Array<Array<[number, number]>>;
type LiftingTable = Array<Array<number>>;
type Parents = Array<number>;
type WeightCount = Array<Array<number>>;
type Depth = Array<number>;
type Query = Array<number>;
type Edge = Array<number>;

// The maximum power of 2 needed for binary lifting, based on the number of nodes
function getMaxPower(n: number): number {
  return Math.floor(Math.log2(n)) + 1;
}

// Initialize a 2D array with dimensions [n][maxPow] and fill it with zeros
function initialize2DArray(n: number, maxPow: number): LiftingTable {
  return Array.from({ length: n }, () => Array(maxPow).fill(0));
}

// Function to return the minimal operations for the queries
function minOperationsQueries(n: number, edges: Edge[], queries: Query[]): number[] {
  const maxPow: number = getMaxPower(n);
  const graph: Graph = new Array(n).fill(0).map(() => []);
  const liftingTable: LiftingTable = initialize2DArray(n, maxPow);
  const parents: Parents = new Array(n).fill(0);
  const weightCount: WeightCount = new Array(n).fill(null).map(() => new Array(26).fill(0));
  const depth: Depth = new Array(n).fill(0);

  // Construct the graph
  edges.forEach(edge => {
    const [u, v, w] = edge;
    graph[u].push([v, w - 1]); // Decrease the weight to get a 0-based index
    graph[v].push([u, w - 1]);
  });

  // BFS to populate the binary lifting table, weight counts, and depth
  const queue: number[] = [0]; // The tree is rooted at node 0
  while (queue.length > 0) {
    const current = queue.shift()!;
    liftingTable[current][0] = parents[current];
    for (let j = 1; j < maxPow; j++) {
      liftingTable[current][j] = liftingTable[liftingTable[current][j - 1]][j - 1];
    }

    graph[current].forEach(([child, weightIdx]) => {
      if (child !== parents[current]) {
        parents[child] = current;
        weightCount[child] = [...weightCount[current]];
        weightCount[child][weightIdx]++;
        depth[child] = depth[current] + 1;
        queue.push(child);
      }
    });
  }

  const answers: number[] = []; // Array holding the answer for each query

  // Process each query
  queries.forEach(query => {
    let [u, v] = query;
    let ancestorU = u;
    let ancestorV = v;

    // Equalize depths using binary lifting if necessary
    if (depth[ancestorU] < depth[ancestorV]) {
      [ancestorU, ancestorV] = [ancestorV, ancestorU];
    }
    for (let j = maxPow - 1; j >= 0; j--) {
      if (depth[ancestorU] - depth[ancestorV] >= (1 << j)) {
        ancestorU = liftingTable[ancestorU][j];
      }
    }

    // Find the LCA (Lowest Common Ancestor) of u and v
    if (ancestorU !== ancestorV) {
      for (let j = maxPow - 1; j >= 0; j--) {
        if (liftingTable[ancestorU][j] !== liftingTable[ancestorV][j]) {
          ancestorU = liftingTable[ancestorU][j];
          ancestorV = liftingTable[ancestorV][j];
        }
      }
      ancestorU = parents[ancestorU];
    }

    // Calculate the count of the most frequent edge weight on the path from u to v
    let maxSameWeight: number = 0;
    for (let j = 0; j < 26; j++) {
      maxSameWeight = Math.max(maxSameWeight, weightCount[u][j] + weightCount[v][j] - 2 * weightCount[ancestorU][j]);
    }
    // The answer is the path length minus the count of the most frequent weight
    answers.push(depth[u] + depth[v] - 2 * depth[ancestorU] - maxSameWeight);
  });

  return answers;
}

Time and Space Complexity

Time Complexity

The given Python code answers each query with the minimum number of edge-weight changes needed on the corresponding tree path. The preprocessing phase performs a breadth-first search (BFS) and sets up the LCA (Lowest Common Ancestor) ancestor table. The time complexity of these operations is analyzed as follows:

  1. BFS Traversal: The BFS traversal through all nodes to calculate depth, count arrays, and set parent pointers has a time complexity of O(N) where N is the number of nodes.

  2. LCA Preprocessing: The preprocessing for LCA involves filling an ancestor table of size N by logN. This has a time complexity of O(NlogN) because every node computes up to logN ancestors.

  3. Queries Processing: Each query involves climbing the ancestor table to find the lowest common ancestor and then computing the maximum weight count on the path. The LCA climb takes O(logN) per query thanks to binary lifting, and scanning the at most 26 weight values takes constant time per query. Given Q queries, this part takes O(QlogN).

Combining the above, the overall time complexity of the code is O(NlogN + QlogN), i.e. O((N + Q)logN).

Space Complexity

The space complexity of the code is determined by the various arrays used throughout the solution.

  1. Adjacency List: For the graph representation g, O(N) space is used.

  2. LCA Ancestor Table: The ancestor table f uses O(NlogN) space due to storing logN ancestors for N nodes.

  3. Parent, Count, and Depth Arrays: Arrays such as p, cnt, and depth each use O(N) space.

  4. Query Results: The query results ans take O(Q) space, where Q is the number of queries.

The largest space-consuming data structure is the ancestor table f, which takes O(NlogN) space. All other data structures, including the BFS queue and the auxiliary count arrays, use space linear in the number of nodes. Therefore, the overall space complexity of the code is O(NlogN).

Learn more about how to find time and space complexity quickly using problem constraints.
