1782. Count Pairs Of Nodes

Problem Description

You have an undirected graph with n nodes (numbered from 1 to n) and a list of edges. Each edge edges[i] = [ui, vi] represents an undirected connection between nodes ui and vi. Note that there can be multiple edges between the same pair of nodes.

For any pair of nodes (a, b), we define incident(a, b) as the total number of edges that are connected to either node a or node b. This counts:

  • All edges connected to node a
  • All edges connected to node b
  • But edges between a and b are counted only once (not double-counted)

You're given an array of queries. For each queries[j], you need to count how many pairs of nodes (a, b) satisfy both conditions:

  1. a < b (to avoid counting the same pair twice)
  2. incident(a, b) > queries[j] (the total number of edges connected to either node exceeds the query value)

Return an array answers where answers[j] contains the count of valid node pairs for the j-th query.

Example Understanding: If node 1 has 3 edges connected to it, node 2 has 4 edges connected to it, and there are 2 edges directly between nodes 1 and 2, then:

  • Node 1's degree = 3
  • Node 2's degree = 4
  • incident(1, 2) = 3 + 4 - 2 = 5 (we subtract the edges between them since they were counted in both degrees)

Intuition

The key insight is that for a pair of nodes (a, b), the value incident(a, b) equals degree[a] + degree[b] - edges_between(a, b). This is because when we add the degrees of both nodes, we count the edges between them twice, so we need to subtract them once.
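
To see the identity in code, here is a minimal sanity check (illustrative only, not part of the solution) that recomputes incident(a, b) directly from an edge list matching the example above and compares it with degree[a] + degree[b] minus the number of edges between a and b:

from collections import Counter

def incident_direct(edges, a, b):
    # Count every edge that touches a or b; an edge between a and b is counted once
    return sum(1 for u, v in edges if u in (a, b) or v in (a, b))

# Hypothetical multigraph matching the example: deg(1) = 3, deg(2) = 4, two edges between 1 and 2
edges = [[1, 2], [1, 2], [1, 3], [2, 3], [2, 4]]
degree, between = Counter(), Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
    between[(min(u, v), max(u, v))] += 1

assert incident_direct(edges, 1, 2) == degree[1] + degree[2] - between[(1, 2)] == 5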

A naive approach would be to check all possible pairs (a, b) for each query, but with up to n = 20,000 nodes, this would require checking n*(n-1)/2 pairs for each query, which is too slow.

The breakthrough comes from realizing we can use a two-step approach:

  1. Optimistic counting with binary search: First, assume there are no direct edges between any pair of nodes. In this case, incident(a, b) = degree[a] + degree[b]. If we sort the degrees, for each node a with degree degree[a], we can use binary search to find how many nodes b have degree[b] such that degree[a] + degree[b] > query. This gives us an upper bound on the answer.

  2. Correction for direct edges: The optimistic count above is incorrect for pairs that actually have edges between them. Specifically, if nodes a and b have k edges between them, then the actual incident(a, b) = degree[a] + degree[b] - k. So we need to check: did we count this pair in step 1 (meaning degree[a] + degree[b] > query), but shouldn't have (meaning degree[a] + degree[b] - k ≤ query)? If so, we subtract 1 from our answer.

This approach is efficient because:

  • Step 1 takes O(n log n) per query using binary search on sorted degrees
  • Step 2 only needs to check pairs that actually have edges between them (much fewer than all possible pairs)
  • We precompute degrees and edge counts between pairs using hash tables for quick lookups

Solution Approach

The implementation follows the two-step approach described in the intuition:

Step 1: Preprocessing

  • Create a cnt array to store the degree of each node (converting to 0-indexed)
  • Use a hash table g to store the count of edges between each pair of nodes
  • For each edge [a, b]:
    • Convert to 0-indexed: a = a - 1, b = b - 1
    • Normalize the pair as (min(a, b), max(a, b)) to avoid duplicates
    • Increment cnt[a] and cnt[b] (degree counting)
    • Increment g[(a, b)] (edge count between this pair)
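
As a reference point, here is a minimal preprocessing sketch (the function name and layout are illustrative; the full implementations below use different variable names for the same data):

from collections import defaultdict

def preprocess(n, edges):
    cnt = [0] * n             # cnt[i] = degree of node i (0-indexed)
    g = defaultdict(int)      # g[(a, b)] = number of edges between a and b, with a < b
    for a, b in edges:
        a, b = a - 1, b - 1                  # convert to 0-indexed
        a, b = min(a, b), max(a, b)          # normalize the pair
        cnt[a] += 1
        cnt[b] += 1
        g[(a, b)] += 1
    return cnt, g

cnt, g = preprocess(4, [[1, 2], [2, 3], [2, 4], [3, 4], [3, 4]])
print(cnt)       # [1, 3, 3, 3]
print(dict(g))   # {(0, 1): 1, (1, 2): 1, (1, 3): 1, (2, 3): 2}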

Step 2: Create sorted degree array

  • Create s = sorted(cnt) for efficient binary search
  • This allows us to quickly find how many nodes have degree above a certain threshold
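
For instance, in a minimal illustration (values taken from the walkthrough below), bisect_right returns the first index whose value exceeds the threshold, so the count of larger degrees is the array length minus that index:

from bisect import bisect_right

sorted_degrees = [1, 3, 3, 3]
threshold = 2
first_greater = bisect_right(sorted_degrees, threshold)   # first index with value > 2 -> 1
count_above = len(sorted_degrees) - first_greater         # 3 nodes have degree > 2
print(first_greater, count_above)                         # 1 3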

Step 3: Process each query

For each query value t:

  1. Optimistic counting with binary search:

    • For each degree value x at position j in the sorted array s
    • Find the smallest index k where s[k] > t - x using bisect_right(s, t - x, lo=j + 1)
    • The lo=j + 1 ensures we only count pairs where a < b (avoiding duplicates)
    • Add n - k to the answer (number of valid pairs with node having degree x)
  2. Correction for actual edges:

    • Iterate through all pairs (a, b) that have edges between them (stored in g)
    • For each pair with v edges between them:
      • Check if we counted this pair optimistically: cnt[a] + cnt[b] > t
      • But shouldn't have: cnt[a] + cnt[b] - v <= t
      • If both conditions are true, subtract 1 from the answer
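
Putting both steps together, a condensed per-query sketch might look like the following (assuming cnt, g, and s = sorted(cnt) from the preprocessing above; the function name is illustrative):

from bisect import bisect_right

def answer_query(t, cnt, s, g, n):
    # Step 1: optimistic count, pretending no pair shares a direct edge
    ans = 0
    for j, x in enumerate(s):
        k = bisect_right(s, t - x, lo=j + 1)   # first index k with s[k] > t - x
        ans += n - k
    # Step 2: remove pairs whose true incident value drops to <= t
    for (a, b), v in g.items():
        if cnt[a] + cnt[b] > t and cnt[a] + cnt[b] - v <= t:
            ans -= 1
    return ans

cnt = [1, 3, 3, 3]
g = {(0, 1): 1, (1, 2): 1, (1, 3): 1, (2, 3): 2}
print(answer_query(4, cnt, sorted(cnt), g, 4))   # 2 (matches the walkthrough below)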

Time Complexity:

  • Preprocessing: O(E + n log n), where E is the number of edges (O(E) to count degrees and edge multiplicities, O(n log n) to sort the degree array)
  • Per query: O(n log n) for binary search + O(unique pairs with edges)
  • Total: O(E + Q * (n log n + P)) where Q is queries and P is unique pairs with edges

Space Complexity: O(n + P) for storing degrees and edge counts between pairs

Example Walkthrough

Let's walk through a small example to illustrate the solution approach.

Given:

  • n = 4 nodes (numbered 1 to 4)
  • edges = [[1,2], [2,3], [2,4], [3,4], [3,4]] (note: two edges between nodes 3 and 4)
  • queries = [4, 5]

Step 1: Preprocessing

First, calculate each node's degree and track edges between pairs:

  • Node 1: connected to edge [1,2] → degree = 1
  • Node 2: connected to edges [1,2], [2,3], [2,4] → degree = 3
  • Node 3: connected to edges [2,3], [3,4], [3,4] → degree = 3
  • Node 4: connected to edges [2,4], [3,4], [3,4] → degree = 3

Edge counts between pairs:

  • (1,2): 1 edge
  • (2,3): 1 edge
  • (2,4): 1 edge
  • (3,4): 2 edges

Step 2: Create sorted degree array

  • Original degrees: [1, 3, 3, 3]
  • Sorted degrees: [1, 3, 3, 3]

Step 3: Process Query 1 (t = 4)

Optimistic counting:

  • For node with degree 1 (position 0): Find nodes with degree > 3. Using binary search from position 1 onward, we find no such nodes. Count = 0
  • For node with degree 3 (position 1): Find nodes with degree > 1. From position 2 onward, we have 2 nodes. Count = 2
  • For node with degree 3 (position 2): Find nodes with degree > 1. From position 3 onward, we have 1 node. Count = 1
  • For node with degree 3 (position 3): No nodes after it. Count = 0
  • Total optimistic count = 0 + 2 + 1 + 0 = 3

Correction for actual edges: Check each pair with edges:

  • Pair (1,2): degree[1] + degree[2] = 1 + 3 = 4, not > 4, so wasn't counted optimistically. No correction needed.
  • Pair (2,3): degree[2] + degree[3] = 3 + 3 = 6 > 4, and 6 - 1 = 5 > 4. No correction needed.
  • Pair (2,4): degree[2] + degree[4] = 3 + 3 = 6 > 4, and 6 - 1 = 5 > 4. No correction needed.
  • Pair (3,4): degree[3] + degree[4] = 3 + 3 = 6 > 4, but 6 - 2 = 4 ≤ 4. We counted this pair but shouldn't have. Subtract 1.

Final answer for query 4: 3 - 1 = 2

Step 4: Process Query 2 (t = 5)

Optimistic counting:

  • For node with degree 1 (position 0): Find nodes with degree > 4. No such nodes. Count = 0
  • For node with degree 3 (position 1): Find nodes with degree > 2. From position 2 onward, we have 2 nodes. Count = 2
  • For node with degree 3 (position 2): Find nodes with degree > 2. From position 3 onward, we have 1 node. Count = 1
  • For node with degree 3 (position 3): No nodes after it. Count = 0
  • Total optimistic count = 0 + 2 + 1 + 0 = 3

Correction for actual edges: Check each pair with edges:

  • Pair (1,2): degree[1] + degree[2] = 4, not > 5. Not counted optimistically. No correction.
  • Pair (2,3): degree[2] + degree[3] = 6 > 5, and 6 - 1 = 5 ≤ 5. We counted this pair but shouldn't have. Subtract 1.
  • Pair (2,4): degree[2] + degree[4] = 6 > 5, and 6 - 1 = 5 ≤ 5. We counted this pair but shouldn't have. Subtract 1.
  • Pair (3,4): degree[3] + degree[4] = 6 > 5, and 6 - 2 = 4 ≤ 5. We counted this pair but shouldn't have. Subtract 1.

Final answer for query 5: 3 - 3 = 0

Result: answers = [2, 0]

The valid pairs for query 4 are (2,3) and (2,4), both with incident count of 5. For query 5, no pairs have incident count > 5.
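
A result like this is easy to double-check with a brute-force reference (a hypothetical helper, only suitable for tiny inputs) that recomputes incident(a, b) for every pair directly:

def count_pairs_bruteforce(n, edges, queries):
    # For each query, count pairs (a, b) with a < b and incident(a, b) > query
    def incident(a, b):
        return sum(1 for u, v in edges if u in (a, b) or v in (a, b))
    return [sum(1 for a in range(1, n + 1)
                  for b in range(a + 1, n + 1)
                  if incident(a, b) > q)
            for q in queries]

print(count_pairs_bruteforce(4, [[1, 2], [2, 3], [2, 4], [3, 4], [3, 4]], [4, 5]))  # [2, 0]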

Solution Implementation

from collections import defaultdict
from bisect import bisect_right
from typing import List

class Solution:
    def countPairs(
        self, n: int, edges: List[List[int]], queries: List[int]
    ) -> List[int]:
        # Initialize degree count for each node (0-indexed)
        node_degrees = [0] * n

        # Dictionary to store edge frequencies between pairs of nodes
        # Key: (smaller_node, larger_node), Value: edge count
        edge_frequencies = defaultdict(int)

        # Process each edge to count degrees and edge frequencies
        for node1, node2 in edges:
            # Convert to 0-indexed
            node1, node2 = node1 - 1, node2 - 1

            # Ensure consistent ordering with smaller node first
            smaller_node, larger_node = min(node1, node2), max(node1, node2)

            # Increment degree for both nodes
            node_degrees[smaller_node] += 1
            node_degrees[larger_node] += 1

            # Track frequency of this specific edge
            edge_frequencies[(smaller_node, larger_node)] += 1

        # Create sorted copy of degrees for binary search
        sorted_degrees = sorted(node_degrees)

        # Initialize result array for each query
        result = [0] * len(queries)

        # Process each query
        for query_idx, threshold in enumerate(queries):
            # Count pairs where sum of degrees > threshold
            # using binary search on the sorted degree array
            for j, current_degree in enumerate(sorted_degrees):
                # Find smallest index where sorted_degrees[k] + current_degree > threshold
                # Search only in elements after current position to avoid duplicate pairs
                k = bisect_right(sorted_degrees, threshold - current_degree, lo=j + 1)

                # All nodes from index k to n-1 form valid pairs with current node
                result[query_idx] += n - k

            # Adjust for multi-edges: some pairs were overcounted
            # If a pair has multiple edges, their actual incident count is less than
            # the sum of their degrees (since shared edges are counted twice)
            for (node_a, node_b), edge_count in edge_frequencies.items():
                # Total incident edges without removing duplicates
                total_with_duplicates = node_degrees[node_a] + node_degrees[node_b]

                # Actual incident edges (removing duplicate counting)
                actual_incident_edges = total_with_duplicates - edge_count

                # If we counted this pair but shouldn't have, decrement
                if total_with_duplicates > threshold and actual_incident_edges <= threshold:
                    result[query_idx] -= 1

        return result
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

class Solution {
    public int[] countPairs(int n, int[][] edges, int[] queries) {
        // Array to store the degree (number of edges) for each node
        int[] degree = new int[n];

        // Map to store edge frequencies
        // Key: encoded edge (smaller_node * n + larger_node)
        // Value: frequency of this edge (handles multiple edges between same nodes)
        Map<Integer, Integer> edgeFrequency = new HashMap<>();

        // Process each edge
        for (int[] edge : edges) {
            // Convert to 0-indexed
            int nodeA = edge[0] - 1;
            int nodeB = edge[1] - 1;

            // Increment degree for both nodes
            degree[nodeA]++;
            degree[nodeB]++;

            // Encode the edge with smaller node first to ensure uniqueness
            int encodedEdge = Math.min(nodeA, nodeB) * n + Math.max(nodeA, nodeB);
            // Track edge frequency (for handling multiple edges between same nodes)
            edgeFrequency.merge(encodedEdge, 1, Integer::sum);
        }

        // Create a sorted copy of degrees for binary search
        int[] sortedDegrees = degree.clone();
        Arrays.sort(sortedDegrees);

        // Array to store results for each query
        int[] result = new int[queries.length];

        // Process each query
        for (int queryIndex = 0; queryIndex < queries.length; queryIndex++) {
            int threshold = queries[queryIndex];

            // Count pairs using binary search on the sorted degree array
            for (int i = 0; i < n; i++) {
                int currentDegree = sortedDegrees[i];
                // Find the minimum degree needed for the pair to exceed threshold
                int minimumRequiredDegree = threshold - currentDegree;
                // Binary search for the first node with degree > minimumRequiredDegree
                int firstValidIndex = binarySearchFirstGreater(sortedDegrees, minimumRequiredDegree, i + 1);
                // All nodes from firstValidIndex to end form valid pairs with node i
                result[queryIndex] += n - firstValidIndex;
            }

            // Adjust for overcounting due to shared edges
            for (Map.Entry<Integer, Integer> entry : edgeFrequency.entrySet()) {
                // Decode the edge
                int nodeA = entry.getKey() / n;
                int nodeB = entry.getKey() % n;
                int sharedEdges = entry.getValue();

                // Check if this pair was counted but shouldn't be
                // (degree sum exceeds threshold, but after removing shared edges, it doesn't)
                if (degree[nodeA] + degree[nodeB] > threshold &&
                    degree[nodeA] + degree[nodeB] - sharedEdges <= threshold) {
                    result[queryIndex]--;
                }
            }
        }

        return result;
    }

    /**
     * Binary search to find the first index where arr[index] > target
     * @param arr sorted array to search in
     * @param target the value to compare against
     * @param startIndex the starting index for the search
     * @return the first index where arr[index] > target, or arr.length if none exists
     */
    private int binarySearchFirstGreater(int[] arr, int target, int startIndex) {
        int left = startIndex;
        int right = arr.length;

        while (left < right) {
            int mid = (left + right) >> 1;  // Equivalent to (left + right) / 2
            if (arr[mid] > target) {
                right = mid;  // Found a valid element, try to find earlier one
            } else {
                left = mid + 1;  // Current element not valid, search right half
            }
        }

        return left;
    }
}
#include <algorithm>
#include <unordered_map>
#include <vector>
using namespace std;

class Solution {
public:
    vector<int> countPairs(int n, vector<vector<int>>& edges, vector<int>& queries) {
        // Store degree count for each node
        vector<int> degree(n);

        // Map to store edge frequencies between pairs of nodes
        // Key: encoded pair (smaller_node * n + larger_node)
        unordered_map<int, int> edgeFrequency;

        // Count degrees and edge frequencies
        for (auto& edge : edges) {
            int nodeA = edge[0] - 1;  // Convert to 0-indexed
            int nodeB = edge[1] - 1;  // Convert to 0-indexed

            // Increment degree for both nodes
            ++degree[nodeA];
            ++degree[nodeB];

            // Encode the edge as a unique key (smaller node first)
            int encodedKey = min(nodeA, nodeB) * n + max(nodeA, nodeB);
            ++edgeFrequency[encodedKey];
        }

        // Create a sorted copy of degrees for binary search
        vector<int> sortedDegrees = degree;
        sort(sortedDegrees.begin(), sortedDegrees.end());

        // Process each query
        vector<int> result(queries.size());
        for (int queryIdx = 0; queryIdx < (int) queries.size(); ++queryIdx) {
            int threshold = queries[queryIdx];

            // Count pairs using binary search on the sorted degree array
            for (int leftIdx = 0; leftIdx < n; ++leftIdx) {
                int currentDegree = sortedDegrees[leftIdx];

                // Find the first position where sum exceeds threshold
                int rightIdx = upper_bound(sortedDegrees.begin() + leftIdx + 1,
                                           sortedDegrees.end(),
                                           threshold - currentDegree) - sortedDegrees.begin();

                // Add count of valid pairs with current node
                result[queryIdx] += n - rightIdx;
            }

            // Adjust for overcounting due to shared edges
            for (auto& [encodedPair, sharedEdgeCount] : edgeFrequency) {
                int nodeA = encodedPair / n;
                int nodeB = encodedPair % n;

                // Check if pair was counted but shouldn't be after considering shared edges
                if (degree[nodeA] + degree[nodeB] > threshold &&
                    degree[nodeA] + degree[nodeB] - sharedEdgeCount <= threshold) {
                    --result[queryIdx];
                }
            }
        }

        return result;
    }
};
/**
 * Counts pairs of nodes where the sum of their degrees exceeds each query threshold
 * @param n - Number of nodes (1-indexed)
 * @param edges - Array of edges between nodes
 * @param queries - Array of threshold values to check against
 * @returns Array of counts for each query
 */
function countPairs(n: number, edges: number[][], queries: number[]): number[] {
    // Array to store the degree of each node (0-indexed)
    const nodeDegrees: number[] = new Array(n).fill(0);

    // Map to store edge frequencies between node pairs
    // Key: encoded pair (smaller_index * n + larger_index)
    // Value: number of edges between the pair
    const edgeFrequencies: Map<number, number> = new Map();

    // Count degrees and edge frequencies
    for (const [nodeA, nodeB] of edges) {
        // Increment degrees (convert to 0-indexed)
        nodeDegrees[nodeA - 1]++;
        nodeDegrees[nodeB - 1]++;

        // Encode the edge pair with smaller index first
        const smallerIndex = Math.min(nodeA - 1, nodeB - 1);
        const largerIndex = Math.max(nodeA - 1, nodeB - 1);
        const encodedPair = smallerIndex * n + largerIndex;

        // Update edge frequency for this pair
        edgeFrequencies.set(encodedPair, (edgeFrequencies.get(encodedPair) || 0) + 1);
    }

    // Create a sorted copy of degrees for binary search
    const sortedDegrees: number[] = [...nodeDegrees].sort((a, b) => a - b);

    // Process each query
    const results: number[] = [];
    for (const threshold of queries) {
        let pairCount = 0;

        // Count pairs using binary search on the sorted degree array
        for (let j = 0; j < sortedDegrees.length; j++) {
            // Find the smallest index where sortedDegrees[index] > threshold - sortedDegrees[j]
            const targetIndex = binarySearchUpperBound(
                sortedDegrees,
                threshold - sortedDegrees[j],
                j + 1
            );
            // All nodes from targetIndex to end form valid pairs with node j
            pairCount += n - targetIndex;
        }

        // Adjust for overcounting due to multiple edges between same pairs
        for (const [encodedPair, edgeCount] of edgeFrequencies) {
            // Decode the node indices
            const nodeAIndex = Math.floor(encodedPair / n);
            const nodeBIndex = encodedPair % n;

            // Check if this pair was overcounted
            const totalDegree = nodeDegrees[nodeAIndex] + nodeDegrees[nodeBIndex];
            const adjustedDegree = totalDegree - edgeCount;

            // If pair was counted but shouldn't have been (after removing duplicate edges)
            if (totalDegree > threshold && adjustedDegree <= threshold) {
                pairCount--;
            }
        }

        results.push(pairCount);
    }

    return results;
}

/**
 * Binary search to find the leftmost position where array[position] > target
 * @param array - Sorted array to search in
 * @param target - Target value to compare against
 * @param startIndex - Starting index for the search
 * @returns Index of the first element greater than target
 */
function binarySearchUpperBound(array: number[], target: number, startIndex: number): number {
    let left = startIndex;
    let right = array.length;

    while (left < right) {
        const mid = Math.floor((left + right) / 2);
        if (array[mid] > target) {
            right = mid;
        } else {
            left = mid + 1;
        }
    }

    return left;
}

Time and Space Complexity

Time Complexity: O(q × (n × log n + m))

The time complexity breaks down as follows:

  • Initial edge processing takes O(m) to iterate through all edges and update counts and the dictionary
  • Sorting the count array takes O(n log n)
  • For each query (total q queries):
    • The nested loop iterates through each element in sorted array s (size n), and for each element performs a binary search using bisect_right, which takes O(log n). This gives O(n × log n) per query
    • The correction loop iterates through all unique edges in dictionary g, which can be at most O(m) edges
    • Total per query: O(n × log n + m)
  • Overall: O(q × (n × log n + m))

Space Complexity: O(n + m)

The space complexity consists of:

  • Array cnt of size n: O(n)
  • Dictionary g storing at most m unique edges: O(m)
  • Sorted array s (copy of cnt): O(n)
  • Answer array ans of size q: O(q)
  • Since q is typically smaller than or comparable to n and m, the dominant terms are O(n + m)

Common Pitfalls

1. Incorrect Edge Counting Between Node Pairs

A critical pitfall is miscounting or misunderstanding how edges between the same pair of nodes affect the incident count. Many implementations fail to properly handle multiple edges between the same two nodes.

Problem Example:

# Incorrect approach - treating each edge independently
edge_count = defaultdict(int)
for u, v in edges:
    edge_count[(u, v)] += 1  # Wrong! (1,2) and (2,1) treated as different

Solution: Always normalize the node pair by ordering them consistently:

edge_frequencies = defaultdict(int)
for node1, node2 in edges:
    node1, node2 = node1 - 1, node2 - 1
    smaller_node, larger_node = min(node1, node2), max(node1, node2)
    edge_frequencies[(smaller_node, larger_node)] += 1

2. Double-Counting Node Pairs in Binary Search

When using binary search to find valid pairs, it's easy to count each pair twice (once as (a,b) and once as (b,a)).

Problem Example:

# Incorrect - counts each pair twice
for j, degree in enumerate(sorted_degrees):
    k = bisect_right(sorted_degrees, threshold - degree)  # Wrong starting point!
    result += n - k

Solution: Start the binary search from j + 1 to ensure we only count pairs where the first node comes before the second:

for j, current_degree in enumerate(sorted_degrees):
    k = bisect_right(sorted_degrees, threshold - current_degree, lo=j + 1)
    result[query_idx] += n - k

3. Forgetting 1-Indexed to 0-Indexed Conversion

The problem states nodes are numbered from 1 to n, but arrays in most programming languages are 0-indexed. Forgetting this conversion leads to index errors or incorrect results.

Problem Example:

# Incorrect - using 1-indexed directly
node_degrees = [0] * (n + 1)  # Wastes a slot and leaves a phantom degree-0 entry that skews the sorted-degree counts
for node1, node2 in edges:
    node_degrees[node1] += 1  # Using 1-indexed directly
    node_degrees[node2] += 1

Solution: Convert to 0-indexed immediately when processing edges:

node_degrees = [0] * n
for node1, node2 in edges:
    node1, node2 = node1 - 1, node2 - 1  # Convert to 0-indexed
    # ... rest of processing

4. Incorrect Correction Logic for Multi-Edges

The trickiest part is correctly adjusting for pairs that were optimistically counted but shouldn't be included due to shared edges.

Problem Example:

# Incorrect correction logic
for (node_a, node_b), edge_count in edge_frequencies.items():
    if node_degrees[node_a] + node_degrees[node_b] - edge_count > threshold:
        result[query_idx] += 1  # Wrong! Should be subtracting overcounts

Solution: The correction should subtract overcounted pairs:

for (node_a, node_b), edge_count in edge_frequencies.items():
    total_with_duplicates = node_degrees[node_a] + node_degrees[node_b]
    actual_incident_edges = total_with_duplicates - edge_count
  
    # We counted this pair optimistically but it doesn't actually qualify
    if total_with_duplicates > threshold and actual_incident_edges <= threshold:
        result[query_idx] -= 1

5. Performance Degradation with Repeated Sorting

Sorting the degree array inside the query loop adds an unnecessary O(n log n) cost to every query.

Problem Example:

for query in queries:
    sorted_degrees = sorted(node_degrees)  # Sorting for each query - O(n log n) per query!
    # ... rest of logic

Solution: Sort once before processing all queries:

sorted_degrees = sorted(node_degrees)  # Sort once
for query_idx, threshold in enumerate(queries):
    # Use pre-sorted array for all queries