861. Score After Flipping Matrix

Medium · Greedy · Bit Manipulation · Array · Matrix

Problem Description

You have a binary matrix (containing only 0s and 1s) with m rows and n columns. You can perform moves on this matrix where each move involves selecting either an entire row or an entire column and flipping all values in it (changing all 0s to 1s and all 1s to 0s).

Each row of the matrix represents a binary number. For example, if a row contains [1, 0, 1, 1], it represents the binary number 1011, which equals 11 in decimal. The score of the matrix is calculated by converting each row to its decimal value and summing all these values together.

Your goal is to find the maximum possible score you can achieve after performing any number of moves (including zero moves).

Example to clarify scoring:

  • If you have a 2×3 matrix:
    [1, 0, 1]
    [0, 1, 1]
  • Row 1: 101 in binary = 5 in decimal
  • Row 2: 011 in binary = 3 in decimal
  • Total score = 5 + 3 = 8
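As a quick sanity check, the scoring rule above can be sketched in a couple of lines (a hypothetical helper, not part of the official solution):

```python
def matrix_score(grid):
    # Treat each row as a binary string and sum the decimal values.
    return sum(int("".join(map(str, row)), 2) for row in grid)

print(matrix_score([[1, 0, 1], [0, 1, 1]]))  # 5 + 3 = 8
```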

The solution uses a greedy strategy:

  1. First, it ensures all rows start with 1 (since the leftmost bit has the highest value). If a row starts with 0, the entire row is flipped.
  2. Then, for each column from left to right, it counts how many 1s are in that column. If there are fewer 1s than 0s, flipping that column would increase the score.
  3. The final score is calculated by determining the contribution of each column position to the total, considering that column j contributes 2^(n-j-1) for each 1 in that position.

Intuition

To maximize the score, we need to think about how binary numbers work. The leftmost bit (most significant bit) has the highest value: it contributes 2^(n-1) to the decimal value, which is more than all other bits combined. For instance, in a 4-bit number, the leftmost bit contributes 8, while all other bits together can only contribute at most 7.
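This dominance is easy to verify numerically; a minimal sketch for n = 4:

```python
n = 4
msb = 1 << (n - 1)         # leftmost bit alone: 0b1000 = 8
rest = (1 << (n - 1)) - 1  # all remaining bits set: 0b0111 = 7
print(msb > rest)  # True
```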

This gives us our first key insight: we want as many rows as possible to start with 1. Since we can flip entire rows, if any row starts with 0, we should flip that entire row to make it start with 1. This guarantees the maximum contribution from the most significant bit position.

After ensuring all rows start with 1, we can't flip rows anymore (as that would turn the leading 1 back to 0). Now we can only flip columns. For each remaining column position, we want to maximize the number of 1s in that column.

Here's the second insight: for each column, we want the majority value to be 1. If a column has more 0s than 1s, we should flip that column to convert the majority to 1s. If it already has more 1s, we leave it as is.

Why does this greedy approach work? Because each bit position has a fixed contribution value (2^(n-j-1) for position j), and the positions are independent after we've fixed the first column. We can make local optimal decisions for each column without affecting others.

The beauty of this solution is that we don't actually need to perform the flips explicitly for columns. We can just count how many 1s are in each column and take the maximum of that count or its complement (m - count), which represents the maximum number of 1s we could have in that column after optimal flipping.
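The counting trick can be sketched as follows for one column of a 3-row matrix:

```python
m = 3
column = [0, 0, 1]        # one column, after the rows are fixed
cnt = sum(column)         # 1s currently in the column
best = max(cnt, m - cnt)  # flipping would turn the two 0s into 1s
print(best)  # 2
```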


Solution Approach

The implementation follows a two-phase greedy strategy:

Phase 1: Ensure all rows start with 1

for i in range(m):
    if grid[i][0] == 0:
        for j in range(n):
            grid[i][j] ^= 1

We iterate through each row and check if the first element is 0. If it is, we flip the entire row using XOR operation (^= 1). The XOR with 1 effectively toggles the bit: 0 ^ 1 = 1 and 1 ^ 1 = 0. After this phase, all rows are guaranteed to start with 1, maximizing the contribution from the most significant bit position.
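The toggle behavior of XOR with 1 can be confirmed directly:

```python
# XOR with 1 flips a bit; XOR with 0 leaves it unchanged.
print(0 ^ 1)  # 1
print(1 ^ 1)  # 0
```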

Phase 2: Optimize each column

ans = 0
for j in range(n):
    cnt = sum(grid[i][j] for i in range(m))
    ans += max(cnt, m - cnt) * (1 << (n - j - 1))

For each column j:

  1. Count the number of 1s in the column: cnt = sum(grid[i][j] for i in range(m))
  2. Determine the maximum number of 1s possible in this column:
    • If cnt >= m - cnt (at least as many 1s as 0s), keep the column as is
    • If cnt < m - cnt (more 0s than 1s), we would flip the column
    • We use max(cnt, m - cnt) to get the optimal count without actually flipping
  3. Calculate the contribution of this column to the total score:
    • Each 1 in column j contributes 2^(n-j-1) to its row's value
    • We use bit shifting (1 << (n - j - 1)) which is equivalent to 2^(n-j-1)
    • Multiply the optimal count by this positional value

The algorithm cleverly avoids actually performing the column flips by directly calculating what the optimal count would be. This makes the solution both elegant and efficient with O(m * n) time complexity for iterating through the matrix twice, and O(1) extra space as we modify the input matrix in place.


Example Walkthrough

Let's walk through a small example with a 3×3 matrix:

Initial matrix:
[0, 1, 1]
[1, 0, 1]
[0, 0, 0]

Phase 1: Ensure all rows start with 1

Row 0 starts with 0, so flip the entire row:

  • [0, 1, 1] becomes [1, 0, 0]

Row 1 starts with 1, so leave it unchanged:

  • [1, 0, 1] stays [1, 0, 1]

Row 2 starts with 0, so flip the entire row:

  • [0, 0, 0] becomes [1, 1, 1]

After Phase 1:

[1, 0, 0]
[1, 0, 1]
[1, 1, 1]

Phase 2: Optimize each column

Column 0 (j=0): All three values are 1

  • Count of 1s: 3
  • max(3, 3-3) = max(3, 0) = 3
  • Contribution: 3 × 2^(3-0-1) = 3 × 4 = 12

Column 1 (j=1): Values are [0, 0, 1]

  • Count of 1s: 1
  • max(1, 3-1) = max(1, 2) = 2 (flipping would give us 2 ones)
  • Contribution: 2 × 2^(3-1-1) = 2 × 2 = 4

Column 2 (j=2): Values are [0, 1, 1]

  • Count of 1s: 2
  • max(2, 3-2) = max(2, 1) = 2 (keep as is)
  • Contribution: 2 × 2^(3-2-1) = 2 × 1 = 2

Total Score: 12 + 4 + 2 = 18

To verify: If we actually flipped column 1 (which had more 0s than 1s), the final matrix would be:

[1, 1, 0]  → binary 110 = 6
[1, 1, 1]  → binary 111 = 7
[1, 0, 1]  → binary 101 = 5
Total: 6 + 7 + 5 = 18 ✓
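To gain extra confidence in the greedy strategy, one can compare its answer against an exhaustive search over all flip combinations on small matrices (a verification sketch; `score` and `brute_force` are hypothetical helpers, not part of the solution):

```python
from itertools import product

def score(grid):
    # Sum of rows interpreted as binary numbers.
    return sum(int("".join(map(str, row)), 2) for row in grid)

def brute_force(grid):
    # Any sequence of moves is equivalent to choosing, for each row and each
    # column, whether it ends up flipped an odd number of times.
    m, n = len(grid), len(grid[0])
    best = 0
    for row_mask in product([0, 1], repeat=m):
        for col_mask in product([0, 1], repeat=n):
            flipped = [[grid[i][j] ^ row_mask[i] ^ col_mask[j]
                        for j in range(n)] for i in range(m)]
            best = max(best, score(flipped))
    return best

print(brute_force([[0, 1, 1], [1, 0, 1], [0, 0, 0]]))  # 18
```

This matches the greedy result of 18 for the walkthrough matrix.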

Solution Implementation

Python:

from typing import List

class Solution:
    def matrixScore(self, grid: List[List[int]]) -> int:
        """
        Maximize the score of a binary matrix by toggling rows and columns.
        Score is calculated as the sum of binary numbers represented by each row.

        Strategy:
        1. Toggle rows to ensure the leftmost bit (MSB) is always 1
        2. For each column, toggle if it results in more 1s than 0s

        Args:
            grid: Binary matrix with values 0 or 1

        Returns:
            Maximum possible score after optimal toggles
        """
        num_rows, num_cols = len(grid), len(grid[0])

        # Step 1: Toggle rows where the first element is 0
        # This ensures the most significant bit is always 1 for maximum value
        for row_idx in range(num_rows):
            if grid[row_idx][0] == 0:
                # Toggle entire row using XOR operation
                for col_idx in range(num_cols):
                    grid[row_idx][col_idx] ^= 1

        # Step 2: Calculate the maximum score by optimizing each column
        total_score = 0

        for col_idx in range(num_cols):
            # Count number of 1s in current column
            ones_count = sum(grid[row_idx][col_idx] for row_idx in range(num_rows))

            # Choose the maximum between current 1s count and potential 1s after toggle
            # If we toggle, zeros become ones: (num_rows - ones_count)
            max_ones = max(ones_count, num_rows - ones_count)

            # Add contribution of this column to total score
            # Each bit position has weight 2^(n-j-1) where j is column index
            bit_weight = 1 << (num_cols - col_idx - 1)
            total_score += max_ones * bit_weight

        return total_score

Java:

class Solution {
    public int matrixScore(int[][] grid) {
        int numRows = grid.length;
        int numCols = grid[0].length;

        // Step 1: Flip rows that have 0 in the first column
        // This ensures all rows start with 1 (maximizing the most significant bit)
        for (int row = 0; row < numRows; row++) {
            if (grid[row][0] == 0) {
                // Flip all elements in this row using XOR operation
                for (int col = 0; col < numCols; col++) {
                    grid[row][col] ^= 1;
                }
            }
        }

        // Step 2: Calculate the maximum score by considering column flips
        int totalScore = 0;

        for (int col = 0; col < numCols; col++) {
            // Count the number of 1s in the current column
            int onesCount = 0;
            for (int row = 0; row < numRows; row++) {
                onesCount += grid[row][col];
            }

            // Choose to flip column if it gives more 1s (maximize 1s in each column)
            int maxOnes = Math.max(onesCount, numRows - onesCount);

            // Calculate contribution of this column to the total score
            // Each column position has a weight of 2^(n-j-1) in binary representation
            int columnWeight = 1 << (numCols - col - 1);
            totalScore += maxOnes * columnWeight;
        }

        return totalScore;
    }
}

C++:

#include <vector>
using namespace std;

class Solution {
public:
    int matrixScore(vector<vector<int>>& grid) {
        int numRows = grid.size();
        int numCols = grid[0].size();

        // Step 1: Flip rows that start with 0 to maximize the leftmost bit
        // The leftmost bit has the highest value, so we want all rows to start with 1
        for (int row = 0; row < numRows; ++row) {
            if (grid[row][0] == 0) {
                // Flip the entire row using XOR with 1
                for (int col = 0; col < numCols; ++col) {
                    grid[row][col] ^= 1;
                }
            }
        }

        // Step 2: Calculate the maximum score by considering column flips
        int totalScore = 0;

        // Process each column from left to right
        for (int col = 0; col < numCols; ++col) {
            // Count the number of 1s in the current column
            int onesCount = 0;
            for (int row = 0; row < numRows; ++row) {
                onesCount += grid[row][col];
            }

            // For each column, we can choose to flip it or not
            // We want to maximize the number of 1s, so take the max between
            // current 1s count and the count after flipping (numRows - onesCount)
            int maxOnes = max(onesCount, numRows - onesCount);

            // Calculate the contribution of this column to the total score
            // The bit position value is 2^(numCols - col - 1)
            int bitValue = 1 << (numCols - col - 1);
            totalScore += maxOnes * bitValue;
        }

        return totalScore;
    }
};

TypeScript:

/**
 * Calculates the maximum score of a binary matrix after performing row and column flips.
 * The score is the sum of all rows interpreted as binary numbers.
 *
 * @param grid - A 2D binary matrix containing only 0s and 1s
 * @returns The maximum possible score after optimal flips
 */
function matrixScore(grid: number[][]): number {
    const rowCount: number = grid.length;
    const columnCount: number = grid[0].length;

    // Step 1: Flip rows that start with 0 to maximize the most significant bit
    // Since the leftmost bit has the highest value, ensure all rows start with 1
    for (let row = 0; row < rowCount; row++) {
        if (grid[row][0] === 0) {
            // Flip the entire row using XOR operation
            for (let col = 0; col < columnCount; col++) {
                grid[row][col] ^= 1;
            }
        }
    }

    // Step 2: For each column, count 1s and flip if needed to maximize the column sum
    let totalScore: number = 0;

    for (let col = 0; col < columnCount; col++) {
        // Count the number of 1s in the current column
        let onesCount: number = 0;
        for (let row = 0; row < rowCount; row++) {
            onesCount += grid[row][col];
        }

        // Choose the maximum between keeping the column as-is or flipping it
        // If flipping gives more 1s (rowCount - onesCount > onesCount), use that count
        const maxOnes: number = Math.max(onesCount, rowCount - onesCount);

        // Calculate the contribution of this column to the total score
        // The bit position value is 2^(columnCount - col - 1)
        const bitPositionValue: number = 1 << (columnCount - col - 1);
        totalScore += maxOnes * bitPositionValue;
    }

    return totalScore;
}


Time and Space Complexity

Time Complexity: O(m * n) where m is the number of rows and n is the number of columns in the grid.

  • The first loop iterates through all m rows, and for each row that starts with 0, it flips all n elements in that row. In the worst case, all rows need flipping, resulting in O(m * n) operations.
  • The second loop iterates through all n columns, and for each column, it sums up m elements to count the number of 1s. This results in O(n * m) operations.
  • Overall time complexity: O(m * n) + O(n * m) = O(m * n)

Space Complexity: O(1)

  • The algorithm modifies the input grid in-place and only uses a constant amount of extra space for variables (m, n, i, j, cnt, ans).
  • No additional data structures are created that scale with the input size.
  • The space complexity is constant, O(1).


Common Pitfalls

Pitfall 1: Modifying the Input Matrix Without Consideration

Issue: The solution modifies the input matrix directly during Phase 1. This can be problematic if:

  • The original matrix needs to be preserved for other operations
  • The interviewer expects the input to remain unchanged
  • You need to debug or verify your solution against the original input

Example of the problem:

# Original matrix
grid = [[0,0,1,1],[1,0,1,0],[1,1,0,0]]
solution = Solution()
result = solution.matrixScore(grid)
print(grid)  # Grid has been modified! No longer the original input

Solution: Create a copy of the matrix or use a virtual flip tracking approach:

class Solution:
    def matrixScore(self, grid: List[List[int]]) -> int:
        m, n = len(grid), len(grid[0])
      
        # Track which rows need to be flipped without modifying the grid
        row_flipped = [grid[i][0] == 0 for i in range(m)]
      
        total_score = 0
        for j in range(n):
            ones_count = 0
            for i in range(m):
                # Get the actual value after considering row flip
                actual_val = grid[i][j] ^ row_flipped[i]
                ones_count += actual_val
          
            max_ones = max(ones_count, m - ones_count)
            total_score += max_ones * (1 << (n - j - 1))
      
        return total_score

Pitfall 2: Incorrect Bit Position Calculation

Issue: Confusing the bit position formula. It's easy to make off-by-one errors with (n - j - 1) when calculating 2^position.

Common mistakes:

# Wrong: Using n - j instead of n - j - 1
bit_weight = 1 << (n - j)  # This gives wrong positional values

# Wrong: Using j directly
bit_weight = 1 << j  # This reverses the bit significance

Solution: Remember that for a binary number with n bits:

  • The leftmost bit (index 0) has weight 2^(n-1)
  • The rightmost bit (index n-1) has weight 2^0
  • For column j, the weight is 2^(n-j-1)

Test with a simple example to verify:

# For a 3-column matrix:
# Column 0: weight = 2^(3-0-1) = 2^2 = 4 βœ“
# Column 1: weight = 2^(3-1-1) = 2^1 = 2 βœ“
# Column 2: weight = 2^(3-2-1) = 2^0 = 1 βœ“

Pitfall 3: Not Considering Edge Cases

Issue: The solution assumes the matrix is non-empty and has at least one column. This could cause errors with edge cases.

Potential problems:

# These could cause index errors or unexpected behavior:
grid = []        # Empty matrix
grid = [[]]      # Matrix with empty rows

Solution: Add input validation:

class Solution:
    def matrixScore(self, grid: List[List[int]]) -> int:
        if not grid or not grid[0]:
            return 0
          
        m, n = len(grid), len(grid[0])
        # ... rest of the solution