minimum array end — 9/11/24
problem statement
Given some \(x\) and \(n\), construct a strictly increasing array (say `nums`) of length \(n\) such that `nums[0] & nums[1] & ... & nums[n - 1] == x`, where `&` denotes the bitwise AND operator. Finally, return the minimum possible value of `nums[n - 1]`.
understanding the problem
The main difficulty in this problem lies in understanding what is being asked (intentionally or not, the phrasing is terrible). Some initial notes:
- The final array need not be constructed
- The element-wise bitwise AND of an array equals \(x\) if and only if each element has \(x\)'s bits set and no other bit is set by all elements
- It makes sense to set `nums[0] == x` to ensure `nums[n - 1]` is minimal
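To make the bitwise-AND observation concrete, here is a small sketch (`andAll` is a hypothetical helper, not part of the solution):

```cpp
#include <vector>

// AND-reduce an array; ~0 (all ones) is the identity for AND
long long andAll(const std::vector<long long>& nums) {
    long long acc = ~0LL;
    for (long long e : nums) acc &= e;
    return acc;
}
```

For \(x = 4\): `andAll({4, 5, 6})` is 4, since each element contains \(100_2\) and no other bit is shared by all three. By contrast, `andAll({5, 7, 13})` is 5, not 4: every element still contains \(100_2\), but bit 0 is also set in all of them, so the AND overshoots \(x\).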
developing an approach
An inductive approach is helpful. Consider the natural question: “If I had correctly generated `nums[:i]`, how could I find `nums[i]`?” In other words, how can I find the next smallest number such that `nums`'s element-wise bitwise AND is still \(x\)?
Hmm... this is tricky. Let's think of a similar problem to glean some insight: “Given some \(x\), how can I find the next smallest number?”. The answer is, of course, add one (bear with me here).
We also know that every element of `nums` must have at least \(x\)'s bits set. Therefore, we may only alter \(x\)'s unset bits when constructing `nums[i]`.
The key insight of this problem is combining these two ideas to answer our question: just “add one” to `nums[i - 1]`'s unset bits. Repeat this to find `nums[n - 1]`.
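This inductive step can be sketched directly (a deliberately slow brute-force sketch; `bruteForceMinEnd` is a hypothetical name):

```cpp
// Walk from x to nums[n - 1] by taking n - 1 "next" steps.
// (cur + 1) | x is exactly "add one to the unset bits": the increment's
// carry rolls through x's set bits, and the OR restores any it cleared,
// so only x's unset bits ever change.
long long bruteForceMinEnd(int n, long long x) {
    long long cur = x;            // nums[0] == x
    for (int i = 1; i < n; ++i)
        cur = (cur + 1) | x;      // smallest next value keeping x's bits
    return cur;                   // nums[n - 1]
}
```

For example, `bruteForceMinEnd(3, 4)` walks \(100_2 \to 101_2 \to 110_2\) and returns 6. At up to \(10^8\) iterations this is too slow in the worst case, which motivates the closed form developed next.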
One last piece is missing: how do we know the element-wise bitwise AND is exactly \(x\)? Because `nums[i]` (for \(i > 0\)) only sets \(x\)'s unset bits, every number in `nums` has at least \(x\)'s bits set, so the AND contains \(x\). Conversely, no other bit can survive the AND, because `nums[0] == x` has it unset.
carrying out the plan
Let's flesh out the remaining parts of the algorithm:
- `len(nums) == n` and we initialize `nums[0] == x`, so we need to “add one” \(n - 1\) times
- How do we carry out the additions? We could iterate \(n - 1\) times and simulate them. However, we already know how we want to alter the unset bits of `nums[0]` (add one) and how many times we want to do it (\(n - 1\)). Because we're adding one \(n - 1\) times to \(x\)'s unset bits (right to left, of course), we can simply set its unset bits to those of \(n - 1\).
The implementation is relatively straightforward. Traverse \(x\) from least- to most-significant bit, setting its \(i\)th unset bit to \(n - 1\)'s \(i\)th bit. Use a bitmask `mask` to traverse \(x\).
```cpp
long long minEnd(int n, long long x) {
    int bits_to_distribute = n - 1;
    long long mask = 1;
    while (bits_to_distribute > 0) {
        if ((x & mask) == 0) {
            // if the bit should be set, set it; otherwise, leave it alone
            if ((bits_to_distribute & 1) == 1)
                x |= mask;
            bits_to_distribute >>= 1;
        }
        mask <<= 1;
    }
    return x;
}
```
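To gain confidence in the closed form, we can cross-check it against a direct simulation (repeatedly stepping to the next integer that keeps all of \(x\)'s bits) for every small input. Both functions are restated so the snippet stands alone; `bruteForceMinEnd` and `crossCheck` are hypothetical names:

```cpp
// restated from above: distribute n - 1's bits into x's unset bits
long long minEnd(int n, long long x) {
    int bits_to_distribute = n - 1;
    long long mask = 1;
    while (bits_to_distribute > 0) {
        if ((x & mask) == 0) {
            if ((bits_to_distribute & 1) == 1)
                x |= mask;
            bits_to_distribute >>= 1;
        }
        mask <<= 1;
    }
    return x;
}

// slow reference: step to the next superset of x's bits, n - 1 times
long long bruteForceMinEnd(int n, long long x) {
    long long cur = x;
    for (int i = 1; i < n; ++i)
        cur = (cur + 1) | x;
    return cur;
}

// true iff both agree on every 1 <= n, x <= limit
bool crossCheck(int limit) {
    for (int n = 1; n <= limit; ++n)
        for (long long x = 1; x <= limit; ++x)
            if (minEnd(n, x) != bruteForceMinEnd(n, x))
                return false;
    return true;
}
```

Both agree on the problem's sample cases: `minEnd(3, 4)` is 6 (`nums = [4, 5, 6]`) and `minEnd(2, 7)` is 15 (`nums = [7, 15]`).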
asymptotic complexity
Space Complexity: \(\Theta(1)\)—a constant number of numeric variables is allocated regardless of \(n\) and \(x\).
Time Complexity: in the worst case, we may need to traverse the entirety of \(x\) to distribute every bit of \(n - 1\) into \(x\). This occurs if and only if \(x\) is all ones (\(\exists k \gt 0 : 2^k - 1 = x\)). \(x\) and \(n\) have \(\lg(x)\) and \(\lg(n)\) bits respectively, so the solution is \(O(\lg(x) + \lg(n)) \in O(\lg(xn))\). \(1 \leq x, n \leq 10^8\), so this runtime is bounded by \(O(\lg(10^8 \cdot 10^8)) = O(\lg(10^{16})) \approx 54\) bit operations.