## Division without using *, / or % (Divide Two Integers)

Divide two integers without using multiplication, division and mod operator.

Thoughts:
The common way to do this is to count how many times the divisor can be added up to reach the dividend. Of course, we should take care of signs; that’s not hard, though. However, LeetCode won’t let the naive version pass (Time Limit Exceeded), so I had to use another trick: since $\log(\frac{a}{b}) = \log{a} - \log{b}$, we have $\frac{a}{b} = \exp (\log{a} - \log{b})$. The tricky part of this question does not lie in the algorithm, though. It has to do with overflow. In particular, if we use Math.abs to compute the absolute value of Integer.MIN_VALUE (-2147483648), we get -2147483648 again, so we manually replace it with Integer.MAX_VALUE (2147483647). In most cases this is fine, except when dividing Integer.MIN_VALUE by 2: -2147483648 / 2 = -1073741824, but 2147483647 / 2 = 1073741823. I had to add one more edge condition: if the divisor is 2, just do the bitwise operation (a right shift). Another note: Integer.MAX_VALUE / Integer.MIN_VALUE = 0 (not -1).

Code (Java):

    public class Solution {
        public int divide(int dividend, int divisor) {
            if (divisor == 0)
                return 0;
            if (divisor == 1)
                return dividend;
            if (dividend == divisor)
                return 1;
            if (divisor == 2) {
                // Shift the magnitude so the result truncates toward zero:
                // -7 >> 1 is -4, but -7 / 2 should be -3. This also keeps
                // Integer.MIN_VALUE / 2 == -1073741824 correct.
                return dividend < 0 ? -(-dividend >>> 1) : dividend >>> 1;
            }

            boolean sign = false;
            if ((dividend > 0 && divisor < 0) ||
                (dividend < 0 && divisor > 0))
                sign = true;

            if (dividend == Integer.MAX_VALUE && divisor == Integer.MIN_VALUE)
                return 0;

            dividend = dividend == Integer.MIN_VALUE ? Integer.MAX_VALUE : Math.abs(dividend);
            divisor = divisor == Integer.MIN_VALUE ? Integer.MAX_VALUE : Math.abs(divisor);
            int result = (int) Math.floor(Math.pow(Math.E, Math.log(dividend) - Math.log(divisor)));
            return sign ? -result : result;
        }
    }

Code (C++):
    class Solution {
    public:
        int divide(int dividend, int divisor) {
            if (divisor == 0)
                return 0;
            if (divisor == 1)
                return dividend;
            if (dividend == divisor)
                return 1;
            if (divisor == 2) {
                // Shift the magnitude so the result truncates toward zero
                // (right-shifting a negative value rounds the wrong way).
                long long d = dividend;
                return d < 0 ? (int)(-((-d) >> 1)) : (int)(d >> 1);
            }

            bool sign = false;
            if ((dividend > 0 && divisor < 0) ||
                (dividend < 0 && divisor > 0))
                sign = true;

            if (dividend == numeric_limits<int>::max()
                && divisor == numeric_limits<int>::min())
                return 0;

            dividend = dividend == numeric_limits<int>::min() ?
                numeric_limits<int>::max() : abs(dividend);
            divisor = divisor == numeric_limits<int>::min() ?
                numeric_limits<int>::max() : abs(divisor);

            int result = (int) floor(exp(log(dividend) - log(divisor)));
            return sign ? -result : result;
        }
    };
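The approach interviewers usually expect instead of floating point is shift-and-subtract: repeatedly peel off the largest divisor · 2^k that still fits into the remainder, using only shifts, additions and comparisons. This is not the method used above; it is a minimal sketch that leans on long internally to sidestep the Integer.MIN_VALUE cases discussed in the Thoughts:

```java
class DivideBits {
    // Shift-and-subtract division: subtract the largest divisor * 2^k
    // that still fits into the remainder, accumulating 2^k each time.
    public static int divide(int dividend, int divisor) {
        if (divisor == 0) return 0; // same convention as the code above
        long a = Math.abs((long) dividend);
        long b = Math.abs((long) divisor);
        long result = 0;
        while (a >= b) {
            long chunk = b, multiple = 1;
            while (a >= (chunk << 1)) { // double until it no longer fits
                chunk <<= 1;
                multiple <<= 1;
            }
            a -= chunk;
            result += multiple;
        }
        if ((dividend < 0) ^ (divisor < 0)) result = -result;
        // Integer.MIN_VALUE / -1 does not fit in an int; clamp it.
        return result > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) result;
    }
}
```

Each outer iteration removes more than half of the remainder, so a 32-bit division takes at most a few dozen shift/subtract steps and comfortably fits the time limit.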



## Sum of two binary strings (Add Binary)

Given two binary strings, return their sum (also a binary string).
INPUT: a = “11”, b = “1”
OUTPUT: “100”

Thoughts:
Think about how we do binary addition by hand. We first align the two numbers to the right (that is, pad the shorter one with zeros on the left); here we convert b into “01”. Then we add from right to left, digit by digit, taking care of carries. We can actually separate these two processes: first do the addition while ignoring carries, then handle only the carries, and finally add the two results together. In the example, summing a = “11” and b = “01” digit-wise gives “10”; the carries give “01”, and since a carry applies to the next digit to the left, we shift it left by one bit, i.e., we have “010”. How do we compute “10” + “010”? We just recursively go through the process described above. When do we stop? What is the base case? When there are no carries anymore! At that point, we simply return whatever the sum is. We can mimic this whole process using string manipulations.

Code (Java):

    public class Solution {
        public String addBinary(String a, String b) {
            // Base case: no carries left in b; return a without leading zeros.
            if (b.indexOf('1') == -1)
                return a.indexOf('1') == -1 ? "0" : a.substring(a.indexOf('1'));

            // Pad the shorter string with leading zeros.
            int diff = Math.abs(a.length() - b.length());
            if (a.length() > b.length()) {
                for (int i = 0; i < diff; ++i)
                    b = '0' + b;
            } else {
                for (int i = 0; i < diff; ++i)
                    a = '0' + a;
            }

            String sum = "";
            String carry = "0"; // the trailing '0' shifts the carries left by one
            for (int i = a.length() - 1; i >= 0; --i) {
                // Sum without carries: digit is 1 iff exactly one input bit is 1.
                if ((a.charAt(i) == '1' && b.charAt(i) == '1') ||
                    (a.charAt(i) == '0' && b.charAt(i) == '0'))
                    sum = '0' + sum;
                else
                    sum = '1' + sum;
                // Carry bit: 1 iff both input bits are 1.
                if (a.charAt(i) == '1' && b.charAt(i) == '1')
                    carry = '1' + carry;
                else
                    carry = '0' + carry;
            }
            // Recurse until there are no carries left.
            return addBinary(sum, carry);
        }
    }

Code (C++):
    class Solution {
    public:
        string addBinary(string a, string b) {
            // Base case: no carries left in b; return a without leading zeros.
            if (b.find_first_of('1') == string::npos)
                return a.find_first_of('1') == string::npos ?
                    string("0") : a.substr(a.find_first_of('1'));

            // Pad the shorter string with leading zeros.
            int diff = abs((int)a.size() - (int)b.size());
            if (a.size() > b.size()) {
                for (int i = 0; i < diff; ++i)
                    b = '0' + b;
            } else {
                for (int i = 0; i < diff; ++i)
                    a = '0' + a;
            }

            string sum;
            string carry = "0"; // the trailing '0' shifts the carries left by one
            for (int i = (int)a.size() - 1; i >= 0; --i) {
                // Sum without carries: digit is 1 iff exactly one input bit is 1.
                if ((a.at(i) == '1' && b.at(i) == '1') ||
                    (a.at(i) == '0' && b.at(i) == '0'))
                    sum = '0' + sum;
                else
                    sum = '1' + sum;
                // Carry bit: 1 iff both input bits are 1.
                if (a.at(i) == '1' && b.at(i) == '1')
                    carry = '1' + carry;
                else
                    carry = '0' + carry;
            }
            // Recurse until there are no carries left.
            return addBinary(sum, carry);
        }
    };
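For comparison, a common single-pass alternative (not the recursive sum-and-carry method above) walks both strings from the right while tracking a single carry bit; a minimal Java sketch:

```java
class AddBinaryIterative {
    public static String addBinary(String a, String b) {
        StringBuilder sb = new StringBuilder();
        int i = a.length() - 1, j = b.length() - 1, carry = 0;
        while (i >= 0 || j >= 0 || carry > 0) {
            int sum = carry;                  // 0..3 once both digits are added
            if (i >= 0) sum += a.charAt(i--) - '0';
            if (j >= 0) sum += b.charAt(j--) - '0';
            sb.append(sum & 1);               // low bit is the result digit
            carry = sum >> 1;                 // high bit carries to the next column
        }
        return sb.reverse().toString();
    }
}
```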



## Find missing integer in an array with only access to bits operation

An array A[1..n] contains all the integers from 0 to n except for one number which is missing. In this problem, we cannot access an entire integer in A with a single operation. The elements of A are represented in binary, and the only operation we can use to access them is “fetch the jth bit of A[i]”, which takes constant time. Write code to find the missing integer. Can you do it in O(n) time?

My initial thoughts:
I basically had no idea about this problem. I thought that if the array were sorted, the least significant bits of the integers would appear as 0, 1, 0, 1, 0, 1, etc., while the second least significant bits would appear as 0, 0, 1, 1, 0, 0, 1, 1, and the two patterns would have to line up with each other. Then I was stuck.

Solutions:

    public static int findMissing(ArrayList<Integer> array) {
        return findMissing(array, 0);
    }

    public static int findMissing(ArrayList<Integer> input, int column) {
        if (input.isEmpty()) {
            return 0;
        }
        // Partition by the value of bit `column` (least significant bit first).
        ArrayList<Integer> oddIndices = new ArrayList<Integer>();  // bit is 1
        ArrayList<Integer> evenIndices = new ArrayList<Integer>(); // bit is 0
        for (Integer t : input) {
            if ((t & (1 << column)) == 0) {
                evenIndices.add(t);
            } else {
                oddIndices.add(t);
            }
        }
        if (oddIndices.size() >= evenIndices.size()) {
            // The missing number has a 0 in this bit; it hides in the 0-half.
            return (findMissing(evenIndices, column + 1) << 1) | 0;
        } else {
            // The missing number has a 1 in this bit; it hides in the 1-half.
            return (findMissing(oddIndices, column + 1) << 1) | 1;
        }
    }

The idea is to use recursion to process the integers bit by bit, from least significant to most significant. For each bit, the point is that removing one value changes the balance of 1s and 0s in that bit. For the full range 0..n, the LSB alternates 0, 1, 0, 1, …, so before removing an integer there are only two cases:

Number of 0s = Number of 1s (n odd).
Number of 0s = Number of 1s + 1 (n even).

Therefore, if we remove an integer whose LSB is 1, we end up with #1s < #0s; if we remove one whose LSB is 0, we end up with #1s >= #0s. Hence, by counting the 1s and 0s of the LSB we learn one bit of the missing number, and we can discard the half of the integers whose LSB does not match. The kept half, viewed from its next bit upward, is again a consecutive range with one value missing, so we advance to the second least significant bit and continue recursively until every bit of the missing value is recovered.
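Putting the pieces together, here is a self-contained sketch of the same elimination; a `getBit` helper stands in for the problem's "fetch the jth bit of A[i]" primitive, and it is the only way the values are examined:

```java
import java.util.ArrayList;
import java.util.List;

class MissingInteger {
    // Stand-in for "fetch the jth bit of A[i]": the only access we use.
    static boolean getBit(int value, int j) {
        return ((value >>> j) & 1) == 1;
    }

    public static int findMissing(List<Integer> array) {
        return findMissing(array, 0);
    }

    // Recover the missing number one bit at a time, LSB first.
    static int findMissing(List<Integer> input, int column) {
        if (input.isEmpty())
            return 0;
        List<Integer> zeros = new ArrayList<>(); // bit `column` is 0
        List<Integer> ones = new ArrayList<>();  // bit `column` is 1
        for (int t : input) {
            if (getBit(t, column)) ones.add(t);
            else zeros.add(t);
        }
        if (ones.size() < zeros.size()) {
            // a value with this bit set was removed: keep the 1-half
            return (findMissing(ones, column + 1) << 1) | 1;
        } else {
            // a value with this bit clear was removed: keep the 0-half
            return (findMissing(zeros, column + 1) << 1) | 0;
        }
    }
}
```

Each level scans a list about half the size of the previous one, so the total work is n + n/2 + n/4 + … = O(n).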


## Swap odd and even bits in an integer

Write a program to swap odd and even bits in an integer with as few instructions as possible (e.g., bit 0 and bit 1 are swapped, bit 2 and bit 3 are swapped, etc).

My initial thoughts:
I was thinking to shift the number to the left and then manipulate the bits somehow, shift the number to the right and also do the trick, then combine these two together. But I never figured out the details…

Solution:

    public static int swapOddEvenBits(int x) {
        // Use the unsigned shift (>>>): with >>, a set sign bit would be
        // sign-extended into bit 30 instead of swapped with it.
        return ((x & 0xaaaaaaaa) >>> 1) | ((x & 0x55555555) << 1);
    }

The solution’s idea is similar to mine… well, to some degree.

1. Mask the odd bits (keep the odd bits and turn off all the even bits). We can do this with number & 10101010…; on a 32-bit integer the mask is 0xAAAAAAAA. Now that we have only the odd bits, shift the number right by one position, which puts every odd bit into the even position below it.
2. Mask the even bits (keep the even bits and turn off all the odd bits). We can do this with number & 01010101…; on a 32-bit integer the mask is 0x55555555. Now that we have only the even bits, shift the number left by one position, which puts every even bit into the odd position above it.
3. OR (|) the two results together and output the final number.

We should learn the general way of swapping any two bits in position i and position j.
    public static int swapTwoPositions(int number, int i, int j) {
        return (((number & (1 << i)) >>> i) << j)
                | (((number & (1 << j)) >>> j) << i)
                | ((~(1 << i)) & (~(1 << j)) & number);
    }

There are three parts separated by |. The first part masks bit i and moves it to position j; the second masks bit j and moves it to position i; the third turns off bits i and j while keeping all the other bits.
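A hypothetical variant of the same general swap, with the three parts as named intermediates (and the unsigned shift >>>, which matters when i or j is 31):

```java
class BitPositionSwap {
    public static int swapTwoPositions(int number, int i, int j) {
        int bitI = (number >>> i) & 1;              // extract bit i
        int bitJ = (number >>> j) & 1;              // extract bit j
        int rest = number & ~(1 << i) & ~(1 << j);  // turn off bits i and j
        return rest | (bitI << j) | (bitJ << i);    // re-insert them, swapped
    }
}
```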


## Number of bits required to convert integer A to B

Write a function to determine the number of bits required to convert integer A to integer B.
Input: 31, 14
Output: 2

My initial thoughts:
31 = 11111
14 = 01110
Take 31 – 14 and we get:
17 = 10001
So we take the difference A − B (or B − A) and check how many 1s are in that integer. (Beware: this only happens to work for this particular example. Subtraction is wrong in general: converting 2 = 10 into 1 = 01 needs two bit flips, yet 2 − 1 = 1 contains a single 1. The XOR in the solution is the right tool.)

My initial codes:

    public static int numberOfBitsRequiredToConvert(int A, int B) {
        int diff = A - B >= 0 ? A - B : B - A;
        int count = 0;
        for (int i = 0; i < 32; ++i)
            count += (diff & (1 << i)) != 0 ? 1 : 0; // != 0, not > 0: bit 31 makes the AND negative
        return count;
    }

Solutions:
    public static int bitSwapRequired(int a, int b) {
        int count = 0;
        for (int c = a ^ b; c != 0; c = c >>> 1) { // >>> so a negative c still terminates
            count += c & 1;
        }
        return count;
    }

We can draw two basic ideas from this solution:

When trying to detect which bits of two numbers differ, use XOR (^).
To check each bit of a number, repeatedly AND it with 1 and shift it to the right; in Java, prefer the unsigned shift (>>>) so the loop also terminates for negative values.
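In everyday Java the loop is unnecessary: the standard library method `Integer.bitCount` counts set bits directly, and it handles a negative XOR result without any shifting care:

```java
class BitDistance {
    // XOR marks the differing bits; Integer.bitCount counts them.
    public static int bitSwapRequired(int a, int b) {
        return Integer.bitCount(a ^ b);
    }
}
```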



## Explain ((n & (n-1)) == 0)

Explain what the following code does: ((n & (n-1)) == 0).

When A & B == 0, A and B have no common 1s: at every bit position, at least one of the two numbers has a 0.

Let’s look at ((n & (n-1)) == 0).

• n == 1. n – 1 == 0. 1 & 0 == 0.
• n == 2. n – 1 == 1. 10 & 01 == 0.
• n == 3. n – 1 == 2. 11 & 10 != 0.
• n == 4. n – 1 == 3. 100 & 011 == 0.
• n == 5. n – 1 == 4. 101 & 100 != 0.
• n == 6. n – 1 == 5. 110 & 101 != 0.
• n == 7. n – 1 == 6. 111 & 110 != 0.
• n == 8. n – 1 == 7. 1000 & 0111 == 0.

Looking at the n that satisfy ((n & (n-1)) == 0), we find n == 1, 2, 4, 8, i.e. the powers of 2.
Now think about the general case, say a binary number n = abcde1xxx.
Consider the trailing part 1xxx first: after subtracting 1, n and n-1 must share no common 1s, which means the subtraction has to “borrow” from the most significant bit of this part. That forces 1xxx to be 1000 and the corresponding part of n-1 to be 0111. So n has the form abcde1000 and n-1 is abcde0111.
Now look at the abcde part. To make ((n & (n-1)) == 0), we must have abcde be 00000.
To sum up, n must have the form 00…010…0, i.e. a power of 2. (Note that n == 0 satisfies the expression as well, so a strict power-of-2 test should also check n > 0.)
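So ((n & (n-1)) == 0) is the classic power-of-two test. A minimal sketch; since n == 0 also passes the raw expression, a practical check adds n > 0:

```java
class PowerOfTwo {
    // n & (n - 1) clears the lowest set bit, so the expression is 0
    // exactly when n has at most one bit set.
    public static boolean isPowerOfTwo(int n) {
        return n > 0 && (n & (n - 1)) == 0;
    }
}
```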

## Find the nearest numbers that have same number of 1s for an integer

Given an integer, print the next smallest and next largest number that have the same number of 1 bits in their binary representation.

My initial thoughts:
For the next largest, keep adding 1 to the number until we find one with the same number of 1 bits; for the next smallest, keep subtracting 1.

My initial codes:

    public static int findNextSmallest(int number) {
        int result = number - 1;
        while (Integer.bitCount(result) != Integer.bitCount(number))
            result--;
        return result;
    }

    public static int findNextLargest(int number) {
        int result = number + 1;
        while (Integer.bitCount(result) != Integer.bitCount(number))
            result++;
        return result;
    }

Definitely gonna work but terribly boring. We know this is not what the interviewer expects.
Solutions:
    public static boolean GetBit(int n, int index) {
        return ((n & (1 << index)) != 0); // != 0, not > 0: index 31 yields a negative value
    }

    public static int SetBit(int n, int index, boolean b) {
        if (b) {
            return n | (1 << index);
        } else {
            int mask = ~(1 << index);
            return n & mask; // clear the bit
        }
    }

    public static int GetNext_NP(int n) {
        if (n <= 0)
            return -1;

        int index = 0;
        int countOnes = 0;

        // Skip trailing zeros to find the first 1.
        while (!GetBit(n, index))
            index++;

        // Walk past the block of 1s, counting them; turn on the 0 above it.
        while (GetBit(n, index)) {
            index++;
            countOnes++;
        }
        n = SetBit(n, index, true);

        // Turn off the previous 1.
        index--;
        n = SetBit(n, index, false);
        countOnes--;

        // Clear everything below, ...
        for (int i = index - 1; i >= countOnes; i--) {
            n = SetBit(n, i, false);
        }

        // ...then pack the remaining 1s at the far right.
        for (int i = countOnes - 1; i >= 0; i--) {
            n = SetBit(n, i, true);
        }

        return n;
    }

    public static int GetPrevious_NP(int n) {
        if (n <= 0)
            return -1; // Error

        int index = 0;
        int countZeros = 0;

        // Skip trailing ones to find the first 0.
        while (GetBit(n, index))
            index++;

        // Walk past the block of 0s, counting them; turn off the 1 above it.
        while (!GetBit(n, index)) {
            index++;
            countZeros++;
        }
        n = SetBit(n, index, false);

        // Turn on the previous zero.
        index--;
        n = SetBit(n, index, true);
        countZeros--;

        // Fill with 1s just below, ...
        for (int i = index - 1; i >= countZeros; i--) {
            n = SetBit(n, i, true);
        }

        // ...then clear the rest down to bit 0.
        for (int i = countZeros - 1; i >= 0; i--) {
            n = SetBit(n, i, false);
        }

        return n;
    }

Solution’s idea (for GetNext_NP; GetPrevious_NP mirrors it):

1. Traverse from right to left. After passing the rightmost block of 1s, turn on the next 0 (call its position i). We have now increased the number by 2^i. Example: xxxxx011100 becomes xxxxx111100.
2. Turn off the 1 just to the right of that position. We are now bigger than the original by 2^i − 2^(i−1). Example: xxxxx111100 becomes xxxxx101100.
3. Make the number as small as possible by rearranging all the 1s below position i to be as far right as possible. Example: xxxxx101100 becomes xxxxx100011.
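The three steps also collapse into a well-known arithmetic form for the next value (a common alternative to the bit-by-bit code above; it uses division, which is fine here since the no-division constraint belonged to the first problem only). `n & -n` isolates the lowest set bit, and adding it flips the rightmost block of 1s in one go:

```java
class NextSameOnes {
    public static int getNext(int n) {
        int smallest = n & -n;         // lowest set bit, e.g. 011100 -> 000100
        int ripple = n + smallest;     // steps 1-2 at once: 011100 -> 100000
        int ones = n ^ ripple;         // every bit that changed: 111100
        ones = (ones / smallest) >> 2; // step 3: right-justify the leftovers: 000011
        return ripple | ones;          // 100011
    }
}
```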