Can two threads call a synchronized method and normal method at the same time?

You are given a class with synchronized method A, and a normal method C. If you have two threads in one instance of a program, can they call A at the same time? Can they call A and C at the same time?

Solution:
Java provides two ways to achieve synchronization: synchronized method and synchronized statement.

  • Synchronized method: Methods of a class that need to be synchronized are declared with the “synchronized” keyword. If one thread is executing a synchronized method, all other threads that want to execute any of the synchronized methods on the same object are blocked.
    Syntax: method1 and method2 need to be synchronized

    public class SynchronizedMethod {
    	// Variables declaration
    	public synchronized returntype method1() {
    		// Statements
    	}
    	public synchronized returntype method2() {
    		// Statements
    	}
    	// Other methods
    }	
    
  • Synchronized statement: It provides synchronization for a group of statements rather than for a method as a whole. Unlike a synchronized method, it must be given the object on which the synchronized statements will be applied.
    Syntax: synchronized statements on “this” object

    synchronized(this) {
    	// statement 1
    	// ...
    	// statement N
    }
    

i) If you have two threads in one instance of a program, can they call A at the same time?
No. Only one thread at a time can execute a synchronized method of a given object; the second caller blocks until the first releases the object’s lock.
ii) Can they call A and C at the same time?
Yes. Only methods of the same object that are declared with the keyword synchronized cannot be interleaved; a synchronized method and a normal method can run at the same time.
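As a quick check, the first claim can be demonstrated with a small, runnable Java sketch (the class and method names here are ours, not from the question): two threads hammer a synchronized increment method, and because the calls cannot interleave, no update is lost.

```java
public class SyncCounterDemo {
    private int count = 0;

    // Only one thread at a time may execute this method on a given instance.
    public synchronized void increment() {
        count++;
    }

    public static int run() {
        SyncCounterDemo c = new SyncCounterDemo();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return c.count;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 200000
    }
}
```

If the synchronized keyword is removed, the two `count++` read-modify-write sequences can interleave and the final total will usually come out below 200000.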

Scheduling method calls in sequence

Suppose we have the following code:

	class Foo {
	public:
		void A(.....); // If A is called, a new thread will be created 
					// and the corresponding function will be executed.
		void B(.....); // same as above
		void C(.....); // same as above
	};
	Foo f;
	f.A(.....);
	f.B(.....);
	f.C(.....);

i) Can you design a mechanism to make sure that B is executed after A, and C is executed after B?
ii) Suppose we have the following code to use class Foo. We do not know how the threads will be scheduled by the OS:

	Foo f;
	f.A(.....);
	f.B(.....);
	f.C(.....);
	f.A(.....);
	f.B(.....);
	f.C(.....);

Can you design a mechanism to make sure that all the methods will be executed in sequence?

Solution:
i) Can you design a mechanism to make sure that B is executed after A, and C is executed after B?

Semaphore s_a(0);
Semaphore s_b(0);
A {
	//
	s_a.release(1);
}
B {
	s_a.acquire(1); 
	//
	s_b.release(1);
}
C {
	s_b.acquire(1);
	//
}
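Translated into runnable Java (java.util.concurrent.Semaphore has the same acquire/release semantics as the pseudocode above), the scheme guarantees the order A, B, C even if the threads are deliberately started in reverse:

```java
import java.util.concurrent.Semaphore;

public class OrderedCalls {
    private static final Semaphore sA = new Semaphore(0); // released when A finishes
    private static final Semaphore sB = new Semaphore(0); // released when B finishes
    private static final StringBuffer out = new StringBuffer();

    static void acquire(Semaphore s) {
        try { s.acquire(); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    static void a() { out.append("A"); sA.release(); }
    static void b() { acquire(sA); out.append("B"); sB.release(); }
    static void c() { acquire(sB); out.append("C"); }

    public static String run() {
        Thread tc = new Thread(OrderedCalls::c);
        Thread tb = new Thread(OrderedCalls::b);
        Thread ta = new Thread(OrderedCalls::a);
        // Start in reverse order: the semaphores, not scheduling luck, enforce A -> B -> C.
        tc.start(); tb.start(); ta.start();
        try { ta.join(); tb.join(); tc.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints ABC
    }
}
```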

ii) Can you design a mechanism to make sure that all the methods will be executed in sequence?

Semaphore s_a(0);
Semaphore s_b(0);
Semaphore s_c(1);
A{
	s_c.acquire(1); 
	// 
	s_a.release(1);
}
B{
	s_a.acquire(1); 
	// 
	s_b.release(1);
}
C{
	s_b.acquire(1); 
	// 
	s_c.release(1);
}
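The second scheme closes the loop: C releases the semaphore that A waits on, so the whole A, B, C cycle can repeat in order. A runnable Java version (the rounds parameter is ours, added so the repetition is visible):

```java
import java.util.concurrent.Semaphore;

public class RepeatedSequence {
    private static final Semaphore sA = new Semaphore(0);
    private static final Semaphore sB = new Semaphore(0);
    private static final Semaphore sC = new Semaphore(1); // one permit so the first A may start
    private static final StringBuffer out = new StringBuffer();

    static void acquire(Semaphore s) {
        try { s.acquire(); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    public static String run(int rounds) {
        Thread ta = new Thread(() -> { for (int i = 0; i < rounds; i++) { acquire(sC); out.append("A"); sA.release(); } });
        Thread tb = new Thread(() -> { for (int i = 0; i < rounds; i++) { acquire(sA); out.append("B"); sB.release(); } });
        Thread tc = new Thread(() -> { for (int i = 0; i < rounds; i++) { acquire(sB); out.append("C"); sC.release(); } });
        ta.start(); tb.start(); tc.start();
        try { ta.join(); tb.join(); tc.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(2)); // prints ABCABC
    }
}
```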

Design a class which provides a lock if no deadlocks

Design a class which provides a lock only if there are no possible deadlocks.

Solution:
For our solution, we implement a wait-die deadlock-prevention scheme: when a thread requests a resource held by another thread, the older thread (earlier timestamp) is allowed to wait, while the younger thread is aborted (“dies”) and must retry later. Since a thread can only ever wait for a younger one, no cycle of waiting threads can form.

	class MyThread extends Thread {
		long time;
		ArrayList<Resource> res = new ArrayList<Resource>();

		public ArrayList<Resource> getRes() {
			return res;
		}

		@Override
		public void run() {
			// Try to acquire the four resources in order
			time = System.currentTimeMillis();
			int count = 0;
			while (true) {
				if (count < 4) {
					if (Question.canAcquireResource(this, Question.r[count])) {
						res.add(Question.r[count]);
						count++;
						System.out.println("Resource: ["
								+ Question.r[count - 1].getId()
								+ "] acquired by thread: [" + this.getName()
								+ "]");
						try {
							sleep(1000);
						} catch (InterruptedException e) {
							e.printStackTrace();
						}
					}
				} else {
					break; // all resources acquired; exit instead of calling the deprecated stop()
				}
			}
		}

		public long getTime() {
			return time;
		}

		public void setRes(ArrayList<Resource> res) {
			this.res = res;
		}

		MyThread(String name) {
			super(name);
		}
	}
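The heart of wait-die is the timestamp comparison inside canAcquireResource (whose body is not shown above). A minimal Java sketch of just that decision, with a method name of our own choosing, could look like this:

```java
public class WaitDie {
    enum Action { WAIT, DIE }

    // Wait-die: an older requester (smaller timestamp) may wait for the holder;
    // a younger requester is aborted ("dies") and must retry, keeping its old timestamp
    // so that it eventually becomes the oldest and makes progress.
    static Action decide(long requesterTime, long holderTime) {
        return requesterTime < holderTime ? Action.WAIT : Action.DIE;
    }

    public static void main(String[] args) {
        System.out.println(decide(100, 200)); // older requester: WAIT
        System.out.println(decide(200, 100)); // younger requester: DIE
    }
}
```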

Thread safe and exception safe singleton design pattern

Implement a singleton design pattern as a template such that, for any given class Foo, you can call Singleton::instance() and get a pointer to an instance of a singleton of type Foo. Assume the existence of a class Lock which has acquire() and release() methods. How could you make your implementation thread safe and exception safe?

Solution:

using namespace std;
// Place holder for thread synchronization lock
class Lock {
public:
	Lock() { // placeholder code to create the lock
	} 
	~Lock() { // placeholder code to deallocate the lock
	} 
	void AcquireLock() { // placeholder to acquire the lock
	} 
	void ReleaseLock() { // placeholder to release the lock
	}
};

// Singleton class with a method that creates a new instance 
// of the class of the passed-in template type 
// if one does not already exist.
template <class T> class Singleton { 
private:
	static Lock lock;
	static T* object; 
protected:
	Singleton() { }; 
public:
	static T* Instance(); 
};
template <class T> Lock Singleton<T>::lock;
template <class T> T* Singleton<T>::object = 0;

template <class T> T* Singleton<T>::Instance() {
	// If the object is not initialized, acquire the lock
	if (object == 0) {
		lock.AcquireLock();
		// If two threads simultaneously pass the first "if" check,
		// only the one that acquires the lock first
		// should create the instance
		if (object == 0) {
			object = new T; 
		}
		lock.ReleaseLock(); 
	}
	return object; 
}

int main() {
// Foo is any class (defined elsewhere) for which we want singleton access 
	Foo* singleton_foo = Singleton<Foo>::Instance();
	return 0;
}

The general method to make a program thread safe is to lock shared resources whenever write access is granted. This way, if one thread is modifying a resource, other threads cannot modify it at the same time.
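For comparison, the same double-checked locking idea can be sketched in Java, where a volatile field plus a synchronized block replaces the explicit Lock class (the class name SafeSingleton is ours):

```java
public class SafeSingleton {
    // volatile prevents another thread from observing a partially constructed instance
    private static volatile SafeSingleton instance;

    private SafeSingleton() { }

    public static SafeSingleton instance() {
        if (instance == null) {                   // first check, no lock
            synchronized (SafeSingleton.class) {
                if (instance == null) {           // second check, under the lock
                    instance = new SafeSingleton();
                }
            }
        }
        return instance;
    }
}
```

Note that in Java the volatile qualifier is essential; without it the double-checked pattern is broken, because the reference may be published before the constructor finishes.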

How to measure the time spent in a context switch

How can you measure the time spent in a context switch?

Solution:
This is a tricky question, but let’s start with a possible solution.
A context switch is the time spent switching between two processes (e.g., bringing a waiting process into execution and sending an executing process into waiting/terminated state). This happens in multitasking. The operating system must bring the state information of waiting processes into memory and save the state information of the running process.
In order to solve this problem, we would like to record timestamps of the last and first instruction of the swapping processes. The context switching time would be the difference in the timestamps between the two processes.
Let’s take an easy example: Assume there are only two processes, P1 and P2.
P1 is executing and P2 is waiting for execution. At some point the OS must swap P1 and P2; let’s assume it happens at the Nth instruction of P1. The context switch time would then be Time_Stamp(P2_1) – Time_Stamp(P1_N), the gap between the last instruction of P1 and the first instruction of P2.
Easy enough. The tricky part is this: how do we know when this swapping occurs? Swapping is governed by the scheduling algorithm of the OS. We can not, of course, record the timestamp of every instruction in the process.
Another issue: there are many kernel level threads which are also doing context switches, and the user does not have any control over them.
Overall, we can say that this is mostly an approximate calculation which depends on the underlying OS. One approximation is to record, for each process, its start and end timestamps and its waiting time in the queue.
If the total execution time of all the processes was T, then the context switch time = T – (SUM over all processes of (waiting time + execution time)).
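The final formula is plain arithmetic. A small Java sketch makes it concrete; the measurements here are hypothetical numbers chosen only for illustration:

```java
public class ContextSwitchEstimate {
    // contextSwitchTime = T - SUM over processes of (waitingTime + executionTime)
    static long estimate(long totalTime, long[] waiting, long[] execution) {
        long accounted = 0;
        for (long w : waiting) accounted += w;
        for (long e : execution) accounted += e;
        return totalTime - accounted;
    }

    public static void main(String[] args) {
        // Hypothetical measurements: a 100 ms total run for two processes.
        System.out.println(estimate(100, new long[]{10, 15}, new long[]{30, 40})); // prints 5
    }
}
```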

Differences between thread and process

What’s the difference between a thread and a process?

My initial thoughts:

  1. A thread is smaller than a process.
  2. Resources can be exchanged between threads but not between processes.
  3. One process usually contains multiple threads.

Solution:

Processes and threads are related to each other but are fundamentally different.
A process can be thought of as an instance of a program in execution. Each process is an independent entity to which system resources (CPU time, memory, etc.) are allocated, and each process is executed in a separate address space. One process cannot access the variables and data structures of another process; to access another process’s resources, inter-process communication mechanisms such as pipes, files, or sockets have to be used.
A thread exists within a process and shares the process’s address space; a process can have multiple threads. A key difference between processes and threads is that multiple threads share parts of their state: typically, all threads of a process may read and write the same memory, whereas no process can directly access the memory of another process. Each thread still has its own registers and its own stack, although other threads can read and write that stack memory, since it lies within the shared address space.
A thread is a particular execution path of a process; when one thread modifies a process resource, the change is immediately visible to sibling threads.
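That last point can be shown in a few lines of Java (the class name SharedState is ours): a write made by one thread is visible to another after join(), because both threads live in the same address space of one process.

```java
public class SharedState {
    static int shared = 0; // one copy, visible to every thread in the process

    public static int run() {
        Thread t = new Thread(() -> shared = 42);
        t.start();
        // join() both waits for the writer and guarantees its write is visible afterwards
        try { t.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return shared;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 42
    }
}
```

Two separate processes running this program would each have their own copy of shared; no such direct visibility exists across process boundaries.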