Solution:
After looking at the problem, it becomes apparent that the solution has the following shape:
	1. Start with the interval [1, n]
	2. Find some index i in the interval such that a_i is coprime to every other value in the interval.
	3. Recurse on [1, i-1] and [i+1, n] (if they are non-empty).
(one can prove that it doesn't matter which i you pick if there are multiple such i)
The question is: how do we find i quickly?

First of all: since the a_i are quite small, we can find their prime divisors pretty quickly. In fact, we can start by running a modified version of the sieve of Eratosthenes, which for every i gives us sp[i], the smallest prime divisor of i. This allows us to factor all the a_i in O(n lg A_max) time, at a cost of O(A_max lg lg A_max) preprocessing time.
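As a concrete sketch of this preprocessing (function and variable names are ours):

```python
def smallest_prime_factors(limit):
    """Sieve storing, for each 2 <= i <= limit, its smallest prime divisor sp[i]."""
    sp = list(range(limit + 1))  # sp[i] = i means "no smaller divisor found yet"
    for p in range(2, int(limit ** 0.5) + 1):
        if sp[p] == p:  # p is prime
            for multiple in range(p * p, limit + 1, p):
                if sp[multiple] == multiple:
                    sp[multiple] = p
    return sp

def prime_factors(x, sp):
    """Distinct prime divisors of x in O(lg x), using the sp table."""
    primes = set()
    while x > 1:
        p = sp[x]
        primes.add(p)
        while x % p == 0:
            x //= p
    return primes
```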

Next, notice that we can now build, for each prime p, a list of the indices i where a_i is divisible by p. This allows us to find, for each a_i, the largest j < i such that a_i and a_j are not coprime, and similarly the smallest k > i.
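One way to sketch this (the names `left`/`right` and the `factor` parameter are ours): a left-to-right pass that remembers, for each prime, the last index at which it occurred, followed by a symmetric right-to-left pass.

```python
def nearest_conflicts(a, factor):
    """For each i, compute left[i]: the largest j < i with gcd(a[i], a[j]) > 1
    (or -1 if none), and right[i]: the smallest k > i with gcd(a[i], a[k]) > 1
    (or len(a) if none). `factor(x)` must return the set of prime divisors of x."""
    n = len(a)
    left = [-1] * n
    last_seen = {}  # prime p -> last index i with p | a[i]
    for i in range(n):
        for p in factor(a[i]):
            if p in last_seen:
                left[i] = max(left[i], last_seen[p])
            last_seen[p] = i
    right = [n] * n
    last_seen = {}
    for i in range(n - 1, -1, -1):
        for p in factor(a[i]):
            if p in last_seen:
                right[i] = min(right[i], last_seen[p])
            last_seen[p] = i
    return left, right
```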

Using this information, we can now quickly answer the question: for an interval [l, r], can i be the root? (It can, exactly when both of its nearest non-coprime neighbours j and k lie outside [l, r].) We could thus try the following algorithm (omitting construction of the solution):

bool solve(a, l, r):
	if r <= l:
		return True
	for i in range(l, r+1):
		if i can be root:
			return solve(a, l, i-1) and solve(a, i+1, r)
	return False

The problem is of course that this can take O(n^2) time if finding i takes linear time and the split is very uneven (similar to quicksort).

The solution is to start scanning from two sides simultaneously:

bool solve(a, l, r):
	if r <= l:
		return True
	for i in range(0, (r-l)//2 + 1):
		if l+i can be root:
			return solve(a, l, l+i-1) and solve(a, l+i+1, r)
		if r-i can be root:
			return solve(a, l, r-i-1) and solve(a, r-i+1, r)
	return False

The idea behind this is that if i is near l or r (resulting in an uneven split), we find it quickly; if it is near the middle, finding it takes linear time, but then we cut the problem roughly in half, so that's fine.
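A runnable version of this two-sided scan, assuming the left/right arrays of nearest non-coprime indices described earlier (i can be the root of [l, r] exactly when left[i] < l and right[i] > r; all names are ours):

```python
def solve(left, right, l, r):
    """Check whether the indices l..r (inclusive) can form a valid subtree.
    left[i]/right[i] are the nearest indices j < i and k > i whose values share
    a prime factor with a[i] (-1 / n if none). Scans inward from both ends so
    that a root producing an uneven split is found quickly."""
    if r <= l:
        return True  # empty or single-vertex interval
    for d in range((r - l) // 2 + 1):
        i = l + d
        if left[i] < l and right[i] > r:  # i can be the root of [l, r]
            return solve(left, right, l, i - 1) and solve(left, right, i + 1, r)
        i = r - d
        if left[i] < l and right[i] > r:
            return solve(left, right, l, i - 1) and solve(left, right, i + 1, r)
    return False  # no valid root exists for this interval
```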

It turns out that this algorithm runs in O(n lg n) time. A short sketch of the proof goes as follows. Consider a vertex in the final tree; it corresponds to a specific call to the solve function. How much time did this call itself take, not counting recursive calls? Time linear in the length of the smaller of the two intervals [l, i-1] and [i+1, r]. But the size of this interval is equal to the size of the smaller subtree of our vertex.

Thus, we ask: given the final tree, if for each vertex we do work linear in the size of its smaller subtree, what is the total? We flip the question around and ask: for each vertex v, how often does it appear in the smaller subtree of one of its ancestors?

Walk from v upward along the path to the root. Each time we reach such an ancestor, its other subtree is at least as large as the subtree containing v, so when we step up past that ancestor, the size of the subtree we are in at least doubles. This doubling can happen at most O(lg n) times, so v appears in the smaller subtree of at most O(lg n) ancestors.

Thus the total runtime complexity of this algorithm is O(n lg n).

Difficulty: 80
