Partial differential equations with more than three coordinates arise naturally if the model features certain kinds of stochasticity. Typical examples are the Schrödinger, Fokker-Planck and master equations in quantum mechanics or cell biology, as well as uncertainty quantification. The principal difficulty of a straightforward numerical solution of such equations is the `curse of dimensionality': the storage cost of the discrete solution grows exponentially with the number of coordinates (dimensions). One way to reduce the complexity is a low-rank separation of variables. All discrete data (such as the solution) can be seen as multi-index arrays, or tensors. These large tensors are never stored directly; instead, we approximate them by a sum of products of smaller factors, each carrying only one of the original variables. I will present one of the simplest yet most powerful such product representations, the Tensor Train (TT) decomposition, which generalizes the low-rank approximation of a matrix to the tensor case. Many interesting models have been found to admit such approximations with a significant reduction of storage demands. I will also discuss algorithms that compute approximate solutions directly in the decomposed representation.
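
For concreteness, here is a minimal sketch of the TT format in standard notation (the factor names $G_k$ and ranks $r_k$ are illustrative): a $d$-dimensional tensor $A$ is represented elementwise as
\[
A(i_1, i_2, \ldots, i_d) \approx G_1(i_1)\, G_2(i_2) \cdots G_d(i_d),
\]
where each $G_k(i_k)$ is an $r_{k-1} \times r_k$ matrix (with $r_0 = r_d = 1$, so the product is a scalar), and each factor $G_k$ depends only on the $k$-th variable $i_k$. If every mode has $n$ points and the ranks are bounded by $r$, the storage cost is $\mathcal{O}(d n r^2)$ instead of the $\mathcal{O}(n^d)$ required for the full tensor; for $d = 2$ the format reduces to the familiar low-rank matrix factorization $A \approx U V^{\top}$.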