The use of regularisation techniques such as $\ell^1$ and total variation minimisation in Basis Pursuit and Lasso has been a great success in inverse problems, imaging, statistics and machine learning. However, in this paper we establish the following paradox: it is impossible to design algorithms that solve these general problems accurately when given inaccurate input data, even when the inaccuracies can be made arbitrarily small. Since simple numbers such as $\sqrt{2}$ or $\cos(3)$ never come with an exact numerical representation, and such numbers are crucial in, for example, discrete Fourier and wavelet transforms, inaccurate input data is a daily encounter in applications. The impossibility result implies that, for any algorithm designed to solve these problems, there will be cases where the algorithm fails in the following way: for fixed dimensions and any small accuracy parameter $\epsilon > 0$, one can choose an arbitrarily large time $T$ and find an input such that the algorithm runs for longer than $T$ and still does not reach $\epsilon$ accuracy. Moreover, it is impossible to determine when the algorithm should halt in order to achieve an $\epsilon$-accurate solution; hence the algorithm can never produce an output that is guaranteed to be at least $\epsilon$ accurate. The largest $\epsilon$ for which this failure occurs is called the breakdown epsilon. We show that for Basis Pursuit and Lasso the breakdown epsilon is greater than $1/3$, even when the size of the input numbers is bounded by one. As our results open up a new classification theory for these regularisation problems, we provide the first steps by establishing positive classification results: sparse problems are guaranteed to admit algorithms that compute accurate solutions even with small inaccuracies in the input. This classification theory, however, turns out to be delicate. For example, under standard assumptions from sparse recovery, there are algorithms that can compute a solution to Basis Pursuit accurately; yet this is impossible for Lasso and for Basis Pursuit with noise parameter $\delta > 0$. In the latter case, however, one can compute a solution accurately up to a breakdown epsilon that typically coincides with the error bound provided by the theory of sparse recovery; moreover, this breakdown epsilon tends to zero as $\delta$ tends to zero. This helps explain the success of many modern algorithms applied in numerous real-world applications, and also explains in which cases algorithms will fail, and why. Our impossibility results are based on the Solvability Complexity Index (SCI) hierarchy and are hence universal, holding regardless of the computational model. We discuss many real-world examples to illustrate how these impossibility results have impact in applications such as medical imaging.
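For orientation, the following is a sketch of the standard formulations of the two problems named above; the symbols ($A$, $y$, $\delta$, $\lambda$, $\Gamma$, $\Xi$, $\iota$) are our own notational choices, since the abstract itself fixes no notation, and common variants (e.g. a factor $1/2$ or a square root in the Lasso fidelity term) differ only by a rescaling of $\lambda$:
\[
\text{(Basis Pursuit with noise parameter } \delta \geq 0\text{):} \qquad \min_{x \in \mathbb{R}^N} \, \|x\|_{\ell^1} \quad \text{subject to} \quad \|Ax - y\|_{\ell^2} \leq \delta,
\]
\[
\text{(Lasso):} \qquad \min_{x \in \mathbb{R}^N} \, \lambda \|x\|_{\ell^1} + \|Ax - y\|_{\ell^2}^2,
\]
where $A \in \mathbb{R}^{m \times N}$ and $y \in \mathbb{R}^m$, and $\delta = 0$ recovers classical Basis Pursuit with the equality constraint $Ax = y$. In this notation, the failure mode described above reads informally as follows: if $\Xi(\iota)$ denotes the set of minimisers for an input $\iota = (A, y)$, then for any algorithm $\Gamma$ reading inexact representations of $\iota$, and for any $T > 0$, there exists an input $\iota$ such that $\Gamma$ runs for longer than $T$ yet $\operatorname{dist}(\Gamma(\iota), \Xi(\iota)) > \epsilon$.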