We investigate the connection between deep neural network architectures and iterative regularisation methods. We show that constraining the parameters of deep neural networks can restore certain mathematical properties that are present in iterative regularisation methods but are usually absent in deep neural networks. We discuss several different architectures, with particular focus on so-called variational networks and novel variants thereof. We conclude with numerical results that compare constrained variational networks to unconstrained ones. This is joint work with Erich Kobler, Tom Pock and Martin Burger.
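To make the idea of a constrained variational network concrete, the following is a minimal, purely illustrative PyTorch sketch of one unrolled gradient-descent-style step with a learned regulariser. The specific constraint shown (a step size kept nonnegative via a softplus reparametrisation, echoing the descent behaviour of an iterative regularisation method) and all names (`ConstrainedVNStep`, `forward_op`, `adjoint_op`) are assumptions for illustration, not the constructions discussed in the talk.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConstrainedVNStep(nn.Module):
    """One unrolled step of a variational-network-style iteration (illustrative).

    Hypothetical constraint: the step size is reparametrised through a softplus,
    so it stays nonnegative, mimicking a property of iterative regularisation.
    """

    def __init__(self, channels=1, filters=8, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # learned analysis filters K and their transpose K^T
        self.conv = nn.Conv2d(channels, filters, kernel_size, padding=pad, bias=False)
        self.convT = nn.ConvTranspose2d(filters, channels, kernel_size, padding=pad, bias=False)
        self._step = nn.Parameter(torch.tensor(0.0))  # unconstrained raw parameter

    def step_size(self):
        return F.softplus(self._step)  # constrained to be positive

    def forward(self, x, y, forward_op, adjoint_op):
        # data-fidelity gradient: A^T (A x - y)
        data_grad = adjoint_op(forward_op(x) - y)
        # learned regulariser gradient: K^T rho'(K x), with rho' = tanh here
        reg_grad = self.convT(torch.tanh(self.conv(x)))
        return x - self.step_size() * (data_grad + reg_grad)


# toy usage: denoising, where the forward operator A is the identity
if __name__ == "__main__":
    step = ConstrainedVNStep()
    y = torch.randn(1, 1, 32, 32)  # noisy observation
    x = y.clone()
    identity = lambda z: z
    for _ in range(5):  # a few unrolled iterations
        x = step(x, y, identity, identity)
    print(x.shape)
```

An unconstrained variant would simply use the raw step parameter (and unconstrained filters) directly; comparing the two is the kind of experiment the abstract's numerical results refer to.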