The ability of large generative models to respond naturally to text, image and audio inputs has created significant excitement. Particularly interesting is their capacity to generate outputs that resemble coherent reasoning and computational sequences. I will discuss the inherent computational capability of large language models and show that autoregressive decoding supports universal computation, even without pre-training. The co-existence of informal and formal computational systems in the same model does not change what is computable, but it does provide new means for eliciting desired behaviour. I will then discuss how post-training, intended to make a model more directable, faces severe computational limits on what it can achieve, and how accounting for these limits can improve outcomes.
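To make the claim about autoregressive decoding concrete in a toy setting, here is a minimal sketch that is not drawn from the talk itself: a fixed, untrained next-token rule whose decode loop emits successive rows of the Rule 110 cellular automaton (a system proved Turing-universal by Cook), so the computation lives entirely in repeatedly appending next_token(context) to a growing sequence. The names next_token and RULE_110, the row delimiter, and the finite-row boundary handling are illustrative assumptions, not details from the abstract.

```python
# Illustrative sketch (not the speaker's construction): a fixed, hand-written
# next-token rule whose autoregressive closure carries out the steps of a
# Turing-universal system.  Each decoding step appends one cell of the next
# Rule 110 row, with '|' marking row boundaries; no learning is involved.

RULE_110 = {  # (left, centre, right) -> next cell
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def next_token(context: str) -> str:
    """Deterministic next-token rule: extend the row in progress using the
    previous complete row stored earlier in the context."""
    rows = context.split("|")
    prev, cur = rows[-2], rows[-1]        # last complete row, row in progress
    i = len(cur)                          # index of the cell to emit next
    if i == len(prev):                    # row finished: emit the delimiter
        return "|"
    window = (
        int(prev[i - 1]) if i > 0 else 0,  # cells outside the row count as 0
        int(prev[i]),
        int(prev[i + 1]) if i + 1 < len(prev) else 0,
    )
    return str(RULE_110[window])

# Autoregressive decoding: start from a "prompt" (the initial row) and keep
# appending tokens.  The growing sequence is the whole computation trace.
context = "0" * 20 + "1" + "|"
for _ in range(22 * 12):                  # decode 12 further rows
    context += next_token(context)

for row in context.split("|"):
    if row:
        print(row.replace("0", ".").replace("1", "#"))
```

The toy truncates the automaton to a finite row with a zero boundary, so it only gestures at universality rather than realising it; the point is merely that the decode loop itself, not any trained parameters, is doing the computing.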