I don't feel comfortable trying to build reliable, well-understood NLP systems without providing a semantics for their "mental state," including the data structures they encode. One step in that direction is a semantics of sentences, such as Montague's. To handle extended discourses, however, the methods of model-theoretic semantics need to be extended in (at least) three directions.

First, anaphora and deixis require that interpretation (of phrases, sentences, and entire discourses) be made sensitive to context, both linguistic and physical. Second, non-declarative sentences must be treated uniformly with declarative ones. Finally, the semantics should make room for interpretation constraints based on the subject matter of the discourse and on communication itself; I am thinking particularly of the kinds of inferences typically captured computationally by scripts, plans, prototypes, and the like. Such constraints on interpretation have usually been kept outside the realm of semantics, and might even be taken to be the distinguishing characteristic of a separate pragmatics component.

I'd like to suggest that many of the necessary elements for a context-sensitive theory of discourse are already available, although substantial work remains to bring the pieces together and to build implementations faithful to the semantics.
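To make the first requirement concrete, context-sensitive interpretation can be viewed as a function from a discourse context to a content, where each sentence both consumes the context (to resolve pronouns) and updates it (by introducing referents), in the general spirit of dynamic approaches to discourse. The sketch below is purely illustrative; the data structures and names are my own assumptions, not anything proposed in the text.

```python
# Illustrative sketch (hypothetical): interpretation relative to a
# discourse context. Each sentence is interpreted against the context
# and updates it, so a later pronoun can resolve to an earlier referent.

class Context:
    """Linguistic context: a list of salient discourse referents."""
    def __init__(self):
        self.referents = []  # most recently introduced last

    def introduce(self, entity):
        self.referents.append(entity)

    def resolve_pronoun(self, gender):
        # Resolve to the most recently introduced matching referent.
        for entity in reversed(self.referents):
            if entity["gender"] == gender:
                return entity
        raise ValueError("unresolved pronoun")

def interpret(sentence, ctx):
    """Interpret one (subject, verb) pair relative to ctx, updating ctx."""
    subj, verb = sentence
    if subj == "he":
        entity = ctx.resolve_pronoun("masc")
    elif subj == "she":
        entity = ctx.resolve_pronoun("fem")
    else:
        # An indefinite noun phrase introduces a new referent.
        entity = {"name": subj, "gender": "masc" if subj == "a man" else "fem"}
        ctx.introduce(entity)
    return (entity["name"], verb)  # a crude stand-in for a logical form

ctx = Context()
discourse = [("a man", "walks"), ("he", "smiles")]
forms = [interpret(s, ctx) for s in discourse]
print(forms)  # "he" resolves to the man introduced by the first sentence
```

The point of the sketch is only that the meaning of "he smiles" cannot be assigned sentence-by-sentence in isolation: it is a function of the context built up by the preceding discourse, which is exactly what a context-insensitive sentence semantics leaves out.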