Over the past decades, the field of Computational Linguistics (CL) has undergone a steady shift away from rule-based, linguistically motivated modeling towards statistical learning and the induction of unsupervised feature representations. However, the natural language components used in today’s CL pipelines are still static in the sense that their statistical model or rule base is created once and then applied without further change. In this talk, I will motivate an adaptive approach to natural language semantics with different levels of adaptivity. In corpus adaptation, a prototypical conceptualization is induced from corpora and used as a feature representation for machine learning tasks. In corpus- and text-adaptive approaches, this proto-conceptualization is contextualized to the particular text at hand. Finally, in incremental learning approaches, the model’s capability to solve a particular task is iteratively improved through user interaction, adhering to the cognitive computing paradigm. The utility of these approaches is demonstrated in semantic tasks such as lexical substitution, word sense disambiguation, and paraphrasing, as well as in applications such as semantic search, question answering, and a semantic writing aid.