In this paper, the authors propose a new position encoding method, Contextual Position Encoding (CoPE), that allows positions to be conditioned on context by letting the model decide which tokens increment the position counter. This enables more general position addressing, such as attending to the $i$-th particular word, noun, or sentence. The paper demonstrates that CoPE can solve selective copy, counting, and Flip-Flop tasks where popular position embeddings fail, and that it improves perplexity on language modeling and coding tasks.
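
To make the mechanism concrete, below is a minimal sketch (not the authors' implementation) of how CoPE-style contextual positions could be computed inside a single causal attention head: a sigmoid gate on each query-key pair decides whether that key increments the position counter, fractional positions arise as a reverse cumulative sum of the gates, and position logits are linearly interpolated between the two nearest integer position embeddings. Shapes, names, and the `1/sqrt(d)` scaling are illustrative assumptions.

```python
import torch


def cope_attention(q, k, pos_emb):
    """Sketch of CoPE for one causal attention head.

    q, k:     (seq_len, head_dim) query and key vectors
    pos_emb:  (n_positions, head_dim) learned position embeddings
    Returns attention weights of shape (seq_len, seq_len).
    """
    T, d = q.shape
    scores = (q @ k.t()) / d ** 0.5                 # raw content logits (T, T)
    causal = torch.tril(torch.ones(T, T, dtype=torch.bool))

    # Gate g_ij decides whether key j increments the position counter for query i.
    gates = torch.sigmoid(scores) * causal

    # Contextual position p_ij = sum of gates from key j up to query i
    # (a reverse cumulative sum along the key dimension).
    pos = gates.flip(-1).cumsum(-1).flip(-1)
    pos = pos.clamp(max=pos_emb.shape[0] - 1)

    # Positions are fractional, so interpolate between the two nearest
    # integer position embeddings via their logits q_i . e[p].
    pos_logits = q @ pos_emb.t()                    # (T, n_positions)
    lo, hi = pos.floor().long(), pos.ceil().long()
    w = pos - pos.floor()
    pos_bias = (1 - w) * pos_logits.gather(-1, lo) + w * pos_logits.gather(-1, hi)

    scores = (scores + pos_bias).masked_fill(~causal, float("-inf"))
    return scores.softmax(-1)
```

Because the gates depend on the query-key context rather than on raw token indices, the "position" a query sees can count only the tokens the model deems relevant (e.g. sentence boundaries), which is what enables the more abstract addressing described above.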