This paper surveys recent replication studies of DeepSeek-R1, focusing on Supervised Fine-Tuning (SFT) and Reinforcement Learning from Verifiable Rewards (RLVR). It examines their data construction, method design, and training procedures, distills key insights, and outlines promising directions for future research on reasoning language models.