This Perspective outlines how generative artificial intelligence aligns with and supports the core ideas of generative linguistics, and how generative linguistics can in turn provide criteria for evaluating and improving neural language models.
This paper surveys recent replication studies of DeepSeek-R1, focusing on Supervised Fine-Tuning (SFT) and Reinforcement Learning from Verifiable Rewards (RLVR). It details their data construction, method design, and training procedures, offering insights into current practice and anticipating future research directions for reasoning language models.
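The key property of RLVR is that the reward comes from a programmatic check against a reference answer rather than from a learned reward model. The sketch below illustrates this idea in the simplest case; the function name, the binary 0/1 reward, and the \boxed{} answer convention are illustrative assumptions common in math-reasoning setups, not details taken from any specific replication study.

```python
import re

def verifiable_reward(response: str, gold_answer: str) -> float:
    """Binary verifiable reward: 1.0 if the model's final answer
    matches the reference answer exactly, else 0.0.

    Assumes (hypothetically) that the model wraps its final answer
    in \\boxed{...}, a convention often used in math RLVR pipelines.
    """
    match = re.search(r"\\boxed\{([^}]*)\}", response)
    if match is None:
        return 0.0  # no extractable answer, so nothing to verify
    predicted = match.group(1).strip()
    return 1.0 if predicted == gold_answer.strip() else 0.0

# The reward requires no learned model, only a checkable rule:
print(verifiable_reward(r"... so the answer is \boxed{42}", "42"))  # 1.0
print(verifiable_reward(r"I think it's 41", "42"))                  # 0.0
```

Because such rewards are cheap and hard to game compared with learned reward models, they are well suited to the large-scale RL training that the surveyed replications apply to reasoning tasks with checkable outputs, such as math and code.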