A historical language model for nuanced dialogue
Talkie is a 13-billion-parameter language model trained on a dataset of 260 billion tokens of text published up to 1930. It reflects the culture and values of the texts it was trained on rather than the views of its developers. Talkie can generate a wide range of outputs, and users should be aware that these outputs may sometimes be inaccurate or offensive. To mitigate this, Talkie's outputs are moderated with Qwen3Guard-Gen-4B; because moderation runs only after a message has been generated, users may briefly see objectionable content before it is flagged.
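The generate-then-moderate flow described above can be sketched as follows. This is a minimal illustration, not Talkie's actual serving code: the function names and `ModeratedMessage` type are hypothetical, and a simple keyword check stands in for the Qwen3Guard-Gen-4B classifier so the example is self-contained.

```python
from dataclasses import dataclass


@dataclass
class ModeratedMessage:
    text: str
    flagged: bool


# Stand-in policy for the guard model; the real system would call
# Qwen3Guard-Gen-4B to classify the generated text instead.
BLOCKLIST = {"offensive"}


def moderate(text: str) -> bool:
    """Return True if the message should be flagged (guard-model stand-in)."""
    return any(word in text.lower() for word in BLOCKLIST)


def generate_and_moderate(prompt: str, generate) -> ModeratedMessage:
    # Generation completes first; moderation runs afterwards, which is
    # why raw output exists briefly before it can be flagged.
    text = generate(prompt)
    return ModeratedMessage(text=text, flagged=moderate(text))


# Usage with a dummy generator in place of the Talkie model:
msg = generate_and_moderate("Hello", lambda p: "a polite reply")
```

Because flagging happens after the fact, a client displaying `msg.text` as it streams in would need to retract or mask it once `flagged` comes back true.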