In recent years, diffusion models have excelled at generating high-quality images and are seen as a promising tool for generating discrete data such as text, biological sequences, and graphs. Unlike autoregressive methods, diffusion models are not constrained to generate data in a fixed order, which holds promise for advances in long-horizon planning, controllable generation, and sampling speed. However, discrete diffusion models still show a gap to autoregressive models in language modeling, especially in log-likelihood.
This article highlights a surprising fact: simple masked discrete diffusion models are more powerful than previously thought. We present an effective training recipe that significantly improves the performance of masked diffusion models, and we derive a simplified, Rao-Blackwellized objective that yields further gains. The objective has a simple form: it is a weighted average of classical masked language modeling losses, and it can be used to train encoder-only language models that admit efficient samplers, including ones that, like conventional language models, can generate text of arbitrary length semi-autoregressively.
On language modeling benchmarks, a family of masked diffusion models trained with modern engineering practices achieves a new state of the art among diffusion models and approaches the perplexity of autoregressive models.
Simplifying and Optimizing Masked Diffusion Models
Traditional discrete diffusion models often rely on complex noise processes, whereas masked diffusion models focus on a simpler one: the masking process. At each noising step, the input is converted to a special "[MASK]" token with some probability; once masked, the data stays masked.
Our work centers on masked diffusion models and derives a simplified, Rao-Blackwellized objective. This objective has lower variance during training and yields a tighter bound.
The Masking Process
Suppose we have a discrete random variable with K categories, represented as a one-hot column vector. The masking process can be viewed as gradually converting the input into the "[MASK]" token.
At each time step t, the input x is converted to the "[MASK]" token m with some probability. If the input has been converted to m at some time step t', it remains m at every time step t > t'.
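As a concrete sketch of this absorbing process (the notation below is our own: α_t denotes a noise schedule decreasing from 1 at t = 0 to 0 at t = 1, and Cat(·; π) is a categorical distribution over the K classes, with x and m one-hot vectors), the marginal of the masking process can be written as:

```latex
% Marginal of the absorbing (masking) forward process.
% \alpha_t is an assumed noise schedule with \alpha_0 = 1 and \alpha_1 = 0.
q(z_t \mid x) = \mathrm{Cat}\bigl(z_t;\ \alpha_t\, x + (1 - \alpha_t)\, m\bigr)
```

With probability α_t the token keeps its original value, and with probability 1 − α_t it has been absorbed into the mask state.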
The Reverse Unmasking Process
The reverse process recovers the original data from its noisy version. We use a neural network x_θ(z_t, t) to approximate the original data x and define the reverse process through a parameterization called SUBS.
The SUBS parameterization has two key properties: zero masking probabilities, meaning the network never places probability on the "[MASK]" token itself, and carry-over unmasking, meaning tokens that are already unmasked in z_t are copied through unchanged.
With these properties, the objective simplifies, and we obtain a tighter Rao-Blackwellized objective.
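A minimal sketch of how these two properties could be imposed on a denoiser's output (the helper `subs_probs` and its signature are hypothetical, not the paper's actual code; `mask_id` is assumed to be the vocabulary index of "[MASK]"):

```python
import torch
import torch.nn.functional as F

def subs_probs(logits: torch.Tensor, z_t: torch.Tensor, mask_id: int) -> torch.Tensor:
    """Apply the two SUBS constraints to a denoiser's output distribution.

    logits:  (batch, length, vocab) raw scores from x_theta(z_t, t)
    z_t:     (batch, length) current, partially masked token ids
    mask_id: vocabulary index of the [MASK] token
    """
    # Property 1: zero masking probabilities -- the model never predicts [MASK].
    logits = logits.clone()
    logits[..., mask_id] = float("-inf")
    probs = F.softmax(logits, dim=-1)

    # Property 2: carry-over unmasking -- already-unmasked tokens are copied through.
    unmasked = (z_t != mask_id).unsqueeze(-1)
    one_hot_zt = F.one_hot(z_t, num_classes=probs.shape[-1]).to(probs.dtype)
    return torch.where(unmasked, one_hot_zt, probs)
```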
Masked Diffusion Models for Language Modeling
To apply masked diffusion to language modeling, we treat each token as a discrete random variable. By running the masking process independently on every token and using a single model to predict the masked tokens, we can train a masked diffusion language model (MDLM) that generates text.
The MDLM objective is a weighted average of masked language modeling losses, which points to a close connection between MDLM and encoder-only models such as BERT.
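To make this weighted-average view concrete, here is a minimal PyTorch-style sketch of one training step. All names are hypothetical, and the linear noise schedule and the α_t′/(1 − α_t) weighting are assumptions used for illustration rather than the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def mdlm_loss(model, x: torch.Tensor, mask_id: int) -> torch.Tensor:
    """One MDLM-style training step: mask tokens, predict them, weight the loss.

    model:   callable mapping (z_t, t) -> logits of shape (batch, length, vocab)
    x:       (batch, length) clean token ids
    mask_id: vocabulary index of the [MASK] token
    """
    batch, length = x.shape
    t = torch.rand(batch, device=x.device)          # t ~ Uniform(0, 1), one per sequence
    alpha_t = 1.0 - t                                # assumed linear schedule, alpha_1 = 0
    d_alpha_t = -torch.ones_like(t)                  # its time derivative

    # Forward masking: each token becomes [MASK] independently with prob 1 - alpha_t.
    is_masked = torch.rand(batch, length, device=x.device) < (1.0 - alpha_t)[:, None]
    z_t = torch.where(is_masked, torch.full_like(x, mask_id), x)

    # Masked-LM cross-entropy, counted only at masked positions
    # (unmasked positions are carried over and contribute nothing).
    logits = model(z_t, t)                                             # (batch, length, vocab)
    ce = F.cross_entropy(logits.transpose(1, 2), x, reduction="none")  # (batch, length)
    ce = (ce * is_masked.float()).sum(dim=-1)

    # Weight each sequence's loss by -alpha_t' / (1 - alpha_t).
    weight = -d_alpha_t / (1.0 - alpha_t).clamp_min(1e-6)
    return (weight * ce).mean()
```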
Experimental Results
Our experiments show that MDLM achieves a new state of the art among diffusion models on language modeling benchmarks and approaches the performance of autoregressive models.
Summary
This article introduced a simple and effective masked diffusion language model (MDLM). With a simplified Rao-Blackwellized objective and an effective training recipe, MDLM makes significant progress in language modeling. Our work shows that masked diffusion models hold great potential for generating high-quality text and offer a new generative route for encoder-only models such as BERT.