Beyond Similarity: Personalized Federated Recommendation with Composite Aggregation

In recent years, federated recommendation (FR) has emerged as an on-device learning paradigm that has attracted wide attention in both academia and industry. Existing FR methods typically adopt various collaborative filtering models as local models and combine them into a global recommender through different aggregation functions, following standard federated learning (FL) principles. A pioneering example is FCF, which adapts centralized matrix factorization by performing local updates and aggregating them globally via federated optimization. Building on FCF, FedNCF combines the linearity of matrix factorization with the non-linearity of deep embedding techniques. These embedding-based FR models strike an effective balance between recommendation accuracy and privacy protection.
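To make the FCF-style workflow concrete, here is a minimal toy sketch (not the papers' actual implementations): each client keeps its user vector on-device, computes a gradient only for the item-embedding rows it has interacted with, and the server averages the uploaded item-table gradients FedAvg-style. All names (`local_update`, the toy interaction lists, learning rates) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 6, 4

# Global item embedding table held by the server; in FCF-style training
# the user embeddings never leave the device.
global_items = rng.normal(scale=0.1, size=(n_items, dim))

def local_update(item_table, interactions, lr=0.05):
    """One client round: refine the private user vector locally and
    return an item-table gradient covering only interacted items."""
    user = rng.normal(scale=0.1, size=dim)
    grad = np.zeros_like(item_table)
    for i, rating in interactions:
        err = rating - user @ item_table[i]
        user += lr * err * item_table[i]   # user update stays on-device
        grad[i] -= err * user              # only this row leaves the device
    return grad

# Two clients with disjoint interaction histories.
clients = [[(0, 1.0), (1, 1.0)], [(2, 1.0), (3, 1.0)]]
grads = [local_update(global_items, c) for c in clients]

# Server: FedAvg-style averaging of the uploaded item-table gradients.
new_items = global_items - 0.05 * np.mean(grads, axis=0)
```

Note that rows 4 and 5, which no client interacted with, receive a zero gradient and are never moved by the aggregation step; this is exactly the gap the article discusses next.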

However, existing FR methods mostly reuse aggregation functions invented in the federated vision domain, such as clustered aggregation, to aggregate parameters from similar clients. Although these methods achieve considerable performance, we argue that transplanting them directly into federated recommendation is suboptimal, mainly because of differences in model structure. Unlike the structured parameters (e.g., convolutional neural networks) used in federated vision, FR models typically rely on a one-to-one item embedding table to discriminate between items. This difference gives rise to the embedding skew problem: embeddings that have already been trained keep being updated during aggregation while untrained embeddings are ignored, so the model fails to predict future items accurately.

To address this issue, we propose FedCA, a personalized federated recommendation model based on composite aggregation. It aggregates not only similar clients, to strengthen trained embeddings, but also complementary clients, to update untrained embeddings. Furthermore, we cast the entire learning process as a unified optimization algorithm that learns similarity and complementarity jointly. Extensive experiments on several real-world datasets demonstrate the effectiveness of the proposed model.

The Embedding Skew Problem: A Challenge Unique to Federated Recommendation

A federated recommendation model usually maintains a single embedding table that stores the representations of all items, while each client trains only the embeddings of the items it has interacted with. Under conventional similarity-based aggregation, the embedding skew problem arises: the embeddings of trained items are optimized continuously, while the embeddings of untrained items stay unchanged or even degrade. As a result, the model performs poorly when predicting items a user may be interested in next, because it lacks information about the untrained items.
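The skew can be seen in a few lines of toy code, assuming nothing about the real models: a client that only ever touches items 0 and 1 drives those rows toward a target direction round after round, while the remaining rows stay frozen near their random initialization. The `target` direction and step size here are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, dim, rounds = 4, 3, 50
emb = rng.normal(scale=0.01, size=(n_items, dim))

trained = [0, 1]           # the only items this client ever interacts with
target = np.ones(dim)      # stand-in for the direction local gradients pull toward
for _ in range(rounds):
    for i in trained:
        emb[i] += 0.1 * (target - emb[i])   # repeated local updates

# Trained rows converge toward the target; untrained rows never move.
trained_norm = np.linalg.norm(emb[trained], axis=1).mean()
untrained_norm = np.linalg.norm(emb[2:], axis=1).mean()
```

After 50 rounds the trained rows have drifted far from initialization while the untrained rows retain their near-zero starting norms, so any score computed against the untrained rows is essentially noise.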

FedCA: Personalized Federated Recommendation with Composite Aggregation

To resolve the embedding skew problem, we propose FedCA, which adopts a composite aggregation mechanism that accounts for both model similarity and data complementarity.

  • Model similarity: FedCA aggregates models from similar clients to strengthen the embeddings of trained items.
  • Data complementarity: FedCA aggregates models from complementary clients to update the embeddings of untrained items.

FedCA learns similarity and complementarity jointly within a unified optimization framework. In this way, it aggregates item embeddings more effectively, improving both predictive accuracy and generalization.
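One way to picture such a composite weighting (a simplified sketch, not FedCA's actual optimization) is to score every other client by a mix of two signals: cosine similarity between embedding tables, which reinforces a client's trained rows, and overlap between the other client's interaction mask and this client's *uncovered* items, which supplies information for untrained rows. The function `composite_weights`, the fixed `alpha`, and the toy masks below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_clients, n_items, dim = 3, 5, 4

# Per-client item embedding tables and binary interaction masks.
tables = rng.normal(size=(n_clients, n_items, dim))
masks = np.array([[1, 1, 0, 0, 0],
                  [1, 1, 1, 0, 0],
                  [0, 0, 0, 1, 1]], dtype=float)

def composite_weights(k, alpha=0.5):
    """Mix table-level cosine similarity (reinforces trained rows) with
    interaction-mask complementarity (covers client k's untrained rows)."""
    flat = tables.reshape(n_clients, -1)
    sim = flat @ flat[k] / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(flat[k]) + 1e-8)
    # Fraction of items client j has trained that client k has not.
    comp = (masks * (1 - masks[k])).sum(axis=1) / n_items
    w = alpha * np.maximum(sim, 0) + (1 - alpha) * comp
    return w / w.sum()

# Personalized aggregate of the embedding tables for client 0.
w = composite_weights(0)
personalized = np.tensordot(w, tables, axes=1)
```

In this toy setup client 2 shares no items with client 0, so a pure similarity scheme could assign it near-zero weight; the complementarity term keeps its contribution alive precisely because it has trained the rows client 0 lacks.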

Experimental Results

We conduct experiments on four benchmark datasets: Movielens-100K, Filmtrust, Movielens-1M, and Microlens-100K. The results show that FedCA outperforms all baselines, including FCF, FedAvg, PerFedRec, FedAtt, FedFast, pFedGraph, and PFedRec, on every dataset. Ablation studies further verify the effectiveness of both the model-similarity and data-complementarity components of FedCA.

Summary

This paper first rethinks the fundamental differences between federated vision and federated recommendation. Specifically, federated vision performs federated optimization mainly over structured parameters (e.g., convolutional neural networks), whereas federated recommendation relies on a one-to-one item embedding table for personalized recommendation. This key difference renders similarity-based aggregation methods borrowed from federated vision ineffective for aggregating embedding tables, giving rise to the embedding skew problem. To tackle this challenge, we propose a composite aggregation mechanism tailored to federated recommendation. Concretely, by combining model similarity and data complementarity within a unified optimization framework, our method strengthens the trained embeddings of items each client has interacted with and refines the untrained embeddings of items it has not, enabling effective prediction of future items. In addition, we examine why the proximal term is ineffective for personalized preferences in federated recommendation and propose an interpolation method to mitigate the resulting spatial misalignment.

This work proposes a promising composite aggregation framework for federated recommendation. It is a model-agnostic, plug-and-play module that can be seamlessly integrated into mainstream federated recommendation models. One limitation is that the weights assigned to similarity and complementarity must currently be tuned by hand; future work could mitigate this by using automated machine learning techniques to learn the weighting adaptively. Exploring similarity and complementarity mechanisms better suited to federated recommendation is another promising direction.

References

[1] Hongzhi Yin, Liang Qu, Tong Chen, Wei Yuan, Ruiqi Zheng, Jing Long, Xin Xia, Yuhui Shi, and Chengqi Zhang. On-device recommender systems: A comprehensive survey. arXiv preprint arXiv:2401.11441, 2024.

[2] Karan Singhal, Hakim Sidahmed, Zachary Garrett, Shanshan Wu, John Rush, and Sushant Prakash. Federated reconstruction: Partially local federated learning. In NeurIPS, pages 11220–11232, 2021.

[3] Canh T Dinh, Nguyen Tran, and Josh Nguyen. Personalized federated learning with moreau envelopes. In NeurIPS, pages 21394–21405, 2020.

[4] Muhammad Ammad-Ud-Din, Elena Ivannikova, Suleiman A Khan, Were Oyomno, Qiang Fu, Kuan Eeik Tan, and Adrian Flanagan. Federated collaborative filtering for privacy-preserving personalized recommendation system. arXiv preprint arXiv:1901.09888, 2019.

[5] Honglei Zhang, Fangyuan Luo, Jun Wu, Xiangnan He, and Yidong Li. LightFR: Lightweight federated recommendation with privacy-preserving matrix factorization. ACM Trans. Inf. Syst., 41(4):1–28, 2023.

[6] Lin Ning, Karan Singhal, Ellie X Zhou, and Sushant Prakash. Learning federated representations and recommendations with limited negatives. arXiv preprint arXiv:2108.07931, 2021.

[7] Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloé Kiddon, and Daniel Ramage. Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604, 2018.

[8] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.

[9] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In The Web Conference, pages 173–182, 2017.

[10] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In AISTATS, pages 1273–1282, 2017.

[11] Khalil Muhammad, Qinqin Wang, Diarmuid O'Reilly-Morgan, Elias Tragos, Barry Smyth, Neil Hurley, James Geraci, and Aonghus Lawlor. FedFast: Going beyond average for faster training of federated recommender systems. In SIGKDD, pages 1234–1242, 2020.

[12] Vasileios Perifanis and Pavlos S Efraimidis. Federated neural collaborative filtering. Knowl.-Based Syst., 242:1–16, 2022.

[13] Zhiwei Li, Guodong Long, and Tianyi Zhou. Federated recommendation with additive personalization. In ICLR, 2024.

[14] Wei Yuan, Liang Qu, Lizhen Cui, Yongxin Tong, Xiaofang Zhou, and Hongzhi Yin. Hetefedrec: Federated recommender systems with model heterogeneity. In ICDE, 2024.

[15] Bingyan Liu, Yao Guo, and Xiangqun Chen. PFA: Privacy-preserving federated adaptation for effective model personalization. In The Web Conference, pages 923–934, 2021.

[16] Shaoxiong Ji, Shirui Pan, Guodong Long, Xue Li, Jing Jiang, and Zi Huang. Learning private neural language modeling with attentive aggregation. In IJCNN, pages 1–8, 2019.

[17] Avishek Ghosh, Jichan Chung, Dong Yin, and Kannan Ramchandran. An efficient framework for clustered federated learning. In NeurIPS, pages 19586–19597, 2020.

[18] Sichun Luo, Yuanzhang Xiao, and Linqi Song. Personalized federated recommendation via joint representation learning, user clustering, and model adaptation. In CIKM, pages 4289–4293, 2022.

[19] Xin Xia, Hongzhi Yin, Junliang Yu, Qinyong Wang, Guandong Xu, and Quoc Viet Hung Nguyen. On-device next-item recommendation with self-supervised knowledge distillation. In SIGIR, pages 546–555, 2022.

[20] Chunxu Zhang, Guodong Long, Tianyi Zhou, Peng Yan, Zijian Zhang, Chengqi Zhang, and Bo Yang. Dual personalization on federated recommendation. In IJCAI, 2024.

[21] Jinze Wu, Qi Liu, Zhenya Huang, Yuting Ning, Hao Wang, Enhong Chen, Jinfeng Yi, and Bowen Zhou. Hierarchical personalized federated learning for user modeling. In The Web Conference, pages 957–968, 2021.

[22] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In MLSys, pages 429–450, 2020.

[23] Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Exploiting shared representations for personalized federated learning. In ICML, pages 2089–2099, 2021.

[24] Filip Hanzely, Slavomír Hanzely, Samuel Horváth, and Peter Richtárik. Lower bounds and optimal algorithms for personalized federated learning. In NeurIPS, pages 2304–2315, 2020.

[25] Xinrui He, Shuo Liu, Jacky Keung, and Jingrui He. Co-clustering for federated recommender system. In The Web Conference, pages 3821–3832, 2024.

[26] Rui Ye, Zhenyang Ni, Fangzhao Wu, Siheng Chen, and Yanfeng Wang. Personalized federated learning with inferred collaboration graphs. In ICML, pages 39801–39817, 2023.

[27] Guibing Guo, Jie Zhang, and Neil Yorke-Smith. A novel bayesian similarity measure for recommender systems. In IJCAI, pages 2619–2625, 2013.

[28] F Maxwell Harper and Joseph A Konstan. The movielens datasets: History and context. ACM Trans. Interact. Intell. Syst., 5(4):1–19, 2015.

[29] Ellango Jothimurugesan, Kevin Hsieh, Jianyu Wang, Gauri Joshi, and Phillip B Gibbons. Federated learning under distributed concept drift. In IJCAI, pages 5834–5853, 2023.

[30] Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Phys. Rev. E, 69(6):1–16, 2004.

[31] Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. J. Mach. Learn. Res., 17(83):1–5, 2016.

[32] Lorenzo Minto, Moritz Haller, Benjamin Livshits, and Hamed Haddadi. Stronger privacy for federated collaborative filtering with implicit feedback. In RecSys, pages 342–350, 2021.

[33] Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, and Chiyuan Zhang. Deep learning with label differential privacy. In NeurIPS, pages 27131–27145, 2021.

[34] Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, and Zico Kolter. Certified robustness to label-flipping attacks via randomized smoothing. In ICML, pages 8230–8241, 2020.

[35] Yongxin Ni, Yu Cheng, Xiangyan Liu, Junchen Fu, Youhua Li, Xiangnan He, Yongfeng Zhang, and Fajie Yuan. A content-driven micro-video recommendation dataset at scale. arXiv preprint arXiv:2309.15379, 2023.

[36] Andriy Mnih and Russ R Salakhutdinov. Probabilistic matrix factorization. NeurIPS, 2007.

[37] Christos Boutsidis, Anastasios Zouzias, and Petros Drineas. Random projections for k-means clustering. NeurIPS, 2010.

[38] David Goldberg, David Nichols, Brian M Oki, and Douglas Terry. Using collaborative filtering to weave an information tapestry. Commun. ACM, 35(12):61–70, 1992.

[39] Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. Efficient and robust automated machine learning. In NeurIPS, 2015.
