"FLPurifier: Backdoor Defense in Federated Learning vi" by Jiale Zhang, Chengcheng Zhu et al.

Recent studies have demonstrated that backdoor attacks pose a significant security threat to federated learning. Existing defense methods mainly focus on detecting or eliminating backdoor patterns after the model has been backdoored. However, these methods either cause model performance degradation or rely heavily on impractical assumptions, such as access to labeled clean data, and thus exhibit limited effectiveness in federated learning. To this end, we propose FLPurifier, a novel backdoor defense method for federated learning that can effectively purify possible backdoor attributes before federated aggregation. Specifically, FLPurifier splits a complete model into a feature extractor and a classifier, in which the extractor is trained in a decoupled contrastive manner to break the strong correlation between trigger features and the target label. Compared with existing backdoor mitigation methods, FLPurifier doesn't rely on impractical assumptions, since it purifies backdoor effects during the training process rather than in an already trained model. Moreover, to decrease the negative impact of backdoored classifiers and improve global model accuracy, we further design an adaptive classifier aggregation strategy that dynamically adjusts the weight coefficients. Extensive experimental evaluations on six benchmark datasets demonstrate that FLPurifier is effective against known backdoor attacks in federated learning with negligible performance degradation and outperforms state-of-the-art defense methods.
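The abstract does not specify how the adaptive weight coefficients are computed, so the following is only a minimal NumPy sketch of the general idea: down-weighting client classifiers that deviate from the consensus before averaging. The cosine-similarity weighting rule and the function name `adaptive_classifier_aggregation` are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def adaptive_classifier_aggregation(classifier_weights):
    """Aggregate per-client classifier weights with adaptive coefficients.

    Hypothetical rule (not from the paper): each client's coefficient is its
    non-negative cosine similarity to the plain average of all classifiers,
    normalized to sum to 1, so outlying (possibly backdoored) classifiers
    contribute less to the global classifier.
    """
    # Flatten each client's classifier into a row vector: (n_clients, d).
    W = np.stack([np.asarray(w).ravel() for w in classifier_weights])
    mean = W.mean(axis=0)
    # Cosine similarity of each client classifier to the average.
    sims = W @ mean / (np.linalg.norm(W, axis=1) * np.linalg.norm(mean) + 1e-12)
    coeffs = np.clip(sims, 0.0, None)          # negative similarity -> weight 0
    coeffs = coeffs / (coeffs.sum() + 1e-12)   # normalize to a convex combination
    agg = (coeffs[:, None] * W).sum(axis=0)
    return agg.reshape(np.asarray(classifier_weights[0]).shape), coeffs
```

With three mutually similar "benign" classifiers and one sign-flipped outlier, the outlier receives the smallest coefficient, which mirrors the paper's stated goal of reducing the influence of backdoored classifiers at aggregation time.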

Related Keywords

Adaptation Models, Adaptive Classifier Aggregation, Backdoor Attacks, Decoupled Contrastive Training, Feature Extraction, Federated Learning, Robustness, Self-Supervised Learning, Servers, Training

© 2024 Vimarsana