Live Breaking News & Updates on Self-supervised learning


"FLPurifier: Backdoor Defense in Federated Learning vi" by Jiale Zhang, Chengcheng Zhu et al.

Recent studies have demonstrated that backdoor attacks pose a significant security threat to federated learning. Existing defense methods mainly focus on detecting or eliminating backdoor patterns after the model has been backdoored. However, these methods either cause model performance degradation or rely heavily on impractical assumptions, such as access to labeled clean data, and thus exhibit limited effectiveness in federated learning. To this end, we propose FLPurifier, a novel backdoor defense method for federated learning that purifies possible backdoor attributes before federated aggregation. Specifically, FLPurifier splits a complete model into a feature extractor and a classifier, where the extractor is trained in a decoupled contrastive manner to break the strong correlation between trigger features and the target label. Compared with existing backdoor mitigation methods, FLPurifier does not rely on impractical assumptions, since it purifies backdoor effects during training rather than in an already trained model. Moreover, to decrease the negative impact of backdoored classifiers and improve global model accuracy, we further design an adaptive classifier aggregation strategy that dynamically adjusts the weight coefficients. Extensive experimental evaluations on six benchmark datasets demonstrate that FLPurifier is effective against known backdoor attacks in federated learning with negligible performance degradation and outperforms state-of-the-art defense methods.

Adaptation models, Adaptive classifier aggregation, Backdoor attacks, Decoupled contrastive training, Feature extraction, Federated learning, Robustness, Self-supervised learning, Servers, Training
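
A minimal PyTorch sketch of the split-and-decouple idea described in the abstract: the model is divided into a feature extractor and a classifier, the extractor is updated with a supervised contrastive loss (one plausible reading of the decoupled contrastive training), the classifier head is updated with cross-entropy on detached features, and the server performs a weighted parameter average standing in for the adaptive classifier aggregation. The names (SplitModel, sup_contrastive_loss, local_update, aggregate) and the weight inputs are hypothetical assumptions, not FLPurifier's exact formulation.

```python
# Hypothetical sketch: a model split into a feature extractor and a classifier,
# trained with decoupled objectives on each client, then aggregated with
# weighted averaging on the server. Loss choices and weights are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitModel(nn.Module):
    """A complete model split into a feature extractor and a linear classifier."""
    def __init__(self, in_dim=32, feat_dim=16, num_classes=10):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                       nn.Linear(64, feat_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)

def sup_contrastive_loss(feats, labels, temperature=0.5):
    """Supervised contrastive loss: pull together features that share a label."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature                    # pairwise similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=feats.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))          # ignore self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def local_update(model, loader, epochs=1, lr=1e-3):
    """Client-side decoupled training: contrastive loss updates the extractor,
    cross-entropy on detached features updates the classifier head."""
    opt_f = torch.optim.SGD(model.extractor.parameters(), lr=lr)
    opt_c = torch.optim.SGD(model.classifier.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            feats = model.extractor(x)
            loss_f = sup_contrastive_loss(feats, y)
            opt_f.zero_grad(); loss_f.backward(); opt_f.step()

            logits = model.classifier(feats.detach())         # gradient stops here
            loss_c = F.cross_entropy(logits, y)
            opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    return model.state_dict()

def aggregate(state_dicts, weights):
    """Weighted averaging of uploaded parameters; `weights` stands in for the
    adaptive classifier coefficients (their computation is not shown here)."""
    w = torch.tensor(weights, dtype=torch.float)
    w = w / w.sum()
    return {k: sum(wi * sd[k].float() for wi, sd in zip(w, state_dicts))
            for k in state_dicts[0]}
```

Detaching the features before the classifier pass is what keeps the two objectives decoupled in this sketch: the cross-entropy gradient never reaches the extractor, so the head cannot re-couple trigger features to the target label through the backbone.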

"Fuzzy contrastive learning for online behavior analysis" by Jie Yang, Gang Huang et al.

With the prevalence of smart devices, billions of people access digital resources in their daily lives. Online user-behavior modeling, as such, has been actively researched in recent years. However, due to data uncertainty (sparseness and skewness), traditional techniques suffer from certain drawbacks, such as reliance on labor-intensive expertise or prior knowledge, a lack of interpretability and transparency, and high computational cost. As a step toward bridging the gap, this paper proposes a fuzzy-set-based contrastive learning algorithm. The general idea is to design an end-to-end learning framework that optimizes representations from contrastive samples. The proposed algorithm is characterized by three main modules: data augmentation, a fuzzy encoder, and semi-supervised optimization. More precisely, data augmentation is used to produce contrastive (positive and negative) samples based on anchor ones. The fuzzy encoder is introduced to fuzzify (or encode) the latent representation of those contrastive samples, while semi-supervised learning is then applied to optimize the fuzzy encoder and minimize the training loss simultaneously. The advantages of the proposed algorithm include no requirement for domain knowledge, preservation of the transparency and interpretability of results, and computational efficiency. Experimental results on a real-world app usage dataset demonstrate the applicability and flexibility of the proposed algorithm compared with other state-of-the-art methods.

Contrastive learning, Fuzzy set, Mobile applications analysis, Self-supervised learning, User online behavior
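
As a rough illustration of the three modules the abstract names (data augmentation, fuzzy encoder, semi-supervised optimization), here is a short PyTorch sketch. The Gaussian membership functions, the NT-Xent-style contrastive term, the noise-jitter augmentation, and the loss weight alpha are all illustrative assumptions; the paper's exact fuzzification and optimization procedure are not reproduced.

```python
# Hypothetical sketch of the pipeline: augmentation -> fuzzy encoder ->
# semi-supervised loss. Gaussian memberships and NT-Xent are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

def augment(x, noise=0.1):
    """Toy augmentation: two jittered views of the same anchor batch."""
    return x + noise * torch.randn_like(x), x + noise * torch.randn_like(x)

class FuzzyEncoder(nn.Module):
    """Encodes each sample as membership degrees over learnable Gaussian fuzzy sets."""
    def __init__(self, in_dim=20, n_sets=8):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_sets, in_dim))
        self.log_sigma = nn.Parameter(torch.zeros(n_sets))

    def forward(self, x):
        dist = torch.cdist(x, self.centers)                   # distance to each fuzzy-set center
        mu = torch.exp(-dist ** 2 / (2 * self.log_sigma.exp() ** 2))
        return mu / (mu.sum(dim=1, keepdim=True) + 1e-8)      # normalized memberships

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss between two views of the same anchors (NT-Xent style)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    n = z1.size(0)
    sim = z @ z.t() / temperature
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device),
                          float('-inf'))                      # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def semi_supervised_step(encoder, head, opt, x, y, labeled_mask, alpha=0.5):
    """One step: contrastive term on all samples plus cross-entropy
    on the labeled subset (the weighting alpha is illustrative)."""
    v1, v2 = augment(x)
    loss = nt_xent(encoder(v1), encoder(v2))
    if labeled_mask.any():
        logits = head(encoder(x)[labeled_mask])
        loss = loss + alpha * F.cross_entropy(logits, y[labeled_mask])
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example wiring (dimensions are arbitrary):
# encoder, head = FuzzyEncoder(in_dim=20, n_sets=8), nn.Linear(8, 4)
# opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
```

In this reading, encoding each sample as normalized membership degrees over a small number of learnable fuzzy sets is what would keep the representation interpretable: each output dimension can be read as how strongly a behavior belongs to a given fuzzy set.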