1. State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
2. Institute of Cryptography and Data Security, Guizhou University, Guiyang 550025, China
3. College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
4. School of Information, Guizhou University of Finance and Economics, Guiyang 550025, China
PENG Changgen (1963- ), male, from Jinping, Guizhou; Ph.D., professor at Guizhou University. His main research interests include privacy protection, cryptography, and big data security.
GAO Ting (1995- ), female, from Ji'an, Jiangxi; master's student at Guizhou University. Her main research interests include privacy protection and membership inference.
LIU Huilan (1988- ), female, from Guiyang, Guizhou; Ph.D., associate professor at Guizhou University. Her main research interests include complex data analysis, robust regression, high-dimensional data modeling, and statistical computing.
DING Hongfa (1988- ), male, from Nanyang, Henan; Ph.D., postdoctoral researcher at Guizhou University and associate professor at Guizhou University of Finance and Economics. His main research interests include privacy protection and big data security.
Online publication date: 2022-01
Print publication date: 2022-01-25
Changgen PENG, Ting GAO, Huilan LIU, et al. PCA-based membership inference attack for machine learning models[J]. Journal on Communications, 2022, 43(1): 149-160. DOI: 10.11959/j.issn.1000-436x.2022009.
Aiming at the restricted-access failure problem of current black-box membership inference attacks, a membership inference attack based on principal component analysis (PCA) was proposed. Firstly, to address the restricted-access problem of black-box membership inference attacks, a fast decision membership inference attack named fast-attack was proposed: based on perturbation samples obtained from the distance sign gradient, the perturbation difficulty was mapped to the distance domain for membership inference. Secondly, to address the low transferability of fast-attack, a PCA-based membership inference attack named PCA-based attack was proposed, combining the perturbation-based algorithm of fast-attack with PCA to suppress the low-transfer behavior caused by excessive reliance on the target model. Finally, experiments show that fast-attack reduces the access cost while ensuring attack accuracy, and that PCA-based attack outperforms the baseline attack in the unsupervised setting, improving the model transfer rate by 10% over fast-attack.
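The attack pipeline summarized in the abstract can be illustrated with a minimal, hypothetical sketch (not the paper's implementation): membership is inferred by thresholding a label-only "flip distance" measured along a PCA direction computed from auxiliary data, on the intuition that training members sit farther from the decision boundary than non-members. The toy model, helper names, and constants below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box classifier exposing label-only (hard-label) access,
# matching the attack's threat model. Purely illustrative.
def model_predict(x):
    return int(x.sum() > 0.0)

def principal_direction(aux_data):
    """First PCA direction of auxiliary data (via SVD). Using a
    model-agnostic perturbation direction is the transferability
    idea behind the PCA-based variant."""
    centered = aux_data - aux_data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def flip_distance(x, direction, step=0.05, max_steps=400):
    """Smallest perturbation magnitude along +/-direction that changes
    the predicted label: a proxy for distance to the decision boundary."""
    label = model_predict(x)
    d = direction / np.linalg.norm(direction)
    for k in range(1, max_steps + 1):
        for sign in (1.0, -1.0):
            if model_predict(x + sign * k * step * d) != label:
                return k * step
    return max_steps * step  # never flipped within the query budget

def infer_member(x, direction, threshold):
    """Declare 'member' when the flip distance exceeds the threshold:
    training points tend to sit farther from the decision boundary."""
    return flip_distance(x, direction) >= threshold

# Demo: anisotropic auxiliary data makes the principal direction predictable.
aux = rng.normal(size=(200, 2)) * np.array([3.0, 0.3])
d = principal_direction(aux)
x_far = np.array([2.0, 2.0])      # far from the boundary: member-like
x_near = np.array([0.02, -0.01])  # near the boundary: non-member-like
```

Because the perturbation direction is derived from auxiliary data rather than from queries to a specific target model, the same attack statistic can be reused across models, which is the sense in which the PCA variant mitigates the low-transfer behavior of a purely model-driven perturbation search.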