Lecture Announcement

Differentially Private Distributed Machine Learning

Source: Guangzhou Institute

Lecture Title: Differentially Private Distributed Machine Learning

Speaker: Associate Professor Miao Pan

Time: December 16, 9:00 AM

Venue: Tencent Meeting live stream (Meeting ID: 348908387)


Speaker Biography:

Miao Pan is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Houston and a recipient of the 2014 NSF CAREER Award. He received his Ph.D. in Electrical and Computer Engineering from the University of Florida in August 2012. His research interests include cybersecurity, deep learning privacy, big data privacy, underwater wireless communications and networking, and cognitive radio networks. He has published more than 200 papers in prestigious journals and conferences, including IEEE/ACM Transactions on Networking, IEEE Journal on Selected Areas in Communications, IEEE Transactions on Mobile Computing, and IEEE INFOCOM.


Lecture Abstract:

Nowadays, machine learning shows great potential in a variety of fields, such as retail, advertising, manufacturing, healthcare, and insurance. Although machine learning has permeated many areas thanks to these advantages, data is being generated at an ever-increasing rate, which makes data collection and processing via a centralized machine learning approach computationally expensive. Distributed machine learning has therefore attracted considerable interest for its ability to exploit the collective computing power of edge devices. However, during the learning process, model updates computed on local private samples and large-scale parameter exchanges among agents raise severe privacy concerns and communication burdens.

To address these challenges, we will present three recent works that integrate differential privacy (DP) with the Alternating Direction Method of Multipliers (ADMM) and with decentralized gradient descent, two promising optimization methods for distributed machine learning. First, we propose a differentially private robust ADMM algorithm that perturbs the exchanged variables at each iteration with Gaussian noise of decaying variance; two noise-variance decay schemes are proposed to reduce the negative effects of noise addition while maintaining convergence behavior. Second, to relax the requirement of computing an exact optimal solution at each ADMM iteration in order to ensure DP, we output a noisy approximate solution to the perturbed objective and further adopt the sparse vector technique to decide whether an agent should send its current perturbed solution to its neighbors, avoiding redundant privacy-loss accumulation and reducing communication cost. Third, we develop a differentially private and communication-efficient decentralized gradient descent method that updates local models by combining DP noise with a random quantization operator, enforcing DP and communication efficiency simultaneously.
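As a rough illustration of the third approach, the sketch below combines Gaussian perturbation with a decaying variance schedule and unbiased random quantization in one synchronous round of decentralized gradient descent. All names, the geometric decay schedule, and the uniform-grid quantizer are illustrative assumptions; the talk's actual algorithms, noise schedules, and privacy accounting are not specified here.

```python
import numpy as np

def decaying_noise_std(sigma0, decay_rate, t):
    # Geometric decay schedule (one plausible scheme; the talk proposes
    # two specific decay schemes not reproduced here).
    return sigma0 * (decay_rate ** t)

def stochastic_quantize(x, levels=16):
    # Unbiased random quantization: map each coordinate to one of
    # `levels` uniform grid points, rounding up or down at random so
    # the quantized value equals x in expectation.
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    scale = (hi - lo) / (levels - 1)
    normalized = (x - lo) / scale
    floor = np.floor(normalized)
    prob_up = normalized - floor
    rounded = floor + (np.random.rand(*x.shape) < prob_up)
    return lo + rounded * scale

def private_decentralized_step(models, weights, grads, lr, sigma0, decay, t):
    """One synchronous round of DP + quantized decentralized gradient descent.

    models:  (n_agents, dim) current local models
    weights: (n_agents, n_agents) doubly stochastic mixing matrix
    grads:   (n_agents, dim) local gradients (assumed already clipped)
    """
    sigma = decaying_noise_std(sigma0, decay, t)
    # Each agent perturbs and quantizes its model before sharing it.
    shared = np.stack([
        stochastic_quantize(m + np.random.normal(0.0, sigma, m.shape))
        for m in models
    ])
    # Consensus averaging over neighbors, then a local gradient step.
    return weights @ shared - lr * grads
```

Because the quantizer is unbiased and the noise variance decays over rounds, the averaged iterates can still track the non-private consensus trajectory while each transmitted model is both perturbed and compressed.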


Organizer: Guangzhou Institute


South Campus Address: No. 266 Xinglong Section, Xifeng Road, Xi'an, Shaanxi Province

Postal Code: 710126

North Campus Address: No. 2 South Taibai Road, Xi'an, Shaanxi Province

Postal Code: 710071

Tel: 029-88201000


Copyright: Xidian University     陕ICP备05016463号     Developed and maintained by: Information Network Technology Center