
Dawei Zhao - Google Scholar
Anhui University - Cited by 353 - Multi-view multi-label
Dawei Zhao 0001 - dblp
March 12, 2025 · Dawei Zhao, Haipeng Peng, Shudong Li, Yixian Yang: An efficient dynamic ID based remote user authentication scheme using self-certified public keys for multi-server environment.
Dawei ZHAO | Professor | PhD | Shenyang University of Chemical ...
Dawei ZHAO, Professor | Cited by 1,640 | Shenyang University of Chemical Technology, Shenyang | Read 22 publications | Contact Dawei ZHAO
Dawei Zhao | IEEE Xplore Author Details
Dawei Zhao (Member, IEEE) received the Ph.D. degree in cryptology from Beijing University of Posts and Telecommunications, Beijing, China, in 2014. He is currently a Professor with Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China.
Zhao Dawei (赵大伟) - Energy and Chemical Technology Industry Research Institute
Updated by admin on July 15, 2019, 15:51 · 4,012 views
Dawei Zhao (赵大卫) - GitHub Pages
My main research interests: multi-view multi-label learning, image processing, machine learning and data mining. Publications · Journal Article · Multi-label learning of missing labels using label-specific features: an embedded packaging method. Dawei Zhao, Yi Tan, Dong Sun*, Qingwei Gao, Yixiang Lu, and De Zhu. Applied Intelligence (APIN), 2023 ...
Dawei Zhao (0000-0002-0832-3678) - ORCID
ORCID record for Dawei Zhao. ORCID provides an identifier for individuals to use with their name as they engage in research, scholarship, and innovation activities.
Da Bei Zhou (大悲咒) - YouTube
Tibetan Incantations - Mantra of Avalokiteshvara. Lyrics: Namo Ratna Trayaya, Namo Arya Jnana Sagara, Vairochana, Byuhara Jara Tathagataya, Arahate, Samyaksam Buddhay...
[Paper reading] Voxel-MAE: Masked Autoencoders for Pre-training …
Voxel-MAE: a self-supervised masked-autoencoder pre-training method for large-scale point clouds. Authors: Chen Min, Xinli Xu, Dawei Zhao, Liang Xiao, Yiming Nie and Bin Dai. Institution: Peking University. Paper link: https://arxiv.org/pdf/2206.09900.pdf …
[2405.04390] DriveWorld: 4D Pre-trained Scene Understanding via …
May 7, 2024 · Vision-centric autonomous driving has recently raised wide attention due to its lower cost. Pre-training is essential for extracting a universal representation. However, current vision-centric pre-training typically relies on either 2D or 3D pre-text tasks, overlooking the temporal characteristics of autonomous driving as a 4D scene understanding task. In this paper, we address this challenge ...