From: MKT    Date: 2024-08-15
AI-based applications cannot be built without the support of massive amounts of data. Whether it is conversational AI, autonomous driving, or medical image analysis, the diversity and integrity of training datasets largely determine how AI models perform in testing. Today, data has become a crucial factor in driving the progress of intelligent technology, and every field keeps collecting and building more specialized datasets to enable more efficient applications.
Recently, the paper Face2Faceρ: Real-Time High-Resolution One-Shot Face Reenactment, co-authored by NetEase AI Lab and Tsinghua University, was accepted to ECCV 2022, a top international conference on computer vision. The paper proposes a new face reenactment method that speeds up the algorithm by up to 9 times while preserving the quality of the generated images.
A face reenactment algorithm takes a source face image and a driving face as input, transfers the driving face's expressions and head poses onto the source face, and keeps the identity of the source face unchanged throughout the transfer.
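To make that input/output contract concrete, here is a minimal Python sketch of the interface described above. The function names and the placeholder stages are illustrative assumptions for exposition only; they are not the Face2Faceρ implementation.

```python
import numpy as np

def extract_identity(source: np.ndarray) -> np.ndarray:
    """Placeholder: encode who the source person is (a learned model in practice)."""
    return source.mean(axis=(0, 1))

def extract_motion(driving: np.ndarray) -> np.ndarray:
    """Placeholder: encode the expression and head pose of the driving face."""
    return driving.std(axis=(0, 1))

def render(identity: np.ndarray, motion: np.ndarray, shape) -> np.ndarray:
    """Placeholder: synthesize the source identity under the driving motion."""
    return np.zeros(shape, dtype=np.uint8)

def reenact(source: np.ndarray, driving: np.ndarray) -> np.ndarray:
    """Transfer the driving face's expression and head pose onto the source face."""
    identity = extract_identity(source)    # fixed per source image
    motion = extract_motion(driving)        # changes with every driving frame
    return render(identity, motion, source.shape)

# One source image, many driving frames (e.g. frames of a driving video).
source = np.zeros((256, 256, 3), dtype=np.uint8)
driving_frames = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(3)]
outputs = [reenact(source, f) for f in driving_frames]
```

The key property is that the identity features are extracted once from a single source image ("one-shot"), while only the motion changes from frame to frame.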
In recent years, face reenactment technology has attracted much attention due to its application prospects in media, entertainment, virtual reality, and other fields. Its most direct benefit is improving the production efficiency of audio and video content. In the video cloud field, low latency and high image quality have always been difficult to balance. At present, the minimum latency of live streaming can be reduced to under 400 ms, but demand keeps growing for video conferencing and similar scenarios, such as remote slide presentations, which place even higher requirements on the balance between image quality and latency. Combining a face reenactment algorithm with codec technology can greatly reduce the bandwidth required in video conferencing, enabling ultra-low-latency, high-quality video calls.
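The bandwidth saving comes from sending a full source frame only once and then transmitting compact motion parameters per frame, with the receiver re-rendering each frame locally. The sketch below illustrates this idea; the vector size and the placeholder encode/decode functions are assumptions for illustration, not measured figures or a real codec API.

```python
import numpy as np

MOTION_DIM = 64  # assumed size of a per-frame expression + head-pose vector

def encode_frame(driving: np.ndarray) -> np.ndarray:
    """Sender side: reduce a captured frame to a small motion vector (placeholder)."""
    return np.zeros(MOTION_DIM, dtype=np.float32)

def decode_frame(source: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Receiver side: re-render the frame from the cached source image + motion (placeholder)."""
    return np.zeros_like(source)

source = np.zeros((256, 256, 3), dtype=np.uint8)   # transmitted once per call
frame = np.zeros((256, 256, 3), dtype=np.uint8)    # captured every tick

motion = encode_frame(frame)                        # what actually goes on the wire
reconstructed = decode_frame(source, motion)

print("raw frame bytes:    ", frame.nbytes)         # 196,608 for a 256x256x3 frame
print("motion vector bytes:", motion.nbytes)        # 256 under this assumption
```

Under these assumptions, each frame costs a few hundred bytes on the wire instead of a full image, which is what makes the ultra-low-latency video conferencing scenario plausible.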
As is often said, data is to the AI industry what oil is to traditional industry. Improving face reenactment algorithms is inseparable from the support of training data. Nexdata offers a large collection of face datasets covering multiple scenes, multiple races, and multiple poses.
Multi-pose and Multi-expression Face Data
The data covers 1,507 Chinese people (762 males, 745 females). For each subject, 62 multi-pose face images and 6 multi-expression face images were collected. The data diversity includes multiple angles, multiple poses, and multiple lighting conditions across all ages.
Multi-pose Face Data with 21 Facial Landmarks Annotation
This dataset contains 399 Chinese people and 35,112 images in total (88 images per person), annotated with 21 facial landmarks. The data diversity includes multiple poses, different ages, different lighting conditions, and multiple scenes.
Multi-race and Multi-pose Face Images Data
This dataset covers Asian, Caucasian, Black, brown, and Indian subjects. For each subject, 29 images were collected under different scenes and lighting conditions: 28 photos (multiple lighting conditions, poses, and scenes) plus 1 ID photo.
Human Face Image Data with Multiple Angles, Light Conditions, and Expressions
The subjects are all young people. For each subject, 2,100 images were collected, covering 14 camera angles × 5 lighting conditions × 30 expressions. The data can be used for face recognition, 3D face reconstruction, and related tasks; the combination count is checked in the snippet below.
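The 2,100 figure is simply the size of the Cartesian product of the three capture attributes. As a quick check (only the counts 14, 5, and 30 come from the dataset description; the attribute labels are placeholders):

```python
from itertools import product

# Hypothetical attribute labels; only the counts come from the dataset description.
angles = [f"angle_{i:02d}" for i in range(14)]
lightings = [f"light_{i}" for i in range(5)]
expressions = [f"expr_{i:02d}" for i in range(30)]

combinations = list(product(angles, lightings, expressions))
assert len(combinations) == 14 * 5 * 30 == 2100  # one image per combination
```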
Human Frontal Face Data (Male)
The data diversity includes multiple scenes, multiple ages, and multiple races. The dataset includes 2,004 Caucasians and 3,007 Asians. It can be used for tasks such as face detection, race detection, age detection, and beard category classification.
If you want to know more about the datasets or how to acquire them, please feel free to contact us: info@nexdata.ai.
Future intelligent systems will increasingly rely on high-quality datasets to optimize decision-making and automated processes. In the data era, companies and researchers need to continuously improve their data collection and annotation capabilities to ensure the efficiency and accuracy of AI models. To gain an advantageous position in a fiercely competitive market, we must lay a solid foundation in data.