
Welcome to the "gCO2e of AI" code competition, a pioneering challenge designed to advance the field of sustainable AI development. This competition is not just about building the most performant AI models; it’s about creating models that excel in both performance and energy efficiency.
In this competition, we’ve implemented an innovative scoring system that evaluates both the accuracy of your models and their computational efficiency, measured in FLOPs (the total number of floating-point operations performed during inference). The final score weights accuracy at 80% and efficiency at 20%.
This scoring system encourages the development of high-performance AI models while penalizing those that are overly resource-intensive. Learn more about the scoring in the Evaluation section.
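Based on the stated 80/20 weighting, the score can be sketched as follows. The linear normalization of FLOPs between the minimum and maximum bounds is an assumption for illustration, not the official formula; see the Evaluation section for the exact definition.

```python
def competition_score(pck, flops, min_flops, max_flops):
    """Hypothetical score: 80% accuracy (PCK), 20% efficiency.

    Assumes efficiency is the FLOP count normalized linearly between
    the competition's minimum and maximum FLOPs bounds (an assumption;
    the official formula is given in the Evaluation section).
    """
    flops = min(max(flops, min_flops), max_flops)  # clamp to bounds
    efficiency = (max_flops - flops) / (max_flops - min_flops)
    return 0.8 * pck + 0.2 * efficiency
```

A model at the minimum FLOPs bound gets the full 20% efficiency credit; one at the maximum bound gets none, so only its accuracy contributes.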
To facilitate transparency and emphasize the environmental impact of AI models, we have introduced two key metrics on our platform:
FLOPs (floating-point operations): This metric counts the total number of floating-point operations your model performs during inference, providing an indication of how resource-intensive your model is.
gCO2e (grams of CO2 equivalent): This metric quantifies the carbon emissions associated with the energy consumed by your model, offering a way to assess the environmental impact of your AI solution.
By focusing on key metrics like FLOPs and carbon emissions (gCO2e), we aim to set a new standard for sustainable AI practices in the industry.
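To illustrate how the two metrics relate, a FLOP count can be converted into an energy estimate and then into emissions. The hardware efficiency (FLOPs per joule) and grid carbon intensity below are illustrative assumptions, not the conversion factors the platform actually uses.

```python
def estimate_gco2e(total_flops,
                   flops_per_joule=50e9,       # assumed hardware efficiency
                   grid_gco2e_per_kwh=400.0):  # assumed grid carbon intensity
    """Rough gCO2e estimate for a given inference FLOP count.

    Both conversion factors are illustrative assumptions; the platform
    computes its own gCO2e figure from its own measurements.
    """
    joules = total_flops / flops_per_joule
    kwh = joules / 3.6e6  # 1 kWh = 3.6e6 J
    return kwh * grid_gco2e_per_kwh
```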
Whether you're a data scientist looking to showcase your skills or a business leader interested in sustainable innovation, this competition offers a unique platform to demonstrate your expertise and commitment to the future of AI.
The core challenge of this competition is a computer vision task focused on keypoint detection. Your objective is to develop a model that accurately detects 16 key points of human posture.

The applications of these models are vast, spanning various industries. For instance, they can be used in high-risk environments to monitor worker fatigue, in robotics, and in numerous other scenarios where human interaction is a key component of business processes.
The provided dataset includes over 11,000 high-resolution images of individuals engaged in various work-related and non-work-related activities. Each image is annotated with 16 keypoints, including the right and left ankle, right and left knee, right and left hip, pelvis, thorax, head top, upper neck, right and left wrist, right and left elbow, and right and left shoulder, capturing critical postures of the human body.
For this competition, we utilize the MPII Human Pose Dataset, an open-source and widely recognized benchmark for human pose estimation tasks. This dataset is designed to evaluate articulated human pose estimation and includes a comprehensive collection of images annotated with body joints.
This dataset, with its rich annotations and diverse activities, is well-suited for training and evaluating AI models in the keypoint detection task central to this competition. The use of this established dataset ensures that models developed during the competition are trained on high-quality, industry-standard data.
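The 16 annotated keypoints can be represented as a simple index-to-name mapping. The ordering below follows the commonly used MPII joint convention; verify it against the competition's annotation files before relying on it.

```python
# Assumed MPII joint ordering (verify against the provided annotations).
MPII_KEYPOINTS = [
    "right_ankle", "right_knee", "right_hip",
    "left_hip", "left_knee", "left_ankle",
    "pelvis", "thorax", "upper_neck", "head_top",
    "right_wrist", "right_elbow", "right_shoulder",
    "left_shoulder", "left_elbow", "left_wrist",
]
```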
The dataset is crucial for the Pose4Safety competition, providing extensive visual data derived from various workplace environments. It features over 11,000 high-resolution images capturing individuals in diverse occupational activities, some displaying signs of fatigue. Each person in the images is meticulously annotated with keypoints that outline essential body joints.
This dataset was carefully compiled and annotated to provide a robust framework for developing algorithms that detect early signs of fatigue through keypoint analysis. By training models on this dataset, participants can contribute directly to preventing workplace accidents and ensuring employee safety.
The annotations in the dataset include precise locations and labels for keypoints such as elbows, wrists, knees, and ankles, essential for monitoring and analyzing human motion and posture. The dataset's detailed keypoints facilitate the detection of subtle signs of fatigue, such as slumped shoulders and slow movements, which are critical in ensuring worker safety.
The formula for scoring in this competition evaluates the performance of your AI model based on two key factors: accuracy (specifically, the Percentage of Correct Keypoints, or PCK) and computational efficiency (measured in FLOPs used during inference). Below is a detailed breakdown of the formula and each variable:
1. Accuracy (PCK):
2. FLOPs Utilized:
3. Minimum and Maximum FLOPs:
How the Formula Works:
Summary:
This scoring formula incentivizes the creation of models that balance high accuracy with low computational cost, fostering innovation in sustainable AI development.
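The accuracy term above can be sketched as follows. This assumes a keypoint counts as correct when its predicted location falls within a threshold fraction of a per-image reference scale; the competition's exact threshold and reference length (e.g. head-segment size, as in the PCKh variant) are not specified here and are assumptions.

```python
import numpy as np

def pck(pred, gt, ref_scale, threshold=0.5):
    """Percentage of Correct Keypoints (sketch).

    pred, gt: arrays of shape (num_keypoints, 2) with (x, y) coords.
    ref_scale: per-image reference length (e.g. head-segment length
    for PCKh; the competition's exact choice is an assumption here).
    A keypoint is correct if its error is within threshold * ref_scale.
    """
    dists = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dists <= threshold * ref_scale))
```

Averaging this per-image value over the test set gives the accuracy component that enters the 80% weight.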