Description
Airbus invites data scientists around the globe to partake in a unique coding competition focused on improving the landing phase of aircraft. We are seeking your expertise to develop a robust classification model that accurately identifies runways, ensuring enhanced guidance and navigation during landings. By refining this process, we can reduce the complexity of the landing phase, increase efficiency, and continue Airbus's commitment to aviation safety.
Take this opportunity to use your coding skills to contribute to the advancement of aeronautics!
The Challenge
Ensuring smooth landings is a crucial component of Airbus's commitment to aviation safety. As a key phase of any flight, landing requires pilots to manage multiple tasks at once. Even with existing automatic landing systems, there is always room for enhancement. In most cases, these systems are used during challenging weather conditions, leaving pilots to visually identify runways in favorable weather. This is where your unique skillset is needed: your challenge is to engineer a computer-vision model capable of proficiently classifying runways. This model should not only aid pilots during landing approaches but also boost overall operational efficiency.
A key consideration in this competition is the similar visual features shared by runways and highways, which are frequently found in close proximity. The ability to discern these differences is vital to the competition. Therefore, the dataset includes three classes, runway, highway, and none, so that your models can learn to distinguish these critical differences effectively.
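Since the final metric is binary (see the Evaluation section below), one straightforward approach is to collapse the three classes into runway vs. non-runway labels. The sketch below illustrates this; the class-name strings are assumptions about the annotation format, not the dataset's actual schema:

    # Minimal sketch of mapping the three dataset classes to binary labels.
    # NOTE: the class names below are assumed; adapt them to the actual
    # LARD annotation format.
    CLASS_TO_BINARY = {
        "runway": 1,   # positive class: a runway is visible
        "highway": 0,  # negative class: easily confused with runways
        "none": 0,     # negative class: neither runway nor highway
    }

    def to_binary_label(class_name: str) -> int:
        """Map a dataset class name to the binary runway / non-runway label."""
        return CLASS_TO_BINARY[class_name]

    labels = [to_binary_label(c) for c in ["runway", "highway", "none"]]  # [1, 0, 0]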
The Infrastructure
For this competition, your model is cordially invited to Hamburg, Germany, specifically to the Center of Applied Aeronautical Research (ZAL). The ZAL serves as a dynamic research and development nexus for the civil aviation industry in the Hamburg Metropolitan Region. It establishes a vibrant intersection between academic and research institutions, the aviation sector, and the City of Hamburg, striving to safeguard and continuously expand one of the world's leading civil aviation sites. The guiding principle here is: "Future. Created in Hamburg."
In the spirit of this competition, we tailor this motto to: "Future. Trained in Hamburg!"
Each team will be allocated a computation budget of 1,000 petaFLOPs. Your models will be trained on HPE ProLiant DL360 Gen10 hardware with dual Intel Xeon Silver 4114 CPUs, each clocked at 2.20 GHz. For each training instance, a minimum of 10 GB RAM and 5 CPU cores will be allotted. Part of this challenge is to assess your ability to construct not only high-performing but also efficient models, reflecting the demands that real-world production environments place on such models. The provided compute budget and available infrastructure encapsulate these requirements.
The total computation resources for the competition are capped at 2 TB RAM and 400 CPU cores at any point in time. If usage surpasses these limits, model trainings will enter a queue and will be processed as soon as capacity becomes available.
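Given the per-training allocation of 5 CPU cores, it can pay off to pin your training process to that budget explicitly so numeric libraries do not oversubscribe threads. A minimal sketch, assuming NumPy/scikit-learn-style OpenMP/MKL backends (the core count of 5 mirrors the allocation above):

    import os

    # Cap the thread pools of common numeric backends (OpenMP, MKL) to the
    # 5-core allocation; set these before importing numpy or sklearn.
    os.environ["OMP_NUM_THREADS"] = "5"
    os.environ["MKL_NUM_THREADS"] = "5"

    # If you train with PyTorch, its intra-op parallelism can be capped too:
    # import torch
    # torch.set_num_threads(5)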
The Data
You will have access to the expansive LARD dataset (Landing Approach Runway Detection). This rich dataset contains over 5,723 synthetic aerial front-view images of diverse runways, enriched with annotated images from real-world landing footage for reference and comparison.
A portion of the training and test dataset is publicly accessible, and you're encouraged to leverage it to optimize your model's efficiency. Find these resources at the following links:
• Github: https://github.com/deel-ai/LARD
• Paperswithcode: https://paperswithcode.com/paper/lard-landing-approach-runway-detection
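To get started with the public data, a loading sketch along these lines may help. The directory layout below is hypothetical; check the GitHub README above for the repository's real paths and annotation format:

    from pathlib import Path

    import numpy as np
    from PIL import Image

    # Hypothetical layout: one sub-directory per class under a local copy
    # of the dataset. The actual LARD repository structure may differ.
    DATA_DIR = Path("lard_data")

    def load_images(class_name: str) -> list[np.ndarray]:
        """Load all images of one class as a list of HxWxC uint8 arrays."""
        return [
            np.asarray(Image.open(p).convert("RGB"))
            for p in sorted((DATA_DIR / class_name).glob("*.png"))
        ]

    runway_images = load_images("runway")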
Evaluation
Here you'll find the evaluation metric used to measure your model's performance, along with a short example and explanation. The main objective of the competition is to maximize this score.
For this competition, you are tasked with binary classification and need to maximize your model's F1 score on the test dataset.
The F1 score is the harmonic mean of precision and recall: F1 = 2 · precision · recall / (precision + recall). It can be computed as follows for an example ground truth and model prediction.
Code Implementation
# Example ground truth and model prediction
ground_truth = [1, 1, 1, 0, 0, 1, 0, 1, 0]
prediction = [1, 1, 0, 0, 1, 1, 0, 1, 0]

# Implementation using sklearn
from sklearn import metrics

metrics.f1_score(ground_truth, prediction)

# Basic implementation from scratch
def true_positive(ground_truth, prediction):
    # Count samples that are positive and predicted positive
    tp = 0
    for gt, pred in zip(ground_truth, prediction):
        if gt == 1 and pred == 1:
            tp += 1
    return tp

def true_negative(ground_truth, prediction):
    # Count samples that are negative and predicted negative
    tn = 0
    for gt, pred in zip(ground_truth, prediction):
        if gt == 0 and pred == 0:
            tn += 1
    return tn

def false_positive(ground_truth, prediction):
    # Count samples that are negative but predicted positive
    fp = 0
    for gt, pred in zip(ground_truth, prediction):
        if gt == 0 and pred == 1:
            fp += 1
    return fp

def false_negative(ground_truth, prediction):
    # Count samples that are positive but predicted negative
    fn = 0
    for gt, pred in zip(ground_truth, prediction):
        if gt == 1 and pred == 0:
            fn += 1
    return fn

def recall(ground_truth, prediction):
    # Recall = TP / (TP + FN)
    tp = true_positive(ground_truth, prediction)
    fn = false_negative(ground_truth, prediction)
    return tp / (tp + fn)

def precision(ground_truth, prediction):
    # Precision = TP / (TP + FP)
    tp = true_positive(ground_truth, prediction)
    fp = false_positive(ground_truth, prediction)
    return tp / (tp + fp)

def f1(ground_truth, prediction):
    # F1 is the harmonic mean of precision and recall
    p = precision(ground_truth, prediction)
    r = recall(ground_truth, prediction)
    return 2 * p * r / (p + r)

f1(ground_truth, prediction)
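Both implementations return the same value. For the example above there are 4 true positives, 1 false positive, and 1 false negative, so precision and recall are each 0.8, and the F1 score is 0.8.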