Learning Outcomes
Any changes will be announced in class and reflected here.
Part 1: Bioacoustics (25 November to 6 December, taught by Emmanuel Dufourq)
Understand how audio is digitally captured for processing
Understand the discrete Fourier transform in the context of audio processing, and use it to create spectrograms and Mel-spectrograms (see the sketch after this list)
Implement Python code for audio processing
Implement audio augmentation
Implement 1- and 2-dimensional deep neural networks to build bioacoustics classifiers
Use YAMNet and VGGish for audio classification tasks
Use Sonic Visualiser to annotate soundscape data
Implement transfer learning for audio classification tasks
Implement various recurrent neural networks for bioacoustics classification tasks
Implement unsupervised learning for bioacoustics
Understand and implement template matching for bioacoustics
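As a taste of the audio outcomes above, the sketch below loads a clip, computes a Mel-spectrogram, and applies two simple waveform augmentations. It is a minimal illustration, not course code: the file `clip.wav` is a placeholder, and the `librosa` parameters (FFT size, hop length, number of Mel bands) are arbitrary but typical choices.

```python
import numpy as np
import librosa

# Load a recording at its native sampling rate.
# "clip.wav" is a placeholder path, not a course-provided file.
y, sr = librosa.load("clip.wav", sr=None)

# Mel-spectrogram: a short-time Fourier transform followed by a Mel
# filter bank, converted to decibels for plotting or model input.
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=1024, hop_length=512, n_mels=64
)
mel_db = librosa.power_to_db(mel, ref=np.max)

# Two common waveform-level augmentations in bioacoustics:
stretched = librosa.effects.time_stretch(y, rate=1.1)  # play 10% faster
noisy = y + 0.005 * np.random.randn(len(y))            # additive Gaussian noise
```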
Part 2: Camera traps for animals (9 December to 13 December, taught by Rupa and Timm)
Understand how computer vision can be used in ecology
Understand fundamental computer vision tasks (classification, detection, segmentation, tracking, etc.)
Use online, open-source platforms for interactive dataset labeling (e.g. CVAT)
Implement standard data augmentation techniques for visual data (see the sketch after this list)
Develop a basic understanding of gradient-descent-based optimization
Understand domain shift
Understand the importance of data splitting and good evaluation practices
Draw meaningful ecological insights from camera-trap data
Learn about the ethical considerations and limitations of conservation technology
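To make the augmentation and evaluation outcomes concrete, here is a minimal sketch assuming a torchvision/scikit-learn setup (an assumption; the course may use different tools, and all values below are dummy data). It applies standard image augmentations and then splits records by camera site with `GroupShuffleSplit`, so the test set contains only unseen sites, a simple guard against the domain-shift and data-leakage pitfalls listed above.

```python
import numpy as np
from PIL import Image
from torchvision import transforms
from sklearn.model_selection import GroupShuffleSplit

# Standard augmentations for camera-trap images (illustrative values).
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
img = Image.new("RGB", (640, 480))  # stand-in for a real camera-trap photo
x = augment(img)                    # tensor of shape (3, 224, 224)

# Site-grouped split: every image from a given camera location lands on
# one side of the split, so evaluation reflects performance at new sites.
labels = np.random.randint(0, 2, size=100)   # dummy labels
sites = np.random.randint(0, 10, size=100)   # dummy camera-site IDs
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(np.zeros(100), labels, groups=sites))
```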
Credits:
Top image: created with the assistance of DALL·E 3 and modified by Emmanuel in FireAlpaca.