Learning Outcomes
Any changes will be announced in class and updated here.
Part 1: bioacoustics and camera traps (X November to Y December by Emmanuel Dufourq)
Understand how audio is digitally captured and represented for audio processing
Understand the discrete Fourier transform in the context of audio processing, and apply it to create spectrograms and Mel-spectrograms
Implement Python code for audio processing
Implement audio data augmentation
Implement 1D and 2D deep neural networks to create bioacoustic classifiers
Use YAMNet and VGGish for audio classification tasks
Use Sonic Visualiser to annotate soundscape data
Implement transfer learning for audio classification tasks
Implement various recurrent neural networks for bioacoustic classification tasks
Implement unsupervised learning for bioacoustics
Understand and implement template matching for bioacoustics
Implement MegaDetector for animal object detection tasks
Use Roboflow to fine-tune MegaDetector for animal object detection tasks
Build a Raspberry Pi audio recording unit to capture audio data in nature, create a custom dataset, and train a classifier on the recorded data
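To give a feel for the spectrogram outcomes above, here is a minimal illustrative sketch of how a spectrogram and Mel-spectrogram can be computed from a waveform. It assumes NumPy is available; the frame size, hop, filter count, and the synthetic 440 Hz test tone are arbitrary choices for illustration, not the course's actual settings.

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=256):
    """Magnitude spectrogram via a short-time DFT with a Hann window."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies: n_fft // 2 + 1 bins
    return np.abs(np.fft.rfft(frames, axis=1)).T

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels=40):
    """Triangular filters spaced evenly on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, centre, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, centre):
            fb[i, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fb[i, k] = (right - k) / max(right - centre, 1)
    return fb

# Synthetic example: a 1 s, 440 Hz sine tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440.0 * t)
spec = spectrogram(sig)                         # linear-frequency spectrogram
mel_spec = mel_filterbank(sr, 512) @ spec       # Mel-spectrogram
```

In practice a library such as librosa wraps these steps, but seeing the DFT-framed pipeline once makes the spectrogram lectures easier to follow.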
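The audio augmentation outcome can also be sketched simply. Below are three common waveform-level augmentations (random time shift, additive noise at a target SNR, and a naive time stretch), again assuming NumPy; the parameter values and the synthetic sine input are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_shift(signal, max_shift):
    """Roll the waveform by a random offset (wraps around the ends)."""
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(signal, shift)

def add_noise(signal, snr_db):
    """Add white Gaussian noise at a given signal-to-noise ratio (dB)."""
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

def time_stretch(signal, rate):
    """Naive stretch by linear interpolation (note: also shifts pitch)."""
    idx = np.arange(0.0, len(signal), rate)
    return np.interp(idx, np.arange(len(signal)), signal)

# Synthetic clip: 1 s, 440 Hz sine at 16 kHz.
sig = np.sin(2 * np.pi * 440.0 * np.arange(16000) / 16000)
aug1 = time_shift(sig, 4000)
aug2 = add_noise(sig, snr_db=20.0)
aug3 = time_stretch(sig, 0.8)   # rate < 1 lengthens the clip
```

Augmentations like these expand small bioacoustic datasets, which are often limited by how few labelled animal calls are available.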
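Finally, the template matching outcome can be illustrated with normalized cross-correlation, a standard way to slide a reference call over a recording and score each offset. This is a self-contained NumPy sketch on a synthetic 1D signal; the template shape, noise level, and injection offset are made up for the example.

```python
import numpy as np

def normalized_xcorr(signal, template):
    """Slide the template over the signal; return one NCC score per offset."""
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    scores = np.empty(len(signal) - len(template) + 1)
    for i in range(len(scores)):
        w = signal[i : i + len(template)]
        w = w - w.mean()
        denom = np.linalg.norm(w) * t_norm
        scores[i] = (w @ t) / denom if denom > 0 else 0.0
    return scores

# Synthetic example: hide a short tonal template inside background noise.
rng = np.random.default_rng(1)
template = np.sin(np.linspace(0.0, 8.0 * np.pi, 200))
signal = rng.normal(0.0, 0.3, 2000)
signal[700:900] += template              # inject the "call" at offset 700
scores = normalized_xcorr(signal, template)
best = int(np.argmax(scores))            # detected offset, near 700
```

In bioacoustics the same idea is usually applied to spectrogram patches rather than raw waveforms, but the scoring logic is identical.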
Part 2: camera traps for animals (X December to Y December by Rupa)
...
...
...
Credits:
Top image: created with the assistance of DALL·E 3 and then modified by Emmanuel in FireAlpaca.