Ben Evans

Research

Currently embarking on a PhD in Computer Science, developing machine learning methods for the detection, classification and understanding of species behaviour from camera trap imagery.

Supervised by Dr Allan Tucker and Dr Chris Carbone at Brunel University London and the Institute of Zoology as part of the London NERC DTP.

Software

  • CamTrap Detector - a cross-platform desktop application for detecting animals in camera trap images.
  • CamTrapML - a Python package for processing camera trap imagery using machine learning.
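
The following is a minimal, illustrative sketch of the kind of detect-and-filter step the tools above automate: run a detector over a folder of camera trap images and keep those with confident detections. It uses a generic COCO-pretrained detector from torchvision rather than the wildlife-specific models the tools ship with, and the folder name and threshold are placeholder assumptions, not part of either tool's API.

    # Illustrative only: not CamTrap Detector's or CamTrapML's API.
    from pathlib import Path

    import torch
    from PIL import Image
    from torchvision.models.detection import (
        FasterRCNN_ResNet50_FPN_Weights,
        fasterrcnn_resnet50_fpn,
    )
    from torchvision.transforms.functional import to_tensor

    # A generic COCO-pretrained detector stands in for a wildlife-specific model.
    model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT).eval()

    def detect(image_path, threshold=0.5):
        """Return bounding boxes and scores above a confidence threshold."""
        image = to_tensor(Image.open(image_path).convert("RGB"))
        with torch.no_grad():
            output = model([image])[0]
        keep = output["scores"] > threshold
        return output["boxes"][keep], output["scores"][keep]

    # Flag images containing at least one confident detection.
    for path in sorted(Path("camera_trap_images").glob("*.jpg")):
        boxes, scores = detect(path)
        if len(boxes):
            print(path.name, len(boxes), "detection(s), max score", round(scores.max().item(), 2))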

Publications

  • Lens on the wild: innovations in wildlife monitoring with machine learning

    (2024) Evans, Benjamin C; Rowcliffe, Marcus; Carbone, Chris; Cartledge, Emma L; Al-Fulaij, Nida; Pringle, Henrietta; Yarnell, Richard; Stephens, Philip A; Hill, Russell; Scott-Gatty, Kate; Hartland, Chloe; Horwood, Bella

    Environmental Scientist.

    Hedgehogs, one of the UK’s most loved creatures, have substantially declined in number over the last 50 years. The National Hedgehog Monitoring Programme (NHMP) has completed its pilot year, marking the first milestone of a three-year endeavour to better understand the causes of this decline and, ultimately, to monitor the status of other wildlife populations across the UK.

  • Reasoning About Neural Network Activations: An Application in Spatial Animal Behaviour from Camera Trap Classifications

    (2020) Evans, B.C., Tucker, A., Wearn, O.R., Carbone, C.

    ECML PKDD 2020 Workshops. ECML PKDD 2020. Communications in Computer and Information Science, vol 1323. Springer, Cham.

    Camera traps are a vital tool that enables ecologists to monitor wildlife over large areas in order to determine population changes, habitat and behaviour. As a result, camera-trap datasets are rapidly growing in size. Recent advances in Artificial Neural Networks (ANNs) for image recognition and detection are now being applied to automate camera-trap labelling. An ANN designed for species detection outputs a set of activations representing the observation of a particular species (an individual class) at a particular location and time; these activations are often used to estimate population sizes in different regions. Here we go one step further and explore how we can combine ANNs with probabilistic graphical models to reason about animal behaviour using the ANN outputs over different geographical locations. By using the output activations from ANNs as data, along with each trap's associated spatial coordinates, we build spatial Bayesian networks to explore species behaviours (how they move and distribute themselves) and interactions (how they distribute in relation to other species). This combination of probabilistic reasoning and deep learning offers many advantages for large camera trap projects, as well as potential for other remote sensing datasets that require automated labelling.

    An illustrative code sketch of this activation-to-Bayesian-network step is given after the publication list.

  • Can CNN-based species classification generalise across variation in habitat within a camera trap survey?

    (2023) Norman, D. L., Bischoff, P. H., Wearn, O. R., Ewers, R. M., Rowcliffe, J. M., Evans, B., Sethi, S., Chapman, P. M., & Freeman, R.

    Methods in Ecology and Evolution.

    1. Camera trap surveys are a popular ecological monitoring tool that produces vast numbers of images, making their annotation extremely time-consuming. Advances in machine learning, in the form of convolutional neural networks, have demonstrated potential for automated image classification, reducing processing time. These networks often have a poor ability to generalise, however, which could impact assessments of species in habitats undergoing change.

    2. Here, we (i) compare the performance of three network architectures in identifying species in camera trap images taken from tropical forest of varying disturbance intensities; (ii) explore the impacts of training dataset configuration; (iii) use habitat disturbance categories to investigate network generalisability; and (iv) test whether classification performance and generalisability improve when using images cropped to bounding boxes.

    An illustrative sketch of the bounding-box cropping step (iv) is given after the publication list.

  • Using an Instant Visual and Text Based Feedback Tool to Teach Path Finding Algorithms: A Concept

    (2021) B. Nagaria, B. C. Evans, A. Mann and M. Arzoky

    2021 Third International Workshop on Software Engineering Education for the Next Generation (SEENG).

    Methods of teaching path finding algorithms based purely on programming present an additional challenge to students. Indeed, many courses use graphs and other visualisations to help students grasp concepts quickly. Globally, we are rapidly adapting our teaching tools to blended or remote learning due to the COVID-19 pandemic. We propose a game-based method that gives students instant feedback showing how their programmed path finding algorithm behaves. The tool will also provide feedback to students on their code quality. Along with an element of gamification, we aim to improve both initial understanding of, and further exploration into, the algorithms taught. The tool aims to provide useful feedback in the absence of immediate laboratory support and gives students the flexibility to complete laboratory worksheets outside of scheduled laboratory slots.

    Position: Software tools and teaching assistants heavily assist undergraduate students in learning how to program. By developing enhanced software tools, we can provide immediate feedback to learners, allowing them to gain an initial understanding of the algorithm before facilitated sessions. This further enriches their experience and learning during contact hours with teaching assistants.

    A small illustrative path finding example is given after the publication list.
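
Relating to the 2020 ECML PKDD workshop paper above, the following is a minimal sketch of the activation-to-Bayesian-network step: per-image classifier activations are thresholded into presence/absence evidence and combined with each trap's grid cell to fit a discrete Bayesian network. The file name, column names, species labels, threshold and hand-specified network structure are illustrative assumptions, and pgmpy is used purely for convenience; this is not the paper's actual pipeline.

    import pandas as pd
    from pgmpy.models import BayesianNetwork
    from pgmpy.estimators import MaximumLikelihoodEstimator
    from pgmpy.inference import VariableElimination

    # Assumed input: one row per image, with the trap's grid cell and the
    # classifier's activations for two species (names are illustrative).
    obs = pd.read_csv("activations.csv")  # columns: cell_x, cell_y, p_species_a, p_species_b

    # Threshold activations into binary presence/absence evidence.
    data = pd.DataFrame({
        "cell_x": obs["cell_x"],
        "cell_y": obs["cell_y"],
        "species_a": (obs["p_species_a"] > 0.5).astype(int),
        "species_b": (obs["p_species_b"] > 0.5).astype(int),
    })

    # Hand-specified structure: location influences where each species is seen,
    # and one species' presence may relate to the other's.
    model = BayesianNetwork([
        ("cell_x", "species_a"), ("cell_y", "species_a"),
        ("cell_x", "species_b"), ("cell_y", "species_b"),
        ("species_b", "species_a"),
    ])
    model.fit(data, estimator=MaximumLikelihoodEstimator)

    # Query how one species' distribution shifts given the other's presence.
    inference = VariableElimination(model)
    print(inference.query(["species_a"], evidence={"species_b": 1}))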
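
For the 2023 Methods in Ecology and Evolution paper, the sketch below illustrates point (iv): cropping each image to a detector bounding box so the classifier sees only the animal. The relative [x, y, width, height] box format and the file layout are assumptions (similar in spirit to MegaDetector-style output), not the study's actual data pipeline.

    from pathlib import Path
    from PIL import Image

    def crop_to_box(image_path, box, out_dir):
        """Crop an image to a relative [x, y, width, height] bounding box."""
        image = Image.open(image_path)
        w, h = image.size
        x, y, bw, bh = box
        crop = image.crop((int(x * w), int(y * h), int((x + bw) * w), int((y + bh) * h)))
        out_path = out_dir / image_path.name
        crop.save(out_path)
        return out_path

    # Assumed detections: {filename: [x, y, width, height]} in relative coordinates.
    detections = {"IMG_0001.JPG": [0.41, 0.30, 0.22, 0.18]}
    out_dir = Path("crops")
    out_dir.mkdir(exist_ok=True)
    for name, box in detections.items():
        crop_to_box(Path("images") / name, box, out_dir)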
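
For the SEENG 2021 concept paper, the snippet below is a small example of the kind of path finding exercise the proposed tool would give instant visual feedback on: a breadth-first search over a grid map, chosen purely for illustration rather than taken from the paper.

    from collections import deque

    def shortest_path(grid, start, goal):
        """Breadth-first search on a grid of 0 (free) / 1 (wall) cells.
        Returns the list of cells from start to goal, or None if unreachable."""
        rows, cols = len(grid), len(grid[0])
        queue = deque([start])
        came_from = {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in came_from:
                    came_from[nxt] = cell
                    queue.append(nxt)
        return None

    grid = [[0, 0, 0, 1],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(shortest_path(grid, (0, 0), (2, 3)))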

Further details of my research activity can be found on my academic profiles at the London NERC DTP, Google Scholar and ResearchGate.