In collaboration with the U.S. Geological Survey, which has provided a five-year grant, Li and his two research assistants are training a deep-learning algorithm to recognize nuances in individual fishes’ faces and scale patterns.
“It sounds impossible, but after investigating many fish images and talking to fish biologists, we realized that it might indeed be possible,” the assistant professor said.
The goal is to help fisheries and wildlife conservationists not only assess the size of fish populations in large, dynamic aquatic environments, but also to track the metrics of individual specimens, with an eye toward how their health and well-being might affect the larger ecosystem.
The deep-learning project is starting with catalogues of publicly available fish imagery that will be refined over time.
New fish photos, including underwater shots, will be collected in a controlled environment by the U.S. Geological Survey researchers.
Just as with software focused on human faces, the algorithm will “look” for structures in facial topography, converting the information into data points. The program will then compare those data points against the database of fishy faces, again and again, until it can reliably tell individuals apart.
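The comparison step described above can be sketched in miniature. The code below is an illustrative toy, not the team’s actual system: it assumes each fish image has already been converted into a numeric embedding (the “data points”), and simply matches a new embedding against a database of known fish by cosine similarity.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query, database, threshold=0.9):
    """Return the ID of the closest known fish, or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for fish_id, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = fish_id, score
    return best_id

# Toy example: three "known" fish, each represented by a random 4-dimensional
# embedding (a real system would use features learned by a deep network).
rng = np.random.default_rng(0)
db = {f"fish_{i}": rng.normal(size=4) for i in range(3)}

# A slightly noisy re-sighting of fish_1 should still match fish_1.
query = db["fish_1"] + rng.normal(scale=0.01, size=4)
print(identify(query, db))
```

In a production pipeline the embeddings would come from the trained network, and the matching step would use an approximate nearest-neighbor index rather than a linear scan, but the comparison logic is the same.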
Still, fish don’t have the same level of facial complexity as humans do. That’s where Li hopes to put his finger on the scales: by combining facial data with scale patterns.
And in some ways, a fish’s scales are like its fingerprint.