from the University of Leeds, to analyse thousands of images of skin. Some photos were captured in a very precise colour-measurement camera booth called a DigiEye, while others were snapped on phones, laptops or regular cameras. But it was standardising the quality of these images that proved challenging.

When we use a smartphone to take a picture of someone, it records red, green and blue (RGB) values in every pixel. Actually converting camera RGB values into something scientifically meaningful is very, very difficult. We used some relatively simple machine-learning algorithms, and to do that, we need lots of examples of images of people together with their true skin colour, which we measure in the laboratory. Based on those two sets of data, we can learn the relationship between the two.

What were your key findings in
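The learning step described above, pairing camera RGB values with laboratory colour measurements and fitting a model between the two, can be sketched as a simple supervised regression. Everything below is illustrative: the data is synthetic, the linear model is an assumption (the researchers say only "relatively simple machine-learning algorithms"), and the three output channels stand in for a lab measurement such as CIELAB.

```python
import numpy as np

# Synthetic stand-in for the two datasets described in the interview:
# camera RGB values per skin patch, and the "true" colour measured in the lab.
rng = np.random.default_rng(0)
camera_rgb = rng.uniform(0, 255, size=(50, 3))

# Hypothetical ground-truth mapping used only to generate example lab values.
true_M = np.array([[0.30, 0.10, 0.00],
                   [0.10, 0.40, 0.05],
                   [0.02, 0.10, 0.50]])
true_b = np.array([20.0, 5.0, 2.0])
lab_true = (camera_rgb / 255.0) @ true_M + true_b

# Learn the relationship between the two sets of data: augment the
# normalised RGB with a bias column and solve by least squares.
X = np.hstack([camera_rgb / 255.0, np.ones((len(camera_rgb), 1))])
coeffs, *_ = np.linalg.lstsq(X, lab_true, rcond=None)

def predict_lab(rgb):
    """Map a camera RGB triplet to predicted lab-measured colour."""
    x = np.append(np.asarray(rgb, dtype=float) / 255.0, 1.0)
    return x @ coeffs
```

In practice the fit would use measured data rather than a synthetic linear rule, and a more flexible model could replace the least-squares map, but the shape of the problem (paired examples in, a learned RGB-to-measurement function out) is the same.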