What if the painful but necessary experiments that are conducted on animals and people could instead be conducted on elaborate robots? This would be an ethical boon, but only if the robotic surrogates weren’t so elaborate that they themselves suffered real pain. Aside from the potential benefits to medical research, there might be other motives – some benign, some nefarious – for creating robots with the potential to suffer pain themselves. It might increase the usefulness of a robot servant if it took care to prevent damage to itself, and the resultant self-monitoring system might turn out to implement pain. Some humans might seek to purchase pain-feeling robots for the purpose of torturing them – a sad fact about some humans. Plausibly, there’s an ethical imperative to make sure avoidable robot pains are not inflicted.¹ But there’s a metaphysical question of whether such pains – robot pains – could be inflicted at all. Further, there’s an epistemological question of how we would ever know. As our technologies advance, this special version of the problem of other minds – the problem of robot pain – becomes increasingly pressing.