Skimming the paper, they're not arguing that the computer can detect whether someone is trustworthy; they're arguing it can detect whether something is *perceived* as trustworthy.
Grateful it's not technical people publishing ethically tone-deaf papers with ML this time
Less disturbing to see it coming from subjective humanities people like psychologists rather than from real scientists