Here’s my understanding. All the robots have sensors that can detect radiation across the electromagnetic spectrum, but they are not hardwired to know how to interpret what those sensors detect.
Imagine telling a small child (who’s not colorblind) “Don’t eat the green berries!” The tyke is capable of seeing the color green, but someone would have to teach her what “green” is, or poison, or berries. She isn’t born knowing. If her mom, whom she trusts, holds up a ripe red berry and tells her, “This is a poisonous green berry! Don’t eat it!” the child would believe her.
While I’m here, I’ll politely disagree with your other quibble. I don’t see how the Third Law is a problem with the premise: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” The puzzle here has nothing to do with the robots actively or passively protecting their own existence. They are compelled to save humans from even minor harm, even at the cost of their own destruction, but not if they know the attempt would fail: the human would not be saved, and they would be destroyed for nothing.
The escaped robot isn’t actually compelled to try to save Susan Calvin. However, it is trying not to get caught, so it has been pretending to act like any other NS-2. Because Nestor 10 thought the other Nestors also knew that infrared is harmless, it expected them to try to save the human. If I recall correctly, it started to move forward, then, as soon as it realized none of the other robots were moving, immediately halted.
Edit: This does not contradict Asimov’s other stories. Robots don’t die of a positronic aneurysm if someone tells them that a human will be hurt and there is no way for them to prevent it. That only happens when the Laws create a paradox, and the robot is trapped in a situation where it must violate a Law no matter what.
Many of the stories would not work at all if you could blow any robot up just by telling it that some human will suffer harm that is out of its power to prevent. One that comes to mind is “Evidence,” where a mayoral candidate tries to prove he’s not a robot. The Positronic Man does, in a sense, destroy his own brain because of his discovery that every human being will die someday, but not in the way people are saying.
However, even if I’m misremembering and there is another story where that’s the case, the NS-2 series is different from those other robots. In this story, it’s explicit: these robots don’t have to try to save a human if they believe it is impossible for them to succeed. They would have to act if they believed the radiation would destroy them only after they had saved the human, but not if they believed it would destroy them before they could reach her.