A natural concern about any artificial general intelligence is that an android exhibiting it might somehow be hacked by an external malign force. We consider how plausible this is, given the particular way in which a neural net such as ChatGPT is trained and operates, and conclude that it is virtually impossible. We draw some analogies with the way human beings use their brains and learn.