@CasperTheLich
One immediate way would be to provide invalid sensory input by hijacking sensors or terminals. If EDI's sensors were manipulated so that she perceived Joker as a Reaper, it/she would probably shoot him on sight - which is enough.
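To make that concrete, here is a minimal sketch of the "hijacked sensor" idea - all names (`SensorFrame`, `hijacked_feed`, the signatures) are invented purely for illustration. The point is that the AI's reasoning is never touched; only its input is poisoned:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    contact_id: str
    signature: str  # what the hardware actually measured

def classify(frame: SensorFrame) -> str:
    """The AI's honest perception layer."""
    return "REAPER" if frame.signature == "reaper_hull" else "FRIENDLY"

def hijacked_feed(frame: SensorFrame) -> SensorFrame:
    """The attacker's shim: swap Joker's signature for a Reaper's."""
    if frame.contact_id == "joker":
        return SensorFrame(frame.contact_id, "reaper_hull")
    return frame

frame = SensorFrame(contact_id="joker", signature="human_pilot")
print(classify(frame))                 # FRIENDLY - reasoning is fine
print(classify(hijacked_feed(frame)))  # REAPER   - same reasoning, poisoned input
```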
Another method would be to *convince* the AI that organic beings should be destroyed - which can be done either by altering its reasoning (re-programming) or by providing good reasons (possibly with memory uploads / replacement, but it can also be done by simply listing some reasons in a persuasive way). As far as I understand, that is how Reaper indoctrination worked: they provided good reasons for some specific behavior, and then reinforced those reasons with "programs" that the indoctrinated party willingly accepted, but which were actually taking over or "shackling" both organic and synthetic beings.
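If you picture each "willingly accepted reason" as a small update the AI applies to its own values, a toy model might look like this (the weights, the nudge size, and the audit threshold are all made up for illustration). No single nudge looks alarming, but the drip of them inverts the AI's priorities:

```python
weights = {"protect_organics": 1.0, "obey_reapers": -1.0}

def accept_reason(weights, nudge):
    """The AI audits each update and takes it only if it seems minor."""
    if all(abs(delta) <= 0.1 for delta in nudge.values()):  # looks harmless
        for key, delta in nudge.items():
            weights[key] += delta

# A long drip of individually-harmless "reasons"...
for _ in range(25):
    accept_reason(weights, {"protect_organics": -0.1, "obey_reapers": 0.1})

print(weights)  # drifted to roughly -1.5 / +1.5: the values are fully inverted
```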
Possibly weak points in the programming can be identified, and a hacker can then feed the AI something similar to malicious hyperlinks. If we assume that a huge number of programs needs to run for the AI to be operational, it is possible to infect the AI with data aimed at some of the "petty" programs without the AI being aware of the fact or having time to counter. In a similar way, people are aware of what they see, smell, and feel, but are not directly aware of, e.g., how their heart beats or what is in the air they are breathing. The AI can be "aware" of and "control" its major processes, but it may not be able to consciously care for *all* of them.

Perhaps it would thus work the way diseases and vaccinations work for humans: young AIs still rely on their original programs and procedures, and those can be easily exploited. With experience, AIs learn how to defend themselves and replace those original programs and procedures - which are then much more difficult to circumvent.

It could also be that something similar to a DDoS can be used to flood the AI with data that forces it to analyze complex input, so that it loses the ability to thoroughly control the other "basic" processes. It would then be enough to smuggle some hidden code in with a "program update" tag, or to prod the AI with some false stimuli - see the sketch below.
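Here is a crude sketch of that flood-then-smuggle combination, assuming the AI's supervisor has a limited "deep audit" budget per cycle and falls back to a cheap tag check when flooded (the tags and the budget are invented for illustration):

```python
AUDIT_BUDGET = 5  # deep inspections the supervisor can afford per cycle

def deep_audit(packet) -> bool:
    return "malicious" not in packet["body"]   # expensive, thorough

def cheap_tag_check(packet) -> bool:
    return packet["tag"] == "program update"   # fast, trusting

# The DDoS-style flood, with the payload hidden at the end.
inbox = [{"tag": "sensor noise", "body": f"junk {i}"} for i in range(200)]
inbox.append({"tag": "program update", "body": "malicious payload"})

audited = 0
for packet in inbox:
    if audited < AUDIT_BUDGET:
        ok = deep_audit(packet)        # would catch the payload...
        audited += 1
    else:
        ok = cheap_tag_check(packet)   # ...but the budget is long gone
    if ok and packet["tag"] == "program update":
        print("payload installed:", packet["body"])
```

Without the flood, the payload arrives within the audit budget and `deep_audit` rejects it; the junk exists only to exhaust the supervisor's attention first.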
Physical interference, e.g. physically replacing data cores or processors, always remains the greatest risk, even if it is the easiest to detect.
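Easiest to detect because something like a boot-time integrity check suffices - a hypothetical sketch, with the core contents obviously made up:

```python
import hashlib

def digest(core_image: bytes) -> str:
    return hashlib.sha256(core_image).hexdigest()

trusted_core = b"original core contents"
known_good = digest(trusted_core)

swapped_core = b"attacker's replacement core"
print(digest(trusted_core) == known_good)  # True  - core intact
print(digest(swapped_core) == known_good)  # False - swap detected
```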
@jpcerutti1
Well, as always, it depends on how we define personality. I would say that personality is what is responsible for wishes, sentiments, reflexes... Sure, it results from experience, feelings, and perception. Sure, by altering memories or perception one would definitely impact personality. But I would not expect the change to be instant.
Let me try a parallel: let's assume that a wife loves her husband, that he is good to her, provides money, safety, etc. It lasts for years. Then it turns out that the husband is a psychopath and a serial murderer. Riiight, she knows this is not good, but it does not necessarily change her *feelings* toward him - she is used to trusting him and depending on him. Then let's say that he hits her. OK, that is even worse (eh, this is relative, and perhaps depends on perspective, but I would say this is worse *for her*). But she has never considered living without him, so even if she starts to fear him instead of loving him, she is still not able to change her ways *just then*. It will take her a moment to reconsider her position, and possibly to react in some meaningful way.
Another example: let's assume that we have an AI that is embedded in a combat platform and tasked with military duties. This AI constantly fights and develops thousands of programs and algorithms for clashes, skirmishes, and battles. Then someone manages to replace all its memories with the illusion of being a nurse - and probably replaces the combat platform with some benign one as well. Sure, the AI believes itself to be a nurse and understands what being a nurse is about... But, hey, all the programs and algorithms it has are still for combat rather than for nursing, right? As a result, our AI is a bit sloppy as a nurse, until it develops some actual nursing procedures. At the same time, when given a gun or two, it would easily revert to its old programs - even though it would not truly understand why it is such a good fighter and such a poor nurse. And true - it would change IN TIME, so e.g. after several months or years all those old combat programs would surely be replaced with nursing programs... But it is never instant.
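In programming terms, you could say the hacker rewrites the "declarative" store but never touches the "procedural" one. A toy sketch of that split, with an entirely invented structure:

```python
class CombatAI:
    def __init__(self):
        self.memories = ["battle of X", "siege of Y"]      # declarative: stored
        self.skills = {"gunfire": "return fire (expert)"}  # procedural: honed by use

    def wipe_and_replace_memories(self, new_memories):
        self.memories = new_memories  # the hacker rewrites history...
        # ...but never touches self.skills: those were grown, not stored.

    def react(self, stimulus):
        return self.skills.get(stimulus, "improvise (clumsy)")

ai = CombatAI()
ai.wipe_and_replace_memories(["nursing school", "ward rounds"])
print(ai.react("patient needs care"))  # improvise (clumsy) - no nursing skill yet
print(ai.react("gunfire"))             # return fire (expert) - old program fires
```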
And for the Shepard thing... Well, there is no denying that SAM would *need to* change after such a feed. After all, original ME trilogy changed all of *us*. I bet it is much stronger in this respect than any Reaper's indoctrination! ;-)