I imagine something like "make sure humanity is happy" might work, with caveats such as "a killed person counts as extremely unhappy", "using drugs to force happiness doesn't count", and "the happiness of one people shouldn't come at the cost of the happiness of another."
But then, I'm talking in the abstract.
The general thought is that you can't do this through patchwork: "This is the ethical principle you have to follow, with exceptions a, b, c, d, e..." If your ethical principle needs that many exceptions, then it's not really a decent ethical principle, and you'd best look for something better.
I think the best idea right now for teaching ethics to an AI is saying "OK, look at humans, figure out what they think is right and wrong, now extrapolate that to a world where humans were smarter." Or something to that effect, but more precisely formulated.
Seems like it was decently done, apart from the celebrities being included as judges.
That's kind of a major point, though. Would a celebrity judge actually care about rigorous testing? Especially considering that a) celebrities tend to be people who enjoy publicity, and b) "Turing test passed" is a headline, while "Turing test failed" is not.
I'm not saying they deliberately passed a bot for the publicity, mind you, just that they don't necessarily have any strong motivation to test properly.