I don't have anything novel to say about AI. But I feel I ought to at least mention that I think that if the AI alignment problem isn't already solved when the first smarter-than-human AIs are built, then we will all die.
So think carefully before contributing to AI progress, and consider working on AI alignment if you think you might be able to advance the research.
I'm not sure what the best sources are, but maybe AGI Ruin: A List of Lethalities by Eliezer Yudkowsky is a decent place to start.
And if you've already decided you want to do formal research on the subject, the Alignment Forum is probably the place to go.