A.I. isn't a science project anymore, and it's important that governments realise this sooner rather than later so they can mitigate contentious political, economic, industrial and ethical issues as they arise, and not after the fact.
This is because in an age where the journey from technological breakthrough to in-market implementation is accelerating to warp speed, "after" is too late.
Leading the way are the United States and China, with indications in this article showing that at least five times more academic research into A.I. is happening in these two countries than in any other. The gap between them and the rest of us is growing year on year.
Even the giants still face some challenges, however, around regulation and legislation, vision and direction, and smart talent acquisition. This article from HBR delves into these in more detail.
What might be most interesting is that the U.S. government has categorically stated it will not be designing any policy for "strong A.I.", or artificial general intelligence (AGI). AGI is the layman's idea of the truly intelligent A.I., defined as one that "exhibits behaviour at least as advanced as a person across the full range of cognitive tasks". This is in stark contrast to specific views advanced by organisations at MIT, Oxford, and Berkeley, who all state that we should start designing for AGI today. Time will soon tell whether these titans are leading us down the right path, but for now it's good that governments have acknowledged the role A.I. will play in the future of humanity.
Perhaps that is because it is so difficult to predict which areas of research are unlikely to have reasonably immediate commercial value, the kind governments have traditionally funded. At present, the line between basic and applied A.I. research is so blurred, with many private companies hiring academics, and the future so poorly understood, that even the most advanced governments have little insight into what sort of R&D to fund.