In the 1970s, programmers were feeling very confident. Within ten years, they predicted, they would produce a program that was self-aware, had a mind, and could converse with a human being. Hundreds of fields would become obsolete as superintelligent artificial minds worked fiendishly 24 hours a day, with no need for sleep, lunch breaks, or coffee, requiring just a trickle of electricity. They would learn as they went, improving themselves until the job was done. Programmers set to work, naming this new field Artificial Intelligence.
Forty years later, no Artificial Intelligence is to be found, and experts still report it to be ten years away. But this is not to say that no progress has been made. Language processing and pattern recognition have improved remarkably, and decision engines are several million times better than in the 70s. Computers augment our cognition as never before. They can, with just a little training, understand your speaking voice well enough to run a command from it. They can sort pictures. They can recognize unsafe conditions in a factory in time to shut it down before anyone gets hurt. All of these things would have been manifestly impossible in the 70s. But computers still lack will, and could not hold a decent conversation with you. Nor are they likely to improve upon the programming they run. Admittedly, in conditions where we know what a good outcome looks like, genetic programming can produce code that solves the problem more efficiently than what a human programmer would come up with. But the set of problems where this applies is still very small.
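To make the genetic-programming point concrete: when "a good outcome" can be scored by a fitness function, a program can be evolved rather than written. The following is a toy sketch, not any particular system's implementation; all names and parameters are illustrative. It evolves a small arithmetic expression tree to match sample data from a known target function.

```python
import random

# Illustrative genetic-programming loop: evolve an arithmetic expression
# (a nested tuple tree over 'x' and small integers) to fit sample data.

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    # Either a terminal (the variable or a constant) or an operator node.
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, samples):
    # Lower is better: sum of squared errors against the target data.
    # This is the "we know what a good outcome looks like" part.
    return sum((evaluate(tree, x) - y) ** 2 for x, y in samples)

def mutate(tree, depth=2):
    # Replace a randomly chosen subtree with a fresh random one.
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

def evolve(samples, pop_size=200, generations=60):
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda t: fitness(t, samples))
        survivors = population[:pop_size // 4]  # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=lambda t: fitness(t, samples))

if __name__ == '__main__':
    # Target behaviour: f(x) = x*x + x, sampled over a small range.
    data = [(x, x * x + x) for x in range(-5, 6)]
    champion = evolve(data)
    print(champion, fitness(champion, data))
```

Note that the human still supplies the fitness function and the building blocks; the search only fills in the expression. That is exactly why the technique stays confined to problems where a good outcome is easy to score.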
This is not to say that work on AI has been a waste. What we've gained has been very valuable. But we've also learned that we severely underestimated the problem. It may be that the kind of consciousness we experience requires a brain, or at least a simulation of one; in any case, it's not enough to just throw more computing capacity at the problem. We also have to use that capacity wisely, and have good models of how to begin and how to improve.