Despite being programmed without any tic-tac-toe strategy, the program has already learned the most important approach: play in the center as soon as you can, and if the center is taken, take the corners. The way it learned was by "brute force" - playing repetitive games until a pattern emerged - but the results, over time, are valid.

While I have created a program that has "learned", this tic-tac-toe game is no closer to sentience than a toaster. "Learning" in this context means collecting data, then making decisions based on that data. We are asking the word "learn" to do a lot if it covers what just happened in the program as well as what happens in the human brain, but such is the nature of language. We could call it (and I'm borrowing from Wikipedia here) "making data-driven predictions or decisions, through building a model from sample inputs", but that doesn't quite roll off the tongue. More importantly, most people are not going to know what it means. When humans and programs learn, they acquire information that they didn't have previously, and in this way the word is accurate.
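The learning loop described above can be sketched in a few lines. This is a minimal reconstruction of the idea, not the author's actual program: play random games from each possible opening square, tally how often X wins, and the center-first pattern emerges from nothing but counting.

```python
import random

# The eight winning lines on a 3x3 board (squares indexed 0-8).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_game(first_move):
    """X opens on first_move; both players then play at random."""
    board = [None] * 9
    board[first_move] = 'X'
    player = 'O'
    while winner(board) is None:
        empty = [i for i in range(9) if board[i] is None]
        if not empty:
            return None  # draw
        board[random.choice(empty)] = player
        player = 'X' if player == 'O' else 'O'
    return winner(board)

def learn_opening(games_per_square=3000, seed=0):
    """'Learning' here is just tallying: which opening wins most often?"""
    random.seed(seed)
    return {sq: sum(random_game(sq) == 'X'
                    for _ in range(games_per_square))
            for sq in range(9)}
```

With enough games the center (square 4) comes out on top and the corners beat the edges - the same strategy the essay describes, discovered purely by repetition, with no notion of why a center opening is strong.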
2. The metaphor extends both ways. While we have called what the computer does "learning", we also use language about how the computer works to describe the mind. The easiest example is how we think about memory. In reality there is no "place" in the brain that "holds" our memories the way there is in the computer, yet that is how most of us describe how our memories work. The storing of information in the computer and in the human mind is categorically different, yet like "learning", "memory" is asked to cover them both. The idea of how human memory works, and the terms we use to describe it, have evolved with our adoption of technology in general, and this is especially true of the computer. My grandfather had a riddle: if you call a dog's tail a leg, how many legs does it have? Four. Just because you call a tail a leg doesn't make it one.

That said, discoveries are often made by taking the metaphor used to describe one discipline and applying it to another, and the metaphor that the mind is like a computer has already led to interesting avenues of exploration in both medicine and computer science. But this only works if we maintain the understanding that the relationship is an analogy - not a fact.
Both Robert Epstein in "The empty brain" and George Zarkadakis in In Our Own Image explore this idea much further.
3. What actually IS happening is going largely unnoticed. Imagine that instead of tic-tac-toe the program was doing something far more complicated at much higher speeds. Say, for example, the game was the stock market and the goal was to make money. The program could take all the input from the current market and weigh it against past performance. It wouldn't need to understand the full complexity of the rules any more than the tic-tac-toe game does. It might only find a tiny, nearly negligible advantage, but because it's a computer it could very quickly exploit that advantage several thousand times. The program would "learn" and "evolve", but it would never need to "understand" - why waste cycles on understanding when time and money are at stake? Unlike Google and IBM, investment companies promote their software as tools, not as sentient individuals, even though their programs are very sophisticated "learning" algorithms.
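To make the "tiny advantage, exploited thousands of times" point concrete, here is a toy simulation - my own illustration, not anything a real trading firm runs. Each trade is a coin flip with a 1% edge, and repetition, not understanding, is what turns that edge into profit:

```python
import random

def simulate(edge=0.01, trades=100_000, stake=1.0, seed=0):
    """Each trade wins `stake` with probability 0.5 + edge, else loses it.

    The 'program' understands nothing about markets; it simply repeats
    a marginally favorable bet as fast as it can.
    """
    random.seed(seed)
    profit = 0.0
    for _ in range(trades):
        profit += stake if random.random() < 0.5 + edge else -stake
    return profit
```

A 1% edge is invisible in any single trade, but across 100,000 trades the expected profit is 2 x edge x trades = 2,000 stakes, while the standard deviation is only about sqrt(100,000), roughly 316 stakes - so a positive outcome is all but guaranteed, with no understanding anywhere in the loop.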
Intuitively, this makes sense because the financial software is not marketed as being "alive and conscious", but the same holds true for software like Watson and Deep Mind. Just because they are "learning" beyond what humans can understand does not mean that they are any closer to being sentient. It's a myth that originated in language's dependence on analogy, and it is exploited to sell articles or computer services, to win grants, to entertain, or to fear-monger.
Be leery and observant when people describe how technology works. Here's an example from the Wikipedia page on neural networks. Watch as the description shifts from metaphor to implied fact, without evidence or citation.
"The goal of the neural network is to solve problems in the same way that the human brain would, although several neural networks are more abstract. Modern neural network projects typically work with a few thousand to a few million neural units and millions of connections, which is still several orders of magnitude less complex than the human brain and closer to the computing power of a worm."
"Other than the simplest case of just relaying information from a sensor neuron to a motor neuron almost nothing of the underlying general principles of how information is handled by real neural networks is known."
4. I lied: there are four reasons. Think of this one as the epilogue.
Google's Deep Mind or IBM's Watson is much, much closer to the tic-tac-toe program I wrote than it is to consciousness. In a Facebook comment recently, someone said my position on A.I. was "optimistic" because I would not subscribe to the idea that Skynet (ask a nerd if you don't know this reference) was about to come on-line. I like being called an optimist, but in fact the opposite is probably true. Capitalism uses both the promise and the threat of artificial intelligence as a smokescreen. This is because actual computing is complicated, and we're not always 100% sure where it's headed or what it's doing. Investors don't like uncertainty, nor do consumers, academics, or grant givers - not appearing to be sure doesn't pay in any context. But prop up the illusion that we are on the threshold of a technological singularity, and much will be forgiven. A lot of amazing work is being done under the premise of searching for AI. Think of it like the race to get to the moon and all the technological advances that came from that research, except that in this case we're never actually going anywhere.
Google's Deep Mind winning the Go match is an amazing accomplishment for the team, but also a scary milestone for a lot of people. Go is a fantastically complicated game, and even the fastest computers cannot use brute force to examine every possible position on the board (although the program did employ the strategy of playing itself millions of times, as I did with tic-tac-toe). For many the loss was a sad milestone, and Google is definitely in the delicate position of being the company that finally "defeated humanity". To combat this, Google relies heavily on human-brain metaphors like "deep mind" and "neural networks" (or, to use Google's vernacular, the even more impressive "deep neural networks") to frame its technology. It's comforting to think that Google is working on "intelligence" because it implies that, although they are making algorithms designed to work in ways we cannot understand, soon we will be able to "talk" to them. Of course Google never says this literally; they allow the analogy to do the work implicitly.
So whup-de-do. Who cares that the search for AI is modern-day alchemy - everybody is getting what they want, right? For many that's true, but I would argue that computer literacy is vital, especially now.
In the campaign leading up to the election, Clinton described wanting to create a task force to break modern encryption. Encryption is what holds the internet together. It's how doing business on-line is possible - remember the algorithms that run the stock exchange? Without encryption, and the public's faith that it cannot be broken, the entire world economy would collapse. The government has the tough task of providing security while maintaining privacy and trust, but breaking encryption would do more damage to Western civilization than any terrorist attack ever could. She's trying to halfway pop a balloon. Clinton is smart and pro-business, but she fundamentally misunderstands just how much computers, the internet, and encryption have become the cornerstone of our civilization.
But even if it were a good idea, asking for a task force to break encryption is like thinking that with enough money and the right people, anti-gravity boots are inevitable. Some problems cannot simply be "fixed" with money and brains, but this is the impression that Silicon Valley wants to convey - part scientist, part business genius, part magician.
Here's the scary part: Clinton was the smart one.