Friday, January 20, 2017

IN CASE YOU MAY HAVE MISSED THAT LITTLE ANNOUNCEMENT ABOUT ARTIFICIAL INTELLIGENCE...

You may have missed it, but in case you did, Mr. B.B. and many other regular readers here shared this story to make sure you didn't miss it. And this is such a bombshell that its implications and ramifications are still percolating through my mind. The long and short of it is, Google's "artificial intelligence" translation program no longer requires the quotation marks around "artificial intelligence":
And just in case you read this article and are still so shocked that you're "missing it," here it is in all of its frightening-implications glory:
Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results.
This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input.
All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning.
The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative.
Google Translate invented its own language to help it translate more effectively.
What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.
Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks.
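For the technically minded, here is a toy sketch, entirely my own and emphatically not Google's actual system, of what that "blunt instrument" of phrase-based lookup looks like, and why it has no capacity to make educated guesses at words it doesn't recognize:

# A toy sketch of phrase-based lookup translation (my own illustration,
# not Google's actual system): translation is a greedy match against a
# fixed phrase table, so an unknown word simply cannot be translated,
# let alone guessed at.

# Hypothetical miniature English-to-French phrase table.
PHRASE_TABLE = {
    "good morning": "bonjour",
    "the cat": "le chat",
    "is sleeping": "dort",
}

def phrase_based_translate(sentence: str) -> str:
    """Greedily match the longest known phrase at each position."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest candidate span first, then shorter ones.
        for length in range(len(words) - i, 0, -1):
            phrase = " ".join(words[i:i + length])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i += length
                break
        else:
            # No match: no capacity for an educated guess, so the
            # unknown word passes through untranslated.
            out.append("<" + words[i] + "?>")
            i += 1
    return " ".join(out)

print(phrase_based_translate("Good morning the cat is sleeping"))
# -> bonjour le chat dort
print(phrase_based_translate("The kitten is sleeping"))
# -> <the?> <kitten?> dort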
Now, if you read closely, right after the closing remarks in the quotation above, the author of the article, Mr. Gil Fewster, added this parenthetical comment: "I've added a correction/retraction of this paragraph in the notes." That correction/retraction comes in the form of a comment at the end of his article, to which Mr. Fewster directs the reader, from a Mr. Chris MacDonald, who stated:
Ok slow down.
The AI didn’t invent its own language nor did it get creativity. Saying that is like saying calculators are smart and one day they’ll take all the math teachers’ jobs.
What Google found was that their framework was working even better than they expected. That’s awesome because when you’re doing R&D you learn to expect things to fail rather than work perfectly.
How it's working is that, through all the data it's reading, it's observing patterns in language. What they found is that if it knew English to Korean, and English to Japanese, it could actually get pretty good results translating Korean to Japanese (through the common ground of English).
The universal language, or the interlingua, is not its own language per se. It's the commonality found between many languages. Psychologists have been talking about it for years. As a matter of fact, this work may be even more important to Linguistics and Psychology than it is to computer science.
We've already observed that swear words tend to be full of harsh sounds ("p," "c," "k," and "t") and sibilance ("s" and "f") in almost any language. If you apply the phonetic sounds to Google's findings, psychologists could make accurate observations about which sounds tend to universally correlate to which concepts. (Emphasis added)
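To make Mr. MacDonald's point concrete, here is a minimal sketch of that pivoting idea. The two stand-in functions are hypothetical, my own invention rather than real Google APIs; GNMT actually learns a shared internal representation rather than literally chaining text through English, but the "common ground" logic is the same:

# A minimal sketch of the pivoting Mr. MacDonald describes. The two
# translate_* functions below are hypothetical stand-ins, not real
# Google APIs.

def translate_ko_to_en(text: str) -> str:
    # Hypothetical stand-in for a trained Korean-to-English model.
    return {"안녕하세요": "hello"}.get(text, text)

def translate_en_to_ja(text: str) -> str:
    # Hypothetical stand-in for a trained English-to-Japanese model.
    return {"hello": "こんにちは"}.get(text, text)

def translate_ko_to_ja(text: str) -> str:
    """Korean to Japanese with no direct model: pivot through English."""
    return translate_en_to_ja(translate_ko_to_en(text))

print(translate_ko_to_ja("안녕하세요"))  # -> こんにちは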
Now, this puts that business about the computer teaching itself into a little less hysterical category and into a more "Chomskian" place; after all, the famous MIT linguist has been saying for decades that there's a common universal "grammar" underlying all languages, and not just common phonemes, as Mr. MacDonald points out in the last paragraph of the above quotation.

But the problem still remains: the computer noticed a set of patterns in one context, recognized them in another, and then mapped them into a new context unfamiliar to it. That, precisely, is analogical thinking; it is a topological process that seems almost innate in our every thought, and that, precisely, is the combustion engine of human intelligence (and, in my opinion, of any intelligence).
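For a concrete, if toy, picture of what "mapping a pattern into a new context" looks like computationally, consider the famous word-embedding analogy trick: the vectors below are made-up values of my own, not trained embeddings, but they show how the man-to-king relationship can be lifted and applied to "woman" to land on "queen."

import numpy as np

# Toy word vectors (made-up values, not trained embeddings): the
# man-to-king relationship is a direction in the space, and applying
# that same direction to "woman" lands near "queen."
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

def nearest(vec, exclude):
    """Return the word whose vector is most similar (cosine) to vec."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

# Lift the pattern from one context (man/king) and map it onto
# another (woman/?): analogy done with vector arithmetic.
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # -> queen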
And that raises some nasty high octane speculations, particularly for those who have been following my "CERN" speculations about hidden "data correlation" experiments, for such data correlations would require massive computing power, and also an ability to do more or less this pattern recognition and "mapping" function. The hidden implication is that if this is what Google is willing to talk about publicly, imagine what has been developed in private corporate and government secrecy. The real question then becomes: how long has it been going on? My high octane speculative answer is that I suspect it has been going on for quite a while, and one clue might be the financial markets themselves, now increasingly driven by computer trading algorithms, and increasingly looking as though they reflect that machine reality rather than a human market reality. Even the "flash crashes" we occasionally hear about might have some component of which we're not being told.
