The road to artificial intelligence: A case of data over theory (my notes)

Below is a summary of insights from the story published in New Scientist entitled “The road to artificial intelligence: A case of data over theory”.

Dartmouth College in Hanover, New Hampshire

  • A team gathered at Dartmouth in 1956 to create a new field called AI, spanning machine translation, computer vision, text understanding, speech recognition, control of robots and machine learning.
  • They took a top-down approach: first create a “mathematical model” of how we might process speech, text or images, then implement that model in the form of a computer program. They also expected that their work would further our understanding of our own human intelligence.
  • The Dartmouth approach rested on two assumptions about AI:
    • mathematical models (theories) could simulate human intelligence,
      AND
    • building them would help us understand our own intelligence.
  • Both assumptions were WRONG.

Data beats theory!

  • By the mid-2000s, success came in the form of a small set of statistical learning algorithms plus large amounts of data. The intelligence turned out to be more in the data than in the algorithm – and the field ditched the assumption that AI would help us understand our own intelligence.
  • A machine learns when it changes its behaviour based on experience, i.e. data. Contrary to the assumptions of 60 years ago, we don’t need to precisely describe a feature of intelligence for a machine to simulate it.
  • For example, every time you drag an email into the “spam” folder in your Gmail account, you are teaching the machine to classify spam; every time you search for a bunny rabbit and click on an image of one, you are teaching the machine what a bunny rabbit looks like (see the sketch after this list). Data beats theory!
  • For the field of AI, it has been a humbling and important lesson: simple statistical tricks, combined with vast amounts of data, have delivered the kind of behaviour that had eluded its best theoreticians for decades.
  • Thanks to machine learning and the availability of vast data sets, AI has finally been able to produce usable vision, speech, translation and question-answering systems. Integrated into larger systems, those can power products and services ranging from Siri and Amazon to the Google car.
  • A key thing about data is that it’s found “in the wild” – generated as a by-product of various activities, some as mundane as sharing a tweet or adding a smiley under a blog post.
  • Humans (engineers and entrepreneurs) have also invented a variety of ways to elicit and collect additional data, such as asking users to accept a cookie, tag friends in images or rate a product. Data became “the new oil”.
  • Every time you access the internet to read the news, do a search, buy something, play a game, or check your email, bank balance or social media feed, you interact with this infrastructure.
  • This creates a data-driven network effect: data-driven AI both feeds on this infrastructure and powers it.
  • Risk: contrary to popular belief, the risks are not existential threats to our species, but rather a possible erosion of our privacy and autonomy as data (public and private) is leveraged.
  • Winters of AI discontent – the two major winters occurred in the early 1970s and late 1980s
  • AI today has a strong – and increasingly diversified – commercial revenue stream
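
The spam and bunny-rabbit examples above boil down to labelled data plus a simple statistical learner. Below is a minimal sketch of that idea, assuming scikit-learn is available; the toy emails and labels are invented purely for illustration and are not from the article.

```python
# Minimal sketch of "learning from labelled data": a toy spam classifier.
# Assumes scikit-learn is installed; the example emails and labels below
# are made up purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each time a user drags a message into "spam", they add a labelled example.
emails = [
    "win a free prize now",
    "cheap pills, limited offer",
    "meeting moved to 3pm tomorrow",
    "here are the notes from class",
]
labels = ["spam", "spam", "ham", "ham"]

# A simple statistical model: word counts fed into Naive Bayes.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# More labelled data generally improves the classifier; no explicit
# theory of "what spam is" is ever written down.
print(model.predict(["free prize offer", "notes for tomorrow's meeting"]))
```

No theory of spam is coded anywhere – the behaviour comes entirely from the labelled examples, which is the “data beats theory” point.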

  • Artificial intelligence (AI) includes:
    1. Natural language processing,
    2. Image recognition and classification
    3. Machine learning (ML) – ML is a subset of AI, and deep learning (based on artificial neural networks – more below) is a subset of ML
  • In 1950 Alan Turing published a groundbreaking paper called “Computing Machinery and Intelligence”, in which he poses the question: can machines think?
  • He proposed the famous Turing test, which says, essentially, that a computer can be said to be intelligent if a human judge can’t tell whether they are interacting with a human or a machine.
  • The term “artificial intelligence” was coined in 1956 by John McCarthy, who organized an academic conference at Dartmouth dedicated to the topic, based on the conjecture that every aspect of learning, or any other feature of intelligence, can in principle be so precisely described that a machine can be made to simulate it.
  • The phrase “machine learning” also dates back to the middle of the last century.  In 1959, Arthur Samuel (one of the attendees of the Dartmouth conference) defined machine learning as “the ability to learn without being explicitly programmed.”
  • Samuel went on to create a computer checkers program that was one of the first able to learn from its own mistakes and improve its performance over time.
  • Like AI research, machine learning fell out of vogue for a long time, but it became popular again when the concept of data mining began to take off around the 1990s.
  • Data mining uses algorithms to look for patterns in a given set of information.
  • Machine learning goes one step further – the program changes its behavior based on what it learns
  • Years went by with “AI winters” due to a lack of big data sets and computing power
  • Then IBM’s Watson winning the game show Jeopardy! and Google’s AI beating human champions at the game of Go returned artificial intelligence to the forefront of public consciousness
  • Now machine learning is used for prediction and classification:
    • Natural language processing – IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data.
    • Image recognition – e.g. Facebook’s DeepFace face verification – https://research.facebook.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/
    • Recommender systems – Amazon highlights products you might want to purchase, Netflix suggests movies you might want to watch, and Facebook ranks your news feed.  HIVERY also uses recommender systems to help our customers get the right products into the right distribution channels at the right time and place (see the first sketch after this list).
    • Predictive analytics – HIVERY’s work in areas such as fraud detection, pricing strategy and new-product distribution and placement strategy
  • Deep learning – often implemented as an artificial neural network (neural net), a system designed to process information in ways similar to how biological brains work.
  • Deep learning uses machine learning algorithms arranged in multiple layers. It is made possible, in part, by systems that use GPUs to process a whole lot of data at once (see the second sketch after this list).
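
As a rough illustration of the recommender-system bullet above, here is a minimal item-based collaborative-filtering sketch using only NumPy. The rating matrix is made up, and this is a generic textbook-style approach, not HIVERY’s, Amazon’s or Netflix’s actual method.

```python
# Minimal sketch of an item-based recommender using only NumPy.
# The rating matrix is a made-up toy example.
import numpy as np

# Rows = users, columns = items; 0 means "not rated / not purchased".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0)
item_sim = (ratings.T @ ratings) / np.outer(norms, norms)

# Score each item for user 0 as a similarity-weighted sum of their ratings,
# then recommend the best-scoring item they have not rated yet.
user = ratings[0]
scores = item_sim @ user
scores[user > 0] = -np.inf
print("recommend item", int(np.argmax(scores)), "to user 0")
```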
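
And as a small illustration of the deep-learning bullets, here is a sketch of a neural network with two hidden layers, assuming scikit-learn’s MLPClassifier and a synthetic data set. Real deep learning systems use much larger networks, dedicated frameworks and GPUs, but the layered idea is the same.

```python
# Minimal sketch of a multi-layer ("deep") neural network classifier.
# Assumes scikit-learn; the data set is synthetic.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A small two-class data set (two interleaving half-moons).
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of 16 units each: the input is transformed layer by layer.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print("test accuracy:", net.score(X_test, y_test))
```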

Source: http://www.datamation.com/data-center/artificial-intelligence-vs.-machine-learning-whats-the-difference.html