Zeit·geist = spirit, essence of a particular time

A collection of food-for-thought posts and articles on technology, business, leadership and management. 

A Very Short History of Artificial Intelligence

via Forbes, h/t Gil Press, contributor & author

From Ramon Llull [1308] to DARPA's XAI, Explainable AI [2017]

1308                                  Catalan poet and theologian Ramon Llull publishes Ars generalis ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.

Ramon Llull's Ars Magna


1666                                  Mathematician and philosopher Gottfried Leibniz publishes Dissertatio de arte combinatoria (On the Combinatorial Art), following Ramon Llull in proposing an alphabet of human thought and arguing that all ideas are nothing but combinations of a relatively small number of simple concepts.

1726                                  Jonathan Swift publishes Gulliver's Travels, which includes a description of the Engine, a machine on the island of Laputa (and a parody of Llull's ideas): "a Project for improving speculative Knowledge by practical and mechanical Operations." By using this "Contrivance," "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."

1763                                  Thomas Bayes develops a framework for reasoning about the probability of events. Bayesian inference will become a leading approach in machine learning.
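Bayes' framework reduces to a single update rule: posterior = likelihood × prior / evidence. A minimal sketch in Python; the diagnostic-test numbers below are invented for illustration, not from the article:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
# The sensitivity, false-positive rate, and prevalence are illustrative.

def posterior(prior, likelihood, false_positive_rate):
    """Probability the hypothesis is true given a positive observation."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# A 99%-sensitive test with a 5% false-positive rate, applied to a
# condition with 1% prevalence: the posterior is only about 17%.
p = posterior(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
```

This base-rate effect, where a positive result from an accurate test can still leave the hypothesis unlikely, is exactly the kind of reasoning Bayesian machine learning automates at scale.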

1854                                  George Boole argues that logical reasoning could be performed systematically in the same manner as solving a system of equations.

1898                                  At an electrical exhibition in the recently completed Madison Square Garden, Nikola Tesla demonstrates the world's first radio-controlled vessel. The boat was equipped with, as Tesla described, "a borrowed mind."

1914                                  The Spanish engineer Leonardo Torres y Quevedo demonstrates the first chess-playing machine, capable of king and rook against king endgames without any human intervention.

1921                                  Czech writer Karel Čapek introduces the word "robot" in his play R.U.R. (Rossum's Universal Robots). The word "robot" comes from the word "robota" (work).

1925                                  Houdina Radio Control releases a radio-controlled driverless car, travelling the streets of New York City.

1927                                  The science-fiction film Metropolis is released. It features a robot double of a peasant girl, Maria, which unleashes chaos in Berlin of 2026—it was the first robot depicted on film, inspiring the Art Deco look of C-3PO in Star Wars.

1929                                  Makoto Nishimura designs Gakutensoku, Japanese for "learning from the laws of nature," the first robot built in Japan. It could change its facial expression and move its head and hands via an air pressure mechanism.

1943                                  Warren S. McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity” in the Bulletin of Mathematical Biophysics. This influential paper, in which they discussed networks of idealized and simplified artificial “neurons” and how they might perform simple logical functions, will become the inspiration for computer-based “neural networks” (and later “deep learning”) and their popular description as “mimicking the brain.”

1949                                  Edmund Berkeley publishes Giant Brains: Or Machines That Think in which he writes: “Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill….These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”

1949                                  Donald Hebb publishes Organization of Behavior: A Neuropsychological Theory in which he proposes a theory about learning based on conjectures regarding neural networks and the ability of synapses to strengthen or weaken over time.
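Hebb's proposal is often summarized as "cells that fire together, wire together." A toy sketch of the rule; the input pattern, learning rate, and repetition count are illustrative choices, not Hebb's:

```python
# Hebbian learning: a synaptic weight strengthens in proportion to the
# joint activity of the neurons on both sides of the synapse.

def hebbian_update(weights, pre, post, lr=0.5):
    """Increase each weight by lr * (presynaptic activity) * (postsynaptic activity)."""
    return [w + lr * x * post for w, x in zip(weights, pre)]

weights = [0.0, 0.0, 0.0]
# Present the same input pattern twice while the output neuron is active;
# only the weights of the co-active inputs grow.
for _ in range(2):
    weights = hebbian_update(weights, pre=[1, 0, 1], post=1)
# weights -> [1.0, 0.0, 1.0]
```

The inactive middle input's weight stays at zero, capturing the idea that synapses strengthen only through correlated activity.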

1950                                  Claude Shannon’s “Programming a Computer for Playing Chess” is the first published article on developing a chess-playing computer program.

1950                                  Alan Turing publishes “Computing Machinery and Intelligence” in which he proposes “the imitation game” which will later become known as the “Turing Test.”

1951                                  Marvin Minsky and Dean Edmunds build SNARC (Stochastic Neural Analog Reinforcement Calculator), the first artificial neural network, using 3000 vacuum tubes to simulate a network of 40 neurons.

1952                                  Arthur Samuel develops the first computer checkers-playing program and the first computer program to learn on its own.

August 31, 1955              The term "artificial intelligence" is coined in a proposal for a "2 month, 10 man study of artificial intelligence" submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop, which took place a year later, in July and August 1956, is generally considered the official birthdate of the new field.

December 1955               Herbert Simon and Allen Newell develop the Logic Theorist, the first artificial intelligence program, which eventually would prove 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica.


1957                                  Frank Rosenblatt develops the Perceptron, an early artificial neural network enabling pattern recognition based on a two-layer computer learning network. The New York Times reported the Perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." The New Yorker called it a “remarkable machine… capable of what amounts to thought.”
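The perceptron's learning rule fits in a few lines of Python. The AND task, learning rate, and epoch count below are illustrative choices, not Rosenblatt's original setup:

```python
# Rosenblatt's perceptron rule: nudge the weights toward the target
# whenever the thresholded output is wrong.

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire if the weighted sum crosses the threshold.
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # Update each weight in proportion to the error and its input.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# AND is linearly separable, so the perceptron converges on it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

The catch that Minsky and Papert later made famous: swap in XOR, which is not linearly separable, and no amount of training will make a single-layer perceptron learn it.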

1958                                  John McCarthy develops the programming language Lisp, which becomes the most popular programming language used in artificial intelligence research.

1959                                  Arthur Samuel coins the term “machine learning,” reporting on programming a computer “so that it will learn to play a better game of checkers than can be played by the person who wrote the program.”

1959                                  Oliver Selfridge publishes “Pandemonium: A paradigm for learning” in the Proceedings of the Symposium on Mechanization of Thought Processes, in which he describes a model for a process by which computers could recognize patterns that have not been specified in advance.

1959                                  John McCarthy publishes “Programs with Common Sense” in the Proceedings of the Symposium on Mechanization of Thought Processes, in which he describes the Advice Taker, a program for solving problems by manipulating sentences in formal languages with the ultimate objective of making programs “that learn from their experience as effectively as humans do.”

1961                                  The first industrial robot, Unimate, starts working on an assembly line in a General Motors plant in New Jersey.

1961                                  James Slagle develops SAINT (Symbolic Automatic INTegrator), a heuristic program that solved symbolic integration problems in freshman calculus.

1964                                  Daniel Bobrow completes his MIT PhD dissertation titled “Natural Language Input for a Computer Problem Solving System” and develops STUDENT, a natural language understanding computer program.

1965                                  Herbert Simon predicts that "machines will be capable, within twenty years, of doing any work a man can do."

1965                                  Hubert Dreyfus publishes "Alchemy and AI," arguing that the mind is not like a computer and that there were limits beyond which AI would not progress.

1965                                  I.J. Good writes in "Speculations Concerning the First Ultraintelligent Machine" that “the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

1965                                  Joseph Weizenbaum develops ELIZA, an interactive program that carries on a dialogue in English on any topic. Weizenbaum, who wanted to demonstrate the superficiality of communication between man and machine, was surprised by the number of people who attributed human-like feelings to the computer program.

1965                                  Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi start working on DENDRAL at Stanford University. The first expert system, it automated the decision-making process and problem-solving behavior of organic chemists, with the general aim of studying hypothesis formation and constructing models of empirical induction in science.

1966                                  Shakey the robot is the first general-purpose mobile robot to be able to reason about its own actions. In a 1970 Life magazine article about this "first electronic person," Marvin Minsky is quoted saying with "certitude": "In from three to eight years we will have a machine with the general intelligence of an average human being."

1968                                  The film 2001: A Space Odyssey is released, featuring HAL, a sentient computer.

1968                                  Terry Winograd develops SHRDLU, an early natural language understanding computer program.


1969                                  Arthur Bryson and Yu-Chi Ho describe backpropagation as a multi-stage dynamic system optimization method. A learning algorithm for multi-layer artificial neural networks, it has contributed significantly to the success of deep learning in the 2000s and 2010s, once computing power had sufficiently advanced to accommodate the training of large networks.

1969                                  Marvin Minsky and Seymour Papert publish Perceptrons: An Introduction to Computational Geometry, highlighting the limitations of simple neural networks.  In an expanded edition published in 1988, they responded to claims that their 1969 conclusions significantly reduced funding for neural network research: “Our version is that progress had already come to a virtual halt because of the lack of adequate basic theories… by the mid-1960s there had been a great many experiments with perceptrons, but no one had been able to explain why they were able to recognize certain kinds of patterns and not others.”

1970                                  The first anthropomorphic robot, the WABOT-1, is built at Waseda University in Japan. It consisted of a limb-control system, a vision system and a conversation system.

1972                                  MYCIN, an early expert system for identifying bacteria causing severe infections and recommending antibiotics, is developed at Stanford University.

1973                                  James Lighthill reports to the British Science Research Council on the state of artificial intelligence research, concluding that "in no part of the field have discoveries made so far produced the major impact that was then promised," leading to drastically reduced government support for AI research.

1976                                  Computer scientist Raj Reddy publishes “Speech Recognition by Machine: A Review” in the Proceedings of the IEEE, summarizing the early work on Natural Language Processing (NLP).

1978                                  The XCON (eXpert CONfigurer) program, a rule-based expert system assisting in the ordering of DEC's VAX computers by automatically selecting the components based on the customer's requirements, is developed at Carnegie Mellon University.

1979                                  The Stanford Cart successfully crosses a chair-filled room without human intervention in about five hours, becoming one of the earliest examples of an autonomous vehicle.

1980                                  Wabot-2 is built at Waseda University in Japan, a musician humanoid robot able to communicate with a person, read a musical score and play tunes of average difficulty on an electronic organ.

1981                                  The Japanese Ministry of International Trade and Industry budgets $850 million for the Fifth Generation Computer project. The project aimed to develop computers that could carry on conversations, translate languages, interpret pictures, and reason like human beings.

1984                                  Electric Dreams is released, a film about a love triangle between a man, a woman and a personal computer.

1984                                  At the annual meeting of AAAI, Roger Schank and Marvin Minsky warn of the coming "AI Winter," predicting an imminent bursting of the AI bubble (which did happen three years later), similar to the reduction in AI investment and research funding in the mid-1970s.

1986                                  The first driverless car, a Mercedes-Benz van equipped with cameras and sensors, built at Bundeswehr University in Munich under the direction of Ernst Dickmanns, drives up to 55 mph on empty streets.

October 1986                   David Rumelhart, Geoffrey Hinton, and Ronald Williams publish ”Learning representations by back-propagating errors,” in which they describe “a new learning procedure, back-propagation, for networks of neurone-like units.”
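The procedure the paper describes, propagating the output error backward through the layers to compute weight updates, can be sketched on a tiny network. The 2-2-1 architecture, sigmoid units, XOR task, seed, learning rate, and step count below are all illustrative choices, not the paper's experiments:

```python
import math
import random

# Back-propagation on a 2-2-1 sigmoid network trained on XOR by
# gradient descent on squared error. All hyperparameters are illustrative.

random.seed(42)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Weights: input->hidden (2x2 plus biases), hidden->output (2 plus bias).
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_ho = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.5

def total_loss():
    loss = 0.0
    for x, t in data:
        h = [sigmoid(w_ih[j][0] * x[0] + w_ih[j][1] * x[1] + b_h[j]) for j in range(2)]
        y = sigmoid(w_ho[0] * h[0] + w_ho[1] * h[1] + b_o)
        loss += (t - y) ** 2
    return loss

initial = total_loss()
for _ in range(5000):
    for x, t in data:
        # Forward pass.
        h = [sigmoid(w_ih[j][0] * x[0] + w_ih[j][1] * x[1] + b_h[j]) for j in range(2)]
        y = sigmoid(w_ho[0] * h[0] + w_ho[1] * h[1] + b_o)
        # Backward pass: error signal at the output, then at each hidden unit.
        delta_o = (y - t) * y * (1 - y)
        delta_h = [delta_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent updates.
        for j in range(2):
            w_ho[j] -= lr * delta_o * h[j]
            w_ih[j][0] -= lr * delta_h[j] * x[0]
            w_ih[j][1] -= lr * delta_h[j] * x[1]
            b_h[j] -= lr * delta_h[j]
        b_o -= lr * delta_o
final = total_loss()
```

XOR is exactly the task a single-layer perceptron cannot learn; the hidden layer plus backpropagation is what gets past that limitation, which is why the 1986 paper revived interest in neural networks.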

1987                                  The video Knowledge Navigator, accompanying Apple CEO John Sculley’s keynote speech at Educom, envisions a future in which “knowledge applications would be accessed by smart agents working over networks connected to massive amounts of digitized information.”

1988                                  Judea Pearl publishes Probabilistic Reasoning in Intelligent Systems. His 2011 Turing Award citation reads: “Judea Pearl created the representational and computational foundation for the processing of information under uncertainty. He is credited with the invention of Bayesian networks, a mathematical formalism for defining complex probability models, as well as the principal algorithms used for inference in these models. This work not only revolutionized the field of artificial intelligence but also became an important tool for many other branches of engineering and the natural sciences.”


1988                                  Rollo Carpenter develops the chat-bot Jabberwacky to "simulate natural human chat in an interesting, entertaining and humorous manner." It is an early attempt at creating artificial intelligence through human interaction.

1988                                  Members of the IBM T.J. Watson Research Center publish “A statistical approach to language translation,” heralding the shift from rule-based to probabilistic methods of machine translation, and reflecting a broader shift to “machine learning” based on statistical analysis of known examples, not comprehension and “understanding” of the task at hand (IBM’s project Candide, successfully translating between English and French, was based on 2.2 million pairs of sentences, mostly from the bilingual proceedings of the Canadian parliament).

1988                                  Marvin Minsky and Seymour Papert publish an expanded edition of their 1969 book Perceptrons. In “Prologue: A View from 1988” they wrote: “One reason why progress has been so slow in this field is that researchers unfamiliar with its history have continued to make many of the same mistakes that others have made before them.”

1989                                  Yann LeCun and other researchers at AT&T Bell Labs successfully apply a backpropagation algorithm to a multi-layer neural network, recognizing handwritten ZIP codes. Given the hardware limitations at the time, it took about 3 days (still a significant improvement over earlier efforts) to train the network.

1990                                  Rodney Brooks publishes “Elephants Don’t Play Chess,” proposing a new approach to AI—building intelligent systems, specifically robots, from the ground up and on the basis of ongoing physical interaction with the environment: “The world is its own best model… The trick is to sense it appropriately and often enough.”

1993                                  Vernor Vinge publishes “The Coming Technological Singularity,” in which he predicts that “within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

1995                                  Richard Wallace develops the chatbot A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), inspired by Joseph Weizenbaum's ELIZA program, but with the addition of natural language sample data collection on an unprecedented scale, enabled by the advent of the Web.

1997                                  Sepp Hochreiter and Jürgen Schmidhuber propose Long Short-Term Memory (LSTM), a type of recurrent neural network used today in handwriting recognition and speech recognition.

1997                                  Deep Blue becomes the first computer chess-playing program to beat a reigning world chess champion.

1998                                  Dave Hampton and Caleb Chung create Furby, the first domestic or pet robot.

1998                                  Yann LeCun, Yoshua Bengio and others publish papers on the application of neural networks to handwriting recognition and on optimizing backpropagation.

2000                                  MIT’s Cynthia Breazeal develops Kismet, a robot that could recognize and simulate emotions.

2000                                  Honda's ASIMO, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in a restaurant setting.

2001                                  A.I. Artificial Intelligence is released, a Steven Spielberg film about David, a childlike android uniquely programmed with the ability to love.

2004                                  The first DARPA Grand Challenge, a prize competition for autonomous vehicles, is held in the Mojave Desert. None of the autonomous vehicles finished the 150-mile route.

2006                                  Oren Etzioni, Michele Banko, and Michael Cafarella coin the term “machine reading,” defining it as an inherently unsupervised “autonomous understanding of text.”


2006                                  Geoffrey Hinton publishes “Learning Multiple Layers of Representation,” summarizing the ideas that have led to “multilayer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it,” i.e., the new approaches to deep learning.

2007                                  Fei-Fei Li and colleagues at Princeton University start to assemble ImageNet, a large database of annotated images designed to aid in visual object recognition software research.

2009                                  Rajat Raina, Anand Madhavan and Andrew Ng publish “Large-scale Deep Unsupervised Learning using Graphics Processors,” arguing that “modern graphics processors far surpass the computational capabilities of multicore CPUs, and have the potential to revolutionize the applicability of deep unsupervised learning methods.”

2009                                  Google starts developing, in secret, a driverless car. In 2014, it became the first to pass, in Nevada, a U.S. state self-driving test.

2009                                  Computer scientists at the Intelligent Information Laboratory at Northwestern University develop Stats Monkey, a program that writes sport news stories without human intervention.

2010                                  Launch of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual AI object recognition competition.

2011                                  A convolutional neural network wins the German Traffic Sign Recognition competition with 99.46% accuracy (vs. humans at 99.22%).

2011                                  Watson, a natural language question answering computer, competes on Jeopardy! and defeats two former champions.

2011                                  Researchers at the IDSIA in Switzerland report a 0.27% error rate in handwriting recognition using convolutional neural networks, a significant improvement over the 0.35%-0.40% error rate in previous years.

June 2012                         Jeff Dean and Andrew Ng report on an experiment in which they showed a very large neural network 10 million unlabeled images randomly taken from YouTube videos, and “to our amusement, one of our artificial neurons learned to respond strongly to pictures of... cats.”

October 2012                   A convolutional neural network designed by researchers at the University of Toronto achieves an error rate of only 16% in the ImageNet Large Scale Visual Recognition Challenge, a significant improvement over the 25% error rate achieved by the best entry the year before.

March 2016                      Google DeepMind's AlphaGo defeats Go champion Lee Sedol.

The Web (especially Wikipedia) is a great source for the history of artificial intelligence. Other key sources include Nils Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements; Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach;  Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World; and Artificial Intelligence and Life in 2030. Please comment regarding inadvertent omissions and inaccuracies.

See also A Very Short History of Data Science, A Very Short History of Big Data, and A Very Short History of Information Technology (IT).


2017                                  DARPA launches XAI (Explainable Artificial Intelligence), a program aiming to enhance current machine learning techniques so that their decisions are interpretable and explainable rather than a black box.

Race of the Rest: The Unicorn Trend has gone Global

Love this pic, had to post it again, makes a perfect background for this topic.

It was Steve Case, formerly of AOL, who championed the term 'Rise of the Rest' for the emergence of tech hubs and VC outside traditional coastal zones in the US.

This is not just a US domestic phenomenon: the unicorn trend (startups valued at over $1 Bn) has gone global after a few years in the making. See here for educated predictions back in 2015 (largely neglected at the time, now borne out by the stats).

Thanks to CB Insights, who just released their Christmas newsletter yesterday, we have some closing-year stats to share:

2016 saw close to 300 deals in VC-backed companies in overlooked regions such as Latin America, Africa and Oceania, a record number of startup deals.
In terms of overall investment in these regions, the year ended with $1.3 Bn, making three years in a row with consistently over $1 Bn in capital deployed.

Even more interesting, and as anticipated almost two years ago (flashback here):

Of the 1,063 deals done in these three regions since 2009, Latin America alone took almost half of them, or 47%.

The world is truly changing.

Happy Holidays


Machine Learning: an industry perspective

by Garret Robertson - Senior Analyst & Author


Satyajeet Salvi
Ruben Ramirez
Ellen Chan

The future of technology is in machine learning. Talk of virtual assistants, neural networks and deep learning is proliferating across the Internet at a rapid pace. According to a recent CB Insights update, deal flow in this space is accelerating rapidly, with current estimates putting the industry size at over $100 billion and compound annual growth at over 50%. Despite the proliferation of this technology, it is widely misunderstood. Dreams of androids, self-driving cars and Skynet abound in the conversations of executives, the general population and everyone in between.

The machine learning industry now exceeds $100 billion, with compound annual growth estimated at over 50%

Machine learning tools are more accurately described as powerful algorithms that sort through terabytes of data in order to optimize relationships. These tools find solutions for minimizing fraud, maximizing sales revenue, maximizing lead generation, or minimizing errors in image recognition. What makes these algorithms truly special is their ability to take complex structured and/or unstructured data and find meaningful relationships. These data sources can include text-heavy sources such as emails or web sites, images, audio files, and/or individual data points.

Despite this lack of awareness, these tools are already part of consumers' lives. When consumers log into Netflix at night and pick a show from the recommended list, or add a recommended product to their basket on Amazon, machine-learning algorithms are at the heart of those lists. Nor is it limited to product recommendation: when a credit card is deactivated over suspicious transactions, that is machine learning; when social media presents ads to users, that again is machine learning. These powerful algorithms also drive services like the virtual assistants Siri, Cortana and Alexa. While these examples are visible to consumers, machine learning is also rapidly proliferating into many less visible markets like CRM, healthcare, government services and banking.

The valuation of machine learning service companies can best be described by their synergies with cloud service providers and businesses. Businesses create systems that gather data as they conduct business. These systems could include, for example, a system for tracking customer receipts, like an accounting ledger, or a customer-profiling tool, like a rewards program. Data science showed businesses how to combine these two data sets to better understand customer preferences. When the data moved to cloud services, machine-learning tools were then able to sift through much more complicated information, like images, articles, or other unstructured sources, and automate the search for interesting relationships.
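The data-combination step described above can be sketched in a few lines. The records, field names, and rewards tiers below are all invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical sketch: join a receipts ledger with a rewards-program
# profile to surface each customer's favorite item.

receipts = [
    {"customer_id": 1, "item": "coffee", "amount": 4.50},
    {"customer_id": 1, "item": "coffee", "amount": 4.50},
    {"customer_id": 1, "item": "muffin", "amount": 2.75},
    {"customer_id": 2, "item": "tea", "amount": 3.00},
]
profiles = {1: "gold", 2: "basic"}  # rewards-program tiers

def favorite_items(receipts, profiles):
    """Combine the two data sets into per-customer preference profiles."""
    counts = defaultdict(Counter)
    for r in receipts:
        counts[r["customer_id"]][r["item"]] += 1
    return {
        cid: {"tier": profiles[cid], "favorite": items.most_common(1)[0][0]}
        for cid, items in counts.items()
    }

prefs = favorite_items(receipts, profiles)
# prefs[1] -> {"tier": "gold", "favorite": "coffee"}
```

On real unstructured data the "favorite item" tally would be replaced by a learned model, but the pattern of joining transactional and profile data is the same.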

The valuation of machine learning service companies can best be described by its synergies with cloud service providers

As the outputs became better, businesses rebuilt their systems to integrate more data, necessitating more data storage. Now the systems could create profiles, link them to purchasing trends and compare them to even more complex demographic information, creating more powerful business insights. The outputs from this three-way cycle thus reinforce themselves, making the cycle more and more efficient and increasing value to all parties.

These synergies define how the industry has been growing. Because the synergies are so strong, most capital investments in this industry occur as partnerships between businesses, cloud service providers and machine learning companies. These strategic investment partners provide two critical pieces to the growth round. First, they validate the effectiveness of the machine-learning product. Second, these partnerships provide access to data from interesting industries such as fraud, healthcare, product recommendation or sales analytics allowing opportunity for the systems to become even more effective.

most capital investments in this industry occur as partnerships between businesses, cloud service providers and machine learning companies

Below is a sampling of some capital raises for machine learning companies where at least one of the investors was not a capital player but a business with a strategic interest and/or a cloud service provider.

The application of this technology is expanding every day. Nearly 70% of all investment into this space is driven by Seed and Series A funding, and more than 40% of all companies in this space are less than 3 years old. With the power these solutions have to offer, the industry is expanding rapidly, with total year-over-year transaction and investment volume increasing.

Nearly 70% of all investment into this space is driven by Seed and Series A funding
More than 40% of all companies that exist in this space are less than 3 years old

Due to the synergies in this industry, a few companies have been able to lead the charge. Some of these companies include Amazon, Microsoft, Google, IBM and Apple. This makes sense because the effectiveness of the algorithms grows as the access to relevant data grows. Companies with access to large quantities of data find more value than those with less.

Despite the power of machine learning, there remain two important hurdles for the typical company in adopting these technologies. First, company leadership needs to be aware of how these systems can help them. Understanding how data can be used to redefine and refine existing strategies is crucial in transforming the organization’s systems. General misunderstanding of machine learning has prevented many companies from adopting it.

The effectiveness of the algorithms grows as the access to relevant data grows

Second, if companies want to pursue implementation of these systems, they need to understand how. This involves not only utilizing tools to gather the data, but also knowing what kinds of solutions are already available.

There are many machine learning companies such as BigML, Amazon, IBM, Microsoft, Google or others that have out-of-the-box solutions available to a wide range of industries. Increasingly, machine learning is moving from the world of PhD’s and large teams of data scientists to tools that anyone can implement.

Despite the newness of this technology to businesses, many industries have already found interesting and powerful solutions. A summary of some industries that have been impacted by machine learning as well as some specific examples in selected industries follows.


E-Commerce:

This is one area where the use of machine learning is most visible to consumers. When customers buy products online, they leave behind with the business a treasure trove of information. Some of this information includes what products are typically bought together, how much the average consumer spends in a given purchase, what sorts of products and brands people like, and much more. While individual tickets report single-transaction information, registered users create entire shopping profiles over multiple purchases that can be analyzed.

With this kind of information, it is no wonder that Amazon reported, shortly after rolling out its product recommendation platform, that sales increased by nearly 30%. In fact this is not an uncommon story. Companies better able to identify the needs and wants of users are better able to put the products consumers want into their hands.

In addition to product recommendation, chatbots are taking over customer service. More than 11,000 bots have been added to Facebook Messenger since its launch, allowing brands and companies to use AI to connect with customers through virtual concierge services. These bots are replacing employees in physical stores, allowing companies to build long-term relationships with customers while saving labor costs.

Spring Bot is one example among many of these services. It acts as a point of contact even after purchases are made and has a wide range of customers, including Givenchy and Lanvin, brands that do not have an established e-commerce platform. An automated interaction generally costs $0.25, while a live agent interaction costs anywhere from $6 to $20. The automated interactions are also faster than typical live interactions. While the natural language processing in these systems is not perfect, the overall results speak for themselves.


Sales and CRM:

Increasingly, machine learning tools are being used to enhance sales and CRM. Traditionally, sales data has been stored and analyzed manually. In addition to the time and money spent performing these tasks, significant capital has been spent training sales teams to track the right data and analyze it effectively.

Machine learning has provided a way to collect data automatically and deliver the analysis, so sales agents can more effectively find, target and convert prospective clients into sales. InsideSales reports that some of its customers have seen a 30% increase in their sales pipeline and a 250% increase in leads. In addition to these strong revenue gains, users see lower costs for training, implementation and data entry.

Financial Services:

Increasingly, financial institutions are using automated financial advisors and planners. These tools monitor events and stock and bond price trends, and compare them to the user’s financial goals; the machine can then evaluate the user’s portfolio and recommend which stocks to buy or sell. There is no need to pay an expensive human advisor to make decisions for customers: the machine-learning tool makes decisions based on data arriving in real time.

In addition to automated financial advisors, algorithmic trading is a means to increase profitability and decrease risk in investment portfolios. Algorithmic trading systems process data on a very large scale to identify risks in investment portfolios and rebalance them to minimize that risk. As these systems gain more data, they become better at optimizing portfolios and mitigating risk.
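The risk-minimization idea can be sketched with plain mean-variance math. The covariance numbers below are made up for illustration and do not reflect any specific vendor’s system:

```python
import numpy as np

# Hypothetical annualized covariance matrix for three assets.
cov = np.array([
    [0.10, 0.02, 0.01],
    [0.02, 0.08, 0.03],
    [0.01, 0.03, 0.05],
])

current = np.array([0.5, 0.3, 0.2])  # current portfolio weights

def variance(w, cov):
    """Portfolio variance: w' Σ w."""
    return w @ cov @ w

# Closed-form minimum-variance weights: w* = Σ⁻¹1 / (1' Σ⁻¹ 1).
ones = np.ones(len(cov))
inv_times_ones = np.linalg.solve(cov, ones)
min_var_w = inv_times_ones / inv_times_ones.sum()

# Rebalancing toward min_var_w can only lower (or match) the portfolio risk.
print(variance(current, cov), variance(min_var_w, cov))
```

Real systems layer return forecasts, constraints and transaction costs on top, but the rebalancing objective is the same shape.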


It is estimated that algorithmic trading systems handle 75% of global trading volume, and the numbers get larger when looking at specific types of trading.

Between October 2012 and October 2014, algorithmic trading systems were responsible for nearly 80% of foreign exchange futures trading volume, 67% of interest rate futures volume, 62% of equity futures volume, 47% of metals and energy futures volume, and 38% of agricultural product futures volume.


Healthcare:

Clinical variation management is an area of healthcare ripe for disruption by machine learning systems. Clinical variation occurs when clinicians deviate from recommended care pathways in delivering care to similar patients. It is estimated that as much as 30% of healthcare spending is waste, but this waste is hard to identify because of the complexity of healthcare and the great degree of variation in how patients receive treatment.

A recent article by HealthCatalyst illustrates the problem. Clinical variation is complicated by two main factors. First, studies indicate that only 20% of care delivery is driven by scientific research; the other 80% is determined by subjective clinical care pathway decisions. Second, doctors would have to read hundreds of pages of primary literature every day to stay fully current, so keeping educated through normal means is next to impossible. Until that changes, clinicians deliver care without much consistency, driving waste and impeding process improvement.

Machine learning provides a means to monitor care pathways so that clinical variation is minimized. It also provides a means to identify where pathways can be improved and to optimize them with current methods in mind.

Traditional tools such as control charts, regressions and manually examined data are not robust enough to optimize the system. Machine learning tools can do the work individual data scientists and analysts cannot, and healthcare-focused machine-learning companies like Ayasdi are well positioned to disrupt this space.

Concluding Remarks

  • Machine learning as an industry is still in its infancy.
  • These examples represent only a few of the hundreds of companies that are emerging to solve next generation business problems.
  • A new industrial revolution is coming in the form of computer code and automated data science.
  • Companies that are not thinking about data and machine learning will soon find themselves unprepared.
  • The companies that have already adopted these technologies enjoy significant advantages over those that have not.

*special note of thanks to Naiss' contributors:

Satyajeet Salvi
Ruben Ramirez
Ellen Chan













Let’s Get Paid Upfront: getting over the carrot-and-stick game

Start-up and VC land is all about technology innovation, isn’t it? Or so we tend to think.

Actually, it is not, or at least not entirely: many companies are thriving on simple innovation schemes applied to processes, business models or simple tactics.

Going further, the very concept of ‘frugal innovation’, a.k.a. creative thinking in the face of constraints, a.k.a. doing more with less under critical conditions, shows the many ways problems can be solved even in the absence of proper resources.

This wonderful TED talk by Navi Radjou is an excellent primer on the topic, covering everything from fridges that run without electricity in India to advertising billboards that produce water literally out of thin air in the rain-scarce city of Lima in South America.

But today, in this post, we’re going to talk about an even simpler yet powerful innovation tactic: incentives.

First, the Theory:

Incentives and reward schemes are used in companies to drive employee and management behaviors, aligning those toward a set of objectives.

Incentives range from simple weekly, monthly or yearly salaries all the way to sophisticated cash bonuses with triggers, accelerators and more long-term-oriented equity schemes.

Across the vast array of different incentive and compensation mechanisms, one thing is true: they are all paid AFTER the activity and expected behavior have happened. Simply put, if you behave and meet your targets, you get paid (carrot); if not, you don’t (stick).

…There must be a better way. Considering everybody needs to be paid for his or her work, what if we got paid BEFORE?

With company incentives lying ahead, in the future, all rewards are perceived as a future ‘gain’ contingent on employee and management behavior, and this is an excellent transaction scheme, no doubt.
But there is a much bigger motivating factor for humans, at least twice as strong: loss aversion.

Kahneman, D. and Tversky, A. (1984) — ‘Choices, Values and Frames’


Just look at the grey area on the left (the loss quadrant) compared with the green area on the right (gain): for the same absolute incentive amount X, a loss produces 2Y of motivation.
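The asymmetry in the chart can be reproduced with the prospect-theory value function. The parameters below (α = 0.88, λ = 2.25) come from Tversky and Kahneman’s 1992 follow-up estimates and are used purely as an illustration:

```python
def value(x, alpha=0.88, loss_aversion=2.25):
    """Prospect-theory value function: concave for gains, and steeper
    (by the loss-aversion factor lambda) for losses."""
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** alpha)

gain = value(1000)       # subjective value of winning $1000
loss = value(-1000)      # subjective value of losing $1000
print(abs(loss) / gain)  # ≈ 2.25: a loss "hurts" roughly twice as much
```

The same dollar amount, framed as something already in your pocket that you could lose, carries roughly double the motivational weight.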

So, what if we re-wired incentive schemes in a way that leverages this much bigger motivating factor?

The Watney rule for Startups

In this letter to their limited partners, the venture capital firm First Round set the stage for what is to come this year for start-ups.

The return to the ‘old normal’ of adjusted valuations has put all expectations on start-ups to become deeply conscious of their expenditure and make the most efficient use of capital.

Much like the behavior astronaut Watney showed while stranded on Mars waiting for rescue in the movie The Martian.

Similarly, while waiting for the next ‘rescue’ round of capital, start-up CEOs need to use their limited resources imaginatively while securing key milestones, monetization in particular. So some started paying bonuses in advance.

Wait… how is that an efficient use of working capital?

It actually is. As Irfan Pardesi told me when we met in San Francisco in May, paying his sales execs in advance was a much better motivator for performance, which grew 15% on average and as much as 50% in some individual cases.

Irfan is a serial entrepreneur and founder of Accentuate Capital Markets, a holding company providing FX brokerage services from South Africa. He is also a member of YPO, the Young Presidents’ Organization, a non-profit organization of leaders under 45.

Paying bonuses and/or the paycheck in advance creates trust and strengthens the bonds between the company and its employees and managers, who feel much more confident about the company and respond incredibly well to the trust placed in them.

On top of that, think how good it feels to get your money upfront every month, so you can do the things you want now rather than waiting for that elusive bonus at the end of the year. This is a form of instant gratification, and your company is providing it in appreciation of your work.

Of course, there needs to be a catch for this model to work and to ensure the loss-aversion dynamics pull toward the objectives.
Irfan told me that, for this model to work, two things must be in place:

  • Close management and monitoring of the activity, so that targets are genuinely realistic and achievable, consistently, each month

  • A catch: a future deduction from the paycheck or bonus if targets are not met, something reasonable and agreed with the employees

The end result? Once the process is tuned in and you build that trusted relationship with your employees, productivity and performance are boosted significantly (especially at the end of the month, when loss-aversion feelings kick in).

The magic of a 2x motivating factor.

Actually, there is a whole new industry arising around the concept of instant gratification, the precursor to loss aversion.

ActiveHours, a Palo Alto startup, offers the possibility of getting your monthly salary paid almost in real time, through an app and in cozy hourly installments.

It’s like everyday is pay-day!

I’m particularly fond of the trusted-relationship model. In my own experience, all stellar teams and epic growth stories come from nimble teams with strong bonds of trust, both among themselves and with management. Add the right incentives at the right time, and there you go: the sky is the limit.

ed fernandez

How to make machines learn like humans: Brain-like AI & Machine Learning

AI and machine learning are all around us: a simple Google search draws 105 million results and counting, and Google Trends shows growing demand for the search term, consistent with the exponential rise of deep learning since 2013, more or less when Google’s X Lab developed a machine learning algorithm able to autonomously browse YouTube and identify the videos that contained cats.

In 1959, Arthur Samuel defined machine learning as a

> Field of study that gives computers the ability to learn without being explicitly programmed

AI and machine learning change the software paradigm computers have been based on for many decades.

In the traditional computing domain, we provide an input and feed it into an algorithm to produce the desired output. This is the rule-based framework the majority of the systems around us still work with.

We set our thermostat to a desired temperature (input) and rule-based programming (the algorithm) takes care of reading a sensor and activating the heating or AC to reach the room temperature we want (output).
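That thermostat is a compact example of the rule-based paradigm, because every behavior is spelled out by a programmer in advance. A minimal sketch, ignoring the hysteresis and safety logic a real controller would add:

```python
def thermostat(room_temp, target_temp, tolerance=0.5):
    """Rule-based control: the behavior is fully written out by a programmer,
    not learned from data."""
    if room_temp < target_temp - tolerance:
        return "heat"
    if room_temp > target_temp + tolerance:
        return "cool"
    return "off"

print(thermostat(18.0, 21.0))  # "heat"
print(thermostat(24.0, 21.0))  # "cool"
print(thermostat(21.2, 21.0))  # "off"
```

To change what this system does, a human must edit the rules and ship an update.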

The industry has worked relentlessly for many years developing better hardware, software and apps to solve a gazillion problems and use cases with programmable solutions. But still, every new functionality or feature, every single new ‘learning’, has to come via an update of the software (or of the firmware, in hardware).

> Machine learning turns the rule-based paradigm on its head.

Given a dataset (input) and a known, expected set of outcomes (output), machine learning will figure out the optimal matching algorithm so that, once trained (learning), it can autonomously predict the output corresponding to new inputs.
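By contrast, a machine-learning loop infers the rule from input/output examples. The toy below fits a linear function by gradient descent in pure Python; the data, learning rate and iteration count are arbitrary illustrative choices:

```python
# Training data: inputs paired with known outputs. Here they are generated
# by y = 2x + 1, a rule the learner is never told explicitly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2 * x + 1 for x in xs]

w, b = 0.0, 0.0  # model parameters, to be learned from the data
lr = 0.05        # learning rate

# "Training": repeatedly nudge w and b to reduce the mean squared error.
for _ in range(5000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

# After training, the model predicts outputs for inputs it has never seen.
print(w * 10 + b)  # close to 21 (= 2*10 + 1)
```

No one wrote the rule "multiply by 2 and add 1" into the program; it emerged from the examples.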

The new AI and machine learning paradigm opens up the promised land of ‘self-programming’ machines, capable of finding the right algorithm for any occasion, provided sufficient input training data is available, which is the bottleneck today.

However, despite all the incredible progress made in this field, including recent breakthroughs in deep learning, machines are far from matching the human ability to learn new patterns. Worse, we don’t know how they learned what they learned, nor how they arrive at a decision or a specific (wrong) output. We just feed them big data and ‘tweak’ the machine learning process until we get them to work and deliver the desired outputs within acceptable thresholds of accuracy; the whole thing remains a ‘black box’ (Fodor & Pylyshyn 1988, Sun 2002).

And they have become very efficient and accurate, better than humans on many fronts, no question. AI, machine learning and neural networks are now behind every major service: predicting our credit score, detecting fraud, performing face recognition, assisting us through Siri, Google, Cortana or Alexa, and soon driving our cars.

But, as in the old computing paradigm, the process of learning still requires an ‘update’, which in machine learning and neural network jargon is called ‘retraining the network’ with a new dataset and the new features required to incorporate a new learning or a new functionality.

Retraining any AI network takes experienced engineers, top-notch hardware (GPUs) and time, a lot of computing time.

That’s why we can’t teach Siri, Google, Cortana or Alexa new things on the fly. If they don’t understand what we say, they typically default to a simple web search; we can’t simply tell them ‘learn this new word’ or ‘remember my favorite team is the Red Sox’. The same applies to the large neural networks behind other services: they need to be retrained with the new data, and that takes days, weeks or even months depending on the size of the network and the dataset.

Now, imagine for a moment that we could teach machines ourselves and make them learn the same way we humans do. Wouldn’t that be awesome?
Imagine if we could teach Siri, Cortana, Google or Alexa new words or expressions, or even new action commands: ‘hey Alexa, pull my car out of the garage’.

The answer to this is in the brain.

And some researchers devoted to reverse-engineering the recognition mechanisms of the brain have unlocked brain-like algorithms and new machine learning models that solve this problem, turning the traditional machine learning ‘black box’ into a ‘clear box’ neural network where new learning can happen on the fly, in real time and at a fraction of today’s computational cost (no retraining over the whole dataset required).

Put simplistically, the underlying problem is that all traditional machine learning models are primarily feedforward based; in other words, the basic calculation in the network ultimately takes the form of a simple multiplication, where the output Y is just the input X weighted (multiplied feedforward by the weights W): Y = W * X.
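That feedforward step is just a weighted multiplication. A toy single-layer example in NumPy, with made-up weights standing in for the values training would produce:

```python
import numpy as np

# A fixed weight matrix W, as it would exist after training
# (these values are purely illustrative).
W = np.array([
    [0.2, 0.8, -0.5],
    [0.7, -0.3, 0.1],
])

x = np.array([1.0, 2.0, 3.0])  # input vector X

# Feedforward: the output is just the input weighted by W.
y = W @ x                      # Y = W * X
print(y)                       # ≈ [0.3, 0.4]
```

Recognition is cheap once W is fixed; the expensive part, which the next paragraph describes, is finding W in the first place.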

Determining the set of weights W for a given input dataset X with a known labeled output Y is called ‘training the network’. The process is long and can take hours, days or even months for large networks, but once all those weights W are calculated and refined (a process called weight optimization), the network is capable of amazing feats like facial recognition or natural language understanding.

However, as mentioned, if you want the network to learn something new, you need to go through the whole retraining process again from the beginning, recalculating and optimizing a new set of weights W.

But ‘this is not how the brain works’, Tsvi Achler, who holds an MD/PhD in neuroscience and a BSEE from Berkeley, told us at a talk in Mountain View.

‘The brain does not turn around and recalculate weights; it computes and learns differently during recognition, while the context is still available. And it does not only do feedforward: all sensory neural recognition mechanisms show some form of feedback loop.’

In all traditional machine learning methods (deep learning, convolutional networks, recurrent networks, support vector machines, perceptrons, etc.) there is a ‘disconnect’ between the learning and training phase and the recognition phase. What Tsvi Achler proposes is not to recalculate (learn) weights but to determine the neural network activation (Y, the output) by optimizing during recognition, factoring in feedback as well as feedforward and, more importantly, focusing on the current pattern in context (versus the whole training dataset).

With this approach and this new machine learning algorithm, we can ‘see’ the weights and change them in real time, while recognizing, and add new nodes (patterns) and features to the network on the fly, without going through the retraining process.

At his startup, Optimizing Mind, Tsvi ran his machine learning model on a Celeron quad-core laptop (2 GHz, 4 GB of memory), roughly equivalent to a high-end smartphone. He tested it against traditional methods such as SVM or KNN, and the scalability results were astonishing, showing up to two orders of magnitude of computational cost reduction.

The ability to embed this new machine learning technology in a smartphone would enable true real-time learning from end users’ interactions while keeping data local (no need to go back and forth to servers).

The time when we will finally be able to teach machines ourselves, and have them learn from the environment, all in real time, is getting closer.

This may even be a very early first step toward machine-to-machine learning, and with that, who knows, maybe exponential intelligence?

Exciting times ahead, what a moment to be alive.