Zeitgeist

Zeit·geist = spirit, essence of a particular time

A collection of food-for-thought posts and articles on technology, business, leadership and management. 

Two Duck-Rabbit Paradigm-Shift Anomalies in Physics and One in Machine Learning

You never know what a meeting for a quick coffee in Palo Alto can turn into.

What was supposed to be an ‘informal’ chat (if there is such a thing when talking with PhDs) about feedforward-feedback machine learning models turned into a philosophical discussion on duck-rabbit paradigm shifts.

(disclaimer 1: I’m just a nerd without credentials in either topic, with a genuine interest though)

First, the theory:

I see a Rabbit, You see a Duck

Thomas Kuhn described the nature of scientific revolutions back in 1962, in his book The Structure of Scientific Revolutions.

A contrarian at the time, he redefined progress by moving from development-by-accumulation (building on pre-established assumptions) to paradigm shifts: revolutions in scientific progress driven by anomalies that force a drastic change of assumptions.

In other words, Kuhn advocated a change of the rules of the pre-existing framework as the ultimate engine of scientific progress.

The Copernican revolution, Newton’s reformulation of gravity, Einstein’s relativity and Darwin’s evolution all began as ‘anomalies’.

Sun vs Earth at the center, relativity vs linear spacetime, Apes evolving into Humans vs creation

The ethos of the theory rests on identifying the right anomalies, the ones that support new paradigms. Anomalies come up as revolutions in disguise and, ultimately (and I love this), expand on the previous paradigm, which ends up nested within the new one and remains perfectly valid.

Anomalies create rejection by opposition (it’s a Duck! no it’s not, it’s a Rabbit!), but after the new paradigm takes over (…I can see the Duck now?!?) both paradigms co-exist (it’s a Duck AND a Rabbit!, illustration above).

For a true paradigm shift to happen, the anomaly needs to grow from exception into alternative: ‘it’s a Rabbit AND a Duck!’

Ok, fair enough on the lecture, but where is the Machine Learning anomaly this click-bait headline was all about?

It’s coming, bear with me, it will be worth the read. While we get there, first, a couple of jaw-dropping no-longer-anomalies-but-paradigm-shifts: the first explains the origin (and meaning) of life; the second may redefine physics forever.

1. Dissipation-driven adaptation

Jeremy England. MIT, biophysics

This incredibly simple idea is intuitively so powerful and makes so much sense that it is difficult to resist. It explains Darwinian evolution and survival of the fittest, ultimately touching on the inherent reasons why life comes to exist.

At an intuition level, in Jeremy’s words:

You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant
— Jeremy England

Jeremy, an MIT researcher, has developed a mathematical model based on current physics, asserting that a given set of atoms, exposed to a continuous source of energy (i.e. the Sun) and surrounded by a hot bath (i.e. the Ocean), will self-organize to dissipate energy in the most efficient way (i.e. life).

We, ‘carbon-based lifeforms’ in Spock’s Vulcan parlance, are much better at dissipating heat than inanimate objects. Both living and non-living systems show this efficiency-driven, self-organizing dissipation behavior.

Photosynthesis and self-replication (of RNA molecules, the precursor to DNA-based life) are consequences of dissipation-driven adaptation. Photosynthesis is about capturing sunlight energy, transforming and storing it chemically (as sugar) so it can be transported and reprocessed for plant growth and replication (hence forests).

Don’t believe it yet? See for yourself: here is Stanford professor Dr. Hubler’s experiment on self-wiring ball bearings, an example of dissipation-driven reorganization of matter.


2. Timeless physics

Julian Barbour. British Physicist. Quantum gravity.

Remember the school/college days? Speed = distance / time, power = energy / time, the fundamental theorem of calculus df/dt, Maxwell’s equations, Einstein’s relativity, Thermodynamics, etc., etc. In physics, anything dealing with change requires t (time) as a variable, doesn’t it? …maybe not any more.

How is it possible that anyone dares to defy physics by removing time from centuries-old, proven equations?

If you think about it, time is just an abstraction we use to facilitate our understanding of how things (matter in particular) transition from one state to another (change). Because we live in a universe governed by the 2nd law of thermodynamics (fighting ever-increasing entropy), we perceive linear time as our most reliable and dependable reference.

At an intuition level, if we look at the Universe as a simple but immense ‘cloud’ of matter in permanent change (motion) since the big bang occurred, and reduce our view to atoms transitioning from one state to another, we could remove time entirely.

Our Universe could be viewed as a continuum of matter in ‘motion’ (actually, according to Barbour, not motion but matter in permanent change, removing the spacetime continuum altogether).

Our senses and limited computing capacities can’t deal with such an enormous entity, so we take partial ‘pictures’ with a reference point (time) to deal with reality and make sense of it (a constrained and partial view).

Another intuitive line of thought: if Newton’s physics was based on linear time (absolute, fixed time), and Einstein’s relativity then made time relative, hence flexible (unlocking a bigger scope for physics), what if we make time so utterly flexible that it becomes irrelevant? Wouldn’t that offer an even wider and more extended view, as we remove the constraints of the time dimension itself?

If, at first, the idea is not absurd enough, then there is no hope for it
— Albert Einstein

…. and now, for something completely different (Monty Python)

3. Flexible recognition in machine learning

Tsvi Achler. Neuroscience (PhD), Medicine (MD), Electrical Engineering (BS-EECS Berkeley) — Optimizing Mind

Our brains are ‘computationally flexible’: we can immediately learn and use new patterns as we encounter them in the environment.

We actually ‘like’ to develop those patterns, as we unleash our curiosity, see and try new things for the sake of enjoyment.

Learning, tasting and traveling feed our brains with new patterns. Riding a hover-wheel, flying a drone, speaking to Amazon echo or playing a new game are examples of behaviors where our brains confront and develop new patterns for different uses and purposes.

Now, let’s look into it from a machine learning perspective:

(disclaimer 2: as said at the beginning of this post, I’m just a nerd without credentials trying to convey the message, standing on the shoulders of giants in what you’re about to read)

Tsvi Achler has been studying the brain from multidisciplinary perspectives, looking for a single, compact network with new machine learning algorithms and models that can display brain phenomena as seen in electrode recordings while performing flexible recognition.

The majority of popular models of the brain and algorithms for machine learning remain feedforward, and the problem is that even when they are able to recognize, they are not optimal for recall, symbolic reasoning or analysis.

For example, you can ask a 4-year-old why they recognized something the way they did, or what they expect a bicycle to look like. However, it is difficult to do the same with current machine learning algorithms. Let’s take the example of recognizing a bicycle over a dataset of pictures. A bicycle, from a pattern perspective, would consist of two wheels of the same or similar size, a handlebar, and some sort of supporting triangular structure.

In feedforward models the weights are optimised for successful recognition over the dataset (of a bicycle in our example). Feedforward methods will learn what is unique about a bicycle compared to all other items in the training set and learn to ignore what is not unique. The problem is that subsequently it is not easy to recall what the original components are (two wheels of the same or similar size, a handlebar, a supporting triangular structure), components that may or may not be shared with other items.

Moreover, when something new must be learned, feedforward models have to figure out what is unique to the new item compared to the bicycle and the other items they already know how to recognize. This requires re-doing the learning, rehearsing over the whole dataset.

What Tsvi suggests is to use a feedforward-feedback machine learning model to estimate uniqueness during recognition, by performing optimization on the current pattern being recognised and determining neuron activation (this is NOT optimization to learn weights, by the way).

With this distinct model, weights are no longer feedforward, and learning is more flexible and can be much faster, as there is no need to rehearse over the whole dataset.

In other words, this model is closer to how our brain actually works, as we don’t need to rehearse a whole dataset of samples to recognize new things.
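To make the contrast concrete, here is a minimal, illustrative Python sketch. This is my own toy simplification, not Tsvi’s actual algorithm: the pattern matrix, labels and update rule are made up for illustration only.

import numpy as np

# Toy stored patterns, one row per known item.
# Features: [wheel, wheel, handlebar, triangular frame, pedals, motor]
patterns = np.array([
    [1, 1, 1, 1, 1, 0],   # bicycle
    [1, 1, 1, 1, 0, 1],   # motorbike
    [1, 1, 1, 0, 0, 1],   # scooter
], dtype=float)
labels = ["bicycle", "motorbike", "scooter"]

def feedforward_score(x, W):
    # One-pass recognition: a single matrix multiply.
    # Real feedforward nets learn discriminative weights W, which entangle
    # what is unique vs. shared, making the original components hard to recall.
    return W @ x

def recognize_with_feedback(x, patterns, steps=200, lr=0.05):
    # Illustrative 'optimization during recognition': iteratively adjust the
    # activations y so that the reconstruction patterns.T @ y matches the input x.
    # No weights are learned here; only the activations for this input change.
    y = np.full(len(patterns), 1.0 / len(patterns))   # start undecided
    for _ in range(steps):
        error = x - patterns.T @ y        # feedback: what is still unexplained
        y += lr * (patterns @ error)      # move activations to reduce the error
        y = np.clip(y, 0.0, None)         # keep activations non-negative
    return y

# Learning a brand-new item is just storing one more row of features:
# no rehearsal over the whole dataset is needed in this toy setup.
patterns = np.vstack([patterns, [1, 0, 0, 0, 0, 1]])   # a hoverwheel-ish pattern
labels.append("hoverwheel")

x = np.array([1, 1, 1, 1, 1, 0], dtype=float)           # a bicycle-like input
print(dict(zip(labels, np.round(recognize_with_feedback(x, patterns), 2))))
print(dict(zip(labels, feedforward_score(x, patterns))))  # template-matching scores, for contrast

In this toy version the feedback loop is just projected gradient descent on a reconstruction error; the point is simply that recognition itself becomes an optimization over activations, which is what makes recall (which components explain the input) and fast learning of new items cheap.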

Think about it: how many samples of the much-hyped hoverwheels did you need to see before recognizing the next one on the street? Same for a bicycle.

And, most importantly, with feedforward-feedback models learning happens with significantly less data.

Much less data required to learn, and much faster learning.

Optimization during recognition also displays properties observed in brain behaviour and cognitive experiments, like prediction, oscillations, initial bursting with unrecognized patterns (followed by a more gradual return to the original activation) and, even more importantly, a speed-accuracy trade-off (so here is your catch, if you were looking for one).

All in all, feedforward-feedback models will make machines learn faster using less data.

They also better mimic how our brain works.

I met Tsvi for the first time at a talk in Mountain View, available here. I will be helping him and his startup along a journey which (like all new ventures) starts with funding, so if anyone has an interest or wants to know more, please do not hesitate to reach out and leave a message for Tsvi or me in the comments, or even better, tweet me at @efernandez.

Thanks also to Bart Peintner, Co-founder & CTO at Loop.ai, for his advice, insights and shared interest in the ideas mentioned in this article (note to ourselves: always keep bandwidth in your mind to entertain challenging ‘anomalies’).

The best answer for the question 'Will computers ever be smarter than humans?' is probably 'yes, but briefly'

Vernor Vinge coined this descriptive question-and-answer in an IEEE Spectrum essay almost a decade ago.

Artificial Intelligence, as opposed to what we have seen so far (radio, TV, the PC, smartphones or the internet), can’t be tracked as a single technology in its adoption process; it is too pervasive and has far too many different manifestations. The internet can be measured in terms of penetration, by the number of connections or by traffic; AI can’t.

However, there is a conspicuous set of drivers, or intelligence accelerators, behind AI’s evolutionary path (based on previous work by Luke Muehlhauser and Anna Salamon, Machine Intelligence Research Institute).

Here are six intelligence accelerators. Beyond these technology-based enablers and drivers, economic incentives will eventually create an arms race of AI systems, a clash of AI Clans.

Early signs of the AI arms race are already here: Google and Facebook, among others, have been protagonists of the recent machine learning open-sourcing frenzy, starting a race to scale and achieve network effects by releasing deep learning tools to the public.

MtM

1. More than Moore (MtM) Hardware: quantum computing, spintronics and related technologies. A new computing paradigm.

Algorithms

2. Better & more efficient algorithms. IBM’s Deep Blue played chess at the level of world champion Garry Kasparov in 1997 using about 1.5 trillion instructions per second (TIPS), but a program called Deep Junior did it in 2003 using only 0.015 TIPS. Thus, the computational efficiency of the chess algorithms increased by a factor of 100 in only six years (Richards and Shaw 2004).

Data

3. Big Data & Analytics (massive datasets). The greatest leaps forward in speech recognition and translation software have come not from faster hardware or smarter hand-coded algorithms, but from access to massive data sets of human-transcribed and human-translated words (Halevy, Norvig, and Pereira 2009).

Datasets are expected to increase greatly in size in the coming decades, and several technologies promise to actually outpace “Kryder’s law” (Kryder and Kim 2009), which states that magnetic disk storage density doubles approximately every 18 months (Walter 2005).

Neuroscience

4. Progress in psychology and neuroscience. Cognitive scientists have uncovered many of the brain’s algorithms that contribute to human intelligence (Trappenberg 2009; Ashby and Helie 2011).

Methods like neural networks (imported from neuroscience) and reinforcement learning (inspired by behaviorist psychology) have already resulted in significant AI progress, and experts expect this insight-transfer from neuroscience to AI to continue and perhaps accelerate (Van der Velde 2010; Schierwagen 2011; Floreano and Mattiussi 2008; de Garis et al. 2010; Krichmar and Wagatsuma 2011).

Crowd-science

5. Accelerated crowd-sourced science efforts. New collaborative tools, open source projects and other corporate-driven initiatives such as Google Scholar are already yielding results such as the Polymath Project, which is rapidly and collaboratively solving problems in mathematics (Nielsen 2011).

Economy

6. Economic incentives. As the capacities of “narrow AI” programs approach the capacities of humans in more domains (Koza 2010), there will be increasing demand to replace human workers with cheaper, more reliable machine workers (Hanson 2008, 1998; Kaas et al. 2010; Brynjolfsson and McAfee 2011).

First-mover incentives. Once AI looks to be within reach, political and private actors will see substantial advantages in building AI first. AI could make a small group more powerful than the traditional superpowers — a case of “bringing a gun to a knife fight”.

The race to AI may even be a “winner take all” scenario. Thus, political and private actors who realize that AI is within reach may devote substantial resources to developing AI as quickly as possible, provoking an AI arms race (Gubrud 1997).

An AI arms race will eventually happen amid social and economic changes. Changes that have already started.

More importantly, there are signs of structural shifts in the economy related to the core principles of the system itself. As poverty falls and wealth gradually redistributes, the economy starts shifting from a for-profit (self-interest) model toward an altruism-based one.

Not-for-profits, social enterprises, public and government organizations are all converging in what is called the 4th sector of the economy.

(continuation of: The Second Arms Race: Artificial Intelligence and An End to Moore’s Law; to be continued in: The emerging 4th sector of the economy: Social & mission driven enterprises and the AI arms race)

Ed Fernandez @efernandez

An End to Moore's Law: the Dawn of a New Computing Era

(continuation to: The Second Arms Race: Artificial Intelligence)

Many information technologies have evolved at an exponential rate (Nagy et al., 2011). Moore’s law, stating that transistor count doubles every 2 years, has been at the core of that causality for 50 years.
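To get a feel for what that exponential compounds to, here is a quick back-of-the-envelope Python sketch (assuming an idealized, clean two-year doubling period, which real process generations only approximate):

# Back-of-the-envelope: what 'doubling every 2 years' compounds to.
DOUBLING_PERIOD_YEARS = 2

def moore_growth(years: float) -> float:
    # Multiplicative growth in transistor count after `years`,
    # assuming one doubling every DOUBLING_PERIOD_YEARS.
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 20, 50):
    print(f"{years} years -> ~{moore_growth(years):,.0f}x more transistors")
# 10 years -> ~32x, 20 years -> ~1,024x, 50 years -> ~33,554,432x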

But this trend may not hold for much longer (Mack 2011, Lundstrom 2003) due to the physical limitations of silicon; or maybe we don’t see the forest for the trees.

•    There are limits to the exponential growth inherent in each paradigm. Moore’s law was not the first paradigm to bring exponential growth to computing, but rather the fifth. 

•    In the 1950s they were shrinking vacuum tubes to keep the exponential growth going and then that paradigm hit a wall. But the exponential growth of computing didn’t stop. 

•    It kept going, with the new paradigm of transistors taking over. Each time we can see the end of the road for a paradigm, it creates research pressure to create the next one. 

 •    That’s happening now with Moore’s law, even though we are still about fifteen years away from the end of our ability to shrink transistors on a flat integrated circuit. 

•    We’re making dramatic progress in creating the sixth paradigm, which is three-dimensional (quantum) molecular computing. 

Ray Kurzweil – The Singularity is Near

The dawn of a new computing era: More than Moore MtM

Moore’s law will come to an end as a consequence of physical limitations of silicon; three dimensional quantum computing is poised to take over as the new paradigm. 

Quantum computing timeline:


2013

   
•    Coherent superposition of an ensemble of approximately 3 billion qubits for 39 minutes at room temperature. The previous record was 2 seconds.


2014

    
•    Documents leaked by Edward Snowden confirm the Penetrating Hard Targets Project, by which the National Security Agency seeks to develop a quantum computing capability for cryptographic purposes.

•    Scientists transfer data by quantum teleportation over a distance of 10 feet (3.048 meters) with zero percent error rate, a vital step towards a quantum Internet.


2015

    
•    Optically addressable nuclear spins in a solid with a six-hour coherence time.

•    Quantum information encoded by simple electrical pulses.

•    Quantum error detection code using a square lattice of four superconducting qubits

 

Quantum computing promises to augment computing power a billion-fold; however, we may not need to get there to develop a strong Artificial Intelligence, one that has the capacity to improve and evolve by itself.

The expectation is that soon after we reach a strong AI matching a human brain, the ability to replicate it rapidly and limitlessly will generate a self-improving general AI, which in turn would accelerate intelligence exponentially.

@efernandez

Next: An Explosion of Intelligence: The A.I. Arms Race

The Second Arms Race: Artificial Intelligence

The second arms race is actually the third. The first one was the naval race leading up to World War I, followed by the Cold War between the United States and the Soviet Union, scaling up nuclear weaponry right after the end of World War II.
 
The human race has been able to manage the prisoner's dilemma inherent in these competitions so far, and now faces a new test with the advent of another technology breakthrough: Artificial Intelligence.
 
This is a series of articles on the topic, providing a vision of an artificial intelligence explosion in the context of current economic changes that support a shift in our economy toward an altruistic model.

The Intelligence Explosion & the Singularity: 

The Arms Race in Artificial Intelligence & The 4th Sector of the Economy

Ed Fernandez @efernandez. Palo Alto, California.

Introduction:


  • Technological singularity seems plausible, and recent advancements in machine learning and AI suggest the ‘intelligence explosion’ event is within reach in this century.

  • An arms race of narrow AI entities will happen within the framework of today’s traditional economy. Strong intelligence, or AGI, will eventually emerge, followed by an explosion of intelligence.

  • New globalization processes driven by technology are fueling the sharing economy, as well as the 4th sector, where public, non-profit, social and mission-oriented enterprises are converging.

  • The 4th sector is poised to grow and thrive, enabled by the sharing and collaborative economy; mission-driven enterprises will have more resources, enabling them to play a key role in shaping the right path for AI evolution.

  • The AI arms race will produce ‘good’ and ‘bad’ entities in the context of existing and new economic environments (traditional and altruistic economies).

  • We humans, as a species, can succeed in managing the risks of a superintelligence event, as we did in the past overcoming other technology threats (e.g. nuclear).

We have the capacity to anticipate the future with a certain degree of precision. Our prediction accuracy decreases as we increase the time horizon we aim at.

It’s pretty straightforward for us to predict short-term events, those most likely to impact our survival chances: mechanical or physical ones, like anticipating when a car is going to cross the junction we are at; or, more long-term and qualitative, anything related to replicating our gene pool, for instance the chances of dating a specific person of the opposite sex.

However, when we look further ahead in time, because of our brainpower limitations and the effort required, we struggle to foresee all potential possibilities and combinations.

Our brains, during evolution, developed a pattern-based approach to solve this problem efficiently. Identifying patterns allows us to see the big picture of a possible future, although we remain unable to predict the smaller details within (which stack up to conform to the pattern).

The Singularity, as defined by Ray Kurzweil, arguably the biggest ambassador of this concept in our times, is a period of time in the future when technological advances will evolve so rapidly (exponentially) that humanity will not be able to keep up with them.

This definition needs to be broad because it is a concept coined after careful analysis of the evolution of many technologies. It looks into historical data and the speed of change rather than specific events themselves (although the Singularity is mostly associated with the dawn of a super-intelligent entity capable of self-improvement).

We say we ‘can’t see the forest for the trees’, referring to short-term events clouding our ability to see the big picture. The opposite is true for forward-looking statements.

With sufficient historical data we can develop patterns (and see the forest) but we will remain clueless about details (trees).

To illustrate the analogy, let’s have a look at a practical example, a piece of technology we are all very familiar with: our phones.

 

Wireless phones (smartphones) have undoubtedly been the protagonists of the technology revolution in recent times.

 

The way smartphone technology has been adopted is well described by the diffusion of innovations theory (Everett Rogers - 1962), expressed graphically by an S curve (logistic function) or the widely popular bell curve (derivative of the S curve).

The process is well documented using available data from smartphone manufacturers (sales of devices over time), to the point that we can track and predict with a certain degree of accuracy what the future will be for this particular technology.

The graph for US smartphone adoption, now above 70% penetration (Horace Dediu – Asymco), follows the bell curve pattern accurately, to the point that we can predict overall sales volumes in the years to come (this would be the forest in our analogy), but we are unable to predict which manufacturer will get the greater share, in the same way we couldn’t predict the iPhone’s explosive growth since 2007 (those are the trees).
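As a rough illustration of that forest-level predictability, here is a minimal Python sketch of the S curve (logistic function) and its bell-shaped derivative; the growth rate, midpoint year and saturation ceiling below are made-up parameters for illustration, not a fit to Dediu’s data:

import numpy as np

def adoption_s_curve(t, k=0.6, t_mid=2012.0, saturation=100.0):
    # Logistic S curve: cumulative adoption (% of population) at time t.
    # k is the growth rate, t_mid the year of 50% penetration, saturation the
    # ceiling. All three are illustrative, made-up values, not fitted data.
    return saturation / (1.0 + np.exp(-k * (t - t_mid)))

def adoption_rate(t, **params):
    # Derivative of the S curve: the familiar bell curve of new adopters
    # per year, approximated numerically.
    h = 1e-3
    return (adoption_s_curve(t + h, **params) - adoption_s_curve(t - h, **params)) / (2 * h)

for year in range(2007, 2021):
    bar = "#" * int(adoption_rate(year) / 2)   # crude text plot of the bell curve
    print(f"{year}: {adoption_s_curve(year):5.1f}% penetration  {bar}")

The overall shape (the forest) is easy to forecast once a few years of data pin down the curve; which vendor captures which slice of it (the trees) is not.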

Thus, in a competitive and evolutionary environment such as the one the current economy creates, and with sufficient historical data, these well-known patterns allow us to anticipate how technology breakthroughs will penetrate markets and impact the population as a social group.

Details (trees) remain hidden though. We can’t predict which species (corporations) will be winners or losers; however, the scope and length of the ‘race’, market size and time span can be forecasted with fair accuracy.

The social aspects of technology adoption, with increasing mobile computing and ubiquitous Internet, are shrinking adoption cycles.

The number of new technology breakthroughs is also increasing over time. The intuitive idea of a singular future with unlimited wonders driven by technology makes more sense than ever.

This vision has fuelled Sci-Fi literature and movies since the ’50s. The concept of the Singularity, a future time when technology outwits human capabilities, may now be perceived as stating the obvious, a self-fulfilling prophecy.

The question is when.

 

But, not so fast…. First, let’s ‘take a selfie’ of the present and look at today’s status quo.

 

@efernandez

 

Next: An End to Moore's Law [...]

The demise of the smartphone is inevitable, and necessary


 A shorter, edited & curated version of this article, published by CNBC.com on the 20th of May, 2015, is available here.

Big thanks to Eric Rosenbaum, CNBC strategic content editor, whose edits (and a notable headline) made this article one of the top stories at CNBC.com and a social media hit the day of publishing (ranked among the top 5 CNBC stories driving engagement).

The War is Over

Smartphones coupled with mobile services and apps (mobile ecosystems) have been the protagonists of the latest disruption tide for well over a decade. Horace Dediu is probably among the best analysts who have covered the phenomenon.

The smartphone industry is a monumental business, accounting for more than $380 billion last year on more than 1.2 billion devices sold, according to IDC.

Furthermore, IDC is forecasting just under half a trillion dollars in revenues by 2018 ($451 Bn to be precise).

Despite these extraordinary numbers, this market has reached maturity and YoY growth is declining gradually, with manufacturers working on cut-throat margins and one single player monopolizing gains, seizing an estimated 93% of industry profits according to Canaccord.

No need to guess, just look around you, most likely you have one or more Apple devices on your desk or in your pockets.

Although there are an estimated 8 billion smartphones still to go into the market in the next 5 years, this industry is technically over.

...even in China.


Applying the diffusion of innovations theory (a.k.a. the diffusion of technologies bell curve), when a technology goes over 50% penetration, the remaining audience is composed of a late majority of followers and laggards.

In other words, with smartphone penetration well over 70% in more developed countries like the US, the saturation point was exceeded a long time ago, and the 8 billion shipments to come in the next 5 years will be driven by emerging markets, which are less penetrated (hence rising star Xiaomi), and by shorter product lifecycles with little incremental innovation (hence commoditization, diminishing profits for all manufacturers, and hence Apple and others moving quickly into wearables).


History repeating

The smartphone war is over. I have myself been involved in the mobile industry for nearly two decades (with Nokia and BlackBerry). I started when there were no internet-capable phones yet and GSM was just a promising standard in Europe.

This is what happened:

From a software perspective, operating systems turned the competition into a mobile ecosystems war (a.k.a. the mobile apps & services war), which ended in a duopoly with Android capturing the majority of volumes and iOS taking the lion’s share of the profits.

Before that, devices didn’t have enough computing power, nor could they deliver the user experience needed to drive adoption of content, apps and services (but, for the record, back to the future 15 years ago there was already a world of app stores, mobile services and everything we have since seen explode in the smartphone era; all of it was already working, it was simply not adopted or diffused widely).

Google’s Android and Apple’s iOS disruptions were enabled by asymmetric business models: Apple profiting from hardware margins (while investing heavily in an ever-growing iOS ecosystem & apps), Google making money out of its services rendered through a myriad of devices running Android (commoditizing the OS by giving Android AOSP away for free).

Apple’s case is ironic, as hardware sales, and the iPhone in particular, piggyback on carriers and the telco services industry (an estimated 80% of the iPhone market relies on carrier subsidies). Telco (the carriers) is a several-trillion-dollar industry providing the underlying infrastructure and data connectivity over which both hardware (smartphones) and software (apps & services, a.k.a. OTT services) have grown explosively.

Services have actually been the disruptive element driving adoption, ultimately dragging sales of hardware with them (Apple is today’s example; BlackBerry was a pioneer with this asymmetric model).

In its early days BlackBerry didn’t even intend to get into the hardware business; its offering was originally focused on the service side only. BlackBerry’s messaging proposition evolved into the incredibly popular mobile push email, which Wall Street embraced, ultimately ‘forcing’ users to buy anti-fashion QWERTY devices as a necessary ‘accident’ to get real-time email. This was back in 2001-2005.

This asymmetric offering turned into a phenomenal hardware business for BlackBerry, fostered by carrier-driven sales of push email services embedded in their data plans.

Apple follows the same pattern, building an incredible ecosystem of apps & services which in turn makes users desire and buy the hardware devices, and it’s in hardware where the margins and profits lie.

Ok, we’re done with smartphones, what’s next?

 

In any industry, once maturity has been reached, it is poised for disruption, typically even before arriving at the tipping point of the adoption bell curve. Clay Christensen’s innovator’s dilemma explains this.

In essence, the reason it is so difficult for existing firms to capitalize on disruptive innovations is that the processes and business model that make them good at the existing business actually make them bad at competing for the disruption.

But, how is this disruption going to happen in the case of smartphones?

Think of smartphones as the entry point to the online world. Now, wouldn’t it be better, easier and more convenient to access your digital world without the constraints of a small screen?

Everything outside the realm of your smartphone’s touchscreen forms the domain of disruption for this industry.

To put it bluntly, our heads can’t keep staring down at our screens. Something must be done to fix this, and the basic technologies to do it are already there.


The post-smartphone era is beautifully described by Horace Dediu in this post (a piece of poetry for analysts). 

The writing is on the wall

Early signs of what’s to come are already embedded in our devices in certain ways.

Siri, Cortana and Google Now are voice portals replacing screen access and typing. They are actually NLP (Natural Language Processing) and AI technologies combined in the cloud.

Smartphones have started talking and displaying information to TVs, projectors and now to smartwatches and wearables.

Furthermore, we now have smart glasses and head-mounted displays capable of showing virtual images (AR/VR) blended with our natural view of the physical world (MS HoloLens, Magic Leap, Oculus Rift). These devices can also understand gestures.

Everything indicates we will be using our voice instead of typing, and we will be interacting with images well outside the limitations of today’s smartphone screens.

Now, let’s recap what the smartphone wars taught us over the last decade, and, let’s couple it with the early signs of what’s to come:

  • Services are the enabler and differentiator driving hardware sales (the interface and point of entry for the user is king; think search box or voice recognition).

  • The majority of profits come from hardware sales (think iPhone revenues, hence the Apple smartwatch).

  • The smartphone industry is mature and poised for disruption (the market is ready to accept new propositions).

  • The new disruption wave of services will be driven by virtual assistants operated by voice and gestures combined with virtual reality (digital images outside phone screens), running on new smart wearable/apparel hardware (again, think voice-enabled interfaces: Siri, Cortana and Google Now as disruptors at the interface level).

  

We can discern what the new disruption devices will look like: the intersection of some sort of smart eyewear with a powerful Augmented Reality display and advanced voice recognition capabilities, coupled with wireless earbuds, as well as other wearable apparel equipped with sensors all over our body.

But more important than any of these pieces of hardware (remember, services drive hardware adoption, not the other way around), services in this new smart-wearable context will be delivered through the new access points: voice and gestures.

Access determines hardware, but the key element gluing it all together and managing how humans interact with this new mobile computing platform is Artificial Intelligence.

Artificial Intelligence in the form of a guardian angel (yes, the movie Her is an excellent representation of this concept; otherwise refer to HAL, the ill-fated computer in 2001: A Space Odyssey).

If you google ‘virtual assistant’ you’ll get around 18M entries, and you’ll struggle, browsing results endlessly, to find even the first reference to a truly artificial virtual assistant. It means we are still far from a practical ‘Her’-like experience, and for the time being we are hiring human assistants by the hour, offshore, to do the tasks.

Most likely, we will be flooded with wearables, smart glasses, apparel and all kinds of fragmented technologies while the new AI-powered, cloud-based operating system takes over control of human interaction with the world.


Whoever gets that AI guardian angel operating system to work seamlessly with humans will disrupt the disruptors and take control over the wearable hardware, which ultimately will need to bend to its (proprietary) specifications or be left out of the service proposition.

Jay Samit, author of Disrupt Yourself, said

“Disruption causes vast sums of money to flow from existing businesses and business models to new entrants”.

Let’s do some quick & dirty math: in the scenario pictured here, with the smartphone industry representing an average of roughly $350 Bn per year in revenues, there is potential to disrupt about $1,750 Bn ($350 Bn × 5 years) over the course of the next 5 years.

Big time for venture capitalists.

Fascinating times ahead, welcome to a brave new world of double back-flip disruption.

Dedicated to Graciela, my better half & lifelong soulmate, without whom I would be lost.
