
Good afternoon, and thanks for having me here. In this talk I want to look at the design
challenges of systems that anticipate users' needs and then act on them. That means it sits at
the intersection of the internet of things, user experience design and machine learning, and
although people have dealt with each of those disciplines before, I don't think they've ever
been combined in quite the ways they are now, or with the current enthusiasm.
The talk is divided into several parts: it starts with an overview of how I think Internet of
Things devices are primarily components of services, rather than self-contained
experiences, and how predictive behavior enables key components of those services, and then I
finish by trying to identify user experience issues around predictive behavior and
suggesting patterns to ameliorate those issues.
A couple of caveats:
- My current work in this field focuses almost exclusively on the consumer internet of things,
so I see most things through that lens. Predictive AI has a long history in industrial
applications; it's in the consumer space that we really see the UX issues.
- I want to point out that few if any of the issues I raise are new. Though the terms internet
of things and machine learning are hot right now, the ideas have been discussed in
research circles for decades. Search for ubiquitous computing, ambient intelligence, and
pervasive computing and you'll see a lot of great thought in the space. If you're really
ambitious, you can read the Artificial Intelligence and Cybernetics works of the 50s and 60s,
and you'll be surprised by the prescience of the people working in this space when the entire
world's compute power was about as much as my key fob.
- There are a lot of ideas here, and I will almost certainly under-explain something. For that I
apologize in advance. My goal here is to give you a general sense of how these pieces
connect, rather than an in-depth explanation of any one of them.
- Finally, most of my slides don't have words on them, so I'll make the complete deck with a
transcript available as soon as I'm done.

Let me begin by telling you a bit about my background. I'm a user experience
designer. I was one of the first professional Web designers. This is the navigation for a
hot sauce shopping site I designed in the spring of 1994.

I've also worked on the user experience design of a lot of consumer electronics
products from companies you've probably heard of.

I wrote a couple of books based on my experience as a designer. One is a cookbook of
user research methods, and the second describes what I think are some of the core
concerns when designing networked computational devices. I'm also married to one
of the authors of this book, so thinking about the impact of the design of connected
devices on people is kind of a family business.

I also started a couple of companies. The first, Adaptive Path, was primarily focused
on the web, and with the second one, ThingM, I got deep into developing hardware.

Today I work for PARC, the famous research lab that invented the personal computer,
object-oriented software, the tablet computer, and the laser printer, as a principal in its
Innovation Services group. We help companies reduce the risk of adopting novel
technologies using a mix of social research, design and business strategy.

I want to start by focusing on what I feel is a key aspect of consumer IoT that's often
missed when people focus on the hardware of the IoT: consumer IoT
products have a very different business model than traditional consumer electronics.

Historically, a company made an electronic product, say a turntable, they found
people to sell it for them, they advertised it, and people bought it. That was
traditionally the end of the company's relationship with the customer until that
person bought another thing, and all of the value of the relationship was in the
device. With the IoT, the sale of the device is just the beginning of the relationship,
and the physical thing holds almost no value for either the customer or the manufacturer.

When you have a multitude of connected devices and apps, value shifts to services,
and the devices, software applications and websites used to access it (its avatars)
become secondary. A camera becomes a really good appliance for taking photos for
Instagram, while a TV becomes a nice Instagram display that you don't have to log
into every time, and a phone becomes a convenient way to check your friends'
pictures on the road.
Hardware, physical things, become simultaneously more specialized and devalued as
users see through each device to the service it represents. The avatars exist to get
better value out of the service.

Amazon really gets this. Here's a telling older ad from Amazon for the Kindle. It's
saying: Look, use whatever device you want. We don't care, as long as you stay loyal to
our service. You can buy our specialized devices, but you don't have to.

When Fire was released 5 years ago, Jeff Bezos even called it a service.


Most large-scale IoT products are service avatars. They use specialized sensors and
actuators to support a service, but have little value, or don't work at all, without
the supporting service. SmartThings, which was acquired by Samsung, clearly states
its service offering right up front on their site. The first thing they say about their
product line is not what the functionality is, but what effect their service will achieve
for their customers. Their hardware products' functionality, how they will technically
satisfy the service promise, is almost an afterthought.


Compare that to X10, their spiritual predecessor that's been in the business for 30
years. All that X10 tells you is what the devices are, not what the service will
accomplish for you. I don't even know if there IS a service. Why should I care that
they have modules? I shouldn't, and I don't.


Simply connecting existing stuff to the internet does not produce customer value.


Simple connectivity helps when you're trying to maximize the efficiency of a fixed
process, but that's not a problem that most people have. We've been able to simply
connect various devices to a computer since a Tandy Color Computer could turn lights off
and on over X10 in 1983. Today you can buy a module from Particle, Electric Imp or a
dozen other companies and integrate it in a month to connect any arbitrary device to
the Internet. The problem is that that wasn't very useful then, and it's not very useful
now. If you replace the Tandy with an iPhone and the lamp with a washing machine


or an egg carton, you still have the same problem, and it's a user experience
problem.
The UX problem is that end users have to connect all the dots to coordinate between
a wide variety of devices, and to interpret the meaning of all of these sensors to
create personal value. For many simply connected products there is so little efficiency
to be had relative to the cognitive load that it's just not worth it. What's worse, the
extra cognitive load is exactly opposite to what the product promises, and customers
feel intensely disappointed, perhaps even betrayed, when they realize how little they
get out of such a product. That makes most such products effectively WORSE than
useless.
That promise gap is what distinguishes a gadget from a tool, why this egg carton is
funny, and why Quirky, who made it, filed for bankruptcy after burning through
hundreds of millions of dollars.


How do you make money in this space of dematerialized devices and cloud services?


One approach is to change from an ownership model to a subscription model. Now
the device gives access to a desired end result, without the burdens of ownership or
maintenance. The IoT technology is what provides an efficient way to track and charge
for assets. Car sharing, bike sharing, Uber and Airbnb follow this model. You don't
use it every day, so why own it? High-end clothing is going this way. Do you really
need to own that Prada handbag so you can use it twice a year?


Hewlett-Packard's printer division is really an ink company that also makes ink
consumption devices. Similarly, Amazon is trying to corner the market on all
consumables, whether they're digital


…or physical. Their Dash replenishment service can turn any device with
consumables


into an automatic Amazon reordering machine.

The Dash button is a networked computer whose only purpose is to be an avatar for
products where it's not yet economically feasible to include connected electronics,
like a macaroni and cheese box. That's going to change as the electronics get cheaper.
Moreover, the button is a sensor for people's intent, which then dovetails into the
real business model, which is not just shipping you mints when you're too lazy to
leave the house, but to identify your buying patterns, your cravings, your impulses,
so that they can predict them and ship you mints not when you ask for them, but
when you want them.


I think the real value connected services offer is their ability to make sense of the
world on our behalf, to reduce cognitive load by enabling people to interact with
devices at a higher level than simple telemetry, at the level of intentions and goals,
rather than data and control. Humans are not built to collect and make sense of huge
amounts of data across many devices, or to articulate our needs as systems of
mutually interdependent components. Computers are great at it.


The interesting thing is that this is not just theory.

Prediction and response is at the heart of the value proposition of many of the most
compelling IoT services, starting with the Nest. The Nest says that it knows you. How
does it know you? It predicts what you're going to want based on your past behavior.


Amazon's Echo speaker says it's continually learning. How is that? Predictive machine
learning based on your actions and your words.


The Birdi smart smoke alarm says it will learn over time, which is again the same
thing.


Jaguar: learning AND intelligent.


The Edyn plant watering system adapts to every change. What is that adaptation?
Predictive machine learning.


Canary, a home security service.


Cocoon, another home security system, knows. How does it know? Machine learning.


Here's Foobot, an air quality service.

[I also like how one of its implicit service promises is to identify when your kids are
smoking pot.]


Silk's Sense adapts.


Mistbox sprays water into your air conditioner to reduce your energy bill. You'd think
that's a pretty simple process, but no, it's always learning.


A number of companies are making chips that make machine learning much cheaper
and more power-efficient, which means that it's going to be very easy to install it in
every device, from street lights to medical equipment to toys. It's not just likely, it's
inevitable. Here's one that was announced a couple of weeks ago.


They do this through processes that have many names, but I'll lump them all under Machine
Learning, which is a big part of what used to be called Artificial Intelligence. Many of the core
ideas here go back to the 1950s, and it's the basis of every email spam filter, so if you've had
your spam automatically filtered, you've experienced the value of machine learning.
A big part of Machine Learning is pattern recognition. We humans evolved very sophisticated
faculties to rapidly identify visual images in all kinds of difficult conditions. You look at a
picture of an orange on a red plate and you can tell instantly that it's not a sunset, but until
recently that was really, really hard for a computer. Because of a combination of Moore's
Law and some breakthroughs, computers have gotten much better at pattern recognition in
the last couple of years.
For a computer, recognizing something starts with a process where some basic attributes of
an image are extracted, such as the shape of boundaries between clusters of pixels, or the
dominant color of a patch of an image. These are called features in machine learning. By
examining lots and lots of examples of features in an image, a machine learning system builds
a statistical model of what that cluster represents.
Basic forms of this kind of image recognition have been used industrially for decades. Lego has
a completely automated factory that injection molds a million Lego bricks an hour, examines
every single piece, and automatically sorts, bags and boxes them, all using computer vision. That's
relatively old.
Images from: Region-based Convolutional Networks for Accurate Object Detection and
Semantic Segmentation, R. Girshick, J. Donahue, T. Darrell, J. Malik, IEEE Transactions on
Pattern Analysis and Machine Intelligence; and Real-Time Image and Video Processing: From
Research to Reality, by Kehtarnavaz and Gemadia.
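
To make the feature-plus-statistical-model idea concrete, here is a minimal sketch of my own (not from the talk), assuming scikit-learn and NumPy are available. The single average-colour feature and the synthetic orange-versus-sunset data are invented for illustration; real vision systems extract far richer features.

```python
# A minimal sketch of feature extraction plus a statistical model, on toy data.
# The "feature" here is just the average colour of a patch; real systems use
# edges, textures and, in deep networks, learned clusters of clusters of features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def average_colour(patch):
    """Feature extractor: reduce an HxWx3 pixel array to one (R, G, B) vector."""
    return patch.mean(axis=(0, 1))

# Toy training data: orange-ish fruit patches vs. red/purple sunset patches.
fruit  = [rng.normal(loc=[220, 140, 30], scale=20, size=(8, 8, 3)) for _ in range(50)]
sunset = [rng.normal(loc=[200, 60, 90],  scale=20, size=(8, 8, 3)) for _ in range(50)]

X = np.array([average_colour(p) for p in fruit + sunset])
y = np.array([0] * len(fruit) + [1] * len(sunset))   # 0 = orange, 1 = sunset

model = LogisticRegression(max_iter=1000).fit(X, y)

# The model is statistical: it reports a probability, not a certainty.
test_patch = rng.normal(loc=[220, 140, 30], scale=20, size=(8, 8, 3))
print(model.predict_proba([average_colour(test_patch)]))
```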


What's new is a class of systems that understand the content of images. They don't just look
at features, but clusters of features, and clusters of clusters of features, and they can now
tell an orange from the setting sun, or a person from an airplane, or a polar bear from a
dalmatian.
This is why Facebook asks you to say who is in an image. It's not just for you, it's for their face
recognizer.
Now here's the interesting part: we're built to identify patterns in visual phenomena, but
we're pretty bad at identifying them in other kinds of situations. For example, if you've ever
tried to understand someone's food sensitivities, it's really hard to extract what that person
is reacting to, even if you keep very careful track of what they've eaten. We're just not built
for it. It was never evolutionarily important enough, so we didn't evolve an organ for it.
Computers, on the other hand, don't care, and now that we've found really good ways to find
patterns in visual images, these same techniques can find patterns in anything.
Instead of a matrix of pixels, what if you had a matrix of medical prescriptions, with each row
as the history of one person's prescriptions, from the first time that person went to the doctor
for a problem, through when they were prescribed certain things, to when they got better, or
they didn't. The same kind of system could learn the typical pattern for prescribing, say, a
wheelchair. It would essentially see the general shape of the sequence for the prescription of
a chair over time and across many people.
Then if you saw a wheelchair being prescribed that was outside of the typical pattern, you
could identify it. That's called anomaly detection. That's in fact exactly how we built a system
to identify Medicare fraud. People are terrible at that stuff, but computers are great at it.
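
As a rough illustration of anomaly detection over prescription histories, here is a toy sketch using scikit-learn's IsolationForest. The three summary features and all the numbers are my own simplification for illustration, not a description of the actual fraud-detection system.

```python
# Toy anomaly detection: each row summarises one hypothetical patient's
# prescription history as a few numbers, and an IsolationForest flags
# histories whose shape differs from the typical pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Features per patient: [days from first visit to wheelchair prescription,
#                        number of intermediate prescriptions,
#                        number of doctor visits in that period]
typical = np.column_stack([
    rng.normal(180, 30, 500),   # roughly six months of escalating care
    rng.normal(6, 2, 500),
    rng.normal(8, 2, 500),
])

detector = IsolationForest(random_state=1).fit(typical)

# A wheelchair prescribed two days after the first visit, with no intermediate
# care, falls outside the learned pattern and is flagged as an anomaly (-1).
suspicious = np.array([[2, 0, 1]])
print(detector.predict(suspicious))   # [-1] = anomaly
print(detector.predict(typical[:3]))  # mostly [1] = typical
```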


When one of the dimensions is time and another is the outcome of a series of actions
you can make a pattern recognizer that associates a sequence of actions with a set of
statistical probabilities for possible outcomes based on data collected across a wide
variety of similar situations. In other words, because people and machines behave in
fairly consistent ways, these machine learning systems can increasingly predict the
future and attempt to adapt the current situation to create a more desirable
outcome.
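
Here is a deliberately tiny sketch of that sequence-to-outcome idea: count which outcome followed each observed sequence of actions and report the resulting frequencies as probabilities. The household actions and outcomes are hypothetical, and real systems use much richer statistical models, but the shape of the mapping is the same.

```python
# Sequence of actions -> probabilities of outcomes, by simple frequency counting.
from collections import Counter, defaultdict

# Hypothetical observations: (sequence of actions, eventual outcome)
history = [
    (("alarm_off", "kettle_on", "front_door"), "left_for_work"),
    (("alarm_off", "kettle_on", "front_door"), "left_for_work"),
    (("alarm_off", "kettle_on", "tv_on"),      "working_from_home"),
    (("alarm_off", "front_door"),              "left_for_work"),
    (("alarm_off", "kettle_on", "tv_on"),      "working_from_home"),
]

outcomes_by_sequence = defaultdict(Counter)
for sequence, outcome in history:
    outcomes_by_sequence[sequence][outcome] += 1

def predict(sequence):
    """Return {outcome: probability} for a sequence seen in the training data."""
    counts = outcomes_by_sequence[sequence]
    if not counts:
        return {}                      # never seen this sequence before
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

# Given the morning so far, the system can act on the most likely outcome
# (e.g. arm the security system) while knowing how confident it is.
print(predict(("alarm_off", "kettle_on", "front_door")))   # {'left_for_work': 1.0}
```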


As interesting as these issues are, I think that, more importantly, what they represent
is that we're entering into a new relationship with our device ecosystem, a sea
change in our relationship to the built world.


Think of a sewing machine. It's very complex, but it still only acts in response to us.


Computers acting autonomously erode this simple tool/user relationship. Predictive
IoT is more than just recommending a new song; it's acting on your behalf on the
basis of its assumptions about what you want, and what's best for you.
At the dawn of computing in the late 1940s, cyberneticists like Norbert Wiener
philosophized about the increasingly complex relationship between people and
computers, and how it was fundamentally different from the way we interact with
other kinds of machines. Developers working in supervisory control of manufacturing
machines and robotics have had to deal with these questions pragmatically for about
30 years, but thanks to the Internet of Things, this is now a problem that everyone
will have to grapple with going forward.
Here's a diagram by the greats Tom Sheridan and Bill Verplank from 1978, in which
they illustrate four ways that semi-autonomous computers and humans can work
together to solve a problem.


By 2000 Sheridan expanded these ideas with Parasuraman and Wickens to define a
spectrum of responsibility between people and computers. It ranges from humans
doing all the work (this is you writing an essay) to computers doing all the work
completely autonomously (this is your car's fuel injection controller). Of course the
goal is to get a system to level 9 or 10. That's the maximum reduction in cognitive
load. However, for a system to qualify for that, it has to be very stable, its effects
need to be highly predictable and, equally importantly, its role needs to be
adequately embedded in society. It needs to be OK for a computer to take on that
level of responsibility. At the airport we trust the monorail computers to work
without human intervention, but we don't trust the plane autopilot to do that, even
though, as I understand it, planes can basically fly themselves these days.
Predictive IoT devices generally fall between 5 and 7 on this scale right now. The
problem is that this is the exact range where you're maximizing someone's cognitive
load, but not necessarily doing all the work for them, so the result of the automation
had better be worth it. This fundamentally undermines what we expect from our
tools, and when a tool is trying to anticipate what we're trying to do, it
fundamentally changes our working relationship with it.
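
For reference, here is the ten-level scale as I understand it from Parasuraman, Sheridan and Wickens (2000), paraphrased rather than quoted, expressed as a small Python table.

```python
# Levels of automation, paraphrased from Parasuraman, Sheridan & Wickens (2000).
# The wording is my summary, not a quotation from the talk or the paper.
LEVELS_OF_AUTOMATION = {
    1:  "The computer offers no assistance; the human does everything.",
    2:  "The computer offers a complete set of action alternatives.",
    3:  "The computer narrows the selection down to a few alternatives.",
    4:  "The computer suggests one alternative.",
    5:  "The computer executes that suggestion if the human approves.",
    6:  "The computer allows the human limited time to veto before acting.",
    7:  "The computer acts automatically, then necessarily informs the human.",
    8:  "The computer acts and informs the human only if asked.",
    9:  "The computer acts and informs the human only if it decides to.",
    10: "The computer decides and acts entirely on its own, ignoring the human.",
}

# Per the talk: today's predictive IoT devices mostly sit around levels 5-7,
# where the human still carries substantial cognitive load.
for level in (5, 6, 7):
    print(level, LEVELS_OF_AUTOMATION[level])
```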


The ideal scenario these things paint is pretty seductive. Imagine a world of espresso
machines that start brewing as you're thinking it's a good time for coffee, office lights
that dim when it's sunny to save energy, and mac and cheese that never runs out. The
problem is that although the value proposition is of a better user experience, it's
unspecific in the details. Previous machine learning systems were used in areas such
as predictive maintenance and finance. They were made by and for specialists. Now
that these systems are for general consumers, we have some significant questions.
How exactly will our experience of the world, our ability to use all the collected
data, become more efficient and more pleasurable?
We're still early in our understanding of predictive devices, and in the discipline of
what Aaron Shapiro of Huge has dubbed Anticipatory Design, so right now we have
more problems than solutions. I want to start by articulating the issues I've
observed in our work.


We've never had mechanical things that make significant decisions on their own. As
devices adapt their behavior, how will they communicate that they're doing so? Do
we stick a sign on them that says adapting, like the light on a video camera says
recording? Should my chair vibrate when adjusting to my posture? How will users,
or just passers-by, know which things adapt? I could end up sitting uncomfortably for
a long time, waiting for my chair to change, before realizing it doesn't adapt on its
own. How should smart devices set the expectation that they may behave differently
in what appear to be identical circumstances?
How do we know HOW intelligent these devices are? People already often project
more smarts onto devices than those devices actually have, so a couple of accurate
predictions may imply a much better model than actually exists. How do we know
we're not just homesteading the uncanny valley here?


The irony in predictive systems is that they're pretty unpredictable, at least at first.
When machine learning systems are new, they're often inaccurate and unpredictable,
which is not what we expect from our digital devices. 60%-70% accuracy is typical for
a first pass, but even 90% accuracy isn't enough for a predictive system to feel right,
since if it's making decisions all the time, it's going to be making mistakes all the time,
too. It's fine if your house is a couple of degrees cooler than you'd like, but what if
your wheelchair refuses to go to a drinking fountain next to a door because it's been
trained on doors and it can't tell that's not what you mean in this one instance? For
all the times a system gets it right, it's on the mistakes that we judge it, and a couple of
such instances can shatter people's confidence. Anxiety is a kind of cognitive load,
and a little doubt about whether a system is going to do the right thing is enough to
turn a UX that's right most of the time into one that's more trouble than it's worth.
When that happens, you've more than likely lost your customer.
Unfortunately, sooner than we think, such inaccurate predictive behavior isn't going
to be an isolated incident. Soon we're going to have 100 connected devices
simultaneously acting on predictions about us. If each is 99% accurate, then on
average one is always wrong. So the problem is: How can you design a user experience
to make a device still functional, still valuable, still fun, even when it's spewing junk
behavior? How can you design for uncertainty?
Photo CC BY 2.0, taken in 2011 by Doug Kline (Pop Culture Geek):
https://www.flickr.com/photos/popculturegeek/6300931073/
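
The arithmetic behind that claim is worth spelling out; a quick sketch:

```python
# The numbers behind "100 devices, each 99% accurate, means one is always wrong",
# plus the chance that at least one device errs on any given round of decisions.
n_devices = 100
accuracy = 0.99

expected_errors_per_round = n_devices * (1 - accuracy)
p_at_least_one_error = 1 - accuracy ** n_devices

print(round(expected_errors_per_round, 2))   # 1.0  -> on average, one device is wrong
print(round(p_at_least_one_error, 3))        # ~0.634 -> ~63% chance of at least one error
```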


The last issue comes as a result of the previous two: control. How can we maintain
some level of control over these devices, when their behavior is by definition
statistical and unpredictable?
On the one hand, you can mangle your device's predictive behavior by giving it too
much data. When I visited Nest once, they told me that none of the Nests in their
office worked well because they're constantly fiddling with them. In machine learning
this is called overtraining. On the other hand, if I have no direct way to control it other
than through my own behavior, how do I adjust it? Amazon's and Netflix's
recommendation systems, which are a kind of predictive analytics system, give you
some context about why they recommended something, but what do I do when my
only interface is a garden hose?


Here are 7 patterns I've observed in developing predictive systems that I think map to
the IoT. For most of these I'm going to be using examples from Nest and
recommender systems like Amazon's, Google's and Netflix's. Recommender systems
have been around for more than a decade and they've been extensively studied. The
move into predictive behavior is built on a combination of recommender systems and
supervisory control, so I recommend not reinventing the wheel, but learning from
those disciplines.


To build an effective anticipatory machine learning system, you need to know what to
anticipate, and to do that you need to make a model of what people need, value and
desire. Simply automating existing activities without understanding why people do
them, what their goals are in doing them, misses the point of creating value.
Predictability is very valuable, even when the predictability is in something that's
flawed. When we include anticipatory behavior in an experience, we're essentially
trading away an incredibly valuable commodity, so that trade had better be worth it.
To know whether it's worth it, we need to have a model of what people value that
we're replacing or augmenting.


What goes into that mental model?

There are lots of ways to structure how you represent people's view of the world. It's
a significant focus of cognitive science, and I can't do it justice, but here's a nice list I
grabbed from the intelligent agent literature.
As a designer, many of these boil down to decisions. What decision will an
anticipatory system help someone make? What decisions will it make on that
person's behalf? What are the parameters of that decision? For example, if I had a
real-time blood glucose monitor and insulin pump that adjusted my blood glucose in
real time, which of my decisions would it make for me? Which decisions would it tell
me how to make? Which decisions would it give me advice about?
Without a clearly articulated story about what decisions a system helps
someone make, I believe you don't have a clear story about what value it brings
them. How do you figure out what those decisions are? You talk to people. User
research. Ethnography. Leaving the office.


One of the great clichés in UX design is the search for delight, such as the seasonally
changing backgrounds in Google Calendar. My definition of delight is functionality
that subverts people's near-term expectations but supports their long-term needs and
desires. This is particularly important in designing predictive systems,
because if you subvert expectations WITHOUT supporting their needs, you get
cognitive dissonance and you have violated their mental model.


Because machine learning means your tools adapt to you and learn from you, adaptive tools are
more like apprentices than implements, and our use of them is more like a
conversation than linear tool use. In fact, I heard one of Nest's UX designers say
that he considered users' evolving relationships with the Nest as a conversation.
This is especially relevant in the era of chatbots and voice UI. If you listen to a human
conversation, it's almost never a linear, straightforward, well-structured process. We stop,
we rephrase, we ask for corrections, we talk past each other, we interrupt. More likely than
not, this is how a predictive machine learning system will interact with people, from whom it
will want guidance and confirmation, and who will ask it for recommendations or changes to its
behavior.
Ethnomethodologists and conversation analysts have been modeling how people talk to each
other for about 40 years, so I'm going to borrow some of their concepts.
Sequence organization is about organizing action in time. What happens first, what
happens next? How do the two parties expand on ambiguity? For example, if a home
security system decides you're not home, it can tell you: I see you're driving away from
home. I'm going to turn all the alarms on. You can then say: All of them except for the back
yard.
Turn-taking is critical. We don't just simply take turns when talking, we continuously
provide feedback and correct. We have expectations for whose turn is next and what
they're supposed to do. OK, chair, I'm sitting here, now it's your turn. Confirm you know
I'm here. Warn me if you're going to adjust.
Repair is backtracking, clarifying, continuing after an interruption, and so on. What happens
when the expected sequence, either from the perspective of the person or the service, is
broken and needs to be reconstructed?
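
To show how those three concepts might combine in an interface, here is a small hypothetical sketch of one turn with that security system: it announces its plan, then treats the user's reply as a confirmation, a repair, or a veto before acting. Every name and phrase here is invented; no shipping product works exactly this way.

```python
# Sequence, turn-taking and repair in one hypothetical exchange: the system
# announces an intended action and only acts after interpreting the user's turn.
def security_system_turn(user_reply, zones=("front", "back_yard", "garage")):
    system_says = "I see you're driving away from home. I'm going to arm all the alarms."
    armed = set(zones)

    if user_reply is None:                       # silence: proceed as announced
        action = f"Arming: {sorted(armed)}"
    elif user_reply.startswith("all except "):   # repair: amend the plan
        excluded = user_reply.removeprefix("all except ").strip()
        armed.discard(excluded)
        action = f"Arming: {sorted(armed)} (leaving {excluded} off)"
    elif user_reply == "don't":                  # veto
        action = "Okay, leaving the alarms off."
    else:                                        # unrecognised turn: ask for repair
        action = "Sorry, I didn't catch that. Arm everything, or make an exception?"

    return system_says, action

print(security_system_turn("all except back_yard"))
```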


In addition to teaching apprentices about our needs, we also learn from apprentices
what their capabilities are and why they made certain decisions, rather than others,
when doing the things we taught them to do. This is both part of how they learn
about us and how we learn to work with them effectively. The BMW iDrive system
was notorious for its UI, which didn't tell you what it could or couldn't do, or how to
do it. You had a knob, and that was basically it.
How do I interrogate an adaptive system to understand what it can do, and to ask it
to explain what it just did?
How do you know what Siri or Google Now has learned to do? Well, you use the
app. But what about services for which you don't have a display? Chatbots today are
essentially command line interfaces. They know specific words and sequences, but
what if those commands change over time? What if the device learns new things over
time?


The next pattern is that you need a user story for every stage of the machine learning
and prediction process, even for steps that seem invisible. How will you incentivize
people to add their behavior data to the system at all? Why should I upload my car's
dashcam video to your traffic prediction system EVERY DAY? How will you
communicate that you're extracting features? I like the way that Google speech-to-text
shows you partial phrases as you're speaking into it, and how it corrects itself. That
small bit of feedback tells people it's pulling information out, and it trains users how
to meet the algorithm halfway. How do machine-generated classifications compare to
people's organization of the same phenomena? How is a context model presented to
end users and developers? How will you get people to train it and tell you when the
model is wrong? Does the final behavior actually match their expectations?


Since predictive systems are neither consistent nor clear about the reasons for their
behavior, this can be really confusing. The same thing can behave differently in what
appear to be similar circumstances. If we undermine people's confidence in a system
by violating their expectations, they're likely to be disappointed and stop using it.
When we're dealing with a human or an animal, unpredictable behaviors are
expected and tolerated, but that's not the case with computers. What a predictive UX
needs to do is set people's expectations appropriately. It needs to explain the
nature of the device: to describe what it is trying to predict, that it's trying to adapt, that
it's going to sometimes be wrong, to explain how it's learning, and how long it'll take
before it crosses over from creating more trouble than benefit.
Recommender systems, such as Google Now, describe why a certain kind of content
was selected, and that sets the expectation that in the future the system will
recommend other things based on other kinds of content you've requested. Nest's
FAQ kind of buries the information, but it does explain that you shouldn't expect your
thermostat to make a model of when you're home or not until it's been operating for
a week or so.


About ten years ago Timo Arnall and his students tried to address a similar set of
questions around interactions with RFID-enabled devices by creating an iconography
system that communicated to potential users that these devices had functionality
that was invisible from the outside. Perhaps we need something like this for behavior
created by predictive systems?


Predictive behavior is all about time, about sequences of activities. Many predictive
UX issues around expectations and uncertainty have time as their basis: What were
you expecting to happen, and why? If it didn't happen, why not? If something else
happened, or it happened at an unexpected time, why did that happen?
Knowing that a device has acted on your behalf, and that it's going to act, and HOW
it's going to act, in the future is important to giving people a model of how it's
working, setting their expectations, and reducing the uncertainty. Nest, for example, has a
calendar of its expected behavior, and it shows that it's acting on your behalf to
change the temperature, and when you can expect that temperature to be reached.


You have to give people a clear way to teach the system and tell it when its model is
wrong. Statistical systems, by definition, don't have simple rules that can be changed.
There aren't obvious handles to turn or dials to adjust, because everything is
probabilistic. If the model is made from data collected by several devices, which
device should I interact with to get it to change its behavior? Google Now asks
whether I want more information from a site I visited; Amazon shows an explanation
of why it gave me a suggestion. Mapping this to the consumer IoT means way more
explanation than we're currently getting, which is either that a thing has happened,
or it hasn't.
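
One way to give people that handle is to pair every automated action with its reason and treat any correction as a new labelled example for the next retraining pass. A hypothetical sketch; the class names and the watering example are mine, not any vendor's API:

```python
# Show the reason for each automated action, and store corrections as training data.
from dataclasses import dataclass, field

@dataclass
class PredictedAction:
    action: str
    reason: str            # shown to the user, Amazon/Google Now style

@dataclass
class FeedbackLog:
    corrections: list = field(default_factory=list)

    def correct(self, predicted: PredictedAction, what_i_wanted: str):
        # Stored as (context, wrong prediction, right answer) for retraining.
        self.corrections.append((predicted.reason, predicted.action, what_i_wanted))

watering = PredictedAction(
    action="skip watering today",
    reason="soil moisture similar to the last three days you skipped",
)
log = FeedbackLog()
print(f"{watering.action} (because: {watering.reason})")
log.correct(watering, "water the tomato bed anyway")
print(log.corrections)
```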


Finally, don't automate. These systems shouldn't try to replace people, but support
them, augment and extend their capabilities, and help them be better at what they
want to do.
For example, Ember, from Meshfire, is a machine learning assistant for social media
management. It doesn't try to replace the social media manager. Instead it manages
the media manager's to-do list. It adds things that it thinks are going to be interesting,
deletes old things, and reprioritizes the manager's list based on what it thinks is
important. I think this is a good model for how such systems can add value to a
person's experience without creating a situation where random, unexplained
behaviors confuse people, frustrate them and make them feel powerless. Ember is an
augmentation to the social media manager; it helps that person focus on what's
important so that they can be smarter about their decisions. It doesn't try to be
smarter than they are. How can our devices HELP us, rather than trying to replace us?


Finally, an antipattern: making people do all of the training. Asking them to identify
whether a behavior is appropriate or not should be done selectively and
infrequently. Yes, it will really help your supervised model's accuracy to have people
separate the correct positives from the false positives, but unless you're paying these
people, it's incredibly annoying to have customers do it all the time. Last Friday, one
consumer IoT product with a machine learning system that I'm playing with asked me to
classify its output at 1:11 PM, then again at 1:26, and again at 1:47, and again and
again. I think it was on roughly a ten-minute sensing cycle, and at every cycle it tried to
make a decision and asked me to verify it. I'm sure it's still doing it, but I turned off
all notifications from it, and now I'm considering turning it off entirely. People will
sometimes willingly act as sensors and actuators for your system, but because they
are not machines, they will not do it all the time, and you're just going to have to find
a better way to train your model.
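
A gentler alternative is to ask only when the model is genuinely uncertain and to cap how often you ask at all. A sketch, with made-up thresholds and names:

```python
# Ask selectively and infrequently: only request a label near the decision
# boundary, and never more than a few times per day.
import datetime as dt

MAX_QUESTIONS_PER_DAY = 2
UNCERTAINTY_BAND = (0.35, 0.65)     # only ask when the model is genuinely unsure

questions_asked = {}                # date -> count

def should_ask_user(predicted_probability, now=None):
    now = now or dt.datetime.now()
    today = now.date()
    uncertain = UNCERTAINTY_BAND[0] <= predicted_probability <= UNCERTAINTY_BAND[1]
    under_budget = questions_asked.get(today, 0) < MAX_QUESTIONS_PER_DAY
    if uncertain and under_budget:
        questions_asked[today] = questions_asked.get(today, 0) + 1
        return True
    return False                    # confident, or budget spent: just log it for later

print(should_ask_user(0.52))   # True  - genuinely unsure, first ask of the day
print(should_ask_user(0.95))   # False - confident, don't bother the user
print(should_ask_user(0.50))   # True  - second and last ask of the day
print(should_ask_user(0.49))   # False - budget exhausted
```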


Finally, for me the IoT is not about the things, but about the experience created by the
services for which the things are avatars.


Ultimately we are using these tools to extend our capabilities, to use the digital world
as an extension of our minds. To do that well we have to respect that as interesting
and powerful as these technologies are, they are still in their infancy, and our job as
entrepreneurs, developers and designers will be to create systems, services, that help
people, rather than adding extra work in the name of simplistic automation. What we
want to create is a symbiotic relationship where we, and our predictive systems, work
together to create a world that provides the most value, for the least cost, for the
most people, for the longest time.
We are currently shoveling our old devices into this new medium. We have not yet
figured out what the essential capabilities of this new medium are.
Literal McLuhan quotation: "The content of the press is literary statement, as the
content of the book is speech, and the content of the movie is the novel."


Thank you.
