
To begin, could you share your name and how your work in AI began?

My name is James Herbsleb, and I guess my work in AI is mainly teaching a couple of classes: for one thing, ethics and policy issues in computing, and I'm developing a new class about manipulation and influence online that I'll be teaching in the fall. Also, as director of the societal computing program, I oversee a program where a lot of the people are involved in various forms of AI, so that also sort of involves me in the area.

Was your work at any point influenced by popular culture at all?

Yeah I think my own research doesn’t really involve AI, frankly. My research is on things like
collaboration and software development, collaborative environments and how people use them,
so it doesn’t really lean very much on AI frankly. I’m very interested in how computing
technology in general which of course includes AI impacts society and how we can use artificial
intelligence to understand more how society operates. I think right now we have a lot of systems
based on AI and other kinds of technologies that are deployed by companies for particular
purposes serving particular business models and often they provide fantastic services like
Google search and Facebook and so on. But what I think we don’t pay attention to is the
emergent effects that you have from all these things being deployed out into the world. For
example, Facebook has their engagement algorithm which is really very effective at finding
things and showing you things that you want to see and that will keep you engaged. YouTube
has something similar to show you the next-up YouTube video, but we don't really know very much about what that does to people's opinions. What does that do to how they see the
world? What does it do to groups of people as they split apart and live in these isolated
information bubbles? And those are some of the issues that I think societal computing needs to
deal with. We want to have a group of researchers that are looking at these things, trying to
understand them, trying to figure out what these technologies are doing to us and figure out
what kinds of technology policy might be appropriate for regulating them. And trying to make
sure that all these great technologies have the effects we want and don’t have a lot of
unintended consequences.

So, can you talk a little bit about the two classes that you mentioned?

Sure, so one of them is called ethics and policy issues in computing. It's for undergraduates. It was sort of aimed at sophomores, but I find that I get undergraduates from freshmen to seniors, and they come from all over the university, and we do a couple of different
things. One is we look a little bit at ethical theory, so we have a framework for thinking
systematically about the kinds of effects we want to see. You know, whether they're technology students or humanities students, students often come into this area of talking and thinking about technology with just opinions; they think that this should be the way or that should be the way, or they don't like this or they do like that. Of course, you
can’t really get anywhere persuading other people just by saying you don’t like something; you
just end up butting heads and not really resolving anything. I think that's where the importance of ethical theory comes in: if you can agree on some way to approach what you think is good and what is right, then you can relate new technologies and their impact to those things. For example, utilitarian theories look at the consequences of actions, and there are several different
theories. So, we pay attention to theory, learn a little bit about that, and then we have what are effectively case studies of a number of different technologies, where we look at things like the technology that police departments are starting to use to deploy police resources, and the intent of that versus the actual impact of it, because it usually trains on unbiased data and creates biased effects. We look at face recognition, we look at autonomous weapons, just a whole range of
topics. So, I hope the students come out of that able to think not just as technologists thinking
about gee how can I build this, but also what are the impacts of this and think about the ethical
issues that are involved. What’s interesting to me is we are starting to see in places like
Microsoft and Amazon and Google and elsewhere, engineers just rising up and refusing to work
on certain things, so I think that's all to the good. It shows that everybody is doing a good job educating people and kind of making them think about the consequences of what we're doing. I
don’t think that’s a substitute for appropriate regulation necessarily, but it’s very encouraging to
me.

When did you start teaching this class?

That class I started teaching maybe ten or twelve years ago. My first class had three students
and now I usually have about forty, and it satisfies a requirement for some majors now, so I'm expecting it to go up from there, but there seems to be a lot of interest in that area, especially
these days people are becoming much more concerned. I think tech companies have fallen
from grace a little bit in the last year or so and there’s a lot more interest in thinking about all the
consequences of these technologies.

And the new class you’re developing?

It’s going to look more specifically at how technologies are used to influence peoples’ opinions,
create dissension. Sometimes technologies themselves because of things like engagement
algorithms may lead to opinions that become more extreme as you follow the recommended
trail of videos on YouTube, for example, there’s some evidence that leads you toward more
extreme opinions. So here, as opposed to the other courses which is ethics and technology, this
is much more behavioral science and how persuasion happens. There’s a lot known among
psychologists for example about how opinions change, how you can influence people’s
behavior, sales tactics, and all those kinds of things. So, we’re going to study the behavioral
science of it, and then again through case studies look at different technologies. Look at how
Facebook has been used to create dissension, for example, through a series of technologies
and kind of see how they work, so that when people see news stories on Facebook, or are offered a new video on YouTube, or see advertisements in a particular order, they're aware of the impact that can have on them.

So, digging into that a little further, can you think of an example of an accurate or useful form of communication, particularly about an AI system, to broad publics?

I think we’ve been getting a very rosy picture from most of the technology companies. In
response to criticisms of both Google, for favoring their own services, for example, as the EU has hit them for, and Facebook, which has finally admitted that their system has been used for manipulative purposes, I think they're really starting to try to come to terms with that. I mean, they're still profit-seeking enterprises, so that is going to be their main motivation, which is fine, but they're starting
to see that they have some responsibilities beyond that. And I think they’re taking a hard look at
that. Twitter is starting to get rid of fake accounts, so you see a lot of activity now that I think is
very good. I think, you know, as the evidence of the Google and Facebook and Microsoft engineers shows, the ones who wouldn't do certain kinds of things because they thought it was unethical, most people care about these things. I don't see tech companies as being evil giants; I just see that as long as their motivation is primarily economic, which it naturally is, these kinds of concerns are not at the focus of their attention. So, governments and people have brought
that to their attention and I think they care too, and they’re actually, I think, trying to solve some
of these problems. I’m not sure we can leave it to businesses alone but I think they’re becoming
much better about that.

So, you’ve been teaching in this area of societal impact for some time now so what do you see
as your individual responsibility in communicating these systems broadly, particularly AI?

I think, as an educator, my responsibility is to help my students understand the ethical dimensions of their engineering work. I have worried for some time that we are sending off a lot
of really super smart students, very technically capable students to mainly four big tech
companies, I understand that’s where most of them end up according to our surveys and that’s
great but I think if they’re purely technically focused. You know, these are people who are going
to rise to positions of responsibility, a good proportion of them will, and unless they have a good
grounding in ethics and thinking about the consequences of their decisions beyond the welfare
of the company, the welfare of society, I think we will be much better off if they have that kind of
background. So, my responsibility as an educator is to try to do the best I can to disseminate
that kind of information but engage them in that kind of thinking and get them to engage each
other. One of my favorite things is when students come and tell me "Yeah, this thing we
were talking about in class the other day, this engagement algorithm, we were talking about that
in the dorm and I could use all this stuff I learned in class to persuade my fellow students.” I love
to see that, maybe it’s really having an impact on how they think about things. I’m also trying to
do other things outside the classroom there’s a magazine read by a lot of practicing software
engineers called IEEE software and I’m going to start doing a societal computing column in that
regularly where I hope to bring some attention to these kinds of things as well.

How do you think AI systems have been changing the ways that people work up until now?

Yeah, that’s a really interesting question. I mean there’s the big fear that you see people talking
about a lot is that they’ll just replace people and all sorts of different jobs and all of a sudden
there won’t be any work anymore except for people who build AI systems. Historically there’s
always been this concern, every wave of industrialization and automation has produced this fear
that machines will take over, jobs will disappear, it will be a disaster, and who knows, it could happen; there is nothing that guarantees that the new jobs created by technologies will, you know, somehow be sufficient to replace the jobs that have been cannibalized by technology. But
so far historically the record has been pretty good. Things sort of self-adjust, new opportunities
come to light. I think the fear comes from, as much as anything, the jobs that are lost. They’re
very visible, you can really see that, and you can anticipate that. If all surgeons get replaced by
robots, you know, we will be able to see these people lost their jobs. But the new opportunities
are much harder to see, and it will take a lot of imagination, it will take a lot of clever people
thinking up new ways to make a living, so it’s much harder to anticipate that. So, it’s very hard to
balance out those two and see where we are going to end up, but my feeling is that we will
probably end up okay in the long run. In the short term, there could be a lot of displacement of
people, and I think we need to have government policies which support people in retraining, which take care of them if their skills are no longer relevant and they can't retrain for whatever reason. So, I think we need to have policies that provide much more help when these
dislocations occur, but I think in the long run we’re going to be just fine.

Twenty years from now, how do you think AI will have changed the way people work?

Yeah twenty years from now, I have no idea really. I don’t think that the main thing AI does will
be to replace people. I think the main thing AI is going to do is make their jobs easier and
different and better in most ways. I think AI will have a huge effect in replacing a lot of the drudgery, the more repetitive things. It will have a huge effect in robotics, for example, in taking over a lot of the more dangerous jobs that people now have to do. So, I think it will assist with the
things that AI and robots are really good at and hopefully let people focus more on things that
are matters of judgement or matters of aesthetics or things that people are, at least it seems
now, uniquely able to do. So, I think it’s going to be able to improve everybody’s job.

Can you anticipate any tensions that will arise in regard to human dignity?

One of the big dangers is that we're seeing more economic disparity now than we have for maybe a century, since the Gilded Age. And I think that technology often has a big role in making that worse. If you replace manual labor with robots, then the portion of the revenue that you earn from whatever this work happens to be that goes to labor, to people's wages, is smaller. The portion that goes to capital is larger, and that of course contributes to income and wealth disparity. So, I think that's another major thing we're going to have to look at in the future. Of
course, federal tax policy is going in exactly the wrong direction now, but if we hope to be able to incorporate AI and things like robots into more and more work situations, we're going to have to deal with the fact that it's going to make income disparity worse, and that's not going to be acceptable to society. But for human dignity, I think that people who, you know, remain employed will have perfectly good jobs and will not suffer from that. I think in the short run some people will have a very difficult time finding a job, and that's going to be very tough on them and
potentially erode their feeling of dignity. If it does in fact displace people long term then we may
need to think about what are the sources of human dignity, do you have to work, do you have to
work full time, are there other pursuits? This idea that you have to work is a fairly modern invention anyway. At one time, having to work was seen as a little bit embarrassing; if you were anybody who was anybody, you would have inherited a ton of money and a royal position, and you wouldn't have to sully yourself with actual work. So that's kind of a recent cultural invention. If AI does manage to take on so much work that people are working a lot less and a
lot of people maybe don’t work, I think we need to think about human dignity very differently.

It seems, on some level, that the paradigm of some of those ethical structures is certainly something to rely upon, and something your students would be engaging with as they start to make decisions and become leaders in these arenas.

Yeah, I certainly hope they can take a role in addressing some of these things. It's certainly something we talk about a lot in class, and especially students who choose to sign up for a class like this really have a lot of concerns and really want to make sure we manage technology development and evolution in a way that serves human dignity and doesn't undermine it.

Do you work with an example in your class or have you considered an example in your own
work where an AI tool has transferred the power from a human user to that system?

I guess the main thing we’ve considered there are kind of decision making and resource
allocation kinds of systems like the one I mentioned a little earlier, the systems that are often
used by police departments to decide how to allocate their resources or other decision making
for things like credit applications. And a lot of what seems a little demeaning about these things
is that it’s very hard to understand what the basis for the decision is. If you apply for a loan, let’s
say, because you want to buy a house, and you’re turned down, it’s very hard to know exactly
why. And even if you kind of find out why, if they have some way of informing you about that, the things it uses to decide not to give you credit may seem silly and irrelevant. Who knows what features they're using to base their models on? Those features could incorporate some bias because they happen to be collinear with things that we would recognize as bias, like race or income level or gender, or they could just be things that are strange for some reason; you know, my shoe size may predict whether I repay my loan or not, I don't know. And it feels like an affront to be dealt with on things that seem irrelevant. I would like to be dealt with on things that
reflect my character, not shoe size or other irrelevant characteristics.

So, pushing that a little bit further, what do you think the implications are for human relationships
within that context?

Hard to say. More and more I think human relationships are being mediated by AI on social
media platforms, on dating sites which I presume are becoming more technologically
sophisticated as well. So, it's having a lot of impact, I think. I don't think we know exactly what that
impact is yet, but I think it’s for example extending people’s social networks. Social media
certainly seems to allow you to keep in touch, at least to some extent, with a much larger group
of people. I think it also pulls people into the social sphere in ways that are not always helpful. I
see youngsters running around these days, and they’re always like this, stumbling along the
sidewalk tripping over parking meters or whatever. Constantly in touch with their audience. And
this sense that they have to be in touch with somebody somewhere all the time, I think it’s a little
weird. It’s changing people’s sense of where they are in the world. Everybody’s managing their
own little audiences, it’s like we’re all mini celebrities and we have to be in touch with our
audience all the time. I’m not sure that’s terribly healthy. I think it’s one of the ways that
technology preys on our frailties a little bit. It’s like we all evolved in an environment where
calories were scarce, for example, so we have built-in preferences for sugar and fat, and that's a
very good thing when the environment is very lean. You have to work really hard to get enough
calories to survive. If I can go down to the second floor here and buy fifty Snickers bars, that preference is not necessarily a good thing in this environment that we've created with technology right now. I tend to think social media is doing kind of the same thing.
We have a built-in desire for social contact, for approval, for people to like something we say,
and when that’s scarce, that’s great. It kind of ties us together, binds us together, like I want you
to approve of me and so on, but if you have this network of a thousand people, you can get liked every two minutes and you kind of obsess over that. That's why I think, again, like Snickers bars, it's a little bit addictive, it's not really healthy, and I think we need to figure out ways of letting people better moderate their behavior.

Can you think of an existing AI system, or even a conceptual AI system, that actually empowers individuals?

I think we have some unfortunate examples right now. I think that the form in which you take in information really changes the extent to which you're able to, or bother to, see how much you should trust it, what you can really rely on. So I think another impact of social media is that certain people have been able to use it as a platform to just say things to people who are really ready to accept certain things because of their biases, their predispositions. You can
appeal to their prejudices and throw the red meat out as they say, right, which they consume
very uncritically. The technology that allows this direct delivery from some individual in very
short form to millions of people and bypasses all editorial function, is a very different way of
disseminating information and has given certain groups lots of power to sway opinion over their
sort of loyal audience. So, I think that’s rather dangerous. I think the editorial role that was
played by great newspapers is extremely important: the tracking down of sources, the correcting
of errors. If you throw all that out and if the primary information dissemination mechanism for
society is short-form tweets with no vetting of anything and people just believe what makes
them feel good, I think that’s a dangerous place to be, and I think we’ve kind of arrived there.
So, I think this technology that empowers certain groups of people, and certain individuals, is in
place. It’s in place in dubious forms, not always dubious. The ones that get the most attention
because they’re terrifying are pretty dubious but, you know, it lets bands keep in touch with their
friends and let’s people keep in touch with their families. So, there is a good side to it too but it’s
the scary ones that I seem to gravitate to just because they’re worrisome. I think we’re seeing in
social media now some emphasis on trying to do a better job of at least eliminating fake accounts, deliberately fabricated news, amplification with bots, and all this kind of stuff. They're trying to take some action, but the way the technology works is just fundamentally different, with this distribution from one to many and nothing but an algorithm deciding who sees it. There's not a lot of human judgement involved in that. And so, we've eliminated the hands-on human role that contained some of this. But these are the kinds of things
that I hope our students will see and will worry about. You know, if they go to work at a social
media company and rise to a position where they have some influence over the product that
they’ll be aware that these are hugely important issues and will be seeking solutions not just to
make more revenue for the company but to think about what the impact is going to be on all of
us.

Yeah, and perhaps think a little bit more about the human decision making that goes into
actually developing the algorithm to begin with.

Yes. Yes.

I mean that the subtlety is sometimes just in the foundational practice of actually building these
systems. Right, on some level technologists are developing the systems that might be kind of
precluding, right, or beginning to edge out continued decision-making by humans who are using
the tool. But ultimately it’s a cache of humans that devised it to begin with.

Absolutely, and it is the purely business motivation of a social media company, which wants to maximize engagement. I mean, that's what they're tuning their algorithms to; the algorithms decide what to show you on Facebook, for example. They're very, very good at predicting
engagement and selecting news stories on that basis, but suppose they were also motivated by
ensuring that people see some sort of diversity of opinion. Could they become good at finding
stories that you may not agree with initially but you would read? Maybe they could predict that
really, really well. So, you wouldn't just discard it, but it would be thought-provoking to you. I'd love to see them think more broadly about what we would be optimizing or maximizing for if we were trying to use our technology to make society better in some way: make people smarter, make people more thoughtful, more informed. How would you be tuning the algorithm
then? So, my hope is that we’ll kind of move in this direction. Companies like Facebook are
showing some willingness to sacrifice some profits for creating a product that actually does, you
know, better things for society. So that could be another direction. The companies themselves
could do an awful lot to minimize the damage and make this a really positive tool for keeping
people broadly informed.

Yeah, so perhaps we’re still in our adolescence with this regards.

I think so, I think so.

What do you perceive is valuable in the prospect of machine autonomy?

Well, I think there are lots of situations where robots, to be effective, have to act on their own.
Everything from planetary exploration to making decisions in rescue situations where it’s
impossible for someone else to be sort of controlling their moment to moment actions. So, I
think that’s one good reason for autonomy, is that humans aren’t in the position under those
conditions to provide a lot of direct control over the machine. And I think there are other places too. I think we'll eventually get there in autonomous vehicles, where they're just so much better at driving than people that it would be considered an unreasonable risk to put a person behind the wheel of a car, because your reflexes are so much slower and your ability to see hazards may be so much worse. I don't think we'll be there for a while, but I think that in lots of cases that kind of thing will happen. Robots like the little ones Amazon has running around their warehouse floors will be much safer, will not be running into people or having people dump product on themselves, and so on. So, I think there are a lot of situations where robots can do things better than we can. And AI systems are the same way. They will maybe be able to make medical diagnoses better than people; we're getting close to that. So I think there will be a lot of ways
where it’s just an improvement over human judgement, and in those cases it is just definitely a
good thing. You hear a lot about general intelligence or machines and robots trying to acquire
some sort of general intelligence. I think that’s kind of unlikely and I’m not sure it’s very
important or a good idea. It would be interesting, really interesting, I think from a research point
of view if you could create something which could converse on many topics over a long period
of time with a human and not be recognizable as an AI. That would be interesting. The chat bots
we have now are not so good at that, and they can be maliciously biased as we’ve seen with
Microsoft’s unfortunate experience. But I’m not sure that’s as important as more specialized
kinds of intelligence people are already generally pretty intelligent and a lot of that intelligence is
based on experience carried in the form of genes and DNA from millions of years really of
evolution from the days of early primates to now. You can think of all that as kind of a built-in
experience that has provided some preferences, like a preference for sweet things, but also built-in learning mechanisms and biases that help us figure out what to learn in a situation. I
think that kind of thing is really hard to duplicate and I think that kind of underlies a lot of general
intelligence. So, I don’t know, I think that’s really interesting from a research point of view, that’s
super challenging, but in terms of what’s going to have an impact, I think much more specialized
systems at least for the foreseeable future are going to be much more important.

Can you talk a little bit about the potential pitfalls of machine autonomy?

One of the big ones that a lot of people talk about is autonomous weapons, which is a very scary prospect. You know, it's actually possible, and there's been some writing about this, that autonomous weapons in some ways may be more ethical than humans under some conditions. For example,
they don’t have fear. They don’t injure people because they’re afraid, or they don’t get angry, or
they don’t seek revenge. Presumably you wouldn’t program those things into them. So, there
may be conditions in which they are more ethical than humans wielding the same kinds of
weapons, but on the other hand using lethal force is a very weighty and difficult kind of
judgment, and I’m not sure I trust any algorithm in the decision to do that. I think there’s some
places that they could be used more defensively like I understand that ships have defenses that
will hit incoming missiles and such and there it’s just a matter of no human can react fast
enough or accurately enough. I think they’re just used often in autonomous mode and it kind of
has to be. But they’re shooting at something that’s not alive anyway. More worrisome are things
like supposedly alone in the demilitarized zone between North and South Korea there are
weapons that are capable of being autonomous so they are close to being autonomous where
they can apparently identify human targets and they’re not used autonomously now but they
could be. There are systems that turn not-so-good marksmen into expert marksmen and they
can also have facial recognition incorporated, which is kind of scary. So, I think we would want to hold autonomous weapons to such a high standard of both ethics and accuracy that, for me, in the foreseeable future I just can't imagine actually wanting to do that except under a few very limited conditions, when it's defensive or things like that.

How would you ascribe responsibility if an autonomous system caused harm?

Yeah, so, this comes up with autonomous vehicles all the time. If an autonomous vehicle
crashes into somebody, and who knows why: there's some glitch in the algorithm, or the conditions were such that it didn't recognize this white semi-trailer in front of it, and so on. And you see people talking about holding the car company responsible, or the driver being responsible no matter what, or the software developers being responsible, or all sorts of things. I think it's so
hard to allocate blame that a better system would be like a no-blame policy. So, if you were
driving an autonomous vehicle you would just pay insurance, and maybe manufacturers would have to also contribute to insurance, and if people are harmed, you just pay for their expenses and you compensate them. That way, everybody has to pay into this pool, and you don't have to work out in detail, for any given accident, who is really responsible. Was it this line of code? Was it
improperly trained? Or was it the vehicle company that didn’t give the right context for the AI?
So, I think we can just avoid a lot of these issues of who’s responsible by just realizing that
there’s going to be a certain number of accidents and we need to create a pool of resources to
handle them and deal with it that way.

That seems to work in an interesting way for consumers who choose to work with or use devices like a car. Do you think that kind of system could translate to multinational agreements regarding the use of autonomous weaponry?

Yeah, I think that the situations are very different in terms of potential amount of harm and the
intention of the system. I mean you could imagine autonomous weapons being hacked and
having lethal robots just running around, and terrorist attacks. I don't know how you would create a pool of resources, because you wouldn't know how large a pool you would need, and you don't want to provide resources if there are actually bad actors involved. You know, when automobile
companies are making cars, they don't want accidents. They're actively trying to avoid that, but with autonomous weapons, causing harm is the whole point. And so, I think you wouldn't want to
create some system that would insure against this damage when the whole point is to create
this damage. It would sort of free them up to not worry about that. I think that they’re very
different kinds of situations.

What happens when the consumer system like the driverless car is hacked?

I think that’s a huge, huge issue with autonomous vehicles, other autonomous technologies that
move around in the world or other things like internet of things where there’s sensors and
actuators everywhere. That’s a whole new world and security becomes so important, and I don’t
think we’ve kind of got far enough yet. I mean, it was a couple years ago that someone figured
out how to hack a Jeep. And I think if a Jeep can be hacked, anything can be hacked. And so,
I just hope that new systems in the internet of things kinds of context and other things in robotics
context that will be deployed broadly will be designed from the ground up to be secure unlike
the internet where security was thought about much later. So, you know there are colleges here
who are working on the internet and security and IOT kinds of systems and I think they have
some really good ideas for doing this but It has to kind of be designed that way from the ground
up.

Any other topics you would like to comment on?

The one thing that is kind of interesting, and that I have seen talked about but is a ways off, is when, if ever, would AI technologies demand civil rights, and should they have them? If we ever develop systems that are indistinguishable from humans, which we are a long way away from but it's not inconceivable, we could be sitting here having this conversation and you might be a robot; I might not know. But then would you be entitled to some sort of civil rights, having all the capabilities of a human? Those are very interesting questions that science fiction writers like to think about. But I think one day we will have to confront that sort of thing.

Well, we have the beginnings of it. If nothing else, we have governments like Saudi Arabia, which have captured that imagination by offering citizenship to a robot. So even though we touched briefly on general intelligence and the ways in which we are far away from that, there seem to be gestures toward this imagined future; whether it's fantasy or not, we have very strange instantiations of it presently.

Yeah, and I think we will get to the point where it becomes a serious issue. I mean, if you look at studies of kids, and even adults, dealing with stuffed toys, or more so animated toys: if they are left to play with them or pet them for a while and then someone asks them to take a hammer and smash the toy, they won't do it. And this is just a stuffed toy. So, I think the very powerful feelings that people have towards autonomous systems that they've interacted with, especially if they resemble living things in some way, will be an issue for the future.

Yeah, and as we’re living longer, increasingly a health concern comes in between us.

Yes. Yes.

So, the sophistication of the chatbot, as long as it's not berating you: what kind of companionship is it offering that is more difficult to continue to provide within our societies?

Right, and if it gets better at it, will your grandchildren be jealous?

Right, right.

So, yeah, future issues I think.
