Roman Yampolskiy on the dangers of AI

Philosopher interviews

Dear friends of Daily Philosophy,

another weekend is here, and with it comes your weekly dose of philosophy! This time, I thought I’d break up the hermits and Daoism series a little. After all, we’ve now had a long and fascinating run of twelve (!) hermit posts in a row (all of which you can read on the Daily Philosophy site here), so I, at least, am ready for a little change.

What you might have missed in the past week is a new episode of our Accented Philosophy podcast, in which Ezechiel Thibaud and I discuss beauty discrimination in online media. I found this week’s discussion very interesting, so if you have time, you might want to check that out.

In other changes, I’ve made the font used to display articles on the Daily Philosophy site a little smaller when read on big screens (not on phones). I find this easier to read. If you have an opinion on that, or if you preferred the older, slightly bigger font, please do tell me. In any case, you can always change the font size in which websites are displayed: press Ctrl and + (the plus key) to increase the font size in your desktop browser, and Ctrl and - (minus) to decrease it. On Apple computers, use Cmd instead of Ctrl.

This brings us to this week’s article, which is a very long, very interesting interview with a researcher whom I admire very much, Prof. Roman Yampolskiy. Prof. Yampolskiy is one of the world’s leading experts on the dangers of Artificial Intelligence, and in this interview we discussed the possible future of AI and whether humanity will be able to avoid being dominated by AI systems.

The interview ran over an hour and the transcribed version is almost 8,000 words, which might break some email clients or get the message flagged as spam by Google. That is why I will split this email into two parts and send them one after the other. Sorry for that, but I think the interview is worth it, and it surely beats having to scroll through an 8,000-word article on a phone. Again, if you feel that I’m overdoing it with the length of these articles, please feel free to tell me. You can just reply to this message and I will get your reply as a private email. Please also tell me if you would prefer to receive only links to the articles rather than the full article text. I’m always happy to listen to your preferences and to try to improve these emails so that they serve you as well as possible. You can also leave a comment by clicking here:

Leave a comment

Please also feel free to share these emails with others who might be interested.

Thank you for all your support of Daily Philosophy, and now let’s directly go to the interview!

Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Science and Engineering at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach. His research has been cited by 1000+ scientists and profiled in American and foreign popular magazines, on hundreds of websites, and on radio and TV. He has been an invited speaker at 100+ events, including the Swedish National Academy of Science, the Supreme Court of Korea, Princeton University and many others.


DP: Welcome, Professor Yampolskiy, welcome Roman! I’m very happy and honoured to have you here for this interview. Let us begin with you telling us a little about who you are and what your interests are in philosophical research. What are you currently working on?

Sure! I self-identify as a computer scientist; an engineer. I work at the University of Louisville. I’m a professor and I do research on AI safety. A lot of what I do ends up looking like philosophy, but, you know, we all get PhDs and we’re “doctors of philosophy,” so a computer scientist is a kind of applied philosopher; a philosopher who can try his ideas out. He can actually implement them and see if they work.

DP: So what is your philosophical background then? Are you also professionally a philosopher?

Not in any formal way. I think I took an Introduction to Philosophy course once and it was mostly about Marx’s Capital or something like that. So I had to teach myself most of it.

DP: I also noticed that you have written lots of articles, some together with many different collaborators, some on your own; you are also writing books and you are almost continuously on Twitter… Since some early-career philosophers might be watching or reading this interview, I was wondering if you have any advice for them on how to do this. How do you organise your time? How do you manage to be this prolific a philosopher and do all these other things on the side?

So it may not work for early-career philosophers… I’m ten years on the job, so I have the power of saying no to almost everything I don’t care about. It’s much harder when you are just starting out. You have to say “yes, I’d love to teach another course! And yes, your meeting sounds fascinating!” At this point, I don’t have to do that, so I think that’s the main difference. I just look at the long-term impact of what is being offered in terms of time taken and what it’s going to do for me. Will I care about it five years later? And if the answer is “absolutely not,” why would I do it?

Read on...