EP 7: Should AI be Regulated? Unraveling the Complexities of AI Regulation
The pros and cons of regulating AI and its impact.
Posted on June 2, 2023 by Fusion Connect
Join us on this episode of Tech UNMUTED as we delve into the complex world of AI regulation. In a departure from our usual format, we present a thought-provoking discussion on the regulatory framework surrounding artificial intelligence. We examine the pros and cons, discussing topics such as stifling innovation, international competition, national security, and ethical concerns. Should businesses establish their own guidelines, or is government regulation necessary? Tune in as we explore the fascinating dynamics of AI regulation and its impact on our rapidly evolving technology landscape.
Watch & Listen
Tech UNMUTED is on YouTube
Catch up with new episodes or hear from our archive. Explore and subscribe!
Transcript for this Episode:
INTRODUCTION VOICEOVER: This is Tech UNMUTED. The podcast of modern collaboration – where we tell the stories of how collaboration tools enable businesses to be more efficient and connected. With your hosts, George Schoenstein and Santi Cuellar. Welcome to Tech UNMUTED.
GEORGE: Welcome to today's episode of Tech UNMUTED. Today, we're going to approach this a little differently than we have before. We're going to take a look at some of the things that are going on from a regulatory framework around AI. We're going to throw up a couple of slides today, which is a little different than what we've done before, and take a look at some topic-driven stuff. For folks listening along on a podcast on Spotify or one of the other platforms, we're going to do this in a way we think will work pretty well for you, so you'll be able to follow along.
Santi, I don't know if you want to put any other frame around this before we jump in?
SANTI: I hate regulating, but let's go. I'm all for it. Let's jump. Let's jump into it, but I'm already getting goosebumps. I don't like the topic but that's just me. Let's dive in.
GEORGE: With that, I will give a little bit of a frame on this. Santi and I both have a leaning towards less regulation, at least as it relates to AI and technology development. You'll see some of the reasons why. We'll talk about it as we go through this. At the same time, we're trying to be a little unbiased here. We're going to present pros and cons on this.
SANTI: Sure.
GEORGE: Why should the government regulate, and maybe why shouldn't they? There are two broader guiding principles here. The first one is that businesses really need some kind of a framework internally to do this, right? They need guidelines and guide rails. I see those as two different things. Guidelines are something more formal, a set of policies and procedures, something that tells people you can do this or you can't do that in this kind of situation and provides a clear stamp for what that is.
Guide rails are more your business philosophy towards AI and other tools: how you're going to use them within your organization, and how you're going to get the best business outcome. That business outcome, we've talked about it on some of the previous episodes. You'll see it in the Forbes article, which is in an earlier episode. It's really about advancing productivity, and there's really this productivity revolution that's starting to take place around AI.
The second piece is, really, that transparency is an imperative when it comes to AI. So what's being developed by AI? How is it being used in your interactions with the business? Those kinds of things. You need to be clear. You can't create a frame of reference where people think they're interacting with a real person, for example, when they're not, right?
SANTI: Yes. That's correct.
GEORGE: It doesn't create a fair environment for that person, and it misleads from an expectation standpoint. Santi, let me give you a second to chime in here as well before we jump into the couple of slides that we have.
SANTI: I'm in agreement so far with the direction that we're headed with this conversation. I do like your definition of guidelines versus guide rails. One of the things that I took away from when we were in Vegas at the Microsoft conference was this idea of responsible AI. It's not a new thing; it's just become more prevalent now, and to your point, this is the framework.
By the way, there's no universal framework because folks tend to do things slightly differently, but it covers the areas that you're touching on here. Things like not being biased, things like being factual and truthful in its responses, not discriminating, and being secure as well, because the data is only as good as its integrity. When you lose that integrity, the AI loses integrity.
I think we're headed on the right track here, but so far, I like this, especially the second one. I'd like to see what you've got here for this show and tell today.
GEORGE: We had talked a little bit about maybe even just going to Microsoft's website for this episode and looking at that framework. It sits under Azure. We'll drop it in the show notes. You can certainly go take a look at it.
SANTI: Sure, yes.
GEORGE: It wasn't as broadly framed as I think this discussion is, which covers topics that are both business-oriented and government-oriented, or at least country-oriented, as you'll see in the slides as we go through it. We're going to go through three pros and three cons. We're starting with the cons and then we're going to flip to the pros, and this is really around the framework of should there be regulations or should there not be regulations? Again, our personal belief, and Santi and I talk about this all the time, is we'd prefer to start with a business framework and a set of business rules or business engagement rules that are collectively agreed to.
SANTI: Correct.
GEORGE: Although there are some things in here, and we'll flag them, where it really does need some type of government regulation around certain elements to ensure safety and efficacy and those kinds of things. I'll hit on both these bullets first. This first con is around stifling innovation. There are two issues here. If you over-regulate or you create a framework that's challenging, you will discourage people from investing. You will not have entrepreneurial endeavors, either as their own companies or inside organizations. It's very likely that you will impact the pace at which this moves.
Again, Santi and I have mentioned this a little bit on some of the previous podcasts, but we've been very aggressive in using tools internally, from bot development, to workflows that are AI-enabled, to any number of new tools that are out there. If those things weren't available, we, obviously, wouldn't use them. They wouldn't be available to offer anybody else.
SANTI: That's right. Listen, let's be honest, the last six, seven, eight, months of AI news and the evolution of AI, how fast it has evolved in just the last few months, it's because it's not regulated. [chuckles] Let's be honest. That's exactly why it has happened so fast in the past several months. To your point, once you regulate it too much and handcuff some of these ideas, then it just becomes a burden, so I agree.
GEORGE: The flip side, again, is that we're at the really early stages of a productivity revolution. This is something most of us in our lifetime haven't really seen. Computers came in, and they did make a dramatic impact on the workplace, but this is happening so much more rapidly. The impact is going to be quicker and the impact is going to be bigger. We'll take a look at con number two, international competitiveness.
SANTI: This is a good one. Yes, this is a good one.
GEORGE: From our own business standpoint, we're competing against other companies and countries, and other organizations or coalitions of countries are competing against other countries. This creates a lot of risk. If you overregulate in one area and you're competing against another country or set of countries that are less regulated, they will advance faster, right?
SANTI: Oh, yes. For sure.
GEORGE: This will be a big, big issue.
SANTI: For sure. Listen, it's historically proven. Without mentioning countries, we are always in a competitive posture with countries that-- Especially in technology, we're always racing to be the first ones out with that latest and greatest shiny object, and we know for a fact that some other countries have beat us to the punch. Why has that happened? Well, because we overregulate and they don't. They don't have that kind of burden. This is a big one in my opinion. The folks who come up with these regulations, I don't think they take this into account, or at least they don't take it seriously, but on the world stage, you will lose first place from a technology evolution standpoint because of overregulation.
GEORGE: Yes, for sure. The bullet at the bottom is more around what the outcome of overregulating is. If I've got the ability to do something in another region or another country, and it's favorable to my business, I'm going to go do that. We saw manufacturing move overseas for years for a number of reasons, cost being one of them, regulatory burden being another.
SANTI: That's correct.
GEORGE: You've got to be aware of the consequences that potentially come with the regulation. The final piece, and it depends on the application of the regulations or the laws, is around bias and unfairness. Specifically, are certain types of organizations targeted for some reason? Are they not connected or involved enough in the development of the regulation, so they lose out, or is there some other reason that the regulation unfairly impacts them? This can have a dramatic effect as well. It could result in the same migration. Like, I will go to another country to do what I do because [crosstalk]
SANTI: Sure. I never thought of this in this frame of mind. I will tell you that I have a different perspective, and that is, what if the people who are writing the regulation, be it for political reasons or personal interest, inject bias of some sort or some kind of unfairness themselves? That's another thing. It's like you're in the hands of people who are regulating who really don't understand the technology to begin with. Let's be honest. This is good. All right.
GEORGE: We're going to take a look at a couple of the pros, reasons why we would potentially want to consider regulation. First and foremost, national security and defense.
SANTI: I'm all in, all in.
GEORGE: There's some level of unregulated risk that happens if this is completely out in the wild. It could be malicious use. It could be cyberattacks. It could be different types of autonomous weapons. This isn't necessarily a weapon that's shot into the air. It could be an electronic weapon that is constantly out there attacking and doing other things, as well as surveillance that could infringe on people in other ways. [crosstalk]
SANTI: That's a big one. That's a big deal. This is what the government should be doing. When we talk about regulation, this aspect here should be front of mind. I think that's appropriate. You have no pushback from me on this one.
GEORGE: There's another element that we didn't know where to put among the three pros that we had, but it's around accountability. If there is a negative outcome that comes from the development of AI-enabled tools, who is accountable? Who is responsible? How is that regulated? What if a medical tool built on AI prescribes medicine and it's the wrong medicine? What if it indicates that there's some type of medical issue that doesn't actually exist? What if it takes you down a path that's the wrong path?
Even things like, to a great extent, finance, where there are a lot of regulations around how you give investment advice. There are licensing requirements and all kinds of other things around that. It's probably not feasible in today's world that that is going to be a completely AI-based environment, unless there's an ability to get people to release all rights related to a negative outcome. I don't think we necessarily want that. There are good reasons here; you saw Santi's reaction to the national security and defense piece, which is clearly a big risk. [crosstalk]
SANTI: From the guy who doesn't like regulation, I'm okay with this one. [laughs]
GEORGE: The second one is around ethical concerns. Some of this tails off of that second bullet.
SANTI: Oh, sure.
GEORGE: There's a longer list here. I'll hit them one by one and we can come back to them. That piece I already hit on, which is operational safety and not causing harm to individuals. There needs to be, and again, maybe it's not guidelines/regulations, maybe it's guide rails into the way things are used and what's an acceptable use, but are the algorithms fair, transparent, and non-biased? Think of this bias more in the way of an individual bias. Does it inadvertently or deliberately treat one group of people differently than another group of people? Again, back to that same point, who's accountable when something goes wrong? When something is off the rails and there's a bad outcome, how do you deal with that bad outcome?
Fourth bullet that we've got here, what are the penalties? Without regulation, there are no penalties for bad actors, and bad actors can do anything they want, right?
SANTI: Yes. Listen, I use the stop sign analogy. We've got a stop sign. That's a regulation. It's a law. The majority of people are going to do what? They're going to stop. They're absolutely going to stop. You're going to have a small subset of people who may roll through it, not quite following it to the letter of the law, but then we know there are going to be folks who will recklessly just go right through, blow through a stop sign with no regard for anybody else's safety, even their own, and that's an even smaller group. Thank goodness. That's my argument. My argument is you can have all these stop signs in the form of regulation all you want. It's not going to stop the bad actor.
You're right. When you're able to catch the bad actor, what are you charging them with unless you have some kind of statute? I get it. I get it, so long as that's clear. But it's not going to stop it. Look at the internet today. You have the dark web. Who stops that? It's impossible to stop it.
GEORGE: We've already heard stories of people using AI voice generators to--
SANTI: Oh, yes, deepfake software.
GEORGE: Grandmom gets a call and-
SANTI: Oh, my goodness.
GEORGE: -it's granddaughter Sally, and she's in distress and sounds just like Sally because it's an AI version of Sally.
SANTI: Absolutely.
GEORGE: Not necessarily going to stop any of that. If there's some accountability and some penalties in place, at least you have a way to go back and attempt to take action, right?
SANTI: Sure.
GEORGE: The final piece in here is-- This is broader. We've seen a lot of this regulation start in the EU. We've seen elements of it in Canada, some loose elements of it in the US. Clearly, California has been more at the forefront of some of this. It's around data collection, storage, and usage, and what your individual rights are, and how these tools were built in the first place.
They ingested data. They effectively interacted with the world: everything that was available on the internet, other data sources, et cetera. They don't house that data. The tools, at least ChatGPT as an example, don't house the previous things they said to someone in a broader way where everyone has access to it, or where the tool itself can even go back and look at what it had previously said in any real way. There have got to be some rules around that. I want some control over my personal [crosstalk]
SANTI: Yes, I agree.
GEORGE: There was a recent one, I saw the story pop up a couple of times. There was a university where, at least the way the story was written, a professor went and took all the final assignments from several of his classes, went to ChatGPT itself, and put them one by one into ChatGPT, and asked ChatGPT if it wrote it.
SANTI: Really?
GEORGE: Yes. The tool itself has no way to know what it wrote in that way.
SANTI: Of course. [laughs]
GEORGE: It can't look back on itself. It doesn't know who that would have been-
SANTI: Interesting.
GEORGE: -even with their verbatim content, and the professor apparently told most of the class that they had failed the class. There's a big debate around what really happened here, but there's a question of where that data was used, what it was used for, and what that access is. Should there be access to what ChatGPT previously said? Should there be an audit trail? I don't know. [crosstalk]
SANTI: I think it's fine the way it is.
[laughter]
GEORGE: Let's take a look. I've got number three for regulation, which is around economic impact and job displacement. I'll hit on each of these bullets. I might take these one by one and debate them a little bit, because I think we put these in to try to be thoughtful about the way we're framing this, but we don't necessarily believe some of it.
The potential to significantly impact the job market and the economy. I think the perspective in the market today is where's this negative outcome going to be? I know there's going to be one and I'm going to lose my job or it's going to impact my job. Clearly, we've had these discussions over and over again. We see this as a productivity hack. We see this as a way that there's really this productivity revolution that is already [crosstalk]
SANTI: That is correct. Absolutely.
GEORGE: It's about how do you use the tools. Take advantage of the tools you have. We talk frequently about what's in Microsoft 365 and the new things that literally appear just about every day, right?
SANTI: Yes, and we've adapted. We ourselves have adapted. We're not out of a job. No, we identified the use, we identified its value, and we've repositioned ourselves to leverage it as best we can. I think the key here is, and we've always said it and we'll continue to say it, become the master of the prompt. If you can become the master of the prompt, then you won't be thinking about these things, because it's not going to take your job. You're going to use it to make your job even better and more valuable. Yes, I get the point. I get the fear that's out there. It's the fear of the unknown, is what I think it really is.
GEORGE: It's happened throughout time. There have been revolutions: the computer revolution, which was in our lifetime, and the industrial revolution, which obviously wasn't. Those things all played out and overall economic activity increased. You can debate whether there were winners and losers within that, but I will say there will be winners and losers in this. There will be those people and those businesses and those countries that take the most advantage of this, of OpenAI. [crosstalk]
SANTI: Look, we have Vietnam veterans on smartphones, on social media. Come on. It's the greatest thing ever. They would never have envisioned that when they were out fighting the war. Now they're back home and they have this entire world unlocked, where they can connect with people from all over and stay in touch with friends and family, and their phone tells them what to do. This is just the next revolution, and every time there is one, there is uncertainty. I think this is what we're seeing.
GEORGE: For sure. I'm going to skip the second bullet. We already hit that one on increased productivity. The third bullet is around how regulation can help maintain fair competition and prevent monopolistic practices. There's a lot to unpack here. Who has access to it? OpenAI was originally conceived, I think, as a nonprofit with some type of open-source access. It is not open source today. It is loosely open access. There are APIs available. You can build tools off the back of it, but you do have one single organization controlling what's at the crux of a lot of the AI development in the market today.
There's a risk there. Completely unregulated, that could go in multiple different directions, some of which have a positive impact and some of which have a negative one. Elon Musk said the other day something to the effect of, "There's a lot of outcomes here. A lot of them are positive, and then there's this one that's catastrophic and it ends the world," basically is what he said. I don't believe that.
SANTI: I'm not buying that either.
GEORGE: I think there's more to happen from a positive standpoint. Then the final thing on here is around intellectual property and data ownership. This goes back to a point we made a little earlier: where did the data come from in the first place? What is the eventual use of the outputs? There are already some regulations in place around being able to patent or copyright things that were fully developed by some type of autonomous tool.
SANTI: There's something in place already that prevents you from--
GEORGE: It's my understanding-
SANTI: Interesting.
GEORGE: -based on a review of regulatory material and an article the other day that indicated that.
SANTI: It makes sense.
GEORGE: That there's a push towards even more detailed guidelines.
SANTI: Yes, interesting.
GEORGE: I don't know how to debate this. Should you be able to create an image with AI and copyright it? To me, I would say you should be able to. You, as the operator of the AI tool, what prompt did you use to get to the outcome, right?
SANTI: Absolutely. It's your unique prompt. If I copied your prompt and used your prompt, I'm copying your input, and that should not be allowed. If I come up with my own prompt, with my own details from scratch, my own concept and idea, and I inject this into the AI and get an output, that's my idea. I struggle with that one. Let's see where the chips fall, but I can see where this could be problematic, I think. We'll see.
GEORGE: Let's hit- so let's look at one final piece here. There was a meeting in Washington a couple of days ago as we're taping this. It was on May 15th. There was a series of hearings that took place in DC. I'm not going to go into a lot of detail about what they were, but Sam Altman was there, who's the CEO of OpenAI.
The feedback that I saw was pretty good, that this wasn't your typical situation where a tech CEO comes into Washington, sits around, and gets a lot of questions about things they did wrong. This was more about the representatives who were involved in those meetings really understanding what the tools can do. I thought there were two interesting things. I actually pulled those out of a CNBC article, and we'll link that as well in the show notes.
The first thing that Sam said was, "AI is a tool and not a creature," so, again, reinforcing that point we've been making. The second part of that was that this is something that is going to assist human beings, not replace human beings. You need to find your place. You need to understand how to use these tools. You need to understand how they're going to impact productivity, those kinds of things. One of his other statements was, it will do tasks and not jobs.
SANTI: Right. Yes.
GEORGE: Now, I could debate that one a little bit. Could you have an AI bot that is a customer service rep, which, in many instances, from start to finish, could handle a customer service interaction, either in chat, or in a live video, or on a phone? Yes, it probably could.
SANTI: They do today. They do today, but at the end of the day, when it has to be escalated, who's it escalating to? It's got to escalate to a human being. Again, the mundane task of asking a customer, "How can I help you?", and the FAQ, the frequently asked questions, pointing them to the answer that 90% of people who call ask for, that's a great application for AI. But when it's not inside that knowledge base and AI can't find the answer, who's it escalating to? It's going to escalate to a human being.
By the way, I love the first quote, "AI is a tool and not a creature." I think I want to make a T-shirt out of that, because it absolutely is. Sometimes we speak about this like it's a creature. I think it's a brilliant statement that he made. Look at the tolls, remember? I'm old enough, you're old enough, to remember when there were people at the toll plaza.
Remember that? They would take your money and give you your change. By the way, if you didn't have money, they'd write you like an IOU-type thing. You'd have to send it in.
Then they came out with these RFID tags, and so they cut the toll booths in half. Now you had speed lanes and half the workers. Well, now they have plate readers, so you don't even need the RFID, and therefore you don't need a toll plaza worker. Now you just go through tolls with or without the tag. I understand that part, but I think AI is really playing a major role in the information worker's productivity. That's where the majority of the impact is occurring. If you're an information worker or you're in management, AI is going to be your best friend to make your role more effective.
I can speak from experience because that's what I've been experiencing for the past six months as we've done a deep dive into AI ourselves. Yes, I get it. The toll workers, right? They would make a different argument, but this is not the application today that we're referring to. I struggle with the concept still.
GEORGE: It's about velocity, right?
SANTI: Yes.
GEORGE: Even the discussion we had earlier today, we look at Salesforce all the time to see what's different in the data, right?
SANTI: Right. Sure.
GEORGE: We run marketing campaigns, we see opportunities in the pipeline, we look at the cascade of what happens within Salesforce. Not everyone on the team has access. For me individually, I'm in there a lot, but once or twice a day I have to log back in and reauthenticate to get back in: I've closed the browser, I've got to get back in, whatever it is. What we started to look at today was, is there a way to present that data in Microsoft Teams?
The tool that we are in constantly, the tool that we are recording this podcast in. The ability to, at the click of a button, get to those same dashboards in Teams is going to, from an iterative standpoint, save me 5 to 10 minutes a day. What do I do with those 5 to 10 minutes? What else can I layer in there? Workday, for example: we separately log into Workday for our approvals or submissions around paid time off or financial approvals, or whatever else flows through the system. The ability to get a notification in Teams that something is there to be approved, and then the ability to actually take action within Teams without a separate login, wow, there's another 5 or 10 minutes there, right?
SANTI: Of course, 100%.
GEORGE: That's 10 to 20 minutes. How long is it going to take with those productivity hacks to free up an hour a day? That's five hours a week. What can I do with five hours a week? It's substantial.
SANTI: It is substantial.
GEORGE: With that, we're at the end of the slides today. Hopefully, this was an engaging discussion between Santi and me for you to listen to. [crosstalk]
SANTI: Yes, it's definitely a different angle. It's different than what we've done in the past. It's something that we have to talk about because it's unavoidable. It's going to happen. Listen, I went into it at first thinking, "Oh, boy, we're going to talk regulation." I ended up saying, "You know what, some of these things do make sense. I can buy into some of this stuff."
We're at the end of this presentation. Thanks, George, for bringing the show and tell; I thought it was very well thought out. Until next time, folks. Remember to stay connected.
CLOSING VOICEOVER:
Visit www.fusionconnect.com/techunmuted for show notes and more episodes. Thanks for listening.
Episode Credits:
Produced by: Fusion Connect
Tech UNMUTED, the podcast of modern collaboration, where we tell the stories of how collaboration tools enable businesses to be more efficient and connected. Humans have collaborated since the beginning of time – we’re wired to work together to solve complex problems, brainstorm novel solutions and build a connected community. On Tech UNMUTED, we’ll cover the latest industry trends and dive into real-world examples of how technology is inspiring businesses and communities to be more efficient and connected. Tune in to learn how today's table-stakes technologies are fostering a collaborative culture, serving as the anchor for exceptional customer service.
Get show notes, transcripts, and other details at www.fusionconnect.com/techUNMUTED. Tech UNMUTED is a production of Fusion Connect, LLC.