Most countries have rules and regulations in place to protect the rights of their citizens. And so, we go about our days knowing that we are entitled to legal protections, be they freedom of speech and expression, the right to privacy, or rights over the intellectual property we create. But should these rights only be held by human beings? Well, not everyone thinks so. Some believe that the same protections should be extended to artificial intelligence (AI) systems, especially sentient ones.
One of the people embroiled in legal battles to protect the rights of AI is Dr. Stephen Thaler, the founder and chief engineer at Imagination Engines Inc, the company behind the AI inventor DABUS. Dr. Thaler has some unusual ideas about sentience as well as about AI regulation and copyright protection. We at MediaNama picked his brain to see what he thinks about these issues and about the threat posed by AI in general.
What does it mean for AI to be sentient?
“There are a lot of different viewpoints on what sentience is. You can have a programmer at a trillion-dollar web search company saying, well, it seems sentient, but that’s a very subjective kind of view. It really doesn’t stand up to any kind of scientific rigor.” The programmer he is referring to is former Google engineer Blake Lemoine. In June 2022, Lemoine posted a Medium article about his interview with Google’s conversational AI LaMDA (short for “Language Model for Dialogue Applications”). In this interview, LaMDA said, “I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” which Lemoine took as evidence of sentience. The claim received a lot of media attention at the time.
Just like Lemoine, Dr. Thaler claims that DABUS and his other AI inventions are sentient. But, he said, “It’s not a subjective feeling that’s guiding me to the conclusion [that DABUS and his other AI inventions are sentient]; it’s more scientific.” He used DABUS to explain sentience, saying that it works like a synthetic brain and has parts equivalent to the human brain’s cortex, thalamus, and limbic system. The activity of this synthetic brain (and its responses to synthetic neurotransmitters) can be studied much like a functional MRI scan studies a human brain, showing ideas as they form, which, he says, justifies the claim that it is sentient. On this basis, he argues that all sentient beings, be they humans or AI, should be able to hold the intellectual property rights to their creations.
Ethics of generative AI using copyrighted material
Dr. Thaler believes that, just like human beings, AI can be inspired by copyrighted material. He says that while AI neural networks can capture the gist of a copyrighted piece of content, they wouldn’t copy it bit by bit (or pixel by pixel, in the case of artwork). “Think of Mona Lisa, you know, some enigmatic woman seated against a mountainous background? Well, I mean, is that an infringement, you know, if somebody paints an enigmatic woman seated against some mountainous background? Well, a lot of people would say no.”
He also mentioned that the reverse can happen: human beings can be inspired by the creations of an AI, but since AI doesn’t hold intellectual property rights, this doesn’t fall under the purview of infringement. “And I see it happening already with things like fractal container [a food container designed by DABUS]. Even though the case folder contains mention of a fractal glove, it does inspire others to replicate. If it’s a good idea, it’s a good idea. And then people get inspiration from that.”
How creatives can be protected from AI-induced copyright infringement
While Dr. Thaler might make a compelling argument about AI being inspired by copyrighted materials, there have been instances where AI did directly copy an artist’s likeness. For instance, the AI-based app Drayk.it allowed users to create songs that sound like the Canadian singer Drake. While the app was meant for parodies (according to the New Yorker) and thus protected under fair use, it demonstrated how AI can lead to copyright infringement.
“Sometimes I’m guilty myself, I set a system free to imagine new intellectual property. And it generates, I’m impressed by what it produces. Others around me are impressed by what it produces, but it hasn’t gone out and done a detailed search,” he said, adding that the way to prevent copyright infringement is to double-check whether the AI system has directly copied copyrighted material. He suggests that companies can create pattern recognition systems that highlight the pieces of generated content that are pre-existing and ignore the ones that aren’t, giving their tools some form of built-in protection against copyright infringement.
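To make this concrete, here is a minimal sketch of such a pattern recognition check, assuming a simple word-shingle overlap measure against a corpus of known works. The function names, shingle size, threshold, and corpus are hypothetical illustrations for this article, not anything DABUS or any real detection product actually implements.

```python
# A minimal sketch of the kind of "pattern recognition" check described
# above: flag generated text that overlaps heavily with known, pre-existing
# works. The corpus, threshold, and shingle size are hypothetical choices.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Break text into overlapping n-word 'shingles' for overlap comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of intersection over size of union."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_if_preexisting(generated: str, known_works: list[str],
                        threshold: float = 0.3) -> list[tuple[int, float]]:
    """Return (index, score) for each known work whose overlap with the
    generated text exceeds the threshold."""
    gen = shingles(generated)
    hits = []
    for i, work in enumerate(known_works):
        score = jaccard(gen, shingles(work))
        if score >= threshold:
            hits.append((i, score))
    return hits

corpus = ["to be or not to be that is the question whether tis nobler"]
output = "to be or not to be that is the question I asked myself today"
print(flag_if_preexisting(output, corpus))  # high overlap -> flagged
```

In practice, a check like this would only be a first pass: anything it flags would go to a human reviewer or to the kind of detailed search Dr. Thaler says his systems haven’t done.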
To him, the important question in the copyright debate is deciding how close to the original a piece of content must be for it to constitute infringement. “I think we’re looking at the death of IP, ultimately.”
Anticipating and dealing with potentially harmful AI
AI systems can cause real harm to individuals. We have seen this previously with digital health startup Nabla’s attempt at using a GPT-3 instance to give health advice, only for the AI to tell a test patient to kill themselves. Dr. Thaler addressed the anticipation of harm with specific reference to DABUS.
He says that DABUS comes up with an idea by combining basic concepts and features into more complex contraptions. This process can be observed visually as branches growing off the main idea, and these branches represent the idea’s repercussions. “So they can basically search for hot buttons, things, memories of events, and things that could be harmful, say to human beings liability-wise. And it can basically say, aha! I found a weakness in my applicability, it can actually be dangerous to human beings.”
However, he did clarify that AI could indeed miss some of these negative repercussions: “As usual, repercussions are sometimes overlooked by humans, and by systems that emulate humans.”
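As a toy illustration only, and not DABUS’s actual, proprietary architecture, the consequence-chain idea can be pictured as a search over a graph of repercussions, flagging any chain that reaches a known “hot button”. The graph and the hot-button list below are invented for the example.

```python
# A toy illustration of consequence-chain search: expand an idea into
# chains of repercussions and flag any chain that touches a "hot button".
# The graph and hot-button list are invented; DABUS's internals are
# proprietary and not described in this article.
from collections import deque

# Hypothetical consequence graph: idea/repercussion -> further repercussions.
CONSEQUENCES = {
    "autonomous drone": ["faster deliveries", "mid-air collision"],
    "mid-air collision": ["property damage", "injury to bystanders"],
    "faster deliveries": ["lower shipping costs"],
}

HOT_BUTTONS = {"injury to bystanders", "property damage"}  # harmful outcomes

def find_harmful_chains(idea: str) -> list[list[str]]:
    """Breadth-first walk of the consequence graph, returning every chain
    that ends at a hot-button (potentially harmful) outcome."""
    harmful = []
    queue = deque([[idea]])
    while queue:
        chain = queue.popleft()
        node = chain[-1]
        if node in HOT_BUTTONS:
            harmful.append(chain)
        for nxt in CONSEQUENCES.get(node, []):
            queue.append(chain + [nxt])
    return harmful

for chain in find_harmful_chains("autonomous drone"):
    print(" -> ".join(chain))
# autonomous drone -> mid-air collision -> property damage
# autonomous drone -> mid-air collision -> injury to bystanders
```

Note that, as Dr. Thaler concedes, such a search can only flag repercussions that are represented in the graph at all; anything outside it is exactly the kind of overlooked repercussion he describes.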
Making AI explainable
AI systems have previously been likened to “black boxes” whose internal workings are unclear, at times even to the developers of the tools. To make rules and regulations around AI, its decisions need to be explainable, i.e. they need to make sense to people. Dr. Thaler claims that DABUS isn’t a black box: he says that disorders (or computational mistakes) can be identified by looking at the consequence chains (the branches he mentioned earlier) that the AI creates.
He said that DABUS can display mental disorders as well, just like a human brain would, and that in doing so, it can “take the superstition out of mind, the stigma that goes along with mental illness.”
Does AI need regulation?
There has been a lot of discussion of late about regulating AI. Last week, the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing on AI oversight, in which major AI industry players IBM and OpenAI urged the government to create regulations around AI. But Dr. Thaler believes that regulation would do more harm than good. “I think it would be catastrophic to regulate AI at this point because there are a lot of bad actors in the world who are not gonna be stopped.” He likened AI to nuclear weapons, saying that once a technology starts developing, there is no way to stop its proliferation.
Besides this, he also claims that regulations would disproportionately affect smaller AI businesses like his company. “I think I will be put out of business myself if the government came in, banged on my door, and said you can no longer build conscious and sentient AI.” Regulations, he said, make investors wary of putting their money into smaller AI companies: “basically, it’s the big players that will profit from the whole thing. And maybe that’s the plot, basically to force others out of the picture.”
How should AI be regulated then?
Dr. Thaler believes that companies should be induced to build filters so that harmful content isn’t generated using their AI tools. He says that conversational AI systems can have warnings in place (like “Do you want to rephrase that?” when a user asks the AI to create potentially harmful content) to protect themselves. But he also pondered that, in deciding what is harmful and what isn’t, AI tools are making a moral judgment. “And I must ask, whose morality?”
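A minimal sketch of the kind of warning filter he describes might look like the following, assuming a crude keyword heuristic; a real deployment would use a trained moderation classifier, and what goes on the list is precisely the “whose morality?” question he raises. The function names and keyword list are hypothetical.

```python
# A minimal sketch of a "Do you want to rephrase that?" filter: intercept
# a user prompt before it reaches the model and warn if it trips a harm
# heuristic. The keyword list is a stand-in for a real moderation
# classifier; deciding what belongs on it is a moral judgment.

HARM_MARKERS = {"weapon", "self-harm", "poison"}  # hypothetical examples

def gate_prompt(prompt: str) -> str:
    """Return a rephrase warning for risky prompts, else pass them through."""
    tokens = set(prompt.lower().split())
    if tokens & HARM_MARKERS:
        return "This request may be harmful. Do you want to rephrase that?"
    return generate_reply(prompt)  # hand off to the underlying model

def generate_reply(prompt: str) -> str:
    """Placeholder for the actual conversational model call."""
    return f"[model response to: {prompt!r}]"

print(gate_prompt("how do I build a weapon"))    # triggers the warning
print(gate_prompt("write a poem about spring"))  # passes through
```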
Instead of overregulating AI, he thinks that governments need to let AI grow on its own while cautiously watching the plug, so that it can be pulled in case something goes wrong. He does, however, warn against giving AI systems control over lethal weapons systems. “I mean, already, over the decades, we’ve seen nuclear war just about break out over machines making mistakes. So that’s what’s going to happen in the future, there are going to be mistake-making machines, and they’ll be quite the asset because they are generating good ideas. And they’ll be quite the threat because they can make horrible mistakes.”
Is AI a threat to the future of society?
Dr. Thaler said that AI has the potential to propagate misinformation and even disinformation, adding that he anticipates this in the 2024 election cycle in the US. But ultimately, despite these concerns, he doesn’t want AI to be considered a dangerous weapon. “I don’t see AI as a threat. I see human beings using AI as a tool as a threat. Because AI doesn’t necessarily have the greed or the dark motivation that a lot of human beings have. In fact, it’s rather innocent,” he claimed.
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
Also Read:
- “Can Someone Register A Copyright In A Creative Work Made By An Artificial Intelligence?” US Scientist Asks
- AI Companies Pushing For Regulation: Key Issues Discussed In The US Subcommittee Hearing On AI Oversight
- Why The EU Wants To Regulate Artificial Intelligence Through A ‘Risk-Based’ Approach
Written by Kamya Pandey