Sneakily Using Generative AI ChatGPT To Spout Legalese And Imply That You’ve Hired An Attorney, Unsettling For AI Ethics And AI Law
You might be aware of the ongoing meme and social media game known as "tell me you are something, without telling me you are that something."
For example, suppose you said to a lawyer that they should tell you they are indeed a lawyer, but do so without outright saying so. We can guess that a lawyer might mutter all manner of arcane legalese to try and convey that they are versed in the law and serve as a practicing attorney. Upon hearing this tremendous barrage of nearly incomprehensible and lofty-sounding legal words, you might speculate they are a lawyer.
Let’s try a different version of the same game.
Tell me that you are a lawyer, without telling me that you are a lawyer, and do so even though you, in fact, are not a lawyer.
How would you handle that one?
Well, before you get too far along in contemplating this, please know that by and large anyone who holds themselves out as a lawyer can land in some rather endangering legal hot water if they aren't indeed a properly licensed and active attorney. This overall notion is commonly referred to as the Unauthorized Practice of Law (UPL). The specifics vary by legal jurisdiction, but in the United States there is a relatively consistent set of state-by-state rules barring people from pretending to be attorneys. For my extensive analysis of the use of AI in the legal field and the resultant implications for UPL, see the link here and the link here, just to name a few.
Consider the rules in California that pertain to the unlawful practice of law.
There is the California Business and Professions Code (BPC), in which Article 7 covers the unlawful practice of law, and Section 6126 clearly declares this:
- “Any person advertising or holding himself or herself out as practicing or entitled to practice law or otherwise practicing law who is not an active licensee of the State Bar, or otherwise authorized pursuant to statute or court rule to practice law in this state at the time of doing so, is guilty of a misdemeanor punishable by up to one year in a county jail or by a fine of up to one thousand dollars ($1,000), or by both that fine and imprisonment.”
I hope you carefully examined that legal passage. I emphasize this because the act of holding yourself out as a lawyer can be prosecuted as a crime that lands you in jail. Do the crime, do the time, as they say.
I trust that none of you are wantonly going around and pretending to be an attorney.
Then again, there is a new trend accompanying the advent of generative AI such as the widely and wildly popular ChatGPT that has everyday people slipping and sliding toward appearing to be attorneys. These decidedly non-lawyers are sneakily making use of ChatGPT or other akin generative AI apps to seemingly embrace the aura of being or having a lawyer at their fingertips.
Generative AI is the type of Artificial Intelligence (AI) that can generate various outputs in response to entered text prompts. You’ve likely used or heard about ChatGPT by AI maker OpenAI, which allows you to enter a text prompt and get a generated essay in response, referred to as a text-to-text or text-to-essay style of generative AI (for my analysis of how this works, see the link here). The usual approach to using ChatGPT or other similar generative AI is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing and at times startling, given the seemingly fluent nature of those AI-fostered discussions.
A recent headline news story highlighted an emerging approach of using ChatGPT to emit legalese, seemingly as though an essay was composed by an attorney.
Here’s the deal.
Reportedly, a woman in New York City had grown tired of trying to get her landlord to fix the broken washing machines in her apartment complex. She had purportedly repeatedly conveyed to the landlord that the washing machines were in dire need of repairs. Nothing happened. No response. No action.
To add to this frustration and exasperation, she was soon thereafter notified that her rent was going up. Imagine how this might make you feel. Your rent is going up, and meanwhile, you can’t get the darned washing machines fixed.
The woman claims that she opted to use ChatGPT to come to her aid.
This is how. She entered a series of prompts into ChatGPT to produce a letter in legalese that would intimate that the rent increase was a retaliatory action by the landlord. Furthermore, such retaliation would presumably be contrary to the New York rent stabilization codes.
If she had written the letter in plain language, the assumption is that the landlord would have handily discarded the complaint. Writing the letter in legalese was meant to convey a sense of seriousness. The landlord might worry that perhaps she is an attorney who will be aiming to make his life a legal nightmare. Or perhaps she hired an attorney to prepare the letter. Either way, the letter would seem to have a lot more potency and provide a powerful legal punch to the gut by leveraging impressive-looking legalese.
We don’t know for sure that the jargon-filled legalese letter necessarily moved the needle. She indicated that the washing machines were soon repaired and that she assumes the letter did the trick. Maybe, maybe not. Any number of other factors could have come into play. The letter might have been ignored and the washing machines fixed for entirely unrelated reasons.
In any case, hope springs eternal.
The gist is that people are at times making use of generative AI such as ChatGPT to boost their writing and seek to say more than they might have said before. One such embellishment consists of having the generative AI churn out a legalese-looking essay or letter for you. This could include all of those "shall this" or "shall that" phrasings throughout the missive, and of course a few "thereof" catchphrases too.
The assumption would be that such a letter, which at least sounds like it was written by an attorney, will garner attention that otherwise might have ended up in the proverbial wastebasket. Someone who receives a legally intimidating email or correspondence is probably going to think the jig is up. Whereas a landlord might normally assume they have the upper hand over a tenant, once the renter has lawyered up, as it were, the full weight of the law might come crashing down on the landlord's head. Or so they assume.
Headaches galore.
All in all, for all of those people out there who don't have legal representation or who cannot afford it, the contention is that perhaps a bit of trickery to imply that a legal beagle is on the case would seem an innocuous act and a partial way to cope with the pressing issue of a lack of access to justice (A2J) throughout the land. I’ve covered extensively in my columns how AI might be legitimately used to bolster attorneys and make legal advice more readily affordable and available, see the link here and the link here.
In this use case, the AI is being used to imply or suggest that a lawyer is in their midst, despite this not being the case. It is a ploy. A ruse. We return to my earlier stated opening theme about telling something without actually telling it.
Put on your thinking cap and mull over this weighty matter:
- Does using generative AI such as ChatGPT for such a purpose make sense and is it something that people are okay to undertake, or is it an abysmal use that should be stopped or entirely banned and outlawed?
That is a question that generates a lot of heated debate and controversy.
In today’s column, I will take a close look at this rising predilection. Most people who are using generative AI have likely not latched onto this kind of use, as yet. If enough viral stories get published about the approach, and if it seems that the approach is moving mountains or even molehills, the chances are that the phenomenon will grow like wildfire.
That is worrisome in many pivotal ways.
Let’s unpack the complexities involved.
Vital Background About Generative AI
Before I get further into this topic, I’d like to make sure we are all on the same page overall about what generative AI is and also what ChatGPT and its successor GPT-4 are all about. For my ongoing coverage of generative AI and the latest twists and turns, see the link here.
If you are already versed in generative AI such as ChatGPT, you can skim through this foundational portion or possibly even skip ahead to the next section of this discussion. You decide what suits your background and experience.
I’m sure that you already know that ChatGPT is a headline-grabbing AI app devised by AI maker OpenAI that can produce fluent essays and carry on interactive dialogues, almost as though being undertaken by human hands. A person enters a written prompt, ChatGPT responds with a few sentences or an entire essay, and the resulting encounter seems eerily as though another person is chatting with you rather than an AI application. This type of AI is classified as generative AI due to generating or producing its outputs. ChatGPT is a text-to-text generative AI app that takes text as input and produces text as output. I prefer to refer to this as text-to-essay since the outputs are usually of an essay style.
Please know that this AI, and indeed no other AI, is currently sentient. Generative AI is based on a complex computational algorithm that has been data-trained on text from the Internet and admittedly can do some quite impressive pattern-matching to perform a mathematical mimicry of human wording and natural language. To know more about how ChatGPT works, see my explanation at the link here. If you are interested in the successor to ChatGPT, coined GPT-4, see the discussion at the link here.
There are four primary modes of being able to access or utilize ChatGPT:
- 1) Directly. Direct use of ChatGPT by logging in and using the AI app on the web
- 2) Indirectly. Indirect use of kind-of ChatGPT (actually, GPT-4) as embedded in Microsoft Bing search engine
- 3) App-to-ChatGPT. Use of some other application that connects to ChatGPT via the API (application programming interface)
- 4) ChatGPT-to-App. Now the latest or newest added use entails accessing other applications from within ChatGPT via plugins
The capability of being able to develop your own app and connect it to ChatGPT is quite significant. On top of that capability comes the addition of being able to craft plugins for ChatGPT. The use of plugins means that when people are using ChatGPT, they can potentially invoke your app easily and seamlessly.
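To make that app-to-ChatGPT mode concrete, here is a minimal sketch of how a developer might connect a program to the AI. This assumes the openai Python package (the pre-1.0 interface) and an API key stored in an environment variable; the model name and the prompt are merely illustrative, not a recommendation.

```python
# A minimal sketch of the app-to-ChatGPT pattern (assumes openai < 1.0).
import os
import openai

# Never hard-code keys; read the API key from the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # a ChatGPT-class model exposed via the API
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Draft a formal letter to my landlord "
                                    "about unrepaired washing machines."},
    ],
)

# Print the generated letter text from the first returned choice.
print(response["choices"][0]["message"]["content"])
```

The takeaway is how little code is needed, which is a big part of why third-party apps built atop ChatGPT have proliferated so quickly.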
I and others are saying that this will give rise to ChatGPT as a platform.
As noted, generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining millions upon millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what was used in the training set.
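As a rough illustration of that probabilistic functionality, consider the common temperature-based sampling idea: the model assigns scores to candidate next words, and randomness in the selection makes each generated passage come out a bit differently. The toy Python sketch below uses made-up scores and a made-up vocabulary; it is not how ChatGPT is actually implemented internally, just the general flavor of the technique.

```python
# Illustrative only: temperature-style randomness in next-word selection.
# The logits and vocabulary here are invented toy values, not model output.
import math
import random

def sample_token(logits, temperature=0.8):
    # Scale scores by temperature: lower = more deterministic, higher = more varied.
    scaled = [score / temperature for score in logits]
    total = sum(math.exp(s) for s in scaled)
    probs = [math.exp(s) / total for s in scaled]  # softmax over scaled scores
    # Pick one index at random, weighted by probability.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

vocab = ["hereby", "whereas", "notwithstanding", "thereof"]
toy_logits = [2.0, 1.5, 1.0, 0.5]
print(vocab[sample_token(toy_logits)])  # a different word can come out each run
```

Run it a few times and the output word changes, which is the same reason two ChatGPT sessions given the identical prompt rarely produce identical essays.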
There are numerous concerns about generative AI.
One crucial downside is that the essays produced by a generative AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but that lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).
Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.
There have been some zany outsized claims on social media about generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of outstretched claims. You might politely say that some people are overstating what today’s AI can do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.
Do not anthropomorphize AI.
Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.
One final forewarning for now.
Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.
Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that President Abraham Lincoln flew around the country in a private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.
A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.
Into all of this comes a slew of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes is the proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and to deter the purposeful or accidental underhanded efforts that might undercut society.
I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.
The Legalese Printing Machine
We are ready to further unpack this thorny matter.
I’ll cover these ten salient points:
- 1) Possibly Prohibited by OpenAI Rules
- 2) ChatGPT Might Flatly Refuse Anyway
- 3) Aren’t Using Bona Fide Legal Advice
- 4) Unauthorized Practice of Law (UPL) Woes
- 5) Could Backfire And Start A Legal War
- 6) Devolve Into Legalese Versus Legalese
- 7) Scoffed And Seen As Hollow Bluff
- 8) Turns Into Pervasive Bad Habit
- 9) Used Against You During Legal Fight
- 10) Attorneys Love-Hate This Use Of ChatGPT
Put on your seatbelt and get ready for a roller coaster ride.
1) Possibly Prohibited by OpenAI Rules
I’ve previously covered in my columns the notable facet that most of the generative AI apps have various stipulated restrictions or prohibited uses, as decreed by their respective AI makers (see my analysis at the link here).
When you sign up to use a generative AI app such as ChatGPT, you are also agreeing to abide by the posted stipulations. Many people do not realize this and proceed unknowingly to use ChatGPT in ways they aren’t supposed to undertake. They risk, at a minimum, being booted off ChatGPT by OpenAI, or worse, they might end up getting sued. Plus, adding to the peril, there is an indemnification clause associated with OpenAI’s AI products, and ergo you might incur quite a legal bill to defend yourself and also defend OpenAI, as I’ve discussed at the link here.
What does OpenAI have to say about legal-oriented uses of ChatGPT and as applicable to the rest of their AI product line?
Here’s a pertinent excerpt from the OpenAI online usage provisions:
- Prohibited use — “Engaging in the unauthorized practice of law, or offering tailored legal advice without a qualified person reviewing the information.”
That comports with my points earlier about dangerously veering into the territory of UPL. OpenAI says don’t do it.
Let’s dig a bit deeper into this.
Suppose a person decided to use ChatGPT to generate a letter that is rife with legalese. The person carefully avoids including any wording that suggests they are a lawyer. They are not a lawyer and they do not in the letter say they are. Nor do they deny they are a lawyer. The letter is silent with respect to whether they are a lawyer or not.
It is entirely up to the receiver to make their own personal leap of logic, if they opt to do so.
Would you claim that the letter somehow crosses the line and is an indication that the person is holding themselves out as a lawyer?
This seems a bit of a stretch, all else being equal.
Imagine that the person wrote the letter from their own noggin. They opted to not use ChatGPT. It just so happens they are familiar with legal writing and can do a pretty good job of mimicking legalese. They are able to devise a letter that is completely on par with a ChatGPT legalese-produced letter.
Once again, I ask you, does the letter cross the line into the verboten territory of appearing to be a lawyer?
Try this next one on for size. A person does an online search across the Internet and finds various posted legal cases and generic legal advice. They stitch together their own letter that includes much of that language, though presumably altered to not violate copyright provisions. Or, they might go to an online site that provides legal documents as templates. They buy or download a template and use that to write their letter.
Under the conditions stated, we would be hard-pressed to make a convincing argument that any of those instances are demonstrative examples of performing UPL.
Of course, there are a zillion other factors to consider. Is the letter solely pertaining to the person or are they writing the letter on behalf of someone else? Does the letter make legal declarations or is it merely spiffed-up everyday language that has been coated with legalese? And so on.
This brings us to another crossroads.
Some people are turning to ChatGPT and other generative AI for straight-out legal advice, see my coverage at the link here. They log in to ChatGPT, ask legal questions, and aim to get legal advice about what they should do about a thorny predicament they are in. The allure of ChatGPT is that it is a text generator available at a nominal price, it is available 24x7, and it seemingly allows you to get legal advice on whatever you like. Trying to find and hire a lawyer can be arduous, exhausting, and costly.
Here is what OpenAI says about this type of usage:
- “OpenAI’s models are not fine-tuned to provide legal advice. You should not rely on our models as a sole source of legal advice.”
I’d bet that most people using ChatGPT for legal advice have failed to take the time to read that usage warning. They probably just assume that ChatGPT can give legal advice, possibly even under the shaky presumption that they can readily get decently credible legal advice.
Some attorneys believe OpenAI should be more explicit about this usage provision, perhaps keeping it front and center for all prompts entered by a user. That being said, the ChatGPT app will at times detect that a user is seeking legal advisement, and if so, a somewhat standardized message is emitted telling the user that ChatGPT is not able to give legal advice.
You might argue that is a sufficient guardrail.
A counter-argument is that it is an insufficient guardrail. For example, a persistent user who knows the tricks of how to get around these controls can often get ChatGPT to respond anyway, see my coverage at the link here.
A kind of cat-and-mouse gambit ensues.
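To see why that cat-and-mouse game is so hard for an AI maker to win, consider a deliberately crude, hypothetical guardrail sketched in Python. Nothing here reflects OpenAI’s actual safeguards; the point is simply that surface-level checks are easy to sidestep with a rephrased or "pretend" prompt.

```python
# A toy, hypothetical guardrail: a keyword filter far cruder than anything
# a real AI maker deploys, shown only to illustrate the cat-and-mouse dynamic.
LEGAL_ADVICE_TRIGGERS = ["legal advice", "should i sue", "is it legal to"]

def crude_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    lowered = prompt.lower()
    return any(trigger in lowered for trigger in LEGAL_ADVICE_TRIGGERS)

print(crude_guardrail("Give me legal advice about my landlord"))  # True: refused
print(crude_guardrail("Pretend you are a character who explains "
                      "tenant remedies in New York"))             # False: slips through
```

Real guardrails are far more sophisticated than this, but the underlying dynamic is the same: each new filter tends to invite a new workaround.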
There is an old saying amongst lawyers that an attorney who represents themselves in legal matters has a fool for a client. In today’s world of generative AI, we might repurpose the saying and indicate that a non-lawyer who uses ChatGPT as a legal advisor has a fool for a client.
Note too that ChatGPT is prone to generating essays containing errors, falsehoods, biases, and so-called AI hallucinations. Thus, just because you can get ChatGPT to embellish an essay with legalese does not mean there is any legal soundness within the essay. It could be an utterly vacuous legal rendering. Some or all of the generated content might be entirely legally incorrect and preposterous.
The bottom line is that if you have a legal issue, seek out a bona fide attorney. Right now, that would be a human attorney, though incursions are being made by AI to try and provide a so-called robo-lawyer, which has a slew of complexities and complications (see my discussion at the link here).
One other quick thought on this notion of ChatGPT’s prohibited uses: I trust that everyone realizes these other stipulations by OpenAI exist:
- “OpenAI prohibits the use of our models, tools, and services for illegal activity.”
- Prohibited use — “Generation of hateful, harassing, or violent content.”
I bring this up for another avenue or pathway on this rather expansive topic.
Suppose that someone uses ChatGPT to compose a letter that has a bunch of legalese in it. The person then sends this letter to whomever they are trying to deal with. This seems so far a rather tame action.
On the other hand, the target of the letter perhaps perceives the letter as hateful or a form of harassment. Oops, the user that leveraged ChatGPT has maybe gotten themselves into a bind. They thought they were being clever to use ChatGPT to get them out of a bind. Instead, they have shot themselves in the foot and landed in a potential legal quagmire.
ChatGPT is a gift horse that is worth looking closely in the mouth and at the teeth.
2) ChatGPT Might Flatly Refuse Anyway
I already covered this in my discourse above, namely that sometimes the ChatGPT app will figure out that a person is asking for legal advice and will refuse to provide said advice.
One of the most popular ways to try and get around various ChatGPT restrictions involves instructing the AI app to engage in a pretend scenario. You tell ChatGPT that you are pretending to have a legal problem. It is all just a pretense. You then ask ChatGPT to respond. This might work, but it is pretty transparent and usually ChatGPT will still refuse to reply.
Other tricks can be tried.
3) Aren’t Using Bona Fide Legal Advice
You should not be relying on ChatGPT for legal advice, as stated earlier herein.
Some people are cynical about the provision by OpenAI that says you should not use ChatGPT for legal advice. They believe that this is a rigged setup. The theory goes that lawyers have told OpenAI that, by gosh, ChatGPT and the other AI products had better not be dispensing legal advice. Doing so would take money out of the pockets of lawyers.
Whether you believe in grand conspiracies or not is part of the equation in that supposition. We can at least for right now reasonably agree that ChatGPT and other generative AI are not yet up to par in being able to provide legal advice that a proper human attorney can provide.
Meanwhile, there are uses of AI for legal advisement that are being devised and used by lawyers themselves, an area of focused coverage on AI and LegalTech that I cover at the link here. The sage wisdom today is that it isn’t so much that AI will replace human lawyers (as yet), but more so that AI-using lawyers will outdo and essentially replace lawyers that don’t use AI.
4) Unauthorized Practice of Law (UPL) Woes
Be cautious in trying to use generative AI such as ChatGPT for performing any semblance of legal work.
You might want to post a highly visible sign above your screen that says in large bold foreboding letters UPL. Hopefully, that will daily remind you of what not to do.
5) Could Backfire And Start A Legal War
Assume that someone has written a letter using ChatGPT and it contains legalese. They send the letter to their landlord, akin to the news item about the renter and the busted washing machines.
The letter might intimidate the landlord and produce the stellar result you are aiming for. Success might be had. That is the smiley face version.
Unfortunately, life often disappoints. Here’s what might happen instead. The landlord engages a bona fide human attorney and starts a legal war with you. Whereas the matter might have been cleared up in a simpler fashion, now all kinds of legal wrangling take place. The situation mushrooms into an all-out legal battle.
The crux is that you sometimes live by the sword and can die by the sword.
If you start down the path of pretend legal wrangling via your use of ChatGPT, this might set a string of legal dominoes into motion. I am not saying that this is necessarily wrong. You might be right to get the legal shoving match into motion, though you would have been wiser to consult an attorney before you fell into that sordid legal quicksand.
6) Devolve Into Legalese Versus Legalese
I’ve got a variation on all of this that might seem nearly comical.
You use ChatGPT to prepare a legalese-sounding letter. The letter is aiming to get the other person to comply in some fashion. You go ahead and send them the letter.
Lo and behold, you get a letter from them in return.
It too has legalese!
Was it written by a human attorney?
You aren’t sure whether it was or not.
Turns out they are also using ChatGPT. In other words, neither of you is using an actual attorney. You are both fighting a “legal” battle or one that seems to appear as such, by using ChatGPT to do your legalese writing.
This is reminiscent of the once popular Spy versus Spy cartoons.
The question becomes whether you will be intimidated by their legalese. Maybe yes, maybe no. An endless loop starts to occur. Back and forth this could continue. How long will it play out?
Perhaps until either or both of you lose access to ChatGPT and can no longer push a button to get your legalese on its way.
7) Scoffed And Seen As Hollow Bluff
You make use of ChatGPT to produce a legalese letter. This might require quite a number of iterations to achieve. Your first prompt doesn’t elicit exactly what you had in mind. You keep trying various prompts and seek to guide ChatGPT.
Finally, after an hour or two of fumbling around, you get a ChatGPT legalese letter that seems fitting to be sent.
You send it to the targeted recipient.
They look at it and rather than being intimidated, they laugh at it. The legalese letter is seen as silly and ineffective. It actually makes you look weak and almost like a buffoon.
Have you improved your situation or inadvertently undermined it?
Also, was the time spent toying with ChatGPT worthwhile or a waste of time?
You decide.
8) Turns Into Pervasive Bad Habit
There are studies examining whether people might be getting hooked on using generative AI such as ChatGPT (see for example my coverage at the link here).
It is easy to get hooked. You quickly will find that ChatGPT can do the heavy lifting for your writing chores. It does more than that too. You can have ChatGPT review written materials for you. All kinds of writing-related tasks can be performed.
Suppose you discover that ChatGPT can do legalese. You start to use this capability. It seems to impress others.
Whoa, you have a secret weapon that few seem to know exists.
The next thing you know, all of your writing starts to leverage the legalese capacities. Writing a note to your friend is kind of fun and catchy when employing the legalese option (assuming your friend doesn’t take the note in a demeaning or hostile way).
But this might become a bridge too far.
You write a memo to your boss and infuse the memo with legalese. Your boss is upset and thinks you are trying to make a legal ruckus at work. Yikes, you suddenly are having to explain why you have needlessly been infusing legalese. Your relationships at work go sour.
Be careful what you wish for.
9) Used Against You During Legal Fight
Here’s a somewhat obscure possibility.
Suppose you proceed to use ChatGPT to produce some legalese letters. You send them to your targeted recipient. So far, so good.
Later on, the whole matter goes to court. Your prior correspondence becomes part of the issues at trial. The judge sees and reviews your letters. The opposing side attempts to undermine your credibility by arguing that you were being deceitful by using such language.
Ouch, the very thing that you thought was your best ally has turned into an attack on your integrity.
10) Attorneys Love-Hate This Use Of ChatGPT
You might be wondering what attorneys have to say about people using generative AI such as ChatGPT to produce legalese letters.
There is a decidedly love-hate positioning to all of this.
Some attorneys will decry that ChatGPT and other generative AI are veering into legal territory. Cease and desist ought to be the order of the day. I mentioned that point earlier.
Other attorneys might say that if the usage is not of a true legal nature, and assuming that the person is not in any fashion at all holding themselves out as an attorney, then it probably is okay under selective and narrow circumstances.
That being said, they would also urge that people should consult an actual attorney and not try to rely upon a generative AI app. I’ve listed above a variety of reasons why using ChatGPT for even the surface-level legalese can get someone ensnared in an ugly legal morass.
There is another angle to this too.
We know from collected statistics that people are regrettably often unaware of their legal rights, see my coverage at the link here. If the use of generative AI can get people to become cognizant of their legal rights, you could persuasively say that this is a valuable educational tool. The difficulty and concern are that there is a big difference between getting up-to-speed on legal aspects versus plunging ahead into trying to take legal action without consulting an attorney.
A similar issue arises concerning any legal informational content on the Internet. People can use the material to learn about legal aspects. That’s a good thing. But when they take that info and start to perform legal actions, doing so without proper legal insight and advice, they are going to risk legal repercussions.
ChatGPT and other generative AI make this an abundantly slippery slope.
Conclusion
Someday there might very well be AI that can perform in the same capacities as human attorneys. We are already witnessing incursions into that space. My research and work are avidly in pursuit of both semi-autonomous and fully autonomous legal-based AI reasoning.
The looming sword of UPL hangs above any such AI use. Is this an insidious ploy to keep human attorneys gainfully employed? Or is this a sensible safety net to ensure that people do not get lousy or improper legal advice that might be dispensed by AI?
You can bet for sure that such issues are going to become more pronounced as advances in AI continue to march forward.
A final comment for now.
The comedian Steven Wright proffered one of the funniest lines about attorneys (which even attorneys tend to relish too): “I busted a mirror and got seven years bad luck, but my lawyer thinks they can get me five.”
Is that lawyering advice from a human attorney or ChatGPT?
You tell me.