Why I Do Not Fear AI

by Vanessa L. Kanaga, Esq., on June 16, 2023

Last year, I was working with a physical therapist to rehab a hip injury, and I asked him, “What do you think about foam rollers?  Are they good or bad?”  His response:  “A foam roller is a tool, so it depends on how you use it.  Would you ask whether a hammer is good or bad?”  That answered my question, and he made a good point.  As humans, we are fortunate to have the ability to create and use tools to improve our lives, by making tasks easier or more efficient, or by doing things we otherwise could not do with our natural capabilities alone.  Unfortunately, we also have the ability to use those tools for purposes that are detrimental (a certain country song involving a Louisville Slugger comes to mind).  I’ve been thinking about this concept a lot in recent months, as the discussion of ChatGPT and other AI tools has permeated our inboxes, newsfeeds, and TV screens.  Understandably, people are concerned.  They are concerned about kids using AI to do their homework and the effects on learning.  They are concerned about a dystopian future in which our lives are controlled by machines.  And they are concerned about AI-enabled tools doing jobs that now require human skill and intelligence, leaving those of us who rely on those jobs unemployed and destitute.

I do not intend to minimize these concerns or disparage those who raise them.  To the contrary, I think those concerns are very real, and they paint a picture of one possible future (with the exception of kids using AI to complete homework which, if you know a teacher, you know is a current reality).  A possible future, but not an inevitable future.  The flip side of our human ability to misuse and abuse tools is our ability to exercise restraint and curtail our activities to minimize harm.  AI is a tool, just like a hammer or a foam roller.  Many have already noted the benefits it can provide in the workplace when used to generate content as a starting point – not the end product, but a good head start.  It is a way to work more efficiently, to help organize thoughts, and to overcome “blank page syndrome.”  This concept is a familiar one, if you think about it.  Many of us have searched online for a template to use as a starting point when drafting a letter, putting together a project plan, or even creating a menu.  To some extent, InterActive Legal users are doing the same thing when they use our programs.  Not every document generated in InterActive LegalSuite will meet 100% of the client’s needs.  In fact, it may be safe to say that most of the time, the document is 90% complete, and the attorney, or the paralegal working with the attorney, is responsible for providing the extra 10% that turns it into a finished product.  That may not seem like much, but that 10% is critical.  It is what distinguishes a human-made tool from the human using it, and what makes humans a necessary component of a competent workplace and a flourishing society.

Of course, there are interests inclined to drive toward eliminating the need for human involvement, perhaps by pushing society toward lower standards, so that the AI-driven product is deemed sufficient.  Without a doubt, there are some tasks currently done by humans that can be adequately accomplished by AI.  That does not mean the need for humans is diminishing; it means we will have to reshape the type of work done by humans.  This will likely require embracing AI as a tool, and finding the opportunity in using it to work smarter and accomplish more, just as lawyers are able to expand their capabilities and work more efficiently using automation tools like InterActive Legal.  Rather than lowering our standards, we have to raise them, to expect more of ourselves now that we have this tool to help us.  To do that, we have to collectively agree to use this new and developing technology for good, and be judicious about how it is deployed.  We have to be mindful of the potential for harm, and act intentionally to avoid it.  That may mean holding back and erring on the side of caution, rather than moving at full speed to tap into the full capability that AI has to offer.

AI is a tool, neither good nor bad.  Its benefits and harms lie in what we make of it.  Once we recognize this, we can see that it is not something to be feared, but something to be managed.  By doing that, we can harness its full potential and perhaps even elevate the quality of human lives.


Author

Vanessa Kanaga currently serves as InterActive Legal’s Special Advisor on Estate Planning and Legal Strategy.  She is the former CEO of InterActive Legal.  Vanessa received her J.D. from Cornell Law School and holds a B.A. in Philosophy from Wichita State University, as well as an Advanced Professional Certificate from New York University School of Law. She is licensed in New York, Kansas, and Arizona, and currently lives in Arizona.

Prior to joining InterActive Legal in 2013, Vanessa practiced in New York, at Milbank LLP and Moses & Singer LLP, and in Kansas, at Hinkle Law Firm, LLC. She has experience in a range of estate planning matters, including high net worth tax planning and asset protection planning.

In 2024, Vanessa returned to the practice of law.  She is an Associate Attorney at Greengard Law Firm, PLC in Phoenix, Arizona.
