Artificial Intelligence and Machine Learning: Navigating the Pitfalls

The potential benefits of artificial intelligence (AI) and machine learning (ML) for today’s small and mid-sized businesses are undeniable, but their rapid adoption also raises legal concerns. Key pitfalls include bias in AI algorithms, data privacy breaches, and uncertainty over intellectual property ownership.

To navigate these challenges, companies can take proactive steps such as vetting potential AI vendors, setting standards for gathering data, implementing robust data security measures, and defining and enforcing protocols for all AI- and ML-powered processes and systems. These measures will help businesses harness the power of AI and ML while safeguarding their legal interests, maintaining public trust, and upholding corporate responsibility.

The Rapid Evolution of AI and ML

The growth of AI and ML has been exponential over the last few years, rapidly redefining the technological landscape and how people interact. ChatGPT is already ubiquitous for business and personal use, much like how smartphones transformed the way people work and live. According to Statista, the AI market grew beyond $184 billion in 2024, up $50 billion from 2023, and by 2030, the U.S. market is expected to grow beyond $826 billion. 

With this massive marketplace shakeup, businesses are examining the potential use cases for generative AI and ML within their products, services, and operations. But along with this enthusiasm are the genuine concerns of a possible invasion of privacy via personal data that these new algorithms could potentially be trained on, as well as the need for controls to be in place to secure that data and proprietary information.

Setting the Parameters 

When implementing AI and ML in their business plans, companies can avoid these potential pitfalls by focusing on several issues, including AI and ML bias, which occurs when bias creeps into algorithm programming. This commonly happens when algorithms are trained on skewed demographic data. Amazon, for example, abandoned its AI-based recruiting tool in 2018 after discovering that its algorithms, trained on data dating back four years, favored male over female candidates.

In May 2020, the ACLU, ACLU of Illinois, and the law firm Edelson PC sued Clearview AI—a face surveillance company—alleging violation of Illinois residents’ privacy rights under the Illinois Biometric Information Privacy Act (BIPA). Clearview AI “scraped” or extracted more than 10 billion faceprints from online photos using facial recognition software. The company planned to sell that technology to private companies before being sued. In 2022, the parties reached a settlement that permanently banned Clearview AI nationwide from making its faceprint database available to businesses and most private companies, either for free or for profit.

Nathan Freed Wessler, a deputy director of the ACLU Speech, Privacy, and Technology Project, said the settlement “demonstrates that strong privacy laws can provide real protections against abuse. Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.”

It’s difficult to avoid inadvertently building bias into algorithms. Still, companies can remain vigilant when processing data with the rapid evaluation method (REM) and by using tools specifically designed to spot bias in algorithms, such as IBM watsonx.ai. Tarun Chopra, vice president of product management for data and AI software at IBM, said of watsonx.ai, “At the heart of all AI use cases is data, but more important is access to and processing of the required data to get the best results from AI models. Enterprises need to be able to bring compute capacity and AI models to where enterprise data is created, processed, and consumed—in support of AI use cases—including both traditional AI and machine learning (ML) workloads and generative AI.”
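As a minimal illustration of what automated bias spotting can look like (this is not vendor code, just a sketch using the common “four-fifths” rule of thumb; the group names, decision data, and threshold are assumptions), a company might compare selection rates across demographic groups and flag any group that falls well below the highest rate:

```python
# Sketch: flag potential selection-rate disparity across groups using
# the "four-fifths" rule of thumb. All data here is illustrative.

def selection_rates(outcomes):
    """outcomes maps group name -> list of 0/1 decisions (1 = selected)."""
    return {g: sum(d) / len(d) for g, d in outcomes.items() if d}

def flags_disparate_impact(outcomes, threshold=0.8):
    """Return groups whose selection rate is under `threshold` times
    the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r for g, r in rates.items() if top and r / top < threshold}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% selected
}
print(flags_disparate_impact(decisions))  # group_b falls below the 80% ratio
```

A check like this only surfaces a symptom; remedying it still requires reviewing the training data and the features the model relies on.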

Upfront Intellectual Property Agreements

In addition to bias concerns, companies can also focus on navigating ownership rights. That includes securing copyrights and obtaining clear contractual agreements to avoid legal skirmishes. Contracts that spell out how the AI or ML system will be trained on data, along with opt-in or opt-out processes, are not difficult to implement if companies are willing to be transparent about their methods.

It’s also beneficial for organizations to recognize the legal standards already in place regarding regulation and compliance. For example, in the medical field, there is the Health Insurance Portability and Accountability Act (HIPAA) compliance checklist, which, among other things, requires companies to designate a HIPAA privacy officer “responsible for the development, implementation, and enforcement of HIPAA-compliant policies.” HIPAA also requires that electronically transmitted data be encrypted or anonymized by replacing key identifiers with other values.
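Replacing key identifiers with other values can be as simple as substituting a salted hash token for each direct identifier. The sketch below is illustrative only (the salt, the list of identifier fields, and the record layout are assumptions, and real HIPAA de-identification involves far more than this):

```python
# Minimal pseudonymization sketch: direct identifiers are replaced with
# salted SHA-256 tokens; non-identifying fields pass through unchanged.
import hashlib

SALT = b"replace-with-a-secret-salt"   # assumption: managed out of band
ID_FIELDS = {"name", "ssn", "email"}   # assumption: known identifier fields

def pseudonymize(record):
    out = {}
    for key, value in record.items():
        if key in ID_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            out[key] = digest[:16]  # shortened token replaces the identifier
        else:
            out[key] = value
    return out

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "J45.909"}
print(pseudonymize(patient))
```

Because the same salt always yields the same token, records can still be joined for analysis without exposing the underlying identifiers, though the salt itself must then be protected like a key.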

While the United States does not yet have comprehensive legislation that directly regulates AI, some rules limit how companies can use and share data under the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR). In addition, the proposed Algorithmic Accountability Act of 2023 would require companies to assess the impacts of the AI systems they use and sell and would create new transparency about when and how such systems are used.

Navigating the Future 

As AI and ML technology expands, increased regulation will inevitably be part of the future landscape, particularly regarding the data used to train these systems. Companies can expect regulations that curb the indiscriminate scraping and processing of information from the internet. Businesses should also expect additional regulations regarding the ethics of AI and ML, requiring validation before these tools can be deployed, along with certifications and agreements that include disclaimers. In addition, mechanisms must be in place to ensure human oversight when managing these systems.

To avoid legal pitfalls when implementing AI and ML tools, companies need to commit to remaining vigilant about complying with data privacy laws and regulations. It’s also essential to understand that this is a rapidly evolving field, and new rules and regulations will continue to emerge. Businesses can invest in algorithmic bias detection and adopt sound legal strategies, such as securing robust licensing and copyright agreements when obtaining data. With strong protocols, organizations that take advantage of these powerful tools will become adept at ensuring their training models are safe and effective as technology advances.

By Stephen Murray

Stephen Murray is a skilled programmer analyst with more than 20 years of experience in data analytics and advanced technology solutions, including advanced skills in SQL development, installation, deployment, configuration, and performance tuning. Stephen holds a Higher National Diploma in Computing and an ISEB ITIL Foundation Certification in IT Service Management. In 2006, he was recognized as a Scottish Enterprise Lanarkshire Emerging Executive. Connect with Stephen on LinkedIn.
