Even though artificial intelligence has been part of our lexicon for more than seventy years, it remains the latest bright shiny thing. Businesses large and small feel compelled to incorporate artificial intelligence into their company descriptions, even with a limited understanding of what it is or how it could help their business. Hotels and hotel companies are no different; just take a look at the online newsletters and announcements hitting your mailbox. It’s a rare day that a hotel company doesn’t announce that it is incorporating artificial intelligence into its business, whether to increase guest satisfaction, offer new services, improve reservations, or for any of a variety of other reasons.
While artificial intelligence can clearly help, jumping on the AI bandwagon can have unintended consequences.
What is Artificial Intelligence?
Most of us have an imperfect concept of artificial intelligence: we assume the name describes the product. However, artificial intelligence is not necessarily what it sounds like. IBM defines artificial intelligence as “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.” But what most people think of as artificial intelligence is generative AI, technology that can create original text, images, video, and other content without human intervention.
Underlying this is a hard fact: artificial intelligence is highly technical and very difficult. As Joseph Greenberg of Maryman and Associates, an expert in the field, told me, “To understand artificial intelligence, you have to understand neural networks.” I don’t understand neural networks – do you?
What are the risks of Artificial Intelligence?
Some of the risks of artificial intelligence – or, more accurately, of AI systems and tools – are well publicized. For example, AI “hallucinations,” which occur when a generative AI tool creates responses to prompts that have little or no basis in fact, have become legendary. Biased or inaccurate responses are a common issue, and certain AI models have design flaws that can magnify those issues. Additionally, because of the complexity of AI systems, artificial intelligence cannot be treated simply as another form of software – different and more intensive vetting of AI systems is required.
An AI system is not like a car, a computer, or the many other things we use but don’t understand. Or, more accurately, using one is like driving a car without understanding what the steering wheel, accelerator, and brake do. You are bound to have an accident.
The National Institute of Standards and Technology recently published an AI “Risk Management Framework” that identifies several risks inherent in AI systems. Among other things:
- Difficulty in Measurement. The risk in using AI systems is difficult to measure, making it challenging to implement a “safe” system.
- Adapting and Evolving. AI systems are, by their nature, continually adapting and evolving, which may make a risk analysis at one stage in the AI lifecycle inapplicable to a later stage.
- Lack of Transparency. AI systems are often opaque, lacking in documentation or explanation.
Moreover, a functioning AI system raises risks of inadequate compliance with laws, inadvertent disclosure of personal and business information, and a variety of ethical dilemmas. The takeaway here is that if you cannot identify or measure the risk, you might be unable to manage it.
Managing the Risk.
While eliminating risk might be impossible, it can be managed. Some of the steps a company can take to control the risk in AI systems include:
- Understand the system and how you plan to use it. Make sure that you understand the purpose of the AI system and how it will address your needs.
- Consider compliance. A variety of laws and regulations govern the lawful use of artificial intelligence. Currently, the European Union’s AI Act, the Utah AI Policy Act, and the Colorado AI Act stand out as laws geared specifically toward artificial intelligence, but the nature of artificial intelligence is that it can trigger virtually all privacy laws, as well as scrutiny by the FTC and state attorneys general. And, just as legislatures and regulators are focusing on privacy rights, they are moving into artificial intelligence regulation as well (even without fully understanding the concepts). Since hotel companies often cross state and international borders, recognizing all of the potentially applicable laws is essential.
- Hot button Issues. Recognize that some applications of artificial intelligence are particularly sensitive, such as:
- Employment decisions;
- Credit scoring;
- Training with protected or unlawfully obtained data; and
- For those in the federal supply chain, the Biden Administration’s AI Executive Order.
There are also actions you can take to limit your risk exposure:
- Risk Analysis: Despite the challenge, understand how the AI system might create risks for your hotel or hotel company. The risks range from violations of specific artificial intelligence and privacy laws to intellectual property infringement, loss of trade secrets, and reputational harm.
- Vendor Assessment: Learn as much as you can about who will provide or develop the AI system – its experience, reputation, past projects, and personnel.
- Training Materials: Define the data that will train the AI system and where it came from. Does it include personal information, copyrighted materials, or trade secrets? Did the developer have the right to use the data? Remember that hotels rightfully collect a lot of personal information; determine whether that information will be used to train the system, and take steps to protect it.
- Review your Vendor Agreement Carefully: As noted above, artificial intelligence systems are different from other software. A careful review of the representations and warranties, indemnification provisions, and limitations on liability is essential.
- Don’t skimp on the Statement of Work: The statement of work (the actual description of what the AI system will do) is key. That can be challenging because AI systems are often developed with broad initial goals, making continuing review of system requirements and goals essential. Keeping your eye on the purpose, goals, and guardrails will help keep your firm on solid ground.
- Have an AI Governance Committee and Policy: Establish a group with meaningful authority, and with technical and legal expertise, to oversee the use of AI systems and tools.
Artificial Intelligence tools are expected to transform the way we work. They have the potential to automate tasks, improve decision-making, and provide valuable insights into our operations. However, the use of AI tools also presents new challenges in terms of information security and data protection. Adopting AI systems and tools requires preparation and careful thought – don’t just reach for the brightest new penny!