Reduce AI Risks through Policy

Risks of irresponsible AI use and AI misuse include the spreading of inaccurate information, data leakage, bias and discrimination, and other issues that can lead to reputational damage. Brokerages and other industry organizations can reduce these risks through policy. Such policies can be included by reference in contracts for independent contractors.

Responsible AI use is a complex subject, and AI policy documents typically run from ten to thirty pages or more when they include sufficient detail, definitions, and explanatory text. The following list is therefore not comprehensive, but it covers subjects commonly addressed in policy:

  • Guiding principles, including a commitment to managing AI risk and to using AI to execute the company’s mission efficiently, along with the requirement that all uses be lawful, ethical, transparent, and necessary.
  • Ethical guidelines, such as obtaining informed consent, maintaining integrity in use, ensuring appropriate content, and prohibiting unauthorized use.
  • What constitutes high-risk use given the domestic and global laws under which the company may operate, including but not limited to AI-specific, security, and privacy laws.
  • The need to consult management for approval of new uses, and the process for that approval.
  • Review of data used to train the AI model to evaluate accuracy and bias.
  • Requirements for management, security, privacy, legal, and human resources review.
  • The inclusion of AI-specific clauses in technology provider agreements.
  • Lists of approved and forbidden tools and purposes.
  • Prohibited uses.
  • The need for AI-generated content to be reviewed by one or more persons with expertise who can evaluate and correct its accuracy, bias, and language use.
  • Types of information that must never be put into a closed or open AI system.
  • Requirement to document AI tools used and purposes for which they are used.
  • If implementing AI in-house: documentation, architecture, development, and testing tools for secure deployment.
  • Training on this policy.
  • Consequences of non-compliance with the policy.

The risks inherent in uncontrolled AI adoption are significant. They can be dramatically lowered through policy creation, adoption, training, and compliance monitoring.

The preceding information is educational and intended for informational purposes only. It does not constitute legal advice, nor does it substitute for legal advice. Consult an independent, trained attorney in the appropriate jurisdictions to ensure compliance with the law.