Deepfakes in Insurance Fraud
Insurance companies can themselves be victims of AI-enabled fraud, whether committed by insureds making false or exaggerated claims on their own policies or by third-party claimants.
One example is an incident shared by a UK insurance law firm, which reported that, after investigating CCTV evidence submitted in support of an alleged accident, it discovered the footage had been manipulated to change the date, time, and registration of the vehicle purported to have caused the incident.15
A survey of insurance industry professionals by U.S.‑based AI content detection and forensic company Attestiv found that while 80% of those surveyed were concerned about the potential for fraudulent images being used in insurance transactions, only 20% had taken any action against digitally altered images.16 The study identified the following trends:17
- The insurance industry has been increasingly embracing self-service and a high degree of automation for transactions such as claims.
- The reliance on media such as digital photos to settle insurance claims has doubled since the COVID-19 pandemic.
The same 80% of respondents “indicated concern for altered or tampered digital media used for insurance transactions, such as claims”, and were at least “moderately concerned” about the following media issues:18
- Altered photos that falsely inflate claims
- Misleading photos hiding information
- Old photos being used in new claims
- Photos of the wrong asset
- Images from the internet used in a claim
- Low-quality photos obscuring important details
While the insurance sector is rapidly increasing its use of automated services, fraud detection is not keeping pace with the speed of adoption. The Attestiv results suggested that more than 50% of insurance professionals planned to automate services within a year, while a mere 22% were using any form of validation or fraud prevention for digital media. This is an alarming gap, one that fraudsters could exploit.
Unfortunately, while fraud detection tools targeted against shallowfakes and deepfakes do exist, insurance companies lack personnel with the AI expertise to assess, implement, and maintain them. Further, even if such staff can be found, the costs can initially seem prohibitive to insurers operating in cyclical markets, often on slender margins, especially if the repercussions of shallowfake and deepfake fraud are not sufficiently appreciated.
Insurance Covers Potentially Exposed
To what extent are the risks from shallowfakes and deepfakes (as well as other exposures arising from AI) – whether created nefariously or legitimately in the normal course of business – insurable or, indeed, silently insured? Here’s a basic review of some insurance covers that can provide either silent or explicit AI cover.
Explicit AI Cover – We are starting to see some early development of new, explicit AI covers, especially in the U.S. Some providers have launched products for AI users that cover losses when an AI model does not deliver as promised. For example, if a bank were to replace some of its property valuers with an AI model for loan assessments, and the model made an error that a human might not have made, the policy may be triggered.
A Canadian start‑up insurer has launched a product providing a product warranty that AI models will work the way their sellers promise.19
In California, a cyber insurer has added an AI coverage endorsement to their cyber insurance policies.20
Although this is a nascent category, the explicit AI cover market may well grow as demand increases.
Silent AI Cover – AI exposures may be “silently” covered where policies don’t have adequate exclusions. While AI exclusions are not yet widespread, the risks are increasing, and underwriters should not leave such cover open. Proactive decisions should be made as to whether to price in or exclude such risks.
How this is approached will likely vary between different lines of insurance. In Directors & Officers (D&O) insurance, for example, company boards might insist on the absence of AI exclusions, given the cover’s importance in protecting the personal exposure and assets of directors. Whether insurers have the appetite for such wide cover, however, is another matter.
AI exposures may also lead to losses in conventional insurance lines, such as through crime, regulatory action, third-party compensation claims, or simple property damage claims. Exposures from the use or misuse of AI might be found in any of the following: