AI image generation has raised legal, ethical, and practical questions that the law is still figuring out. Can you sell images generated by Midjourney? Can someone generate images in the style of a living artist without their consent? What happens when AI is trained on copyrighted images? Can you copyright an AI-generated image yourself? The answers vary by jurisdiction, are being reshaped by ongoing lawsuits, and matter enormously for anyone using AI images commercially. This guide covers the state of AI-image law and ethics in 2026, the cases that have shaped current practice, what the major model providers have committed to, and the practical rules for using AI images safely in business contexts.
Training data and the fair-use question
The foundational legal question: is training an AI on copyrighted images a legal use?
Model providers argue that training constitutes transformative fair use. The model does not reproduce the training images; it learns statistical patterns. The output is not a copy of any specific training image.
Rights-holders argue that wholesale ingestion of copyrighted works without permission is straightforward infringement, regardless of whether the output is verbatim. Getty Images, other stock photo agencies, visual artists, and publishers have filed suits on this basis.
As of 2026, the legal landscape is unsettled. Some cases have been dismissed on fair-use grounds; others are ongoing. The most important decisions — large precedent-setting cases from major rights-holders — are still working through courts. Final resolution is likely to take years.
Meanwhile, some providers have changed their training approach. Adobe Firefly trains only on Adobe Stock content and public-domain imagery, avoiding the training-data controversy entirely. Others have introduced opt-out mechanisms for creators who do not want their work used for training.
Output copyright: who owns an AI-generated image?
The second major legal question: who owns the copyright to an image generated by AI?
In the United States, the Copyright Office has held that fully AI-generated works are not copyrightable because they lack human authorship. Works that involve meaningful human authorship (significant creative direction, editing, composition choices) may be eligible, but the purely machine-generated elements are public domain.
Other jurisdictions have taken different approaches. Some grant limited rights to the human prompter or the service operator. The UK has provisions for computer-generated works that assign authorship to "the person by whom the arrangements necessary for the creation of the work are undertaken."
Practical implications. For AI-generated images that will be commercially important, the legal protection may be weaker than for equivalent human-created images. Competitors could potentially copy your AI-generated output without infringement.
For work that relies on copyright protection (stock photography, branded imagery), this matters. For internal use or work where copyright is not the main protection, less so.
Style mimicry and living artists
A particularly thorny area: generating images "in the style of" a specific artist, especially a living one.
Style itself is not copyrightable. You cannot copyright "a general impressionist aesthetic" or "the feel of Wes Anderson's films." So generating images in a generic artistic style is generally safe.
But generating in the specific style of a named living artist raises ethical concerns even where it is legal. Artists have objected strongly to AI producing output "in the style of [Artist Name]" when it competes with their own work or dilutes their brand.
Several providers have added filters against generating images in the explicit style of living artists. Some platforms let artists opt out of style reproduction. The community has broadly shifted toward citing historical movements and deceased artists rather than specific living ones.
Practical rule: generating in the style of a living artist by name is legally uncertain and ethically fraught. Use historical movements, deceased artists, or describe the style without naming living creators.
Real people and likeness rights
Generating images of real people — whether deliberately or by accident — raises distinct legal issues.
Public figures: limited likeness rights. You can generate images of public figures for editorial, satirical, or commentary purposes in many jurisdictions, subject to defamation limits. Commercial use without permission is much more restricted.
Private individuals: strong rights. Generating identifiable images of private individuals without consent raises serious legal issues, including defamation, misuse of likeness, and in some cases criminal liability (particularly for sexualised content).
Deepfakes: regulated. Many jurisdictions have passed specific laws about AI-generated content depicting real people, particularly sexualised deepfakes and political misinformation. Penalties include significant fines and criminal charges.
Practical rules: generating identifiable images of real people requires consent. For public figures, commercial use requires additional care. For private individuals, assume you need explicit consent for any use that could be recognisable.
Watermarking and provenance
Technical measures for identifying AI-generated content are becoming more prominent.
C2PA (Coalition for Content Provenance and Authenticity). An industry standard for embedding tamper-resistant metadata about how an image was created. Major AI providers (OpenAI, Microsoft, Adobe, Google) support C2PA signing. Platforms (social media, stock photo sites) are beginning to require or display provenance information.
Visible watermarks. Some providers add visible "generated with [tool]" marks. Easy to remove; weak protection.
Invisible watermarks. Imperceptible patterns embedded in the image that identify the generating model. More robust than visible watermarks but still subject to removal via re-processing.
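A toy sketch of the idea, using least-significant-bit embedding on raw pixel values (plain Python ints, not a real image format, and not any production watermarking scheme): the mark survives a plain copy, but even crude re-processing wipes it out.

```python
# Illustrative only: real invisible watermarks in AI images are far more
# robust than naive LSB embedding, but the failure mode is the same in
# spirit -- re-processing can destroy the signal.

def embed(pixels: list[int], bits: list[int]) -> list[int]:
    """Overwrite the lowest bit of the first len(bits) pixels with the mark."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels: list[int], n: int) -> list[int]:
    """Read the lowest bit of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 201, 198, 197, 203, 202]
mark = [1, 0, 1, 1]
stamped = embed(pixels, mark)

assert extract(stamped, 4) == mark        # survives an exact copy

rescaled = [p // 2 * 2 for p in stamped]  # crude "re-processing" step
assert extract(rescaled, 4) != mark       # the embedded bits are gone
```

This is why provenance standards like C2PA pair watermarking with signed metadata rather than relying on embedded bits alone.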
For commercial use, check whether your intended channel requires provenance. Stock photo sites increasingly do. Social platforms are moving toward requiring disclosure of AI origin for certain content types.
Commercial-use rules by major provider
A summary of what commercial use is permitted for each major model provider as of 2026. Terms change; always verify current terms.
OpenAI (DALL-E): commercial rights granted to users of paid tiers, with standard content-policy restrictions.
Midjourney: commercial rights for subscribers at Basic tier and above, with some restrictions for the largest companies. Free-tier generations have more restrictions.
Stability AI (Stable Diffusion): depends on specific version and licence. Most recent versions allow commercial use but with conditions on large-scale commercial deployment.
Black Forest Labs (Flux): Flux Pro via hosted API allows commercial use. Flux Dev and Schnell are open-weight with specific commercial-use terms that require reading carefully.
Google (Imagen): commercial rights via paid tiers, with Google's acceptable-use policies.
Adobe Firefly: trained exclusively on Adobe Stock and public-domain content; commercial use is explicitly safe and Adobe provides indemnification for enterprise customers.
For commercial projects, Adobe Firefly's indemnification clause is uniquely valuable — if a rights-holder sues because of your generated image, Adobe assumes liability.
The Copyright Office guidance on AI works
The US Copyright Office has issued multiple rounds of guidance on AI-generated works. Its current position, as of 2026:
Purely machine-generated works are not copyrightable. The office has consistently held that human authorship is required for copyright protection.
Works with meaningful human creative contribution may be copyrightable. The specific copyrightable elements depend on the degree and nature of human involvement — selecting prompts, editing outputs, compositing, adding human-authored elements all contribute to the argument for authorship.
Registration requires disclosure. When registering a work that includes AI-generated elements, applicants must disclose the AI use and identify the AI-generated and human-authored portions.
This guidance affects how AI-assisted creative work should be documented. For works you want to copyright, keeping records of human creative decisions — prompts, selections, edits — matters. Workflows that blend AI generation with substantial human editing produce the strongest copyright claims.
International variation
AI-image law differs significantly by jurisdiction.
European Union. The EU AI Act imposes transparency requirements on AI-generated content, particularly for commercial and political use. Labelling and provenance are increasingly required. Copyright law varies by member state but, like the US, generally requires human authorship.
United Kingdom. The UK has specific provisions for computer-generated works, granting limited protection that attributes authorship to "the person by whom the arrangements necessary for the creation of the work are undertaken." This is more generous than the US position.
China. Specific rules about generative AI require registration and labelling of certain AI content. Copyright protection for AI-generated works is mixed and being actively litigated.
Japan. Copyright law was adjusted in 2018 to specifically permit use of copyrighted works for AI training, making training more legally clear there than in some other jurisdictions.
For international commercial work, the rules of the most restrictive jurisdiction you operate in typically govern. Consult jurisdiction-specific legal advice for global deployments.
Industry-specific considerations
Different industries have different AI-image considerations.
Advertising and marketing. Brand guidelines increasingly require labelling of AI-generated imagery for transparency. Some major brands have policies against AI-generated imagery in certain campaigns; others embrace it. Know your company's stance.
Journalism. Most reputable outlets prohibit AI-generated imagery from being passed off as photography. Use in illustrative or labelled contexts may be acceptable; check the outlet's policy.
Stock photography. Some stock sites accept AI-generated imagery (with labelling); others reject it entirely. Check the specific platform's policy before submitting.
Book publishing. Author contracts increasingly address AI-generated content. Some publishers prohibit AI-generated illustrations without disclosure; others allow it. Retainers and royalty structures may differ.
Fashion and beauty. Regulation is emerging around AI-generated imagery of models, particularly in advertising. Some jurisdictions require disclosure when models shown are AI-generated.
Deepfakes and synthetic media
The extreme case: deepfakes and other synthetic media raise their own, sharper legal issues.
Sexualised deepfakes of real people are illegal in most jurisdictions. Specific laws target both creation and distribution.
Political deepfakes are increasingly regulated, particularly around elections. Many jurisdictions require labelling of, or prohibit outright, certain categories of political synthetic media.
Identity fraud via deepfakes is prosecuted under existing fraud and identity-theft laws. Penalties are significant.
For legitimate use cases (film production, consenting performances, educational content), proper consent and contracting protect against legal risk. Assume that any use appearing to show a person who did not consent is illegal unless you have verified otherwise.
The "AI image" label on platforms
Social media platforms have increasingly introduced AI-origin labelling, with meaningful implications for how AI-generated content performs and is perceived.
Meta (Facebook, Instagram, Threads) labels AI-generated images automatically when it detects them via C2PA or platform-level detection. The label is visible to viewers.
TikTok requires creators to disclose AI-generated content via an in-app tag. Content without disclosure that is detected as AI may be down-ranked.
X (Twitter) has community notes and manual labelling but less automated disclosure than other platforms.
YouTube requires disclosure of altered or synthetic content that could be mistaken for real, via its upload form. Enforcement is uneven but improving.
For creators, respecting these disclosure requirements is both ethical and pragmatic — undisclosed AI content can be down-ranked, removed, or reputation-damaging.
Reputation considerations beyond law
Beyond legal compliance, AI image use affects reputation in ways worth considering.
Brands caught using undisclosed AI images in advertising have faced backlash from consumers who felt misled. Clear disclosure typically fares better than undisclosed use that is later discovered.
Creative industries have significant anti-AI sentiment in some quarters. Publishing prominent AI-generated work in art-critical contexts (literary magazines, art journals, film festivals) can attract criticism regardless of legal status.
Internal reputation matters too. Companies that use AI to replace human creatives without transparency can demoralise remaining teams and lose institutional knowledge.
The pragmatic stance: be transparent about AI use, engage thoughtfully with concerns from creative communities, and respect both the capability and the cultural weight of traditional creative work.
Practical rules for safe commercial use
A distilled set of rules for AI-image use in business contexts.
Use a provider whose commercial terms cover your intended use. Read the current terms; they change.
Prefer providers with indemnification (notably Adobe Firefly) for high-stakes commercial work where training-data lawsuits are a concern.
Do not generate identifiable images of real people without consent.
Do not generate "in the style of" living artists commercially without permission.
Include AI disclosure where required by platform or industry policy.
Preserve C2PA provenance metadata where possible; do not strip it from commercial assets.
Document which model generated each image you ship. If licensing disputes arise, you want the trail.
Consult a lawyer for anything novel, high-stakes, or involving real identities. Blog posts (including this one) are not legal advice.
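The record-keeping rule above can be sketched as a sidecar JSON file written next to each shipped asset. The field names and file-naming convention here are illustrative choices, not any standard:

```python
# Hypothetical provenance sidecar: one JSON record per shipped asset,
# capturing which model produced it, the prompt, and a content hash so
# the record can later be matched to the exact file.
import datetime
import hashlib
import json
from pathlib import Path

def record_provenance(asset: Path, model: str, prompt: str) -> Path:
    """Write <asset>.provenance.json next to the asset and return its path."""
    entry = {
        "asset": asset.name,
        "sha256": hashlib.sha256(asset.read_bytes()).hexdigest(),
        "model": model,           # e.g. provider and model version
        "prompt": prompt,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    sidecar = asset.with_name(asset.name + ".provenance.json")
    sidecar.write_text(json.dumps(entry, indent=2))
    return sidecar
```

If a licensing dispute arises, the hash ties the record to the shipped file even after renames, and the model field answers the first question any lawyer will ask.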
What is likely to change
The legal landscape is evolving. Expectations for the next few years.
Major training-data lawsuits will be decided. The outcomes will shape commercial terms, industry practices, and training-data sourcing for years. Expect at least some victories for rights-holders, which will prompt broader changes in how AI image models are trained.
Labelling and disclosure requirements will proliferate. EU AI Act provisions are coming into force; other jurisdictions will follow. Expect more mandatory AI-origin labelling, especially for political and news content.
Opt-out mechanisms will mature. Expect more standardised ways for creators to indicate their work should not be used for AI training, and for platforms to respect those signals.
Provider liability will clarify. Who is responsible when AI generates infringing content — the platform, the user, or neither — will be more clearly defined through case law.
Indemnification offerings will expand. Adobe Firefly's model of provider indemnification is likely to become more common, particularly for enterprise AI image products.
Ethical considerations beyond law
Legal compliance is one thing; ethical practice is another. Beyond what is legally required, a few ethical considerations worth taking seriously.
Respect for creators. Even when style mimicry is legal, consider whether you are benefiting unfairly from the uncompensated work of specific creators.
Transparency with audiences. Even when disclosure is not required, informing your audience when images are AI-generated builds trust.
Labour considerations. AI image generation displaces some creative work. For small teams, consider when commissioning human creators is the right choice despite higher cost.
Bias and representation. AI image models inherit biases from training data. Generated imagery may under-represent certain groups or produce stereotyped depictions. Thoughtful curation of outputs is part of responsible and deliberate use.
Environmental cost. AI image generation uses meaningful energy. For trivial use cases, consider whether the generation is worth the emissions. Not a headline concern for most users but worth keeping in perspective.
Use AI images commercially only with a model whose licence explicitly allows it, and never generate identifiable real people without consent. Everything else is detail that will shift as case law evolves.
The short version
AI image ethics and copyright in 2026 are legally unsettled but manageable in practice for commercial users who do their due diligence. Training-data lawsuits are ongoing across multiple jurisdictions, and the legal ground is still shifting. For commercial use: pick providers with clear commercial-use terms, prefer explicit indemnification for high-stakes work, respect real people's likeness rights, be cautious about reproducing living artists' styles, and comply with platform and industry labelling requirements as they evolve. Keep up with the case law through 2026 and beyond, and consult actual lawyers for anything novel or high-stakes. With reasonable care, AI image generation is a legitimate and valuable commercial tool that saves cost and accelerates creative workflows; without care, it is a legal and reputational minefield. The difference is almost entirely whether you do the due diligence up front or try to shortcut it.