AI & Data Governance

Last Updated: February 1, 2026

Harmonity includes AI-enabled features (“Harmony AI”) to support contract drafting, review, extraction, and search. Because contract workflows are sensitive, we operate Harmony AI with a governance-first approach: permissioned access, traceable outputs, and clear boundaries on how data is used.

Questions? Contact support@harmonity.ai.

1. What “Controlled AI” means in Harmonity

Harmony AI is designed to behave like a governed workflow tool—not a black box:

  • Evidence-linked outputs: Where applicable, results point back to relevant source text so reviewers can verify.
  • No silent edits: AI suggestions are presented transparently before anything changes.
  • Permission-aware scope: AI follows the same access boundaries as users and documents.
  • Auditability: Key actions are attributable (who did what, when), supporting internal review and governance.

This page explains the approach at a high level. Contractual terms about AI use, responsibilities, and disclaimers are defined in the Terms of Service and (where applicable) your Order Form.

3. Data ownership and boundaries

You control your contract data

Your documents, clause libraries, metadata, and other content you provide or upload to Harmonity (“Customer Data”) remain yours.

We use data to provide the service

We process Customer Data to operate and deliver Harmonity (including running Harmony AI features when you request them), maintain security, prevent abuse, and provide support.

No training on customer data (core promise)

Harmonity does not use Customer Data to train general-purpose AI models for the benefit of other customers. This boundary is designed to protect confidentiality and customer trust.

We may still use data as needed to operate the service—e.g., to run requested AI features, provide support, and maintain security.

4. How AI inputs and outputs are handled

Inputs

When you use Harmony AI, the text or data you provide for processing (for example, a clause, contract excerpt, or instructions) is treated as Customer Data.

Outputs

AI-generated outputs (summaries, suggested edits, extracted fields, review findings, answers) are intended to be reviewable work products within your workspace. Outputs may not be unique and may resemble content produced for others.

Where processing happens

Depending on your configuration and the feature used, AI processing may involve trusted service providers. Details on subprocessors and data processing roles are provided in:

  • Trust Center → Subprocessors
  • Legal → Data Processing Addendum (DPA)
  • Trust Center → Security

5. Permission model and access controls for AI

Harmony AI is governed by the same access model as the rest of Harmonity:

  • Workspace and document roles determine who can view, edit, comment, or approve.
  • AI features cannot be used to access content a user does not have permission to view.
  • Admins may have additional controls to manage workspace-wide settings and access patterns.

If you need enterprise controls (e.g., feature-level toggles, stricter admin policies), request a security package.
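To illustrate the boundary described above, here is a minimal sketch of a permission-aware AI context filter. All names (`User`, `readable_docs`, `ai_context_for`) are hypothetical and do not reflect Harmonity's actual implementation; the point is only that an AI feature receives no document the requesting user cannot already view.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    id: str
    # Document IDs this user may view (hypothetical model; a real
    # deployment would resolve workspace roles and admin policies).
    readable_docs: frozenset

def ai_context_for(user: User, requested_doc_ids: list) -> list:
    """Return only the documents the caller may view.

    Documents outside the user's permissions are silently dropped,
    never summarized or paraphrased, so the AI feature's scope can
    never exceed the user's own access.
    """
    return [d for d in requested_doc_ids if d in user.readable_docs]
```

For example, a user permitted to read documents "a" and "b" who requests "a" and "c" would have only "a" passed to the AI feature.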

6. Auditability and traceability

For governance and review, we aim to make AI-assisted work attributable:

  • Key events are timestamped and associated with actors (user/workspace)
  • AI interactions are designed to be explainable in context (e.g., evidence-linked references where relevant)
  • Logs support investigations, governance review, and security monitoring

Exact logging details and retention can be provided as part of a security review.
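As a rough sketch of what "attributable" means in practice, the snippet below builds a timestamped audit record tying an AI action to an actor, a workspace, and optional evidence-linked references. The field names are illustrative assumptions, not Harmonity's actual log schema.

```python
import json
from datetime import datetime, timezone

def audit_event(actor: str, workspace: str, action: str,
                evidence_refs=None) -> str:
    """Serialize a timestamped, attributable record of an AI interaction.

    Hypothetical schema: every record answers "who did what, when",
    and evidence_refs lets a reviewer trace an output back to the
    source text it was derived from.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "workspace": workspace,
        "action": action,
        "evidence_refs": evidence_refs or [],
    }
    return json.dumps(record)
```

A record for a generated summary might then be `audit_event("user-7", "ws-1", "ai.summary.generated", ["doc-42#clause-3"])`, which a governance reviewer can filter by actor, workspace, or action.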

7. Data minimization and retention (high level)

We aim to process only what is needed for the requested function and to retain data consistent with:

  • Product functionality (e.g., storing documents and outputs for your workspace use)
  • Security and compliance needs (e.g., logs for monitoring and investigations)
  • Your contractual terms (Order Form / DPA) and legal obligations

Retention specifics for sales/support recordings and certain operational data are described in our Privacy Policy. Additional retention detail may be included in the DPA or security package.

8. High-risk and prohibited uses of AI features

To protect customers, end users, and the integrity of the platform, Harmony AI must not be used in ways that are unsafe, unlawful, or abusive.

A) High-risk decisions without appropriate safeguards

Do not rely on Harmony AI as the sole basis for decisions that could cause significant harm, including (without limitation):

  • Employment, termination, or workplace disciplinary decisions
  • Credit, insurance, or lending decisions
  • Healthcare diagnosis or treatment decisions
  • Legal determinations made without qualified human review
  • Safety-critical or emergency response decisions

B) Model training and competitive misuse

You may not:

  • Use the Service or outputs to train or improve a competing model or product
  • Extract data from Harmonity at scale to build a competitive dataset
  • Benchmark, probe, or test Harmony AI with the intent to replicate or circumvent its capabilities

C) Scraping, abuse, and security bypass

You may not:

  • Scrape or harvest the Service at scale (including automated extraction) except as expressly permitted
  • Bypass access controls, rate limits, or technical restrictions
  • Reverse engineer or attempt to derive source code, models, or underlying components
  • Upload malware or use the Service to disrupt systems or networks

D) Unlawful, harmful, or rights-violating content

You may not use Harmony AI to:

  • Process data you do not have the right to use or disclose
  • Infringe intellectual property, confidentiality, or privacy rights
  • Generate or distribute illegal content, fraud, or deceptive communications
  • Create outputs intended to impersonate others or mislead counterparties

Harmonity may investigate, limit, or suspend abusive use to protect customers and the platform.

9. Subprocessors and security documentation

We maintain a Trust Center with procurement-friendly materials, including:

Trust Center → Security
Trust Center → Subprocessors
Legal → Privacy Policy
Legal → Data Processing Addendum (DPA)

If you need a security package (and NDA where appropriate), use Request Security Package.

10. Change log

Date             | Change
February 1, 2026 | Initial publication