LGPD Applied to AI
AI systems process personal data. That has legal consequences.
Models trained or operated on personal data, automation pipelines that process sensitive information, agents that access document repositories — all fall within LGPD's scope. Most organizations don't know where they're exposed.
What you'll get
- A map of which AI systems process personal data and under which legal basis
- Policies and technical controls aligned with LGPD
- Documentation adequate for data subjects, the ANPD, and contractual partners
What we deliver
- LGPD gap analysis focused on AI systems
- Drafting and review of DPIAs for AI projects
- Legal bases for using data in training and operation
- Retention, anonymization, and pseudonymization policies
- Review of AI vendor contracts (DPAs, sub-processors)
- DPO as a Service for organizations operating AI systems
Who it's for
Companies in any sector that build or contract AI systems handling sensitive personal data and need to ensure LGPD compliance.
AI Governance
AI without governance is a liability waiting to materialize.
Algorithmic bias, lack of explainability, missing responsible-use policies — the risks go beyond compliance. They affect reputation, critical business decisions, and the trust of clients and partners.
What you'll get
- An AI governance structure aligned with ISO 42001 and NIST AI RMF
- Documented and operational responsible-use policies
- Risk assessment before new systems go into production
What we deliver
- ISO 42001 and NIST AI RMF gap analysis
- AI Management System (AIMS)
- Responsible and acceptable use policies
- Risk assessment for new systems (bias, explainability, security)
- LLM red teaming (prompt injection, jailbreaks, data leakage)
- Technical audit of systems in production
- Training for committees and leadership
Who it's for
Companies that operate or contract AI systems at scale — with the reputational, regulatory, or contractual risks that entails — and need a formal governance structure.