Algorithmic recommendation transparency program
algorithmic-transparency-program · Domain: ai-transparency · Type: mixed

Description
Algorithmic transparency obligations have converged on a similar shape across DSA Article 27, China's Internet Information Service Algorithmic Recommendation Management Provisions, and the emerging US state AI rules: when a platform uses algorithmic ranking or recommendation, users must be told so, must get a plain-language description of the main parameters, must have an opt-out or non-personalized alternative, and, where the audience includes minors, must get enhanced safeguards turned on by default.

Implementation has four pieces: the terms-of-service disclosure that names the systems in use; the main-parameters explanation in the help center or settings; the non-personalized mode plumbing, usually the harder-than-it-looks part because most product surfaces carry implicit personalization that has to be switched off; and the auditing layer that watches outputs for discriminatory pricing or content patterns. Operators commonly under-budget the auditing piece: regulators have begun asking for evidence of monitoring rather than just policy, and "we have a written policy" has stopped being a sufficient answer.

The minor-safeguard layer (downgraded personalization, anti-addiction defaults, opt-out by default) typically sits on top of the age-determination control rather than running independently.
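The reason the non-personalized mode is harder than it looks is that opt-out has to reach every surface that consumes profiling signals, not just the main feed. One way to make that tractable is a single choke point that strips profiling-derived signals before any ranker sees them. A minimal Python sketch; all names (`RankingContext`, `load_full_context`, the specific signal fields) are hypothetical, not any platform's actual API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RankingContext:
    """Signals available to rankers. Deciding which fields count as
    'profiling' (and must go on opt-out) is the key product decision."""
    user_id: str
    locale: str = "en-US"  # coarse, non-profiling signal: kept in both modes
    watch_history: List[str] = field(default_factory=list)       # profiling-derived
    inferred_interests: List[str] = field(default_factory=list)  # profiling-derived

def load_full_context(user_id: str) -> RankingContext:
    # Stand-in for the real signal store (hypothetical fixture data).
    return RankingContext(user_id,
                          watch_history=["v1", "v2"],
                          inferred_interests=["cooking"])

def build_ranking_context(user_id: str, opted_out: bool) -> RankingContext:
    """Single choke point that feed, search suggestions, autoplay, and
    notification ranking all read from, so one opt-out strips implicit
    personalization on every surface at once."""
    ctx = load_full_context(user_id)
    if opted_out:
        ctx.watch_history = []
        ctx.inferred_interests = []
    return ctx
```

The design choice worth copying is the choke point itself: a surface that loads signals directly from the store can silently keep personalizing after the user opts out, which is exactly the gap auditors look for.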
Applicability
Applies when: features include ai-recommendations, algo-feeds, automated-decisions, or ai-content-gen.
Required by (4 regulations)
- Algorithm Provisions
Internet Information Service Algorithmic Recommendation Management Provisions, Articles 16-21: disclosure, user opt-out, a ban on discriminatory pricing, content labeling, anti-addiction design, and minor protections.
Provisions on the Management of Algorithmic Recommendations in Internet Information Services (jointly issued by CAC, MIIT, MPS, and SAMR; effective March 1, 2022)
- Minors Online Protection
Regulations on the Protection of Minors in Cyberspace (promulgated by the State Council, Order No. 766, effective January 1, 2024)
- Colorado AI Act
C.R.S. §§6-1-1701 to 6-1-1706 (SB 24-205)
- DSA
DSA Article 27 recommender-system parameter disclosure + Article 38 VLOP non-profiling alternative.
Regulation (EU) 2022/2065 of the European Parliament and of the Council (Digital Services Act)
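The minor-protection obligations above (opt-out by default, anti-addiction defaults) are commonly implemented as a defaults table keyed off the age-determination result. A hedged sketch; the field names and the concrete values (quiet hours, reminder interval) are illustrative choices, not numbers prescribed by any of these regulations:

```python
from typing import Optional

def recommendation_defaults(verified_age: Optional[int]) -> dict:
    """Defaults layered on top of the age-determination control.
    Unknown age is treated conservatively (as a minor) on surfaces
    likely to reach minors."""
    minor = verified_age is None or verified_age < 18
    return {
        "personalized_feed": not minor,   # minors: opted out by default
        "autoplay": not minor,            # anti-addiction default
        "nightly_quiet_hours": minor,     # suppress overnight pushes
        "usage_reminder_minutes": 40 if minor else 0,  # 0 = disabled
    }
```

Keeping these as defaults derived from one input (the age signal), rather than scattered per-surface flags, also produces the single configuration document that the evidence-format list below asks for.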
Fulfilled by (8)
- credo-ai · partial · medium effort · $$$ · Credo AI Governance Platform documents ranking models + audits for fairness + supports DSA Article 27 disclosure scaffolding.
- fiddler-ai · partial · medium effort · $$$ · Fiddler AI Observability for algorithmic auditability + bias monitoring on recommendation models.
- holistic-ai · partial · medium effort · $$$ · Holistic AI auditing platform covers algorithmic risk + DSA + EU AI Act + Colorado AI Act assessments.
- babl-ai · partial · medium effort · $$$ · BABL AI independent algorithm audits per NYC LL144 + emerging EU + state requirements.
- eticas · partial · medium effort · $$ · Eticas AI auditing focused on ranking + classifier fairness + bias remediation.
- In-house build · high effort · ML team owns model documentation + non-personalized alternative implementation + minor-account safeguards + ongoing fairness telemetry.
- onetrust-dpia-automation · partial · medium effort · $$$ · Automates DPIA / AI-impact workflows; outputs feed transparency reporting.
- trustarc-assessment-manager · partial · medium effort · $$$ · AI impact assessment templates aligned to EU AI Act / Colorado AI Act.
ClearLaunch does not accept payment from vendors.
Evidence formats
- ToS section disclosing algorithmic-recommendation use + main parameters
- non-personalized alternative UI surface (toggle / option)
- algorithmic-content labeling SOP + sample labels
- discrimination / fairness audit reports
- minor-account algorithmic-protection configuration documentation
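For the fairness-audit evidence, the shift is from policy text to monitoring output: something that runs against production samples and produces a reviewable artifact. One narrow but concrete monitor is a price-parity check over logged offers for the same item, which speaks directly to the discriminatory-pricing prohibition. A sketch; the cohort labels, the sampling pipeline, and the 5% tolerance are all assumptions to be replaced by your own audit methodology:

```python
from collections import defaultdict
from statistics import mean
from typing import Iterable, Tuple

AUDIT_TOLERANCE = 0.05  # illustrative: >5% spread between cohorts triggers review

def price_parity_check(offers: Iterable[Tuple[str, float]]) -> Tuple[float, bool]:
    """offers: (cohort, quoted_price) pairs for the SAME item, sampled
    from production traffic. Returns the relative spread between the
    cheapest and most expensive cohort mean, and a review flag."""
    by_cohort: dict = defaultdict(list)
    for cohort, price in offers:
        by_cohort[cohort].append(price)
    means = [mean(prices) for prices in by_cohort.values()]
    lo, hi = min(means), max(means)
    spread = (hi - lo) / hi if hi else 0.0
    return spread, spread > AUDIT_TOLERANCE
```

Run on a schedule and archived, each (spread, flag) record doubles as the "discrimination / fairness audit report" evidence item above; a written policy alone no longer carries that weight.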