AI Transparency & Responsible Use Policy

This policy defines how Artificial Intelligence (AI) should be used responsibly and transparently in the Dods Political Intelligence platform. It sets clear expectations to ensure fairness, accountability, and compliance with applicable laws. The policy aims to promote trust and reduce risks in the use of AI systems within the platform.

1. Why We Use AI 

Dods employs artificial intelligence to help users easily find, interpret, and track parliamentary and policy-related information. 

Our AI features are designed to: 

  • Save time;
  • Improve access to information; and
  • Support research and analysis. 

AI on the Dods platform is designed to be assistive and informational only; it does not make decisions about individuals or take actions that have legal consequences or affect people’s rights or obligations. 

2. Where AI Is Used 

AI on the Dods platform is clearly identified in specific features such as: 

  • AI-powered question answering and conversational search. 
  • Automated transcription of parliamentary sessions. 
  • Content tagging and classification using our internal taxonomy. 
  • Real-time alerts and automatically generated briefings. 
  • Short summaries of articles, reports, and parliamentary activity. 
  • Automatically updated profiles of Members of Parliament.  

You only interact with AI when you choose to use these features. 

3. Transparency for Users 

We believe users should always be aware when AI is being used. Therefore: 

  • AI features are clearly identified and labelled throughout the platform. 
  • AI-generated content is not presented as human-authored editorial content. 
  • AI is never applied silently or by default.  

This ensures you can make informed decisions about when and how to use AI-generated information. 

4. What Our AI Does Not Do 

Our AI does not: 

  • Make decisions about individuals. 
  • Produce results with legal, financial, or similarly significant impacts. 
  • Replace human judgement. 
  • Profile, score, or rank people. 
  • Identify individuals using biometric data.  

All AI outputs are intended to support understanding, not to determine outcomes. 

5. Human Oversight 

Humans remain in control of AI at every stage. Human oversight includes: 

  • Selecting and approving AI models before they are used. 
  • Designing prompts, instructions, and operational boundaries. 
  • Monitoring outputs for quality, accuracy, and neutrality. 
  • Testing systems before updates are deployed.  

AI is designed to support human workflows and is always subject to review, particularly in sensitive or high-impact contexts. 

6. Data Use & Privacy 

What data is processed 

To power the AI features, we process a limited set of information, including: 

  • Pseudonymised user identifiers. 
  • User questions and interaction events. 
  • Dods-owned content, parliamentary transcripts, and related metadata.  

We do not knowingly process sensitive personal data. 

How data is used 

  • User data is not used to train or improve underlying AI foundation models. 
  • Data is processed solely to deliver AI-powered features and to monitor system performance.  

Personal data is minimised, safeguarded, and managed in accordance with our privacy practices, and is handled only for its intended purposes. 

Any processing of personal data complies with applicable data protection laws, including the General Data Protection Regulation (GDPR). 

7. Cloud Infrastructure 

The AI systems run on secure cloud infrastructure provided by Amazon Web Services (AWS). 

  • AWS supplies the hosting and computing resources only; it does not govern how the AI behaves, nor does it reuse the data processed on our behalf. 
  • Appropriate contractual and security safeguards are in place to protect the data. 

All AI configuration, behaviour, and governance remain under Dods’s control. 

8. Accuracy, Fairness & Safety 

We take proactive steps to reduce the risk of inaccurate or inappropriate AI outputs by: 

  • Restricting AI inputs to Dods-owned and approved internal sources. 
  • Using structured taxonomy controls to help guide generated content. 
  • Continuously monitoring AI performance and outputs. 
  • Reviewing user feedback to identify and correct issues.  

The AI outputs are designed to be neutral, factual, and non-partisan. 

9. User Choice & Feedback 

  • Using AI features is optional. 
  • You can navigate and use the platform without engaging with AI-powered tools. 
  • Feedback mechanisms are available to report concerns or inaccuracies.  

User feedback helps us improve quality and reliability. 

10. Governance & Accountability 

We have clear responsibility and oversight for AI on the platform: 

  • Dods retains ownership and control of all platform content and user data. 
  • AI systems are developed and operated in line with defined ethical, security, and governance standards. 
  • Systems are regularly reviewed and updated under human supervision.  

This ensures AI is used in a responsible and consistent manner. 

11. Future Changes 

If we make significant updates that affect how the AI operates or how data is used, we will: 

  • Update this policy. 
  • Implement appropriate safeguards. 
  • Clearly inform users about any relevant changes to ensure transparency and understanding. 

12. Contact 

If you have questions about how AI is used on the Dods platform, please contact: dataprotection@dodsgroup.com.