Inquira & EU AI Act

The EU AI Act aims to ensure trustworthy, safe, and transparent AI systems across Europe. As a Conversational AI solution provider, Inquira Health delivers AI-driven administrative tools for healthcare organizations. Below, we outline Inquira's understanding of our role, the responsibilities of our customers, and our commitment to EU AI Act conformance.

Understanding Our Roles

Inquira Health acts as the AI Provider, developing and maintaining the core platform, while our healthcare customers serve as Deployers, configuring and using the system within their own environments.

Note: When Inquira Health provides custom consulting services, we act as a solution architect or integrator for the final product. However, the customer (hospital, clinic, or EHR partner) always retains the role of "Deployer" under the EU AI Act, ensuring compliance within their environment.

Risk Classification & Use Cases

Inquira's standard use cases fall under the "Limited Risk" classification of the EU AI Act. However, how you configure and use our platform may change the applicable risk classification and compliance requirements.

Tools & Controls Ensuring Compliance

Transparency Obligations

  • Call Transcripts: We log conversation transcripts (voice or chat), enabling organizations to review the AI's outputs for accuracy and compliance.
  • Data-to-Conversation Link: Extracted information links back to the original transcript, allowing easy audits.
  • Prompt Control: Our workflow engine and user-defined prompts constrain the LLM, ensuring the AI stays within intended administrative tasks.
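To illustrate the data-to-conversation link described above, the sketch below shows one way a deployer's audit tooling might resolve an extracted data point back to the transcript turn it came from. The record names and fields here are purely illustrative assumptions, not Inquira's actual schema.

```python
from dataclasses import dataclass

@dataclass
class TranscriptTurn:
    """One utterance in a logged conversation (illustrative shape)."""
    turn_id: str
    speaker: str  # e.g. "agent" or "caller"
    text: str

@dataclass
class ExtractedField:
    """A data point the AI extracted, with a back-reference to its source turn."""
    name: str            # e.g. "appointment_date"
    value: str
    source_turn_id: str  # links back to the originating TranscriptTurn

def audit_trail(fields, transcript):
    """Pair each extracted field with the exact utterance it was derived from,
    so a reviewer can verify the AI's output against the original conversation."""
    turns = {turn.turn_id: turn for turn in transcript}
    return [
        (field.name, field.value, turns[field.source_turn_id].text)
        for field in fields
    ]
```

A reviewer can then walk the returned triples to check every extracted value against its source utterance during an audit.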

Safety Measures

  • Azure OpenAI Content Filtering: We leverage Microsoft's content filtering to block or flag harmful or out-of-scope content.
  • Moderation & Oversight: Additional moderation tooling from Azure OpenAI helps reduce malicious or disallowed content.
  • Emergency Shutdown: The system can be halted immediately to mitigate risk whenever an issue is detected.
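As a sketch of how the content-filtering signal above can be consumed, the helper below inspects a chat-completion choice for flagged categories. It assumes the response shape documented for Azure OpenAI, where each choice carries a `content_filter_results` object with per-category verdicts such as `{"filtered": true, "severity": "medium"}`; this is not Inquira-specific code.

```python
def flagged_categories(choice: dict) -> list:
    """Return the content-filter categories Azure OpenAI flagged for one
    chat-completion choice.

    Assumes the documented Azure OpenAI response shape: the choice dict
    holds a `content_filter_results` mapping of category name (e.g.
    "hate", "violence") to a verdict like {"filtered": bool, "severity": str}.
    """
    results = choice.get("content_filter_results", {})
    return sorted(
        category
        for category, verdict in results.items()
        if isinstance(verdict, dict) and verdict.get("filtered")
    )
```

A deployer's moderation pipeline could route any non-empty result to human review, or trigger the emergency-shutdown path for severe cases.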

Together, these safeguards help maintain a consistent, traceable AI environment that meets EU AI Act expectations.

Recommended Best Practices

Based on our reading of the EU AI Act and general data protection regulations, we encourage customers and end-users to:

Transparency Obligations

  • Announce the AI Agent: Let participants know they're interacting with a virtual assistant to maintain transparency.
  • Obtain Consent for Recording: Always disclose if calls or chats are recorded or transcribed for QA or compliance.

Safety & Compliance

  • Restrict Sensitive Use-Cases: Do not use the system for clinical decision-making; keep a human in the loop for anything resembling medical advice.
  • Document Internal Policies: Keep records of how data is collected, stored, and used for AI-based interactions.

EU AI Act Compliance Checklist

Use this checklist to assess your compliance with the EU AI Act when using Inquira Health:

  • Transparency Obligations: Clearly inform users they are interacting with an AI system
  • Data Protection: Ensure all data processing complies with GDPR requirements
  • Human Oversight: Maintain human supervision of AI operations
  • Documentation: Keep records of AI system usage and configurations
  • Risk Assessment: Conduct a risk assessment if using the system for clinical purposes

Conclusion

Inquira Health remains committed to providing an AI-driven platform that meets EU AI Act standards for Limited Risk systems. Whether we're simply providing the core product or assisting with workflow design, our goal is to help you maintain safe, efficient, and legally compliant Conversational AI solutions. If you have further questions or need clarifications, please reach out to us at support@inquira.health.

Want to understand more about how we secure and protect your data?

Visit our Trust Center for comprehensive information about our security measures, data protection practices, and compliance frameworks.