AI transparency statement

Last updated: 15 April 2026

This statement describes how FileGPT.dev uses artificial intelligence (“AI”), what users should expect, and, at a high level, how we approach transparency under the EU Artificial Intelligence Act (“EU AI Act”). It supplements our Privacy Policy and Terms of Service.

1. Purpose of the AI

The Service provides retrieval-augmented chat over documents you upload to your workspace. AI is used to (a) embed text for semantic search and retrieval, and (b) generate natural-language answers conditioned on retrieved excerpts from your content, with source references where configured. The AI does not replace professional judgment or legal, medical, or other regulated advice.

2. Capabilities and limitations

Answers are produced by generative models and may be incomplete, outdated, or incorrect. The system is designed to ground responses in retrieved passages, but it may still misinterpret excerpts or combine information inappropriately. You should treat outputs as drafts and verify important facts against your original documents; the interface may surface citations to support verification.

The assistant is instructed to rely only on retrieved excerpts provided for a given turn and not to claim that it has read entire source files when only excerpts were supplied.

3. Data flow and minimization

To answer a question, the Service retrieves a limited set of relevant chunks from your indexed content and sends those excerpts, together with your conversational messages, to the configured model provider for inference and, as needed, embedding operations. For routine chat we do not send full document archives to the model; processing is scoped to what is needed for search, retrieval, and response generation.
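For illustration, the scoped data flow described above can be sketched as follows. This is a minimal, hypothetical example, not our actual implementation: the function names (`score`, `build_request`), the word-overlap scoring (a stand-in for embedding-based similarity), and the request shape are all assumptions made for the sketch.

```python
# Illustrative sketch of scoped retrieval: only the top-k most relevant
# excerpts -- never the full document set -- are included in the request
# sent to the model provider. All names here are hypothetical.

def score(query: str, chunk: str) -> int:
    # Toy relevance score: count of shared words (a stand-in for
    # embedding-based semantic similarity).
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def build_request(query: str, chunks: list[str], k: int = 2) -> dict:
    # Select only the k highest-scoring excerpts for this turn.
    top = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
    return {
        "messages": [
            {"role": "system",
             "content": "Answer only from the excerpts provided:\n"
                        + "\n---\n".join(top)},
            {"role": "user", "content": query},
        ]
    }

chunks = [
    "The warranty period is 24 months from delivery.",
    "Invoices are payable within 30 days.",
    "The warranty excludes damage caused by misuse.",
]
req = build_request("How long is the warranty period?", chunks)
# Only the two warranty-related excerpts reach the provider;
# the unrelated invoicing chunk is never transmitted.
```

The point of the sketch is the scoping step: the request contains the selected excerpts and the conversational messages, nothing more.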

Technical and organizational measures depend on your deployment and infrastructure; contact us for more detail on subprocessors relevant to your account.

4. Human oversight and your responsibility

You remain responsible for how you use outputs, for compliance with laws applicable to your organization, and for decisions taken on the basis of AI-generated text. You should apply human review where stakes are high (e.g. legal, safety, or eligibility decisions).

5. EU AI Act: roles and high-risk context

Whether a use case is “high-risk” under the EU AI Act depends on the nature of the deployment (for example, the scenarios referenced in Annex III). FileGPT.dev may be used in many contexts. Customers who integrate the Service into their own products or workflows must assess their own obligations as deployers or operators, including whether a high-risk system is involved and which conformity and governance duties apply. We assess our own position as a provider of the Service as offered by us, and can discuss contractual or informational needs via the contact below.

6. General-purpose AI (GPAI) models

Underlying language and embedding models may qualify as general-purpose AI under the EU AI Act. We rely on commercially available models and providers and do not claim independent certification on their behalf. Downstream deployers (e.g. enterprises embedding our Service) may need provider documentation or contractual assurances for their own compliance programs. You may request such information via support@complianceradar.dev; we will provide what we reasonably can.

7. Updates

We may update this statement as the Service or legal requirements evolve. Material changes will be reflected in the “Last updated” date above and, where appropriate, communicated through in-product notices or email.