Summary – Between GDPR and Swiss FADP requirements, uncertain datacenter locations, and algorithmic opacity, feeding your internal data into AI tools exposes you to trade secret leaks and regulatory sanctions. To secure usage, structure a fine-grained data classification and prompt rules, verify hosting and ‘no training’ clauses, and institute cross-functional IT-legal-business governance to guarantee traceability and consent.
Solution: if necessary, deploy an open-source model on-premises or in Switzerland, backed by regular audits and solid contractual commitments.
In a context where AI is emerging as a strategic lever, many organizations are considering leveraging their own internal data to train intelligent models.
However, AI is more than just another tool: it fundamentally restructures the information processing chain, often outside your scope of control. Between legal obligations (GDPR in Europe, the Swiss Federal Act on Data Protection) and confidentiality concerns, naivety can be costly. This article unpacks the key areas of vigilance, highlights the main risks, and proposes a pragmatic roadmap to harness AI while managing its legal and operational implications.
How AI Disrupts Control Over Your Internal Data
Using an external AI service often means transmitting sensitive information to a third party. You then lose part of your direct control over the storage and use of your data.
Data Transmission to a Third Party
When you enter a prompt into a co-pilot or a SaaS platform, the text and any attached files leave your infrastructure. These contents may contain industrial secrets, customer data, or strategic information, often without the user’s full awareness. In the absence of clear guarantees on purpose, you expose your organization to unintended dissemination of its intangible assets.
Transferring data to a provider involves multiple technical layers: network, endpoints, decryption, retention. Each step represents a potentially vulnerable link, especially if the provider does not disclose its practices or hosts its servers in diverse jurisdictions. Your ability to audit these flows is then limited, and you have no guaranteed way to prevent unintended secondary usage.
Unregulated transmission can also breach confidentiality agreements or contractual clauses signed with partners. Without visibility into retention periods and deletion processes, you cannot demonstrate compliance with your own security commitments.
Foreign Hosting
Many consumer-grade or US-origin AI solutions do not guarantee data storage within Swiss or European territory. Information may transit through or be stored in the United States, China, or other regions without your full knowledge. You then subject yourself to extraterritorial laws (such as the US CLOUD Act) and local regulatory regimes whose impact can be significant for a Swiss company.
This international transfer raises digital sovereignty issues. How do you maintain control over strategic data when it is physically and legally outside Switzerland? Pseudonymization or encryption mechanisms can mitigate risks but do not guarantee straightforward traceability of the actual hosting location.
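To make the pseudonymization idea concrete, here is a minimal sketch in Python, under the assumption that a simple token substitution runs before any prompt leaves your infrastructure; the regular expression, token format, and in-memory mapping are illustrative choices, not a prescribed implementation.

```python
import re
import secrets

# Mapping kept in memory here for brevity; in production it would live in a
# secured internal store, never alongside the data sent to the provider.
_pseudonyms: dict[str, str] = {}

def pseudonymize(text: str) -> str:
    """Replace email addresses with opaque tokens before a prompt leaves
    the internal network. A real deployment would cover more identifier
    types (names, AHV numbers, IBANs, ...)."""
    def _replace(match: re.Match) -> str:
        email = match.group(0)
        if email not in _pseudonyms:
            _pseudonyms[email] = f"<person_{secrets.token_hex(4)}>"
        return _pseudonyms[email]
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", _replace, text)

def reidentify(text: str) -> str:
    """Restore original values in the AI response, inside your own perimeter."""
    for original, token in _pseudonyms.items():
        text = text.replace(token, original)
    return text

prompt = "Draft a reply to anna.keller@example.ch about her leave balance."
safe_prompt = pseudonymize(prompt)  # this is what leaves your infrastructure
print(safe_prompt)
print(reidentify(safe_prompt))
```

The key design point is that the mapping table never leaves your perimeter: even a provider storing prompts abroad only ever sees opaque tokens.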
If your organization must meet industry-specific requirements (banking, healthcare, defense), hosting outside the EU/EFTA may even be prohibited. Before sending your data, it is therefore crucial to verify the location of data centers, transfer agreements, and contractual guarantees offered by the AI provider.
Loss of Storage Control
With outsourced AI services, you no longer control the data lifecycle: retention periods, backup modalities, disaster recovery plans. The provider may retain logs, conversation traces, and models derived from your content without your knowledge.
This opacity complicates the implementation of internal procedures such as regular data purges, asset inventories, or security audits. You then depend on the provider’s reports, which may be partial or optimized for their commercial interests rather than your compliance requirements.
Finally, in the event of a dispute or a security breach at the provider, you are often forced to react after the fact, without a complete view of the potentially exposed data. The operational response becomes slower and more costly, and your reputation can suffer.
Protecting Personal Data Under GDPR and the Swiss Federal Act on Data Protection
Once any personal data passes through an AI tool, you fall within the scope of the GDPR and the Swiss Federal Act on Data Protection. Consent and purpose become difficult to guarantee without visibility into external processing.
Information Obligations and Purpose Specification
The GDPR and the Swiss Federal Act on Data Protection require informing data subjects about the processing performed, its purpose, and the data recipients. In a SaaS AI context, you must be able to precisely describe why each piece of data is sent and how it will be used. However, an AI provider is not always transparent about how it uses prompts to improve its algorithms.
Without detailed documentation, internal guidelines (privacy policy statements, subcontracting agreements) remain incomplete. Legal teams then resort to assumptions, which undermines the reliability of the information provided to employees and clients.
The absence of a clearly defined, limited purpose constitutes a non-compliance risk. In the event of an audit, you must demonstrate that you control the data lifecycle and respect the principles of data minimization and retention limitation.
Consent and Processing Territoriality
To be valid, consent must be free, informed, and specific. When data is processed by an AI provider whose servers are distributed across multiple countries, the scope of consent becomes unclear. Data subjects do not know to whom or where they are entrusting their personal information.
Moreover, consent can be withdrawn at any time. However, removing data from an already trained AI model is not always technically feasible. This practical impossibility can invalidate the initial consent and create a risk of sanctions in the event of a complaint or audit.
The solution lies in a precise mapping of data flows and a strengthened contractual clause with the provider, specifying data locations, deletion mechanisms, and guarantees against processing beyond the agreed purpose.
Concrete Example: HR Data and an AI Chatbot
A Swiss SME in the service sector wanted to deploy an internal chatbot to answer employees’ questions about payroll and leave. It fed the tool with excerpts from pay slips, email addresses, and attendance information. Without prior auditing, this data was sent to an AI service whose servers were located outside the EU.
This created legal ambiguity: employees had not been informed that a foreign third party would use their data, and the consent was not appropriate for AI processing. The IT department had to suspend the project, conduct a compliance audit, and rewrite the internal HR data protection policy.
This case highlights the importance of defining the purpose before any deployment, considering server locations, and obtaining explicit, AI-specific consent for personal data usage.
The Opacity of AI Models and Its Compliance Implications
Artificial intelligence models operate like black boxes: their internal processes and training procedures are rarely documented in detail. This opacity complicates the traceability and explainability required by regulation.
Algorithmic Black Box
Large Language Models (LLMs) rely on deep neural networks whose internal logic is difficult to interpret. You cannot explain to a user or regulator why a model provided a given response nor which parts of your internal data influenced that result.
This lack of explainability conflicts with the GDPR’s transparency principle, which requires providing “meaningful information about the logic involved” in automated processing. You thus expose yourself to claims of failing to uphold information rights.
Without visibility into the training stages, it is also impossible to guarantee that no inadvertent bias was introduced from your data. This lack of control increases operational and legal risk.
Risks of Data Reuse
Some AI providers incorporate the texts and documents they receive to improve their models’ performance. Sensitive information provided today can reappear tomorrow, reformulated in another user’s output. Your organization then loses control over the potential dissemination of its data.
This “collateral” reuse can be problematic if you have worked on a pricing strategy, exclusive design, or product development plan. An indirect leak or generation of derivative content can amount to a trade secret violation.
It is therefore essential to verify contractual terms for non-retention or “no training mode” before any intensive use of prompts containing sensitive data.
Concrete Example: Public Administration and Unintentional Data Leak
A department within a cantonal public administration used a public text-generation tool to draft responses to citizens. The model sometimes inadvertently reproduced excerpts from internal projects it had absorbed during training. These responses, posted on a public forum, revealed strategic information about regulatory developments.
This incident highlighted the inability to prevent data reuse at the provider level. The administration had to suspend the tool’s use and initiate a risk assessment with its legal and IT teams.
This case illustrates the necessity of preferring custom or internally hosted architectures for sensitive data to ensure stricter control and full traceability of data flows.
Implementing a Controlled and Compliant Strategy
To limit risks, adopt a structured approach combining governance, data classification, and technology choices. Joint involvement of legal, IT, and business teams is essential.
Classify and Frame Your Data
The first step is to clearly identify the categories of data handled: public, internal, confidential, or sensitive. This classification guides authorized processing and required protection levels. Without this mapping, best practices remain theoretical and employees risk sending any information to the AI.
A simple internal dashboard, regularly updated, visually defines the scope of data authorized in external tools. It also serves as a reference for periodic checks and compliance audits.
Far from being merely documentary, this approach becomes an operational tool for the IT department and business leaders. It structures discussions around sensitivity levels and clarifies prohibitions before any AI project launch.
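To show how such a classification becomes operational rather than purely documentary, here is a minimal sketch; the category names mirror those above, while the threshold and function names are illustrative assumptions:

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    SENSITIVE = 4  # personal data, trade secrets, regulated records

# Illustrative policy: only public and internal data may be sent
# to an external SaaS AI tool; everything above stays on-premises.
MAX_EXTERNAL = Sensitivity.INTERNAL

def allowed_in_external_ai(level: Sensitivity) -> bool:
    """True if data at this sensitivity level may leave the organization."""
    return level.value <= MAX_EXTERNAL.value

assert allowed_in_external_ai(Sensitivity.PUBLIC)
assert not allowed_in_external_ai(Sensitivity.CONFIDENTIAL)
```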
Define Clear Usage Rules
Shared usage rules must explicitly define what can be entered in a prompt or uploaded as an attachment: no customer data, no payroll information, and no contractual secrets. These directives should be documented in an internal charter and approved by management.
A quick-start guide distributed to teams fosters adoption and reduces oversights. In parallel, a brief training program—through workshops or e-learning—raises awareness of best practices and concrete risks.
Without a formal framework, each employee acts at their discretion, often without ill intent. A confidentiality incident can then occur despite an otherwise robust security policy.
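These rules can also be enforced technically rather than left to individual discretion. The sketch below assumes a simple pattern check runs before any prompt is submitted to an external tool; the patterns shown (email, Swiss AHV number, IBAN) are illustrative and would be derived from your own classification and charter:

```python
import re

# Illustrative patterns for data that must never appear in a prompt.
FORBIDDEN_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AHV number": re.compile(r"\b756\.\d{4}\.\d{4}\.\d{2}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}(?:\s?\w{4}){3,7}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the list of rule violations found in a prompt."""
    return [name for name, pattern in FORBIDDEN_PATTERNS.items()
            if pattern.search(prompt)]

violations = check_prompt("Pay CHF 500 to CH93 0076 2011 6238 5295 7.")
if violations:
    print("Prompt blocked, contains:", ", ".join(violations))
```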
Choose Secure Tools and Architectures
Before approving a provider, inquire about how your data is processed: is it used for training? Where is it stored? Is a “no training” mode offered? What contractual guarantees (SLAs, third-party audits) are in place? These questions should appear in your request for proposal or in your subcontracting agreements.
If the answers are vague or incomplete, consider alternatives: open-source models deployed on-premises, private AI platforms hosted in Switzerland, or sector-specific solutions. These approaches drastically limit outgoing data flows and ensure full traceability.
Using modular, open-source components aligns with Edana’s philosophy: open, scalable, and secure. This also helps you avoid vendor lock-in and retain control of your AI stack over the long term.
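As an illustration of the on-premises route, the following sketch queries a locally hosted open-source model through Ollama’s HTTP API; the endpoint and model name assume a default local Ollama installation, and any self-hosted inference server with an HTTP interface would follow the same pattern:

```python
import json
import urllib.request

# Assumes a local Ollama server with an open-source model already pulled
# (e.g. `ollama pull llama3`). Prompts never leave your own network.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

print(ask_local_model("Summarize our internal leave policy in two sentences."))
```

Because inference runs entirely on hardware you control, retention, logging, and deletion remain governed by your own policies rather than a provider’s terms.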
Engage Stakeholders
AI is not a purely technical topic. Legal, IT, and business teams must collaborate closely to assess risks and validate use cases. Governance should include cross-functional committees bringing together IT leadership, compliance, and domain managers.
These bodies should meet regularly to review usage rules, update data classification, and validate new use cases. They can also decide to implement ad hoc audits or awareness workshops.
This collaborative approach fosters a shared risk culture and significantly reduces AI-related confidentiality incidents.
Combine High-Performance AI with Data Protection
Harnessing AI with confidence requires understanding that any data you transmit leaves your zone of control. The stakes of GDPR and the Swiss Federal Act on Data Protection, model opacity, and trade secret leakage risks demand a structured strategy. Classifying your data, formalizing usage rules, choosing secure architectures, and engaging the right stakeholders are key to responsible use.
Edana experts support organizations with AI usage audits, compliance framing, secure-architecture implementation, and bespoke solution development. Our contextual approach, based on open source and scalability, ensures an optimal balance between performance, cost, and confidentiality.






