# A Specialized and Secure AI Orchestrator for Swiss Financial Compliance
Project repository: Digital-AI-Finance/wecan-innosuisse-ai-draft (GitHub)
## Business & Regulatory Terms

| Term | Definition |
|---|---|
| AML | Anti-Money Laundering - regulations and procedures to prevent money laundering |
| CRM | Customer Relationship Management - software for managing customer interactions |
| FADP | Federal Act on Data Protection (Swiss data privacy law, revised 2023) |
| FATF | Financial Action Task Force - intergovernmental body setting AML standards |
| FINMA | Swiss Financial Market Supervisory Authority |
| GDPR | General Data Protection Regulation (EU data privacy regulation) |
| KYC | Know Your Customer - identity verification procedures for financial institutions |
| LOI | Letter of Intent - preliminary agreement from pilot partners |
| On-premise | Software deployed on the customer’s own infrastructure (not cloud) |
| SBA | Swiss Bankers Association |

## AI & Machine Learning Terms

| Term | Definition |
|---|---|
| F1 Score | Harmonic mean of precision and recall; measures classification accuracy (range 0-1) |
| Fine-tuning | Adapting a pre-trained model to a specific domain using additional training data |
| Hallucination | When an AI model generates information not present in the source document |
| Hallucination Rate | Percentage of generated outputs containing fabricated information (baseline: 23.5%) |
| LLM | Large Language Model - AI models trained on large text corpora (e.g., Mistral, Llama) |
| LoRA | Low-Rank Adaptation - parameter-efficient fine-tuning technique |
| NER | Named Entity Recognition - identifying entities (names, dates, amounts) in text |
| OCR | Optical Character Recognition - converting scanned images to machine-readable text |
| QLoRA | Quantized LoRA - combines quantization with LoRA for memory-efficient fine-tuning |
| TRL | Technology Readiness Level - scale from 1 (basic research) to 9 (proven in operation) |
| Zero-shot | Model performs a task without task-specific training examples |
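The F1 score defined above is the harmonic mean of precision and recall; a short snippet makes the formula concrete (the precision/recall values below are illustrative, not project results):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (range 0-1)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values: 0.9 precision, 0.8 recall
print(round(f1_score(0.9, 0.8), 4))  # 0.8471
```

Because the harmonic mean penalizes imbalance, a model with high precision but low recall (or vice versa) scores markedly lower than the arithmetic mean would suggest.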

## Model Terms

| Term | Definition |
|---|---|
| Mistral v0.3 | Primary base model (7B parameters, Apache 2.0 license) |
| Llama 3.1 | Backup base model (8B parameters, Meta Community license) |
| 7B / 8B | Model size in billions of parameters |
| FP16 | 16-bit floating point precision for model inference |
| VRAM | Video RAM - GPU memory required for model loading (14-16 GB for 7B-8B models in FP16) |
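The VRAM figures above follow from a simple weights-only estimate: parameter count times bytes per parameter (2 bytes in FP16). This is a back-of-envelope sketch; activations and the KV cache need additional headroom on top of it:

```python
# Weights-only VRAM estimate: parameters (in billions) x bytes per parameter.
# FP16 stores each parameter in 2 bytes; activations and KV cache are not included.
def weight_vram_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Decimal gigabytes needed just to hold the model weights."""
    return float(params_billions * bytes_per_param)

print(weight_vram_gb(7))  # 14.0 -> 7B model (Mistral) in FP16
print(weight_vram_gb(8))  # 16.0 -> 8B model (Llama 3.1) in FP16
```

This is why a 24 GB consumer card such as the RTX 4090 is the stated minimum: it holds the FP16 weights plus inference overhead, while quantized variants (e.g. via QLoRA-style 4-bit loading) would fit in considerably less.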

## Project Management Terms

| Term | Definition |
|---|---|
| D1.1 - D5.3 | Deliverable identifiers (D[WP].[sequence]) |
| GO/NO-GO | Decision point at each milestone where the steering committee approves continuation |
| Innosuisse | Swiss Innovation Agency - project funder |
| M1 - M24 | Project month identifiers (M1 = January 2026, M24 = December 2027) |
| MS1 - MS5 | Milestone identifiers (5 checkpoints across 24 months) |
| OBJ1 - OBJ8 | Objective identifiers (8 quantifiable project goals) |
| WP1 - WP5 | Work Package identifiers |
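The D[WP].[sequence] naming scheme above can be parsed mechanically; the helper below is a hypothetical illustration (not part of the project tooling) that splits a deliverable ID into its work-package number and sequence number:

```python
import re

# Hypothetical helper: split an ID like "D2.3" into (work package, sequence)
# following the D[WP].[sequence] convention used for deliverables.
def parse_deliverable(deliverable_id: str) -> tuple[int, int]:
    match = re.fullmatch(r"D(\d+)\.(\d+)", deliverable_id)
    if match is None:
        raise ValueError(f"not a deliverable ID: {deliverable_id!r}")
    return int(match.group(1)), int(match.group(2))

print(parse_deliverable("D1.1"))  # (1, 1) -> first deliverable of WP1
print(parse_deliverable("D5.3"))  # (5, 3) -> third deliverable of WP5
```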

## Work Packages

| ID | Full Name |
|---|---|
| WP1 | Project Management |
| WP2 | Domain Adaptation & Hallucination Control |
| WP3 | Long Document Understanding |
| WP4 | Multi-Source Information Fusion |
| WP5 | Intelligent Document Pre-Filling |

## Project Partners

| Abbreviation | Full Name | Role |
|---|---|---|
| FHGR | Fachhochschule Graubünden | Research Partner (3,500h) |
| Wecan | WeCanGroup SA | Implementation Partner (2,300h) |

## Infrastructure & Security Terms

| Term | Definition |
|---|---|
| A100 | NVIDIA GPU (40GB/80GB variants) for model training and enterprise inference |
| RTX 4090 | NVIDIA consumer GPU (24GB) - minimum for on-premise deployment |
| ISO 27001 | International information security management standard |
| SOC 2 Type II | Service Organization Control audit for security, availability, and confidentiality |
| TLS 1.3 | Transport Layer Security protocol for encrypted communications |
| AES-256 | Advanced Encryption Standard with 256-bit keys for data at rest |
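As a minimal sketch of enforcing the TLS 1.3 floor from the table, Python's standard `ssl` module can be configured to reject older protocol versions (client-side context only; server setup and certificate handling are out of scope here):

```python
import ssl

# Build a client-side context with standard certificate verification,
# then refuse any handshake below TLS 1.3 (requires OpenSSL 1.1.1+).
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

# Any connection wrapped with this context now negotiates TLS 1.3 or fails.
```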
*Terms sourced from project documentation and Innosuisse application 133.672 IP-SBM.*