My study notes, vibe coded into a high-yield SecAI+ cram.
Covering core exam domains, testable attack patterns, AI governance frameworks, RAG security, explainability, defensive controls, and exam-style practice. The biggest scoring opportunity is still understanding how attacks and controls differ across the AI lifecycle. This site is not meant to be everything you need to pass the test, but hopefully helps.
Objective weights
If it happens after deployment, it is usually not data poisoning.
“Was this record used?” is membership inference; “reconstruct the record” is inversion.
Model registry stores model versions. Vector DB stores embeddings for retrieval.
Govern, Map, Measure, Manage — in that order.
High-yield study sequence
Know where attacks occur: poisoned data, backdoored training, API abuse, adversarial inference, and drift during operations.
Rate limiting for extraction, differential privacy for privacy leakage, validation for poisoned data, monitoring for prompt injection.
Feature store, vector database, inference API, model registry, retriever, embeddings, and model cards show up in scenario questions.
CompTIA often tests “BEST” versus “FIRST,” governance versus technical controls, and subtle attack naming.
Night-before checklist
- Know the 10 core attack types.
- Know the 7 trustworthy AI characteristics.
- Know NIST AI RMF and ISO 42001 / 23894 roles.
- Know RAG architecture and its security risks.
- Know LIME, SHAP, counterfactuals, saliency maps, attention.
- Know defensive AI uses: UEBA, alert correlation, SOAR.
AI fundamentals and exam language
SecAI+ is security and governance focused, not heavy on advanced math. You need clean distinctions between AI, machine learning, deep learning, NLP, LLMs, and operational patterns like RAG and federated learning.
Artificial Intelligence
Broad field of systems performing tasks that usually require human intelligence.
Machine Learning
Uses data to learn patterns and make decisions or predictions.
Deep Learning
Neural network-based learning for high-dimensional data like images, audio, and language.
Supervised Learning
Learns from labeled examples such as benign versus malicious samples.
Unsupervised Learning
Finds hidden patterns in unlabeled data such as clustering network traffic.
NLP / LLMs
Language-focused AI systems. LLMs are deep learning models that power chat, summarization, and generation.
Concept pairs to keep separate
Modern AI terms you must know
Emerging threat examples
GenAI can create highly personalized phishing lures or clone voices for social engineering.
Synthetic audio or video can enable fraud, disinformation, or identity spoofing.
Malware mutates itself to evade signature-based detection.
Repeated API queries can help an attacker reconstruct a proprietary model.
AI / ML lifecycle security
CompTIA repeatedly tests where in the lifecycle a problem appears. The same attack name may be wrong if the phase is wrong.
| Phase | What happens | Primary risks | Common controls |
|---|---|---|---|
| 1. Data Collection | Gather raw samples from logs, users, images, documents, telemetry, or labels. | Data poisoning, data leakage, bias in source data, poor lineage. | Dataset provenance, access control, validation, data minimization. |
| 2. Data Preparation | Cleaning, normalization, feature engineering, labeling. | Label flipping, hidden bias, bad feature engineering. | Quality checks, dual review, representative samples, lineage tracking. |
| 3. Training | Learn weights or rules from prepared data. | Backdoor insertion, poisoned updates, insecure dependencies, training data leakage. | Secure training environment, signed dependencies, privacy controls, verification. |
| 4. Validation | Test performance, fairness, robustness, and reliability. | Undetected bias, poor robustness, hidden performance gaps. | Bias testing, adversarial testing, explainability review, red teaming. |
| 5. Deployment | Release model to API, service, app, or device. | API abuse, supply chain compromise, misconfiguration, secrets exposure. | API gateways, model signing, configuration hardening, access control. |
| 6. Monitoring | Observe predictions, security events, outputs, and drift. | Model drift, abuse, prompt injection, abnormal queries, unsafe outputs. | Telemetry, alerts, anomaly detection, output review, retraining triggers. |
| 7. Retraining / Retirement | Update or retire a model when performance or risk changes. | Catastrophic forgetting, stale controls, untracked model versions. | Versioning, controlled retraining, rollback plans, decommissioning process. |
Training vs inference attacks
- Data poisoning
- Label flipping
- Backdoor attacks
- Poisoned federated updates
- Adversarial examples
- Model extraction
- Prompt injection
- Evasion attacks
Lifecycle questions CompTIA likes
A: During data collection or preparation before or during training.
A: During inference or live prediction.
A: Monitoring and operations over time.
A: Deployment and inference.
Attack Atlas: the 10 core attacks you must know cold
These show up constantly in SecAI+ material. Focus on the goal, phase, and best defense for each.
Data Poisoning
Manipulating training data so the model learns the wrong patterns.
Label Flipping
Deliberately mislabeling training examples, such as marking malware benign.
Backdoor Attack
A hidden trigger is inserted during training so a special input causes a malicious result.
Adversarial Examples
Small input changes cause misclassification, like a stop sign read as a speed limit sign.
Model Extraction
Attacker queries the model repeatedly to approximate or steal it.
Model Inversion
Attacker reconstructs sensitive training data from outputs.
Membership Inference
Attacker determines whether a specific record was used in training.
Prompt Injection
Malicious prompts or retrieved content override instructions in an LLM workflow.
Context Poisoning
Attacker poisons retrieved documents so the model consumes malicious context.
Training Data Leakage
Sensitive source content becomes memorized or exposed by the model.
Attack distinctions that get tested
Best-vs-first attack mitigation trap
Defensive controls across the AI stack
SecAI+ is heavy on practical controls: privacy protection, integrity protection, secure deployment, and monitoring. Tie every control to a specific attack or lifecycle risk.
| Layer | Control | Why it matters |
|---|---|---|
| Data Layer | Data minimization | Reduces privacy exposure and unnecessary sensitive training content. |
| Data Layer | Differential privacy | Makes individual training records harder to reconstruct. |
| Data Layer | Dataset validation / verification | Detects poisoning, corruption, and poor-quality sources. |
| Model Layer | Digital signatures | Protects integrity of model artifacts during deployment. |
| Model Layer | Weights encryption | Protects sensitive model internals from theft or tampering. |
| Inference Layer | API gateways / rate limiting | Reduces extraction and abuse through repeated query attempts. |
| Inference Layer | Input sanitization | Catches malformed, malicious, or manipulative inputs. |
| Monitoring Layer | Inference monitoring | Detects unusual prompts, outputs, query patterns, and drift. |
Security monitoring priorities
- Abnormal query volumes
- Prompt injection attempts
- Unexpected output patterns
- Adversarial input indicators
- Model drift and performance decline
- Changes to dependencies or models
Deployment environments
Supply chain controls
Architecture components CompTIA loves
| Component | Purpose |
|---|---|
| Model Registry | Stores and versions trained models. |
| Vector Database | Stores embeddings used for semantic retrieval. |
| Feature Store | Stores curated ML features for consistent training and inference. |
| Inference API | Accepts requests and returns model predictions or generations. |
| Retriever | Finds relevant documents or passages for RAG workflows. |
| Model Card | Documents intended use, limitations, risk, and evaluation details. |
RAG architecture
Architecture wording traps
Stores model versions, not embeddings.
Stores embeddings, not the trained model binary.
Holds features used by models.
Runs live predictions or generations.
Governance and compliance
NIST AI RMF
Primary U.S. AI risk framework with four core functions: Govern, Map, Measure, Manage.
ISO/IEC 42001
AI management system standard, similar in spirit to how ISO 27001 structures management systems for security.
ISO/IEC 23894
AI risk management guidance focused on identifying, analyzing, and treating AI risks.
MITRE ATLAS
Adversarial threat framework for AI systems, similar in spirit to ATT&CK but for ML/AI attacks.
NIST AI RMF functions
Trustworthy AI characteristics
- Valid and reliable
- Safe
- Secure and resilient
- Accountable and transparent
- Explainable and interpretable
- Privacy-enhanced
- Fair with managed harmful bias
EU AI Act risk tiers
Prohibited uses such as certain social scoring-style patterns.
Critical use cases such as healthcare or critical infrastructure face strong requirements.
Transparency obligations are common for user-facing systems like chatbots.
Governance concepts likely to be tested
Explainable AI (XAI)
Explainability is crucial in regulated environments like healthcare, finance, and government. Expect questions on intrinsic models versus post-hoc explanations.
These are explainable by design: linear regression, decision trees, and rule-based systems.
These explain black-box models after prediction: LIME, SHAP, saliency maps, attention maps, and surrogate models.
Answer “What would need to change for a different outcome?” Great for fairness and user-facing explanation.
Which methods are common in regulated AI?
Widely used for consistent feature contribution explanations.
Good for local explanations of a single prediction.
Useful for regulatory and fairness-oriented explanation.
Naturally interpretable and easy to audit.
Comparison matrix
| Method | Type | What it explains | Best use |
|---|---|---|---|
| Linear models | Intrinsic | Direct coefficients | Simple interpretable predictions |
| Decision trees | Intrinsic | Decision path | Auditable human-readable branching |
| Rule-based models | Intrinsic | Human-written rules | Compliance and SIEM style logic |
| LIME | Post-hoc | Local explanation | One prediction at a time |
| SHAP | Post-hoc | Feature contributions | Consistent explanation across models |
| Feature importance | Model-specific | Global variable influence | Tree ensembles and boosted models |
| Attention maps | Model-specific | Token focus | Transformer / LLM interpretation |
| Saliency maps | Vision XAI | Important pixels | Computer vision |
| Counterfactuals | Model analysis | What would change the outcome | Fairness and user explanation |
LIME vs SHAP
Practical examples
AI-assisted security operations
This exam also covers how AI helps defenders: anomaly detection, alert correlation, incident enrichment, and automation through SOAR-style playbooks.
User and Entity Behavior Analytics baselines behavior and flags anomalies, such as strange access time or impossible travel.
Combines many raw alerts into one incident storyline across email, endpoint, and network tools.
Automates consistent response steps such as isolating a host, disabling an account, and creating a case.
Uses AI/ML to reduce false positives and prioritize likely-real events.
Defensive vs adversarial AI
Defense: reports, patching support. Attack: phishing, deepfakes.
Defense: find intrusions. Attack: learn “normal” to hide.
Defense: malware classification. Attack: evasion and adversarial noise.
Sample ransomware playbook
Common testable use cases
- Behavior-based insider threat detection
- False-positive reduction in SIEM queues
- Automated triage of phishing emails
- Malware classification and clustering
- Incident prioritization and summarization
- Automated compliance evidence mapping
20-question practice exam
Answer all questions, then click grade. Explanations appear automatically so you can use this as both an assessment and a cram sheet.
Top exam tricks
- After deployment → likely inference attack
- “Was this record used?” → membership inference
- “Reconstruct the record” → inversion
- Bias problem → representative data / governance
- “BEST” answer may be the simplest strong control
Quick recall pairs
Glossary flashcards
Click a card to flip it. Use search in the header to jump to matching terms.