ML Security Engineer

Building AI That Detects,
Audits & Defends

I build ML-powered threat detection systems, LLM security auditing platforms, and MITRE ATT&CK-mapped correlation engines — production AI systems designed to find real attacks and expose real vulnerabilities.

Proven across SOC environments, enterprise LLM deployments, and regulated AI security programs.

LLM security auditing & red-teaming
Threat detection ML & alert correlation
MITRE ATT&CK-mapped AI systems

No pitch. Let's assess whether ML actually solves your threat problem.
ARI 0.8224 · p=0.021 · 9 security datasets

LLM Security · Threat Detection · MITRE ATT&CK · Alert Correlation · Prompt Injection · SOC AI

Research & Production Results

0.8224 · Clustering ARI (MITRE-Core v2)
p=0.021 · Bridge Edge Proof (DARPA OpTC)
61 · Adversarial Tests (LLM Auditor)
60% · SOC Effort Reduced (Sequretek)

Where I Work Best

Specialized in ML systems that detect threats and secure AI — not general-purpose AI consulting.

📊

LLM Security & Red Teaming

Prompt injection, jailbreak testing, data leakage auditing for production AI

Threat Detection ML

GNN-based alert correlation, anomaly detection, MITRE ATT&CK mapping

📈

AI Security in Regulated Environments

Governance frameworks, audit trails, compliance-ready ML pipelines

🛡️

Production AI Security Systems

End-to-end deployment with security-first architecture: FastAPI, Docker, Kubernetes

I Work Best With:

  • Enterprise security teams building ML detection
  • Companies deploying LLMs who need red-team testing
  • SOC teams with high alert volume
  • Regulated orgs with AI compliance requirements

Once we define the threat surface and what ML can realistically solve, this is how I work:

How I Help Businesses

From problem clarity → production impact

1

Map the Threat Surface

Identify where ML can detect, correlate, or prevent security events. Define what a true positive looks like and how to measure detection quality.

  • Attack surface analysis
  • ML feasibility for detection
  • Data availability assessment
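The detection-quality step above is typically scored with a clustering metric such as Adjusted Rand Index (ARI), the metric behind the 0.8224 figure quoted on this page. A minimal pure-Python sketch of how ARI compares a predicted campaign grouping against ground-truth labels (illustrative only, not the MITRE-Core v2 implementation):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(true_labels, pred_labels):
    """Compare two clusterings of the same alerts; 1.0 = identical grouping,
    ~0.0 = no better than chance. Label names don't matter, only the grouping."""
    n = len(true_labels)
    # Contingency counts: pairs of (true cluster, predicted cluster)
    pair_counts = Counter(zip(true_labels, pred_labels))
    a = Counter(true_labels)   # row sums
    b = Counter(pred_labels)   # column sums

    index = sum(comb(c, 2) for c in pair_counts.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate clustering (e.g. all singletons)
        return 1.0
    return (index - expected) / (max_index - expected)

# Perfect agreement scores 1.0 even though the label names differ.
truth = ["campA", "campA", "campB", "campB", "campC"]
pred = [1, 1, 2, 2, 3]
print(round(adjusted_rand_index(truth, pred), 4))  # → 1.0
```

In practice you would use `sklearn.metrics.adjusted_rand_score`; the hand-rolled version just makes the pair-counting explicit.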
2

Build the Detection System

Design and implement ML pipelines for threat detection, LLM security testing, or alert correlation with production-grade architecture from day one.

  • Model training on security data
  • MITRE ATT&CK integration
  • FastAPI + Docker deployment
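The ATT&CK-integration bullet can be pictured as its simplest possible form: enriching each raw alert with a technique ID. The technique IDs below are real ATT&CK IDs, but the keyword-to-technique table and alert fields are illustrative assumptions, not the production mapping (which a correlation engine would learn or enrich rather than hard-code):

```python
# Illustrative lookup table: alert signature -> (ATT&CK technique ID, name).
ATTACK_MAP = {
    "failed_login_burst": ("T1110", "Brute Force"),
    "powershell_encoded": ("T1059", "Command and Scripting Interpreter"),
    "phishing_url_click": ("T1566", "Phishing"),
    "new_admin_logon": ("T1078", "Valid Accounts"),
}

def tag_alert(alert: dict) -> dict:
    """Return a copy of the alert enriched with ATT&CK metadata.
    'T0000' is a placeholder for unmapped signatures, not a real ID."""
    technique_id, name = ATTACK_MAP.get(alert["signature"], ("T0000", "Unmapped"))
    return {**alert, "attack_id": technique_id, "attack_name": name}

alert = {"host": "web-01", "signature": "failed_login_burst"}
print(tag_alert(alert)["attack_id"])  # → T1110
```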
3

Operationalize & Harden

Ensure outputs are actionable for SOC analysts, auditable for compliance teams, and maintainable for engineering teams over the long term.

  • SOC-ready alert formats
  • Model drift monitoring
  • Governance & audit trails
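For the drift-monitoring bullet, one common lightweight check is the Population Stability Index (PSI) between a baseline score distribution and live scores. A self-contained sketch; the 0.2 alert threshold is a conventional rule of thumb to tune per model, not a claim about any specific deployment:

```python
from math import log

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between baseline and live model scores.
    Rule of thumb (assumption, tune per model): PSI > 0.2 -> investigate drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # avoid zero-width bins on constant data

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        return [c / len(xs) for c in counts]

    p, q = hist(expected), hist(actual)
    return sum((pi - qi) * log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

baseline = [i / 100 for i in range(100)]         # flat score distribution
same = [i / 100 for i in range(100)]
shifted = [0.9 + i / 1000 for i in range(100)]   # scores collapsed near 1.0

assert psi(baseline, same) < 0.01    # no drift
assert psi(baseline, shifted) > 0.2  # drift alert
```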

About Me

I build ML systems at the intersection of AI and cybersecurity — threat detection engines, LLM security auditing platforms, and MITRE ATT&CK-mapped correlation systems that operate in production environments against real attacks.

My work spans both sides: at Sequretek I built enterprise AI security platforms for real-time threat detection. At Syneos Health I architected LLM pipelines with governance frameworks for regulated clinical environments. My research project MITRE-Core v2 — a heterogeneous GNN alert correlation engine — achieved ARI 0.8224 across 9 security datasets with a statistically significant novel finding (p=0.021, Cohen's d=1.28).

I build AI security systems that SOC analysts can trust, compliance teams can audit, and engineering teams can maintain — from research prototype to production deployment.


The Journey

2021 - Present

Senior AI Security Engineer

Syneos Health

  • Implemented LLM security controls and prompt injection defenses for enterprise AI agent workflows using LangChain — deployed in regulated clinical environments with role-based access and audit logging.
  • Deployed production AI systems on FastAPI and Kubernetes with model drift detection and anomaly alerting, handling 10k+ daily inference requests.
  • Architected AI governance frameworks for regulated healthcare LLM deployments — covering model behavior auditing, access controls, and compliance documentation.
2020 - 2021

Senior ML Security Engineer

Sequretek

Engineered enterprise AI security platforms with ML-driven real-time threat detection and automated alert triage — SOC operations experience that directly informed MITRE-Core v2 research (ARI 0.8224, bridge edge p=0.021). Reduced manual SOC reporting by 60% through behavioral anomaly scoring pipelines.

2018 - 2020

Data Analyst

Clover Infotech

Built automated data pipelines and ML monitoring dashboards for production systems. Improved operational efficiency by 20% through data-driven automation and stakeholder reporting.

Client-Ready
Case Studies

ML security systems solving real threat detection and LLM security challenges.

Threat Intelligence

MITRE-CORE Attack Correlation Engine

Heterogeneous GNN pipeline correlating raw SOC alerts into MITRE ATT&CK campaigns. ARI 0.8224 across 9 datasets. Bridge edge hypothesis supported: p=0.021, Cohen's d=1.28.

ARI 0.82 · Multi-Domain Clustering
40 · Bridge Edge Proven (p=0.021)
9 Datasets · DARPA OpTC + UNSW-NB15
AI Security
Python · LLM Security · Streamlit

Enterprise LLM Security Auditor

Client-ready platform for evaluating prompt injection, data leakage, and enterprise LLM security posture.

Prompt Injection Testing Risk Reporting
AI Compliance
Transformers · Kaggle

Community Rule Classification

Rule-aware NLP scoring that flags community violations with 92% precision.

Explainable NLP Rule Transfer
Security SOC AI
RL · Python · SIEM

RL Logon Anomaly Detection

Reinforcement learning engine that adapts thresholds to flag unusual user logon sources, destinations, and time windows.

Dynamic Thresholds Feedback Loop
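The dynamic-threshold feedback loop can be pictured with a deliberately simplified stand-in for the RL agent: analyst feedback nudges the anomaly threshold up on false positives and down on missed detections. The class name, step size, and bounds here are hypothetical, for illustration only:

```python
class AdaptiveThreshold:
    """Toy version of the feedback loop: raise the anomaly threshold when
    analysts dismiss alerts as false positives, lower it on missed detections.
    (Illustrative sketch; the production system frames this as RL.)"""

    def __init__(self, threshold=0.5, step=0.05, lo=0.05, hi=0.95):
        self.threshold, self.step = threshold, step
        self.lo, self.hi = lo, hi  # keep the threshold in a sane band

    def is_alert(self, anomaly_score: float) -> bool:
        return anomaly_score >= self.threshold

    def feedback(self, was_false_positive: bool) -> None:
        # False positive -> require a higher score next time; miss -> lower it.
        delta = self.step if was_false_positive else -self.step
        self.threshold = min(self.hi, max(self.lo, self.threshold + delta))

t = AdaptiveThreshold()
t.feedback(was_false_positive=True)  # analyst dismissed an alert
print(round(t.threshold, 2))  # → 0.55
```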

95%

Model Accuracy

AI Compliance Monitor

Automated ethics & bias detection for production ML systems


Latest Insights

Thoughts on AI, Engineering, and the future of tech.

AI Coding
Dec 15, 2025 · 12 min read

Why AI Coding "Sucks": It's Not the Models, It's Your Prompts

A 2025 deep dive into context window abuse and the LLM skill gap...

Agentic AI
Dec 8, 2025 · 10 min read

Agentic AI Takes Center Stage

Deep dive into the explosive growth of agentic AI in 2025...

AI Models
Dec 5, 2025 · 12 min read

The Ultimate AI Models Showdown: December 2025

An honest comparison of Claude Opus 4.5, GPT-5.1, Gemini 3 Pro, and Grok 4.1...

Best fit if you are:

  • A data-rich company with real operational problems
  • A team moving ML models into production at scale
  • A regulated or risk-sensitive business (MITRE-Core v2/Health/Sec)

× Not a fit if you need:

  • One-off dashboards or basic reporting
  • Academic demos or "notebook-only" projects
  • Organizations "just experimenting" without a goal

Let's Build AI Systems That Are Secure & Reliable

If you're exploring how AI security, governance, or enterprise AI systems could help your organization — I'm happy to discuss your use case and suggest next steps.

No pressure. No sales pitch. Just clarity.

Book a 15-Minute Strategy Call

Ready to Scale?

I help organizations build AI systems that are scalable, reliable, and secure — from prototype to production.

© 2026 Rahul Singh. All rights reserved.