By Bernd Pulch Imvestigative Team
This update provides a side-by-side comparison of the Intelligence and National Security Alliance (INSA) as it existed during Bernd Pulch’s original report in 2011 versus the massive, institutionalized “Shadow IC” organization it has become by 2026.
INSA: From “Secret” Membership to Institutionalized Power โ The 2026 Update
By Bernd Pulch Investigative Team
First Published: September 15, 2011 | Updated: March 2026
Fifteen years ago, we published a “Top Secret” list of the members of the Intelligence and National Security Alliance (INSA). At the time, the organization operated as a relatively opaque bridge between the halls of the CIA and NSA and the private boardrooms of Beltway contractors.
Today, that “shadow” has stepped into the light. INSA is no longer a quiet club; it is the definitive legislative and social engine of the U.S. Intelligence Communityโs (IC) privatization. Below is a detailed look at how the organization has mutated and grown since our original 2011 report.
The Evolution: 2011 vs. 2026 Feature 2011 Status (Original Report) 2026 Status (Current Update) Total Membership ~100 Corporate Members 180+ Corporate Members (Record High) Board Leadership Chaired by Frances Townsend Chaired by Letitia A. Long (Former NGA Director) Corporate Dominance “The Big Five” (Lockheed, Raytheon, etc.) The “Data Giants” (AWS, Google, Palantir, Salesforce) Key Focus Traditional Defense & Signals Intel AI, Cybersecurity, and “Insider Risk” Public Profile Low / Niche High / Institutionalized (Flagship Summits) The 2026 Leadership: Who Pulls the Strings? The “Top Secret” list of names from 2011 has been replaced by a rotating door of the most powerful figures in global surveillance. As of January 2026, the Board of Directors has been refreshed with heavy hitters from the intersection of AI and tactical intelligence. New 2026 Board Appointments:
- Aaron Bedrowsky (GDIT): Oversees Intelligence and Homeland Security.
- Meisha Lutsey (CACI International): A dominant force in mission and engineering support.
- Jay โScottโ Goldstein, PhD (Parsons Corp): Leading the charge on “Defense & Intelligence” strategy.
- Christy Wilder (Peraton): Chief Security Officer, focusing on the integration of private security clearance systems.
- Peter Kant (Enabled Intelligence): Represents the new wave of AI-driven data labeling and processing.
The Constant Presence:
Letitia A. Long remains the Chairwoman. Her tenure marks the transition of INSA from a networking group into a policy-shaping entity that dictates how the government “shares” data with private firms.
A Shift in Membership: From Metal to Algorithms
In 2011, the membership was dominated by hardware manufacturers (aerospace and defense). The 2026 membership list reveals a fundamental shift toward Software-as-a-Service (SaaS) and Data Hegemony. - The Rise of Small Tech: In 2024โ2025, INSA saw a 21% surge in small business members. These are not traditional “mom-and-pop” shops; they are boutique AI firms and cyber-intelligence startups (like Enabled Intelligence and Grindstone LLC) that provide the specialized algorithms the NSA can no longer build in-house.
- Academic Encirclement: INSA has deeply embedded itself into universities (e.g., Applied Research Laboratory at Penn State). They are no longer just hiring retirees; they are grooming the next generation of “private spooks” through specialized scholarship programs like the LtGen Vincent R. Stewart Scholarship.
The “Baker Award” โ The Ultimate Insider Prize
In our 2011 report, we noted the prestige of the William Oliver Baker Award. The list of recipients has since become a “Who’s Who” of the Deep Stateโs most influential figures: - 2025 Recipient: William J. Burns (Former CIA Director). Honored for his role in declassifying intelligence during the Ukraine conflictโa move heavily coordinated with INSA-linked private partners.
- Previous Notables: Paul Nakasone (2024), Tom Ridge (2022), and Susan Gordon (2021).
Conclusion: The Private-Public Blur
The “Secret List” of 2011 is now the “Open Registry” of 2026. The danger today is not that we don’t know who they are, but that the distinction between the U.S. Government and the INSA membership has effectively vanished. When the Chairwoman of INSA is a former Director of a major intelligence agency, the “Alliance” isn’t just a nameโit’s a merger.
Original 2011 Document Archive: View Original Post https://berndpulch.org/2011/09/15/top-secret-list-of-members-of-the-intelligence-and-national-security-alliance/
To fully update the Bernd Pulch investigative report, we must look beyond the generic “Big Five” contractors of 2011. The 2026 membership list reveals a massive expansion into Silicon Valley, academia, and specialized AI firms.
While the full database of all 180+ corporate members is proprietary to the Alliance, the following list represents the primary power brokers and new entries identified in 2026.
The 2026 INSA Power Registry
I. The Board of Directors (The Decision Makers)
These individuals represent the primary bridge between private profit and state intelligence requirements.
- Chairwoman: Letitia A. Long (Former Director, NGA)
- Megan Anderson, PhD: IQT (In-Q-Tel)
- Aaron Bedrowsky: GDIT (General Dynamics)
- LTG Scott D. Berrier, USA (Ret.): Booz Allen Hamilton
- John DeSimone: Grindstone LLC
- Richard Durand Jr.: AT&T Public Sector
- Jay โScottโ Goldstein, PhD: Parsons Corp.
- Barbara Haines-Parmele: ManTech
- Gordon Hannah: Deloitte & Touche LLP
- Peter Kant: Enabled Intelligence
- Meisha Lutsey: CACI International
- Christina Mancinelli: Lockheed Martin Space
- David Marlowe: Amentum
- Cynthia Mendoza, PhD: BAE Systems
- Bill Pessin: Salesforce National Security
- Roy Stevens: Leidos
- Christy Wilder: Peraton
II. Platinum & Gold Tier Corporate Members (The “Heavies”)
These firms provide the backbone of global signals intelligence and cloud infrastructure. - Amazon Web Services (AWS)
- Google Cloud
- Microsoft Federal
- Palantir Technologies
- Raytheon Technologies (RTX)
- Northrop Grumman
- Oracle State & Local
III. The “New Guard” (AI & Cybersecurity Specialists)
Significant new additions since the 2011 report, focusing on automated surveillance and insider threat detection. - Enabled Intelligence: Specialists in AI data labeling for the IC.
- Grindstone LLC: Niche intelligence services and consulting.
- Freedom Technology Solutions Group: Tactical IT solutions.
- Hawkeye360: Space-based radio frequency (RF) mapping.
- Boadicea Solutions: Specialized intelligence support.
IV. Academic & Research Partners
The “intellectual” arm of the alliance used for recruiting and R&D. - Applied Research Laboratory (Penn State University)
- University of Arizona
- Johns Hopkins University Applied Physics Lab (APL)
Summary of Change: 2011 vs. 2026
The most striking difference is the transparency of the In-Q-Tel (IQT) connection. In 2011, the link between CIA venture capital and INSA was discussed in hushed tones; in 2026, IQT executives sit directly on the INSA Board, formalizing the pipeline from taxpayer-funded tech startups to multi-billion dollar defense contracts.
The following is a standalone executive summary of the Intelligence and National Security Alliance (INSA) 2025โ2026 findings on Insider Risk Management, specifically focusing on the integration of Artificial Intelligence and the protection of emerging technologies.
WHITE PAPER: The Future of Insider Risk (2025โ2026)
Source: Intelligence and National Security Alliance (INSA) โ Insider Threat Subcommittee
Release Date: August 2025 (Updated March 2026)
I. Executive Overview
As the U.S. Intelligence Community (IC) and the Defense Industrial Base (DIB) undergo a rapid digital transformation, the traditional definition of an “insider threat” has evolved. INSAโs latest research highlights a shift from reactive monitoring (catching a leak after it happens) to predictive AI-driven intervention (identifying behavioral anomalies before a breach occurs).
II. Key Findings: The AI Integration Shift
The primary focus of the 2025โ2026 cycle is the deployment of Machine Learning (ML) to monitor cleared personnel.
- Behavioral Baselining: AI tools are now being used to create “pattern of life” profiles for employees. This includes monitoring keystroke dynamics, access timing, and even sentiment analysis of internal communications to detect “disgruntlement” or “ideological radicalization.”
- The “Shadow AI” Risk: A new category of threat emerged in 2025: employees using unauthorized generative AI tools to process classified or proprietary data, unintentionally leaking “prompts” that contain sensitive national security secrets.
- Automation of Vetting: Under the Trusted Workforce 2.0 initiative, INSA members are advocating for “Continuous Vetting” (CV), which replaces periodic investigations with real-time data pulls from financial, legal, and social media records.
III. Target Sectors & Adversary Tactics
The paper warns that foreign adversaries (specifically the CCP) have shifted their focus toward unclassified innovation hubs: - Dual-Use Tech: Startups and small businesses working on quantum computing and biogenetics are now primary targets because they lack the “security-first” culture of Tier-1 contractors.
- Recruitment via Illicit Markets: In 2025, there were over 91,000 documented instances of threat actors soliciting insiders on the dark webโoffering financial incentives to employees in the telecommunications and aerospace sectors to bypass security stacks.
IV. Critical Recommendations for 2026 - Transparency in AI Models: Organizations must ensure AI risk-detection models are transparent to avoid “false positives” that could unfairly jeopardize an employeeโs security clearance.
- Cross-Agency Task Forces: Creation of a multi-agency task force to protect “non-cleared” academic institutions that are currently the “soft underbelly” of U.S. technological innovation.
- Human-Centric Monitoring: While AI handles the data, human analysts must remain the final decision-makers to account for “human factors” (e.g., family emergencies or mental health) that AI might misinterpret as malicious intent.
Analysis Note: This white paper marks the first time INSA has openly admitted that the “private lives” of employeesโincluding social media behavior and real-time financial fluctuationsโare now considered “active data points” in national security maintenance.
The 2026 โPredictive Surveillanceโ Toolkit
The technological landscape of Insider Risk mitigation has evolved dramatically. Many of the tools shaping this environment are developed or deployed by companies connected to the Intelligence and National Security Alliance (INSA), including major contractors such as BAE Systems, CACI, and GDIT.
By 2026, these platforms form the frontline of autonomous employee surveillance. The defining shift from earlier systems is the adoption of Continuous Evaluation (CE). Instead of periodic background checks, these systems monitor the digital life of cleared personnel in real time.
1. ClearForce โ The โResolveโขโ Platform
ClearForce, led by former military and intelligence officials, has become a cornerstone of the Trusted Workforce 2.0 initiative.
- Core Function: Automates the reporting of โHuman Risk Signalsโ from thousands of external data sources.
- 2026 Capability: Generates real-time alerts related to financial distress, legal trouble, or social indicators such as sudden shifts in public social-media sentiment. The platform effectively bridges the gap between an employeeโs private life and their security clearance status.
2. Enabled Intelligence โ AI Data Labeling & Monitoring
A rising presence within the INSA ecosystem, Enabled Intelligence focuses on the โHuman-in-the-Loopโ approach to artificial intelligence.
- Core Function: Provides labeled datasets used to train AI systems designed to detect insider threats.
- 2026 Capability: Specialized detection of โShadow AIโ activity โ identifying when employees use unauthorized large language models (LLMs), such as personal ChatGPT instances, to process sensitive workplace information.
3. Teramind โ Behavioral Forensics
Widely deployed among high-security contractors, Teramind provides an extremely granular view of employee activity.
- Core Function: User Entity Behavior Analytics (UEBA).
- 2026 Capability: Uses OCR (Optical Character Recognition) to read an employeeโs screen in real time, flagging sensitive keywords even within encrypted apps or embedded images. It also incorporates sentiment analysis to detect changes in typing patterns or language that could signal hostility, disengagement, or insider-risk behavior.
4. Nisos โ The โAscendโ Platform
Nisos specializes in OSINT-driven monitoring (Open Source Intelligence).
- Core Function: External threat hunting.
- 2026 Capability: Scans for โDigital Echoesโ โ signals that an employee may be targeted by foreign intelligence operatives on platforms such as LinkedIn or professional networks. AI models generate a confidence score regarding potential affiliations with foreign interests based on publicly available digital footprints.
5. AnySecura โ Real-Time Blocking & Watermarking
A major tool within Zero Trust security environments by 2026.
- Core Function: Deep endpoint monitoring combined with advanced data loss prevention (DLP).
- 2026 Capability: The system can automatically throttle or disconnect internet access if high-risk behavioral sequences are detected โ for example, printing documents immediately after receiving a negative performance review. It also embeds invisible digital watermarks into screen views to trace potential leaks back to individual users.
Comparative Summary of 2026 Tool Capabilities
| Tool | Primary Data Source | โRed Flagโ Trigger |
|---|---|---|
| ClearForce | Legal, financial, and public records | Bankruptcy, DUI incidents, or aggressive social media posts |
| Teramind | Real-time desktop activity | Unauthorized file access or hostile typing patterns |
| Nisos | Global OSINT and deep-web intelligence | Contact with suspected foreign intelligence proxies |
| Enabled Intelligence | AI usage logs | Pasting sensitive or classified text into public LLMs |
| AnySecura | File and network traffic | Large data transfers to personal cloud storage |
The โShadow ICโ Conclusion
In 2011, the debate surrounding insider threats focused on who was in the room. By 2026, the more significant question has become who is watching the room.
The tools outlined above demonstrate how the private sector has increasingly become an automated extension of the national-security vetting apparatus. Through real-time behavioral analytics, AI monitoring, and continuous evaluation frameworks, surveillance has evolved from periodic oversight into a persistent digital ecosystem.
The result is a security architecture where the boundaries between professional oversight and private life have effectively dissolved.
ADVISORY: The 2026 โSecurity-Privacy Gapโ
How Federal Contractors Navigate Labor Laws Through Surveillance
As of early 2026, a significant legal grey area has emerged between modern workplace privacy protections and national security oversight. While the U.S. Department of Labor and several statesโincluding California and New Yorkโhave strengthened protections for employee privacy and off-duty conduct, federal contractors are increasingly exempting themselves from these rules under the justification of national security requirements.
Many companies connected to the Intelligence and National Security Alliance (INSA) rely on federal security mandates to justify surveillance systems that would otherwise face legal challenges in the private sector.
1. The โSecurity Preemptionโ Strategy
Contractors working within the federal intelligence ecosystem often argue that their legal obligations under Continuous Vetting (CV) requirements override local labor protections.
- The Loophole: In states where โlifestyle discriminationโ laws prohibit employers from punishing workers for legal off-duty activities, contractors frequently invoke the Boyle Defense or the doctrine of federal preemption. Their argument is that compliance with federal clearance requirements obligates them to report behavioral anomalies, shielding them from lawsuits related to privacy or discrimination.
- Result: A cleared employee working for a contractor in New York, for example, may be flagged internally for a โhostileโ social media postโeven if that same post would be legally protected speech for employees in a normal private-sector workplace.
2. Consent as a Condition of Employment
Updates to the SF-86 security clearance questionnaire and the broader Trusted Workforce 2.0 framework have effectively transformed informed consent into a prerequisite for employment.
- Algorithmic Accountability: Several new privacy laws introduced in the mid-2020sโsuch as the Minnesota Consumer Data Privacy Actโallow citizens to challenge automated profiling decisions made by artificial intelligence systems.
- The Bypass: Security clearance paperwork increasingly contains clauses indicating that AI-generated risk scores are unique to national security vetting. Because they are considered part of the federal clearance system rather than employment evaluation, these decisions may fall outside standard Equal Employment Opportunity Commission (EEOC) or Department of Labor review processes.
3. The โFinancial Distressโ Trap
Modern labor regulations restrict the use of credit scores in employment decisions. However, within the national security workforce, financial transparency remains a central component of insider-risk monitoring.
- Real-Time Monitoring: Continuous Vetting platforms are capable of analyzing financial signals such as debt ratios, court records, and missed payments.
- Risk: Financial hardshipโwhich would normally remain privateโcan trigger internal investigations within the clearance system. This dynamic creates a potential pathway where personal financial struggles become interpreted as indicators of insider-threat vulnerability.
4. Misclassification and the โIndependent Spookโ
A renewed Department of Labor crackdown on worker misclassification has introduced additional legal tension within the intelligence contracting ecosystem.
- The Conflict: Many specialized cyber-security analysts and intelligence researchers operate as independent contractors rather than full-time employees.
- The Risk: Under stricter classification rules, companies may be forced to recognize these specialists as employees, potentially increasing their legal liability for workplace surveillance practices applied to them.
- Industry Response: Lobbying efforts have reportedly focused on creating a special classification for national-security specialists within existing labor frameworks.
Summary of Legal Vulnerabilities (2026)
Employees working within the national security contracting ecosystem often face a privacy environment fundamentally different from that of standard private-sector workers.
- Limited Data Deletion Rights: Clearance-related records may be stored indefinitely within federal investigative databases.
- Reduced Off-Duty Protections: Public digital activity can be incorporated into behavioral risk scoring models.
- Limited Algorithmic Transparency: Individuals typically cannot review the internal logic behind automated insider-risk assessments.
Investigative Tip: Employees working with federal contractors should carefully review their employment contracts for clauses related to Continuous Vetting or security monitoring addendums. These provisions often define the legal scope of digital monitoring and data collection.
The 2026 AI Semantic Trigger List
By 2026, the transition from traditional human-led investigations to automated, agent-driven surveillance has led to the development of sophisticated Semantic Trigger Libraries. Modern AI monitoring systems used in insider-risk programs no longer rely solely on isolated keywords. Instead, they analyze linguistic clustersโgroups of words and contextual signals that together may indicate elevated risk behavior.
Many modern User and Entity Behavior Analytics (UEBA) platforms categorize these signals into several primary โrisk domains,โ allowing AI systems to interpret both language patterns and behavioral context.
1. The โDisgruntlement & Grievanceโ Cluster
- Keywords: unfair, overlooked, toxic, retaliation, bypassed, PIP, merit, grievance, severance, bypass, meritocracy
- AI Logic: Monitoring systems look for rising patterns of workplace frustration by analyzing increases in these terms across internal communication channels such as email or messaging platforms. Sudden spikes following organizational eventsโsuch as performance reviews or promotion decisionsโcan trigger additional monitoring.
2. The โIdeological & Radicalizationโ Cluster
- Keywords: manifesto, acceleration, revolution, subvert, alternative, truth, oppress, hierarchy, system, corruption
- AI Logic: AI systems analyze shifts in communication style or rhetoric across public online platforms. A transition from professional language toward strong anti-system narratives or ideological framing may trigger deeper analysis within insider-risk monitoring frameworks.
3. The โExfiltration & Technical Bypassโ Cluster
- Keywords: VPN, encrypted, bridge, thumb drive, upload, storage, sync, prompt injection, jailbreak, LLM, bypass, access
- AI Logic: These terms are considered operational indicators. Monitoring platforms may generate alerts if such keywords appear alongside activity involving sensitive file systems or restricted data environments.
4. The โFinancial Pressureโ Cluster
- Keywords: bankruptcy, consolidation, loan, predatory, overdue, gambling, payout, equity, liquidate, relief, credit
- AI Logic: Financial stress indicators are often correlated with broader behavioral signals. Monitoring platforms may analyze search patterns or public records alongside financial indicators to evaluate potential insider-risk vulnerability.
2026 Shift: From โKeywordsโ to Narrative Mapping
Modern monitoring systems increasingly rely on Narrative Mapping. Instead of simply detecting individual words, AI models evaluate context, tone, and evolving language patterns to identify potential intentโeven when coded language is used.
| Risk Level | AI Observation | Automated Response |
|---|---|---|
| Level 1 (Monitoring) | Occasional use of grievance-related language. | Baseline monitoring increased without direct alert. |
| Level 2 (Review) | Combined financial stress signals with technical search activity. | Internal review or managerial awareness notification. |
| Level 3 (Intervention) | Strong ideological language combined with potential data-access indicators. | Automated security review or temporary system restriction. |
The โAgentic AIโ Challenge
A growing concern within insider-risk discussions is the role of AI agents. Personal productivity assistants and automated research tools may inadvertently interact with sensitive data environments.
Security researchers have warned that techniques such as prompt injection could theoretically manipulate AI systems into revealing information or interacting with restricted datasets. In such scenarios, the automated tool itself becomes part of the insider-risk equation, even when the human operator is unaware.
Investigative Note: Modern semantic monitoring systems rely on continuously updated linguistic models. These models adapt rapidly to emerging digital behaviors, incorporating new data patterns derived from real-world security incidents and data leak investigations.
TO BE CONTINUED
Bernd Pulch โ Bio
![]() | Bernd Pulch (M.A.) is a forensic expert, founder of Aristotle AI, entrepreneur, political commentator, satirist, and investigative journalist covering lawfare, media control, investment, real estate, and geopolitics. His work examines how legal systems are weaponized, how capital flows shape policy, how artificial intelligence concentrates power, and what democracy loses when courts and markets become battlefields. Active in the German and international media landscape, his analyses appear regularly on this platform. Full bio โ | Support the investigation โ |



