Cognitive security is the practice of maintaining rational decision-making under adversarial conditions. It entails broadly accepting a shared reality and agreed-upon rules for reaching decisions, resisting and mitigating emotional manipulation, and protecting individuals and societies so that collective action to solve problems remains possible. Risks to cognitive security include the following:
- Manipulating human decision-making
- Hacking the "human" of the human-machine team
- Person-to-group behavior manipulation
- Delivering information to the human effectively (symbiotic human-computer interfaces)
- Expanding beyond HMI to HME (human-machine environment or human-machine ecosystem)
- Narrative weaponization
- Politicized and monetized information environments
Our research aims to develop new tools and methodologies that protect decision-making under persistent social-cyber adversarial conditions and environments. We seek to define and detect attacks against individuals, groups, and societies that are meant to confuse, delay, and degrade action, while developing novel tools and methodologies to assess the information space at all levels (e.g., operational, strategic) and phases (e.g., competition, conflict) of conflict. Finally, our research looks over the horizon at emerging and future threats to cognitive security. Example research lab outputs include developing and maintaining customized social-cyber analysis and analytic capabilities, as well as advising on monitoring, mitigating, and countering social-cyber threats.
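To make the idea of a social-cyber analytic capability concrete, the sketch below shows one simple, well-known coordination signal: many distinct accounts posting the same text within a short time window. This is an illustrative toy, not a lab tool; the function name `flag_coordination`, the post format, and the thresholds are assumptions chosen for the example.

```python
from collections import defaultdict

def flag_coordination(posts, window_secs=300, min_accounts=3):
    """Flag message texts posted verbatim by several distinct accounts
    within a short time window -- a simple coordination signal.

    posts: list of (account_id, timestamp_secs, text) tuples.
    Returns a list of (normalized_text, accounts) pairs that trip the threshold.
    """
    # Group posting events by normalized message text.
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((ts, account))

    flagged = []
    for text, events in by_text.items():
        events.sort()  # time order
        left = 0
        # Slide a window over the time-sorted events; flag the text once
        # enough distinct accounts appear inside one window.
        for right in range(len(events)):
            while events[right][0] - events[left][0] > window_secs:
                left += 1
            accounts = {a for _, a in events[left:right + 1]}
            if len(accounts) >= min_accounts:
                flagged.append((text, sorted(accounts)))
                break
    return flagged

posts = [
    ("a1", 0, "Vote is rigged!"),
    ("a2", 60, "Vote is rigged!"),
    ("a3", 120, "Vote is rigged!"),
    ("b1", 0, "Nice weather today"),
]
print(flag_coordination(posts))
# → [('vote is rigged!', ['a1', 'a2', 'a3'])]
```

Real detection pipelines would add near-duplicate matching, account-history features, and network analysis, but the window-and-threshold pattern above is the common starting point.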