NSF Org: CNS Division Of Computer and Network Systems
Recipient:
Initial Amendment Date: March 15, 2021
Latest Amendment Date: May 21, 2021
Award Number: 2103829
Award Instrument: Standard Grant
Program Manager: Karen Karavanic, kkaravan@nsf.gov, (703) 292-2594, CNS Division Of Computer and Network Systems, CSE Direct For Computer & Info Scie & Enginr
Start Date: June 1, 2021
End Date: May 31, 2025 (Estimated)
Total Intended Award Amount: $498,618.00
Total Awarded Amount to Date: $498,618.00
Funds Obligated to Date:
History of Investigator:
Recipient Sponsored Research Office: 1000 OLD MAIN HL, LOGAN, UT, US 84322-1000, (435) 797-1226
Sponsor Congressional District:
Primary Place of Performance: 4205 Old Main Hill, Logan, UT, US 84322-4205
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source:
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Insiders are malicious people within organizations who abuse their authorized access in a manner that compromises the confidentiality, integrity, or availability of information systems. Attacks from insiders are hard to detect and can cause significant loss to organizations. While the problem of insider threat detection has been studied for a long time, traditional machine learning-based detection approaches, which rely heavily on feature engineering, struggle to accurately capture the behavioral difference between insiders and normal users due to the dynamic and adaptive nature of insider threats. Advanced deep learning techniques provide a new paradigm for learning end-to-end insider threat detection models from complex user behavior data. This project develops a deep learning framework for insider threat detection. The project's novelties are the development of self-supervised user behavior representation learning, few-shot learning for malicious session detection, reinforcement learning for adaptive behavior detection, and counterfactual-explanation-based malicious activity detection. The project's broader significance and importance are to provide a novel toolset for detecting and mitigating internal security risks, which can benefit industries and governments that are frequently under attack from malicious insiders.
This project develops novel deep learning approaches to detect malicious sessions through a) developing a self-supervised representation learning approach to encode user sessions into a low-dimensional embedding space without using any manually labeled data, b) advancing a few-shot learning framework via disentangled representation learning to detect malicious sessions with subtle activity changes, c) adapting a reinforcement learning framework to identify dynamically evolving insider attacks, and d) proposing a counterfactual explanation approach to detect malicious activities within malicious sessions. The framework has the potential to extend to different types of fraud detection.
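To make the first idea concrete, the following is a minimal sketch of self-supervised session embedding: activity vectors are learned from co-occurrence structure alone (no labels), sessions are embedded as the mean of their activity vectors, and a session far from the centroid of normal sessions is flagged as suspicious. The activity vocabulary, the toy sessions, and the SVD-based embedding are all illustrative assumptions, not the project's actual model.

```python
import numpy as np

# Hypothetical activity vocabulary; a real system would parse
# logon/file/email/web audit logs into such tokens.
ACTIVITIES = ["logon", "logoff", "email_send", "file_copy", "usb_insert", "web_visit"]
IDX = {a: i for i, a in enumerate(ACTIVITIES)}

def cooccurrence_matrix(sessions, window=2):
    """Count how often activities co-occur within a sliding window.
    Self-supervised: the only signal is session structure, no labels."""
    V = len(ACTIVITIES)
    C = np.zeros((V, V))
    for sess in sessions:
        ids = [IDX[a] for a in sess]
        for i in range(len(ids)):
            for j in range(max(0, i - window), min(len(ids), i + window + 1)):
                if i != j:
                    C[ids[i], ids[j]] += 1.0
    return C

def activity_embeddings(C, dim=3):
    """Low-dimensional activity vectors via truncated SVD of the
    log-scaled co-occurrence counts (a simple stand-in for a
    learned encoder)."""
    U, S, _ = np.linalg.svd(np.log1p(C), full_matrices=False)
    return U[:, :dim] * S[:dim]

def embed_session(sess, E):
    """Session embedding = mean of its activity vectors."""
    return np.mean([E[IDX[a]] for a in sess], axis=0)

# Unlabeled "normal" sessions (toy data).
normal = [["logon", "email_send", "web_visit", "logoff"]] * 20
E = activity_embeddings(cooccurrence_matrix(normal))
center = np.mean([embed_session(s, E) for s in normal], axis=0)

# A typical session sits near the normal centroid; a session with
# unusual activities (bulk file copies to USB) lands farther away.
usual = embed_session(["logon", "web_visit", "email_send", "logoff"], E)
odd = embed_session(["logon", "usb_insert", "file_copy", "file_copy", "logoff"], E)
print(np.linalg.norm(usual - center) < np.linalg.norm(odd - center))  # True
```

In practice the project's representation learner would be a trained sequence model rather than an SVD, but the workflow is the same: embed unlabeled sessions, then score new sessions by distance from normal behavior.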
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from this site.
Please report errors in award information by writing to: awardsearch@nsf.gov.