Patent Analysis of Security Agent

Updated: 12 June 2019

Patent Registration Data

Publication Number: US10002250
Application Number: US15/393797
Application Date: 29 December 2016
Publication Date: 19 June 2018
Current Assignee: CROWDSTRIKE, INC.
Original Assignee (Applicant): CROWDSTRIKE, INC.
International Classification: G06F21/56, G06N5/04
Cooperative Classification: G06F21/566, G06N5/04, G06F9/46, G06F21/554, G06F21/56
Inventors: DIEHL, DAVID F.; ALPEROVITCH, DMITRI; IONESCU, ION-ALEXANDRU; KURTZ, GEORGE ROBERT

Patent Images

This patent contains figures and images illustrating the invention and its embodiments.


Abstract

A security agent is described herein. The security agent is configured to observe events, filter the observed events using configurable filters, route the filtered events to one or more event consumers, and utilize the one or more event consumers to take action based at least on one of the filtered events. In some implementations, the security agent detects a first action associated with malicious code, gathers data about the malicious code, and in response to detecting subsequent action(s) of the malicious code, performs a preventative action. The security agent may also deceive an adversary associated with malicious code. Further, the security agent may utilize a model representing chains of execution activities and may take action based on those chains of execution activities.


Claims

1. A computer-implemented method comprising: detecting a first action associated with malicious code; responsive to detecting the first action, while refraining from taking a preventative action, gathering data associated with the first action; subsequently, detecting one or more subsequent actions associated with malicious code, the one or more subsequent actions occurring after the first action; and in response to detecting the first action and the one or more subsequent actions, performing the preventative action.

2. The method of claim 1, wherein the preventative action comprises preventing the one or more subsequent actions and further actions by the malicious process or deceiving an adversary associated with the malicious code.

3. The method of claim 1, further comprising storing the gathered data in a model that tracks actions taken by processes of a system which executed the first action.

4. The method of claim 3, further comprising: observing execution activities of the processes, the execution activities of the one or more processes comprising the first action and the one or more subsequent actions; storing data associated with the one or more execution activities in the model, the model representing one or more chains of execution activities; and performing the preventative action further based at least in part on the one or more chains of execution activities.

5. The method of claim 1, further comprising providing the gathered data to a remote security system.

6. The method of claim 5, further comprising, after providing the gathered data to the remote security system, receiving, from the remote security system, at least one of: instructions associated with the preventative action; or a configuration update for configuring a security agent that performs the detecting, gathering, and performing.

7. The method of claim 6, further comprising: receiving the configuration update comprising a configuration of a configurable filter; and detecting a second action associated with malicious code based at least in part on the configurable filter.

8. The method of claim 1, wherein the detecting, gathering, and performing are performed by a kernel-level security agent that utilizes configurable filters.

9. The method of claim 1, wherein detecting the first action or the one or more subsequent actions comprises observing events associated with multiple processes or threads in parallel.

10. One or more tangible computer-readable storage devices storing computer-executable instructions configured to implement a security agent on a computing device, the security agent performing operations comprising: observing an event associated with a process executing on the computing device; determining, based at least in part on the observed event and on a model that tracks processes of the computing device, that the process is associated with malicious code, wherein the determining comprises: observing execution activities of one or more processes of the computing device, the one or more processes including the process and the execution activities including the event; and storing data associated with the one or more execution activities in the model, the model representing one or more chains of execution activities; and responsive to the determining, deceiving an adversary associated with the malicious code based at least in part on the one or more chains of execution activities.

11. The one or more tangible computer-readable storage devices of claim 10, wherein the deceiving comprises falsifying data acquired by the malicious code.

12. The one or more tangible computer-readable storage devices of claim 10, wherein the deceiving comprises falsifying data transmitted to the adversary.

13. The one or more tangible computer-readable storage devices of claim 10, wherein the security agent utilizes configurable filters.

14. The one or more tangible computer-readable storage devices of claim 10, wherein the operations further comprise providing gathered data associated with the observed event to a remote security system.

15. The one or more tangible computer-readable storage devices of claim 14, wherein the operations further comprise, after providing the gathered data to the remote security system, receiving, from the remote security system, at least one of: instructions associated with the preventative action; or a configuration update for configuring a security agent that performs the detecting, gathering, and performing.

16. The one or more tangible computer-readable storage devices of claim 10, wherein the operations further comprise preventing an action by the process.

17. A method implemented by a security agent of a computing device, the method comprising: observing execution activities of at least two processes of the computing device; storing first data associated with a first execution activity of the execution activities in a model of the security agent; storing second data associated with a second execution activity of the execution activities in the model of the security agent, wherein: the model represents one or more chains of execution activities; the one or more chains of execution activities comprise a first chain of execution activities; and the first chain of execution activities comprises the first data and the second data; and taking action based at least in part on the first chain of execution activities.

18. The method of claim 17, wherein at least one of the chains of execution activities represents a genealogy of one of the processes.

19. The method of claim 17, wherein the taking action comprises halting or deceiving a process of the at least two processes, the process being associated with malicious activity.

20. The method of claim 17, further comprising providing at least some of the stored data to a remote security system.

21. The method of claim 20, further comprising receiving, in response to providing the at least some of the stored data to the remote security system, instructions associated with the action or a configuration update for configuring the security agent.

22. The method of claim 21, further comprising: receiving the configuration update comprising a configuration of a configurable filter; and observing an execution activity based at least in part on the configurable filter.

23. The method of claim 17, wherein the security agent utilizes configurable filters.



Description

BACKGROUND

With Internet use forming an ever-greater part of day-to-day life, malicious software—often called “malware”—that steals or destroys system resources, data, and private information is an increasing problem. Governments and businesses devote significant resources to preventing intrusions by malware. Malware comes in many forms, such as computer viruses, worms, Trojan horses, spyware, keystroke loggers, adware, and rootkits. Some of the threats posed by malware are of such significance that they are described as cyber terrorism or industrial espionage.

Current approaches to these threats include traditional antivirus software, such as Symantec Endpoint Protection, that utilizes signature-based and heuristic techniques to detect malware. These techniques involve receiving malware definitions from a remote security service and scanning a host device on which the antivirus software is implemented for files matching the received definitions.

There are a number of problems with traditional antivirus software, however. Purveyors of malware are often able to react more quickly than vendors of security software, updating the malware to avoid detection. Also, there are periods of vulnerability when new definitions are implemented or when the security software itself is updated. During these periods of vulnerability, there is currently nothing to prevent the intrusion and spread of the malware. Further, antivirus software tends to be a user-mode application that loads after the operating system, giving malware a window in which to avoid detection.

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items or features.

FIG. 1 illustrates an example network connecting a computing device configured with a security agent, e.g., a kernel-level security agent, to a security service cloud.

FIG. 2 illustrates an example architecture of the security agent, e.g., kernel-level security agent, used in the network of FIG. 1, including a model, components, and managers.

FIG. 3 illustrates an example process implemented by the security agent, e.g., kernel-level security agent, used in the network of FIG. 1 for detecting a first action associated with malicious code, refraining from preventative action while gathering data, and, upon detecting subsequent action(s) associated with the malicious code, performing a preventative action.

FIG. 4 illustrates an example process implemented by the security agent, e.g., kernel-level security agent, used in the network of FIG. 1 for observing events, determining that the events are associated with malicious code, and deceiving an adversary associated with the malicious code.

FIG. 5 illustrates an example process implemented by the security agent, e.g., kernel-level security agent, used in the network of FIG. 1 for observing execution activities, storing data associated with the execution activities in a model that represents chains of execution activities, and taking action based on the chains of execution activities.

DETAILED DESCRIPTION

Overview

This disclosure describes, in part, a security agent, e.g., a kernel-level security agent, that operates on a host computing device, including mobile and embedded systems, as a virtual machine/shadow operating system. The kernel-level security agent loads before the operating system of the host computing device. In fact, the kernel-level security agent is loaded very early in the boot-time of the host computing device, by some of the first few dozen instructions. By loading early in boot-time, the kernel-level security agent significantly reduces the window in which malware can become active and interfere with operation of the host computing device or run unobserved on the host computing device. In some embodiments, by leveraging hardware-based security features, the agent can also validate the integrity of its computing operations and data and additionally enhance the level of security provided.

In various embodiments, the kernel-level security agent may be installed on the host computing device in the form of a driver and may be received from a security service. Such a security service may be implemented as a cloud of security service devices (referred to herein as a “security service cloud” or a “remote security system”). In addition to installing the kernel-level security agent, the security service cloud may receive notifications of observed events from the kernel-level security agent, may perform analysis of data associated with those events, may perform healing of the host computing device, and may generate configuration updates and provide those updates to the kernel-level security agent. These interactions between the kernel-level security agent and the security service cloud enable a detection loop that defeats the malware update loop of malware developers (also referred to herein as “adversaries”) and further enable the kernel-level security agent to perform updating while continuously monitoring, eliminating dangerous gaps in security coverage. Also, as used herein, the term “adversaries” includes not only malware developers but also exploit developers, builders and operators of an attack infrastructure, those conducting target reconnaissance, those executing the operation, those performing data exfiltration, and/or those maintaining persistence in the network, etc. Thus the “adversaries” can include numerous people that are all part of an “adversary” group. Also, the detection loop is focused on defeating not just the malware update loop but all aspects of this attack—the changing of the malware, the changing of the exploits, attack infrastructure, persistence tools, attack tactics, etc.

The detection loop of the kernel-level security agent and security service cloud is enabled by an agent architecture designed in accordance with the principles of the well-known OODA loop (i.e., the observe-orient-decide-act loop). Rather than using fixed signatures to make quick determinations and responses, the kernel-level security agent observes and analyzes all semantically interesting events that occur on the host computing device. Kernel-level security agent components known as collectors receive notifications of these semantically interesting events (e.g., file writes and the launching of executables) from host operating system hooks or filter drivers, from user-mode event monitors, or from threads monitoring log files or memory locations. These events may then be filtered using configurable filters of the kernel-level security agent and routed/dispatched to event consumers of the kernel-level security agent, such as correlators or actor components. A correlator component notes the fact of the occurrence of the filtered events. An actor component may, for example, gather forensic data associated with an event and update a situation model of the kernel-level security agent with the forensic data.

The situation model represents chains of execution activities and genealogies of processes, tracking attributes, behaviors, or patterns of processes executing on the host computing device and enabling an event consumer of the kernel-level security agent to determine when an event is interesting. Upon determining an occurrence of such an interesting event, the event consumer can perform any or all of: updating the situational model and performing further observation; generating an event to represent the determination that an interesting event has occurred; notifying the security service cloud of the interesting event; or healing the host computing device by halting execution of a process associated with malicious code or deceiving an adversary associated with the malicious code. In various embodiments, any or all of the observing, filtering, routing/dispatching, and utilizing of event consumers may occur in parallel with respect to multiple events.
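The observe, filter, and route steps described above can be sketched as a small pipeline. This is a minimal illustration only; the class, event type, and consumer names are assumptions invented for the sketch, not part of the patent or any CrowdStrike product.

```python
from dataclasses import dataclass

@dataclass
class Event:
    event_type: str   # e.g. "file_write" or "process_launch" (illustrative)
    process_id: int
    detail: str

class EventPipeline:
    """Minimal sketch of the observe/filter/route loop."""

    def __init__(self, interesting_types):
        # Configurable filters: the event types the agent cares about.
        self.filters = set(interesting_types)
        # Event consumers (e.g. correlators, actor components) per event type.
        self.consumers = {}

    def register_consumer(self, event_type, consumer):
        self.consumers.setdefault(event_type, []).append(consumer)

    def observe(self, event):
        # Filter the observed event using the configurable filters.
        if event.event_type not in self.filters:
            return False
        # Route/dispatch the filtered event to its consumers.
        for consumer in self.consumers.get(event.event_type, []):
            consumer(event)
        return True

pipeline = EventPipeline({"file_write", "process_launch"})
seen = []
pipeline.register_consumer("file_write", seen.append)
pipeline.observe(Event("file_write", 42, "wrote C:\\pagefile"))
pipeline.observe(Event("window_move", 42, "filtered out"))
```

Because the filters are plain data, a configuration update from the security service cloud could change them without reloading the consumers.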

By looping based on significant events and chains of execution activities of the host computing device rather than on fixed signatures, the kernel-level security agent is able to better detect processes associated with malicious code. While adversaries can easily change malware to avoid signature-based detection, it is significantly more difficult to avoid detection by an agent that monitors and analyzes significant events. Further, by observing events for some time, and not immediately performing preventative action in response to detecting an action associated with malicious code, the kernel-level security agent may fool adversaries into thinking that the malware is working and, when the malware is later halted or deceived, the adversaries may first think to debug their own malware.
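The gather-first, prevent-later behavior described above can be sketched as a tiny state machine. The action strings and return values here are invented for illustration; real detection would of course involve far richer state.

```python
class DelayedPrevention:
    """Sketch: on the first suspicious action, refrain from prevention
    and only gather data; once a subsequent suspicious action arrives,
    perform the preventative action."""

    def __init__(self):
        self.first_action_seen = False
        self.gathered = []   # forensic data collected while refraining

    def on_suspicious_action(self, action):
        self.gathered.append(action)
        if not self.first_action_seen:
            self.first_action_seen = True
            return "gather"   # observe quietly; the malware appears to work
        return "prevent"      # subsequent action: halt or deceive now

agent = DelayedPrevention()
first = agent.on_suspicious_action("open system file")
second = agent.on_suspicious_action("overwrite system file")
```

The adversary sees the first action succeed, which is exactly the deception the paragraph describes: a later failure looks like a bug in the malware rather than a detection.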

In various embodiments, as mentioned, the kernel-level security agent performs updating while continuously monitoring, eliminating dangerous gaps in security coverage. Responsive to receiving a configuration update from the security service cloud, a configuration manager of the kernel-level security agent may invoke a component manager of the kernel-level security agent to load a new component that updates or replaces an existing component. The existing component continues to participate in threat detection while the new component loads, thus ensuring uninterrupted threat detection.
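The gap-free update described here can be sketched as a manager that swaps in a new component only after it has finished loading. The component and function names are illustrative assumptions, not the patent's.

```python
class ComponentManager:
    """Sketch of a gap-free update: the existing component keeps
    handling events until the new component is fully loaded."""

    def __init__(self, component):
        self.active = component

    def update(self, load_new_component):
        # The old component is still self.active while the new one loads,
        # so threat detection continues throughout the update.
        new_component = load_new_component()
        self.active = new_component   # single swap: no coverage gap

def detector_v1(event):
    return "v1 handled " + event

def detector_v2(event):
    return "v2 handled " + event

manager = ComponentManager(detector_v1)
before = manager.active("event-a")   # handled by the existing component
manager.update(lambda: detector_v2)
after = manager.active("event-b")    # handled by the replacement
```

The key design point is the ordering: loading completes first, and only then does the single reference swap occur, so there is no interval with no detector installed.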

In some embodiments, the kernel-level security agent includes an integrity manager that performs threat detection while core components of the kernel-level security agent, or the managers themselves, are updated or replaced. Thus, once the kernel-level security agent is installed, some components or managers of the kernel-level security agent are continually involved in detecting threats to the host computing device.

In some embodiments, a kernel-level security agent is described herein. The kernel-level security agent is configured to observe events, filter the observed events using configurable filters, route the filtered events to one or more event consumers, and utilize the one or more event consumers to take action based at least on one of the filtered events. In some implementations, the kernel-level security agent detects a first action associated with malicious code, gathers data about the malicious code, and in response to detecting subsequent action(s) of the malicious code, performs a preventative action. The kernel-level security agent may also deceive an adversary associated with malicious code. Further, the kernel-level security agent may utilize a model representing chains of execution activities and may take action based on those chains of execution activities.

Example Network and Devices

FIG. 1 illustrates an example network connecting a computing device configured with a security agent, e.g., a kernel-level security agent, to a security service cloud that provides configuration, analysis, and healing to the computing device through the kernel-level security agent. As illustrated in FIG. 1, a computing device 102 may interact with a security service cloud 104 over a network 106. In addition to components such as processors 108, network interfaces 110, and memory 112, the computing device 102 may implement a kernel-level security agent 114, which is shown stored in the memory 112 and executable by the processor(s) 108. The kernel-level security agent 114 may include components 116 to observe events and determine actions to take based on those events, a situational model 118 to track attributes and behaviors of processes of the computing device 102, managers 120 to update the components 116 and provide continual detection during updates, and a communications module 122 to communicate with the security service cloud 104. In addition to the kernel-level security agent 114, the computing device 102 may include an operating system 124, processes 126, and log files 128.

In various embodiments, devices of the security service cloud 104 may also include processors 130, network interfaces 132, and memory 134. The memory 134 may store a communications module 136 to communicate with the kernel-level security agent 114 of the computing device 102, an analysis module 138 to evaluate interesting events identified by the kernel-level security agent 114, a configuration module 140 to generate and provide configuration updates to the kernel-level security agent 114, a healing module 142 to halt or deceive malware executing on the computing device 102, a social module 144 to notify other computing devices or users of the malware detected on the computing device 102, and an administrative user interface (UI) 146 to enable an administrator associated with the security service cloud 104 to view notifications of observed events and make decisions regarding appropriate responses to those events.

In various embodiments, the computing device 102 and devices of the security service cloud 104 may each be or include a server or server farm, multiple distributed server farms, a mainframe, a work station, a personal computer (PC), a laptop computer, a tablet computer, a personal digital assistant (PDA), a cellular phone, a media center, an embedded system, or any other sort of device or devices. In one implementation, the computing devices of the security service cloud 104 represent a plurality of computing devices working in communication, such as a cloud computing network of nodes. When implemented on multiple computing devices, the security service cloud 104 may distribute the modules and data 136-146 of the security service cloud 104 among the multiple computing devices. In some implementations, one or more of the computing devices of the computing device 102 or the security service cloud 104 represents one or more virtual machines implemented on one or more computing devices.

In some embodiments, the network 106 may include any one or more networks, such as wired networks, wireless networks, and combinations of wired and wireless networks. Further, the network 106 may include any one or combination of multiple different types of public or private networks (e.g., cable networks, the Internet, wireless networks, etc.). In some instances, the computing device 102 and the security service cloud 104 communicate over the network using a secure protocol (e.g., HTTPS) and/or any other protocol or set of protocols, such as the transmission control protocol/Internet protocol (TCP/IP).

As mentioned, the computing device 102 includes processor(s) 108 and network interface(s) 110. The processor(s) 108 may be or include any sort of processing unit, such as a central processing unit (CPU) or a graphic processing unit (GPU). The network interface(s) 110 allow the computing device 102 to communicate with one or both of the security service cloud 104 and other devices. The network interface(s) 110 may send and receive communications through one or both of the network 106 or other networks. The network interface(s) 110 may also support both wired and wireless connection to various networks.

The memory 112 (and other memories described herein) may store an array of modules and data, and may include volatile and/or nonvolatile memory, removable and/or non-removable media, and the like, which may be implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Such memory includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device.

As mentioned, the memory 112 includes a kernel-level security agent 114. The kernel-level security agent 114 operates as a virtual machine/shadow operating system. The kernel-level security agent 114 loads before the operating system 124 of the computing device 102. In fact, the kernel-level security agent 114 is loaded very early in the boot-time of the computing device 102, by some of the first few dozen instructions.

As illustrated in FIG. 1, the kernel-level security agent 114 includes the components 116, which observe events and determine appropriate action(s) to take based on those events and on the situational model 118; the situational model 118 itself; the managers 120, which receive configuration updates from the security service cloud 104 and perform the updates while continuing to observe events; and the communications module 122, which communicates with the security service cloud 104. The kernel-level security agent 114 may also include a hypervisor or one or more pre-boot components. The pre-boot components may or may not leverage hardware-provided security features. These modules and data 116-122 of the kernel-level security agent 114 are described in further detail below with reference to the kernel-level security agent architecture 200 of FIG. 2.

As is further shown in FIG. 1, the memory 112 includes an operating system 124 of the computing device 102. The operating system 124 may include hooks or filter drivers that allow other processes, such as the kernel-level security agent 114, to receive notifications of the occurrence or non-occurrence of events such as file creates, reads, and writes, the launching of executables, etc. The memory 112 also includes processes 126 and log files 128 that are monitored by the kernel-level security agent 114.

As mentioned, the devices of the security service cloud 104 include processor(s) 130 and network interface(s) 132. The processor(s) 130 may be or include any sort of processing units, such as central processing units (CPU) or graphic processing units (GPU). The network interface(s) 132 allow the devices of the security service cloud 104 to communicate with one or both of the computing device 102 and other devices. The network interface(s) 132 may send and receive communications through one or both of the network 106 or other networks. The network interface(s) 132 may also support both wired and wireless connection to various networks.

The memory 134 (and other memories described herein) may store an array of modules and data, and may include volatile and/or nonvolatile memory, removable and/or non-removable media, and the like, which may be implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Such memory includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device.

As mentioned, the memory 134 includes a communications module 136. The communications module may comprise any one or more protocol stacks, such as a TCP/IP stack, device drivers to network interfaces 132, and any other modules or data that enable the devices of the security service cloud 104 to send and receive data over network 106.

In various embodiments, the analysis module 138 may receive notifications of interesting events from kernel-level security agents 114 of computing devices, as well as forensic data associated with those interesting events. Upon receiving notification of an interesting event, the analysis module 138 may determine whether related notifications have been received from other kernel-level security agents 114 of other computing devices 102. Also or instead, the analysis module 138 may evaluate the interesting event based on one or more rules or heuristics. The analysis module 138 may determine that an interesting event may be associated with malicious code based on these determinations and evaluations and may, in response, perform any or all of generating an event and providing the event to the computing device 102 (e.g., for diagnostic or healing purposes), invoking the configuration module 140 to trigger a configuration update, invoking the healing module 142 to perform healing of computing devices 102 associated with the interesting event or deceiving of an adversary associated with the malicious code, or invoking the social module 144 to notify entities or persons associated with other computing devices 102 of the potential malicious code. The analysis module 138 may also maintain and utilize one or more models, such as models specific to individual computing devices 102, to types of computing devices, to entities, or to a generic device. The analysis module 138 may update these models based on the received notifications and utilize the models in analyzing the interesting events. Additionally, the analysis module 138 may alert an administrator associated with the security service cloud 104 through the administrator UI 146.
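One heuristic from this paragraph (treating the same interesting event reported by several distinct hosts as likely malicious) can be sketched as follows. The threshold value and the signature strings are illustrative assumptions, not taken from the patent.

```python
class CrossHostAnalysis:
    """Sketch: an interesting event reported by enough distinct hosts
    is flagged as likely associated with malicious code."""

    def __init__(self, host_threshold=2):
        self.host_threshold = host_threshold
        self.hosts_by_signature = {}   # event signature -> set of hosts

    def receive_notification(self, host_id, signature):
        hosts = self.hosts_by_signature.setdefault(signature, set())
        hosts.add(host_id)
        # Flag once the same event has been seen on enough distinct hosts.
        return len(hosts) >= self.host_threshold

cloud = CrossHostAnalysis()
a = cloud.receive_notification("host-1", "unsigned-driver-load")
b = cloud.receive_notification("host-2", "unsigned-driver-load")
```

In the patent's terms, a positive return here would correspond to invoking the configuration, healing, or social modules, or alerting the administrator.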

In various embodiments, the configuration module 140 stored in memory 134 may generate configuration updates and provide those updates through the communications module 136. The configuration module 140 may generate device-specific configuration updates or configuration updates applicable to multiple devices. The configuration module 140 may also be referred to as an ontology compiler and may be configured to provide security policies specific to the hardware, OS, and language constraints of different computing devices 102. The configuration updates may include both updates responsive to interesting events and updates to the modules and data 116-122 comprising the kernel-level security agents 114. The configuration module 140 may generate and provide configuration updates responsive to a notification from the computing device 102 or independently of any prior notification from the computing device 102.
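
One possible shape for such a configuration update is sketched below. The field names (`target`, `collectors`, `filters`) are assumptions chosen for illustration; the patent does not specify a wire format.

```python
def build_config_update(device_id=None, collectors=(), filters=()):
    """Build a device-specific or fleet-wide configuration update (sketch)."""
    return {
        # None targets all devices; a device id makes the update device-specific.
        "target": device_id,
        "collectors": list(collectors),   # which collectors the agent should load
        "filters": list(filters),         # configurable filter rules to apply
    }

update = build_config_update(
    "dev-42",
    collectors=["file_create"],
    filters=[{"event": "file_create", "path_prefix": "C:\\Windows"}],
)
assert update["target"] == "dev-42"
```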

The healing module 142 may determine appropriate remedies to events determined to be associated with malicious code. For example, the healing module 142 may determine that an appropriate remedy is to halt a process associated with malicious code, to remove one or more executables, files, or registry keys, or to deceive malicious code by having it write to a dummy file rather than an operating system file, having it read falsified data, or falsifying a transmission associated with the malicious code. The healing module 142 may then instruct the kernel-level security agent 114 to perform the determined remedy. In some embodiments, the healing module 142 may provide the instructions via an event generated by the healing module 142 and provided to the kernel-level security agent 114.

In various embodiments, the social module 144 may share notifications of events determined to be associated with malicious code with individuals at other entities. The malicious code may not have affected the other entities yet, but they may be interested in learning about the malicious code. For example, if the malicious code affects devices of one defense department contractor, other defense department contractors may desire to know about the malicious code, as they may be more likely to be affected by it. The social module 144 may share notifications of malicious code and other information about the malicious code if both entities—the affected entity and the interested entity—agree to the sharing of the notifications.

In further embodiments, the administrative UI 146 may enable an administrator of the security service cloud 104 to be alerted to events determined to be associated with malicious code, to examine the data associated with those events, and to instruct the security service cloud 104 regarding an appropriate response. The administrative UI 146 may also enable an examination of the events and associated data by the administrator without first providing an alert.

In some instances, any or all of the computing device 102 or the devices 104 of the security service cloud 104 may have features or functionality in addition to those that FIG. 1 illustrates. For example, any or all of the computing device 102 or the devices 104 of the security service cloud 104 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. The additional data storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. In addition, some or all of the functionality described as residing within any or all of the computing device 102 or the devices 104 of the security service cloud 104 may reside remotely from that/those device(s), in some implementations.

Example Agent Architecture

FIG. 2 illustrates an example architecture of the kernel-level security agent, including a model, components, and managers. As illustrated, the security agent architecture 200 of the kernel-level security agent 114 includes modules and data 116-122 in a kernel mode 202 of the computing device 102 and components 116 in a user mode 204 of the computing device 102. In other embodiments, the security agent architecture 200 may only include the modules and data 116-122 in the kernel mode 202. The kernel mode 202 and user mode 204 correspond to protection domains—also known as rings—that protect data and functionality of the computing device 102 from faults and malware. Typically, a user mode, such as user mode 204, is associated with the outermost ring and the least level of privileges to access memory and functionality. This ring is often referred to as “ring 3” and includes many application processes. A kernel mode, such as kernel mode 202, is associated with an inner ring (sometimes the innermost ring, although in modern computing devices there is sometimes an additional level of privilege, a “ring-1”) and a higher level of privileges to access memory and functionality. This ring is often referred to as “ring 0” and typically includes operating system processes.

In various embodiments, the security agent architecture 200 includes collectors 206. These collectors 206 are components 116 of the kernel-level security agent 114 that observe events associated with one or more processes, such as kernel mode processes. Events may include both actions performed by processes and non-occurrence of expected actions. For example, a collector 206 may register with a hook or filter driver offered by the operating system 124 to receive notifications of the occurrence or non-occurrence of certain events, such as file creates, reads and writes, and loading executables. A collector 206 may also monitor locations in memory 112 or log files 128, or spawn a thread to do so, observing events associated with the log files or memory locations. A collector 206 may observe multiple kinds of events, or each type of event may be associated with a different collector 206. The events observed by the collectors 206 may be specified by a configuration of the kernel-level security agent 114. In some embodiments, the collectors 206 observe all events on the computing device 102 and the configuration specifies configurable filters 214 for filtering and dispatching those events. In other embodiments, the configuration specifies which collectors 206 should be loaded to observe specific types of events. In yet other embodiments, the configuration both specifies which collectors 206 should be loaded and configurable filters 214 for filtering and dispatching events observed by those collectors 206.
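
A collector of this kind can be sketched as a callback registered for a set of event types that forwards matching events to the filtering and dispatch machinery. The OS hook is simulated by a plain method call here; the `Collector` name and event dictionary shape are assumptions for this sketch.

```python
class Collector:
    """Observes configured event types and forwards them for filtering/dispatch."""

    def __init__(self, event_types, dispatch):
        self.event_types = set(event_types)   # events this collector observes
        self.dispatch = dispatch              # filtering-and-dispatch callback

    def on_event(self, event):
        # Forward only the event types this collector is configured to observe;
        # in a real agent this callback would be driven by an OS hook or
        # filter driver rather than called directly.
        if event["type"] in self.event_types:
            self.dispatch(event)

seen = []
collector = Collector({"file_write", "exe_load"}, seen.append)
collector.on_event({"type": "file_write", "path": "/tmp/x"})
collector.on_event({"type": "file_read", "path": "/etc/passwd"})  # not observed
assert [e["type"] for e in seen] == ["file_write"]
```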

As is further shown in FIG. 2, the security agent architecture 200 may include user mode collectors 208 to observe events that may not be visible to kernel mode processes. Such events could include, for example, rendering of display graphics for display on a display screen of the computing device 102. To observe these events, the kernel-level security agent 114 is further configured to load user mode collectors 208 as user mode modules of the computing device 102. Like collectors 206, user mode collectors 208 may observe multiple kinds of events, or each type of event may be associated with a different user mode collector 208. The events observed by the user mode collectors 208 may be specified by a configuration of the kernel-level security agent 114. In some embodiments, the user mode collectors 208 observe all user mode events on the computing device 102 and the configuration specifies configurable filters 210 for filtering and dispatching the events. In other embodiments, the configuration specifies which user mode collectors 208 should be loaded to observe specific types of events. In yet other embodiments, the configuration both specifies which user mode collectors 208 should be loaded and configurable filters 210 for filtering and dispatching those events.

As mentioned, the security agent architecture may further include configurable filters 210. The configurable filters 210 may be user mode components 116 of the kernel-level security agent 114 that filter user mode events observed by the user mode collectors 208 based on the configuration of the kernel-level security agent 114. The configurable filters 210 may perform any filtering of the user mode events that does not require querying the situational model 118, so as to maximize the filtering of user mode events performed in the user mode 204. Maximizing the filtering performed in the user mode 204 minimizes the number of observed user mode events that are transferred from user mode 204 to kernel mode 202 and thus conserves resources of the computing device 102.

In some embodiments, the filtered user mode events are transmitted between the user mode 204 and the kernel mode 202 by an input/output (I/O) mechanism 212 of the kernel-level security agent 114. The I/O mechanism 212 may be, for example, a ring buffer or other known mechanism for transmitting data between protection domains. In some embodiments, the I/O mechanism 212 is not a component of the kernel-level security agent 114 but, rather, is part of the other modules and data of the computing device 102.
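
A toy single-producer ring buffer standing in for the I/O mechanism 212 is sketched below; a real implementation would live in memory shared between the protection domains, with synchronization the sketch omits. Class and method names are assumptions.

```python
from collections import deque

class RingBuffer:
    """Fixed-capacity buffer: user mode pushes events, kernel mode drains them."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)  # oldest events dropped when full

    def push(self, event):
        # User mode side: append; at capacity the oldest entry is overwritten.
        self.buf.append(event)

    def drain(self):
        # Kernel mode side: take everything currently buffered.
        events, self.buf = list(self.buf), deque(maxlen=self.buf.maxlen)
        return events

rb = RingBuffer(capacity=2)
for e in ("a", "b", "c"):
    rb.push(e)
assert rb.drain() == ["b", "c"]   # "a" was overwritten at capacity
```

Dropping the oldest events under pressure is one policy choice; an agent could instead block the producer or signal overflow.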

In various embodiments, a filtering and dispatch component 214 (representative of configurable filters 214, each associated with one or more of the collectors 206, routing component 216, correlators 218, situational model 118, actors 220, and/or communications module 122) receives observed events from the collectors 206 and user mode events via the I/O mechanism 212. While FIG. 2 illustrates the filtering and dispatch component 214 as being logically associated with the routing component 216, the filtering and dispatch component 214 may instead comprise one or more components (e.g., configurable filters 214) that are separate from the routing component 216. Upon receiving events, the filtering and dispatch component 214 may perform any filtering specified by the configuration of the kernel-level security agent 114. For example, the configuration may specify filtering of all received events or only of specific types of received events. Such filtering may, in some embodiments, involve querying the situational model 118 to determine attributes, behaviors, patterns, or other descriptions of the process that is associated with the event being filtered. The filtering may also involve application of one or more rules or heuristics of the configuration.

Upon filtering the events, the filtering and dispatch component 214 may dispatch the events using the routing component 216, which may be a throw-forward bus or other type of bus. The routing component 216 may in turn transmit events to any or all of correlators 218, the situational model 118, actors 220, or the communications module 122. In some embodiments, events that are significant in aggregate, but not alone, or events that do not necessitate the kernel-level security agent 114 to copy data associated with the events, are dispatched via the routing component 216 to the correlators 218. In some embodiments, these may be synchronous events that do not utilize a scheduler of the kernel-level security agent 114. In further embodiments, events that are significant in isolation or that necessitate the kernel-level security agent 114 to copy data associated with the events are dispatched via the routing component 216 to a scheduler of the kernel-level security agent 114 for scheduled delivery to the actors 220. As these events are dispatched to a scheduler, they may be asynchronous events.
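
The routing decision described above can be sketched as follows: events significant only in aggregate go synchronously to correlators, while events significant in isolation are queued for scheduled, asynchronous delivery to actors. The `significant_alone` flag and function names are assumptions for this sketch.

```python
import queue

def route(event, correlators, actor_queue):
    """Dispatch one filtered event along the synchronous or asynchronous path."""
    if event.get("significant_alone"):
        # Asynchronous path: queued for scheduled delivery to actors 220.
        actor_queue.put(event)
    else:
        # Synchronous path: correlators 218 see the event immediately.
        for correlator in correlators:
            correlator(event)

counts = []
q = queue.Queue()
route({"type": "file_write"}, [counts.append], q)
route({"type": "exe_create", "significant_alone": True}, [counts.append], q)
assert len(counts) == 1 and q.qsize() == 1
```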

In various embodiments, the correlators 218 are components 116 of the kernel-level security agent 114 that note the fact of the occurrence of specific types of events. Each correlator 218 may be specific to a single type of event or may be associated with multiple types of events. A correlator 218 may note the fact of the occurrence of a filtered event and, based at least in part on an association between the occurrence of the filtered event and at least one of a threshold, a set, a sequence, a Markov chain, or a finite state machine, take an action. For example, a correlator 218 may maintain a counter of the number of occurrences of an event (e.g., ten writes to file X) and, at some threshold, may generate an event to indicate that the number of occurrences of a type of event is potentially interesting. Such a threshold may be a set number specified in the configuration of the kernel-level security agent 114 or may be a number determined by querying the situational model 118 to determine the typical pattern of occurrences of the type of event within a time period. The generated event may indicate the type of observed event and the number of occurrences of the observed event. A correlator 218 that has generated an event may transmit the event via the routing component 216 to any or all of the situational model 118, an actor 220, or the communications module 122. In some embodiments, a configurable filter 214 of the filtering and dispatch component 214 may be used to filter the event generated by the correlator 218.
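
The threshold example above can be sketched as a small counting correlator that emits a generated event when the configured threshold is reached. Names and the generated-event shape are assumptions.

```python
class ThresholdCorrelator:
    """Counts occurrences of one event type; emits an event at the threshold."""

    def __init__(self, event_type, threshold):
        self.event_type = event_type
        self.threshold = threshold
        self.count = 0

    def observe(self, event):
        if event["type"] != self.event_type:
            return None
        self.count += 1
        if self.count == self.threshold:
            # Generated event notes the type and number of occurrences,
            # as the text describes.
            return {"type": "correlated", "of": self.event_type,
                    "occurrences": self.count}
        return None

corr = ThresholdCorrelator("file_write", threshold=3)
results = [corr.observe({"type": "file_write"}) for _ in range(3)]
assert results[:2] == [None, None] and results[2]["occurrences"] == 3
```

A threshold learned by querying the situational model, rather than fixed in configuration, would simply replace the constructor argument.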

In further embodiments, the situational model 118 of the kernel-level security agent 114 may comprise any one or more databases, files, tables, or other structures that track attributes, behaviors, and/or patterns of objects or processes of the computing device 102. These attributes, behaviors, and/or patterns may represent execution activities of processes, and the situational model 118 may represent chains of execution activities providing genealogies of processes. The situational model 118 (also referred to herein as “the model”) stores attributes, behaviors, and/or patterns of events, specific events, and forensic data associated with events. This data and other data stored by the situational model 118 may be indexed by specific events or by specific types of events. The situational model 118 may receive events from the routing component 216 and be updated to include the received events by logic associated with the situational model 118. The situational model 118 may also be updated by actors 220 with forensic data that is associated with events and retrieved by the actors 220. Further, the situational model 118 may be configured to respond to queries from configurable filters 214, correlators 218, or actors 220 with descriptions of attributes, behaviors, and/or patterns of events or with descriptions of specific events.
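
A minimal sketch of the genealogy aspect of such a model follows: it records parent/child process creation so a chain of execution activities can be walked back to a root process. The class and method names, and the use of bare PIDs, are illustrative assumptions.

```python
class SituationalModel:
    """Tracks process spawns so chains of execution activities can be queried."""

    def __init__(self):
        self.parent = {}   # child pid -> parent pid

    def record_spawn(self, parent_pid, child_pid):
        self.parent[child_pid] = parent_pid

    def genealogy(self, pid):
        """Walk the chain of execution activities back to the root process."""
        chain = [pid]
        while pid in self.parent:
            pid = self.parent[pid]
            chain.append(pid)
        return chain

model = SituationalModel()
model.record_spawn(1, 100)    # e.g. a root process spawns a document reader
model.record_spawn(100, 200)  # the reader spawns a dropped executable
assert model.genealogy(200) == [200, 100, 1]
```

A filter or actor could then treat "executable spawned by a document reader" as an interesting link in the chain.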

In various embodiments, actors 220 of the kernel-level security agent 114 receive events from the scheduler of the kernel-level security agent 114. Each actor 220 may be specific to a type of event or may handle multiple types of events. Upon receiving an event, an actor 220 may determine if the event was observed by collectors 206 or user mode collectors 208 or was instead generated by a correlator 218 or security service cloud 104. The actor 220 may gather additional forensic data about the event. Such forensic data may include additional descriptions of the event and may be gathered by interfacing with the operating system 124. Upon gathering the forensic data, the actor 220 may update the situational model 118 with the forensic data. The actor 220 may also query the situational model 118 to determine attributes, behaviors, and/or patterns or other descriptions associated with the event. Based on those attributes, behaviors, and/or patterns, descriptions, or other rules or heuristics specified by the configuration of the kernel-level security agent 114, the actor 220 may determine that the event is interesting in some fashion and/or may be associated with malicious code.

Upon determining that an event is interesting or potentially associated with malicious code, or upon receiving an event generated by a correlator 218 or the security service cloud, an actor 220 may update the situational model 118, may notify the security service cloud 104 of the event, or may heal the computing device 102. As mentioned above, the healing may involve halting a process associated with the event, deleting a process associated with the event (or malicious code associated with that process), or deceiving an adversary associated with malicious code that is in turn associated with the event. Such deceiving may be achieved by falsifying data acquired by the malicious code or by falsifying the data transmitted to the adversary. The action taken may be determined by the configuration of the kernel-level security agent 114. In some embodiments, an actor 220 may perform the healing responsive to receiving instructions from the security service cloud 104 to perform the healing. As mentioned above, such instructions may be provided via an event generated by the security service cloud 104.
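
The falsified-data form of deception can be sketched as a read path that serves decoy contents to a process flagged as malicious while benign processes see the real data. All names and the in-memory file table are illustrative assumptions, not the patent's implementation.

```python
def read_file(path, real_files, flagged, decoy=b"FALSIFIED"):
    """Serve real data to benign processes and decoy data to flagged ones."""
    if flagged:
        # Deception: the adversary's code acquires falsified data.
        return decoy
    return real_files.get(path, b"")

files = {"/etc/secret": b"real-credentials"}
assert read_file("/etc/secret", files, flagged=False) == b"real-credentials"
assert read_file("/etc/secret", files, flagged=True) == b"FALSIFIED"
```

Falsifying outbound transmissions to the adversary would hook the write/send path in the same way.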

In various embodiments, the security agent architecture 200 includes the communications module 122. The communications module 122 may represent network protocol stack(s), network interface driver(s), and any other network interface components utilized by the kernel-level security agent 114 to communicate with the security service cloud 104 over the network 106. The communications module 122 may, as illustrated in FIG. 2, be a kernel mode component of the computing device 102. Further, the communications module 122 may transmit events, other notifications, and data associated with events from the kernel-level security agent 114 to the security service cloud 104. The communications module 122 may also transmit configuration updates received from the security service cloud 104 to a configuration manager 222 of the kernel-level security agent 114 and healing instructions and/or events from the security service cloud 104 to an actor 220.

As shown in FIG. 2, the security agent architecture includes a plurality of managers 120: a configuration manager 222, a component manager 224, a state manager 226, a storage manager 228, and an integrity manager 230. As mentioned above, the configuration manager 222 may receive configuration updates from the security service cloud 104. The configuration manager 222 may receive the updates periodically or as needed. The configuration manager 222 then determines what changes to the kernel-level security agent 114 are needed to implement the configuration contained in the configuration update. For example, the configuration may specify a rule that creation of executables by Adobe Reader® is indicative of activity of malicious code and that any executions of such executables should be halted. To handle such a configuration, the configuration manager 222 may invoke the component manager 224 to load a new collector 206 to observe events associated with Adobe Reader® and files created by Adobe Reader®, a configurable filter 214 that notes and dispatches such events, and/or an actor 220 to gather forensic data responsive to creation of an executable by Adobe Reader® and to halt execution of the created executable responsive to the loading of that executable.

In another example, the configuration update may specify a new configuration manager 222. Responsive to such an update, the existing configuration manager 222 may invoke the component manager 224 to load the new configuration manager 222 and the integrity manager 230 to ensure continued observation while the configuration manager 222 is updated.

In various embodiments, the component manager 224 loads new components 116 and managers 120 designed to update or replace existing components 116 or managers 120. As mentioned, the component manager 224 is invoked by the configuration manager 222, which may inform the component manager 224 of which new component 116 or manager 120 is to be loaded, which component 116 or manager 120 is designated to be replaced or updated, and may specify a configuration of the new component 116 or manager 120 that implements the configuration update. The component manager 224 may then load the new component 116 or manager 120 while the existing/old component 116 or manager 120 continues to operate. After the new component 116 or manager 120 has been loaded, the component manager 224 or some other component 116 or manager 120 of the kernel-level security agent 114 may deactivate the existing/old component 116 or manager 120 that is now replaced by the new component 116 or manager 120.
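
The replace-while-running behavior can be sketched as a hot swap in which the new component is started before the old one is deactivated, so observation never stops. The classes and the `start`/`stop` lifecycle are assumptions for this sketch.

```python
class Component:
    """Trivial component with a start/stop lifecycle."""
    def __init__(self):
        self.running = False
    def start(self):
        self.running = True
    def stop(self):
        self.running = False

class ComponentManager:
    """Loads a new component while its predecessor keeps operating."""

    def __init__(self):
        self.active = {}   # component name -> component instance

    def replace(self, name, new_component):
        old = self.active.get(name)
        new_component.start()          # new component operating first...
        self.active[name] = new_component
        if old is not None:
            old.stop()                 # ...only then is the old one retired

mgr = ComponentManager()
old, new = Component(), Component()
mgr.replace("collector", old)
mgr.replace("collector", new)
assert new.running and not old.running
```

State shared between the old and new instance (e.g., an OS interface held by an actor) would be handed over by the state manager 226 between the `start` and `stop` calls.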

In various embodiments, the state manager 226 may be invoked by the component manager 224 to share state of an existing/old component 116 with a new component 116. For example, if the component 116 is an actor 220 having an interface with the operating system 124, the state manager 226 may keep the state of that interface and pass the interface between the old/existing component 116 and the new component 116.

In some embodiments, the storage manager 228 may be an interface to the memory 112 capable of being invoked by other components 116 or managers 120 of the kernel-level security agent 114 to read from and write to the memory 112.

As mentioned above, the integrity manager 230 maintains continued observation while core components 116 and managers 120 are updated. The core components 116 and managers 120 are components 116 and managers 120 that always form part of an operating kernel-level security agent 114. Because updates of such core components 116 and managers 120 can open a window of vulnerability in which malicious code can go undetected, some measure of continued observation is needed during the updates. The integrity manager 230 provides this measure of continued observation by observing events and processes of the computing device 102 during the core component/manager updates. In some embodiments, the integrity manager 230 may also be configured to detect attempts to delete it or other components 116 or managers 120 of the kernel-level security agent 114 and may prevent those attempts from succeeding.

Example Processes

FIGS. 3-5 illustrate example processes 300, 400, and 500. These processes are illustrated as logical flow graphs, each operation of which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.

The process 300 includes, at 302, a kernel-level security agent of a computing device detecting a first action associated with malicious code. At 304, responsive to detecting the first action, the kernel-level security agent gathers data associated with the first action. At 306, the kernel-level security agent may then store the gathered data in a model that tracks actions taken by processes of a system which executed the first action. Alternatively or additionally, at 308, the kernel-level security agent may inform a remote security service of the occurrence of the first action. At 310, in response, the kernel-level security agent receives from the remote security service instructions associated with the preventative action or a configuration update for configuring the kernel-level security agent. Also in response to detecting the first action, the kernel-level security agent refrains, at 312, from performing a preventative action.

At 314, the kernel-level security agent detects one or more subsequent actions associated with the malicious code and, in response at 316, performs a preventative action. The one or more subsequent actions occur after the first action. At 316a, the preventative action is preventing the one or more subsequent actions and further actions by the malicious process or deceiving an adversary associated with the malicious code.
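
Process 300 can be summarized as a two-state machine: on the first malicious action the agent only gathers data and refrains from prevention; on a subsequent action it performs the preventative action. The `Agent` class and method names below are hypothetical.

```python
class Agent:
    """Two-state sketch of process 300: gather first, prevent on recurrence."""

    def __init__(self):
        self.gathered = []
        self.prevented = False

    def on_malicious_action(self, action):
        if not self.gathered:
            # First action (302-312): gather forensic data, refrain from
            # taking the preventative action.
            self.gathered.append(action)
        else:
            # Subsequent action (314-316): perform the preventative action.
            self.prevented = True

agent = Agent()
agent.on_malicious_action("create_exe")
assert agent.gathered == ["create_exe"] and not agent.prevented
agent.on_malicious_action("load_exe")
assert agent.prevented
```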

The process 400 includes, at 402, observing by a kernel-level security agent an event associated with a process executing on the computing device.

At 404, the kernel-level security agent determines, based at least in part on the observed event, that the process is associated with malicious code. At 404a, the determining comprises determining that the process is associated with malicious code based at least in part on a model that tracks processes of the computing device.

At 406, responsive to the determining at 404, the kernel-level security agent deceives an adversary associated with the malicious code. At 406a, the deceiving comprises falsifying data acquired by the malicious code. At 406b, the deceiving comprises falsifying the data transmitted to the adversary.

The process 500 includes, at 502, observing by a kernel-level security agent execution activities of one or more processes of the computing device.

At 504, the kernel-level security agent stores data associated with the one or more execution activities in a model of the kernel-level security agent, the model representing one or more chains of execution activities. In some embodiments, at least one of the chains of execution activities represents a genealogy of one of the processes.

At 506, the kernel-level security agent takes action based at least in part on the one or more chains of execution activities. At 506a, the taking action comprises halting or deceiving a process associated with malicious activity.

Example Clauses

A: A computer-implemented method comprising: detecting a first action associated with malicious code; responsive to detecting the first action, gathering data associated with the first action while refraining from taking a preventative action; upon detecting one or more subsequent actions associated with malicious code, the one or more subsequent actions occurring after the first action, performing the preventative action.

B: The method of paragraph A, wherein the preventative action is preventing the one or more subsequent actions and further actions by the malicious process or deceiving an adversary associated with the malicious code.

C: The method of paragraph A or B, further comprising storing the gathered data in a model that tracks actions taken by processes of a system which executed the first action.

D: The method of any of paragraphs A-C, further comprising providing the gathered data to a remote security system.

E: The method of paragraph D, further comprising receiving, in response, instructions associated with the preventative action or a configuration update for configuring a security agent that performs the detecting, gathering, and performing.

F: The method of any of paragraphs A-E, wherein the detecting, gathering, and performing are performed by a kernel-level security agent that utilizes configurable filters.

G: The method of any of paragraphs A-F, wherein detecting the first action or the one or more subsequent actions comprises observing events associated with multiple processes or threads in parallel.

H: One or more tangible computer-readable media storing computer-executable instructions configured to implement a security agent on a computing device, the security agent performing operations comprising: observing an event associated with a process executing on the computing device; determining, based at least in part on the observed event, that the process is associated with malicious code; and responsive to the determining, deceiving an adversary associated with the malicious code.

I: The one or more tangible computer-readable media of paragraph H, wherein the deceiving comprises falsifying data acquired by the malicious code.

J: The one or more tangible computer-readable media of paragraph H or I, wherein the deceiving comprises falsifying the data transmitted to the adversary.

K: The one or more tangible computer-readable media of any of paragraphs H-J, wherein the determining comprises determining that the process is associated with malicious code based at least in part on a model that tracks processes of the computing device.

L: The one or more tangible computer-readable media of any of paragraphs H-K, wherein the security agent utilizes configurable filters.

M: The one or more tangible computer-readable media of any of paragraphs H-L, wherein the operations further comprise providing gathered data associated with the observed event to a remote security system.

N: The one or more tangible computer-readable media of any of paragraphs H-M, wherein the operations further comprise preventing an action by the process.

O: The one or more tangible computer-readable media of any of paragraphs H-N, wherein the security agent is a kernel-level security agent.

P: A method implemented by a security agent of a computing device, the method comprising: observing execution activities of one or more processes of the computing device; storing data associated with the one or more execution activities in a model of the security agent, the model representing one or more chains of execution activities; and taking action based at least in part on the one or more chains of execution activities.

Q: The method of paragraph P, wherein at least one of the chains of execution activities represents a genealogy of one of the processes.

R: The method of paragraph P or Q, wherein the taking action comprises halting or deceiving a process associated with malicious activity.

S: The method of any of paragraphs P-R, further comprising providing the stored data to a remote security system.

T: The method of paragraph S, further comprising receiving, in response, instructions associated with the action or a configuration update for configuring the security agent.

U: The method of any of paragraphs P-T, wherein the security agent utilizes configurable filters.

V: The method of any of paragraphs P-U, wherein the security agent is a kernel-level security agent.

CONCLUSION

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.

Read more
PatSnap Solutions

Great research starts with great data.

Use the most comprehensive innovation intelligence platform to maximise ROI on research.

Learn More


Citation

Patents Cited in This Patent
Title Current Assignee Application Date Publication Date
Real-time analysis system for botnet malicious behavior (KR, translated) Korea Internet & Security Agency 21 December 2009 01 June 2011
Protection of operating system resources (JP, translated) Microsoft Corporation 19 December 2007 20 May 2010
Secure remote kernel communication BERG RYAN J., DANAHY JOHN J., ROSE LAWRENCE J. 15 February 2001 22 November 2001
Malware countermeasure system, method, and program (JP, translated) NEC Corporation 28 March 2008 15 October 2009
System and method for integrating knowledge bases of anti-virus software applications (JP, translated) Microsoft Corporation 07 October 2005 25 May 2006

More like this

Title Current Assignee Application Date Publication Date
Systems and techniques for guiding a response to a cybersecurity incident CARBON BLACK, INC. 24 March 2017 28 September 2017
Agent presence for self-healing MCAFEE, INC.,THAKUR, SHASHIN,BOGGARAPU, ARVIND K.,SINGH, HARVIR 27 December 2014 28 April 2016
Software security BRITISH TELECOMMUNICATIONS PUBLIC LIMITED COMPANY 22 December 2016 29 June 2017
Monitoring variations in observable events for threat detection SOPHOS LIMITED 02 December 2015 23 June 2016
System and method for securing an enterprise computing environment CLOUDLOCK, INC. 24 February 2016 01 September 2016
Method and apparatus to detect security vulnerabilities in web application MCAFEE, INC.,YANG, JIN 04 July 2016 11 January 2018
Detecting software attacks on processes in computing devices QUALCOMM INCORPORATED 12 August 2016 23 March 2017
Techniques for detecting malware with minimal performance degradation INTEL CORPORATION 25 November 2016 29 June 2017
Systems and methods for detecting attacks in big data systems UNIVERSITY OF SOUTH FLORIDA 22 May 2017 07 December 2017
Inferential exploit attempt detection CROWDSTRIKE, INC. 17 July 2017 25 January 2018
A cyber-security system and methods thereof for detecting and mitigating advanced persistent threats EMPOW CYBER SECURITY LTD.,M&B IP ANALYSTS, LLC 11 November 2015 09 June 2016
Systems and methods of protecting data from malware processes DIGITAL GUARDIAN, INC. 29 July 2016 09 February 2017
Synchronous execution of designated computing events using hardware-assisted virtualization MCAFEE, INC. 26 September 2016 04 May 2017
Machine learning model for malware dynamic analysis CYLANCE INC. 05 May 2017 09 November 2017
Security within software-defined infrastructure INTERNATIONAL BUSINESS MACHINES CORPORATION,IBM UNITED KINGDOM LIMITED,IBM (CHINA) INVESTMENT COMPANY LIMITED 23 March 2016 29 September 2016
Method and system for monitoring calls to an application program interface (API) function INTEL CORPORATION 27 October 2015 15 November 2016
Execution profiling detection of malicious objects MCAFEE, INC. 28 November 2015 30 June 2016
System, method and device for preventing cyber attacks FUNDACIÓN TECNALIA RESEARCH & INNOVATION 15 September 2016 23 March 2017
Malicious software identification BRITISH TELECOMMUNICATIONS PUBLIC LIMITED COMPANY 15 December 2016 29 June 2017
Methods and systems for detecting fake user interactions with a mobile device for improved malware protection QUALCOMM INCORPORATED 11 January 2016 11 August 2016
